
k3sup's Introduction

k3sup 🚀 (said 'ketchup')

k3sup logo

k3sup is a light-weight utility to get from zero to KUBECONFIG with k3s on any local or remote VM. All you need is ssh access and the k3sup binary to get kubectl access immediately.

The tool is written in Go and is cross-compiled for Linux, Windows, MacOS and even on Raspberry Pi.

How do you say it? Ketchup, as in tomato.



What's this for? 💻

This tool uses ssh to install k3s to a remote Linux host. You can also use it to join existing Linux hosts into a k3s cluster as agents. First, k3s is installed using the utility script from Rancher, along with a flag for your host's public IP so that TLS works properly. The kubeconfig file on the server is then fetched and updated so that you can connect from your laptop using kubectl.

You may wonder why a tool like this needs to exist when you can do this sort of thing with bash.

k3sup was developed to automate what can be a very manual and confusing process for many developers, who are already short on time. Once you've provisioned a VM with your favourite tooling, k3sup means you are only 60 seconds away from running kubectl get pods on your own computer. If you are on a local computer, you can bypass SSH with k3sup install --local.

Do you use k3sup?

k3sup was created by Alex Ellis - the founder of OpenFaaS ® & inlets.

Sponsor this project

Want to see continued development? Sponsor alexellis on GitHub

Uses

  • Bootstrap Kubernetes with k3s onto any VM with k3sup install - either manually, during CI or through cloud-init
  • Get from zero to kubectl with k3s on Raspberry Pi (RPi), VMs, AWS EC2, Packet bare-metal, DigitalOcean, Civo, Scaleway, and others
  • Build a HA, multi-master (server) cluster
  • Fetch the KUBECONFIG from an existing k3s cluster
  • Join nodes into an existing k3s cluster with k3sup join

Bootstrapping Kubernetes

Conceptual architecture, showing k3sup running locally against any VM such as AWS EC2 or a VPS such as DigitalOcean.

Download k3sup (tl;dr)

k3sup is distributed as a static Go binary. You can use the installer on MacOS and Linux, or visit the Releases page to download the executable for Windows.

curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

k3sup --help

A note for Windows users

Windows users can use k3sup install and k3sup join with a normal "Windows command prompt".

Demo 📼

In the demo I install Kubernetes (k3s) onto two separate machines and get my kubeconfig downloaded to my laptop each time in around one minute.

  1. Ubuntu 18.04 VM created on DigitalOcean with ssh key copied automatically
  2. Raspberry Pi 4 with my ssh key copied over via ssh-copy-id

Watch the demo:

asciicast

Usage ✅

The k3sup tool is a client application which you can run on your own computer. It uses SSH to connect to remote servers and creates a local KUBECONFIG file on your disk. Binaries are provided for MacOS, Windows, and Linux (including ARM).

Pre-requisites for k3sup servers and agents

Some Linux hosts are configured to allow sudo to run without having to repeat your password. For those which are not already configured that way, you'll need to make the following changes if you wish to use k3sup:

# sudo visudo

# Then add to the bottom of the file
# replace "alex" with your username i.e. "ubuntu"
alex ALL=(ALL) NOPASSWD: ALL

In most circumstances, cloud images for Ubuntu and other distributions will not require this step.

As an alternative, if you only need a single server you can log in interactively and run k3sup install --local instead of using SSH.

👑 Setup a Kubernetes server with k3sup

You can set up a server and stop here, or go on to use the join command to add some "agents" aka nodes or workers into the cluster to expand its compute capacity.

Provision a new VM running a compatible operating system such as Ubuntu, Debian, Raspbian, or something else. Make sure that you opt-in to copy your registered SSH keys over to the new VM or host automatically.

Note: You can copy ssh keys to a remote VM with ssh-copy-id user@IP.

Imagine the IP was 192.168.0.1 and the username was ubuntu, then you would run this:

  • Run k3sup:
export IP=192.168.0.1
k3sup install --ip $IP --user ubuntu

# Or use a hostname and SSH key for EC2
export HOST="ec2-3-250-131-77.eu-west-1.compute.amazonaws.com"
k3sup install --host $HOST --user ubuntu \
  --ssh-key $HOME/ec2-key.pem

Other options for install:

  • --cluster - start this server in clustering mode using embedded etcd (embedded HA)
  • --skip-install - if you already have k3s installed, you can just run this command to get the kubeconfig
  • --ssh-key - specify a specific path for the SSH key for remote login
  • --local - Perform a local install without using ssh
  • --local-path - default is ./kubeconfig - set the file where you want to save your cluster's kubeconfig. By default this file will be overwritten.
  • --merge - Merge config into existing file instead of overwriting (e.g. to add config to the default kubectl config, use --local-path ~/.kube/config --merge).
  • --context - default is default - set the name of the kubeconfig context.
  • --ssh-port - default is 22, but you can specify an alternative port i.e. 2222
  • --no-extras - disable "servicelb" and "traefik"
  • --k3s-extra-args - Optional extra arguments to pass to the k3s installer, wrapped in quotes, i.e. --k3s-extra-args '--disable traefik' or --k3s-extra-args '--docker'. For multiple args, combine them within single quotes: --k3s-extra-args '--disable traefik --docker'.
  • --k3s-version - set the specific version of k3s, i.e. v1.21.1
  • --k3s-channel - set a specific version of k3s based upon a channel i.e. stable
  • --ipsec - Enforces the optional extra k3s argument --flannel-backend ipsec
  • --print-command - Prints out the command, sent over SSH to the remote computer
  • --datastore - used to pass a SQL connection-string to the --datastore-endpoint flag of k3s. You must use the format required by k3s in the Rancher docs.

See even more install options by running k3sup install --help.
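For example, a hypothetical installation that tracks the stable channel, disables the bundled extras, and merges the result into your default kubectl config could combine several of the flags above (the IP, username and context name are placeholders):

k3sup install \
  --ip 192.168.0.50 \
  --user ubuntu \
  --k3s-channel stable \
  --no-extras \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k3s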

  • Now try the access:
export KUBECONFIG=`pwd`/kubeconfig
kubectl get node

Note that you should always use pwd/ so that a full, absolute path is set; that way you can change directory later and kubectl will still find the kubeconfig file.

Checking if a cluster is ready

There are various ways to confirm whether a cluster is ready to use.

The k3sup ready command runs "kubectl get nodes" using a KUBECONFIG file, and looks for the "Ready" status on each node, including agents/workers.

Install K3s directly on the node and then check if it's ready:

k3sup install \
  --local \
  --context localk3s

k3sup ready \
  --context localk3s \
  --kubeconfig ./kubeconfig

Check a remote server saved to a local file:

k3sup install \
  --ip 192.168.0.101 \
  --user pi

k3sup ready \
  --context default \
  --kubeconfig ./kubeconfig

Check a merged context in your default KUBECONFIG:

k3sup install \
  --ip 192.168.0.101 \
  --user pi \
  --context pik3s \
  --merge \
  --local-path $HOME/.kube/config

# $HOME/.kube/config is a default for kubeconfig
k3sup ready --context pik3s

Merging clusters into your KUBECONFIG

You can also merge the remote config into your main KUBECONFIG file $HOME/.kube/config, then use kubectl config get-contexts or kubectx to manage it.

The default "context" name for the remote k3s cluster is default, however you can override this as below.

For example:

k3sup install \
  --ip $IP \
  --user $USER \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k3s

Here we set a context of my-k3s and also merge into our main local KUBECONFIG file, so we could run kubectl config use-context my-k3s or kubectx my-k3s.

😸 Join some agents to your Kubernetes server

Let's say that you have a server, and have already run the following:

export SERVER_IP=192.168.0.100
export USER=root

k3sup install --ip $SERVER_IP --user $USER

Next join one or more agents to the cluster:

export AGENT_IP=192.168.0.101

export SERVER_IP=192.168.0.100
export USER=root

k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER

Please note that if you are using different usernames for SSH'ing to the agent and the server, you must provide the username for the server via the --server-user parameter.

That's all, so with the above command you can have a two-node cluster up and running, whether that's using VMs on-premises, using Raspberry Pis, 64-bit ARM or even cloud VMs on EC2.
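If the agent and the server were provisioned with different usernames, a join using the --server-user flag mentioned above might look like this (the usernames are illustrative):

export AGENT_IP=192.168.0.101
export SERVER_IP=192.168.0.100

k3sup join \
  --ip $AGENT_IP \
  --user ubuntu \
  --server-ip $SERVER_IP \
  --server-user root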

Use your hardware authentication / 2FA or SSH Agent

You may wish to use the ssh-agent utility if:

  • Your SSH key is protected by a password, and you don't want to type it in for each k3sup command
  • You use a hardware authentication device key like a Yubico YubiKey to authenticate SSH sessions

Run the following to set SSH_AUTH_SOCK:

$ eval $(ssh-agent)
Agent pid 2641757

Optionally, if your key is encrypted, run: ssh-add ~/.ssh/id_rsa

Now run any k3sup command, and your SSH key will be requested from the ssh-agent instead of from the usual location.

You can also specify an SSH key with --ssh-key if you want to use a specific key-pair.

K3sup plan for automation

A new command was added to k3sup to help with automating large numbers of nodes.

k3sup plan reads a JSON input file containing hosts, and will generate an installation command for a number of servers and agents.

Example input file:

[
  {
    "hostname": "node-a-1",
    "ip": "192.168.129.138"
  },
  {
    "hostname": "node-a-2",
    "ip": "192.168.129.128"
  },
  {
    "hostname": "node-a-3",
    "ip": "192.168.129.131"
  },
  {
    "hostname": "node-a-4",
    "ip": "192.168.129.130"
  },
  {
    "hostname": "node-a-5",
    "ip": "192.168.129.127"
  }
]

The following will create one primary server, plus two additional servers in an HA etcd cluster; the last two nodes will be added as agents:

k3sup plan \
  devices.json \
  --user ubuntu \
  --servers 3 \
  --server-k3s-extra-args "--disable traefik" \
  --background > bootstrap.sh

Then make the file executable and run it:

chmod +x bootstrap.sh
./bootstrap.sh

Watch a demo with dozens of Firecracker VMs: Testing Kubernetes at Scale with bare-metal

The initial version of k3sup plan has a reduced set of flags. Flags such as --k3s-version and --datastore are not available, but feel free to propose an issue with what you need.

Create a multi-master (HA) setup with external SQL

The easiest way to test out k3s' multi-master (HA) mode with external storage is to set up a MySQL server using DigitalOcean's managed service.

  • Get the connection string from your DigitalOcean dashboard, and adapt it

Before:

mysql://doadmin:80624d3936dfc8d2e80593@db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060/defaultdb?ssl-mode=REQUIRED

After:

mysql://doadmin:80624d3936dfc8d2e80593@tcp(db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060)/defaultdb

Note that we've removed ?ssl-mode=REQUIRED and wrapped the host/port in tcp().

export DATASTORE="mysql://doadmin:80624d3936dfc8d2e80593@tcp(db-mysql-lon1-90578-do-user-6456202-0.a.db.ondigitalocean.com:25060)/defaultdb"

You can prefix this command with two spaces to prevent it being cached in your bash history (see the check below).
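Whether a leading space actually keeps a command out of your history depends on your shell; in bash it only works when HISTCONTROL includes ignorespace or ignoreboth. A quick check, assuming bash:

echo $HISTCONTROL              # should contain "ignorespace" or "ignoreboth"
export HISTCONTROL=ignoreboth  # enable it for the current session if needed
  export DATASTORE="..."       # then re-run the export, prefixed with spaces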

Generate a token used to encrypt data. If you already have a running node, the token can instead be retrieved by logging into that node and looking in /var/lib/rancher/k3s/server/token (see the example after the snippet below).

# Best option for a token:
export TOKEN=$(openssl rand -base64 64)

# Fallback for no openssl, on a Linux host:
export TOKEN=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 64)

# Failing that, then try:
export TOKEN=$(head -c 64 /dev/urandom|shasum| cut -d - -f 1)
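If you already have a running server and want to reuse its token rather than generate a new one, as mentioned above, something along these lines should fetch it over SSH (EXISTING_SERVER_IP is a placeholder):

export TOKEN=$(ssh root@EXISTING_SERVER_IP "sudo cat /var/lib/rancher/k3s/server/token")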
  • Create three VMs

Imagine we have the following three VMs, two will be servers, and one will be an agent.

export SERVER1=104.248.135.109
export SERVER2=104.248.25.221
export AGENT1=104.248.137.25
  • Install the first server
k3sup install --user root --ip $SERVER1 --datastore="${DATASTORE}" --token=${TOKEN}
  • Install the second server
k3sup install --user root --ip $SERVER2 --datastore="${DATASTORE}" --token=${TOKEN}
  • Join the first agent

You can join the agent to either server, the datastore is not required for this step.

k3sup join --user root --server-ip $SERVER1 --ip $AGENT1

Please note that if you are using different usernames for SSH'ing to the agent and the server, you must provide the username for the server via the --server-user parameter.

  • Additional steps

If you run kubectl get node, you'll now see two masters/servers and one agent, however, we joined the agent to the first server. If the first server goes down, the agent will effectively also go offline.

kubectl get node

NAME              STATUS                        ROLES    AGE     VERSION
k3sup-1           Ready                         master   73s     v1.19.6+k3s1
k3sup-2           Ready                         master   2m31s   v1.19.6+k3s1
k3sup-3           Ready                         <none>   14s     v1.19.6+k3s1

There are two ways to prevent a dependency on the IP address of any one host. The first is to create a TCP load-balancer in the cloud of your choice; the second is to create a DNS round-robin record which contains all of the IPs of your servers.

In your DigitalOcean dashboard, go to the Networking menu and click "Load Balancer", create one in the same region as your Droplets and SQL server. Select your two Droplets, i.e. 104.248.34.61 and 142.93.175.203, and use TCP with port 6443.

If you want to run k3sup join against the IP of the LB, then you should also add TCP port 22

Make sure that the health-check setting is also set to TCP and port 6443. Wait to get your IP, mine was: 174.138.101.83

Save the LB into an environment variable:

export LB=174.138.101.83

Now use ssh to log into both of your servers, and edit their config files at /etc/systemd/system/k3s.service, updating the --tls-san line so that the address which follows it is that of your LB:

ExecStart=/usr/local/bin/k3s \
    server \
        '--tls-san' \
        '104.248.135.109' \

Becomes:

ExecStart=/usr/local/bin/k3s \
    server \
        '--tls-san' \
        '174.138.101.83' \

Now run:

sudo systemctl daemon-reload && \
  sudo systemctl restart k3s

And repeat these steps on the other server.

You can update the agent manually, by using ssh and editing /etc/systemd/system/k3s-agent.service.env on the host, or use k3sup join again, but only if you added port 22 to your LB:

k3sup join --user root --server-ip $LB --ip $AGENT1
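If you prefer the manual route instead, the k3s-agent.service.env file normally contains a K3S_URL entry pointing at the original server. A rough sketch, run on the agent host and re-using the server and LB IPs from this example:

# replace the old server IP with the LB's IP, then restart the agent
sudo sed -i "s/104.248.135.109/174.138.101.83/" /etc/systemd/system/k3s-agent.service.env
sudo systemctl restart k3s-agent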

Finally, regenerate your KUBECONFIG file with the LB's IP, instead of one of the servers:

k3sup install --skip-install --ip $LB

Log into the first server, and stop k3s sudo systemctl stop k3s, then check that kubectl still functions as expected:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide

NAME              STATUS                        ROLES    AGE   VERSION
k3sup-1           NotReady                      master   23m   v1.19.6+k3s1
k3sup-2           Ready                         master   25m   v1.19.6+k3s1
k3sup-3           Ready                         <none>   22m   v1.19.6+k3s1

You've just simulated a failure of one of your masters/servers, and you can still access kubectl. Congratulations on building a resilient k3s cluster.

Create a multi-master (HA) setup with embedded etcd

In k3s v1.19.5+k3s1 a HA multi-master (multi-server in k3s terminology) configuration is available called "embedded etcd". A quorum of servers will be required, which means having an odd number of nodes and at least three. See more

  • Initialize the cluster with the first server

Note the --cluster flag

export SERVER_IP=192.168.0.100
export USER=root

k3sup install \
  --ip $SERVER_IP \
  --user $USER \
  --cluster \
  --k3s-version v1.19.1+k3s1
  • Join each additional server

Note the new --server flag

export USER=root
export SERVER_IP=192.168.0.100
export NEXT_SERVER_IP=192.168.0.101

k3sup join \
  --ip $NEXT_SERVER_IP \
  --user $USER \
  --server-user $USER \
  --server-ip $SERVER_IP \
  --server \
  --k3s-version v1.19.1+k3s1

Now check kubectl get node:

kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
paprika-gregory   Ready    master   8m27s   v1.19.2-k3s
cave-sensor       Ready    master   27m     v1.19.2-k3s

If you used --no-extras on the initial installation you will also need to provide it on each join:

export USER=root
export SERVER_IP=192.168.0.100
export NEXT_SERVER_IP=192.168.0.101

k3sup join \
  --ip $NEXT_SERVER_IP \
  --user $USER \
  --server-user $USER \
  --server-ip $SERVER_IP \
  --server \
  --no-extras \
  --k3s-version v1.19.1+k3s1

👨‍💻 Micro-tutorial for Raspberry Pi (2, 3, or 4) 🥧

In a few moments you will have Kubernetes up and running on your Raspberry Pi 2, 3 or 4. Stand by for the fastest possible install. At the end you will have a KUBECONFIG file on your local computer that you can use to access your cluster remotely.

Conceptual architecture, showing k3sup running locally against bare-metal ARM devices.

  • Download etcher.io for your OS

  • Flash an SD card using Raspbian Lite

  • Enable SSH by creating an empty file named ssh in the boot partition

  • Generate an ssh-key if you don't already have one with ssh-keygen (hit enter to all questions)

  • Find the RPi IP with ping -c 1 raspberrypi.local, then set export SERVER_IP="" with the IP

  • Enable container features in the kernel, by editing /boot/cmdline.txt (or /boot/firmware/cmdline.txt on Ubuntu)

  • Add the following to the end of the line: cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

  • Copy over your ssh key with: ssh-copy-id pi@raspberrypi.local

  • Run k3sup install --ip $SERVER_IP --user pi

  • Point at the config file and get the status of the node:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide

You now have kubectl access from your laptop to your Raspberry Pi running k3s.

If you want to join some nodes, run export IP="" for each additional RPi, followed by:

  • k3sup join --ip $IP --server-ip $SERVER_IP --user pi

Remember all these commands are run from your computer, not the RPi.

Now where next? I would recommend my detailed tutorial where I spend time looking at how to flash the SD card, deploy k3s, deploy OpenFaaS (for some useful microservices), and then get incoming HTTP traffic.

Try it now: Will it cluster? K3s on Raspbian

Caveats on security

If you are using public cloud, then make sure you see the notes from the Rancher team on setting up a Firewall or Security Group.

k3s docs: k3s configuration / open ports

Contributing

Blog posts & tweets

Blog posts, tutorials, and Tweets about k3sup (#k3sup) are appreciated. Please send a PR to the README.md file to add yours.

Contributing via GitHub

Before contributing code, please see the CONTRIBUTING guide. Note that k3sup uses the same contributing guide as arkade.

Both Issues and PRs have their own templates. Please fill out the whole template.

All commits must be signed-off as part of the Developer Certificate of Origin (DCO)

License

MIT

📢 What are people saying about k3sup?

Check out the Announcement tweet

Similar tools & glossary

Glossary:

  • Kubernetes: master/slave
  • k3s: server/agent

Related tools:

  • k3s - Kubernetes as installed by k3sup. k3s is a compliant, light-weight, multi-architecture distribution of Kubernetes. It can be used to run Kubernetes locally or remotely for development, or in edge locations.
  • k3d - this tool runs a Docker container on your local laptop with k3s inside
  • kind - kind can run a Kubernetes cluster within a Docker container for local development. k3s is also suitable for this purpose through k3d. KinD is not suitable for running a remote cluster for development.
  • kubeadm - a tool to create fully-loaded, production-ready Kubernetes clusters with or without high-availability (HA). Tends to be heavier-weight and slower than k3s. It is aimed at cloud VMs or bare-metal computers which means it doesn't always work well with low-powered ARM devices.
  • k3v - "virtual kubernetes" - a very early PoC from the author of k3s aiming to slice up a single cluster for multiple tenants
  • k3sup-multipass - a helper to launch single node k3s cluster with one command using a multipass VM and optionally proxy the ingress to localhost for easier development.

Troubleshooting and support

Maybe the problem is with K3s?

If you're having issues, it's likely that this is a problem with K3s, and not with k3sup. How do we know that? K3sup is a very mature project and has a small set of use-cases that it generally handles very well.

Rancher provides support for K3s on their Slack in the #k3s channel. This should be your first port of call. Your second port of call is to raise an issue with the K3s maintainers in the K3s repo

Do you want to install a specific version of K3s? See k3sup install --help and the --k3s-version and --k3s-channel flags.
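For example, pinning an exact version or following a channel would look something like this (the IP and user are placeholders):

k3sup install --ip $IP --user ubuntu --k3s-version v1.19.1+k3s1
k3sup install --ip $IP --user ubuntu --k3s-channel stable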

Is your system ready to run Kubernetes? K3s requires certain Kernel modules to be available, run k3s check-config and check the output. Alex tests K3sup with Raspberry Pi OS and Ubuntu LTS on a regular basis.

Common issues

The most common problem is that you missed a step. Fortunately, it's relatively easy to get the logs from the K3s service, and they should tell you what's wrong.

  • For the Raspberry Pi you probably haven't updated cmdline.txt to enable cgroups for CPU and memory. Update it as per the instructions in this file.

  • You ran kubectl on a node. Don't do this. k3sup copies the file to your local workstation. Don't log into agents or servers other than to check logs / upgrade the system.

  • sudo: a terminal is required to read the password - set up password-less sudo on your hosts, see also: Pre-requisites for k3sup servers and agents

  • You want to install directly on a server, without using SSH. See also: k3sup install --local which doesn't use SSH, but executes the commands directly on a host.

  • K3s server didn't start. Log in and run sudo systemctl status k3s or sudo journalctl -u k3s to see the logs for the service.

  • The K3s agent didn't start. Log in and run sudo systemctl status k3s-agent

  • You tried to remove and re-add a server in an etcd cluster and it failed. This is a known issue, see the K3s issue tracker.

  • You tried to use an unsupported version of a database for HA. See this list from Rancher

  • You tried to join a node to the cluster and got an error "ssh: handshake failed". This is probably one of three possibilities:

    • You did not run ssh-copy-id. Try to run it and check if you can log in to the server and the new node without a password prompt using regular ssh.
    • You have an RSA public key. There is an underlying issue in a Go library which is referred here. Please provide the additional parameter --ssh-key ~/.ssh/id_rsa (or wherever your private key lives) until the issue is resolved.
    • You are using different usernames for SSH'ing to the server and the node to be added. In that case, please provide the username for the server via the --server-user parameter.
  • Your .ssh/config file isn't being used by K3sup. K3sup does not use the config file used by the ssh command-line, but instead uses CLI flags; run k3sup install/join --help to learn which are supported, and see the sketch below.
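For instance, an ssh config entry that sets a user, port and identity file would need to be translated into explicit flags, roughly like this (the host, port and key path are illustrative):

# ~/.ssh/config (ignored by k3sup):
#   Host mypi
#     HostName 192.168.0.101
#     User pi
#     Port 2222
#     IdentityFile ~/.ssh/pi_key

# Equivalent k3sup invocation:
k3sup install \
  --host 192.168.0.101 \
  --user pi \
  --ssh-port 2222 \
  --ssh-key ~/.ssh/pi_key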

Note: Passing --no-deploy to --k3s-extra-args was deprecated by the K3s installer in K3s 1.17. Use --disable instead or --no-extras.

Getting access to your KUBECONFIG

You may have run into an issue where sudo access is required for kubectl access.

You should not run kubectl on your server or agent nodes. k3sup is designed to rewrite and/or merge your cluster's config to your local KUBECONFIG file. You should run kubectl on your laptop / client machine.

If you've lost your kubeconfig, you can use k3sup install --skip-install. See also the various flags for merging and setting a context name.
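A minimal sketch of recovering the file, assuming the server is still reachable over SSH (the IP, user and context name are placeholders):

k3sup install \
  --skip-install \
  --ip 192.168.0.100 \
  --user ubuntu \
  --merge \
  --local-path $HOME/.kube/config \
  --context recovered-k3s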

Smart cards and 2FA

Warning: issues requesting support for smart cards / 2FA will be closed immediately. The feature has been proven to work, and is provided as-is.

You can use a smart card or 2FA security key such as a Yubikey. You must have your ssh-agent configured correctly; at that point, k3sup will defer to the agent to make connections on MacOS and Linux. Find out more
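A quick way to confirm the agent is actually serving a key before running k3sup, assuming a standard OpenSSH agent (the IP and user are placeholders):

eval $(ssh-agent)
ssh-add -l          # should list at least one key
k3sup install --ip $IP --user ubuntu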

Misc note on iptables

Note added by Eduardo Minguez Perez

Currently there is an issue in k3s involving iptables >= 1.8 that can affect the network communication. See the k3s issue and the corresponding kubernetes one for more information and workarounds. The issue has been observed in Debian Buster but it can affect other distributions as well.

k3sup's People

Contributors

aledbf, alexellis, billimek, blackjid, braybaut-globant, burtonr, dependabot[bot], elmariofredo, frezbo, hades32, hasheddan, heavyhorst, henry2man, hrak, johnmccabe, jwalton, karuppiah7890, kaspernissen, leonklingele, martindekov, matti, natikgadzhi, prateekgogia, rashedkvm, rgee0, simoncas, utsavanand2, waterdrips, wingkwong, yankeexe


k3sup's Issues

[Feature Request] Encrypted SSH keys

Current Behaviour

k3sup install --ip $IP --user root panics with

Public IP: xxx.xxx.xx.xx
ssh -i /home/rkaufmann/.ssh/id_rsa root@xxx.xxx.xx.xx
panic: ssh: cannot decode encrypted private keys

goroutine 1 [running]:
github.com/alexellis/k3sup/pkg/cmd.loadPublickey(0xc4200ba1a0, 0x1b, 0xc42005bbb0, 0x3)
	/home/alex/go/src/github.com/alexellis/k3sup/pkg/cmd/install.go:132 +0x136
github.com/alexellis/k3sup/pkg/cmd.MakeInstall.func1(0xc4200dc280, 0xc4200c0240, 0x0, 0x4, 0x0, 0x0)
	/home/alex/go/src/github.com/alexellis/k3sup/pkg/cmd/install.go:54 +0x445
github.com/alexellis/k3sup/vendor/github.com/spf13/cobra.(*Command).execute(0xc4200dc280, 0xc4200c0200, 0x4, 0x4, 0xc4200dc280, 0xc4200c0200)
	/home/alex/go/src/github.com/alexellis/k3sup/vendor/github.com/spf13/cobra/command.go:826 +0x468
github.com/alexellis/k3sup/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4200dca00, 0x0, 0xc4200dc780, 0xc4200dcb50)
	/home/alex/go/src/github.com/alexellis/k3sup/vendor/github.com/spf13/cobra/command.go:914 +0x306
github.com/alexellis/k3sup/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4200dca00, 0xc42005bf60, 0x1)
	/home/alex/go/src/github.com/alexellis/k3sup/vendor/github.com/spf13/cobra/command.go:864 +0x2b
main.main()
	/home/alex/go/src/github.com/alexellis/k3sup/main.go:22 +0x145

Possible Solution

The Program could ask for the ssh key passphrase and use
signer, err := ssh.ParsePrivateKeyWithPassphrase(key, []byte("password")) instead of
signer, err := ssh.ParsePrivateKey(key)

k3sup doesn't work ootb in Fedora 31

Expected Behaviour

Download k3sup and run it against a Fedora 31 box

Current Behaviour

It requires installing the policycoreutils-python-utils package (which is not installed by default in the cloud image) to manage selinux stuff. I've installed the package and tried again. This time:

Oct 30 11:18:08 f31-01 systemd[1]: Stopped Lightweight Kubernetes.
Oct 30 11:18:08 f31-01 systemd[1]: Starting Lightweight Kubernetes...
Oct 30 11:18:08 f31-01 systemd[9298]: k3s.service: Failed to execute command: Permission denied
Oct 30 11:18:08 f31-01 systemd[9298]: k3s.service: Failed at step EXEC spawning /usr/local/bin/k3s: Permission denied
Oct 30 11:18:08 f31-01 systemd[1]: k3s.service: Main process exited, code=exited, status=203/EXEC
Oct 30 11:18:08 f31-01 systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 30 11:18:08 f31-01 systemd[1]: Failed to start Lightweight Kubernetes.
Oct 30 11:18:13 f31-01 systemd[1]: k3s.service: Service RestartSec=5s expired, scheduling restart.
Oct 30 11:18:13 f31-01 systemd[1]: k3s.service: Scheduled restart job, restart coun

Running the k3s binary manually:

...
INFO[2019-10-30T11:19:21.695847768Z] Run: k3s kubectl                             
INFO[2019-10-30T11:19:21.695940194Z] k3s is up and running                        
WARN[2019-10-30T11:19:21.696013040Z] Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi) 
ERRO[2019-10-30T11:19:21.696061981Z] Failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi) 
FATA[2019-10-30T11:19:21.696118045Z] failed to find memory cgroup, you may need to add "cgroup_memory=1 cgroup_enable=memory" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi

Possible Solution

I think this is probably because Fedora 31 has switched to cgroupsv2 https://fedoraproject.org/wiki/Changes/CGroupsV2 but I'm not 100% sure.

Steps to Reproduce (for bugs)

  1. Create a Fedora 31 x86_64 VM
  2. Download k3sup using the script
  3. Run k3sup as k3sup install --ip 192.168.122.235 --user fedora and observe the semanage issue
  4. Fix the semanage requirement by ssh-ing in the VM and installing policycoreutils-python-utils as sudo dnf install -y policycoreutils-python-utils
  5. Run k3sup again
  6. ssh-ing again and see journalctl -u k3s
  7. Run k3s as root to observe the cgroup stuff

Context

I just wanted to see if I could run k3sup against a Fedora 31 VM

Your Environment

  • What OS or type or VM are you using? Where is it hosted?
    Fedora 30 in my laptop, Fedora 31 VM for k3s to be installed
  • Operating System and version (e.g. Linux, Windows, MacOS):

Certificate problem after installing k3s on AWS with k3sup

I'm having a problem after installing k3s on AWS using k3sup: any kubectl command gets "Failed to connect to proxy error=x509: certificate signed by unknown authority".
The k3s on the server works fine, including the cluster with multiple nodes, as I can see by ssh to the server, but local kubectl with the kubeconfig file gets the above result.
I think that this problem is new, as I did the same procedure a while ago and there was no problem.
I was told on the rancher slack channel to rotate the certificates using the rancher UI (don't yet know how) but as I installed it using k3sup I thought I could get other insights here.
Anyone have any idea?

  • Operating System:
    Ubuntu on both my laptop and AWS instance.

Add --merge option to k3sup install

Description

Add --merge option to k3sup install

Detail

This would take an existing kubeconfig file and merge the incoming configuration. By default this tends to switch the context.

Examples:

https://ahmet.im/blog/mastering-kubeconfig/

This can also be done via code with kubectl:

https://github.com/civo/cli/blob/0a5cd47484999fb61f5bd27c46c81209211b2bb4/lib/kubernetes.rb#L200

If we can avoid vendoring the Kubernetes clientset, then this will keep the binary quick to build and relatively lean. Perhaps using go/exec to run kubectl?

[Feature] Different SSH options for Server and Node on Join command

Expected Behaviour

I've got systems in which the username that K3s has been set up with on the server differs from the one it should be set up with on the node.

Current Behaviour

Both username and SSH port are the same for server to join and the joining node.

Possible Solution

Implement optional flags to set up the SSH connection for the server differently. Create Flags '--server-user' and '--server-port' with empty default value and when empty set them to the same value as '--user' and '--port', respectively.

Steps to Reproduce (for bugs)

  • k3s install --user 'pi' --ip '192.168.1.2'
  • k3s join --user 'ubuntu' --ip '192.168.1.3' --server-user 'pi' --server-ip '192.168.1.2'

Context

Installing agent with k3sup locally on my laptop failed while I built a multi-architecture cluster where individual nodes either run on armhf or arm64 and my laptop runs with amd64.

Your Environment

  • Running k3s v0.8.1 thx to k3sup on 3 devices, now.
  • My TravelMate B got LinuxMint Tina with Cinnamon desktop
  • Master + 2 Nodes r. Raspberry Pi 4B's w. Raspbian @ Kernel 4.19.73-v8+

Help wanted: Test on Windows 10

Expected Behaviour

I'd like to know if k3sup works on Windows 10, and if not, what is needed to make that possible.

Current Behaviour

Users can download the code and rebuild on Windows using Go to find out.

Context

It would be good to offer a binary for the main OSes

[Documentation] Please clarify possible architecture

From the README it sounds like I could run a k3s server on a RPi at home and then have an EC2 instance and some other VPS join as agents. Basically spanning a kubernetes cluster across multiple networks around the world. Is that what you are trying to say?

Expected Behaviour

It would be nice if the examples could be a more explicit on the details. Especially the networking part.

Current Behaviour

Conceptual architecture, showing k3sup running locally against any VM such as AWS EC2 or a VPS such as DigitalOcean

Possible Solution

In the demo I install Kubernetes (k3s) onto two separate machines and get my kubeconfig downloaded to my laptop each time in around one minute.

It would be great to have a little visualization here.

Note: It's obvious that running the master on a laptop is no production deployment - but it sure is interesting to experiment with the effect of the master going away.

Steps to Reproduce (for bugs)

NA

Context

NA

Your Environment

NA

Additional note

Great project :)

k3s only works with iptables-legacy

Expected Behaviour

After deploying k3s with k3sup the networking should be working

Current Behaviour

It seems iptables-legacy is required in Debian Buster at least.

Possible Solution

k3sup should verify the OS and the iptables version and switch to the legacy version if required

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy > /dev/null
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy > /dev/null

Steps to Reproduce (for bugs)

  1. Install k3sup
  2. Deploy k3s using k3sup in latest raspbian in at least 2 raspberry pis

Context

Pods cannot reach other pods or the internet

Your Environment

RPI3 cluster (1+1) with latest raspbian

Question on how to handle kubectl non-zero exit codes

kubectlTask() does not check the exit code, causing failed kubectl commands in some scenarios to be treated as successful while silently failing.

Expected Behaviour

If a kubectl command fails, kubectlTask should return an error. If the error from kubectlTask is checked, this should stop k3sup in its task.

Current Behaviour

If a kubectl command fails with a non-zero exit code, k3sup will continue its task.

Possible Solution

Add a simple exit code check just like the one in kubectl()

Steps to Reproduce (for bugs)

Install tiller once and install it again, kubectl commands will fail but k3sup will not stop.

Context

This is not critical but complicates the development of new features, I was once troubleshooting something and commands were silently failing, took me some time to realize what was the issue.

Your Environment

Linux

Error when setting --k3s-extra-args for agents

After setting an --k3s-extra-args='--docker' flag in the master install, future created kubelets/ agent-nodes do not automatically inherit the flags. Additionally, passing the flag during agent creation results in an error.

Expected Behaviour

I would assume that agents would inherit the extra args set at master creation. Even though they do not inherit the flags, setting them via the k3sup join command results in an error.

Current Behaviour

Agent nodes do not inherit the --k3s-extra-args set at master creation. Setting --k3s-extra-args='--docker' results in an error.

Steps to Reproduce (for bugs)

  1. k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER --k3s-extra-args="--docker"
  2. results in curl -sfL https://get.k3s.io/ | K3S_URL="https://10.0.1.78:6443" K3S_TOKEN="<redacted_token>::node:<redacted>" sh - --docker --> sh: 0: Can't open --docker

Context

I would like to have k3s run as docker instead of containerd so that I can use cAgent monitoring tools.

Your Environment

Master - Jetson Nano- L4T-ubuntu - Kernel: 4.9.140 - docker://18.9.7
Agents - RaspberryPi-4 4gb Buster Debian - Kernel:4.19.57-v7l+ - containerd://1.2.7-k3s1
K3sup - Version: 0.2.4

Add figlet ASCII logo

Description

We could make the version / help message look better than this:

By including a figlet ASCII logo as per faas-cli

To generate a logo you can use the figlet package available on most Linux package managers and in the OpenFaaS function store.

Feature request: force namespace for the metrics-server to kube-system

The way the apps are installed with k3sup (rendering helm chart templates and then applying these rendered templates with kubectl) does not work with the metrics-server chart.
The problem is that it contains resources that specify a namespace (in the form {{ .Release.Namespace }} ), some which do not, and even some that must be applied to "kube-system" namespace.
When applying rendered templates with kubectl, resources would end up in 3 namespaces:

  1. kube-system
  2. the namespace specified with -n (now forced to "default")
  3. the namespace specified from the current context in which kubectl is executed

Only if the current namespace matches the one specified with -n (currently forced to "default") will the metrics-server be installed successfully. Otherwise, the install is broken.

I would propose the following steps to fix the problem:

  1. remove the option to specify a namespace
  2. use "kube-system" as namespace to install the app into. It is a system application after all.
  3. use "kubectl -n kube-system" to apply a rendered chart templates.

Fetch latest k3s version as default version

Expected Behaviour

In case k3s version is not defined in k3sup install cmd, fetch latest version without manually updating the config.go content or specifying k3s-version.

Current Behaviour

Default k3s version is defined in config.go file.

// K3sVersion default version
const K3sVersion = "v0.8.1"

Possible Solution

config.go

package config

import (
	"log"
	"net/http"
	"path"
)

// GetK3sVersionLatest gets the latest k3s version from the https://github.com/rancher/k3s/releases/latest url
func GetK3sVersionLatest() string {

	k3sVersionURL := "https://github.com/rancher/k3s/releases/latest"

	resp, err := http.Get(k3sVersionURL)
	if err != nil {
		log.Fatal(err.Error())
	}

	K3sVersion := path.Base(resp.Request.URL.String())
	defer resp.Body.Close()

	return K3sVersion
}

update install.go

44:   command.Flags().String("k3s-version", config.GetK3sVersionLatest(), "Optional version to install, pinned at a default")

63:   fmt.Println("k3s version: " + k3sVersion)

update join.go

33:   command.Flags().String("k3s-version", config.GetK3sVersionLatest(), "Optional version to install, pinned at a default")

51:   fmt.Println("k3s version: " + k3sVersion)

Context

Using the latest k3s version as the default, in case k3s-version is not defined.

Join arguments are confusing

k3sup join arguments are confusing. Both IPs are for servers.

Example

export SERVER_IP=159.65.25.233
export IP=159.65.22.30

k3sup join --ip $IP --server-ip $SERVER_IP --user root

Both IP and SERVER_IP are both remote servers.

A better description would be something like ...

k3sup join --agent-node-ip $IP --master-node-ip $SERVER_IP --user root

Remove sudo when installing as root

If I run k3sup against Photon OS's trimmed version https://github.com/vmware/photon/wiki/Downloading-Photon-OS, sudo does not exist as a command in the 120 MB OVA, so the installation fails.

Expected Behaviour

When the user with which you ssh is root, skip sudo.

Current Behaviour

sudo is run no matter the username (even when root).

Possible Solution

I think that you might rename your superuser to be != root, but we can skip sudo if the username in the --user flag is root. Another solution is to add an additional flag --root-user=myfunky_root_name which will skip appending sudo.

Steps to Reproduce (for bugs)

1.Deploy VM with the following OVA - http://dl.bintray.com/vmware/photon/3.0/Beta/ova/photon-hw11-3.0-5e45dc9.ova (default cred:root pass: changeme)
2. Run k3sup against it
3. See bash: sudo: command not found

Context

Some operating systems might not contain sudo. I can work on the implementation in case this is approved.

Side note

Noticed that this:

    server: https://127.0.0.1:6443

is copied to my kubeconfig. I believe we can swap the 127.0.0.1 reference with the value of --ip.
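A rough illustration of the kind of rewrite being suggested here, not k3sup's actual implementation (GNU sed and the --ip value in $IP are assumed):

sed -i "s/127.0.0.1/$IP/" kubeconfig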

Your Environment

  • What OS or type or VM are you using? Where is it hosted?
    Check out steps to reproduce(PhotonOS)
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux

k3sup assumes and switches to the pulled kubeconfig which has the default context

Expected Behaviour

The switching of contexts should be something that is left up to the user to decide.
It should not be automatic and changed every time the user creates a new k3s cluster.
Your thoughts?

Current Behaviour

As discussed here with @alexellis the latest commit assumes that the pulled config has the 'default' context. But this switching of contexts can cause kubectl to have two default contexts (if there is an existing default context)

Possible Solution

Omit the code section that switches the context to 'default'.

Pod on remote worker could not resolve hostnames

With a fresh installation of k3s, one master and one worker, dns works only on master node.

To try it, I run a pod on each node and trying to ping google.com:

  1. kubectl run -it --rm --restart=Never alpine_one --image=busybox sh
  2. ping google.com

Expected Behaviour

On both pods ( one scheduled on master and one on worker ):
/ # ping google.com
PING google.com (172.217.18.206): 56 data bytes
64 bytes from 172.217.18.206: seq=0 ttl=52 time=6.515 ms
64 bytes from 172.217.18.206: seq=1 ttl=52 time=6.542 ms
64 bytes from 172.217.18.206: seq=2 ttl=52 time=6.734 ms

Current Behaviour

On master:
/ # ping google.com
PING google.com (172.217.18.206): 56 data bytes
64 bytes from 172.217.18.206: seq=0 ttl=52 time=6.515 ms
64 bytes from 172.217.18.206: seq=1 ttl=52 time=6.542 ms
64 bytes from 172.217.18.206: seq=2 ttl=52 time=6.734 ms
On worker:
/ # ping google.com
ping: bad address 'google.com'

Steps to Reproduce (for bugs)

on my local machine:

  1. export SERVER_IP=my_master_ip
  2. k3sup install --ip $SERVER_IP --user k3s --local-path=./kubeconfig --k3s-extra-args '--docker'
  3. export IP=my_worker_ip
  4. k3sup join --ip $IP --server-ip $SERVER_IP --user k3s --k3s-extra-args '--docker'

Your Environment

  • What OS or type or VM are you using? Where is it hosted?
    Master node is a VPS on OVH
    Worker node is an AWS EC2 t2.micro

  • Operating System and version (e.g. Linux, Windows, MacOS):
    kubectl get node -o wide

NAME                                  STATUS   ROLES    AGE   VERSION         INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                  CONTAINER-RUNTIME
vpsxxx                                Ready    master   13m   v1.15.4-k3s.1   XX.XX.XX.XX   <none>        Ubuntu 18.04.3 LTS   4.15.0-58-generic               docker://19.3.4
ip-yyyyy.eu-west-1.compute.internal   Ready    worker   11m   v1.15.4-k3s.1   YY.YY.YY.YY   <none>        Amazon Linux 2       4.14.146-120.181.amzn2.x86_64   docker://18.6.1

Docs: SAN definition and purpose

I'll volunteer to make this doc update, just wanted to get some clarification first to make sure I understand.

The README says that this tool:

updates the SAN address to the public IP

Looks like this refers to the flag:

			cli.StringSliceFlag{
				Name:  "tls-san",
				Usage: "Add additional hostname or IP as a Subject Alternative Name in the TLS cert",
				Value: &ServerConfig.TLSSan,
			},

...in k3s: https://github.com/rancher/k3s/blob/99716deb08784e62ecd29f4f457fb20c73310748/pkg/cli/cmds/server.go#L122-L125

I had to google "SAN certificate" but I learned that SANs provide the ability to have multiple domain names (or IP addresses) listed on the same certificate. So if I understand correctly, without this feature, k3s may generate a certificate that includes the k3s server's hostname but not the IP address, and that would be problematic for k3s agents or the kubectl CLI that wants to connect via the IP address?

If that is correct I can take a crack at adding an explainer for this.

[feature]sudo on user support?

Hey Alex,

Great work as usual, really impressive project :) What about adding a sudoed user to the bootstrap options? Currently only root can install via k3sup.

[Feature] Add HA masters with k3s and dqlite

Expected Behaviour

@ibuildthecloud is adding HA master support with dqlite over at -> k3s-io/k3s#1034

Current Behaviour

Not yet available in k3s or k3sup

Possible Solution

We should add support for the new flags in k3sup install/join so that we're ready for the first rc release.

Darren gives examples on the PR ->

# Start one server with clustering enabled on serve1
./k3s server --cluster-init --token SECRET

# On server2 join the cluster
./k3s server -t SECRET -s https://server1:6443

# On server3 join the cluster
./k3s server -t SECRET -s https://server1:6443

[Feature Request] Air Gap support

To run k3sup I need an internet connection to run the install script and download the k3s binary. I'd like to be able to run k3sup on a secure network not connected to the internet.

I'd like to be able to point k3sup to a mirror for the install script and binary. There is a good chance the upstream k3s install script needs to change to support this.

Optionally if k3sup could run as a webserver or just scp the files that would be nice.

UX refinements

Expected Behaviour

Copying and pasting the commands from the installation instructions in README.md would work.

Current Behaviour

The first line from the installation instructions gives me the k3sup-$arch binary and correct instructions:

$ curl -sLS https://get.k3sup.dev | sh
=========================================================
==    As the script was run as a non-root user the     ==
==    following commands may need to be run manually   ==
=========================================================

  sudo cp k3sup-darwin /usr/local/bin/k3sup

But the second command in the instructions fails, because the binary name is different:

$ sudo install k3sup /usr/local/bin/
install: k3sup: No such file or directory

Possible Solution

  1. Remove second line of installation, letting the user follow the instructions from the get.sh output
  2. After downloading the binary, rename it from k3sup-$arch to k3sup and adjust the instructions in the get.sh output to sudo cp k3sup /usr/local/bin/k3sup

Steps to Reproduce (for bugs)

  1. Execute exactly the installation instructions

Context

A quick change to yield a better experience when installing the binary. The first impression sticks 😅

Your Environment

  • MacOS.

[Feature Request] Add support for Docker instead of Containerd

Current Behaviour

Currently there is no flag available in the install command to use Docker instead of Containerd.

Possible Solution

Could we add an environment variable or a flag in the install command to support this feature ? Technically we might just need to add a --docker flag to the install command here if the user wants to use Docker: install.go

I would greatly appreciate this feature. I can try a Pull Request if you want.

Thanks

[Feature Request] Knative app

I've been experimenting with Knative recently and setup is a pain. It would be very convenient to have an app to install Istio + Knative. Details on a custom install can be found here.

Expected Behaviour

k3sup app install knative

Server address in kubeconfig is set to 127.0.0.1 for k3s 0.9.0

Unable to access server for k3s 0.9.0 since it uses 127.0.0.1 instead of localhost in the server address

Expected Behaviour

The server address should be remote ip, sending a PR

Current Behaviour

The server address is 127.0.0.1

Possible Solution

Replace 127.0.0.1 in addition to 'localhost' for the server address

Steps to Reproduce (for bugs)

  1. k3sup install --ip 139.178.69.85 --k3s-version=v0.9.0
  2. cat kubeconfig | grep server gives server: https://127.0.0.1:6443

Context

Using kubectl to access the remote cluster

Your Environment

  • What OS or type or VM are you using? Where is it hosted?
    Baremetal running Ubuntu 19.04 on Packet

  • Operating System and version (e.g. Linux, Windows, MacOS):
    MacOS

K3sup hung during installation on Raspberry Pi A+

During installation of k3s on my Raspberry Pi A+ board, k3sup seems to be hung after starting k3s. At first it seemed to be working, as it got to the starting k3s message soon after issuing the command, but then it hung there for about an hour before I killed it.

Expected Behaviour

I ran the following command on my Mac:

k3sup install --ip 10.10.100.183 --merge --user pi

Current Behaviour

Public IP: 10.10.100.183
ssh -i /Users/verrol/.ssh/id_rsa pi@10.10.100.183
ssh: curl -sLS https://get.k3s.io | INSTALL_K3S_EXEC='server --tls-san 10.10.100.183 ' INSTALL_K3S_VERSION='v0.9.1' sh -

[INFO] Using v0.9.1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s

Possible Solution

Steps to Reproduce (for bugs)

  1. Install k3sup:
    curl -sLS https://get.k3sup.dev | sh

  2. Run k3sup to install k3s on RPi
    k3sup install --ip 10.10.100.183 --merge --user pi

Context

Trying to see if I can get k3s to run on Raspian to experiment with k8s on the edge.

Your Environment

  • What OS or type or VM are you using? Where is it hosted?
    Host:
    % uname -a
    Darwin vee.mv.lorrev.org 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64

Remote/RPi:
% uname -a
Linux raspberrypi 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux

  • Operating System and version (e.g. Linux, Windows, MacOS):

Should k3d be supported for local Docker-based dev?

k3d is mentioned in the README in as a similar tool, but with a different purpose.

k3sup installs to a VM, bare metal, ARM device etc

k3d starts a docker container

But should we offer k3d so that k3sup can become the go-to tooling for all k3s users?

[Feature Request] Support ssh-agent

Instead of relying on an ssh-key, also support ssh-agent for retrieving / using a private ssh key. Normally I have no key on my hard drive; it's just loaded via ssh-agent.

CLI install script check for user of k3sup on OS X

On OS X /usr/local/bin/ is very often owned by the single user a MBP typically has, this has been the homebrew default for ages. So this is a comfort feature...

Expected Behaviour

When running curl -sLS https://get.k3sup.dev | sh it presents

You already have the k3sup cli!
Overwriting in 1 seconds.. Press Control+C to cancel.

Given that /usr/local/bin/k3sup is owned by me (the active user) I actually expect this overwrite to happen.

Current Behaviour

Overwrite is not happening, because

=========================================================
==    As the script was run as a non-root user the     ==
==    following commands may need to be run manually   ==
=========================================================

  sudo cp k3sup-darwin /usr/local/bin/k3sup

The sudo in this case actually makes no sense, as the existing binary is already owned by me, given the entire directory is managed that way for consistency...

Your Environment

  • MacOS X, still stuck on latest Mojave

Running --merge after re-installing k3s gave a certificate error

After reinstalling k3s in a cluster, and trying to connect to it again with k3sup (with or without --skip-install), there is a certificate issue, and all commands fail.

Expected Behaviour

Reinstalling k3s using k3sup will handle updating certificates, and kubectl will be configured to use the cluster.

Current Behaviour

k3sup install --ip my-ip --context test --merge --local-path ~/.kube/config
# or
k3sup install --skip-install --ip my-ip --context test --merge --local-path ~/.kube/config

The above command completes successfully, but any kubectl command results in the certificate related error from the title. It happens when test context was already defined in my config.

Steps to Reproduce (for bugs)

k3sup install --ip my-ip --context test --merge --local-path ~/.kube/config
kubectl get pods # all is good

ssh my-ip
k3s-uninstall.sh
exit

kubectl config delete-context test
kubectl config delete-cluster test

k3sup install --ip my-ip --context test --merge --local-path ~/.kube/config
kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority

Context

I wanted to completely reset k3s cluster I used for testing, and test my deployment from scratch. I wanted to continue using the same context/cluster names, but couldn't make it work. In the end I had to use a new cluster/context name, and it helped.

Your Environment

client: macos 10.14.5
server: ubuntu 18.04

[Feature] app for tiller

Wouldn't it be nice to be able to setup helm using k3sup ? I've implemented a helm subcommand to deploy tiller and opened this issue to discuss the idea before a potential pull request (https://github.com/Patazerty/k3sup/tree/feat_helm).

Expected Behaviour

A k3sup helm2 subcommand to manage helmv2. Tillerless helm3 is coming, but is not yet out of beta and helmv2 is still widely used. Different from the new app subcommand that only uses helm as a templating engine, the helm subcommand could deploy tiller in different ways, including the usual 'insecure' way of giving tiller the cluster-admin role, or restrict tiller to a given namespace with proper RBAC and good defaults.

Current Behaviour

The app command deploys apps using helm tillerless. It's an awesome feature but is a bit limited.

Context

I was deploying a k3s cluster and wanted to install helm. While installing helm is not that complicated, it's certainly nice to have k3s and helm deployment in a single tool.

Not all arguments are passed to the metrics-server

The snippet:

overrides := map[string]string{}
overrides["args"] = `{--kubelet-insecure-tls,--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname}`

Renders to:

--kubelet-insecure-tls
--kubelet-preferred-address-types=InternalIP
--ExternalIP
--Hostname

Which is not what you intend, I guess.

Also, a little off-topic, but what's the point of having a --namespace parameter if you don't allow anything but 'default'?

K3sup Version: 0.5.2
Git Commit: 9193ccf

[Feature] Add non-ssh option for cloud-init/local use

If k3sup is used while provisioning a new VM, its output to stdio will be captured in the cloud-init logs.

Add an option to go "quiet" and to run commands locally instead of over SSH for this use case.

The user would then connect from his or her laptop using the --skip-install flag to bring back a local kubeconfig file.
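
A minimal sketch of what the local execution path could look like (the function name and the quiet flag are assumptions, not the final design):

package operator

import (
	"fmt"
	"os/exec"
)

// runLocal executes a command on the local machine via /bin/sh instead of
// opening an SSH session. When quiet is set, nothing is echoed to stdout,
// so cloud-init logs are not flooded with installer output.
func runLocal(command string, quiet bool) (string, error) {
	out, err := exec.Command("/bin/sh", "-c", command).CombinedOutput()
	if !quiet {
		fmt.Print(string(out))
	}
	return string(out), err
}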

SSH Error - handshake failed

I tried adding a server today with the following command and the resulting output:

$ k3sup install --context k3s-dev --ip 163.172.147.187 --user kscarlett --ssh-key ~/.ssh/id_rsa
Public IP: <ip>
ssh -i /Users/kscarlett/.ssh/id_rsa kscarlett@<ip>
Error: unable to connect to <ip>:22 over ssh: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

The strange thing is that when I copy-paste the SSH command it prints, it logs me in just fine. Of note is that when I SSH into the server, it takes ~5 seconds, while k3sup fails immediately.

Expected Behaviour

Successful SSH authentication, just as I get manually.

Current Behaviour

Near-immediate failure of the SSH command.

Possible Solution
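
The root cause isn't confirmed here, but an immediate failure with "attempted methods [none publickey]" often points to a key the Go SSH client cannot use directly (for example, a passphrase-protected id_rsa that openssh unlocks through ssh-agent). A sketch of falling back to the local agent, assuming the golang.org/x/crypto/ssh packages:

package sshauth

import (
	"net"
	"os"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/agent"
)

// agentAuthMethod builds an ssh.AuthMethod backed by the user's running
// ssh-agent, so passphrase-protected keys can still be used.
func agentAuthMethod() (ssh.AuthMethod, error) {
	sock, err := net.Dial("unix", os.Getenv("SSH_AUTH_SOCK"))
	if err != nil {
		return nil, err
	}
	return ssh.PublicKeysCallback(agent.NewClient(sock).Signers), nil
}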

Steps to Reproduce (for bugs)

Seems like normal workflow - environment issue?

Context

I am unable to create a new server.

Your Environment

Local

  • OS: macOS 10.14.6
  • SSH: OpenSSH_7.9p1, LibreSSL 2.7.3

Server

  • OS: Ubuntu 18.04.3 LTS
  • SSH: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
  • Hosted at Scaleway (C2L)

[Feature request] Validate that the k3s-agent started with systemd

Synopsis

The k3sup join command doesn't validate that the k3s-agent is running during the join process.

Context

The k3s-agent.service file has Type=exec, which doesn't appear to work correctly on Ubuntu 18.04.3 LTS (a Rancher issue, not a k3sup issue).

From the k3sup log output I see ...

[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit

... but the k3s-agent actually failed to start.
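
A validation step after the join could surface this instead of reporting success. A rough sketch, assuming the check is run through the same command runner k3sup already uses over SSH (the helper signature and timeout are assumptions):

package join

import (
	"fmt"
	"strings"
	"time"
)

// validateAgent polls systemd until the k3s-agent unit reports "active",
// and fails the join if it never comes up within the timeout.
func validateAgent(run func(cmd string) (string, error)) error {
	for i := 0; i < 30; i++ {
		out, _ := run("sudo systemctl is-active k3s-agent")
		if strings.TrimSpace(out) == "active" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("k3s-agent did not reach the active state")
}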

Environment

Ubuntu 18.04.3 LTS

Execution Scenario

Execute a join

export SERVER_IP=159.65.25.233
export IP=159.65.22.30

k3sup join --ip $IP --server-ip $SERVER_IP --user root

Expected Behavior

> kubectl get node

NAME    STATUS   ROLES    AGE     VERSION
k3s-1   Ready    master   6m57s   v1.14.6-k3s.1
k3s-2   Ready    worker   55s     v1.14.6-k3s.1

Actual Behavior

> kubectl get node

NAME    STATUS   ROLES    AGE     VERSION
k3s-1   Ready    master   6m57s   v1.14.6-k3s.1

[Feature Request] Write config to ~/.kube/config

Expected Behaviour

k3sup should add a new context to ~/.kube/config.

Current Behaviour

k3sup writes a brand new kubeconfig file.

Possible Solution

Basically, what I'd like to see happen is to pass a CLI option like --save-default-config --context foo, which would get k3sup to run the equivalent of:

        kubectl config set-cluster foo --server=https://[ip-address]:6443 --certificate-authority=[cafile]
        kubectl config set-credentials foo-admin --username=admin --password=[password]
        kubectl config set-context foo --cluster=foo --user=foo-admin
        kubectl config use-context foo

(Or, just write the equivalent changes directly to ~/.kube/config.)

Context

It's annoying to run multiple kube configs. :P

App: openfaas-ingress

Expected Behaviour

An app called openfaas-ingress should create an ingress record and an issuer.

The inputs are:

  • domain name
  • email address

i.e. k3sup app install openfaas-ingress --domain openfaas.example.com --email [email protected]

Use cert-manager 0.11 as per the chart app.

Additional:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openfaas-gateway
  namespace: openfaas
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: openfaas.example.com
    http:
      paths:
      - backend:
          serviceName: gateway
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - openfaas.plonk.dev
    secretName: gw-openfaas
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: example-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx

[Feature] Add version to "k3sup version" through git metadata

Expected Behaviour

The following should print a version, just like inlets.dev does in its Makefile:

k3sup version

Current Behaviour

Just prints a welcome message.

Possible Solution

Follow the approach in the Makefile of inlets.
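
A minimal sketch of the usual Go approach (variable names are assumptions): declare package-level variables and inject them at build time with -ldflags.

package main

import "fmt"

// Version and GitCommit are injected at build time, for example:
//   go build -ldflags "-X main.Version=$(git describe --tags) -X main.GitCommit=$(git rev-parse HEAD)"
var (
	Version   string
	GitCommit string
)

func main() {
	if Version == "" {
		fmt.Println("Version: dev")
		return
	}
	fmt.Printf("Version: %s\nGit Commit: %s\n", Version, GitCommit)
}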

Steps to Reproduce (for bugs)

  1. Run make
  2. Run ./bin/k3sup version (pick the right binary for your dev environment)

Context

For helping with bug reports etc.

[Feature request] Add app for Docker registry

Expected Behaviour

Add the docker registry by:

k3sup app install registry

Afterwards, show information on how to port-forward from localhost to the registry port.

Current Behaviour

Unavailable.

Context

When developing locally for OpenFaaS, having a local registry in K8S is faster than pushing images to Docker Hub.

Add support to change context name on kubeconfig

I would love to be able to define the name of the context created in the kubeconfig file.

I have something already working, but the PR template says that I should open an issue first to comment on and discuss the functionality.

Current Behaviour

Today the generated kubeconfig file looks like this

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: https://<redacted>:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: <redacted>
    username: admin

Possible Solution

I would love to be able to set something other than default for all the values related to the cluster, user, and context.

Something like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: https://<redacted>:6443
  name: homelab
contexts:
- context:
    cluster: homelab
    user: homelab
  name: homelab
current-context: homelab
kind: Config
preferences: {}
users:
- name: homelab
  user:
    password: <redacted>
    username: admin

Context

  • Be able to install several k3s clusters with k3sup from the same computer and merge them all into the $HOME/.kube/config file without conflicts
  • Have a more meaningful name when switching contexts

[Feature] ability to pass k3s install options

Expected Behaviour

As a user of k3sup,
I want to be able to specify additional k3s install options (such as --no-deploy, --node-taint, etc),
So that I can continue to leverage k3sup but have some control over the way k3s is installed or joined

Current Behaviour

Currently k3sup only has one hard-coded way (apart from the IP address) to install or join k3s.

Possible Solution

Allow for passing various 'k3s' install options from the k3sup install/join CLI

Context

See k3s install configuration options
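
A possible shape for this (the helper name and flag wording are assumptions): take a pass-through string from the CLI and splice it into the install command via the upstream installer's INSTALL_K3S_EXEC variable.

package install

import "fmt"

// buildInstallCommand appends user-supplied k3s flags (for example
// "--no-deploy traefik --node-taint key=value:NoExecute") to the server
// invocation via the k3s installer's INSTALL_K3S_EXEC variable.
func buildInstallCommand(publicIP, extraArgs string) string {
	return fmt.Sprintf(
		"curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --tls-san %s %s' sh -",
		publicIP, extraArgs)
}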

Feature request

Support in the ingress for multiple domains, custom SSL certs, and domain -> pod rules

Hi,
I'm bringing over this issue from the k3s repo - k3s-io/k3s#868
k3sup has become our primary way to create k3s clusters, and I'm hoping this is the right place to fix it.
FYI, there are multiple people with similar issues - k3s-io/k3s#795

Basically it boils down to a simple thing: how do we configure a domain on the ingress, add SSL certificates to it (non-Let's Encrypt), and set up domain/URL -> pod mapping?

It might be a little different from the OpenFaaS use case... but a lot of us just want to run a Next.js + Redis stack on k3s!
