
k3s's Introduction

K3s - Lightweight Kubernetes

Lightweight Kubernetes. Production ready, easy to install, half the memory, all in a binary less than 100 MB.

Great for:

  • Edge
  • IoT
  • CI
  • Development
  • ARM
  • Embedding k8s
  • Situations where a PhD in k8s clusterology is infeasible

What is this?

K3s is a fully conformant production-ready Kubernetes distribution with the following changes:

  1. It is packaged as a single binary.
  2. It adds support for sqlite3 as the default storage backend. Etcd3, MySQL, and Postgres are also supported.
  3. It wraps Kubernetes and other components in a single, simple launcher.
  4. It is secure by default with reasonable defaults for lightweight environments.
  5. It has minimal to no OS dependencies (just a sane kernel and cgroup mounts needed).
  6. It eliminates the need to expose a port on Kubernetes worker nodes for the kubelet API by exposing this API to the Kubernetes control plane nodes over a websocket tunnel.

K3s bundles the following technologies together into a single cohesive distribution:

  • containerd and runc (container runtime)
  • Flannel (CNI)
  • CoreDNS
  • Metrics Server
  • Traefik (ingress controller)
  • Klipper-lb (embedded service load balancer)
  • Kube-router (network policy controller)
  • Helm-controller
  • Kine (datastore shim for the SQL backends)
  • Local-path-provisioner (persistent volume provisioner)
  • Host utilities (iptables, socat, etc.)

These technologies can be disabled or swapped out for technologies of your choice.

Additionally, K3s simplifies Kubernetes operations by maintaining functionality for:

  • Managing the TLS certificates of Kubernetes components
  • Managing the connection between worker and server nodes
  • Auto-deploying Kubernetes resources from local manifests, in real time as they are changed (see the sketch after this list)
  • Managing an embedded etcd cluster
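
For example, here is a minimal sketch of the manifest auto-deploy behaviour (the ConfigMap name and contents are made up; the directory path is the one used by a default server install). Any file copied into this directory is applied to the cluster and re-applied whenever the file changes:

sudo tee /var/lib/rancher/k3s/server/manifests/hello.yaml > /dev/null <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello            # hypothetical example resource
  namespace: default
data:
  greeting: deployed by the k3s manifest auto-deployer
EOF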

Current Status

Status badges for FOSSA, the nightly CI build, integration test coverage, and unit test coverage are available on the repository page.

What's with the name?

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as k8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.

Is this a fork?

No, it's a distribution. A fork implies continued divergence from the original. This is not K3s's goal or practice. K3s explicitly intends not to change any core Kubernetes functionality. We seek to remain as close to upstream Kubernetes as possible. However, we maintain a small set of patches (well under 1000 lines) important to K3s's use case and deployment model. We maintain patches for other components as well. When possible, we contribute these changes back to the upstream projects, for example, with SELinux support in containerd. This is a common practice amongst software distributions.

K3s is a distribution because it packages additional components and services necessary for a fully functional cluster that go beyond vanilla Kubernetes. These are opinionated choices on technologies for components like ingress, storage class, network policy, service load balancer, and even container runtime. These choices and technologies are touched on in more detail in the What is this? section.

How is this lightweight or smaller than upstream Kubernetes?

There are two major ways that K3s is lighter weight than upstream Kubernetes:

  1. The memory footprint to run it is smaller
  2. The binary, which contains all the non-containerized components needed to run a cluster, is smaller

The memory footprint is reduced primarily by running many components inside of a single process. This eliminates significant overhead that would otherwise be duplicated for each component.

The binary is smaller by removing third-party storage drivers and cloud providers, explained in more detail below.

What have you removed from upstream Kubernetes?

This is a common point of confusion because it has changed over time. Early versions of K3s had much more removed than the current version. K3s currently removes two things:

  1. In-tree storage drivers
  2. In-tree cloud provider

Both of these have out-of-tree alternatives in the form of CSI and CCM, which work in K3s and which upstream is moving towards.

We remove these to achieve a smaller binary size. They can be removed while remaining conformant because neither affects core Kubernetes functionality. They are also dependent on third-party cloud or data center technologies/services, which may not be available in many K3s use cases.

What's next?

Check out our roadmap to see what we have planned moving forward.

Release cadence

K3s maintains pace with upstream Kubernetes releases. Our goal is to release patch releases within one week, and new minors within 30 days.

Our release versioning reflects the version of upstream Kubernetes that is being released. For example, the K3s release v1.27.4+k3s1 maps to the v1.27.4 Kubernetes release. We add a postfix in the form of +k3s<number> to allow us to make additional releases using the same version of upstream Kubernetes while remaining semver compliant. For example, if we discovered a high severity bug in v1.27.4+k3s1 and needed to release an immediate fix for it, we would release v1.27.4+k3s2.

Documentation

Please see the official docs site for complete documentation.

Quick-Start - Install Script

The install.sh script provides a convenient way to download K3s and add a service to systemd or openrc.

To install k3s as a service, run:

curl -sfL https://get.k3s.io | sh -

A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is automatically started or restarted. The install script will install K3s and additional utilities, such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh, for example:

sudo kubectl get nodes
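
If you prefer to run kubectl as a regular user rather than via sudo, one minimal sketch (paths from the text above) is to copy the generated kubeconfig into your home directory:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config       # kubeconfig written by the install
sudo chown "$(id -u):$(id -g)" ~/.kube/config          # make it readable by your user
kubectl get nodes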

K3S_TOKEN is created at /var/lib/rancher/k3s/server/node-token on your server. To install on worker nodes, pass the K3S_URL and K3S_TOKEN environment variables, for example:

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -

Manual Download

  1. Download k3s from the latest release; x86_64, armhf, arm64, and s390x are supported.
  2. Run the server.
sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes

# On a different node run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}

Contributing

Please check out our contributing guide if you're interested in contributing to K3s.

Security

Security issues in K3s can be reported by sending an email to [email protected]. Please do not file public issues for security vulnerabilities.

k3s's People

Contributors

akihirosuda, brandond, briandowns, brooksn, cwayne18, davidnuzik, dependabot[bot], dereknola, dweomer, epicfilemcnulty, erikwilson, galal-hussein, github-actions[bot], ibuildthecloud, joakimr-axis, luthermonson, macedogm, manuelbuil, matttrach, monzelmasry, nikolaishields, oats87, osodracnai, rbrtbnfgl, shylajadevadiga, tashima42, vadorovsky, vitorsavian, warmchang, yamt


k3s's Issues

EPIC: Support HA

The below items should be broken out into separate issues, once we get closer to working on this feature

Phase 1

  • Provide fault tolerance at 2+ instances.
  • Point to an external SQL database like MySQL, Postgres, or etcd3. Users can then use RDS or another hosted SQL solution. (Only PostgreSQL 10.7-11.5 as of the v0.10.0 release.) See the sketch after this list.
  • Support etcd as a backend through manual configuration by user
    • Need to expose the configuration params needed for etcd. Implement as a generic framework that supports all params in the form of --<component>-<component flag>
  • Support the peer discovery that Rancher 2 has. Can reuse the framework from norman, but need to implement the component that does the discovery
  • Add logic in agents to allow for auto-discovery of all API nodes. Then, use client-side logic to round-robin between them if the leader goes down
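
For reference, the external-datastore idea above is what later shipped as the --datastore-endpoint flag; a hedged sketch (credentials and hostname are placeholders, flag spelling per the later k3s HA docs) looks like:

k3s server \
  --datastore-endpoint='mysql://user:pass@tcp(dbhost:3306)/k3s'   # placeholder credentials/host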

Phase 2

  • Embed dqlite (experimental in the v1.0.0 release) in the k3s image and simplify SSL setup for "easy" HA configuration

cluster networking is broken?

The helm install job never succeeds; it seems that it is not possible to reach the DNS server.

alpine:/home/alpine/k3s/dist/artifacts# ./k3s kubectl  get all -n kube-system 
NAME                             READY   STATUS             RESTARTS   AGE
pod/coredns-7748f7f6df-tp7fq     1/1     Running            1          104m
pod/helm-install-traefik-g5rmk   0/1     CrashLoopBackOff   21         104m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   104m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           104m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7748f7f6df   1         1         1       104m

NAME                             COMPLETIONS   DURATION   AGE
job.batch/helm-install-traefik   0/1           104m       104m

./k3s kubectl   -n kube-system logs -f pod/helm-install-traefik-g5rmk
+ export HELM_HOST=127.0.0.1:44134
+ tiller --listen=127.0.0.1:44134 --storage=secret
+ HELM_HOST=127.0.0.1:44134
+ helm init --client-only
[main] 2019/02/08 20:48:52 Starting Tiller v2.12.3 (tls=false)
[main] 2019/02/08 20:48:52 GRPC listening on 127.0.0.1:44134
[main] 2019/02/08 20:48:52 Probes listening on :44135
[main] 2019/02/08 20:48:52 Storage driver is Secret
[main] 2019/02/08 20:48:52 Max history per release is 0
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 10.43.0.10:53: read udp 10.42.0.4:39333->10.43.0.10:53: i/o timeout

Verify by running a busybox pod

alpine:/home/alpine/k3s/dist/artifacts# ./k83s kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
ash: ./k83s: not found
alpine:/home/alpine/k3s/dist/artifacts# ./k3s kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # 
/ # ping 10.43.0.10
PING 10.43.0.10 (10.43.0.10): 56 data bytes
^C
--- 10.43.0.10 ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 32:03:33:52:8c:19 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.6/24 brd 10.42.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3003:33ff:fe52:8c19/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping 10.43.0.10
PING 10.43.0.10 (10.43.0.10): 56 data bytes
^C
--- 10.43.0.10 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.42.0.6
PING 10.42.0.6 (10.42.0.6): 56 data bytes
64 bytes from 10.42.0.6: seq=0 ttl=64 time=0.109 ms
64 bytes from 10.42.0.6: seq=1 ttl=64 time=0.108 ms
64 bytes from 10.42.0.6: seq=2 ttl=64 time=0.106 ms
^C
--- 10.42.0.6 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.106/0.107/0.109 ms

Suggestion: copy/paste initialization guide or script

Hey Darren,

Would you be able to put together something someone could copy/paste without thinking about it? Or maybe an initialization script that works on a base Ubuntu 16/18.x machine?

I was thinking of having a play with this, but it looks like it might take a bit longer than I'd planned.

Alex

artifact-free repo

Why is this repo so huge? Is there a way to break out the source into a separate repo, so people can just download that and build or fetch the artifacts when they choose?

unikernel

I have a silly idea and I have to share it :)

Do we need systemd and journals to run kubernetes? Or could we craft a special Linux just with what we need?

I'm just wondering how much memory we would save and if it is worth it or not :)

Preserve client ips

I would like to run a home cluster and provide services like Pi-hole and Home Assistant. Both products work a lot better if they get the real client IP addresses.

I used to use kube-router and MetalLB for this purpose. Is it possible to achieve the same result with your embedded load balancer and flannel network?
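
One relevant, hedged sketch is the standard Kubernetes mechanism for this: setting externalTrafficPolicy: Local on the Service preserves the client source IP, at the cost of only routing to nodes that run a backing pod. The service name, selector, and port below are made up, and whether the embedded service load balancer honours this depends on the k3s version:

k3s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns               # hypothetical service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the original client IP
  selector:
    app: pihole                  # hypothetical label
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
EOF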

website: Better wording for "Uses only 512 MB of RAM"

Honestly, this is such a superficial report... but when I first saw the website, my immediate thought was: wow, 512 MB seems like a huge overhead for this program's runtime. Obviously, what this is trying to say is that k3s will run on machines with only 512 MB RAM.

Somehow, it'd be nice to connote that this isn't an overhead of k3s. You still get most of that RAM for your containers.

Current text

Uses only 512 MB of RAM.

Proposed text

Supports nodes with only 512 MB of RAM.
Supports nodes with 512 MB of RAM.

or

Only 512 MB of RAM required to run.

or

Supports systems with only 512 MB of RAM.

or

Runs on machines with only 512 MB of RAM.

or something else. I'm not too sure.

Screenshot of current text: (screenshot of the current wording omitted)

Similar reports of confusion on HN today: (linked thread omitted)

Add information about surviving reboots

Is your feature request related to a problem? Please describe.
No, request to document how to build a cluster that survives reboots.

Describe the solution you'd like
The systemd docs seem to only cover the server process, not the agents/nodes.

Additional context
My goal would be to create a Pi cluster that just works on (re)boot, and I believe this would be beneficial to others.
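
A minimal sketch, assuming the install script was used on the server and each Pi (the script creates a k3s unit on servers and a k3s-agent unit on agents installed with K3S_URL/K3S_TOKEN): make sure the units are enabled so systemd brings them back on boot.

sudo systemctl enable k3s          # on the server
sudo systemctl enable k3s-agent    # on each agent/worker node
systemctl is-enabled k3s-agent     # verify it will come back after a reboot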

Failed to start server with "fannel exited"

Fails with:

$ sudo k3s server
[sudo] password for stratos: 
INFO[2019-02-27T09:33:24.808161017+01:00] Starting k3s v0.1.0 (91251aa)                
INFO[2019-02-27T09:33:24.808756152+01:00] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key 
INFO[2019-02-27T09:33:24.871861833+01:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false 
INFO[2019-02-27T09:33:24.872329109+01:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false 
INFO[2019-02-27T09:33:24.932063795+01:00] Listening on :6443                           
INFO[2019-02-27T09:33:25.035386408+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[2019-02-27T09:33:25.035734075+01:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml 
INFO[2019-02-27T09:33:25.437915586+01:00] Node token is available at /var/lib/rancher/k3s/server/node-token 
INFO[2019-02-27T09:33:25.437980492+01:00] To join node to cluster: k3s agent -s https://10.66.180.31:6443 -t ${NODE_TOKEN} 
INFO[2019-02-27T09:33:25.527701806+01:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[2019-02-27T09:33:25.527719937+01:00] Run: k3s kubectl                             
INFO[2019-02-27T09:33:25.527726410+01:00] k3s is up and running                        
INFO[2019-02-27T09:33:25.560836269+01:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-02-27T09:33:25.560947659+01:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-02-27T09:33:25.561094622+01:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
INFO[2019-02-27T09:33:26.568673623+01:00] Connecting to wss://localhost:6443/v1-k3s/connect 
INFO[2019-02-27T09:33:26.568778009+01:00] Connecting to proxy                           url="wss://localhost:6443/v1-k3s/connect"
INFO[2019-02-27T09:33:26.582968162+01:00] Handling backend connection request [serenity] 
INFO[2019-02-27T09:33:26.586584725+01:00] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override serenity 
Flag --allow-privileged has been deprecated, will be removed in a future version
FATA[2019-02-27T09:33:27.619684908+01:00] fannel exited: operation not supported       

Installation (with the curl script) seemed to work correctly. Running on:

Linux serenity 4.20.11-arch2-1-ARCH #1 SMP PREEMPT Fri Feb 22 13:09:33 UTC 2019 x86_64 GNU/Linux

What logs should I be looking at for any clues?

Thanks!

what's left

A good point is that the readme tells what's been removed.
For those who do not know the k8s feature list by heart, it should also tell what's left in k3s.
Nice initiative by the way.

Plans for ZFS support?

Deployment on ZFS is currently not possible, because OverlayFS does not work with ZFS:

overlayfs: filesystem on '/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/273/fs' not supported as upperdir

From the containerd logfile, it appears the ZFS snapshotter is not included:

time="2019-02-27T14:55:43.605823860+01:00" level=info msg="starting containerd" revision= version=1.2.3+unknown
time="2019-02-27T14:55:43.606278371+01:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2019-02-27T14:55:43.606418919+01:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2019-02-27T14:55:43.606671517+01:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2019-02-27T14:55:43.607001436+01:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2019-02-27T14:55:43.624241298+01:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
...

ZFS support would be awesome! Are there any plans to include that?
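
Until a ZFS snapshotter is bundled, one hedged workaround sketch (the pool and volume names are made up) is to back /var/lib/rancher with an ext4-formatted zvol so containerd's overlayfs snapshotter has a supported upperdir:

sudo zfs create -s -V 50G rpool/k3s               # sparse zvol; name and size are placeholders
sudo mkfs.ext4 /dev/zvol/rpool/k3s                # overlayfs works on ext4
sudo mkdir -p /var/lib/rancher
sudo mount /dev/zvol/rpool/k3s /var/lib/rancher   # add to /etc/fstab to persist across reboots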

consider balena-engine support

Hey - fair play to you, this is a really great project that can change a lot in the IoT space.

Have you considered adding support for balena-engine? https://github.com/balena-os/balena-engine

I've been playing with it for a while on arm64 and got really good results; it's still compatible with Docker repos and images, while the footprint is reduced.

cannot run k3s server

I just ran k3s server but got the error: "Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused""
So what did I miss? Does that mean I should install containerd first?

k3s on rancher

Ability to import a k3s cluster into Rancher. The fix will be in Rancher.

Docker in docker setup constantly logs error

You get the following error over and over again when running inside Docker. The easiest way to see this is to just run docker-compose up.

node_1    | E0223 01:01:20.444373       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/8f62bb7a8c33378725a935accb46039fd4d8dd3917f144f44a79aa052117f97c/kube-proxy": failed to get cgroup stats for "/docker/8f62bb7a8c33378725a935accb46039fd4d8dd3917f144f44a79aa052117f97c/kube-proxy": failed to get container info for "/docker/8f62bb7a8c33378725a935accb46039fd4d8dd3917f144f44a79aa052117f97c/kube-proxy": unknown container "/docker/8f62bb7a8c33378725a935accb46039fd4d8dd3917f144f44a79aa052117f97c/kube-proxy"

I believe this is because of the custom changes we made to cAdvisor. We need to revert those.

Interop with existing k8s control plane/cluster

How does this play with an existing k8s cluster if I want to use k3s for IoT nodes?

If I use flannel (deployed via the official CNI plugin), can I also use the flannel integration in k3s?

Can I mix control plane nodes from a typical k8s installation with k3s worker nodes?

Can I use the k3s hyperkube, rename it to kubelet/kube-proxy, and use it as a drop-in replacement in the worker node setup?

Document how to add the HostPath storage driver

Is your feature request related to a problem? Please describe.
Manifests that include PersistentVolumeClaims do not work.

Describe the solution you'd like
I would like documentation on how to install and configure the HostPath driver. I'm not interested in HA storage, etc.

Describe alternatives you've considered
None

Thanks!
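
While waiting for documentation, here is a minimal sketch of a manually created hostPath PersistentVolume that a PersistentVolumeClaim can bind to (the name, size, and host path below are made up):

k3s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1                    # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/k3s-volumes/pv-1   # hypothetical path on the node
EOF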

Encrypted Networking Support

k3s targets non-cloud and mixed-cloud deployments without a private LAN; for this use case, plain flannel seems to be insecure.

Given that even Armbian (an ARM SoC focused distro), for example, ships with WireGuard tooling installed by default, you may want to consider it as an easy to use, secure, low-overhead, high-performance (yes, all the buzzwords are true ;) VPN solution. It can also deal with NATs/CGN.

Flannel should be able to use WireGuard somehow.

Goals:

  • encrypt traffic
  • no public access to port 4789 (WireGuard's public exposure is very limited)
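
For what it's worth, later k3s releases expose flannel's backend selection, so encrypting pod traffic with WireGuard becomes a one-flag sketch (the flag spelling has varied across releases, so check k3s server --help for your version):

k3s server --flannel-backend=wireguard   # later releases spell this wireguard-native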

single master

Hello! Nice project :)

I'd like to replace yunohost with kubernetes, because the API is so much more beautiful than bash :)

In a single node scenario, is flannel still required?

Do you plan to support socket activation for the API? It would be nice :) And maybe have the controller run on a cron schedule instead of events? Just some ideas for improvements :)

Do you also support CRI-O? I think it would further decrease the footprint on small devices.

Do you have more ideas on how to improve it?

What is the current support you provide for this side project? (Just asking, not putting any form of pressure on you, I know how it is to run a hobby open source project ;) )

Thanks again for working on that, this is really cool!

How to put back GPU support?

This looks like an awesome project. I'd like to use it to control a small GPU cluster I have on my LAN.

Can you provide advice about how I might put back the GPU support? I don't mind maintaining my own fork.

k3s systemd service should wait until the server is ready

If the k3s service is started and we immediately attempt to use it, it may fail:

root@eeabbe84a77e:~# systemctl stop k3s.service && systemctl start k3s.service && k3s kubectl get nodes
The connection to the server localhost:6443 was refused - did you specify the right host or port?
root@eeabbe84a77e:~# systemctl stop k3s.service && systemctl start k3s.service && k3s kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "k3s-ca")
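
As a stop-gap, here is a minimal polling sketch (the 120-second timeout is arbitrary) that waits for the apiserver to answer before using kubectl:

systemctl start k3s.service
timeout 120 sh -c 'until k3s kubectl get nodes >/dev/null 2>&1; do sleep 2; done'
k3s kubectl get nodes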

Build dependencies missing

Tried building it locally; it doesn't seem to work. Even after I do a string replace to change all k8s.io/kubernetes imports to github.com/ibuildthecloud/k3s, the build still fails (output below).

What am I missing?

root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# go build -o k3s github.com/ibuildthecloud/k3s
main.go:9:2: cannot find package "k8s.io/kubernetes/cmd/agent" in any of:
	/go/src/github.com/ibuildthecloud/k3s/vendor/k8s.io/kubernetes/cmd/agent (vendor tree)
	/usr/local/go/src/k8s.io/kubernetes/cmd/agent (from $GOROOT)
	/go/src/k8s.io/kubernetes/cmd/agent (from $GOPATH)
main.go:10:2: cannot find package "k8s.io/kubernetes/cmd/server" in any of:
	/go/src/github.com/ibuildthecloud/k3s/vendor/k8s.io/kubernetes/cmd/server (vendor tree)
	/usr/local/go/src/k8s.io/kubernetes/cmd/server (from $GOROOT)
	/go/src/k8s.io/kubernetes/cmd/server (from $GOPATH)
root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# sed -i 's@"k8s.io/kubernetes/@"github.com/ibuildthecloud/k3s/@' main.go
root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# go build -o k3s github.com/ibuildthecloud/k3s
cmd/server/server.go:32:2: cannot find package "k8s.io/kubernetes/pkg/apis/componentconfig" in any of:
	/go/src/github.com/ibuildthecloud/k3s/vendor/k8s.io/kubernetes/pkg/apis/componentconfig (vendor tree)
	/usr/local/go/src/k8s.io/kubernetes/pkg/apis/componentconfig (from $GOROOT)
	/go/src/k8s.io/kubernetes/pkg/apis/componentconfig (from $GOPATH)
/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/accelerators/nvidia.go:30:2: build constraints exclude all Go files in /go/src/k8s.io/kubernetes/vendor/github.com/mindprince/gonvml
root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# find . -type f -name "*.go"  | xargs sed -i 's@"k8s.io/kubernetes/@"github.com/ibuildthecloud/k3s/@'
root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# go build -o k3s github.com/ibuildthecloud/k3s
vendor/github.com/google/cadvisor/container/containerd/factory.go:28:2: build constraints exclude all Go files in /go/src/github.com/ibuildthecloud/k3s/vendor/github.com/google/cadvisor/container/libcontainer
vendor/github.com/google/cadvisor/accelerators/nvidia.go:30:2: build constraints exclude all Go files in /go/src/github.com/ibuildthecloud/k3s/vendor/github.com/mindprince/gonvml
root@7c02726b2cf2:/go/src/github.com/ibuildthecloud/k3s# echo $?
1

Missing /root/.kube/k3s.yaml

Hi there, thanks for your project. We're excited to be using it!

I noticed this bug. The logs say the config file was written, but it wasn't:

root@spot-47:/var/lib/rancher/k3s# journalctl -u k3s
...
Feb 01 20:06:51 spot-47 k3s[540]: time="2019-02-01T20:06:51.209386967Z" level=error msg="Failed to generate kubeconfig: server https://localhost:6443/cacerts is not trusted: Ge
Feb 01 20:06:51 spot-47 k3s[540]: time="2019-02-01T20:06:51.209534311Z" level=info msg="Wrote kubeconfig /root/.kube/k3s.yaml"
...

root@spot-47:/var/lib/rancher/k3s# cat /root/.kube/k3s.yaml
cat: /root/.kube/k3s.yaml: No such file or directory

Thanks!

Failed to startup pod after "re-install"

My initial run goes OK. However, I then realize I am running out of disk.
I delete /var/lib/rancher, mount a ZFS volume at this dir, and restart k3s; I always get errors like this:

INFO[2019-02-07T20:27:43.854571633+01:00] shim reaped id=31bf8c59ef07ec0e779fdb210dce68ae9743ef549820cd212cd19d6b28b306dd
ERRO[2019-02-07T20:27:44.012645877+01:00] RunPodSandbox for &PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,} failed, error error="failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown"
INFO[2019-02-07T20:27:55.742334514+01:00] RunPodSandbox with config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,},Hostname:coredns-7748f7f6df-6p99f,LogDirectory:/var/log/pods/8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,DnsConfig:&DNSConfig{Servers:[147.214.252.30 147.214.9.30],Searches:[ki.sw.ericsson.se],Options:[],},PortMappings:[&PortMapping{Protocol:UDP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:9153,HostPort:0,HostIp:,}],Labels:map[string]string{io.kubernetes.pod.name: coredns-7748f7f6df-6p99f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,k8s-app: kube-dns,pod-template-hash: 7748f7f6df,},Annotations:map[string]string{kubernetes.io/config.seen: 2019-02-07T20:21:20.14626935+01:00,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},}
INFO[2019-02-07T20:27:55.852741235+01:00] shim containerd-shim started address=/containerd-shim/k8s.io/ce6315d62ca83305c1a7426bdcf4a865336c5a2906e6ab7156b55c51954536c6/shim.sock debug=false pid=7104
INFO[2019-02-07T20:27:55.856834137+01:00] shim reaped id=ce6315d62ca83305c1a7426bdcf4a865336c5a2906e6ab7156b55c51954536c6
ERRO[2019-02-07T20:27:56.024630042+01:00] RunPodSandbox for &PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,} failed, error error="failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown"
INFO[2019-02-07T20:28:10.742295323+01:00] RunPodSandbox with config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,},Hostname:coredns-7748f7f6df-6p99f,LogDirectory:/var/log/pods/8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,DnsConfig:&DNSConfig{Servers:[147.214.252.30 147.214.9.30],Searches:[ki.sw.ericsson.se],Options:[],},PortMappings:[&PortMapping{Protocol:UDP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:9153,HostPort:0,HostIp:,}],Labels:map[string]string{io.kubernetes.pod.name: coredns-7748f7f6df-6p99f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,k8s-app: kube-dns,pod-template-hash: 7748f7f6df,},Annotations:map[string]string{kubernetes.io/config.seen: 2019-02-07T20:21:20.14626935+01:00,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},}
INFO[2019-02-07T20:28:10.834958570+01:00] shim containerd-shim started address=/containerd-shim/k8s.io/5cebff3e7e0aecd3d7c6e7ef8bdfe3c449a5fa26ad821700fc004ad5f23ed7c7/shim.sock debug=false pid=7371
INFO[2019-02-07T20:28:10.838955158+01:00] shim reaped

Use-case?

Hi :) I can imagine several potential directions for this, but I'm curious about your actual use-case. Is there a concrete target here?

Typo in the project description

My inner grammar nazi just said:
"Change the project description from 'Lightweight Kubernetes. 5 less then k8s.' to 'Lightweight Kubernetes. 5 less than k8s.' please."

k3s cleaning after itself


Describe the bug
k3s doesn't kill running containerd containers, nor does it clean up the veth, CNI, and flannel interfaces.

To Reproduce
Just run k3s server and stop it.

Expected behavior
Containers should go away, as well as the interfaces, etc.


Additional context
Maybe a k3s server stop command would be nice, something along those lines.

Doc about changing CNI

Hi,

I would like to thank you for k3s, which seems like a pretty cool lightweight k8s implementation.

I saw the part in the docs about changing the CNI, but on the server side, it seems it forces the use of flannel.
It's not possible to disable flannel via the install script.

I am wondering about using Calico to enable dual-stack (v4/v6) with k3s; any idea how to achieve that and stop the use of flannel?

I've seen that some ranges are hardcoded in k3s:

pkg/daemons/control/server.go:          _, clusterIPNet, _ := net.ParseCIDR("10.42.0.0/16")

Thanks for any road to follow.
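
A hedged sketch of the direction (flag spelling has changed across releases, from --no-flannel on early builds to --flannel-backend=none later, so check your version's help; INSTALL_K3S_EXEC is the install script's documented way to pass server flags):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --no-flannel' sh -
# then install Calico (or another CNI) on top by applying its manifests
k3s kubectl apply -f calico.yaml     # placeholder manifest name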

Test volume mount e2e failure for k3s image

The e2e tests check the permissions of the /tmp volume mount, which is missing the sticky bit and group permissions in the k3s image:

[sig-storage] HostPath
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance] [It]
  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc000707b10>: {
          s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n    <string>: mount type of \"/test-volume\": 2035054128\n    mode of file \"/test-volume\": drwxr-xr-x\n    \nto contain substring\n    <string>: mode of file \"/test-volume\": dtrwxrwx",
      }
      expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
          <string>: mount type of "/test-volume": 2035054128
          mode of file "/test-volume": drwxr-xr-x

      to contain substring
          <string>: mode of file "/test-volume": dtrwxrwx
  not to have occurred

  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395

Install Agent script

curl -sfL https://get.k3s.io | sh - works great for the control plane.
It would be very useful if there were something similar for installing k3s (with a systemd service) as an agent.

Wrong interface selection on Vagrant

When trying to install k3s via Vagrant, Flannel selects the first network interface, but the first one is used to bridge guest machines on Vagrant.

A first solution is using flannel with an external installation (with the --no-flannel flag). My suggestion is to run the k3s server with an "--iface=enp0s8" flag which would be used for the flannel configuration, or something like that.
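
For reference, a hedged sketch of that suggestion using the flag as it exists in later releases (the interface name is taken from the report above):

k3s server --flannel-iface=enp0s8    # point flannel at the host-only interface, not Vagrant's NAT interface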

How do you pronounce the "k3s"?

I remember seeing that k3s can be expressed as k "ate" s somewhere on ibuildthecloud/k3s before, which is exactly the same pronunciation as k8s (k "eight" s), but I couldn't find it on any page now 👀

Is it "kates", or "kes" as in leet speak?

Add ability to run k3s as non-root user

When attempting to run the release binary k3s server as non-root, we prepare a data directory:

INFO[0000] Preparing data dir /home/test/.rancher/k3s/data/XXX
FATA[2019-02-25T17:19:39.192549600Z] must run as root unless --disable-agent is specified

but further attempts to run as non-root result in an error:

FATA[0000] exec: "k3s-server": executable file not found in $PATH

Invalid certificate for https://127.0.0.1:6443

Hi,

first of all, thanks for k3s 👍
Currently, I am testing it as an alternative to minikube :)

I see OpenSSL complaining about an invalid certificate when accessing https://127.0.0.1:6443 with the ca.crt extracted from kubectl get secrets ....

Reproduction:

Setup

$ docker-compose up -d --scale node=3
$ cp kubeconfig.yaml ~/.kube/config
$ kubectl get nodes # works great :+1: 

Extract ca.crt

$ kubectl get secrets default-token-XXXXX -o go-template='{{index .data "ca.crt" | base64decode}}'  | tee ca.crt

Show issuer

$ openssl x509 -in ca.crt -noout -subject -issuer
subject=CN = k3s-token-ca@1549801496
issuer=CN = k3s-token-ca@1549801496

Show server certs

$ openssl s_client -showcerts -connect 127.0.0.1:6443 < /dev/null &> apiserver.crt

depth=0 O = k3s-org, CN = cattle
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 O = k3s-org, CN = cattle
verify error:num=21:unable to verify the first certificate
verify return:1
CONNECTED(00000003)
---
Certificate chain
 0 s:/O=k3s-org/CN=cattle
   i:/O=k3s-org/CN=k3s-ca
-----BEGIN CERTIFICATE-----
MIIDCDCCAfCgAwIBAgIIR4ZQtTBbrXAwDQYJKoZIhvcNAQELBQAwIzEQMA4GA1UE
....
sy5YAOPhEVA6WESx1xWmdDpsAmvBFsdEPdP88pg8jHSB0tkZ9L7BIc/4X6+q0QPZ
1Ls9bSy5vEJUWoL/XY3UNDmP5ki9VkfWOBVHlWz1RIIeugZzFrD9fDiR11AxhtP3
l179G9cbE4GJ3nT7
-----END CERTIFICATE-----
---
Server certificate
subject=/O=k3s-org/CN=cattle
issuer=/O=k3s-org/CN=k3s-ca
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1353 bytes and written 269 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: EF5219395CABD1D9BDE462EAEFF21A7A6C350EF955978231E7A5CC4F504E6D3B
    Session-ID-ctx:
    Master-Key: KEY
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket:
    ........

    Start Time: 1549802771
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
---
DONE

Verifying cert fails

$ openssl verify -verbose -CAfile ca.crt apiserver.crt
O = k3s-org, CN = cattle
error 20 at 0 depth lookup: unable to get local issuer certificate
error apiserver.crt: verification failed

curl'ing fails with provided cert

$ curl -vv --cacert ca.crt  https://localhost:6443/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: ca.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

curl'ing works with -k

$ curl -vv -k  https://localhost:6443/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: O=k3s-org; CN=localhost
*  start date: Feb 10 12:24:58 2019 GMT
*  expire date: Feb 10 12:24:58 2020 GMT
*  issuer: O=k3s-org; CN=k3s-ca
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET / HTTP/1.1
> Host: localhost:6443
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json
< Www-Authenticate: Basic realm="kubernetes-master"
< Date: Sun, 10 Feb 2019 12:50:13 GMT
< Content-Length: 165
< 
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
* Curl_http_done: called premature == 0
* Connection #0  to host localhost left intact
}

Is anyone seeing similar behaviour?

It is very likely I am missing something. Hints/feedback is much appreciated 💜

Kind regards,
Peter

Install script fails if wget is not available

Even though we don't use wget, we test for its existence, and which wget returns a non-zero exit code, which causes the script to silently fail.

root@1ed0ad1de566:/# curl -sfL https://get.k3s.io | sh -
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  Downloading https://github.com/rancher/k3s/releases/download/v0.1.0-rc8/sha256sum-amd64.txt
[INFO]  Downloading https://github.com/rancher/k3s/releases/download/v0.1.0-rc8/k3s
[INFO]  Verifying download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  systemd: Creating /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
root@1ed0ad1de566:/# rm /bin/wget
root@1ed0ad1de566:/# curl -sfL https://get.k3s.io | sh -
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
root@1ed0ad1de566:/#
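
A hedged sketch of the kind of guard that avoids the silent exit under set -e (the actual fix in install.sh may differ): only probe for a downloader with command -v, and fail loudly when neither tool is present.

if command -v curl >/dev/null 2>&1; then
    DOWNLOADER=curl
elif command -v wget >/dev/null 2>&1; then
    DOWNLOADER=wget
else
    echo 'error: curl or wget is required' >&2
    exit 1
fi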

Expose agent options in server command

Describe the bug
I'm trying to run k3s on a Raspberry Pi 3 running RancherOS. RancherOS uses docker, so I need to pass the --docker option to the agent. When starting the server, it starts an agent, but there is no way to pass the --docker option (at least not that I can see).

To Reproduce
On a machine with just docker available, start the server with agent enabled and see that it errors out.

Expected behavior
It should be possible to pass all agent options to the server, so that it can start the agent with those options.
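
For reference, a hedged sketch of what this looks like once the option is plumbed through (current releases accept --docker on the server subcommand, which also configures the embedded agent; check k3s server --help for your version):

k3s server --docker    # use the host's Docker daemon instead of the embedded containerd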

coreDNS unable to resolve upstream

Hello, I have a plain installation of k3s on Ubuntu 18.04.

I am running a container which is failing to resolve DNS:

# nslookup index.docker.io 10.43.0.10
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'index.docker.io': Try again

# nslookup quay.io 10.43.0.10
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'quay.io': Try again
# k3s kubectl logs -f pod/coredns-7748f7f6df-8htwl -n kube-system
2019-02-26T22:52:50.556Z [ERROR] plugin/errors: 2 index.docker.io. AAAA: unreachable backend: read udp 10.42.0.6:50878->1.1.1.1:53: i/o timeout
2019-02-26T22:52:50.556Z [ERROR] plugin/errors: 2 index.docker.io. A: unreachable backend: read udp 10.42.0.6:38587->1.1.1.1:53: i/o timeout
2019-02-26T22:53:18.425Z [ERROR] plugin/errors: 2 quay.io. AAAA: unreachable backend: read udp 10.42.0.6:48427->1.1.1.1:53: i/o timeout
2019-02-26T22:53:18.425Z [ERROR] plugin/errors: 2 quay.io. A: unreachable backend: read udp 10.42.0.6:53214->1.1.1.1:53: i/o timeout

I am not sure what 1.1.1.1 is or where it's coming from.
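
1.1.1.1 is Cloudflare's public resolver; it typically shows up either in CoreDNS's forward configuration or in the resolv.conf that CoreDNS inherits from the node. A minimal debugging sketch:

k3s kubectl -n kube-system get configmap coredns -o yaml   # look at the forward/proxy directive
cat /etc/resolv.conf                                       # what the node itself resolves with
dig @1.1.1.1 index.docker.io +time=3                       # can the node reach that upstream at all?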

Fix failing Sonobuoy tests

There are two failures for Sonobuoy testing k3s which should be fixed:

Summarizing 2 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance]
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:306

[Fail] [sig-network] Proxy version v1 [It] should proxy through a service and a pod  [Conformance]
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:255

Ran 200 of 1946 Specs in 5523.245 seconds
FAIL! -- 198 Passed | 2 Failed | 0 Pending | 1746 Skipped --- FAIL: TestE2E (5523.56s)
FAIL

Ginkgo ran 1 suite in 1h32m4.098485263s
Test Suite Failed
