k3d's Introduction

k3d

k3s is the lightweight Kubernetes distribution by Rancher: k3s-io/k3s

k3d creates containerized k3s clusters. This means that you can spin up a multi-node k3s cluster on a single machine using Docker.

Note: k3d is a community-driven project, but it is not an official Rancher (SUSE) product. Sponsoring: to spend any significant amount of time improving k3d, we rely on sponsorships.

Learning

Requirements

  • docker
    • Note: k3d v5.x.x requires at least Docker v20.10.5 (runc >= v1.0.0-rc93) to work properly (see #807)

Releases

  • May 2020: v1.7.x -> v3.0.0 (rewrite)
  • January 2021: v3.x.x -> v4.0.0 (breaking changes)
  • September 2021: v4.4.8 -> v5.0.0 (breaking changes)
Platform           Stage
GitHub Releases    stable (latest release)
GitHub Releases    latest (including pre-releases)
Homebrew           stable
Chocolatey         stable
Scoop              stable

Get

You have several options here:

  • use the install script to grab the latest release:

    • wget: wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
    • curl: curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
  • use the install script to grab a specific release (via TAG environment variable):

    • wget: wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | TAG=v5.0.0 bash
    • curl: curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | TAG=v5.0.0 bash
  • use Homebrew: brew install k3d (Homebrew is available for MacOS and Linux)

  • install via MacPorts: sudo port selfupdate && sudo port install k3d (MacPorts is available for MacOS)

  • install via AUR package rancher-k3d-bin: yay -S rancher-k3d-bin

  • grab a release from the release tab and install it yourself.

  • install via go: go install github.com/k3d-io/k3d/v5@latest (Note: this will give you unreleased/bleeding-edge changes)

  • use Chocolatey: choco install k3d (Chocolatey package manager is available for Windows)

  • use Scoop: scoop install k3d (Scoop package manager is available for Windows)
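Whichever route you choose, a quick sanity check afterwards could look like this (a minimal sketch, assuming a v5 binary on your PATH):

k3d version       # prints the k3d version (and, in v5, the bundled default k3s version)
k3d cluster list  # should run without errors and show no clusters yet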

or...

Build

  1. Clone this repo, e.g. via git clone git@github.com:k3d-io/k3d.git or go get github.com/k3d-io/k3d/v5@main
  2. Inside the repo, run
    • make install-tools to make sure the required go packages are installed
  3. Inside the repo, run one of the following commands
    • make build to build for your current system
    • go install to install it to your GOPATH (Note: this will give you unreleased/bleeding-edge changes)
    • make build-cross to build for all systems
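Putting the build steps together, a typical end-to-end sequence might look like this (a sketch assuming Go and make are installed; the ./bin output path is an assumption about the Makefile):

git clone git@github.com:k3d-io/k3d.git
cd k3d
make install-tools   # install the required go tooling
make build           # build for the current system
./bin/k3d version    # assumed output location; adjust if your build drops the binary elsewhere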

Usage

Check out what you can do via k3d help or check the docs @ k3d.io

Example Workflow: Create a new cluster and use it with kubectl

  1. k3d cluster create CLUSTER_NAME to create a new single-node cluster (= 1 container running k3s + 1 loadbalancer container)
  2. [Optional, included in cluster create] k3d kubeconfig merge CLUSTER_NAME --kubeconfig-switch-context to update your default kubeconfig and switch the current-context to the new one
  3. execute some commands like kubectl get pods --all-namespaces
  4. k3d cluster delete CLUSTER_NAME to delete the default cluster
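Putting the workflow together as a copy-pasteable sketch (the cluster name mycluster is just an example):

k3d cluster create mycluster
k3d kubeconfig merge mycluster --kubeconfig-switch-context   # optional, cluster create already does this
kubectl get pods --all-namespaces
k3d cluster delete mycluster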

Connect

  1. Join the Rancher community on slack via slack.rancher.io
  2. Go to rancher-users.slack.com and join our channel #k3d
  3. Start chatting

History

This repository is based on @zeerorg's zeerorg/k3s-in-docker, reimplemented in Go by @iwilltry42 in iwilltry42/k3d, which was adopted by Rancher in rancher/k3d and has since moved into its own GitHub organization at k3d-io/k3d.

Related Projects

  • k3x: a GUI (Linux) for k3d
  • vscode-k3d: vscode plugin for k3d
  • AbsaOSS/k3d-action: a fully customizable GitHub Action to run lightweight Kubernetes clusters
  • AutoK3s: a lightweight tool to help run K3s everywhere, including a k3d provider
  • nolar/setup-k3d-k3s: set up K3d/K3s for GitHub Actions

Contributing

k3d is a community-driven project and so we welcome contributions of any form, be it code, logic, documentation, examples, requests, bug reports, ideas or anything else that pushes this project forward.

Please read our Contributing Guidelines and the related Code of Conduct.

You can find an overview of the k3d project (e.g. explanations and a repository guide) in the documentation: k3d.io/stable/design/project/

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  • Thorsten Klein: 💻 📖 🤔 🚧
  • Rishabh Gupta: 🤔 💻
  • Louis Tournayre: 📖
  • Lionel Nicolas: 💻
  • Toon Sevrin: 💻
  • Dennis Hoppe: 📖 💡
  • Jonas Dellinger: 🚇
  • markrexwinkel: 📖
  • Alvaro: 💻 🤔 🔌
  • Nuno do Carmo: 🖋 ✅ 💬
  • Erwin Kersten: 📖
  • Alex Sears: 📖
  • Mateusz Urbanek: 💻
  • Benjamin Blattberg: 💻
  • Simon Baier: 💻
  • Ambrose Chua: 💻
  • Erik Godding Boye: 💻
  • York Wong: 💻
  • Raul Gonzales: 💻 📖
  • Sunghoon Kang: 💻
  • Kamesh Sampath: 💻
  • Arik Maor: 💻 ✅ 💡
  • Danny Gershman: 💻
  • stopanko: 💵
  • Danny Breyfogle: 📖
  • Ahmed AbouZaid: 🤔 💻 📖
  • Pierre Roudier: 💻

This project follows the all-contributors specification. Contributions of any kind welcome!

Sponsors

Thanks to all our amazing sponsors! 🙏


k3d's Issues

[BUG] Windows v1.2.2 using non-zero timeout parameter always fails

What did you do?

Using k3d create with --wait or -t causes the command to always fail with "exceeded specified timeout", even when the elapsed time is nowhere near the specified timeout.

  • How was the cluster created?

It wasn't (creation failed):
k3d-windows-amd64.exe c -n test -t 300 -w 1
The create failed for every non-zero timeout value I tried.

  • What did you do afterwards?

Verified that it worked for the zero value, i.e. k3d-windows-amd64.exe c -n test -t 0 -w 1

What did you expect to happen?

Expected the command to succeed and the cluster to be created if the timeout is not exceeded.

Screenshots or terminal output

C:\>k3d-windows-amd64.exe c -n test -t 300  -w 1
2019/07/03 11:41:41 Created cluster network with ID 2486b706dc21fb281fda5a3e24211c3ce18371a8d4b6d8bc55757c2695ee73ac
2019/07/03 11:41:41 Creating cluster [test]
2019/07/03 11:41:41 Creating server using docker.io/rancher/k3s:v0.5.0...
2019/07/03 11:41:42 Removing cluster [test]
2019/07/03 11:41:42 ...Removing server
2019/07/03 11:41:44 SUCCESS: removed cluster [test]
2019/07/03 11:41:44 Cluster creation exceeded specified timeout

C:\>k3d-windows-amd64.exe  list
2019/07/03 11:41:49 No clusters found!

C:\>k3d-windows-amd64.exe c -n test -t 0  -w 1
2019/07/03 11:41:00 Created cluster network with ID 2b103aee1f0e56d7c9f9d53dd2e9fd4600193eea63faf68d7b875b7c3c7fdc7f
2019/07/03 11:41:00 Creating cluster [test]
2019/07/03 11:41:00 Creating server using docker.io/rancher/k3s:v0.5.0...
2019/07/03 11:41:06 Booting 1 workers for cluster test
2019/07/03 11:41:07 Created worker with ID 264f199473fe45ac0dffc19e5efa68b61864c9f56a0210bc45cfd2d5901d4396
2019/07/03 11:41:07 SUCCESS: created cluster [test]
2019/07/03 11:41:07 You can now use the cluster with:

export KUBECONFIG="$(k3d-windows-amd64.exe get-kubeconfig --name='test')"
kubectl cluster-info

C:\>k3d-windows-amd64.exe d -n test
2019/07/03 11:41:23 Removing cluster [test]
2019/07/03 11:41:23 ...Removing 1 workers
2019/07/03 11:41:24 ...Removing server
2019/07/03 11:41:26 SUCCESS: removed cluster [test]

Which OS & Architecture?

Windows w/Linux containers, amd64.

Which version of k3d?

k3d version v1.2.2

Which version of docker?

Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:31 2019
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

[BUG] Error response from daemon: invalid reference format

What did you do?
Run k3d create

  • How was the cluster created?

    • k3d create
  • What did you do afterwards?

    • k3d commands?
anuruddha@Anuruddhas-MacBook-Pro ~ k3d create
2019/07/24 14:33:41 Created cluster network with ID 2d5b4e7dc27b58c448df1c161d8261af5efb316564a95bd8b659b5760a522e3b
2019/07/24 14:33:41 Created docker volume  k3d-k3s-default-images
2019/07/24 14:33:41 Creating cluster [k3s-default]
2019/07/24 14:33:41 Creating server using docker.io/rancher/k3s:...
2019/07/24 14:33:41 ERROR: couldn't create container k3d-k3s-default-server
ERROR: couldn't create container k3d-k3s-default-server
Error response from daemon: invalid reference format
✘ anuruddha@Anuruddhas-MacBook-Pro ~ k3d create -v /dev/mapper:/dev/mapper
2019/07/24 14:34:31 Created cluster network with ID 2d5b4e7dc27b58c448df1c161d8261af5efb316564a95bd8b659b5760a522e3b
2019/07/24 14:34:31 Created docker volume  k3d-k3s-default-images
2019/07/24 14:34:31 Creating cluster [k3s-default]
2019/07/24 14:34:31 Creating server using docker.io/rancher/k3s:...
2019/07/24 14:34:31 ERROR: couldn't create container k3d-k3s-default-server
ERROR: couldn't create container k3d-k3s-default-server
Error response from daemon: invalid reference format

What did you expect to happen?
Create the cluster.


Which OS & Architecture?
System Version: macOS 10.14.2 (18C54)
Kernel Version: Darwin 18.2.0

Which version of k3d?

✘ anuruddha@Anuruddhas-MacBook-Pro ~ k3d --version
k3d version v1.3.0

Which version of docker?

Client: Docker Engine - Community
 Version:           19.03.0-rc2
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        f97efcc
 Built:             Wed Jun  5 01:37:53 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-rc2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       f97efcc
  Built:            Wed Jun  5 01:42:10 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

[FEATURE] Explicit support for running k3s in docker mode.

Problem

I have been experimenting with k3d as a lightweight method for CI and development workflows. My target deployment environment ultimately has a hard requirement on k3s running with --docker, due to lack of support for other container runtimes.

I can start a k3d cluster (with or without worker nodes), and the cluster reports 'running' and kubectl cluster-info shows services, but the k3s server never manages to fully launch any pods.

k3d create -n dev -x '--docker' --volume '/var/run/docker.sock:/var/run/docker.sock'
2019/10/01 07:50:04 Created cluster network with ID 159cd009c71407c9fbf7607a7d9b1513a11bcd6d5fcefd1c478f8462930ed0bb
2019/10/01 07:50:04 Created docker volume  k3d-dev-images
2019/10/01 07:50:04 Creating cluster [dev]
2019/10/01 07:50:04 Creating server using docker.io/rancher/k3s:v0.7.0...
2019/10/01 07:50:05 SUCCESS: created cluster [dev]
2019/10/01 07:50:05 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='dev')"
kubectl cluster-info

export KUBECONFIG="$(k3d get-kubeconfig --name='dev')"
kubectl cluster-info
Kubernetes master is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Digging into the server logs, I see a lot of similar error messages.

W1001 12:51:30.951241       1 docker_sandbox.go:386] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the 
status hook for pod "coredns-b7464766c-kvjml_kube-system": Unexpected command output nsenter: can't open '/proc/26668/ns/net': 
No such file or directory
 with error: exit status 1
...
E1001 12:51:31.923002       1 kuberuntime_manager.go:697] createPodSandbox for pod "coredns-b7464766c-kvjml_kube-system(09ed1b31-e44a-11e9-9ee8-0242c0a87002)" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "coredns-b7464766c-kvjml": ResolvConfPath "/var/lib/docker/containers/677c106291d8a8052504be9f451d8fe4c8960c603f7f1339f9939e4976beca1e/resolv.conf" does not exist
E1001 12:51:31.923187       1 pod_workers.go:190] Error syncing pod 09ed1b31-e44a-11e9-9ee8-0242c0a87002 ("coredns-b7464766c-kvjml_kube-system(09ed1b31-e44a-11e9-9ee8-0242c0a87002)"), skipping: failed to "CreatePodSandbox" for "coredns-b7464766c-kvjml_kube-system(09ed1b31-e44a-11e9-9ee8-0242c0a87002)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-b7464766c-kvjml_kube-system(09ed1b31-e44a-11e9-9ee8-0242c0a87002)\" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod \"coredns-b7464766c-kvjml\": ResolvConfPath \"/var/lib/docker/containers/677c106291d8a8052504be9f451d8fe4c8960c603f7f1339f9939e4976beca1e/resolv.conf\" does not exist"
E1001 12:51:31.983323       1 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "helm-install-traefik-hbldm": ResolvConfPath "/var/lib/docker/containers/f1e892a89c00217c0a513a1a1a880ff12d1eda3a3abaa553ffe7617f285c30a1/resolv.conf" does not exist
E1001 12:51:31.983385       1 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "helm-install-traefik-hbldm_kube-system(099898f5-e44a-11e9-9ee8-0242c0a87002)" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "helm-install-traefik-hbldm": ResolvConfPath "/var/lib/docker/containers/f1e892a89c00217c0a513a1a1a880ff12d1eda3a3abaa553ffe7617f285c30a1/resolv.conf" does not exist
E1001 12:51:31.983416       1 kuberuntime_manager.go:697] createPodSandbox for pod "helm-install-traefik-hbldm_kube-system(099898f5-e44a-11e9-9ee8-0242c0a87002)" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "helm-install-traefik-hbldm": ResolvConfPath "/var/lib/docker/containers/f1e892a89c00217c0a513a1a1a880ff12d1eda3a3abaa553ffe7617f285c30a1/resolv.conf" does not exist
E1001 12:51:31.983466       1 pod_workers.go:190] Error syncing pod 099898f5-e44a-11e9-9ee8-0242c0a87002 ("helm-install-traefik-hbldm_kube-system(099898f5-e44a-11e9-9ee8-0242c0a87002)"), skipping: failed to "CreatePodSandbox" for "helm-install-traefik-hbldm_kube-system(099898f5-e44a-11e9-9ee8-0242c0a87002)" with CreatePodSandboxError: "CreatePodSandbox for pod \"helm-install-traefik-hbldm_kube-system(099898f5-e44a-11e9-9ee8-0242c0a87002)\" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod \"helm-install-traefik-hbldm\": ResolvConfPath \"/var/lib/docker/containers/f1e892a89c00217c0a513a1a1a880ff12d1eda3a3abaa553ffe7617f285c30a1/resolv.conf\" does not exist"

Per a conversation in Slack, the resolv.conf issue can be worked around by mounting the host's /var/lib/docker/containers location, but that does not resolve the /proc issues.
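Combining the two workarounds mentioned so far (a sketch only, using the k3d v1.x flags from the command above; it still leaves the /proc problem unsolved):

k3d create -n dev -x '--docker' \
  --volume '/var/run/docker.sock:/var/run/docker.sock' \
  --volume '/var/lib/docker/containers:/var/lib/docker/containers'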

Scope of your request

I would like explicit support for running k3s with docker enabled through k3d, including example docs.

Describe the solution you'd like

A new CLI argument should be added to create (e.g. k3d create --docker) that automatically injects the correct server/agent args and launches with a valid default image. The user may need to provide additional information (e.g., a docker socket volume or a DOCKER_HOST) if relevant.

This seems like it would necessitate an explicit k3s-dind image. The new image would likely need to be based on rancher/k3s.

Describe alternatives you've considered

I've found one implementation available for running k3s with dind, but it does not provide as clean a workflow as k3d. I also have not been able to get it to launch correctly. It suffers from similar issues as above if run with the host docker socket mounted (and different issues otherwise).

`k3d delete` fails when cluster has custom name

First of all, incredible piece of software, and being the WSL Corsair I had to test it on ... WSL 😈

Everything works just fine. However, if I create a cluster with a custom name (e.g. k3d create --name wslk3d) and then run k3d delete without any name, it deletes the $HOME/.config/k3d directory but does not delete the cluster itself (see screenshot below).
[screenshot]

In the screenshot there's also the workaround mkdir -p $HOME/.config/k3d/<cluster name>

Hope this helps, and really looking forward to playing with it ⭐️

[Feature] Local Storage Provider

It would be great to have a default storage provider similar to what Minikube provides. This would allow deploying and developing Kubernetes pods that require storage.

Scope of your request

An additional addon to deploy to single-node clusters.

Describe the solution you'd like

I got it working by using Minikube's storage provisioner and creating the following resources:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  serviceAccountName: storage-provisioner
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  hostNetwork: true
  containers:
  - name: storage-provisioner
    image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    command: ["/storage-provisioner"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /tmp
      name: tmp
  volumes:
  - name: tmp
    hostPath:
      path: /tmp
      type: Directory
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath
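For reference, a workload could then consume the resulting default StorageClass like this (a minimal sketch; the claim name and size are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # illustrative name
spec:
  storageClassName: standard  # the class defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # illustrative size
EOF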

Describe alternatives you've considered

An alternative might be local persistent volumes, but the Minikube solution looks simpler. With local persistent volumes it could even work with multiple nodes.

I'm not sure whether the addon should be integrated into k3s directly. On the other hand, I think it's more a feature required for local development and therefore probably fits better into k3d.

[Enhancement] Proper logging and error handling

Currently we're using the standard logging library, and at some points in the code we check for the --verbose flag, while at other points we don't.
Also, we don't have any sort of log levels (just print or fatal) to switch between.
Additionally, the log messages differ widely from each other.

Bonus: our error handling is quite mixed... at some levels we pass the error upstream, sometimes we drop out with a Fatalf, and sometimes we simply log something and carry on. This could be improved, e.g. so that we only drop out of the process at the top-most level and have lower-level functions only return meaningful errors.

It would be nice if we could

  • clean up the log messages to a unified format
  • introduce a more feature-rich logging library (e.g. sirupsen/logrus)
  • Get proper unified error handling

[FEATURE] Custom k3s Image

For people behind a corporate proxy it is impossible to get workloads onto the k3s cluster because of CA certificate issues. It would be nice to be able to configure a custom k3s image with the required CA certificates included.

k3d create --image custom-image
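A sketch of what that could look like end to end (the image name is hypothetical, and it assumes an --image flag as proposed above):

docker build -t my-registry/k3s-with-cacerts:v0.7.0 .   # a Dockerfile adding the corporate CA certs on top of rancher/k3s
k3d create --image my-registry/k3s-with-cacerts:v0.7.0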

You do great work with k3s & k3d. Thank you for your effort.

[ENHANCEMENT] Creative solution for port binding issues?

Right now you have to use -p to bind any port through which you want to access k8s. On Linux this isn't much of an issue because you can just access the k3d container IP directly, but on Docker for Mac and Windows this isn't possible. I'm wondering if we could get creative here. What we could do is deploy a controller that watches the host ports of pods and then dynamically creates docker containers that each forward just that one port. For example....

If I create a service LB in k3s that exposes port 80, what actually happens is that a pod is created that uses host port 80 and forwards all requests to the service LB's cluster IP. We would see that a pod is using host port 80 and deploy a docker container with -p 80:80. In that docker container we would then create an iptables rule that forwards all traffic coming in on 80 to the k3s_default container. Then VPNKit on Docker for Mac / Docker for Windows will create a forwarding tunnel from the desktop to the new docker container we just created.
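A minimal sketch of such a forwarding container, using socat instead of an iptables rule (the network and container names assume a default k3d v1.x cluster, and alpine/socat is just one convenient image):

docker run -d --name k3d-port-proxy-80 \
  --network k3d-k3s-default \
  -p 80:80 \
  alpine/socat \
  tcp-listen:80,fork,reuseaddr tcp-connect:k3d-k3s-default-server:80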

The only issue I see with this approach is if you have multiple k3s clusters running. We could then do something like a "port offset": one cluster gets an offset of 0, so ports 80 and 443 are bound to 80 and 443, while a second cluster gets an offset like 10000, so its ports are bound to 10080 and 10443.

WDYT?

[Feature] Add friendly names to nodes

If the docker containers are started with the equivalent of docker run --hostname foo, then kubectl get node will show friendlier-looking names.
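A trivially runnable illustration of the docker-level mechanism being suggested (image and name are just examples; the kubelet registers the container's hostname as the node name):

docker run --rm --hostname k3d-mycluster-worker-0 alpine hostname
# prints: k3d-mycluster-worker-0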

How to access services on host machine?

Hi, I'm running k3d using Docker for Mac and I need to access a service running on the host machine (the Mac). From inside a pod I can ping the DHCP address of the host, but of course this can change.

Is there a static way to access the host machine?

Alternatively, I need this in order to reach a custom DNS server running on the host machine. If the host's DNS settings propagated to pods (like they do for containers running in regular docker), that would also solve my problem.
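One commonly suggested option on Docker for Mac (not k3d-specific, and subject to how DNS is resolved inside the pod) is Docker Desktop's host.docker.internal name; a quick check from a throwaway pod might look like:

kubectl run host-check --rm -it --restart=Never --image=busybox -- nslookup host.docker.internal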

[FEATURE] support DOCKER_HOST=ssh:// connectivity

Is your feature request related to a problem or a Pull Request?

N/A

Scope of your request

As commented in issue #68, it "could be" nice to support docker ssh:// connectivity (in addition to the TLS and non-TLS ones).

Here is the current behavior when a DOCKER_HOST is set to use ssh://:
[screenshot]

Describe the solution you'd like

k3d list would result in listing the current clusters when DOCKER_HOST=ssh://<host> is set.
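Sketched with placeholder values, the desired usage would be:

export DOCKER_HOST=ssh://user@remote-host   # placeholder user and host
k3d list                                    # should list the clusters on the remote docker daemon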

Describe alternatives you've considered

Working from WSL, my current setup would use the npiperelay.exe solution for connecting to the local environment.

Thanks in advance for your feedback and help. I really don't have a clue about the effort involved, so I would totally understand if it cannot be implemented now or in the near future.
I still hope it will eventually be implemented, as it would make the connectivity secure in a far more friendly and rapid way than the current Docker TLS setup.

Nunix out

[FEATURE] Allow using existing (or default) docker bridge network, to share network between clusters

Hi, I'd like to use k3d as a lightweight solution to test multi-cluster controllers. Currently, there's no easy way to call the Kubernetes API of, e.g., k3d-cluster1 from a pod in k3d-cluster2. I can connect the server container of cluster1 to the cluster2 bridge network and vice versa (with docker network connect), but cluster1's server certificate isn't valid for an IP in cluster2.

With kind, clusters are created in the default docker bridge network, so they are reachable from one another.

A --network DOCKER_NETWORK_NAME option could be added to the k3d create subcommand. The network would be created if it doesn't exist.

This issue differs from PR #53 in that it is more general (a --host option could actually also be used to use the host network rather than a bridge network).
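A sketch of how the proposed option might be used (the flag is hypothetical at this point, and shared-net is a placeholder name):

docker network create shared-net            # pre-create it, or let k3d create it on demand
k3d create --name cluster1 --network shared-net
k3d create --name cluster2 --network shared-net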

[BUG] No Default k3s image tag (for go install)

Installed k3d using go get -u and go install ...
OS: macOS 10.14.4

Running k3d create results in the following error

2019/05/02 20:52:20 Created cluster network with ID fc9c4b9946bf9774421f68b90d04f461a29f969b7d375c99c5890566307bc9ba
2019/05/02 20:52:20 Creating cluster [k3s_default]
2019/05/02 20:52:20 Creating server using docker.io/rancher/k3s:...
2019/05/02 20:52:20 ERROR: failed to create cluster
ERROR: couldn't pull image docker.io/rancher/k3s:
invalid reference format

Running k3d c --version v0.4.0 results in a successful cluster creation.

k3d c --version v0.4.0
2019/05/02 20:53:51 Created cluster network with ID bfe00611154ed7a9884b14a85c5b772dbaac5e020da41322765114bcb6692f40
2019/05/02 20:53:51 Creating cluster [k3s_default]
2019/05/02 20:53:51 Creating server using docker.io/rancher/k3s:v0.4.0...
2019/05/02 20:53:53 SUCCESS: created cluster [k3s_default]
2019/05/02 20:53:53 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s_default')"
kubectl cluster-info

Took a look at the source; it looks like it's expecting c.String("version") to be populated, and there is no default set.

Awesome job getting this up and running, looking forward to playing with it more.

[BUG] Adding a label to a node fails

What did you do?

  • How was the cluster created?

    • k3d create -n the-cluster -w 3
  • What did you do afterwards?

$ kubectl get nodes -o name
node/k3d-the-cluster-server
node/k3d-the-cluster-worker-0
node/k3d-the-cluster-worker-1
node/k3d-the-cluster-worker-2

$ kubectl label node k3d-the-cluster-worker-0 -l minio-server=true
error: at least one label update is required 

What did you expect to happen?

I expected the command to succeed and to have the new label minio-server=true attached to the node

Screenshots or terminal output

Instead the command failed and only the existing labels are there:

$ kubectl get node k3d-the-cluster-worker-0 --show-labels
NAME                       STATUS   ROLES    AGE   VERSION         LABELS
k3d-the-cluster-worker-0   Ready    worker   13h   v1.14.4-k3s.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k3d-the-cluster-worker-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
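For reference, kubectl expects the label as a positional key=value argument (-l is a label selector, which is why kubectl reports that no label update was requested), so the intended command would be:

kubectl label node k3d-the-cluster-worker-0 minio-server=true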

Which OS & Architecture?

Running on macOS Mojave 10.14.6

Which version of k3d?

$ k3d -v
k3d version v1.3.1

Which version of docker?

$ docker version
Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:39 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

[Bug] Pods stay in Pending state forever

I was trying out the latest k3d installed via go get. After k3d create, the default traefik and coredns pods would stay in Pending state forever.

I found the following in kubectl describe for the pods:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  22s (x2 over 22s)  default-scheduler  no nodes available to schedule pods

Also kubectl get nodes would output

No resources found.

In the docker logs of the k3d-k3s_default container I also saw repeated messages of the following:

time="2019-05-08T15:04:00.196973262Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:02.198201809Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:04.199724009Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:06.201047753Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:08.202304466Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:10.203751445Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:12.205281191Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:14.208050701Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"
time="2019-05-08T15:04:16.209164570Z" level=info msg="waiting for node k3d-k3s_default-server: nodes \"k3d-k3s_default-server\" not found"

When I tried the latest release (v1.0.2) everything was working again. I did a quick bisect on the commits and it appears that the changes in 9ac8198 cause this behavior.

[FEATURE] Support PowerShell for `k3d shell`

Right now running k3d shell from a PowerShell session results in this error:

ERROR: selected shell [.] is not supported

I would like to see k3d shell support PowerShell. That is, if I run k3d shell from a PowerShell session, a PowerShell sub-shell session should be created (much like running bash inside bash) with the KUBECONFIG environment variable already set to the correct value and the name of the k3s cluster prepended to the shell's prompt.

Right now as a workaround I run this command:

> PowerShell -NoExit -Command {function prompt {"($([regex]::Match($(k3d get-kubeconfig), '([^/]+)\/kubeconfig\.yaml$').captures.groups[1].value)) PS $($ExecutionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "} [Environment]::SetEnvironmentVariable("KUBECONFIG", (k3d get-kubeconfig), "Process")}

... which does exactly what I want k3d shell in PowerShell to do. It would be nice if k3d shell could have that built in.

[HELP] How to specify API version for k3s cluster?

How can I launch a k3s cluster with an earlier version of the API? Right now it's pretty easy to launch one on 1.14, but if I want to deploy for example a 1.11 cluster, what's the right way to do it?

It looks like the --version flag is for the k3s image, not necessarily for the API version, and I see that the --server-arg flag is also available for passing flags to the k3s server, but I can't find any documentation on how to specify the API version.

Is this possible, or am I missing something?
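For context, the Kubernetes version is bundled with the k3s image itself, so selecting an older k3s image tag via the --version flag (shown elsewhere in this document) is the usual approach; the tag below is only an example, so check the rancher/k3s tags for the Kubernetes release you need:

k3d create --version v0.7.0   # runs whatever Kubernetes release that k3s tag ships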

[BUG] Rancher UI in k3d

What did you do?
Installed Rancher (https://github.com/rancher/rancher). I tried to execute the rancher-agent against the k3d docker containers, but it's not working.

k3d version v1.2.2
➜  hd8-dev-infrastructure git:(master) ✗ (⎈ default:default) docker -v
Docker version 18.09.2, build 6247962
➜  hd8-dev-infrastructure git:(master) ✗ (⎈ default:default) docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                      NAMES
dd9e767692d8        rancher/k3s:v0.5.0   "/bin/k3s agent"         25 minutes ago      Up 25 minutes                                                  k3d-playground-worker-0
3fc9556c7198        rancher/k3s:v0.5.0   "/bin/k3s server --hโ€ฆ"   25 minutes ago      Up 25 minutes       0.0.0.0:6443->6443/tcp                     k3d-playground-server
120cf1f5c1ee        rancher/rancher      "entrypoint.sh"          30 minutes ago      Up 30 minutes       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   vigilant_carson
➜  hd8-dev-infrastructure git:(master) ✗ (⎈ default:default) docker exec -it dd9e767692d8 sh
/ # docker
sh: docker: not found
/ # k3s 
NAME:
   k3s - Kubernetes, but small and simple

USAGE:
   k3s [global options] command [command options] [arguments...]

VERSION:
   v0.5.0 (8c0116dd)

COMMANDS:
     server   Run management server
     agent    Run node agent
     kubectl  Run kubectl
     crictl   Run crictl
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug        Turn on debug logs
   --help, -h     show help
   --version, -v  print the version
/ # 

/ # ps
PID   USER     COMMAND
    1 0        /bin/k3s agent
   19 0        containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
  578 0        containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/a6a750da7213027e8fa6c09531b6286bbef04431e2dec8dac3b3620d9ea11835 -address /run/k3s/conta
  598 0        /pause
  709 0        containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/4433486100b13417671050a011b979cf0ef13c760a92d4723a91689d76503d5e -address /run/k3s/conta
  728 0        {entry} /bin/sh /usr/bin/entry
  760 0        containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/94df3fa08e1f816f2b9a79984936a195b0b169965eb538e89b1d405d1e04f657 -address /run/k3s/conta
  779 0        {entry} /bin/sh /usr/bin/entry
 2892 0        sh
 3088 0        ps

What did you expect to happen?
The Rancher UI gets connected with the k3d clusters.

[screenshot]

Which OS & Architecture?
MacOS

Which version of k3d?
k3d version v1.2.2

Which version of docker?
Docker version 18.09.2, build 6247962

[BUG] k3d not working with "Docker Toolbox on Windows"

What did you do?

Tried installing k3d on Windows 10 WSL bash, with Docker Toolbox on Windows (https://docs.docker.com/toolbox/toolbox_install_windows/).

  • How was the cluster created? And What did you do afterwards?
curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
k3d --version
docker version
k3d create
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info

# Lots of errors
docker logs k3d-k3s-default-server

kubectl get pods --all-namespaces
# Lots of errors
kubectl logs -n kube-system coredns-695688789-sks4b

What did you expect to happen?
No error messages for the last two commands, docker logs k3d-k3s-default-server and kubectl logs -n kube-system coredns-695688789-sks4b.

Screenshots or terminal output

$ curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.

$ k3d --version
k3d version v1.2.2

$ docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:57 2019
  OS/Arch:          linux/amd64
  Experimental:     false

$ k3d create
2019/06/07 18:38:01 Created cluster network with ID f03510dd8524a889aa4161b965cab7b491fd197f718c35f53b75c6e397cfd80a
2019/06/07 18:38:01 Add TLS SAN for 192.168.99.100
2019/06/07 18:38:01 Creating cluster [k3s-default]
2019/06/07 18:38:01 Creating server using docker.io/rancher/k3s:v0.5.0...
2019/06/07 18:38:01 SUCCESS: created cluster [k3s-default]
2019/06/07 18:38:01 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info

$ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:6443
CoreDNS is running at https://192.168.99.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ docker logs k3d-k3s-default-server
time="2019-06-08T01:38:01.295353326Z" level=info msg="Starting k3s v0.5.0 (8c0116dd)"
time="2019-06-08T01:38:03.680457910Z" level=info msg="Running kube-apiserver --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --requestheader-username-headers=X-Remote-User --service-cluster-ip-range=10.43.0.0/16 --advertise-address=127.0.0.1 --insecure-port=0 --bind-address=127.0.0.1 --api-audiences=unknown --watch-cache=false --advertise-port=6445 --service-account-issuer=k3s --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --allow-privileged=true --secure-port=6444 --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --requestheader-allowed-names=kubernetes-proxy --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --authorization-mode=Node,RBAC --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node-1.crt"
E0608 01:38:04.245900       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.246322       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.246434       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.246509       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.246576       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.246694       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0608 01:38:04.323507       1 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0608 01:38:04.331378       1 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0608 01:38:04.394938       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.395128       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.395246       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.395323       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.395400       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.395461       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0608 01:38:04.409054       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg:
time="2019-06-08T01:38:04.436425259Z" level=info msg="Running kube-scheduler --leader-elect=false --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml"
time="2019-06-08T01:38:04.442364922Z" level=info msg="Running kube-controller-manager --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.42.0.0/16 --root-ca-file=/var/lib/rancher/k3s/server/tls/token-ca.crt --leader-elect=false --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port=10252 --bind-address=127.0.0.1 --secure-port=0 --allocate-node-cidrs=true"
W0608 01:38:04.450753       1 authorization.go:47] Authorization is disabled
W0608 01:38:04.450994       1 authentication.go:55] Authentication is disabled
time="2019-06-08T01:38:04.735527600Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
time="2019-06-08T01:38:04.760407582Z" level=info msg="Creating CRD addons.k3s.cattle.io"
time="2019-06-08T01:38:04.780682263Z" level=info msg="Creating CRD helmcharts.k3s.cattle.io"
time="2019-06-08T01:38:04.802830548Z" level=info msg="Waiting for CRD helmcharts.k3s.cattle.io to become available"
E0608 01:38:04.844869       1 autoregister_controller.go:193] v1.k3s.cattle.io failed with : apiservices.apiregistration.k8s.io "v1.k3s.cattle.io" already exists
time="2019-06-08T01:38:05.305204658Z" level=info msg="Done waiting for CRD helmcharts.k3s.cattle.io to become available"time="2019-06-08T01:38:05.305322727Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2019-06-08T01:38:05.807375332Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2019-06-08T01:38:05.810677137Z" level=info msg="Listening on :6443"
time="2019-06-08T01:38:06.164764985Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"time="2019-06-08T01:38:06.164895057Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2019-06-08T01:38:06.167052357Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
time="2019-06-08T01:38:06.167368640Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2019-06-08T01:38:06.167472130Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2019-06-08T01:38:06.375916133Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2019-06-08T01:38:06.376031588Z" level=info msg="Run: k3s kubectl"
time="2019-06-08T01:38:06.376077645Z" level=info msg="k3s is up and running"
time="2019-06-08T01:38:06.701540087Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2019-06-08T01:38:06.701847586Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2019-06-08T01:38:06.703074341Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
time="2019-06-08T01:38:07.706822659Z" level=warning msg="failed to start br_netfilter module"
time="2019-06-08T01:38:07.707656405Z" level=info msg="Connecting to wss://localhost:6443/v1-k3s/connect"
time="2019-06-08T01:38:07.707722569Z" level=info msg="Connecting to proxy" url="wss://localhost:6443/v1-k3s/connect"
time="2019-06-08T01:38:07.755669679Z" level=info msg="Handling backend connection request [k3d-k3s-default-server]"
time="2019-06-08T01:38:07.756518754Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2019-06-08T01:38:07.756792296Z" level=info msg="Running kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --anonymous-auth=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.pem --tls-private-key-file=/var/lib/rancher/k3s/agent/token-node.key --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --cni-bin-dir=/bin --hostname-override=k3d-k3s-default-server --authentication-token-webhook=true --authorization-mode=Webhook --fail-swap-on=false --tls-cert-file=/var/lib/rancher/k3s/agent/token-node.crt --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --address=0.0.0.0 --cpu-cfs-quota=false --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --healthz-bind-address=127.0.0.1 --read-only-port=0 --root-dir=/var/lib/rancher/k3s/agent/kubelet --serialize-image-pulls=false --allow-privileged=true --eviction-hard=imagefs.available<5%,nodefs.available<5% --kubeconfig=/var/lib/rancher/k3s/agent/kubeconfig.yaml --resolv-conf=/tmp/k3s-resolv.conf --cgroup-driver=cgroupfs"
Flag --allow-privileged has been deprecated, will be removed in a future version
W0608 01:38:07.767574       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
W0608 01:38:07.768544       1 server.go:214] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0608 01:38:07.770461       1 proxier.go:480] Failed to read file /lib/modules/4.14.92-boot2docker/modules.builtin with error open /lib/modules/4.14.92-boot2docker/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.783510       1 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.783834       1 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.784084       1 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.784371       1 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.784606       1 proxier.go:493] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0608 01:38:07.791830       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0608 01:38:07.818982       1 kubelet_network_linux.go:68] Failed to ensure rule to drop packet marked by KUBE-MARK-DROP in filter chain KUBE-FIREWALL: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0608 01:38:07.820187       1 cri_stats_provider.go:373] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0608 01:38:07.820296       1 kubelet.go:1250] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
time="2019-06-08T01:38:07.850853889Z" level=info msg="waiting for node k3d-k3s-default-server: nodes \"k3d-k3s-default-server\" not found"
W0608 01:38:07.854786       1 node.go:113] Failed to retrieve node info: nodes "k3d-k3s-default-server" not found
W0608 01:38:07.855122       1 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
E0608 01:38:07.867658       1 conntrack.go:85] failed to set conntrack hashsize to 32768: write /sys/module/nf_conntrack/parameters/hashsize: operation not supported
W0608 01:38:07.874816       1 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
E0608 01:38:07.876158       1 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "k3d-k3s-default-server" not found
W0608 01:38:07.895587       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0608 01:38:07.898766       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.000286       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.100458       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.200814       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.301959       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.403627       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.506072       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.608700       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.710084       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:08.810888       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
W0608 01:38:08.886459       1 controllermanager.go:445] Skipping "csrsigning"
E0608 01:38:08.911297       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.011461       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
W0608 01:38:09.040187       1 shared_informer.go:312] resyncPeriod 53868058708201 is smaller than resyncCheckPeriod 61226469372869 and the informer has already started. Changing it to 61226469372869
E0608 01:38:09.040504       1 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs"]
W0608 01:38:09.041154       1 controllermanager.go:445] Skipping "root-ca-cert-publisher"
E0608 01:38:09.111648       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.211849       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.311996       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.412650       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.513444       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.616702       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.717373       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.817914       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:09.919538       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.021290       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.121547       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.224442       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.324620       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.426642       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.528729       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.629327       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.730373       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.831404       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:10.933432       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.033635       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.133870       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.236064       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.347255       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.447885       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.548154       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.648660       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.748881       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.849332       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
time="2019-06-08T01:38:11.912654090Z" level=info msg="waiting for node k3d-k3s-default-server CIDR not assigned yet"
time="2019-06-08T01:38:11.915771494Z" level=info msg="Updated coredns node hosts entry [172.26.0.2 k3d-k3s-default-server]"
E0608 01:38:11.949732       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
E0608 01:38:11.972793       1 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0608 01:38:12.049990       1 kubelet.go:2207] node "k3d-k3s-default-server" not found
time="2019-06-08T01:38:13.915897489Z" level=info msg="waiting for node k3d-k3s-default-server CIDR not assigned yet"
time="2019-06-08T01:38:15.946193390Z" level=info msg="waiting for node k3d-k3s-default-server CIDR not assigned yet"
E0608 01:38:17.880856       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get cgroup stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get container info for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": unknown container "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy"
time="2019-06-08T01:38:17.947998415Z" level=info msg="waiting for node k3d-k3s-default-server CIDR not assigned yet"
E0608 01:38:19.363922       1 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons"]
W0608 01:38:19.419621       1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-k3s-default-server" does not exist
W0608 01:38:19.457953       1 node_lifecycle_controller.go:833] Missing timestamp for Node k3d-k3s-default-server. Assuming now as a timestamp.
E0608 01:38:19.962047       1 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0608 01:38:20.089132       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0608 01:38:20.156695       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0608 01:38:27.893876       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get cgroup stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get container info for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": unknown container "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy"
E0608 01:38:34.194231       1 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0608 01:38:37.905681       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get cgroup stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get container info for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": unknown container "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy"
E0608 01:38:47.911644       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get cgroup stats for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": failed to get container info for "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy": unknown container "/docker/b65fddd6296529c9414952b929986d92093e67ffcb0c9173cce5766cfd88ae22/kube-proxy"

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-695688789-568cf      1/1     Running   0          38s
kube-system   helm-install-traefik-ccbmb   1/1     Running   0          37s

$ kubectl logs -n kube-system coredns-695688789-568cf
.:53
2019-06-08T01:38:38.615Z [INFO] CoreDNS-1.3.0
2019-06-08T01:38:38.615Z [INFO] linux/amd64, go1.11.4, c8f0e94
CoreDNS-1.3.0
linux/amd64, go1.11.4, c8f0e94
2019-06-08T01:38:38.615Z [INFO] plugin/reload: Running configuration MD5 = ef347efee19aa82f09972f89f92da1cf
E0608 01:39:03.615524       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.43.0.1:443: i/o timeout
E0608 01:39:03.617480       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.43.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.43.0.1:443: i/o timeout
E0608 01:39:03.617647       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.43.0.1:443: i/o timeout

Which OS & Architecture?

  • Windows 10 amd64

Which version of k3d?

  • k3d version v1.2.2

Which version of docker?

$ docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:57 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Additional note

  • Previously filed a bug for the same issue (#68), but it is still reproducible with k3d 1.2.2, which includes the fix (#71).
  • On my WSL bash shell, I had to create a soft link named docker-machine that points to docker-machine.exe (sketched below).
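
A minimal sketch of that soft link, assuming docker-machine.exe is already on the WSL PATH (the resolved target path will differ per system):

  $ sudo ln -s "$(command -v docker-machine.exe)" /usr/local/bin/docker-machine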

[Question] Install OpenEBS

Hi! This looks cool and I would like to use it for testing and development instead of creating and deleting VMs with my cloud provider all the time. I was wondering, can I use OpenEBS with a cluster created this way? I need it to test how things would work in production using persistent volumes. I am using the OpenEBS Local PV provisioner, which uses a host path, so I don't need to attach disks (out of curiosity, would attaching disks be possible as well?). Thanks in advance!
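
A hedged sketch of one way to try this with the existing --volume flag; the host path below matches the OpenEBS hostpath default and is an assumption, not something verified on k3d:

  $ k3d create --name openebs-test --workers 2 --volume /var/openebs/local:/var/openebs/local

The node-specifier feature request for volumes further down in this list tracks mounting a separate host path per node.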

[Help] Can't connect to k3s from outside network

After changing my ~/.kube/config to use the k3s-generated certificate, I ran kubectl get pod and received this error message:

Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 172.18.0.2, not 192.168.1.200
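
A hedged workaround sketch: recreate the cluster and pass the external IP to k3s as an additional TLS SAN (mirroring the -x --tls-san usage in a bug report further down this list), so the generated certificate also covers 192.168.1.200:

  $ k3d create --name k3s-default -x --tls-san="192.168.1.200"
  $ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"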

[Bug] k3d start doesn't bring back the server container

After issuing the command k3d start --name <custom name> following a reboot (in the previous session I had not stopped the cluster), only my worker node containers come up, but not the server node.
The output shows that the container was started but it actually is not present.

2019/05/18 15:59:20 Starting cluster [prometheus]
2019/05/18 15:59:20 ...Starting server
2019/05/18 16:00:01 ...Starting 5 workers
2019/05/18 16:00:42 SUCCESS: Started cluster [prometheus]

Here is the docker ps output:
[screenshot: docker ps output]

And of course you can't get cluster information since the master is absent:
$ export KUBECONFIG="$(k3d get-kubeconfig --name='prometheus')"
2019/05/18 16:08:45 No server container for cluster prometheus

[FEATURE]: One kubeconfig.yaml for more than one cluster

Currently, if I create two k3d clusters with different names, k3d creates two kubeconfig.yaml files under two different directories.

I want to have one kubeconfig.yaml file and use kubectx and kubens to switch between them easily. For example:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate-cluster-dev>
    server: https://localhost:6551
  name: cluster-dev
- cluster:
    certificate-authority-data: <certificate-cluster-prod>
    server: https://localhost:6550
  name: cluster-prod
contexts:
- context:
    cluster: cluster-dev
    namespace: development
    user: user-dev
  name: default-dev
- context:
    cluster: cluster-prod
    namespace: production
    user: user-prod
  name: default-prod
current-context: default-prod
kind: Config
preferences: {}
users:
- name: user-dev
  user:
    password: <user-dev password>
    username: admin
- name: user-prod
  user:
    password: <user-prod password>
    username: admin

Implementation perhaps with:
k3d create -a1 6550 -n1 cluster-prod -w1 5 -a2 6551 -n2 cluster-dev -w2 1 --image rancher/k3s:v0.7.0-rc4
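
Until something like this exists, a minimal workaround sketch (context names taken from the example above; back up any existing ~/.kube/config first) is to point KUBECONFIG at both generated files and flatten them into one:

  $ export KUBECONFIG="$(k3d get-kubeconfig --name='cluster-prod'):$(k3d get-kubeconfig --name='cluster-dev')"
  $ kubectl config view --flatten > /tmp/merged-kubeconfig && mv /tmp/merged-kubeconfig ~/.kube/config
  $ kubectx default-dev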

default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

A brand new cluster created with v1.0.0 failed with the $subject error.

bash-3.2$ k3d create -n clstr1 --workers=1
2019/04/28 17:32:31 Created cluster network with ID e7d66d1ce94276e4bf596cb947288b898b0273608bcab25f014d81c33ebf4bad
2019/04/28 17:32:31 Creating cluster [clstr1]
2019/04/28 17:32:31 Creating server using docker.io/rancher/k3s:v0.4.0...
2019/04/28 17:32:36 Booting 1 workers for cluster clstr1
Created worker with ID cf91e20377cc3832488468e632cb3c64986a5dba080150a5904c9a3d3589ac9a
2019/04/28 17:32:41 SUCCESS: created cluster [clstr1]
2019/04/28 17:32:41 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='clstr1')"
kubectl cluster-info
bash-3.2$ export KUBECONFIG="$(k3d get-kubeconfig --name='clstr1')"
bash-3.2$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
bash-3.2$ k get nodes
NAME           STATUS    ROLES     AGE       VERSION
cf91e20377cc   Ready     <none>    8s        v1.14.1-k3s.4
d364f0d492c5   Ready     <none>    8s        v1.14.1-k3s.4
bash-3.2$ k get po --all-namespaces
NAMESPACE     NAME                         READY     STATUS    RESTARTS   AGE
kube-system   coredns-857cdbd8b4-6kc6c     0/1       Pending   0          8s
kube-system   helm-install-traefik-bmdg7   0/1       Pending   0          8s
bash-3.2$ k get po --all-namespaces
NAMESPACE     NAME                         READY     STATUS    RESTARTS   AGE
kube-system   coredns-857cdbd8b4-6kc6c     0/1       Pending   0          9s
kube-system   helm-install-traefik-bmdg7   0/1       Pending   0          9s
bash-3.2$ k describe po coredns-857cdbd8b4-6kc6c -n kube-system
Name:               coredns-857cdbd8b4-6kc6c
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=857cdbd8b4
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/coredns-857cdbd8b4
Containers:
  coredns:
    Image:       coredns/coredns:1.3.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-nzxcf (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-nzxcf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-nzxcf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  8s (x4 over 1m)  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
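
As a quick diagnostic (plain kubectl, nothing k3d-specific), the taints that keep the pods Pending can be listed with:

  $ kubectl describe nodes | grep -i taints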

[ENHANCEMENT] Ease using a private registry

Currently, enabling pulls from a private registry from within k3d can be achieved e.g. by following this example: https://rancher-users.slack.com/archives/CHM1EB3A7/p1561743458058500 posted by Nicolas Levée in our Slack channel.
Here's a copy of that post:

I don't know if this can help, but for private registries (like GCR), here is my solution:
I start k3d with this create command:
k3d create --volumes ${K3D_BASE_PATH}/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
The content of config.toml.tmpl :

path = "{{ .NodeConfig.Containerd.Opt }}"
[plugins.cri]
stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
stream_server_port = "10010"
{{- if .IsRunningInUserNS }}
disable_cgroup = true
disable_apparmor = true
restrict_oom_score_adj = true
{{ end -}}
{{- if .NodeConfig.AgentConfig.PauseImage }}
sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
{{ end -}}
{{- if not .NodeConfig.NoFlannel }}
  [plugins.cri.cni]
    bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
    conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
{{ end -}}

[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."docker.io"]
    endpoint = ["https://registry-1.docker.io"]
  [plugins.cri.registry.mirrors."eu.gcr.io"]
    endpoint = ["https://eu.gcr.io"]
[plugins.cri.registry.auths]
  [plugins.cri.registry.auths."https://eu.gcr.io"]
    username = "oauth2accesstoken"
    password = "....."

The first part is from the k3s source code (https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go). The second part adds some registries to the config. My images from eu.gcr.io/project/image_name can be pulled without any problem.

I'd like to ease the use of this, e.g. by external configuration via environment variables, a config file, or via additional flags for the create command.

[BUG] Importing local images leads to exec error

What did you do?

  • How was the cluster created?

    • k3d create --name test --api-port 8443 --workers 1
  • What did you do afterwards?

    • k3d import-images --name test containous/whoami:v1.0.1
    • k3d import-images --name test coredns/coredns:1.2.6

What did you expect to happen?

Both images get imported correctly.

Screenshots or terminal output

2019/08/08 20:37:27 INFO: Saving images [containous/whoami:v1.0.1] from local docker daemon...
2019/08/08 20:37:28 INFO: saved images to shared docker volume
2019/08/08 20:37:28 INFO: Importing images [containous/whoami:v1.0.1] in container [k3d-test-server]
2019/08/08 20:37:28 INFO: Importing images [containous/whoami:v1.0.1] in container [k3d-test-worker-0]
2019/08/08 20:37:28 INFO: Successfully imported images [containous/whoami:v1.0.1] in all nodes of cluster [test]
2019/08/08 20:37:28 INFO: Cleaning up tarball
2019/08/08 20:37:29 INFO: deleted tarball
2019/08/08 20:37:29 INFO: ...Done

2019/08/08 20:37:29 INFO: Saving images [coredns/coredns:1.2.6] from local docker daemon...
2019/08/08 20:37:29 INFO: saved images to shared docker volume
2019/08/08 20:37:29 INFO: Importing images [coredns/coredns:1.2.6] in container [k3d-test-server]
2019/08/08 20:37:30 INFO: Importing images [coredns/coredns:1.2.6] in container [k3d-test-worker-0]
2019/08/08 20:37:31 ERROR: seems like something went wrong using `ctr image import` in container [k3d-test-worker-0]. Full output below:
Error: Exec command f1e21aefd1f08f4482b839e1bbc1fc03ec0b16ec98a68325cc1e2402eb0f1bb9 is already running

Which OS & Architecture?

  • Semaphore box: Ubuntu 16.04, docker 19.02
    1 physical CPU (2 vCPUs), 4GB RAM, 10GB disk space

Which version of k3d?

k3d version v1.3.1

Which version of docker?

19.02 (semaphore)

Additional Notes

Because we have been testing this in CI, we have seen that the specific image that fails does not seem to be constant: sometimes the import succeeds for 3 images and then fails, sometimes it fails on the first image.

[Enhancement] Ingress port mapping

Using k3s and docker-compose, I can set a port binding for a node and then create an ingress that uses it to route into a pod. Let's say I bind port 8081:80, where port 80 is used by an nginx pod; I can then use localhost to reach nginx:

http://localhost:8081

How can this be achieved using k3d?
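
A hedged sketch using the --publish flag that k3d create already accepts (the same flag is referenced in the volume feature request further down); the port numbers follow the example above:

  $ k3d create --publish 8081:80 --workers 2
  # host port 8081 is forwarded to port 80 on the k3s node container,
  # where the bundled Traefik ingress controller can route it to the nginx pod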

[BUG] pod can't find its own service name

What did you do?

  • How was the cluster created?

    • k3d create -n "d1" --workers 7 -x --tls-san="192.168.1.200"
  • What did you do afterwards?

    • deploy a helm chart: helm install --name vato-presto stable/presto

What did you expect to happen?

The chart deploy failed; the pod can't find its own service name (hostname).

I tried to attach to the coordinator pod and curl/ping its own service name:

curl -v http://vato-presto:8080/ - ping vato-presto

but it is not working.

I tried deploying to other k8s clusters and it worked fine (GKE, Minikube, or a cluster created by kubespray).

I think the pod can't find a route to its service name.

Screenshots or terminal output

If applicable, add screenshots or terminal output (code block) to help explain your problem.

Which OS & Architecture?

  • Centos 6

Which version of k3d?

  • k3d version v1.1.0

Which version of docker?

Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
Go version: go1.10.3
Git commit: b2f74b2/1.13.1
Built: Wed May 1 14:55:20 2019
OS/Arch: linux/amd64
Experimental: false

[FEATURE] Expose ports AFTER k3d creation

I had the problem of how to expose the tiller port AFTER k3d creation, so that I could use a local helm client and set the HELM_HOST variable to HELM_HOST=localhost:44134.

Here is my solution: forward the internal port through a docker container running socat.

Here is my procedure:

  • Get the IP address of the k3d-default-server
$ kubectl get nodes -o wide   # the INTERNAL-IP column shows the server address
  • Get the k3d network
$ sudo docker network ls
  • Run a detached docker socat container
 $ docker run \
    -d \
    -p <local-port>:<local-port> \
    --name=<k3d-host>-<k3d-port>-link \
    --network <k3d-namespace> \
    alpine/socat \
      TCP4-LISTEN:<local-port>,fork,reuseaddr \
      TCP4:<k3d-host>:<k3d-port>

Example:

 $ docker run \
    -d \
    -p 44134:44134 \
    --name=k3d-default-server-44134-link \
    --network k3d-default \
    alpine/socat \
      TCP4-LISTEN:44134,fork,reuseaddr \
      TCP4:172.19.0.2:44134

Questions:

  1. Is there a way to implement this as a command in k3d?
$ k3d expose -l <local-port> -h <k3d-host> -r <k3d-port> -n <k3d-namespace|default=k3d-default>
  2. Maybe add this to the FAQ?

[Bug] container name already in use after failed image pull or port conflict

If a cluster creation fails due to a port conflict or a failed image pull, a subsequent run (with fixed parameters, but unchanged name) will result in a container name conflict, because the container was already created (but not started).

TODO: if starting the container fails due to any issue, but it was already created, we need to delete it before erroring out.
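
Until that lands, a hedged workaround sketch (the cluster name mycluster is a placeholder): delete the half-created cluster, or remove the leftover containers directly, before retrying:

  $ k3d delete --name mycluster
  $ docker rm -f $(docker ps -aq --filter "name=k3d-mycluster")   # only needed if anything is still left over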

[Help] Kubectl complains x.509 certificate is out of date

$ k3d create
2019/05/13 10:49:31 Created cluster network with ID 074b0dc5c988df2b9d3c1d4199496c852d3c136a76803673301c0e8979b307e6
2019/05/13 10:49:31 Creating cluster [k3s-default]
2019/05/13 10:49:31 Creating server using docker.io/rancher/k3s:v0.5.0...
2019/05/13 10:49:32 SUCCESS: created cluster [k3s-default]
2019/05/13 10:49:32 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
$ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
$ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: x509: certificate has expired or is not yet valid
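
"Not yet valid" often points to clock skew between the host and the Docker VM (e.g. after the machine slept). A hedged workaround sketch, assuming the clocks are back in sync, is to recreate the cluster so fresh certificates are generated:

  $ k3d delete --name k3s-default
  $ k3d create
  $ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"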

Multiple worker nodes

Using k3s on its own with docker-compose, I can use scale to set the number of nodes that I need (most of the testing I want to do is more realistic with more than a single node).

How can this be achieved using k3d?
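
For reference, a minimal sketch using the --workers flag (the same flag appears in other issues above):

  $ k3d create --name multi --workers 3   # one server container plus three worker containers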

[FEATURE] add node-specifier for volumes

Is your feature request related to a problem or a Pull Request?

  • multiple volume mounts from #46 combined with node-specifiers from #43

Scope of your request

  • Enhanced functionality for the --volume flag of k3d create

Describe the solution you'd like

  • Enable using a syntax for volumes similar to the one we use for ports (--publish)
    • e.g. k3d create --volume some-vol:/tmp/some-path@k3d-test-worker-0 would only mount the volume to the specified node k3d-test-worker-0

Describe alternatives you've considered

No workaround to achieve this exact solution.

Additional context

Use-Case by Vito Botta (via Slack: https://rancher-users.slack.com/archives/CHM1EB3A7/p1562099541076100):

Hi! At the moment I am using only the master for my testing, no workers. However, I would like to use a master + 2 workers configuration to test some replication stuff. The problem is that at the moment I am using (as previously mentioned) OpenEBS for the storage, so I create a cluster with --volume openebs:/mnt/openebs to mount a volume which I use with the OpenEBS hostpath/local PV. This works great with the single master, but how can I make it work if I add two workers? I would need each node to have its own volume... is this possible somehow? Thanks!

[RUNTIME ISSUE] Failed to start

What did you do?
Download 1.3.0/1.3.1 and create a cluster

  • How was the cluster created?

    • k3d create
  • What did you do afterwards?
    • ran docker logs k3d-k3s-default-server

What did you expect to happen?
A running k3s cluster.

Screenshots or terminal output

docker logs k3d-k3s-default-server                                                                                                                                                                     (ci/sys-ci-gitlab-runner)
time="2019-09-18T08:48:21.697271954Z" level=info msg="Starting k3s v0.7.0 (61bdd852)"
time="2019-09-18T08:48:21.955340430Z" level=info msg="Running kube-apiserver --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --secure-port=6444 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key --service-cluster-ip-range=10.43.0.0/16 --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --allow-privileged=true --authorization-mode=Node,RBAC --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --advertise-port=6443 --insecure-port=0 --service-account-issuer=k3s --api-audiences=unknown --requestheader-allowed-names=system:auth-proxy --requestheader-username-headers=X-Remote-User --bind-address=127.0.0.1 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key"
E0918 08:48:22.309069       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.309339       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.309383       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.309415       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.309439       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.309461       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0918 08:48:22.370905       1 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0918 08:48:22.389340       1 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0918 08:48:22.414819       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.414838       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.414923       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.415255       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.415280       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0918 08:48:22.415297       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
time="2019-09-18T08:48:22.418886151Z" level=info msg="Running kube-scheduler --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false"
time="2019-09-18T08:48:22.419346461Z" level=info msg="Running kube-controller-manager --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --allocate-node-cidrs=true --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.42.0.0/16 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --bind-address=127.0.0.1 --leader-elect=false --port=10252 --use-service-account-credentials=true"
W0918 08:48:22.430838       1 authorization.go:47] Authorization is disabled
W0918 08:48:22.430851       1 authentication.go:55] Authentication is disabled
time="2019-09-18T08:48:22.441797147Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
E0918 08:48:22.452869       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0918 08:48:22.452969       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0918 08:48:22.453062       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0918 08:48:22.453129       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0918 08:48:22.453194       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
time="2019-09-18T08:48:22.459292497Z" level=info msg="Creating CRD addons.k3s.cattle.io"
E0918 08:48:22.459919       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0918 08:48:22.459998       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0918 08:48:22.460070       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0918 08:48:22.460142       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0918 08:48:22.460529       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
time="2019-09-18T08:48:22.461472784Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2019-09-18T08:48:22.464093298Z" level=info msg="Waiting for CRD listenerconfigs.k3s.cattle.io to become available"
E0918 08:48:22.561369       1 controller.go:147] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
E0918 08:48:22.564493       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.24.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
E0918 08:48:22.641945       1 autoregister_controller.go:193] v1.helm.cattle.io failed with : apiservices.apiregistration.k8s.io "v1.helm.cattle.io" already exists
time="2019-09-18T08:48:22.970772223Z" level=info msg="Done waiting for CRD listenerconfigs.k3s.cattle.io to become available"
time="2019-09-18T08:48:22.970854786Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
E0918 08:48:23.461213       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0918 08:48:23.461453       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0918 08:48:23.461667       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0918 08:48:23.462224       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0918 08:48:23.465203       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0918 08:48:23.468453       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0918 08:48:23.471003       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0918 08:48:23.471003       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0918 08:48:23.474490       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
time="2019-09-18T08:48:23.476536318Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2019-09-18T08:48:23.476721545Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
E0918 08:48:23.479161       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
time="2019-09-18T08:48:23.985530573Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2019-09-18T08:48:24.024081532Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
time="2019-09-18T08:48:24.025276607Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2019-09-18T08:48:24.025982210Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2019-09-18T08:48:24.026349564Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
E0918 08:48:24.027020       1 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
E0918 08:48:24.027150       1 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
E0918 08:48:24.027405       1 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
E0918 08:48:24.027576       1 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
E0918 08:48:24.027696       1 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.027804       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.027980       1 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
time="2019-09-18T08:48:24.041342577Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
time="2019-09-18T08:48:24.041633117Z" level=info msg="Listening on :6443"
E0918 08:48:24.068296       1 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
E0918 08:48:24.068409       1 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
E0918 08:48:24.068590       1 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
E0918 08:48:24.068796       1 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
E0918 08:48:24.069075       1 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.069254       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.069476       1 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
time="2019-09-18T08:48:24.572455561Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2019-09-18T08:48:24.673385989Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
time="2019-09-18T08:48:24.673431239Z" level=info msg="To join node to cluster: k3s agent -s https://172.24.0.2:6443 -t ${NODE_TOKEN}"
time="2019-09-18T08:48:24.674062076Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
E0918 08:48:24.685763       1 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
E0918 08:48:24.685849       1 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
E0918 08:48:24.685951       1 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
E0918 08:48:24.686071       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
E0918 08:48:24.686131       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.686186       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.686273       1 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
E0918 08:48:24.686582       1 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
E0918 08:48:24.686642       1 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
E0918 08:48:24.686740       1 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
E0918 08:48:24.686851       1 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
E0918 08:48:24.686913       1 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.686966       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.687190       1 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
E0918 08:48:24.687398       1 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
E0918 08:48:24.687448       1 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
E0918 08:48:24.687540       1 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
E0918 08:48:24.687627       1 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
E0918 08:48:24.687687       1 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.687741       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.687830       1 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
E0918 08:48:24.688140       1 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
E0918 08:48:24.688258       1 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
E0918 08:48:24.688424       1 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
E0918 08:48:24.688665       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
E0918 08:48:24.689028       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.689317       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.689420       1 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
E0918 08:48:24.689654       1 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
E0918 08:48:24.689697       1 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
E0918 08:48:24.689807       1 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
E0918 08:48:24.689927       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
E0918 08:48:24.689990       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.690038       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.690131       1 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
E0918 08:48:24.690399       1 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
E0918 08:48:24.690451       1 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
E0918 08:48:24.690530       1 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
E0918 08:48:24.690660       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
E0918 08:48:24.690719       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
E0918 08:48:24.690847       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
E0918 08:48:24.691051       1 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
time="2019-09-18T08:48:24.773469483Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2019-09-18T08:48:24.773485933Z" level=info msg="Run: k3s kubectl"
time="2019-09-18T08:48:24.773491768Z" level=info msg="k3s is up and running"
time="2019-09-18T08:48:24.897788812Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2019-09-18T08:48:24.898331142Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2019-09-18T08:48:24.898800913Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
time="2019-09-18T08:48:25.694307220Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
W0918 08:48:25.849273       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.24.0.2]
time="2019-09-18T08:48:25.900457786Z" level=info msg="module br_netfilter was already loaded"
time="2019-09-18T08:48:25.900603127Z" level=info msg="module overlay was already loaded"
time="2019-09-18T08:48:25.900621607Z" level=info msg="module nf_conntrack was already loaded"
time="2019-09-18T08:48:25.908623134Z" level=info msg="Connecting to proxy" url="wss://172.24.0.2:6443/v1-k3s/connect"
time="2019-09-18T08:48:25.909828206Z" level=info msg="Handling backend connection request [k3d-k3s-default-server]"
time="2019-09-18T08:48:25.910693024Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2019-09-18T08:48:25.910772236Z" level=info msg="Running kubelet --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --cni-bin-dir=/bin --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --read-only-port=0 --cgroup-driver=cgroupfs --authentication-token-webhook=true --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --healthz-bind-address=127.0.0.1 --cluster-domain=cluster.local --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --eviction-hard=imagefs.available<5%,nodefs.available<5% --authorization-mode=Webhook --hostname-override=k3d-k3s-default-server --node-labels=node-role.kubernetes.io/master=true --resolv-conf=/tmp/k3s-resolv.conf --anonymous-auth=false --allow-privileged=true --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --root-dir=/var/lib/rancher/k3s/agent/kubelet --serialize-image-pulls=false --address=0.0.0.0 --fail-swap-on=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cluster-dns=10.43.0.10 --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key --cpu-cfs-quota=false"
Flag --allow-privileged has been deprecated, will be removed in a future version
W0918 08:48:25.910977       1 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0918 08:48:25.928285       1 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
W0918 08:48:25.928701       1 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0918 08:48:25.928948       1 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0918 08:48:25.928967       1 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
time="2019-09-18T08:48:25.930629904Z" level=info msg="waiting for node k3d-k3s-default-server: nodes \"k3d-k3s-default-server\" not found"
W0918 08:48:25.934972       1 fs.go:218] stat failed on /dev/mapper/vg-var with error: no such file or directory
W0918 08:48:25.937462       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
W0918 08:48:25.941424       1 proxier.go:485] Failed to read file /lib/modules/5.0.0-27-generic/modules.builtin with error open /lib/modules/5.0.0-27-generic/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.944317       1 proxier.go:498] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.944740       1 proxier.go:498] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.945133       1 proxier.go:498] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.945464       1 proxier.go:498] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.945854       1 proxier.go:498] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0918 08:48:25.953622       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0918 08:48:25.959187       1 node.go:113] Failed to retrieve node info: nodes "k3d-k3s-default-server" not found
W0918 08:48:25.959378       1 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0918 08:48:25.962637       1 fs.go:556] stat failed on /dev/mapper/vg-var with error: no such file or directory
E0918 08:48:25.962809       1 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": failed to get device for dir "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": could not find device with major: 0, minor: 51 in cached partitions map.
E0918 08:48:25.962904       1 kubelet.go:1250] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
E0918 08:48:25.993992       1 controller.go:194] failed to get node "k3d-k3s-default-server" when trying to set owner ref to the node lease: nodes "k3d-k3s-default-server" not found
W0918 08:48:26.013972       1 fs.go:556] stat failed on /dev/mapper/vg-var with error: no such file or directory
F0918 08:48:26.014008       1 kubelet.go:1327] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/rancher/k3s/agent/kubelet": could not find device with major: 0, minor: 51 in cached partitions map
goroutine 7561 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog.stacks(0xc0008ad800, 0xc00345b800, 0xf6, 0xfd)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:828 +0xb1
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).output(0x606aba0, 0xc000000003, 0xc001884af0, 0x5d64655, 0xa, 0x52f, 0x0)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:779 +0x2d9
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).printf(0x606aba0, 0xc000000003, 0x3476532, 0x23, 0xc003c83d38, 0x1, 0x1)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:678 +0x14e
github.com/rancher/k3s/vendor/k8s.io/klog.Fatalf(...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:1209
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc004634400)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1327 +0x330
sync.(*Once).Do(0xc004634b28, 0xc00502de70)
	/usr/local/go/src/sync/once.go:44 +0xb3
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc004634400)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2153 +0x3eb
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc003fd3090)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc003fd3090, 0x12a05f200, 0x0, 0x1, 0xc000494060)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc003fd3090, 0x12a05f200, 0xc000494060)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1376 +0x14b
E0918 08:48:26.083617       1 kubelet.go:2207] node "k3d-k3s-default-server" not found

Which OS & Architecture?
Ubuntu 18.04.3 LTS x86_64

Which version of k3d?
1.3.0 + 1.3.1

Which version of docker?

docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:29:11 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:27:45 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

regards,
strowi

[Feature] Persistence and Always-On

  • Add functionality to store k3s server data on the local disk so that you can e.g. create a new cluster from existing data, e.g. via new flags (see the sketch after this list):
    • --persist <dir> to bind a local volume into the container where k3s stores its state
    • --state-from <dir> to create a new cluster using the data from another cluster
  • Add functionality to auto-start a cluster (e.g. after reboots), e.g. via the flag:
    • --always-on to set docker's --restart=unless-stopped flag
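
Until such flags exist, a rough approximation is possible with what k3d and Docker already provide. A hedged sketch (k3d v5 flag syntax; the cluster name and host path are just examples, and mounting the k3s data directory only approximates full persistence):

# persist k3s state by mounting a host directory into the server node container
k3d cluster create mycluster --volume /home/me/k3d-data:/var/lib/rancher/k3s@server:0

# survive host reboots by switching the node containers to Docker's restart policy
docker update --restart=unless-stopped k3d-mycluster-server-0 k3d-mycluster-serverlb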

[BUG] Create Cluster Fails with exit status 1

What did you do?
Upgraded from v1.1.0 to the latest, v1.2.2

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
k3d v1.2.2 is available. Changing from version v1.1.0.
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.
  • How was the cluster created?
k3d create --publish 8082:30080@k3d-k3s-default-worker-0 --workers 2
2019/06/13 11:03:54 Created cluster network with ID d803062f08bf81694119440c25963082b51abd16997eafdf4ab2477a703a6022
exit status 1
  • What did you do afterwards?
    • k3d commands?
    • docker commands?
    • OS operations (e.g. shutdown/reboot)?

What did you expect to happen?
k3d create should have created a cluster with 2 workers and mapped NodePort 30080 to port 8082 on localhost.
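
For reference, a quick way to verify the intended mapping once creation succeeds (a hedged sketch; the container name matches the command above):

# the published port should show up on the worker container
docker ps --filter name=k3d-k3s-default-worker-0 --format '{{.Names}}: {{.Ports}}'
# expected to include 0.0.0.0:8082->30080/tcp; any Service using nodePort 30080
# is then reachable from the host:
curl http://localhost:8082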

Screenshots or terminal output

See above

Which OS & Architecture?

  • MacOS
System Software Overview:

      System Version: macOS 10.14.4 (18E226)
      Kernel Version: Darwin 18.5.0
      64bit

Which version of k3d?

k3d version v1.2.2

Which version of docker?

Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:39 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

[FEATURE] Add access to traefik dashboard

Is your feature request related to a problem or a Pull Request?

N/A

Scope of your request

I saw that Traefik is used as the ingress controller, which is great. It would be great to get access to its dashboard.

Describe the solution you'd like

Expose port 8080 to get access to Traefik's web UI (dashboard).
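
As a stop-gap until a dedicated flag exists, a port-forward usually works. A hedged sketch, assuming the bundled Traefik (1.x) deployment in kube-system with its dashboard enabled (it listens on the pod's port 8080):

kubectl -n kube-system port-forward deployment/traefik 8080:8080
# then open http://localhost:8080/dashboard/ in a browser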

Describe alternatives you've considered

N/A

[BUG] k3d not working with "Docker Toolbox on Windows"

What did you do?
Tried installing k3d on Windows 10 with Docker Toolbox on Windows (https://docs.docker.com/toolbox/toolbox_install_windows/).

  • How was the cluster created?

    • k3d create
  • What did you do afterwards?

    • Inspected whether the kube-system pods (e.g. coredns) were running successfully

What did you expect to happen?
No errors.

Screenshots or terminal output

Which OS & Architecture?

  • Windows, amd64

Which version of k3d?

  • k3d version v1.2.1

Which version of docker?

$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:57 2019
  OS/Arch:          linux/amd64
  Experimental:     false

[Bug] Rollback will delete existing cluster

The rollback functionality introduced via PR #31 will delete an existing cluster if you accidentally create a new cluster with the same name. We should prevent that, either by asking for confirmation before deleting the existing cluster, or by skipping the rollback entirely once we've confirmed that a healthy cluster with that name is already running and just printing a message.

[FEATURE] add taints and tolerations at creation time

k3d version: v1.2.0-beta.2
platform: WSL

Is there any method to explicitly add taints to nodes during cluster creation?

Note: I understand I can add a taint 'after the fact':

kubectl taint nodes k3d-k3s-default-server key=value:NoSchedule

I deployed a workload and noticed that pods landed on the 'server' node, which in most cases I don't want (and could control with a corresponding toleration if I did). See the sketch after the output below.

kubectl describe nodes

Name:               k3d-k3s-default-server
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k3d-k3s-default-server
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"b2:3d:8e:bd:1c:01"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.21.0.2
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 15 May 2019 21:36:42 +0100
Taints:             <none>
...
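
For reference, newer k3d releases can pass a taint through to k3s at creation time. A hedged sketch using k3d v5 syntax and k3s's --node-taint flag (the cluster name and taint are just examples):

k3d cluster create tainted-demo \
  --k3s-arg "--node-taint=key=value:NoSchedule@server:0"

Workloads without a matching toleration should then stay off the server node.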

[HELP] 1 node(s) didn't have free ports for the requested pod ports

I'm trying to install Istio in a k3d cluster, but one of the Istio components (the service load balancer) is failing to start with the error below (see also the note after the pod list).

Warning FailedScheduling 42s (x6 over 2m59s) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fb9f8c5c7-mr7vb                  1/1     Running     0          6m24s
istio-citadel-5cf47dbf7c-jxc4w            1/1     Running     0          6m24s
istio-galley-7898b587db-8jrpq             1/1     Running     0          6m25s
istio-ingressgateway-7c6f8fd795-wl6fn     1/1     Running     0          6m24s
istio-init-crd-10-8qh2j                   0/1     Completed   0          26m
istio-init-crd-11-j7glh                   0/1     Completed   0          26m
istio-init-crd-12-gvsg6                   0/1     Completed   0          26m
istio-nodeagent-clvkf                     1/1     Running     0          6m25s
istio-pilot-5c4b6f576b-2b5zf              2/2     Running     0          6m24s
istio-policy-769664fcf7-hj6bn             2/2     Running     3          6m24s
istio-sidecar-injector-677bd5ccc5-wj9zb   1/1     Running     0          6m24s
istio-telemetry-577c6f5b8c-j9dxn          2/2     Running     3          6m24s
istio-tracing-5d8f57c8ff-t7mm4            1/1     Running     0          6m24s
kiali-7d749f9dcb-w7qxr                    1/1     Running     0          6m24s
prometheus-776fdf7479-gznbs               1/1     Running     0          6m24s
svclb-istio-ingressgateway-4znth          0/9     Pending     0          6m25s

Please help me fix this issue.
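
For context (hedged): the pending svclb-istio-ingressgateway pod is k3s's built-in service load balancer trying to claim host ports (80 and 443 among them) that Traefik's own svclb pod already holds on the single node, hence the "didn't have free ports" event. A common workaround is to create the cluster without the bundled Traefik; a sketch in k3d v5 syntax (older releases passed the argument via --server-arg / --no-deploy instead):

k3d cluster create istio-demo \
  --k3s-arg "--disable=traefik@server:0"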

[FEATURE] Expose cluster methods for external usage

Scope of your request

I'd like to create a cluster from my Go code. As of now, the only way to do that is to pass a custom cli.Context to the commands.

Describe the solution you'd like

Maybe refactor the cli package and move all non-CLI-related code into a separate package.

Describe alternatives you've considered

Creating a custom cli.Context is rather awkward, and I can't access the outputs from code.
