kind

Please see our documentation for more in-depth installation and usage details.

kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

If you have go 1.16+ and docker, podman or nerdctl installed, go install sigs.k8s.io/kind@v0.22.0 && kind create cluster is all you need!

kind consists of:

  • Go packages implementing cluster creation, image build, etc.
  • A command line interface (kind) built on these packages.
  • Docker image(s) written to run systemd, Kubernetes, etc.
  • kubetest integration also built on these packages (WIP)

kind bootstraps each "node" with kubeadm. For more details see the design documentation.

NOTE: kind is still a work in progress, see the 1.0 roadmap.

Installation and usage

For a complete install guide see the documentation here.

You can install kind with go install sigs.k8s.io/kind@v0.22.0.

NOTE: please use the latest go to do this. KIND is developed with the latest stable go, see .go-version for the exact version we're using.

This will put kind in $(go env GOPATH)/bin. If you encounter the error kind: command not found after installation then you may need to either add that directory to your $PATH as shown here, or do a manual installation by cloning the repo and running make build from the repository.
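
For example, a one-line PATH fix (assuming the default GOPATH layout and a bash-like shell):

# append the Go bin directory to PATH for the current shell;
# add this line to ~/.bashrc or similar to make it permanent
export PATH="$PATH:$(go env GOPATH)/bin"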

Without installing go, kind can be built reproducibly with docker using make build.

Stable binaries are also available on the releases page. Stable releases are generally recommended for CI usage in particular. To install, download the binary for your platform from "Assets" and place this into your $PATH:

On Linux:

# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-$(uname)-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-$(uname)-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

On macOS via Homebrew:

brew install kind

On macOS via MacPorts:

sudo port selfupdate && sudo port install kind

On macOS via Bash:

# For Intel Macs
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-darwin-amd64
# For M1 / ARM Macs
[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-darwin-arm64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

On Windows:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.22.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

# OR via Chocolatey (https://chocolatey.org/packages/kind)
choco install kind

To use kind, you will need to install docker. Once you have docker running you can create a cluster with:

kind create cluster

To delete your cluster use:

kind delete cluster

To create a cluster from Kubernetes source:

  • ensure that Kubernetes is cloned in $(go env GOPATH)/src/k8s.io/kubernetes
  • build a node image and create a cluster with:
kind build node-image
kind create cluster --image kindest/node:latest

Multi-node clusters and other advanced features may be configured with a config file; for more usage see the docs or run kind [command] --help.
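
For example, a minimal multi-node setup might look like the following sketch (the kind.x-k8s.io/v1alpha4 config API shown matches current kind releases; adjust for the version you run):

# one control-plane node plus two workers
cat <<EOF > kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-multi-node.yaml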

Community

Please reach out for bugs, feature requests, and other issues! The maintainers of this project are reachable via:

  • Kubernetes Slack in the #kind channel
  • filing an issue against this repo
  • the Kubernetes SIG-Testing mailing list

Current maintainers are @aojea and @BenTheElder - feel free to reach out if you have any questions!

Pull Requests are very welcome! If you're planning a new feature, please file an issue to discuss first.

Check the issue tracker for help wanted issues if you're unsure where to start, or feel free to reach out to discuss. 🙂

See also: our own contributor guide and the Kubernetes community page.

Why kind?

  • kind supports multi-node (including HA) clusters
  • kind supports building Kubernetes release builds from source
    • support for make / bash or docker, in addition to pre-published builds
  • kind supports Linux, macOS and Windows
  • kind is a CNCF certified conformant Kubernetes installer

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


kind's Issues

normalize CLI

i.e. make all commands fit the form: kind [verb] [noun]

We should do this to match clusterctl, kubectl, kops, etc...

/kind cleanup
we should prioritize this to avoid breaking users that start to depend on the CLI...
/priority important-soon
/lifecycle active
/assign

FYI @mitar @Lion-Wei @m1093782566 @munnerz @spiffxp
I think we can temporarily support both kind create and kind create cluster and print a warning, but we should eliminate kind create before we cut any release.

This leaves us room to expand the CLI in a consistent way in the future.

unable to load kernel module "configs"

On Linux Ubuntu 18.04 I am getting the following error/warning:

[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-36-generic
DOCKER_VERSION: 17.03.2-ce
DOCKER_GRAPH_DRIVER: aufs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-36-generic\n", err - exit status 1

The reason is that the modules directory on the host looks like this:

/lib/modules/4.15.0-36-generic$ ls -al
total 5288
drwxr-xr-x  5 root root    4096 Oct  4 08:39 .
drwxr-xr-x  7 root root    4096 Oct  4 08:39 ..
lrwxrwxrwx  1 root root      40 Sep 24 07:08 build -> /usr/src/linux-headers-4.15.0-36-generic
drwxr-xr-x  2 root root    4096 Sep 24 07:08 initrd
drwxr-xr-x 15 root root    4096 Oct  4 08:39 kernel
-rw-r--r--  1 root root 1264027 Oct  4 08:39 modules.alias
-rw-r--r--  1 root root 1244585 Oct  4 08:39 modules.alias.bin
-rw-r--r--  1 root root    7719 Sep 24 07:08 modules.builtin
-rw-r--r--  1 root root    9770 Oct  4 08:39 modules.builtin.bin
-rw-r--r--  1 root root  552081 Oct  4 08:39 modules.dep
-rw-r--r--  1 root root  780354 Oct  4 08:39 modules.dep.bin
-rw-r--r--  1 root root     317 Oct  4 08:39 modules.devname
-rw-r--r--  1 root root  206127 Sep 24 07:08 modules.order
-rw-r--r--  1 root root     540 Oct  4 08:39 modules.softdep
-rw-r--r--  1 root root  589765 Oct  4 08:39 modules.symbols
-rw-r--r--  1 root root  719596 Oct  4 08:39 modules.symbols.bin
drwxr-xr-x  3 root root    4096 Oct  4 08:39 vdso

kind mounts the modules directory into the node container, but the build subdirectory is a symlink to /usr/src/linux-headers-4.15.0-36-generic, which is not mounted, so the kernel config cannot be found.
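
A quick host-side check of whether the kernel config is reachable at all (paths are examples and vary by distro):

# kubeadm's verifier looks for the kernel config in places like these
ls -l /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null
# loading the module on the host, rather than in the node container, may also help
sudo modprobe configs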

Support exposing additional ports to be used for NodePort services

kind currently only exposes a single Docker port to the host machine (Kubernetes API server).
As a result, it is impossible to access other services running inside the cluster.

In minikube NodePort is a solution for that.
minikube exposes all ports on a VM's IP address:

โฏ minikube service list
|----------------|-----------------------|--------------------------------|
|   NAMESPACE    |         NAME          |              URL               |
|----------------|-----------------------|--------------------------------|
| example        | example               | http://192.168.99.100:30980

that just print a VM IP address + port that you can query via

โฏ kubectl get -n example service example -o wide
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
example    NodePort    10.110.138.8     <none>        8080:30980/TCP,9090:30990/TCP   22m

This is also useful for testing Ingress controllers like Contour, https://github.com/heptio/contour/blob/master/docs/minikube.md

The trivial change I made to make this work was adding extra flags to docker run in nilebox@99c4505.
After that I just declared a service with the NodePort type and a port from the exposed range, and I was able to access it via curl from the host machine.

Would it make sense to support such a feature in kind upstream?
e.g. a flag for exposing extra ports to be passed to docker run.
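
For reference, kind later grew exactly this knob as extraPortMappings in the cluster config; a rough sketch (port numbers are just examples):

# map a NodePort on the node container through to the host
cat <<EOF > kind-ports.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30980
    hostPort: 30980
EOF
kind create cluster --config kind-ports.yaml
# a NodePort service on 30980 is then reachable at localhost:30980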

Add versioning to kind

This issue is to track adding a version option/sub-command that displays the kind version. This would help in debugging to know what version a user is working with and to support released versions of kind.

Use container IP address in server field in kubeconfig for kind cluster

When working with multiple clusters such as with https://github.com/kubernetes-sigs/federation-v2, we provide the cluster info to the federation controller running in one of the clusters, to talk to all the other clusters. That cluster info comes from the kubeconfig provided by the user. The current kind create cluster command will create a kubeconfig with the server field using localhost as shown below:

$ KUBECONFIG=$(kind get kubeconfig-path) kubectl config view | grep server
    server: https://localhost:38785

This makes it problematic when one kind cluster needs to access the kube API server in another kind cluster because localhost for each kind cluster refers to itself. Since the kind clusters share a common docker network bridge, we can use the IP address for each container. So what is needed is for the container IP address to be used in the server field in the kubeconfig for the kind cluster as shown below:

$ KUBECONFIG=$(kind get kubeconfig-path) kubectl config view | grep server
    server: https://172.17.0.2:38785

The host would still be able to talk to the kube API servers using their container IP addresses. Is this something that is desirable? Is there a reason to use localhost instead?
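
As a workaround today, the kubeconfig can be rewritten by hand; a sketch (the cluster entry name here is an assumption; check kubectl config view for the real one):

# find the control-plane container's IP on the docker bridge
NODE_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-1-control-plane)
# point the server field at it; 6443 is the in-container API server port
kubectl config set-cluster kind-1 \
  --server="https://${NODE_IP}:6443" \
  --kubeconfig "$(kind get kubeconfig-path)"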

Unstable cluster

When running it locally on my machine, the cluster seems much more unstable than on our CI. The cluster is now created inside a privileged container, but I am getting strange errors:

$ kubectl cluster-info
Unable to connect to the server: unexpected EOF
$ kubectl cluster-info
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get services)
$ kubectl cluster-info 
error: the server doesn't have a resource type "services"

Documentation: how to enable alpha features

I am trying to use PodPreset from settings.k8s.io/v1alpha1, but it seems that Kubernetes as deployed by kind does not support it. This is probably just a question of enabling those experimental features, but it is unclear how one does that.
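
For what it's worth, current kind releases expose this through runtimeConfig (and the sibling featureGates field) in the cluster config; a sketch only, and PodPreset would additionally need its admission plugin enabled via a kubeadm patch, which is omitted here:

# enable the alpha API group that PodPreset lives in
cat <<EOF > kind-alpha.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
runtimeConfig:
  "settings.k8s.io/v1alpha1": "true"
EOF
kind create cluster --config kind-alpha.yaml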

SIGSEGV error

If the control plane Docker container already exists, kind crashes with SIGSEGV:

INFO[1003-05:37:50] Running: /usr/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --tmpfs /var/lib/docker:exec -v /lib/modules:/lib/modules:ro --hostname kind-1-control-plane --name kind-1-control-plane --label io.k8s.sigs.kind.cluster=1 --expose 6443 --publish-all --entrypoint=/usr/local/bin/entrypoint kindest/node:v1.11.3 /sbin/init] 
ERRO[1003-05:37:50] docker: Error response from daemon: Conflict. The container name "/kind-1-control-plane" is already in use by container "0e7e0562ad123ce2430b12c44c3606be4458b1189770d2bfef3ab88a97ab5282". You have to remove (or rename) that container to be able to reuse that name. 
ERRO[1003-05:37:50] See 'docker run --help'.                     
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x5e26b9]

goroutine 1 [running]:
sigs.k8s.io/kind/pkg/cluster.(*nodeHandle).Run(0x0, 0x69654d, 0x5, 0xc4200d7638, 0x3, 0x3, 0x6978f0, 0xa)
	/usr/local/go/src/sigs.k8s.io/kind/pkg/cluster/node.go:112 +0x99
sigs.k8s.io/kind/pkg/cluster.(*nodeHandle).FixMounts(0x0, 0x14, 0x69aabb)
	/usr/local/go/src/sigs.k8s.io/kind/pkg/cluster/node.go:243 +0x96
sigs.k8s.io/kind/pkg/cluster.(*Context).provisionControlPlane(0xc420015dd0, 0xc42000d5c0, 0x14, 0xc420073620, 0x1, 0xc42000d5c0, 0x14, 0x5dc2bd)
	/usr/local/go/src/sigs.k8s.io/kind/pkg/cluster/cluster.go:158 +0xd5
sigs.k8s.io/kind/pkg/cluster.(*Context).Create(0xc420015dd0, 0xc420073620, 0x0, 0x0)
	/usr/local/go/src/sigs.k8s.io/kind/pkg/cluster/cluster.go:114 +0x13d
sigs.k8s.io/kind/cmd/kind/create.run(0xc420073530, 0xc4200cac80, 0x7f1c98, 0x0, 0x0)
	/usr/local/go/src/sigs.k8s.io/kind/cmd/kind/create/create.go:84 +0x4f6
sigs.k8s.io/kind/cmd/kind/create.NewCommand.func1(0xc4200cac80, 0x7f1c98, 0x0, 0x0)
	/usr/local/go/src/sigs.k8s.io/kind/cmd/kind/create/create.go:44 +0x52
sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).execute(0xc4200cac80, 0x7f1c98, 0x0, 0x0, 0xc4200cac80, 0x7f1c98)
	/usr/local/go/src/sigs.k8s.io/kind/vendor/github.com/spf13/cobra/command.go:766 +0x2b9
sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4200ca280, 0xc4200caf00, 0xc4200cac80, 0xc4200ca500)
	/usr/local/go/src/sigs.k8s.io/kind/vendor/github.com/spf13/cobra/command.go:852 +0x37e
sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4200ca280, 0x6a86d0, 0xc42005a0cc)
	/usr/local/go/src/sigs.k8s.io/kind/vendor/github.com/spf13/cobra/command.go:800 +0x2b
sigs.k8s.io/kind/cmd/kind.Run(0x7bade0, 0xc420056e00)
	/usr/local/go/src/sigs.k8s.io/kind/cmd/kind/kind.go:45 +0x2f
sigs.k8s.io/kind/cmd/kind.Main()
	/usr/local/go/src/sigs.k8s.io/kind/cmd/kind/kind.go:56 +0x7a
main.main()
	/usr/local/go/src/sigs.k8s.io/kind/main.go:25 +0x20

I am not familiar with Go, but SIGSEGV should never happen, no?

figure out why create is broken after https://github.com/kubernetes/kubernetes/pull/68890

/priority critical-urgent
/assign

I've pinned down CI failures to after kubernetes/kubernetes#68890 merged and I can replicate this locally, but I'm not sure why this broke kind just yet. This reliably breaks cluster bring-up.

This changed how some of the static pods handle DNS; before this they would have used host DNS instead of cluster DNS. I'm not actually sure how this is supposed to bootstrap properly.

FYI @spiffxp :(

cc @MrHohn who I suspect is familiar with DNS policy 😉 and may be able to lend a hand

Failed to create cluster: failed to apply overlay network: exit status 1

Hi I tried running kind and encountered the following errors. Full log attached at the bottom.

Notable lines:

Error processing tar file(exit status 1): write /usr/local/bin/kube-controller-manager: no space left on device
FATA[1007-16:12:53] Failed to create cluster: failed to apply overlay network: exit status 1

So I guess there are two issues here:

  • No space left on device: the root filesystem is a small tmpfs, as df on the Docker host shows:

~$ df
Filesystem                Size      Used Available Use% Mounted on
tmpfs                   896.1M    244.0M    652.0M  27% /
tmpfs                   497.8M    104.0K    497.7M   0% /dev/shm
/dev/sda1                17.9G      5.3G     11.6G  32% /mnt/sda1
cgroup                  497.8M         0    497.8M   0% /sys/fs/cgroup
c/Users                 463.5G    276.1G    187.4G  60% /c/Users
/dev/sda1                17.9G      5.3G     11.6G  32% /mnt/sda1/var/lib/docker

  • Early exit on error. I think the create process should have stopped at the initial "no space left on device" error but it went on a while.
$ ~/go/bin/kind create
INFO[1007-16:08:47] Image: kindest/node:v1.11.3 present locally
INFO[1007-16:08:47] Running: /usr/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --tmpfs /var/lib/docker:exec -v /lib/modules:/lib/modules:ro --hostname kind-1-control-plane --name kind-1-control-plane --label io.k8s.sigs.kind.cluster=1 --expose 6443 --publish-all --entrypoint=/usr/local/bin/entrypoint kindest/node:v1.11.3 /sbin/init]
kind-1-control-plane
f9d9e4e6e2f0: Loading layer 1.378 MB/1.378 MB
7d9ec20bda54: Loading layer 185.2 MB/185.2 MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.11.3
Loaded image: gcr.io/google_containers/kube-apiserver:v1.11.3
582b548209e1: Loading layer  44.2 MB/44.2 MB
e20569a478ed: Loading layer 3.358 MB/3.358 MB
59eabcbba10c: Loading layer 51.92 MB/51.92 MB
Loaded image: k8s.gcr.io/kube-proxy:v1.11.3
Loaded image: gcr.io/google_containers/kube-proxy:v1.11.3
d3a980f536d3: Loading layer 153.8 MB/153.8 MB
Error processing tar file(exit status 1): write /usr/local/bin/kube-controller-manager: no space left on device
1fb1aa2fa388: Loading layer  55.5 MB/55.5 MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.11.3
Loaded image: gcr.io/google_containers/kube-scheduler:v1.11.3
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-apiserver-amd64   v1.11.3             932fff609f2d        2 weeks ago         186 MB
gcr.io/google_containers/kube-apiserver         v1.11.3             932fff609f2d        2 weeks ago         186 MB
k8s.gcr.io/kube-apiserver-amd64                 v1.11.3             932fff609f2d        2 weeks ago         186 MB
k8s.gcr.io/kube-apiserver                       v1.11.3             932fff609f2d        2 weeks ago         186 MB
gcr.io/google_containers/kube-proxy-amd64       v1.11.3             8b7daab5f48f        2 weeks ago         97.6 MB
gcr.io/google_containers/kube-proxy             v1.11.3             8b7daab5f48f        2 weeks ago         97.6 MB
k8s.gcr.io/kube-proxy-amd64                     v1.11.3             8b7daab5f48f        2 weeks ago         97.6 MB
k8s.gcr.io/kube-proxy                           v1.11.3             8b7daab5f48f        2 weeks ago         97.6 MB
k8s.gcr.io/kube-scheduler-amd64                 v1.11.3             ab997a31b99f        2 weeks ago         56.7 MB
k8s.gcr.io/kube-scheduler                       v1.11.3             ab997a31b99f        2 weeks ago         56.7 MB
gcr.io/google_containers/kube-scheduler-amd64   v1.11.3             ab997a31b99f        2 weeks ago         56.7 MB
gcr.io/google_containers/kube-scheduler         v1.11.3             ab997a31b99f        2 weeks ago         56.7 MB
INFO[1007-16:08:55] Using KubeadmConfig:

# config generated by kind
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
# TODO(bentheelder): fix this upstream!
# we need nsswitch.conf so we use /etc/hosts
# https://github.com/kubernetes/kubernetes/issues/69195
apiServerExtraVolumes:
- name: nsswitch
  mountPath: /etc/nsswitch.conf
  hostPath: /etc/nsswitch.conf
  writeable: false
  pathType: FileOrCreate
clusterName: kind-1
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServerCertSANs: [localhost]


[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-crictl]: crictl not found in system path
I1007 23:08:54.584002     368 kernel_validator.go:81] Validating kernel version
I1007 23:08:54.584592     368 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
        [WARNING ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.11.3]: exit status 1
        [WARNING ImagePull]: failed to pull image [k8s.gcr.io/etcd-amd64:3.2.18]: exit status 1
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kind-1-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kind-1-control-plane localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kind-1-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Adding extra host path mount "nsswitch" to "kube-apiserver"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

The connection to the server 172.17.0.2:6443 was refused - did you specify the right host or port?
error: unable to recognize "https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxMSIsIEdpdFZlcnNpb246InYxLjExLjMiLCBHaXRDb21taXQ6ImE0NTI5NDY0ZTQ2MjljMjEyMjRiM2Q1MmVkZmUwZWE5MWIwNzI4NjIiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE4LTA5LTIwVDA5OjAyOjUyWiIsIEdvVmVyc2lvbjoiZ28xLjEwLjMiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQo=": Get https://172.17.0.2:6443/api?timeout=32s: dial tcp 172.17.0.2:6443: connect: connection refused
FATA[1007-16:12:53] Failed to create cluster: failed to apply overlay network: exit status 1

improve logging

  • make log level configurable
  • improve differentiation between print, info, debug etc.
    • we should use print & friends for normal user level info
    • use info for debug-ish details like "running this command"
    • use debug for trace-ish level details
    • use warn & error as appropriate
  • revisit structured logging
  • look at adding "quiet" modes to all commands

/kind cleanup
/assign

pre-pull images that are not part of the build

When we build Kubernetes we are able to preload the built images into the kind node image. A typical Kubernetes build does not actually build important, relatively fixed images like etcd, though; we should look at querying kubeadm for these at build time and pre-pulling them.

Possibly we can use https://github.com/google/containerregistry to pull these directly into the built image without docker. These tools have worked pretty well for us underneath rules_docker in test-infra.

/priority important-soon
/kind enhancement
/assign

Cannot talk to cluster inside dind container

I am trying to use it on GitLab CI, which uses DIND. I am trying to set up a cluster inside a Docker container. I have tried the following:

$ docker run --privileged --rm -d --name dind docker:dind

$ docker run --rm -t -i --link dind:docker -e DOCKER_HOST=tcp://docker:2375 ubuntu:artful

Inside container:

$ apt-get update
$ apt-get install --yes golang-go git curl unzip wget apt-transport-https curl ca-certificates

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

$ curl -s https://download.docker.com/linux/ubuntu/gpg | apt-key add -
$ echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" > /etc/apt/sources.list.d/docker.list

$ apt-get update
$ apt-get install --yes kubectl docker-ce

$ export GOPATH=/usr/local/go
$ export PATH="${GOPATH}/bin:${PATH}"

$ go get sigs.k8s.io/kind

$ kind create

$ export KUBECONFIG="/root/.kube/kind-config-1"
$ kubectl cluster-info 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:32771 was refused - did you specify the right host or port?

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                     NAMES
898cd8aeedce        kindest/node:v1.11.3   "/usr/local/bin/entrโ€ฆ"   2 minutes ago       Up 2 minutes        0.0.0.0:32771->6443/tcp   kind-1-control-plane

$ docker exec kind-1-control-plane ps
  PID TTY          TIME CMD
    1 ?        00:00:00 systemd
   55 ?        00:00:00 systemd-journal
   71 ?        00:00:12 dockerd
   88 ?        00:00:00 docker-containe
  812 ?        00:00:08 kubelet
 1755 ?        00:00:00 ps

$ kind delete

Create user guide(s)

we should have documentation with details for:

  • common usage
  • known issues and workarounds
  • power user features

/assign
/priority important-soon
/kind documentation

Add option to specify kubeconfig when creating kind clusters

The kind create cluster command creates a ~/.kube/kind-config-<name> kubeconfig file to access that kind cluster. When working with multiple clusters, you end up with multiple of these files. Could we add an option to the kind create cluster command (e.g. --kubeconfig) so that the user can provide a single file to use for all the kubeconfigs, such as ~/.kube/config? I've also filed #112, which would need to be resolved first.
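
For reference, this later landed as a --kubeconfig flag on kind create cluster; a sketch of the intended workflow (context names follow the kind-<name> convention of current releases):

# write both clusters' credentials into one file
kind create cluster --name alpha --kubeconfig "$HOME/.kube/config"
kind create cluster --name beta --kubeconfig "$HOME/.kube/config"
kubectl --context kind-alpha get nodes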

add retries to image pulls

(base) image pulling can flake builds (and probably cluster creation) ... let's add configurable retries to image pulling to avoid this.

/priority important-soon
/kind feature
/assign

implement command for log dumping

  • This is critical to kubetest integration, and will really help with debugging
  • @Katharine's work could use particular log dumping for the special coverage output files

/kind feature
/priority important-soon

Update the docker version to maximum validated version 18.06

Looks like the maximum validated Docker version for Kubernetes is 18.06 now:
kubernetes/website#10299
kubernetes/test-infra#9444
kubernetes/kubernetes#68495
https://kubernetes.io/docs/setup/cri/#docker

On each of your machines, install Docker. Version 18.06 is recommended, but 1.11, 1.12, 1.13 and 17.03 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.

So we can update the docker version to 18.06 in images/base/Dockerfile. 😃

# NOTE: 17.03 is officially supported by Kubernetes currently, so we pin to that.
# https://kubernetes.io/docs/tasks/tools/install-kubeadm/
ARG DOCKER_VERSION="17.03.2~ce-0~debian-stretch"

switch docker graph to volume

We're using a tmpfs currently because it was convenient and fast, but long term we really ought to test switching to a volume which will allow us to handle larger images on the nodes etc more gracefully.

To do this we need to create & label the volume when provisioning nodes with a name keyed by the cluster + node, then clean up any volumes labeled with the cluster in cluster teardown.
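
A rough docker-level sketch of the mechanics (names are illustrative; the label matches the one kind already puts on node containers):

# provision: one labeled volume per node, keyed by cluster + node
docker volume create --label io.k8s.sigs.kind.cluster=1 kind-1-control-plane-graph
# run the node with the volume mounted where the tmpfs used to be
# (other flags elided; see the docker run invocation kind already uses)
docker run -d --privileged -v kind-1-control-plane-graph:/var/lib/docker ... kindest/node:v1.11.3
# teardown: remove every volume carrying the cluster's label
docker volume ls -q --filter label=io.k8s.sigs.kind.cluster=1 | xargs -r docker volume rm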

/kind cleanup
/assign

add a blocking e2e presubmit

This repo should have a presubmit that actually builds an image and spins up a cluster, running some very basic smoke tests. As other issues are fixed we should consider expanding it to something like the non-serial conformance tests.

  • create an e2e presubmit
  • let it "soak"
  • make it blocking

/priority important-soon
/assign

Support insecure-registries for container runtime running inside of kind container

The host that is running kind to set up kind clusters may want to create container images to be pulled by the container runtime (docker/containerd daemons) running inside of the kind-<name>-control-plane containers e.g. kind-1-control-plane. To simplify this, it would be great to have a way to easily configure the container runtime running inside the kind containers with insecure-registries in order to pull images from the host's insecure registry. This would simplify the local registry setup on the host to not require TLS.

For now, I have used the following workaround:

  1. Run a local insecure registry on the host:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

  2. Create daemon.json inside of the kind container via docker exec (note the single quotes, so the JSON survives the shell):

docker exec kind-1-control-plane bash -c 'cat <<EOF > /etc/docker/daemon.json
{
    "insecure-registries": ["172.17.0.1:5000"]
}
EOF'

  3. Send SIGHUP to the docker daemon in the kind container in order to reload the config:

docker exec kind-1-control-plane bash -c 'kill -s SIGHUP $(pgrep dockerd)'

This works for now and then any container image to be pulled needs to be specified like so:

docker pull 172.17.0.1:5000/<imagename>:<tag>

Build node-image using apt build type fails

Trying to build the node-image using the apt type fails:

INFO[15:00:54] Pulling image: k8s.gcr.io/kube-apiserver:v1.12.2 ... 
ERRO[15:00:55] Image build Failed! exit status 1            
FATA[15:00:56] Error building node image: exit status 1

The problem seems to be that during the build process, kind tries to save the required images to a temporary directory of the form /tmp/kind-node-imageXXXXXXXXX/bits/images:

kind/pkg/build/node/node.go

Lines 368 to 369 in 6177a98

pullTo := fmt.Sprintf("%s/bits/images/%s", dir, pullName)
err = docker.Save(image, pullTo)

Extracting the output from the docker.Save command:

INFO[15:00:55] exit status 1                                
INFO[15:00:55] failed to save image: unable to validate output path: directory "/tmp/kind-node-image241009176/bits/images" does not exist

Possible solution

During a previous step in the build process, build artifacts are moved to the temporary build directory by populateBits. This call will create the subdirectory bits within the build directory /tmp/kind-node-imageXXXXXXXX.

kind/pkg/build/node/node.go

Lines 150 to 167 in 6177a98

func (c *BuildContext) populateBits(buildDir string) error {
	// always create bits dir
	bitsDir := path.Join(buildDir, "bits")
	if err := os.Mkdir(bitsDir, 0777); err != nil {
		return errors.Wrap(err, "failed to make bits dir")
	}
	// copy all bits from their source path to where we will COPY them into
	// the dockerfile, see images/node/Dockerfile
	bitPaths := c.bits.Paths()
	for src, dest := range bitPaths {
		realDest := path.Join(bitsDir, dest)
		// NOTE: we use copy not copyfile because copy ensures the dest dir
		if err := fs.Copy(src, realDest); err != nil {
			return errors.Wrap(err, "failed to copy build artifact")
		}
	}
	return nil
}

We could add another exec.Command here to create the images subdirectory within bits.

Another possibility would be to do this when the images are being pulled (here we could more easily track where directories are being created):

kind/pkg/build/node/node.go

Lines 359 to 375 in 6177a98

for i, image := range requiredImages {
	if !builtImages.Has(image) {
		fmt.Printf("Pulling: %s\n", image)
		err := docker.Pull(image, 2)
		if err != nil {
			return err
		}
		// TODO(bentheelder): generate a friendlier name
		pullName := fmt.Sprintf("%d.tar", i)
		pullTo := fmt.Sprintf("%s/bits/images/%s", dir, pullName)
		err = docker.Save(image, pullTo)
		if err != nil {
			return err
		}
		movePulled = append(movePulled, fmt.Sprintf("/build/bits/images/%s", pullName))
	}
}

/kind bug

Data Race in latest version

$ bazel-bin/external/io_k8s_sigs_kind/darwin_amd64_race_stripped/kind create cluster
Creating cluster 'kind-1' ...
 ✓ Ensuring node image (kindest/node:v1.11.3@sha256:855562266d5f647b5770a91c76a0579ed6487eb5bf1dfe908705dad285525483) 🖼
 ✓ [kind-1-control-plane] Creating node container 📦
==================
WARNING: DATA RACE
Read at 0x00c000116150 by goroutine 8:
  runtime.convT2Estring()
      GOROOT/src/runtime/iface.go:332 +0x0
  sigs.k8s.io/kind/vendor/github.com/briandowns/spinner.(*Spinner).Start.func1()
      external/io_k8s_sigs_kind/vendor/github.com/briandowns/spinner/spinner.go:262 +0x336

Previous write at 0x00c000116150 by main goroutine:
  sigs.k8s.io/kind/pkg/log.(*Status).Start()
      external/io_k8s_sigs_kind/pkg/log/status.go:128 +0x1c5
  sigs.k8s.io/kind/pkg/cluster.(*Context).provisionControlPlane()
      external/io_k8s_sigs_kind/pkg/cluster/cluster.go:163 +0x30f
  sigs.k8s.io/kind/pkg/cluster.(*Context).Create()
      external/io_k8s_sigs_kind/pkg/cluster/cluster.go:120 +0x38c
  sigs.k8s.io/kind/cmd/kind/create/cluster.run()
      external/io_k8s_sigs_kind/cmd/kind/create/cluster/createcluster.go:84 +0x3b9
  sigs.k8s.io/kind/cmd/kind/create/cluster.NewCommand.func1()
      external/io_k8s_sigs_kind/cmd/kind/create/cluster/createcluster.go:44 +0x69
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).execute()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:766 +0x8b2
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).ExecuteC()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:852 +0x432
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).Execute()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:800 +0x38
  sigs.k8s.io/kind/cmd/kind.Run()
      external/io_k8s_sigs_kind/cmd/kind/kind.go:84 +0x34
  sigs.k8s.io/kind/cmd/kind.Main()
      external/io_k8s_sigs_kind/cmd/kind/kind.go:101 +0x174
  main.main()
      external/io_k8s_sigs_kind/main.go:25 +0x2f

Goroutine 8 (running) created at:
  sigs.k8s.io/kind/vendor/github.com/briandowns/spinner.(*Spinner).Start()
      external/io_k8s_sigs_kind/vendor/github.com/briandowns/spinner/spinner.go:245 +0x95
  sigs.k8s.io/kind/pkg/log.(*Status).Start()
      external/io_k8s_sigs_kind/pkg/log/status.go:129 +0x20a
  sigs.k8s.io/kind/pkg/cluster.(*Context).Create()
      external/io_k8s_sigs_kind/pkg/cluster/cluster.go:113 +0x2a9
  sigs.k8s.io/kind/cmd/kind/create/cluster.run()
      external/io_k8s_sigs_kind/cmd/kind/create/cluster/createcluster.go:84 +0x3b9
  sigs.k8s.io/kind/cmd/kind/create/cluster.NewCommand.func1()
      external/io_k8s_sigs_kind/cmd/kind/create/cluster/createcluster.go:44 +0x69
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).execute()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:766 +0x8b2
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).ExecuteC()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:852 +0x432
  sigs.k8s.io/kind/vendor/github.com/spf13/cobra.(*Command).Execute()
      external/io_k8s_sigs_kind/vendor/github.com/spf13/cobra/command.go:800 +0x38
  sigs.k8s.io/kind/cmd/kind.Run()
      external/io_k8s_sigs_kind/cmd/kind/kind.go:84 +0x34
  sigs.k8s.io/kind/cmd/kind.Main()
      external/io_k8s_sigs_kind/cmd/kind/kind.go:101 +0x174
  main.main()
      external/io_k8s_sigs_kind/main.go:25 +0x2f
==================
 ✓ [kind-1-control-plane] Fixing mounts 🗻
 ✓ [kind-1-control-plane] Starting systemd 🖥
 ✓ [kind-1-control-plane] Waiting for docker to be ready 🐋
⢎⡰ [kind-1-control-plane] Starting Kubernetes (this may take a minute) ☸

wait for CNI to be ready

or more generally, the e2e tests need the node to be in the ready state; we should either block kind create on this, or give a separate command to block on readiness.

I think probably making kind create block on this with some flag to toggle it is the way to go.
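
A blocking step might be as simple as the sketch below (kubectl wait exists as of Kubernetes 1.11); kind could run the equivalent internally behind the proposed flag:

# block until every node, and hence its CNI, reports Ready, with a timeout
kubectl wait --for=condition=Ready node --all --timeout=300s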

/kind bug
/priority important-soon
/assign

Create development guide(s)

we should have documentation with details for developing kind
/assign
/priority important-soon
/kind documentation

Allow mounting host directories to kind nodes

It seems there is no way to request some host volumes to be mounted inside the kind container, so pods cannot access those host volumes through hostPath. This would be useful for us to bring testing data in.
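
For reference, current kind releases support this as extraMounts on a node in the cluster config; a sketch with placeholder paths:

cat <<EOF > kind-mounts.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /path/on/host/test-data
    containerPath: /test-data
EOF
kind create cluster --config kind-mounts.yaml
# pods can then use a hostPath volume pointing at /test-data on the node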

add environment diagnostics

we should have a number of checks when kind fails, to help diagnose issues in the environment; ideally this could also be invoked by a separate command. Some rough ideas:

  • check if there are existing containers tagged with the current cluster when creating
  • more gracefully handle lack of cluster artifacts when deleting
  • gracefully handle incompatible docker / lack of docker
  • base image builds should check that go exists

/priority important-soon

support executing commands on nodes

We should have a command like kind exec [--cluster=foo] nodename command args that allows executing a command on a node by name. Then users can list nodes with kubectl get no and run commands against them with kind exec foo-node ...

This will be helpful for things like allowing prototyping of coverage dumping for #5 without leaking quite as many details about the container names etc.
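
Until such a command exists, the node containers are ordinary docker containers, so something like this works as a stopgap (the label is the one visible in the docker run invocations elsewhere on this page):

# list node containers for a cluster by the label kind applies
docker ps --filter label=io.k8s.sigs.kind.cluster --format '{{.Names}}'
# then exec into a node by its container name
docker exec -it kind-1-control-plane systemctl status kubelet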

/kind feature
/priority important-soon
/assign

support all the kubeadm config flavors in templates

The kubeadm configuration is moving fast and ever changing.
It was v1alpha2 in 1.11, and in 1.12 it is v1alpha3.
Even more changes are planned for v1beta1 in 1.13.

Based on a parseable kubernetes-version flag of sorts, kind would have to pick the right template to construct a kubeadm config.
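
The version-to-API mapping this implies, as a shell sketch (shapes only; the real implementation would live in kind's Go templates):

# pick the kubeadm config API group/version from the requested Kubernetes version
case "$KUBERNETES_VERSION" in
  v1.11.*) KUBEADM_API="kubeadm.k8s.io/v1alpha2" ;;
  v1.12.*) KUBEADM_API="kubeadm.k8s.io/v1alpha3" ;;
  *)       KUBEADM_API="kubeadm.k8s.io/v1beta1" ;; # planned for 1.13+
esac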

for ref:

const DefaultConfigTemplate = `# config generated by kind

kubeadm's api:
https://github.com/kubernetes/kubernetes/tree/master/cmd/kubeadm/app/apis/kubeadm

/assign
/kind feature
/priority important-longterm

Homebrew formula

After kind is released it'd be great to have a homebrew formula for macOS users to use to install kind. Definitely a nice to have :)

Multiple node support?

Don't know if this has been discussed yet, but it'd be nice to be able to bootstrap joining "nodes" in more docker containers, like kubeadm-dind-cluster does 😄. WDYT?

kind create cluster creates same kubeconfig user name for each cluster

When creating multiple clusters with kind create cluster --name <name> the kubeconfig for each cluster specifies the same user name of kubernetes-admin. This becomes a problem when trying to use multiple kubeconfigs in KUBECONFIG because the user name overlaps with each of the configs. For example:

$ export KUBECONFIG="$(kind get kubeconfig-path):$(kind get kubeconfig-path --name 2)"
$ kubectl --context=kubernetes-admin@kind-2 get all --all-namespaces
error: the server doesn't have a resource type "all"

Instead each auth user name should include the --name of the kind cluster when specified in the kind create cluster --name <name> command.

Add pre-pulled images during build

I would like to add some pre-pulled images (e.g. weave, core-dns) on the node image in order to make kubeadm init faster.

If the idea is fine, I'm happy to contribute it, but I need a little bit of guidance in order to make this feature a good citizen of kind. From what I can see:

  • This feature could be implemented as new bits that get triggered if the user passes a path where pre-pulled images are stored. Yes/no?
  • Currently BuildContext handles only one bits value, i.e. the apt-install/Kubernetes-build bits. Should I change BuildContext.bits into []Bits?

Any other preferred/suggested approach?

Adopt kubernetes API tooling

pkg/cluster/config/... should adopt Kubernetes's API tooling to automatically generate deep copy and other API tooling.

To do this we need to vendor the generators and add them to hack/update-generate.sh

Marking this help wanted, if anyone wants to take this on feel free to contact me for help. I'll tackle this myself eventually.
/priority important-soon
/kind cleanup
/help
