tilt-dev / ctlptl
Making local Kubernetes clusters fun and easy to set up
License: Apache License 2.0
On Docker Desktop v3.1.0, the following works:
ctlptl docker-desktop set vm.kubernetes.enabled false
On v3.2.2:
ctlptl docker-desktop: expected map at DockerDesktop setting "vm.kubernetes.enabled", got: bool
I cloned the repo and ran "make install" on an Ubuntu 20.04 system.
After downloading and unpacking a ton of Go packages, it throws this error:
build github.com/tilt-dev/ctlptl/cmd/ctlptl: cannot load io/fs: malformed module path "io/fs": missing dot in first path element
Is there anything else I should do to get it to build?
Kind allows you to mount host directories on a kind node. https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
For example:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/bill/work/foo
    containerPath: /foo
Then run:
kind create cluster --config kind-config.yaml
This currently doesn't seem possible with ctlptl. What are the other options?
Related issue with minikube (to solve this generically) #127
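Since the ctlptl API can embed a complete Kind config (via the kindV1Alpha4Cluster field that appears elsewhere in these issues), one possible workaround is to put the extraMounts inside that embedded config. This is a sketch reusing the host path from the kind example above; I haven't verified that ctlptl forwards every Kind field:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
kindV1Alpha4Cluster:
  nodes:
  - role: control-plane
    extraMounts:
    - hostPath: /home/bill/work/foo
      containerPath: /foo
```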
Context:
I am using minikube as a Docker Desktop replacement, running the minikube VM without Kubernetes but with the docker engine.
minikube start --driver=hyperkit --container-runtime=docker --no-kubernetes
eval $(minikube docker-env)
Now I would like to use ctlptl
to set up a k8s cluster with a registry in minikube. How do I do that?
Is it possible to provide a minikube profile/cluster name in the ctlptl YAML file? What is the schema for the YAML file? Can it be included in the docs, please?
Running the ctlptl example for minikube deletes and tries to recreate the minikube VM, but in that process I lose my docker-engine.
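Judging from the Cluster fields that appear in other configs in these issues (product, registry, minCPUs, kubernetesVersion), plus the name field used for docker-desktop, a minikube config with an explicit cluster name might look like the sketch below; whether name maps to the minikube profile is an assumption on my part:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
name: minikube        # assumed to select the minikube profile/context
product: minikube
registry: ctlptl-registry
```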
e.g.,
ctlptl docker-desktop settings
looks pretty easy to fix...they just changed the socket location
one of the teams we work with uses
ctlptl docker-desktop set cli.useCloudCli false
to automatically disable cloud cli in docker-desktop. But if you're using an old version of docker-desktop, this fails, because that setting does not exist.
we should probably have an --ignore-not-found flag for docker-desktop set commands
This error occurs while applying a cluster config from YAML on Windows or Mac. It may be of note that ctlptl does not wait the full 2 minutes, and if I repeat the apply after the Kubernetes icon goes green in Docker Desktop, it will complete without error.
ctlptl apply -f cluster.yaml
Applied new Docker Desktop settings. Waiting 2m for Docker Desktop to restart...
Switched to context "docker-desktop".
configuring cluster: Delete "https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-public/configmaps/ctlptl-cluster-spec": EOF
Setup could not complete due to an error.
Here are the contents of my cluster.yaml:
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
name: docker-desktop
product: docker-desktop
This was run via Windows Terminal + Powershell (5.1), Docker Desktop (4.0.1) with WSL2 on Windows 10, and also from bash on Mac OS 11.6, also with Docker Desktop 4.0.1.
(Edit: Originally I thought this was only an issue on Windows, but I have now tested it on Mac OS as well (with the help of a co-worker), and I am seeing the same issue with Docker Desktop no matter what.)
Currently, the ctlptl API lets you pass a complete Kind config: https://pkg.go.dev/github.com/tilt-dev/ctlptl/pkg/api#Cluster
ctlptl then merges the cluster-independent options with a user-supplied Kind config.
We should do something similar with minikube flags, so you can do things like minikube start --vm=true --cni=cilium --kubernetes-version='v1.19.4' --bootstrapper=kubeadm
, and ctlptl can ensure your cluster has the right settings. (This is a little tricky, because we want to make sure people aren't setting flags that are incompatible with how ctlptl sets up the cluster, but I think it can be done).
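One hypothetical shape for this, mirroring how kindV1Alpha4Cluster embeds a Kind config. The minikube/extraStartFlags field names below are invented for illustration, not an existing ctlptl API:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: minikube
kubernetesVersion: v1.19.4
minikube:
  extraStartFlags:     # hypothetical field
  - --vm=true
  - --cni=cilium
  - --bootstrapper=kubeadm
```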
Currently, ctlptl uses Docker Desktop as the primary VM manager on Windows/macOS.
As more people use Rancher Desktop, we may need to add support for using it as a VM manager too.
I am using this command: "ctlptl create cluster minikube --registry=ctlptl-registry --min-cpus=4 --kubernetes-version=v1.21.2"
ctlptl runs indefinitely without logs.
It was OK with minikube 1.25.1, but not after updating to Docker Desktop 4.4.4.
With the command "minikube start", minikube starts correctly, but not when using ctlptl.
kind.ctlptl.yaml:
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
kubernetesVersion: "v1.21.1"
---
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
name: ctlptl-registry
ctlptl apply -f ./k8s/kind.ctlptl.yaml
Creating registry "ctlptl-registry"...
registry.ctlptl.dev/ctlptl-registry created
creating cluster: No available kindest/node versions for kind version v0.11.1.
Please file an issue: https://github.com/tilt-dev/ctlptl/issues/new
make: *** [cluster-up] Error 1
It worked with kind 0.9.0. If I remove the version, kind starts just fine:
ctlptl apply -f ./k8s/kind.ctlptl.yaml
registry.ctlptl.dev/ctlptl-registry created
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
Running the example Minikube command
ctlptl create cluster minikube --registry=ctlptl-registry --kubernetes-version=v1.18.8
fails on Linux.
I can confirm that docker version
returns, and I can start a Minikube cluster manually.
When provisioning a kind cluster + a local registry, any attempt to use images via hostFromClusterNetwork causes errors because containerd tries to communicate with the insecure registry via https.
ctlptl/pkg/cluster/admin_kind.go
Lines 58 to 63 in 472eb5b
Adding an additional mirror to force containerd to access registry-name:port via http would fix the problem. I confirmed this by docker exec-ing into the kind container, modifying the containerd config.toml, and restarting the containerd service.
My current ctlptl YAMLs:
cluster.yml
---
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: registry.local
registry.yml
---
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
port: 5000
name: registry.local
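For reference, the mirror entry described above might look like this inside the kind node's /etc/containerd/config.toml. This is a sketch; the registry name and port come from the configs above:

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.local:5000"]
  endpoint = ["http://registry.local:5000"]
```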
Proud to grab first issue! This is a slick little tool.
I am having problems with connecting to the registry so I ran the e2e tests to validate my environment.
# clove @ DESKTOP-6TGAFKG in ~/Workspace/src/github.com/tilt-dev/ctlptl/test/kind-cluster-network on git:main x [14:56:08]
$ ./e2e.sh
+++ realpath ./e2e.sh
++ dirname /home/clove/Workspace/src/github.com/tilt-dev/ctlptl/test/kind-cluster-network/e2e.sh
+ cd /home/clove/Workspace/src/github.com/tilt-dev/ctlptl/test/kind-cluster-network
+ ctlptl apply -f registry.yaml
Creating registry "ctlptl-test-registry"...
🔮 Env DOCKER_HOST set. Assuming remote Docker and forwarding registry to localhost:5005
registry.ctlptl.dev/ctlptl-test-registry created
+ ctlptl apply -f cluster.yaml
creating cluster: No available kindest/node versions for kind version v0.12.0-alpha+31d33511ae0a86.
Please file an issue: https://github.com/tilt-dev/ctlptl/issues/new
I installed ctlptl from master and kind from master @ 31d33511ae0a86f04e5bd475acc422407349122c
And I am on WSL2.
I'm having an issue with Kind about an invalid argument when unpacking an image. While searching for the issue, I found this on the breaking changes part of the Kind release page for v0.9.0:
UPDATE: If you are hitting label key and value greater than maximum size (4096 bytes), key: containerd: invalid argument (containerd/cri#1572), then try the following KIND config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".containerd]
    disable_snapshot_annotations = true
But there is no way to set custom Kind config for now, as there is this comment in the admin_kind.go file.
// TODO(nick): Let the user pass in their own Kind configuration.
Can a fix or an option be made available? Thanks!
I saw that ctlptl
is currently using CircleCI.
We could add a local lint check to help maintainers and developers run checks in their local environment. Many open-source projects use golangci-lint,
and from my personal experience it works well. It would decrease the shared GitHub Actions resources consumed and make review easier.
Maybe it would be handy to state clearly in INSTALL.md that the tool can be installed with:
go install github.com/tilt-dev/ctlptl/cmd/ctlptl@latest
I have these scripts:
#!/usr/bin/env bash
cat <<EOF | ctlptl apply -f -
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
EOF
and
#!/usr/bin/env bash
cat <<EOF | ctlptl delete -f -
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
EOF
The setup script works fine and creates the kind cluster with a registry.
However, when I run the teardown
script, the registry is left as an orphaned docker resource. I can get rid of this manually with some scripting, but I would've expected the ctlptl delete
with the registry config to do that automatically.
While #134 fixed the issue where ctlptl stopped waiting for the cluster too early, I'm finding that 2 minutes is sometimes still not long enough. The connection on my Windows VM reports 140 Mbps download speeds (via fast.com), but starting Kubernetes still frequently takes closer to 4-5 minutes than the 2 minutes ctlptl will wait (maybe the bottleneck isn't the internet, but we both have 32 GB of RAM and at least 8 reasonably fast CPU cores). I can of course just run the command a second time and it will work (because the cluster is usually up by then), but I've been surprised by how long it takes Docker Desktop to set up Kubernetes, both on my Windows test VM and my co-worker's Mac.
# ctlptl apply -f ./cluster.yaml
Applied new Docker Desktop settings. Waiting 2m for Docker Desktop to restart...
Switched to context "docker-desktop".
Waiting 2m for Kubernetes cluster "docker-desktop" to start...
timed out waiting for cluster to start: context deadline exceeded
$ docker-desktop set vm.kubernetes.enabled true
ctlptl docker-desktop: reading d4m settings: Get "http://localhost/settings": dial unix /Users/USER/Library/Containers/com.docker.docker/Data/gui-api.sock: connect: no such file or directory
We should give the user instructions that they need to upgrade or install Docker Desktop.
I'm trying to use this with podman using the Docker API-compat socket. It seems like it'll probably work but I've hit a snag and don't know enough about the Docker ecosystem to know the "right solution" for a PR.
Running ctlptl apply
with config to create a registry fails. It creates the container but then can't find it to proceed with the workflow. The problem is this line
ctlptl/pkg/registry/registry.go
Line 106 in f5d2f38
If I change that to use the fully-qualified path docker.io/library/registry:2
then it works fine. I'm unsure whether that would also work on native Docker, or whether I should instead add a check for podman and set the filter depending on Docker vs. podman. Thoughts?
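For context, Docker treats a bare image name as shorthand for docker.io/library/&lt;name&gt;, which is why the fully-qualified filter matches under podman's compat API. Below is a rough Python sketch of that normalization rule (simplified: it ignores digests and registries with ports, and the function name is mine, not an existing API):

```python
def normalize_image_ref(name: str) -> str:
    """Expand a short Docker image reference to a fully-qualified one.

    Simplified sketch of Docker's normalization: bare names get the
    docker.io/library/ prefix, and a missing tag defaults to "latest".
    Ignores digests and registries with ports (e.g. localhost:5000).
    """
    repo, _, tag = name.partition(":")
    tag = tag or "latest"
    parts = repo.split("/")
    if len(parts) == 1:
        # "registry" -> "docker.io/library/registry"
        repo = f"docker.io/library/{parts[0]}"
    elif "." not in parts[0] and ":" not in parts[0] and parts[0] != "localhost":
        # "user/image" -> "docker.io/user/image"
        repo = f"docker.io/{repo}"
    return f"{repo}:{tag}"

print(normalize_image_ref("registry:2"))  # docker.io/library/registry:2
```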
# clove @ DESKTOP-6TGAFKG in ~/Workspace/src/github.com/tilt-dev/ctlptl/examples on git:main o [14:15:49] C:130
$ ctlptl apply -f kind.yaml
Creating registry "ctlptl-registry"...
🔮 Env DOCKER_HOST set. Assuming remote Docker and forwarding registry to localhost:34131
creating local portforwarder: exec: "socat": executable file not found in $PATH
You get the registry started but not kind:
# clove @ DESKTOP-6TGAFKG in ~/Workspace/src/github.com/tilt-dev/ctlptl/examples on git:main o [14:16:52]
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
698c07f443c5 alpine/socat "/bin/sh -c 'while t…" 19 seconds ago Up 18 seconds ctlptl-portforward-service
ab2fa4a07466 registry:2 "/entrypoint.sh /etc…" 21 seconds ago Up 21 seconds 0.0.0.0:34131->5000/tcp, :::34131->5000/tcp ctlptl-registry
DOCKER_HOST=$(ctlptl get cluster kind-kind -o template --template '{{.status.localRegistryHosting.host}}')
echo ${DOCKER_HOST}
The above ctlptl command does not output a '\n' at the end of the output message, and I cannot find it. This causes me to try to remove the character in bash, which is turning out to be difficult.
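As a workaround for wrangling the trailing character in bash, parameter expansion can normalize the captured value whether or not it ends in a newline. This sketch uses a stand-in string rather than the real ctlptl output:

```shell
# Stand-in for: DOCKER_HOST=$(ctlptl get cluster kind-kind -o template ...)
raw="localhost:34131"$'\n'

# Strip one trailing newline, if any, then any remaining trailing whitespace.
host="${raw%$'\n'}"
host="${host%"${host##*[![:space:]]}"}"

echo "$host"
```

Note that command substitution `$(...)` already strips trailing newlines itself, so the expansions above mainly matter if the value is read some other way.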
I am running ctlptl
on macOS, but I've noticed that there are some assumptions being made that Docker Desktop has to be running. Would it be possible to support DOCKER_HOST
? That way we aren't tied to using the unix:///var/run/docker.sock
?
The ctlptl
CLI does not have an option to provide an image URL for the registry (other than docker.io/library/registry).
How can we provide the registry image URL so that it does not pull images from Docker Hub?
In README.md, please add an example of how to download and install ctlptl
manually. It just says to download binaries, with no example of how to do that step.
---
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
port: 5002
name: ctlptl-registry
---
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: minikube
minCPUs: 4
registry: ctlptl-registry
kubernetesVersion: v1.18.3
is failing with an error about the kube api server closing the connection.
I wonder if minikube changed something and minikube start
is exiting before the apiserver is ready
Can't seem to get ctlptl
to connect kind to the local registry. I have tried with the registry image already running, with the registry stopped, and with the registry rm'd.
$ ctlptl create cluster kind --registry=ctlptl-registry
Creating registry "ctlptl-registry"...
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Connecting kind to registry ctlptl-registry
connecting registry: exit status 1
(Additionally, a more obvious/detailed error message would be helpful!)
Currently ctlptl always uses docker as the driver for minikube
We should be able to customize it to use hyperkit.
This isn't quite as simple as changing a flag, because you also need some additional registry plumbing to connect the hyperkit VM to the registry.
If you try to create a docker-desktop cluster for linux,
ctlptl create cluster docker-desktop
you'll get:
Detected remote DOCKER_HOST. Remote Docker engines do not support Docker Desktop clusters: unix:///home/me/.docker/desktop/docker.sock
Now that Docker Desktop for Linux is out of beta, we should support this.
The maintainer of Linkerd2 advised me to try Tilt because it works well for them. And I like Tilt, so I'm trying to learn about it.
The k3d-local
repo was broken, so it needs to be fixed.
brew on Linux
I'm using colima to manage docker and not require a docker-desktop install on Mac (due to licensing).
When ctlptl
attempts to create a kind cluster, it uses the dockerDesktopSocketPath:
ctlptl create cluster kind --name=kind2
could not connect to Docker Desktop. Please ensure Docker is installed and up to date.
(caused by: Get "http://localhost/settings": dial unix /Users/USERNAME/Library/Containers/com.docker.docker/Data/backend.native.sock: connect: no such file or directory)
Colima manages the docker socket at:
echo $DOCKER_HOST
unix:///Users/USER/.colima/docker.sock
I came to the conclusion that this is a possible workaround based on this answer regarding VS Code.
I am using ctlptl
to create my k8s cluster using kind
, but because I am running k8s on a MacBook M1,
I had some problems creating a cluster using kind.
I found a solution in the comments of this StackOverflow answer: https://stackoverflow.com/a/65864116/4718253
which advises using another image to create a kind cluster, like this:
kind create cluster --image rossgeorgiev/kind-node-arm64:v1.20.0
Now I don't know how I should change my ctlptl config to use that image instead!
here is my config.yaml
file:
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
kindV1Alpha4Cluster:
  name: extraa
  nodes:
  - role: control-plane
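The Kind v1alpha4 node spec has an image field, so one option may be to set the arm64 image on the node inside kindV1Alpha4Cluster. This is a sketch based on the config above; I haven't verified that ctlptl passes the field through:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
kindV1Alpha4Cluster:
  name: extraa
  nodes:
  - role: control-plane
    image: rossgeorgiev/kind-node-arm64:v1.20.0
```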
If you have stopped a registry container with something like docker stop ctlptl-registry
, then try to recreate a cluster with ctlptl apply
, you get an opaque error message, such as:
❯ cat ctlptl-kind.yaml | ctlptl apply -f -
Creating registry "ctlptl-registry"...
exit status 125
This appears to be due to ctlptl not detecting a stopped container, executing this command, which fails:
docker: Error response from daemon: Conflict. The container name "/ctlptl-registry" is already in use by container "81d4e69896c373c395d28fc0fdeed1a244ed6d20b26503a3e12ad189b28cbf11". You have to remove (or rename) that container to be able to reuse that name.
Removing the container with docker rm
clears the issue up.
Hi there,
I am trying to create a cluster with ctlptl with the following config:
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
kubernetesVersion: 1.20.2
kindV1Alpha4Cluster:
  name: operator-debug
  networking:
    apiServerAddress: 0.0.0.0
    apiServerPort: 8443
    ipFamily: ipv4
  nodes:
  - role: control-plane
  - role: worker
When I do ctlptl apply -f kind-cluster.yaml
I get the following error:
creating cluster: No available kindest/node versions for kind version v0.11.1.
Please file an issue: https://github.com/tilt-dev/ctlptl/issues/new
I am running this inside wsl2 with Docker Desktop and kind version is kind v0.11.1 go1.16.4 linux/amd64
Best regards
Just got
# clove @ DESKTOP-6TGAFKG in ~/Workspace/src/github.com/tilt-dev/ctlptl/examples on git:main o [14:12:57]
$ ctlptl create
Create a cluster or registry
Usage:
ctlptl create [cluster|registry] [flags]
ctlptl create [command]
Examples:
ctlptl create cluster docker-desktop
ctlptl create cluster kind --registry=ctlptl-registry
Available Commands:
cluster Create a cluster with the given local Kubernetes product
registry Create a registry with the given name
Flags:
-h, --help help for create
Use "ctlptl create [command] --help" for more information about a command.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1464d79]
goroutine 1 [running]:
github.com/tilt-dev/ctlptl/pkg/cmd.(*CreateOptions).Run(0xc00041c570, 0xc000260000, 0x2286840, 0x0, 0x0)
/home/clove/Workspace/src/github.com/tilt-dev/ctlptl/pkg/cmd/create.go:38 +0x39
github.com/spf13/cobra.(*Command).execute(0xc000260000, 0x2286840, 0x0, 0x0, 0xc000260000, 0x2286840)
/home/clove/Workspace/pkg/mod/github.com/spf13/[email protected]/command.go:856 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc000463b80, 0x17efa70, 0x1, 0x1)
/home/clove/Workspace/pkg/mod/github.com/spf13/[email protected]/command.go:960 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
/home/clove/Workspace/pkg/mod/github.com/spf13/[email protected]/command.go:897
main.main()
/home/clove/Workspace/src/github.com/tilt-dev/ctlptl/cmd/ctlptl/main.go:32 +0xf9
From Tim in Slack:
I had this fun thing happen after rebooting and trying to start my minikube cluster again. I've found that if I use minikube stop followed by ctlptl apply -f (my cluster config), then it will delete my cluster rather than confirming it exists and starting it. I'm not sure whether ctlptl ought to be used to start a cluster, but I did find that it was better than minikube start in some cases, as ctlptl re-patches the containerd registry mirror. What is the correct way to restart a minikube cluster after reboot? Is this a bug or am I misusing it?
ctlptl apply -f cluster.yaml
Deleting cluster minikube to initialize with registry ctlptl-registry
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/tim/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster. ...
Ignoring the containerd patching hack/issue for minikube, this isn't a hugely unusual use case: to apply
and expect ctlptl
to launch an existing, yet stopped, local cluster.
We should think about whether there are common scenarios where we can handle this (though there are some chicken-and-egg problems, e.g. we can't read local registry info until we start the cluster, so we might end up starting it, then realizing we can't reconcile it in place and needing to delete it, which would take longer than just having done that in the first place!).
Otherwise, it might be a good idea to clarify the behavior in the README a bit.
Need to get repro steps on this.
ctlptl is possibly doing something unsupported to configure the containerd
config, which minikube overrides on restart.
Reported on Slack: https://kubernetes.slack.com/archives/CESBL84MV/p1642175814004400
I'm having an issue with minikube wiping out the config patch that ctlptl applies to connect to the registry after stopping and starting the cluster. Here is what my /etc/containerd/config.toml looks like after restarting minikube:
[plugins.cri.registry]
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io"]
[plugins.diff-service]
For my development environment I need to run Kubernetes version 1.16. But when trying to set the version through ctlptl, I get a message that this is not supported with Kind, even though this says otherwise. Look at the GitHub releases page of kind for the image tags.
When you have other entries in your kubeconfig which may no longer exist, ctlptl takes a very long time to list the results. We often have dirty kubeconfig files, so this is a bit of an odd user experience.
$ time ctlptl get
CURRENT NAME PRODUCT AGE REGISTRY
foo gke unknown none
bar gke unknown none
gca gke unknown none
tw gke unknown none
* kind-kind kind 6m localhost:56978
ctlptl get 0.13s user 0.15s system 0% cpu 1:00.46 total
I have four gke clusters in my kubeconfig, and I think two of them still exist.
When using ctlptl
with minikube
, Tim Loyer encountered an issue with Docker assigning IP addresses in the order that containers are created. Tim said:
I'm running into an issue but I am not sure if it is a ctlptl problem or minikube+docker driver issue. If create a minikube cluster with ctlptl, restart my computer (or stop minikube container and restart ctlptl-registry) and then try to start minikube again, minikube will fail saying that the container stopped running. It ends up this is because the IP address (192.168.49.2) has been taken over by ctlptl-registry and is already in use when minikube tries to use it for itself.
@milas was able to reproduce the issue locally:
I think minikube is launching itself with
--network minikube --ip 192.168.49.2
and ctlptl is launching the registry with --network minikube
(but no IP flag)
Docker DHCP isn't smart enough to not hand out IP addresses "reserved" by created (but not yet running) containers, so if the registry starts first, Docker assigns it 192.168.49.2, at which point the minikube container can't be started up again.
One solution might be for ctlptl
to choose a fixed (but random) IP that doesn't conflict when it's running with minikube
.
Full thread on k8s slack for reference.
hi there,
I simply tried to spin up either a kind
or a minikube
cluster with:
kind create cluster --image=kindest/node:v1.21.2
ctlptl create cluster minikube --registry=ctlptl-registry --kubernetes-version=v1.21.6
I have to mention that I am NOT using Docker Desktop (it was eating up all my CPU and memory and was impossible to work with). So I work with brew install hyperkit
and brew install docker.
A regular minikube start --memory=4096 --driver=virtualbox --kubernetes-version=v1.21.6 --cpus=4
works as expected.
Am I missing something?
Thank you in advance
PS:
I saw, that ctlptl is using "docker" as guest driver. Maybe that's relevant:
😢 Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "docker" driver, which is incompatible with requested "virtualbox" driver.
(That happened when I tried to create another minikube cluster, while the ctlptl
cluster had failed.)
That's the complete output of ctlptl KIND...
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I1102 09:46:36.417274 210 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.21.2
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1102 09:46:36.437468 210 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1102 09:46:36.722944 210 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1102 09:46:37.345725 210 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1102 09:46:37.577422 210 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1102 09:46:37.883405 210 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1102 09:46:38.047485 210 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1102 09:46:38.726407 210 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
I1102 09:46:38.834593 210 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
I1102 09:46:39.013447 210 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1102 09:46:39.123278 210 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1102 09:46:39.425823 210 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1102 09:46:39.678982 210 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1102 09:46:39.779861 210 manifests.go:96] [control-plane] getting StaticPodSpecs
I1102 09:46:39.782608 210 certs.go:487] validating certificate period for CA certificate
I1102 09:46:39.782827 210 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1102 09:46:39.782848 210 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1102 09:46:39.782854 210 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1102 09:46:39.782858 210 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1102 09:46:39.782861 210 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1102 09:46:39.790326 210 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I1102 09:46:39.790354 210 manifests.go:96] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1102 09:46:39.790789 210 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1102 09:46:39.790834 210 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1102 09:46:39.790840 210 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1102 09:46:39.790843 210 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1102 09:46:39.790846 210 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1102 09:46:39.790849 210 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1102 09:46:39.790852 210 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1102 09:46:39.791645 210 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1102 09:46:39.791671 210 manifests.go:96] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1102 09:46:39.791907 210 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1102 09:46:39.792397 210 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1102 09:46:39.793138 210 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1102 09:46:39.793300 210 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I1102 09:46:39.793877 210 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1102 09:46:39.817565 210 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[... same healthz probe repeated every ~500ms through 09:47:19, every request returning in 0 milliseconds with no healthy response ...]
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1102 09:47:19.818528 210 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[... same healthz probe repeated every ~500ms through 09:47:24 ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1102 09:47:24.818731 210 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[... same healthz probe repeated every ~500ms through 09:47:34 ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1102 09:47:34.817943 210 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[... same healthz probe repeated every ~500ms through 09:47:54 ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1102 09:47:55.318351 210 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[... same healthz probe repeated every ~500ms through 09:48:34, still with no healthy response ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
And here's the output of minikube:
❯ ctlptl create cluster minikube --registry=ctlptl-registry --kubernetes-version=v1.21.6
😄 minikube v1.23.2 on Darwin 11.6
▪ MINIKUBE_EXISTING_DOCKER_TLS_VERIFY=1
▪ MINIKUBE_EXISTING_DOCKER_HOST=tcp://192.168.99.112:2376
▪ MINIKUBE_EXISTING_DOCKER_CERT_PATH=/Users/jan/.docker/machine/machines/default
▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=1992MB) ...
❗ Listening to 0.0.0.0 on external docker host 192.168.99.112. Please be advised
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.27@sha256:89b4738ee74ba28684676e176752277f0db46f57d27f0e08c3feec89311e22de -d /var/lib: exit status 125
stdout:
stderr:
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
📎 Noticed you have an activated docker-env on docker driver in this terminal:
❗ Please re-eval your docker-env, To ensure your environment variables have updated ports:
'minikube -p minikube docker-env'
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=1992MB) ...
❗ Listening to 0.0.0.0 on external docker host 192.168.99.112. Please be advised
😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.27@sha256:89b4738ee74ba28684676e176752277f0db46f57d27f0e08c3feec89311e22de -d /var/lib: exit status 125
stdout:
stderr:
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
❌ Exiting due to PR_DOCKER_IP_CONFLICT: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.27@sha256:89b4738ee74ba28684676e176752277f0db46f57d27f0e08c3feec89311e22de -d /var/lib: exit status 125
stdout:
stderr:
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
💡 Suggestion: Run: 'minikube delete --all' to clean up all the abandoned networks.
🍿 Related issue: https://github.com/kubernetes/minikube/issues/9605
creating minikube cluster: exit status 60
I am still having networking issues with WSL2. When I run the kind example for an internal registry, it works fine. When I use ctlptl, it is not setting the port to the random number, but is setting the registry port in the docker container to 500x.
When ctlptl auto-creates a registry, it should print out the registry port and instructions on how to use it.
(I've also wondered if it should always default new registries to port 5000, but that might be more confusing than helpful: people would assume the registry was always on port 5000, except for the rare case where 5000 is already occupied, and then they wouldn't understand what's wrong.)
It appears k3d might be a good fit now that it supports a local registry?
Currently, you can mount a local folder into the minikube VM with something like:
minikube start --mount --mount-string="$PWD:/home" --driver docker
ctlptl should support something like this. The most bare-bones way to support it would be to have a pass-through way to set minikube options, as described here: #86
But one of the goals of ctlptl is to have APIs that are a bit abstracted above the underlying implementation engine (whether it's a VM vs. a Docker container, or kind vs. minikube), so I wonder if we could offer a better config-based option that would also work on kind.
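For kind, one route may already exist: ctlptl's Cluster spec can embed a kind config via the `kindV1Alpha4Cluster` field, so kind's `extraMounts` option should pass through. A sketch, untested for mounts specifically (the `hostPath` below is just an example path):

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
kindV1Alpha4Cluster:
  nodes:
    - role: control-plane
      extraMounts:
        - hostPath: /home/bill/work/foo  # example path; substitute your own
          containerPath: /foo
```

Applied with `ctlptl apply -f cluster.yaml`. This doesn't help minikube users, though, which is why a product-agnostic mount option would still be nice.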
Hi,
Love the tool. Just rebuilt my environment, and all the images that were tagged for localhost:40392 can't be pushed because the new exposed registry port is 39282. It looks like the registry port can be set if the registry is built outside of the Kind cluster. Adding "port" to the kind config didn't work.
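For what it's worth, the port can be pinned by creating the registry declaratively before the cluster; ctlptl's Registry object accepts a `port` field. A sketch, assuming the default registry name:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
name: ctlptl-registry
port: 5000
```

After `ctlptl apply -f registry.yaml`, creating a cluster with `--registry=ctlptl-registry` should reuse the existing registry on the fixed port instead of picking a random one.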
Our kind-local script has instructions on how to run kind in CircleCI with a remote Docker daemon.
We should support this in ctlptl as well.
Once we do, we can use ctlptl to set up the Tilt integration tests.
Loving ctlptl! Nice work!
It would be awesome if k3d configuration options could be passed through to the `k3d cluster create` command for `product: k3d`, similar to how `kindV1Alpha4Cluster` options get passed through to the `kind create cluster` command for `product: kind`.
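A sketch of what the requested pass-through could look like, mirroring the kind approach; the `k3dV1Alpha4Simple` field name below is hypothetical, and the inner fields follow k3d's SimpleConfig schema:

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: k3d
k3dV1Alpha4Simple:  # hypothetical field, by analogy with kindV1Alpha4Cluster
  servers: 1
  agents: 2
```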