
Introduction

KINDONC!! a.k.a. kind-on-c a.k.a. kind on concourse


This is a concourse task which uses kind to start a kubernetes cluster locally for users to run their tests against.

Users can either run whatever kubernetes version kind uses by default, or build and run a specific kubernetes version by providing a kubernetes source tree checked out at the desired revision.

It is heavily based on concourse-dcind and of course uses kind. This is really just plumbing ...

Usage

Run whatever kind thinks is cool ...

jobs:
- name: kind
  plan:
  - in_parallel:
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    access_token: <some github token>
    pre_release: true
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c

This uses the kind binary, provided by the kind-release resource, to create a kubernetes cluster. It will use the node image from the kind team. The version of kind in use and thus, indirectly, the kubernetes version deployed can be controlled by pinning the kind-release resource to a specific version.

When the cluster is up and running, any commands configured in $KIND_TESTS will run. The environment is set up with $KUBECONFIG pointing to the client configuration for the deployed kubernetes cluster. Also, the most recent stable version of kubectl is downloaded and put onto the $PATH, so kubectl <something> will just work™️.
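
Since kubectl is on the $PATH and $KUBECONFIG is already set, KIND_TESTS can be plain kubectl commands. As an illustrative sketch (not from this repo), a test that blocks until all nodes are Ready might look like:

```yaml
params:
  KIND_TESTS: |
    # block until every node reports Ready, then print them
    kubectl wait --for=condition=Ready nodes --all --timeout=120s
    kubectl get nodes -o wide
```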

Build and run your own kubernetes ...

jobs:
- name: kind
  plan:
  - in_parallel:
    - get: k8s-git
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: k8s-git
  type: git
  source:
    uri: https://github.com/kubernetes/kubernetes
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    access_token: <some github token>
    pre_release: true
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c

If the task finds a task input named k8s-git, it treats that as a kubernetes source tree and tells kind to build a node image off of it. To run a specific kubernetes version, just use a git resource and pin it to a specific commit if need be.

In this case, kubectl is also compiled on the fly and therefore exactly matches the version of kubernetes deployed.

User configurations

  • KIND_TESTS
    ... this is the important piece: configure what YOU actually want to run against the cluster! E.g.:
    KIND_TESTS: |
      kubectl get nodes -o jsonpath="{..name}" | grep -q 'worker2' || {
        echo >&2 "Expected node 'worker2' not found"
        return 1
      }
    Note: The tests will run in bash with the options errexit & nounset
  • KIND_CONFIG
    ... is the config kind uses when creating the cluster. Optional; if not specified, the default config is used. It can be overridden with something like:
    KIND_CONFIG: |
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
      - role: worker
      - role: worker
    Note: the config you specify may be patched with settings that are essential for kind-on-c to work properly.
  • EXPORT_NODE_ROOTFS
    If this parameter is set, the node image built by kind is made available as an output in the format of a rootfs. This output can be consumed by other steps and used to run a task off of.
    See also the output exported-node-rootfs.
    Note: If the task is configured to use upstream kind's default node image, this feature is not available. A warning is printed and the output is kept empty.
  • EXPORT_NODE_IMAGE
    If this parameter is set, the node image built by kind is made available as an output in the format of an OCI image. This output can be consumed by other steps and e.g. pushed to a registry. This might come in handy if you want to run multiple tests against the same version of kubernetes in parallel: one job builds the node image, exports it as an OCI image, and pushes it to a registry; several parallel jobs then consume that image. With that approach you don't need to compile the same node image over and over.
    See also the output exported-node-image.
    Note: If the task is configured to use upstream kind's default node image, this feature is not available. A warning is printed and the output is kept empty.
  • INSTALL_METALLB
    If this parameter is set, metallb is installed onto the cluster. This allows users to deploy services of type LoadBalancer and get an EXTERNAL-IP for those. This external IP can be used to connect to an exposed service from the task container, and thus from code running in KIND_TESTS.
  • KIND_PRE_START
    ... if you want or need to run something just before the kind cluster is started
  • DOCKERD_OPTS
    ... if you need to add some configs when starting the docker daemon
  • DOCKERD_TIMEOUT
    ... how long do you want to wait for docker to come up?
  • KINDONC_DEBUG
    ... if you want to see all the ugly things that are happening to bring up docker and to run kind
  • KIND_LOGLEVEL
    ... make kind more or less verbose when it is doing its business.
    For kind <= 0.5.1 this value is used for the --loglevel option and needs to be one of panic, fatal, error, warning, info, debug.
    For kind > 0.5.1 this value is used for the --verbosity option and needs to be an integer.
  • KIND_CLUSTER_NAME
    ... in case you want to change kind's cluster name -- you actually should not need to do that ...
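
To tie a few of these together, a single task invocation can combine several of the parameters above. A sketch, with purely illustrative values:

```yaml
- task: run-kind
  privileged: true
  file: kind-on-c/kind.yaml
  params:
    KIND_LOGLEVEL: 3       # --verbosity, assuming kind > 0.5.1
    INSTALL_METALLB: 1
    KIND_CONFIG: |
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
      - role: worker
    KIND_TESTS: |
      kubectl get nodes -o wide
```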

Well known task inputs & outputs

Inputs

  • kind-on-c, mandatory
    The file tree of this repo. At least the task file and the entrypoint will (or should) be used from this input, potentially more in the future. This input is typically provided by a git resource pulling this very git repo.
  • kind-release, mandatory
    Must provide the kind binary, named kind-linux-amd64. This should either be backed by a github-release resource or by an earlier task that compiles kind from source.
  • k8s-git, optional
    Must provide a git source tree of kubernetes. When configured, the node image will be built off of the checked-out revision of kubernetes. This is typically a git resource, pointing to (a fork of) k/k.
  • node-image, optional
    Must provide an OCI image tarball image.tar that will be used as the node image. This can be an image generated via EXPORT_NODE_IMAGE/exported-node-image or any other OCI image that can be used as a node image for kind.
  • inputs, optional
    If this task depends on other resources, aggregate them into the inputs input. More about that in Aggregated inputs & outputs.

Outputs

The task generates all the outputs in this list, however depending on the task's configuration it may leave certain (or even all) outputs empty.

  • exported-node-image
    If the task is configured to output the node image as an OCI image, this output will be populated. A following step can consume this output; e.g. the registry-image resource could be used to push the node image into a registry.
    jobs:
    - name: kind
      plan:
      - task: run-kind
        privileged: true
        file: kind-on-c/kind.yaml
        params:
          EXPORT_NODE_IMAGE: 1
      - put: image-repo  # this is a registry-image resource
        params:
          image: exported-node-image/image.tar
  • exported-node-rootfs
    If the task is configured to output the node image as a rootfs, this output will be populated with a rootfs and a minimal metadata.json. With that it is possible to use the node image as a task image for a following concourse task.
    jobs:
    - name: kind
      plan:
      - task: run-kind
        privileged: true
        file: kind-on-c/kind.yaml
        params:
          EXPORT_NODE_ROOTFS: 1
      - task: do something with the node rootfs
        image: exported-node-rootfs
        config:
          platform: linux
          run:
            path: bash
            args:
            - -xeuc
            - |
              echo "Concourse is now running the image we built with kind"
  • outputs
    If users want to create further outputs, they can do so by aggregating them in the outputs output. More about that in Aggregated inputs & outputs.

Bring your own task image

Chances are good that you need some additional tools to run your KIND_TESTS. To do that, you can use a custom task image.

You can e.g. use the registry-image resource to pull in your custom image and then override kind-on-c's default image by providing its rootfs to the kind-on-c task via the image directive. This overrides the image that is configured in the task config.

When it comes to creating the image, you can base it on the default kind-on-c image and add your additional tools and things as another layer. For that you can follow the image's ci-latest tag, which is updated every time we push a new task image. Or you can extract all the information (repo and digest) about the default task image from the task config file.
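
If you go the extraction route, pulling the repo and digest out of the task config could be sketched as follows. This assumes, hypothetically, that the task file declares a standard Concourse image_resource; the repository and digest values below are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

cd "$(mktemp -d)"

# Stand-in for kind-on-c's task config; repository & digest are fabricated.
cat > kind.yaml <<'EOF'
platform: linux
image_resource:
  type: registry-image
  source:
    repository: example-registry/kind-on-c
    digest: sha256:abc123
EOF

# Naive extraction via awk; a YAML-aware tool like yq would be more robust.
repo=$(awk '/repository:/ {print $2; exit}' kind.yaml)
digest=$(awk '/digest:/ {print $2; exit}' kind.yaml)
echo "${repo}@${digest}"
```

The resulting `repo@digest` reference can then be used, e.g., as the FROM line of a custom image build.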

When you want to bring an entirely different image that is not based on kind-on-c's default task image, check the Dockerfile for the dependencies you need to provide.

For e.g. static binaries you could, instead of providing a custom image, have concourse pull down all the needed things and provide them to the task via the inputs input. And of course nothing stands in the way of using both a custom image and providing certain things via inputs.

jobs:
- name: kind
  plan:
  - in_parallel:
    - get: custom-kind-on-c-image
    - get: kind-on-c
    - get: kind-release
    - get: other-custom-things
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    image: custom-kind-on-c-image # This overrides the image from the task config file
    input_mapping:
      inputs: other-custom-things
    params:
      KIND_TESTS: # use custom things either from the image or from inputs

resources:
- name: custom-kind-on-c-image
  type: registry-image
  source: {repository: my-custom-image, tag: "1.13"}
- name: other-custom-things
  type: some-resource-type
  source: {...}

Aggregated inputs & outputs

Currently this task only allows for a fixed set of inputs and outputs (some inputs are optional, some outputs might be kept empty). This set cannot be changed by users of kind-on-c.

Still, users need to be able to provide one or more of their own resources as inputs to this task, or to generate one or more outputs from it.

A workaround for this is what I now call aggregated inputs & outputs:

Users can use as many input resources as they want; they just need to aggregate them into the single inputs input for this task. Likewise, if needed, they can place all their outputs into the single outputs output and have a later task split them apart again into separate, individual outputs.

After all, inputs and outputs are eventually "just directories" on disk and therefore relatively easy to manipulate if need be.
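
To illustrate that, here is a minimal, self-contained sketch of aggregating and splitting such directories with plain cp; the directory and file names are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

cd "$(mktemp -d)"

# Pretend these are two fetched concourse resources ...
mkdir -p input-res-1 input-res-2
echo one > input-res-1/a.txt
echo two > input-res-2/b.txt

# ... aggregate them into the single 'inputs' directory for the task ...
mkdir inputs
cp -a input-res-1 input-res-2 inputs/

# ... and later split an aggregated directory apart again.
mkdir separate
cp -a inputs/input-res-1/a.txt separate/
```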

Example usage of aggregated inputs and outputs

The following (contrived) example shows how this could be used:

  • the first task aggregates the resources input-res-1, input-res-2, and input-res-3 into the input inputs
  • the kind-on-c task consumes the input inputs
  • the kind-on-c task uses the output outputs and places artifacts, metrics, and logs in there
  • the last task consumes the outputs output of the kind-on-c task, splits it up, and makes the pieces available as the separate outputs artifacts, metrics, and logs
  • each of the put resources only cares about one of the individual outputs

plan:
- in_parallel:
  - get: kind-on-c
  - get: input-res-1
    trigger: true
  - get: input-res-2
    trigger: true
  - get: input-res-3
- task: aggregate inputs for kind-on-c
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: { repository: bash }
    inputs:
    - name: input-res-1
    - name: input-res-2
    - name: input-res-3
    outputs:
    - name: inputs # this will be the aggregated inputs for kind-on-c
    run:
      path: bash
      args:
      - -xeuc
      - |
        cp -a input-res-1 inputs/
        cp -a input-res-2 inputs/
        cp -a input-res-3 inputs/
- task: kind-on-c
  privileged: true
  file: kind-on-c/kind.yaml
  params:
    KIND_TESTS: |
      # your actual tests go here!
      cp -a some/artifacts outputs
      cp -a some/metrics   outputs
      cp -a some/logs      outputs
- task: split aggregated outputs of kind-on-c
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: { repository: bash }
    inputs:
    - name: outputs
    outputs:
    - name: artifacts
    - name: metrics
    - name: logs
    run:
      path: bash
      args:
      - -xeuc
      - |
        cp -a outputs/artifacts/* artifacts
        cp -a outputs/metrics/*   metrics
        cp -a outputs/logs/*      logs
- in_parallel:
  - put: artifactory          # using only the artifacts output of the last task
    params:
      folder: artifacts
  - put: lftp-log-dump        # using only the log output of the previous task
    params:
      files: "logs/*.log"
  - put: swift-metrics-store  # using only the metrics output of the previous task
    params:
      from: "metrics/(.*)"

But why?

kind-on-c made the design decision to use a task file. This is mostly to have an easy way to specify the specific version of the kind-on-c image, have the pipeline automatically update it, and track which commit has been tested with which version of the image.

concourse however does not allow mixing settings (e.g. inputs) from a task file with inline task configuration.

Therefore, for now at least, kind-on-c opts for specifying the fixed set of allowed/available inputs/outputs, which may mean a bit of overhead for its users.

kind-on-c's People

Contributors

hoegaarden


kind-on-c's Issues

error when creating cluster

Versions

→ concourse 7.1.0 on k8s with containerd runtime

using example job given in readme

https://github.com/pivotal-k8s/kind-on-c#run-whatever-kind-thinks-is-cool-

jobs:
- name: kind
  plan:
  - in_parallel:
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    # access_token: <some github token>
    pre_release: false
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c

error

selected worker: concourse-worker-0
selected worker: concourse-worker-0
downloading asset: kind-linux-amd64
Cloning into '/tmp/build/get'...
4b885d4 Update container image to "sha256:114fb10216e1aba882919220ba73b745b9315c47bfcdb14e04206c894634c868"
initializing
selected worker: concourse-worker-0
fetching gcr.io/cf-london-servces-k8s/kind-on-c/kind-on-c@sha256:114fb10216e1aba882919220ba73b745b9315c47bfcdb14e04206c894634c868
83ee3a23efb7 [========================================] 27.2MiB/27.2MiB
db98fc6f11f0 [==============================================] 843b/843b
f611acd52c6c [==============================================] 162b/162b
9ebb33a953e9 [======================================] 161.4MiB/161.4MiB
6d8477dbb6ef [==========================================] 1.2MiB/1.2MiB
selected worker: concourse-worker-1
running kind-on-c/entrypoint.sh
[INF] kind-on-c: 4b885d4
[INF] node-prepper starting in the background
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.2) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
⢀⡱ Starting control-plane 🕹️ [INF] node-prepper was successful on all nodes, shutting down
 ✗ Starting control-plane 🕹️ 
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0513 05:29:29.565622     197 initconfiguration.go:201] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.20.2
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0513 05:29:29.586403     197 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0513 05:29:29.786373     197 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.4 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0513 05:29:30.465372     197 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0513 05:29:31.203200     197 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0513 05:29:31.458761     197 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0513 05:29:31.640074     197 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0513 05:29:33.593829     197 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0513 05:29:33.959930     197 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0513 05:29:34.589429     197 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0513 05:29:34.832498     197 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
I0513 05:29:35.524377     197 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0513 05:29:35.678510     197 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0513 05:29:35.800712     197 manifests.go:96] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0513 05:29:35.801776     197 certs.go:474] validating certificate period for CA certificate
I0513 05:29:35.801875     197 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0513 05:29:35.801881     197 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0513 05:29:35.801885     197 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0513 05:29:35.801890     197 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0513 05:29:35.801905     197 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0513 05:29:35.810341     197 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0513 05:29:35.810688     197 manifests.go:96] [control-plane] getting StaticPodSpecs
I0513 05:29:35.811081     197 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0513 05:29:35.811123     197 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0513 05:29:35.811310     197 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0513 05:29:35.811399     197 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0513 05:29:35.811432     197 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0513 05:29:35.811616     197 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0513 05:29:35.811709     197 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0513 05:29:35.812724     197 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0513 05:29:35.812911     197 manifests.go:96] [control-plane] getting StaticPodSpecs
I0513 05:29:35.813350     197 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0513 05:29:35.813928     197 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0513 05:29:35.814833     197 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0513 05:29:35.814970     197 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I0513 05:29:35.817080     197 loader.go:379] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0513 05:29:35.824992     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[... the same healthz GET, polled roughly every 0.5s, repeated until 05:30:15; elided for brevity ...]
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0513 05:30:15.825766     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:16.326011     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:16.826159     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:17.325872     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:17.825930     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:18.325880     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:18.825926     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:19.325993     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:19.825975     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:20.326022     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:20.826015     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0513 05:30:21.326322     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:21.825741     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:22.325895     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:22.826067     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:23.325970     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:23.825761     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:24.326161     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:24.826271     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:25.326025     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:25.825896     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:26.326228     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:26.826080     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:27.326059     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:27.825848     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:28.325945     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:28.825867     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:29.326002     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:29.825956     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:30.326137     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:30.826091     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0513 05:30:31.326031     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:31.826155     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:32.326023     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:32.825942     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:33.326207     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:33.826008     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:34.326107     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:34.825990     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:35.326016     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:35.825958     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:36.326069     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:36.826060     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:37.326007     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:37.826055     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:38.325917     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:38.826068     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:39.325995     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:39.826027     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:40.325860     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:40.825829     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:41.325971     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:41.826023     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:42.325944     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:42.825972     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:43.326173     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:43.826209     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:44.325901     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:44.826001     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:45.326107     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:45.826126     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:46.326009     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:46.826125     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:47.325847     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:47.826097     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:48.325960     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:48.825950     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:49.326419     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:49.825964     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:50.325960     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:50.826299     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I0513 05:30:51.326029     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:51.826155     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:52.326149     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:52.825962     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:53.326263     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:53.825915     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:54.326252     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:54.826119     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:55.326045     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:55.825926     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:56.325984     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:56.826132     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:57.326097     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:57.826656     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0513 05:30:58.325864     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:58.825939     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:59.325771     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:30:59.825929     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:00.326127     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:00.825882     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:01.326046     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:01.825956     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:02.326002     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:02.826064     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:03.326025     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:03.826070     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:04.326145     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:04.825942     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:05.325969     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:05.826260     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:06.326201     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:06.826168     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:07.326467     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:07.826095     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:08.325873     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:08.825965     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:09.326050     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:09.826175     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:10.326115     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:10.826156     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:11.326152     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:11.826275     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:12.326073     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:12.825989     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:13.325911     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:13.825993     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:14.325896     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:14.825918     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:15.326022     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:15.826204     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:16.325967     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:16.826195     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:17.325947     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:17.826245     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:18.325912     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:18.825930     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:19.326684     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0513 05:31:19.825947     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:20.326020     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:20.826018     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:21.325971     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:21.825870     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:22.326014     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:22.826008     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:23.326022     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:23.826104     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:24.326060     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:24.826049     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:25.325950     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:25.825877     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:26.326011     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:26.826114     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:27.325982     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:27.825983     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:28.325904     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:28.825874     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:29.325953     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:29.825902     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:30.325956     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0513 05:31:30.825847     197 round_trippers.go:445] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:151
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:151
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374
[ERR] Build failed (1), not stopping docker.
failed

kubelet isn't running.

I installed Concourse and then used the example job from the README, but it seems the kubelet is not running. This is the pipeline I used:

```yaml
jobs:

- name: kind
  plan:
  - in_parallel:
    - get: k8s-git
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: k8s-git
  type: git
  source:
    uri: https://github.com/kubernetes/kubernetes
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    access_token: <some github token>
    pre_release: true
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c
```

The logs are:

```
[INF] Setting up Docker environment...
[INF] Starting Docker...
[INF] Waiting 60 seconds for Docker to be available...
[INF] Docker available after 2 seconds.
[INF] /tmp/build/dd1bc04d/bin/kind: v0.5.0
[INF] will use kind upstream's node image
[INF] /tmp/build/dd1bc04d/bin/kubectl: Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
[INF] kmsg-linker starting in the background
DEBU[18:19:45] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}] 
Creating cluster "kind" ...
DEBU[18:19:45] Running: /bin/docker [docker inspect --type=image kindest/node:v1.15.3] 
INFO[18:19:45] Pulling image: kindest/node:v1.15.3 ...      
DEBU[18:19:45] Running: /bin/docker [docker pull kindest/node:v1.15.3] 
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker2 --name kind-worker2 --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker --name kind-worker --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --expose 43587 --publish=127.0.0.1:43587:6443/TCP kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
 ✓ Preparing nodes 📦📦📦 
DEBU[18:21:16] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-control-plane] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker2] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker] 
DEBU[18:21:16] Running: /bin/docker [docker exec --privileged kind-control-plane cat /kind/version] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2] 
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.4"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
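Each generated config above is one multi-document YAML stream — ClusterConfiguration, InitConfiguration, JoinConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by `---`. A stdlib-only sketch (no YAML parser) that lists the `kind:` of every document in such a stream:

```python
def doc_kinds(multi_doc_yaml: str) -> list[str]:
    """Return the `kind:` of each document in a multi-document YAML stream."""
    kinds = []
    for doc in multi_doc_yaml.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

# A trimmed-down version of the stream kind generates per node:
config = """\
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
print(doc_kinds(config))
```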
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.2"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
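The `abcdef.0123456789abcdef` bootstrap token used throughout follows kubeadm's fixed token format: a 6-character ID and a 16-character secret, both lowercase alphanumeric, joined by a dot. A quick format check, sketched with the hypothetical name `is_bootstrap_token`:

```python
import re

# kubeadm bootstrap tokens: "<6 chars>.<16 chars>", characters from [a-z0-9].
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def is_bootstrap_token(token: str) -> bool:
    return TOKEN_RE.fullmatch(token) is not None

print(is_bootstrap_token("abcdef.0123456789abcdef"))  # the token from the log
print(is_bootstrap_token("ABCDEF.short"))             # wrong case and length
```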
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.3"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config
controlPlane:
  localAPIEndpoint:
    advertiseAddress: "172.17.0.3"
    bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
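The three generated configs differ only in `advertiseAddress` / `node-ip` (172.17.0.2, .3 and .4 here): one config is rendered per node with that node's container IP substituted in. A hypothetical sketch of that per-node substitution, reduced to the fields that actually vary:

```python
# Trimmed InitConfiguration template; only node_ip varies between nodes.
INIT_TEMPLATE = """\
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "{node_ip}"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "{node_ip}"
"""

def render_node_config(node_ip: str) -> str:
    """Render one per-node config, as in the three streams above."""
    return INIT_TEMPLATE.format(node_ip=node_ip)

for ip in ("172.17.0.2", "172.17.0.3", "172.17.0.4"):
    print(render_node_config(ip))
```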
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind] 
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind] 
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker mkdir -p /kind] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf] 
[INF] kmsg-linker successful, shutting down
 ✓ Creating kubeadm config 📜 
DEBU[18:21:27] Running: /bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6] 
DEBU[18:23:36] I1014 18:21:28.578980      82 initconfiguration.go:189] loading configuration from "/kind/kubeadm.conf"
I1014 18:21:28.587691      82 feature_gate.go:216] feature gates: &{map[]}
	[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
I1014 18:21:28.588302      82 checks.go:581] validating Kubernetes and kubeadm version
I1014 18:21:28.588342      82 checks.go:172] validating if the firewall is enabled and active
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
I1014 18:21:28.981511      82 checks.go:209] validating availability of port 6443
I1014 18:21:28.982062      82 checks.go:209] validating availability of port 10251
I1014 18:21:28.982256      82 checks.go:209] validating availability of port 10252
I1014 18:21:28.982557      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1014 18:21:28.982867      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1014 18:21:28.983087      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1014 18:21:28.983181      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1014 18:21:28.983325      82 checks.go:439] validating if the connectivity type is via proxy or direct
I1014 18:21:28.983484      82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.983921      82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.984031      82 checks.go:105] validating the container runtime
I1014 18:21:30.543266      82 checks.go:382] validating the presence of executable crictl
I1014 18:21:30.543727      82 checks.go:341] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1014 18:21:30.656489      82 checks.go:341] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1014 18:21:30.784627      82 checks.go:653] validating whether swap is enabled or not
I1014 18:21:30.785055      82 checks.go:382] validating the presence of executable ip
I1014 18:21:30.825742      82 checks.go:382] validating the presence of executable iptables
I1014 18:21:30.830747      82 checks.go:382] validating the presence of executable mount
I1014 18:21:30.831014      82 checks.go:382] validating the presence of executable nsenter
I1014 18:21:30.959989      82 checks.go:382] validating the presence of executable ebtables
I1014 18:21:30.960602      82 checks.go:382] validating the presence of executable ethtool
I1014 18:21:30.960791      82 checks.go:382] validating the presence of executable socat
I1014 18:21:30.961037      82 checks.go:382] validating the presence of executable tc
I1014 18:21:30.961232      82 checks.go:382] validating the presence of executable touch
I1014 18:21:30.970741      82 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1051-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-1051-aws/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1051-aws\n", err: exit status 1
I1014 18:21:31.147227      82 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I1014 18:21:31.170004      82 checks.go:622] validating kubelet version
I1014 18:21:31.597558      82 checks.go:131] validating if the service is enabled and active
I1014 18:21:31.703504      82 checks.go:209] validating availability of port 10250
I1014 18:21:31.703815      82 checks.go:209] validating availability of port 2379
I1014 18:21:31.703992      82 checks.go:209] validating availability of port 2380
I1014 18:21:31.704178      82 checks.go:254] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1014 18:21:32.057312      82 checks.go:842] image exists: k8s.gcr.io/kube-apiserver:v1.15.3
I1014 18:21:32.071703      82 checks.go:842] image exists: k8s.gcr.io/kube-controller-manager:v1.15.3
I1014 18:21:32.084891      82 checks.go:842] image exists: k8s.gcr.io/kube-scheduler:v1.15.3
I1014 18:21:32.091814      82 checks.go:842] image exists: k8s.gcr.io/kube-proxy:v1.15.3
I1014 18:21:32.100361      82 checks.go:842] image exists: k8s.gcr.io/pause:3.1
I1014 18:21:32.106837      82 checks.go:842] image exists: k8s.gcr.io/etcd:3.3.10
I1014 18:21:32.114283      82 checks.go:842] image exists: k8s.gcr.io/coredns:1.3.1
I1014 18:21:32.114593      82 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1014 18:21:32.151542      82 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1014 18:21:32.817114      82 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
I1014 18:21:36.402593      82 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1014 18:21:37.468092      82 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1014 18:21:38.247122      82 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1014 18:21:38.446304      82 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1014 18:21:38.799561      82 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1014 18:21:39.030137      82 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1014 18:21:39.102549      82 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1014 18:21:39.263012      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.350543      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1014 18:21:39.356141      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.379764      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1014 18:21:39.389670      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.390652      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1014 18:21:39.405349      82 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1014 18:21:39.405622      82 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
I1014 18:21:39.406957      82 loader.go:359] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1014 18:21:39.418315      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[... identical healthz probes repeated every ~500ms for the next 40s ...]
I1014 18:22:18.924062      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I1014 18:22:19.474587      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 24 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:19.928234      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[... identical healthz probes repeated every ~500ms ...]
I1014 18:22:24.424099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:24.924085      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:25.429034      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:25.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:26.424066      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:26.929100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:27.424066      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:27.924093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:28.425166      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:28.924084      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:29.424101      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:29.924069      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:30.424078      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:30.924053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:31.424133      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:31.924214      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:32.424087      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:32.924092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:33.424578      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:33.924123      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:34.424114      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:34.924870      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:35.424009      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:35.924089      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:36.424899      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:36.924112      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:37.424071      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:37.924018      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:38.424094      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:38.924068      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:39.424052      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:39.924064      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:40.424096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:40.924139      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:41.424074      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:41.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:42.426563      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:42.924231      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:43.424119      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:43.925037      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:44.424099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:44.924115      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:45.424039      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:45.924188      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:46.424103      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:46.924064      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:47.424120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:47.924043      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:48.424061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:48.924092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:49.424124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:49.924013      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:50.424048      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:50.924097      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:51.424014      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:51.924118      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:52.425079      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:52.924061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:53.424251      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:53.924106      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:54.424121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:54.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:55.424029      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:55.924109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:56.424110      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:56.925229      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:57.425302      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:57.924109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:58.426970      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:58.924046      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:59.424100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:59.924009      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:00.424081      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:00.924127      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:01.424042      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:01.924124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:02.424137      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:02.930307      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:03.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:03.924145      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:04.424049      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:04.924101      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:05.424126      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:05.924108      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:06.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:06.924051      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:07.424091      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:07.924068      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:08.424067      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:08.924083      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:09.424053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:09.924093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:10.424132      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:10.924038      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:11.424141      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:11.924061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:12.445096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:12.924152      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:13.424183      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:13.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:14.424082      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:14.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:15.426411      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 2 milliseconds
I1014 18:23:15.925376      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:16.424088      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:16.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:17.424093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:17.924050      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:18.424049      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:18.932107      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:19.436940      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:19.924270      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:20.424604      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:20.925099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:21.424134      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:21.924120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:22.424047      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:22.924071      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:23.424144      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:23.924033      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:24.424124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:24.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:25.428832      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:25.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:26.424110      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:26.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:27.424096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:27.924133      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:28.424652      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:28.924040      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:29.424117      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:29.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:30.424092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:30.924104      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:31.424121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:31.924155      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:32.424078      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:32.924138      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:33.424119      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:33.924118      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:34.424120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster 
 ✗ Starting control-plane 🕹️ 
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
[ERR] Build failed (1), not stopping docker.

Can you help me with this issue?

kind-on-c should not use the deprecated `--loglevel` flag for kind

As can be seen e.g. in the "kind master with k8s-master" test, the deprecation message tells us to move to -v & -q instead of --loglevel:

  [...]
[INF] /tmp/build/f541ec31/bin/kubectl: Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.0-alpha.0.1809+ff8716f4cf6180", GitCommit:"ff8716f4cf6180771615865428fe45f529add46b", GitTreeState:"clean", BuildDate:"2019-09-25T23:11:25Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
[INF] kmsg-linker starting in the background
WARNING: --loglevel is deprecated, please switch to -v and -q!
Creating cluster "kind" ...
 ✓ Ensuring node image (kind/local-image:v1.17.0-alpha.0-1809-gff8716f4cf) 🖼
 ✓ Preparing nodes 📦📦📦 
 ✓ Creating kubeadm config 📜 
  [...]
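Concretely, the invocation in the task script would change along these lines (a sketch only; the verbosity level is illustrative and the exact flags used in kind-on-c's entrypoint may differ):

```shell
# The same invocation before and after the deprecation: kind's -v takes a
# numeric verbosity level, and -q silences output entirely.
deprecated="kind create cluster --loglevel debug"
preferred="kind create cluster -v 3"
echo "before: ${deprecated}"
echo "after:  ${preferred}"
```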

kind-on-c pipeline should not use deprecated `kubectl run`

E.g. in the MetalLB test case we use a deprecated `kubectl run ...` command; this should be changed.

  [...]
[INF] Running tests from "$KIND_TESTS"
+ PORT=8080
+ NAME=echo
+ IMG=inanimate/echo-server
+ kubectl run echo --image=inanimate/echo-server --replicas=2 --port=8080
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/echo created
+ kubectl expose deployment echo --type=LoadBalancer
  [...]
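One way to drop the deprecated generator is to replace the `kubectl run` call with `kubectl create deployment`, or with an explicit manifest. A minimal sketch of the latter, mirroring the names from the log above (not taken from the actual pipeline):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: inanimate/echo-server
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, the subsequent `kubectl expose deployment echo --type=LoadBalancer` step should keep working unchanged.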

doc: Update/create docs about tests, promotion, ...

As a user of kind-on-c
I want to understand how the thing is tested and how changes are promoted
So that I understand how to consume it and can assess if I can trust it

There are some docs in ci/README.md about how changes make it in, get tested, and are promoted. However, these are outdated (they still talk about alpine & tags).
The Dockerfile also still has some references to alpine.
We might also think about pulling that content into the main README.md, or at least linking to it from there.

ref: #10

Releases would be nice

Hi folks,

We're using kind-on-c to test a Kubernetes-based Concourse training environment. We're currently tracking master of this repo, and it'd be nice to be able to track tested releases instead.

Any chance of changing the pipeline to tag releases, so we can track those instead? Tagging images with versions that correspond to the source code tags would also be lovely.

kind-on-c should use the images digest in the task file

Right now the task file references the task image by its tag. This tag is a strangely computed thing: it exists because we couldn't (or didn't know how to) pin a container image via its digest, so we created our own digest-like tag and used that instead.

We now found how to pin an image by its digest 🎉:

  [...]
image_resource:
  type: registry-image
  source:
    repository: gcr.io/cf-london-servces-k8s/kind-on-c/kind-on-c
  version:
    digest: sha256:7f5bce3de4ca7b749b0d27f510135b977cd374fb3a0a9f4381cefe466451a610
  [...]  

Thus we want to do the following:

Bonus:

  • If easily possible, state in the commit message what actually has changed (update of the base image, update of docker image, ...)
