
kubelet's Introduction

kubelet

Implements KEP 14 - Moving ComponentConfig API types to staging repos

This repo provides external, versioned ComponentConfig API types for configuring the kubelet. These external types can easily be vendored and used by any third-party tool writing Kubernetes ComponentConfig objects.
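
For example, here is a minimal sketch (assuming sigs.k8s.io/yaml for serialization; not taken from this repo's docs) of how a third-party tool might construct and emit a KubeletConfiguration using the versioned types from this repo:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Build a versioned KubeletConfiguration; unset fields are omitted from
	// the output and defaulted by the kubelet when it loads the file.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		ClusterDomain: "cluster.local",
		MaxPods:       110,
	}

	// Serialize to YAML, e.g. for the kubelet's --config file.
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}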

Compatibility

HEAD of this repo will match HEAD of k8s.io/apiserver, k8s.io/apimachinery, and k8s.io/client-go.

Where does it come from?

This repo is synced from https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kubelet. Code changes are made in that location, merged into k8s.io/kubernetes and later synced here by a bot.

kubelet's Issues

kubelet panics on Alpine Linux (no systemd)

Trying to initialise a cluster with kubeadm init results in the following log messages in kubelet.log:

E0516 21:38:17.999084    8210 runtime.go:76] Observed a panic: systemd cgroup manager not available
goroutine 324 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x278ad00, 0x2c50f48)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x87
panic(0x278ad00, 0x2c50f48)
        /usr/lib/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/pkg/kubelet/cm.(*libcontainerAdapter).newManager(0xc0003cba70, 0xc0005dea10, 0x0, 0x1, 0x6, 0x2276b4a, 0x6)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:154 +0x367
k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Create(0xc0003cbb40, 0xc000ba97c0, 0x0, 0x0)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:633 +0x1c6
k8s.io/kubernetes/pkg/kubelet/cm.(*containerManagerImpl).createNodeAllocatableCgroups(0xc0005fb180, 0x6, 0x0)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/node_container_manager_linux.go:56 +0x151
k8s.io/kubernetes/pkg/kubelet/cm.(*containerManagerImpl).setupNode(0xc0005fb180, 0xc00031bb60, 0x0, 0x11)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:483 +0x79d
k8s.io/kubernetes/pkg/kubelet/cm.(*containerManagerImpl).Start(0xc0005fb180, 0xc00136c300, 0xc00031bb60, 0x2c86ce8, 0xc000022d20, 0x7fa53cf98f90, 0xc00058b8c0, 0x2d0baf8, 0xc000ba8ec0, 0xc000d2bc88, ...)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:661 +0x14b
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc000562000)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1389 +0x1e2
sync.(*Once).doSlow(0xc000562890, 0xc000d2bdd8)
        /usr/lib/go/src/sync/once.go:68 +0xee
sync.(*Once).Do(...)
        /usr/lib/go/src/sync/once.go:59
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc000562000)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2233 +0x5b7
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008d84a0)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x62
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008d84a0, 0x2c80e58, 0xc000900480, 0x6d95bde9baf35b01, 0xc000058120)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008d84a0, 0x12a05f200, 0x0, 0x1, 0xc000058120)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0008d84a0, 0x12a05f200, 0xc000058120)
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4f
created by k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
        /home/buildozer/aports/testing/kubernetes/src/kubernetes-1.21.0/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1446 +0x16b
I0516 21:38:18.044803    8210 kubelet.go:461] "Kubelet nodes not sync"

Neither etcd nor any other static pods are created. The full logfile is attached.

kubelet.log

Command for bootstrapping the cluster is:

kubeadm init --skip-phases=addon/kube-proxy --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64

Installed versions:

[21:51] server48.place7:~# apk list -I | grep kube
kubeadm-1.21.0-r1 x86_64 {kubernetes} (Apache-2.0) [installed]
kubectl-1.21.0-r1 x86_64 {kubernetes} (Apache-2.0) [installed]
kubelet-1.21.0-r1 x86_64 {kubernetes} (Apache-2.0) [installed]
kubelet-openrc-1.21.0-r1 x86_64 {kubernetes} (Apache-2.0) [installed]
[21:51] server48.place7:~# 
[21:51] server48.place7:~# apk list -I | grep cri-o
cri-o-1.20.0-r3 x86_64 {cri-o} (Apache-2.0) [installed]
cri-o-openrc-1.20.0-r3 x86_64 {cri-o} (Apache-2.0) [installed]
[21:51] server48.place7:~# 
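
The panic above comes from the kubelet trying to use the systemd cgroup manager on a host that does not run systemd. A hedged guess at a workaround, not confirmed in this report: switch the kubelet (and the matching CRI-O setting) to the cgroupfs cgroup driver. Sketched with the versioned config types from this repo:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// On a non-systemd distro such as Alpine, the systemd cgroup manager is
	// unavailable, so the kubelet's cgroupDriver would have to be "cgroupfs"
	// and must match the cgroup manager configured for the CRI runtime.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver: "cgroupfs",
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}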

kubelet parameter (eviction-max-pod-grace-period) does not work as expected per the official help text

Hi, I'd like to report a confusing point about kubelet soft eviction.

As kubelet --help shows:
--eviction-max-pod-grace-period int32 Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value.

I assumed that if this parameter is set to a negative value, such as -1, soft eviction would use the pod-specified value (TerminationGracePeriodSeconds).

But when I try to evict a pod by creating node pressure (such as memory pressure), I see that -1 is always sent to the CRI runtime.
The pod's container is then stopped immediately with SIGKILL, exit code 137.

In short, this parameter:
(1) set to a positive number: works as expected;
(2) not set, so kubelet uses its default of 0: works as expected;
(3) set to a negative number: does not work as the kubelet help text describes (see the sketch below).
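
To make the expectation concrete, here is a hypothetical sketch (not the kubelet source) of the semantics the help text implies, where a negative flag value defers to the pod's own grace period and a non-negative value acts as a cap:

package main

import "fmt"

// expectedSoftEvictionGracePeriod is a hypothetical illustration of the
// documented semantics: a negative --eviction-max-pod-grace-period should
// defer to the pod's terminationGracePeriodSeconds, otherwise the flag
// acts as an upper bound on the grace period.
func expectedSoftEvictionGracePeriod(evictionMaxPodGracePeriod int32, podGracePeriodSeconds int64) int64 {
	if evictionMaxPodGracePeriod < 0 {
		return podGracePeriodSeconds
	}
	if int64(evictionMaxPodGracePeriod) < podGracePeriodSeconds {
		return int64(evictionMaxPodGracePeriod)
	}
	return podGracePeriodSeconds
}

func main() {
	// What this report observes instead: with the flag at -1, the value -1
	// itself reaches the CRI runtime and the container is SIGKILLed (137).
	fmt.Println(expectedSoftEvictionGracePeriod(-1, 30)) // expected 30, i.e. the pod's value
	fmt.Println(expectedSoftEvictionGracePeriod(10, 30)) // expected 10, i.e. capped by the flag
}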

Thanks for your response. Could this be a bug?

Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found

[root@master ~]# journalctl -xefu kubelet | grep Failed
Aug 02 03:25:06 master kubelet[22572]: W0802 03:25:06.874384   22572 manager.go:597] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
^C
[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-08-02 03:24:58 EDT; 2min 18s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 22572 (kubelet)
    Tasks: 17
   Memory: 49.0M
   CGroup: /system.slice/kubelet.service
           └─22572 /usr/bin/kubelet --cgroup-driver=systemd

Aug 02 03:25:37 master kubelet[22572]: I0802 03:25:37.055205   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:25:47 master kubelet[22572]: I0802 03:25:47.117451   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:25:57 master kubelet[22572]: I0802 03:25:57.178810   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:07 master kubelet[22572]: I0802 03:26:07.238330   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:17 master kubelet[22572]: I0802 03:26:17.296485   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:27 master kubelet[22572]: I0802 03:26:27.353703   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:37 master kubelet[22572]: I0802 03:26:37.406603   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:47 master kubelet[22572]: I0802 03:26:47.479777   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:26:57 master kubelet[22572]: I0802 03:26:57.534994   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 02 03:27:07 master kubelet[22572]: I0802 03:27:07.592462   22572 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
[root@master ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@master ~]# 

kubelet fails to pull "k8s.gcr.io/pause:3.1" when it already exists in registry

Bug:
I ran "kubeadm init" on the master, and "kubeadm join" from the node succeeds, but the node carries the status "NotReady". Peeling back the layers of the onion, I realized the issue comes from a calico pod in the kube-system namespace. The failure message is:
" Warning FailedCreatePodSandBox 4m (x76 over 64m) kubelet, str-s6000-acs-13 Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"

But this image is already in the local registry:

$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.17.2             cba2a99699bd        3 weeks ago         116MB
k8s.gcr.io/kube-apiserver            v1.17.2             41ef50a5f06a        3 weeks ago         171MB
k8s.gcr.io/kube-controller-manager   v1.17.2             da5fd66c4068        3 weeks ago         161MB
k8s.gcr.io/kube-scheduler            v1.17.2             f52d4c527ef2        3 weeks ago         94.4MB
calico/node                          v3.10.3             6c2199647d1c        4 weeks ago         192MB
calico/cni                           v3.10.3             34ffdb0b77aa        4 weeks ago         163MB
calico/kube-controllers              v3.10.3             ac5e9765205b        4 weeks ago         50.6MB
calico/pod2daemon-flexvol            v3.10.3             35001c355868        5 weeks ago         9.78MB
k8s.gcr.io/coredns                   1.6.5               70f311871ae1        3 months ago        41.6MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        3 months ago        288MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB

The failure to pull is possibly because my VM has two network interfaces and I am running the API server on a non-default interface.

Any tips to move ahead will be highly helpful.

complete spew:

$ kubectl describe pod calico-node-xgkp6  --namespace kube-system 
Name:                 calico-node-xgkp6
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 str-s6000-acs-13/10.3.147.253
Start Time:           Sun, 09 Feb 2020 18:32:15 +0000
Labels:               controller-revision-hash=5969d7cb65
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          scheduler.alpha.kubernetes.io/critical-pod: 
Status:               Pending
IP:                   10.3.147.253
IPs:
  IP:           10.3.147.253
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  
    Image:         calico/cni:v3.10.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-fqg7r (ro)
  install-cni:
    Container ID:  
    Image:         calico/cni:v3.10.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /install-cni.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-fqg7r (ro)
  flexvol-driver:
    Container ID:   
    Image:          calico/pod2daemon-flexvol:v3.10.3
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-fqg7r (ro)
Containers:
  calico-node:
    Container ID:   
    Image:          calico/node:v3.10.3
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_IPV4POOL_CIDR:               192.168.0.0/16
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_LOGSEVERITYSCREEN:            info
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-fqg7r (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  calico-node-token-fqg7r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-fqg7r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     :NoSchedule
                 :NoExecute
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason                  Age                From                       Message
  ----     ------                  ----               ----                       -------
  Warning  FailedCreatePodSandBox  4m (x76 over 64m)  kubelet, str-s6000-acs-13  Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kubelet panics on creation of inline ephemeral volume with FSType specified

The issue has been seen in e2e testing in pmem-csi, where a new test was
recently introduced that creates a pod with an inline ephemeral volume.

If the related VolumeSource has the filesystem type set to empty (""), or FSType is omitted in the spec,
then pod creation works OK. In that case, pmem-csi chooses its default fstype, which is ext4.

Example of a working definition of VolumeSource:

   fstype := ""
   vsource := v1.VolumeSource{
           CSI: &v1.CSIVolumeSource{
                   Driver: "pmem-csi.intel.com",
                   FSType: &fstype,
                   VolumeAttributes: map[string]string{
                           "size": "110Mi",
                   },
           },
   }

If the filesystem type in the VolumeSource is set to ext4 or xfs,
then kubelet fails to create the volume and the pod. A panic trace is written to the node's kubelet log:

pmem-csi-govm-worker2 kubelet[905]: E0526 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/a5c44727-b679-412d-a3ef-3f6bbc7af122-vol1 podName:a5c44727-b679-412d-a3ef-3f6bbc7af122 nodeName:}" failed. No retries permitted until 2021-05-26 16:53:09.613416932 +0000 UTC m=+14773.383054841 (durationBeforeRetry 500ms). Error: "recovered from panic \"runtime error: invalid memory address or nil pointer dereference\". (err=<nil>) Call stack:
goroutine 95902 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc0014c0d10)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xba
panic(0x40021e0, 0x6fd2280)
/usr/local/go/src/runtime/panic.go:975 +0x47a
k8s.io/kubernetes/pkg/volume/csi.(*csiMountMgr).supportsFSGroup(0xc0016b0ff0, 0xc0009e6724, 0x3, 0xc0013960f0, 0xc00144b260, 0x17, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go:429 +0xb5
k8s.io/kubernetes/pkg/volume/csi.(*csiMountMgr).SetUpAt(0xc0016b0ff0, 0xc001077c80, 0x5f, 0xc001396470, 0xc0013960f0, 0x0, 0x0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go:269 +0x105f
k8s.io/kubernetes/pkg/volume/csi.(*csiMountMgr).SetUp(0xc0016b0ff0, 0xc001396470, 0xc0013960f0, 0x0, 0x0, 0x0, 0xc0009e6720)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go:106 +0x65
k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationGenerator).GenerateMountVolumeFunc.func1(0xc000bb7db0, 0xc0014c0d10, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go:643 +0x504
k8s.io/kubernetes/pkg/volume/util/types.(*GeneratedOperations).Run(0xc001dcdb00, 0xc000bb7f18, 0x4b8530, 0xc000bb7f58, 0x409548)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/util/types/types.go:54 +0xde
k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run.func1(0xc000e1e5c0, 0xc00082fb00, 0x3b, 0xc001a24780, 0x24, 0x0, 0x0, 0xc001dcdb00, 0x0, 0x0, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:183 +0xfa
created by k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:178 +0x479"

and the creation is retried a few times.

On the pmem-csi side: the first NodePublishVolume is received, the namespace
and volume are created, formatted with mkfs, and mounted, and NodePublishVolume returns OK.
The same request is then received a few times with an increasing time interval.

The e2e testing code that tries to create the pod fails with:

FAIL: running pod
Unexpected error:
    <*errors.errorString | 0xc0002f9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

The pmem-csi CI system where this was detected runs the same tests on k8s 1.19 and 1.20 systems.
The problem occurs when testing on the 1.20 system, but creation works on the 1.19 system.
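
For contrast, the variant that triggers the panic differs from the working definition above only in the fstype value (a sketch mirroring the e2e setup; per this report either ext4 or xfs reproduces it):

   fstype := "ext4" // or "xfs"
   vsource := v1.VolumeSource{
           CSI: &v1.CSIVolumeSource{
                   Driver: "pmem-csi.intel.com",
                   FSType: &fstype,
                   VolumeAttributes: map[string]string{
                           "size": "110Mi",
                   },
           },
   }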

kubelet is causing nodes to run out of inodes on filesystem

For each restart of the pod, kubelet seems to write a 0-byte file into /var/lib/kubelet/pods/<pod-id>/containers/<container-name>/:

# ls -lth | head
total 0
-rw-rw-rw-. 1 root root 0 Aug 26 00:54 6e8475d0
-rw-rw-rw-. 1 root root 0 Aug 26 00:46 8e462187
-rw-rw-rw-. 1 root root 0 Aug 26 00:40 bd4f31c5
-rw-rw-rw-. 1 root root 0 Aug 26 00:34 14ada185
-rw-rw-rw-. 1 root root 0 Aug 26 00:29 a287b35c
-rw-rw-rw-. 1 root root 0 Aug 26 00:23 6db0d4b3
-rw-rw-rw-. 1 root root 0 Aug 26 00:13 bf024d43
-rw-rw-rw-. 1 root root 0 Aug 26 00:03 f8043a73
-rw-rw-rw-. 1 root root 0 Aug 25 23:53 8c18a801

On k8s clusters where many pods are stuck in the CrashLoopBackOff state, kubelet will create millions of these 0-byte files, which can lead to nodes running out of inodes on the filesystem where /var/lib/kubelet is located. For example, in our clusters /var/lib/kubelet is on a small / filesystem with 512K inodes allocated, and we found instances where the kubelet behavior described above exhausted the inodes:

Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdb1        512K  502K   11K   98% /

Please review why kubelet is creating these files and is not cleaning them up!
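
As a quick way to gauge the impact on a node, here is a small hedged sketch (assuming the directory layout shown above) that counts the zero-byte files accumulated under /var/lib/kubelet/pods:

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods"
	count := 0
	// Walk .../pods/<pod-id>/containers/<container-name>/ and count 0-byte files.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return nil // skip unreadable entries and directories
		}
		if filepath.Base(filepath.Dir(filepath.Dir(path))) != "containers" {
			return nil // only look inside the per-container directories
		}
		if info, err := d.Info(); err == nil && info.Size() == 0 {
			count++
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("zero-byte container status files: %d\n", count)
}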

Kubelet cannot start up the Kubernetes apiserver and core components after the Docker v25 upgrade

Up to docker-ce v24.0.7, the Kubernetes component images run successfully under kubelet. After upgrading docker-ce to v25.0, kubelet cannot start the main k8s components and fails with an error stating the image ID and size are unknown:

kubelet[21094]: E0125 07:46:51.645034 21094 remote_image.go:94] ImageStatus failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
kubelet[21094]: E0125 07:46:51.645064 21094 kuberuntime_image.go:85] ImageStatus for image {"k8s.gcr.io/kube-proxy:v1.17.12"} failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
E0125 07:46:51.645109 21094 kuberuntime_manager.go:809] container start failed: ImageInspectError: Failed to inspect image "k8s.gcr.io/kube-proxy:v1.17.12": Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
Error syncing pod 3ed55839-d24d-482a-a2ea-5fa52af9a07a ("kube-proxy-r6kxl_kube-system(3ed55839-d24d-482a-a2ea-5fa52af9a07a)"), skipping: failed to "StartContainer" for "kube-proxy" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-proxy:v1.17.12\": Id or size of image \"k8s.gcr.io/kube-proxy:v1.17.12\" is not set"
The Kubernetes version here is an older one (1.17), but I have seen this happening with the latest k8s versions (1.29 as well), and in k3s for some images.

The change that is causing this problem is moby/moby#45469

Client: Docker Engine - Community
Version: 25.0.1

Server: Docker Engine - Community
Engine:
Version: 25.0.1

Kubelet version: 1.17.12 (old, I know, but I have seen reports saying it is a problem in newer versions too)

Downgrading Docker to 24.0.7 works, but it is not an option.

For various reasons, I cannot upgrade Kubernetes to a later version, but if there is a way to remediate this problem while staying on the latest version of Docker, that would help.

Add an option to delete the iptables rules created by kubelet

kubeadm reset does not delete the iptables/ipvs/ipset entries created by kubeadm init. Some of the entries are created by kube-proxy and can be cleaned up with kube-proxy --cleanup. Others are created by kubelet. See also kubernetes/kubeadm#2587.

  • add a method to kubelet to delete all iptables/ipvs/ipset entries created by it, so that kubeadm reset can revert the effect of kubeadm init.

kubelet won't start

[root@ip-10-247-239-16 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Tue 2019-09-10 16:17:59 EDT; 5s ago
Docs: https://kubernetes.io/docs/
Process: 8695 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 8695 (code=exited, status=255)

Sep 10 16:17:59 ip-10-247-239-16.awscloud.cms.local systemd[1]: Unit kubelet.service entered failed state.
Sep 10 16:17:59 ip-10-247-239-16.awscloud.cms.local systemd[1]: kubelet.service failed.

kubelet breaks CRI standards by running containers in 'k8s.io' containerd namespace.

Kubernetes pods run in the 'k8s.io' namespace of containerd. This 'runtime namespace' concept is containerd specific and not a part of CRI (at least I could not find it).

If kubelet truly followed the CRI standard, it would not run containers in such containerd-specific namespaces.

crictl does not support this 'runtime namespace' concept and therefore can only communicate with containerd in the default namespace; kubelet should do the same.

kubelet ignores `--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock` parameters

Calling sudo kubelet -v 1000 --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock produces this output.

$ kubelet --version
Kubernetes v1.22.3-rc.0.12+2c0e4a232a3c10
$ kubelet --help
...
      --container-runtime string                                 The container runtime to use. Possible values: 'docker', 'remote'. (default "docker")
      --container-runtime-endpoint string                        [Experimental] The endpoint of remote runtime service. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Note: When using docker as container runtime this specifies the dockershim socket location which kubelet itself creates.  Examples:'unix:///var/run/dockershim.sock', 'npipe:////./pipe/dockershim' (default "unix:///var/run/dockershim.sock")
...

My reading of the kubelet --help output is that passing --container-runtime=remote together with --container-runtime-endpoint to kubelet will convince it to connect to CRI-O. Instead, the log contains:

I1001 22:08:16.666036 319919 plugin.go:68] Docker not connected: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

kubernetes v1.20.3+: [kubelet-check] Initial timeout of 40s passed.

Hello, on minikube we updated our default Kubernetes to v1.20.4, and we noticed that bumping from v1.20.2 to v1.20.3 causes a one-minute slowdown on minikube; the logs show it is spending that time in "[kubelet-check] Initial timeout of 40s passed."


    ▪ Booting up control plane ...I0220 15:44:24.522119   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0220 15:44:24.522344   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0220 15:44:24.522442   82905 command_runner.go:123] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0220 15:44:24.522659   82905 command_runner.go:123] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0220 15:44:24.522894   82905 command_runner.go:123] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
| I0220 15:45:04.513403   82905 command_runner.go:123] > [kubelet-check] Initial timeout of 40s passed.
/ I0220 15:45:28.017602   82905 command_runner.go:123] > [apiclient] All control plane components are healthy after 63.506128 seconds
I0220 15:45:28.017909   82905 command_runner.go:123] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0220 15:45:28.031480   82905 command_runner.go:123] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
- I0220 15:45:28.557112   82905 command_runner.go:123] > [upload-certs] Skipping phase. Please see --upload-certs
I0220 15:45:28.557337   82905 command_runner.go:123] > [mark-control-plane] Marking the node minikube as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
| I0220 15:45:29.070246   82905 command_runner.go:123] > [bootstrap-token] Using token: ckmqq4.dov5m97q5ko44fpg
I0220 15:45:29.159638   82905 out.go:140]     ▪ Configuring RBAC rules ..

I am curious whether you are aware of any change since Kubernetes v1.20.3 that could have caused this?
v1.20.2 does NOT have this problem; more details at kubernetes/minikube#10545.

Kubelet v1.28.6: Unable to fetch container log stats err= failed to get fsstats

We have installed K8s 1.28.6 on a single-node Debian system, with Docker as the container manager. The version details are below.
I am seeing an error in the kubelet service logs. What is the reason for this error, and how can I resolve it?
Please let me know if providing any further details would help in understanding this issue.

$ sudo service kubelet status
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-04-15 06:52:21 UTC; 3 days ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 988432 (kubelet)
Tasks: 17 (limit: 35749)
Memory: 90.0M
CGroup: /system.slice/kubelet.service
└─988432 /usr/local/bin/kubelet --v=2 --node-ip=172.27.173.227 --hostname-override=k8snode --bootstrap-kubeconfig=/etc/kubernetes/bootstrap>

Apr 19 05:36:46 k8snode kubelet[988432]: E0419 05:36:46.772752 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:02 k8snode kubelet[988432]: E0419 05:37:02.207654 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:02 k8snode kubelet[988432]: E0419 05:37:02.207958 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:17 k8snode kubelet[988432]: E0419 05:37:17.364224 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:17 k8snode kubelet[988432]: E0419 05:37:17.366166 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:28 k8snode kubelet[988432]: I0419 05:37:28.875143 988432 kubelet_getters.go:187] "Pod status updated" pod="kube-system/kube-scheduler-k8sn>
Apr 19 05:37:28 k8snode kubelet[988432]: I0419 05:37:28.875264 988432 kubelet_getters.go:187] "Pod status updated" pod="kube-system/kube-apiserver-k8sn>
Apr 19 05:37:28 k8snode kubelet[988432]: I0419 05:37:28.875318 988432 kubelet_getters.go:187] "Pod status updated" pod="kube-system/kube-controller-man>
Apr 19 05:37:32 k8snode kubelet[988432]: E0419 05:37:32.654805 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>
Apr 19 05:37:32 k8snode kubelet[988432]: E0419 05:37:32.655182 988432 cri_stats_provider.go:675] "Unable to fetch container log stats" err="failed to g>

sudo kubelet --version
Kubernetes v1.28.6

sudo docker --version
Docker version 20.10.20, build 9fdeb9c

sudo containerd --version
containerd containerd.io 1.6.16 31aa4358a36870b21a992d3ad2bef29e1d693bec

sudo runc --version
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d
spec: 1.0.2-dev
go: go1.18.10
libseccomp: 2.5.1

Unimplemented desc = unknown service v1beta1.Registration

I developed a device plugin with the k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1 API.
When executing the binary directly (not deploying the DaemonSet YAML), I got this error:
rpc error: code = Unimplemented desc = unknown service v1beta1.Registration
Only one node's kubelet has --feature-gates=DevicePlugins=true set.

# systemctl status kubelet.service -l
* kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-08-03 19:36:47 CST; 2 days ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 3672134 ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service (code=exited, status=0/SUCCESS)
  Process: 3672131 ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service (code=exited, status=0/SUCCESS)
  Process: 3672128 ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service (code=exited, status=0/SUCCESS)
  Process: 3672126 ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service (code=exited, status=0/SUCCESS)
  Process: 3672123 ExecStartPre=/bin/mount -o remount,rw /sys/fs/cgroup (code=exited, status=0/SUCCESS)
 Main PID: 3672139 (kubelet)
   Memory: 1.3G
   CGroup: /system.slice/kubelet.service
           `-3672139 /usr/bin/kubelet --config=/search/data/kubelet_root/config.yaml --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --hostname-override=10.160.27.57 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --network-plugin=cni --pod-infra-container-image=docker-reg.sogou-inc.com/atom/pause --root-dir=/search/data/kubelet_root --logtostderr=false --log-dir=/search/data/kubernetes/kubelet --v=3 --feature-gates=DevicePlugins=true
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

How can I fix this error?
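
The "unknown service v1beta1.Registration" error means the gRPC server behind the socket that was dialed does not serve the Registration service, which typically points at dialing the wrong socket or at a kubelet without the device plugin API enabled. For reference, a minimal hedged sketch of how a plugin registers against the kubelet socket using the v1beta1 API from this repo (the resource name and plugin endpoint below are placeholders):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

func main() {
	// Dial the kubelet's device-plugin registration socket
	// (pluginapi.KubeletSocket is /var/lib/kubelet/device-plugins/kubelet.sock).
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix://"+pluginapi.KubeletSocket,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		log.Fatalf("dial kubelet: %v", err)
	}
	defer conn.Close()

	// Register this plugin; the kubelet then connects back to the plugin's
	// own socket (Endpoint is relative to the device-plugins directory).
	client := pluginapi.NewRegistrationClient(conn)
	_, err = client.Register(ctx, &pluginapi.RegisterRequest{
		Version:      pluginapi.Version,            // "v1beta1"
		Endpoint:     "example-device-plugin.sock", // placeholder plugin socket
		ResourceName: "example.com/example-device", // placeholder resource name
	})
	if err != nil {
		log.Fatalf("register with kubelet: %v", err)
	}
	log.Println("registered with kubelet")
}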

RunPodSandbox from runtime service failed + Kubelet errors

As I described here: kubernetes/kubernetes#121548
I'm experiencing these errors:

root@k8s-eu-1-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2023-10-26 12:03:09 CEST; 2h 5min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 1067481 (kubelet)
      Tasks: 18 (limit: 72235)
     Memory: 43.0M
        CPU: 2min 43.501s
     CGroup: /system.slice/kubelet.service
             └─1067481 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9

Oct 26 14:08:18 k8s-eu-1-master kubelet[1067481]: E1026 14:08:18.459322 1067481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db402ddcda98eb797e884575883860a0211c9bf5b6b9d08562793290fd5de78\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Oct 26 14:08:18 k8s-eu-1-master kubelet[1067481]: E1026 14:08:18.459398 1067481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db402ddcda98eb797e884575883860a0211c9bf5b6b9d08562793290fd5de78\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-gt7xt"
Oct 26 14:08:18 k8s-eu-1-master kubelet[1067481]: E1026 14:08:18.459421 1067481 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db402ddcda98eb797e884575883860a0211c9bf5b6b9d08562793290fd5de78\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-gt7xt"
Oct 26 14:08:18 k8s-eu-1-master kubelet[1067481]: E1026 14:08:18.459507 1067481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-gt7xt_kube-system(8f974cff-995d-4449-8b3d-095c3e9efa96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-gt7xt_kube-system(8f974cff-995d-4449-8b3d-095c3e9efa96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5db402ddcda98eb797e884575883860a0211c9bf5b6b9d08562793290fd5de78\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-gt7xt" podUID="8f974cff-995d-4449-8b3d-095c3e9efa96"
Oct 26 14:08:19 k8s-eu-1-master kubelet[1067481]: I1026 14:08:19.421859 1067481 scope.go:117] "RemoveContainer" containerID="1cfe3a1d67806d0f309784c10f66f1444acb1459a75084ebcd131652e4a44453"
Oct 26 14:08:19 k8s-eu-1-master kubelet[1067481]: E1026 14:08:19.422163 1067481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-7sf87_kube-flannel(76bd2939-fd7a-4828-820c-af5a9c82e2fb)\"" pod="kube-flannel/kube-flannel-ds-7sf87" podUID="76bd2939-fd7a-4828-820c-af5a9c82e2fb"
Oct 26 14:08:22 k8s-eu-1-master kubelet[1067481]: E1026 14:08:22.464703 1067481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8902a65b15fa748436e36451288dea3ab85a031c5baa891722a2bc5356146c5c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Oct 26 14:08:22 k8s-eu-1-master kubelet[1067481]: E1026 14:08:22.465203 1067481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8902a65b15fa748436e36451288dea3ab85a031c5baa891722a2bc5356146c5c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-g2bkc"
Oct 26 14:08:22 k8s-eu-1-master kubelet[1067481]: E1026 14:08:22.465321 1067481 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8902a65b15fa748436e36451288dea3ab85a031c5baa891722a2bc5356146c5c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-g2bkc"
Oct 26 14:08:22 k8s-eu-1-master kubelet[1067481]: E1026 14:08:22.465511 1067481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-g2bkc_kube-system(7d1be4ef-d086-4e76-b34c-4d48adaace3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-g2bkc_kube-system(7d1be4ef-d086-4e76-b34c-4d48adaace3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8902a65b15fa748436e36451288dea3ab85a031c5baa891722a2bc5356146c5c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-g2bkc" podUID="7d1be4ef-d086-4e76-b34c-4d48adaace3d"




root@k8s-eu-1-master:~# kubelet
I1026 15:45:46.904653 1176781 server.go:467] "Kubelet version" kubeletVersion="v1.28.2"
I1026 15:45:46.904817 1176781 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1026 15:45:46.905357 1176781 server.go:630] "Standalone mode, no API client"
I1026 15:45:46.924554 1176781 server.go:518] "No api server defined - no events will be sent to API server"
I1026 15:45:46.924603 1176781 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I1026 15:45:46.924913 1176781 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1026 15:45:46.925236 1176781 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
I1026 15:45:46.925440 1176781 topology_manager.go:138] "Creating topology manager with none policy"
I1026 15:45:46.925472 1176781 container_manager_linux.go:301] "Creating device plugin manager"
I1026 15:45:46.925559 1176781 state_mem.go:36] "Initialized new in-memory state store"
I1026 15:45:46.925650 1176781 kubelet.go:399] "Kubelet is running in standalone mode, will skip API server sync"
I1026 15:45:46.926546 1176781 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.24" apiVersion="v1"
I1026 15:45:46.926948 1176781 volume_host.go:74] "KubeClient is nil. Skip initialization of CSIDriverLister"
W1026 15:45:46.928483 1176781 csi_plugin.go:189] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
W1026 15:45:46.928704 1176781 csi_plugin.go:266] Skipping CSINode initialization, kubelet running in standalone mode
I1026 15:45:46.929233 1176781 server.go:1232] "Started kubelet"
I1026 15:45:46.929456 1176781 kubelet.go:1579] "No API server defined - no node status update will be sent"
I1026 15:45:46.929655 1176781 server.go:194] "Starting to listen read-only" address="0.0.0.0" port=10255
I1026 15:45:46.929795 1176781 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
E1026 15:45:46.930598 1176781 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
I1026 15:45:46.930646 1176781 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
E1026 15:45:46.930656 1176781 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I1026 15:45:46.930753 1176781 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
E1026 15:45:46.930827 1176781 server.go:852] "Failed to start healthz server" err="listen tcp 127.0.0.1:10248: bind: address already in use"
I1026 15:45:46.931218 1176781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I1026 15:45:46.931911 1176781 server.go:462] "Adding debug handlers to kubelet server"
I1026 15:45:46.932015 1176781 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
I1026 15:45:46.931987 1176781 volume_manager.go:291] "Starting Kubelet Volume Manager"
E1026 15:45:46.933909 1176781 server.go:179] "Failed to listen and serve" err="listen tcp 0.0.0.0:10250: bind: address already in use"


root@k8s-eu-1-master:~# kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2

O.S. : Ubuntu 22.04

Cluster information:

Kubernetes version:

root@k8s-eu-1-master:~# kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2

Cloud being used: (put bare-metal if not on a public cloud) : bare-metal (Contabo Cloud)
Installation method: I installed kubernetes following these indications: https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/
Host OS: Ubuntu 22.04
CNI and version:

root@k8s-eu-1-master:~# ls /etc/cni/net.d/
10-flannel.conflist
root@k8s-eu-1-master:~# 
root@k8s-eu-1-master:~# ip a s flannel.1
23: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default 
    link/ether 9a:0d:1b:93:61:ef brd ff:ff:ff:ff:ff:ff
root@k8s-eu-1-master:~# 


root@k8s-eu-1-master:~# crictl pods ls
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory" 
POD ID              CREATED             STATE               NAME                                      NAMESPACE           ATTEMPT             RUNTIME
02f6c786b8be3       3 hours ago         Ready               kube-flannel-ds-7sf87                     kube-flannel        0                   (default)
4928efe000ed8       47 hours ago        Ready               kube-proxy-qhvrn                          kube-system         5                   (default)
ff796c03022b5       47 hours ago        Ready               kube-scheduler-k8s-eu-1-master            kube-system         5                   (default)
ee5cf0acafee0       47 hours ago        Ready               kube-controller-manager-k8s-eu-1-master   kube-system         5                   (default)
964707729241d       47 hours ago        Ready               kube-apiserver-k8s-eu-1-master            kube-system         5                   (default)
425309d119aa6       47 hours ago        Ready               etcd-k8s-eu-1-master                      kube-system         5                   (default)
0d5af9ab38ff2       2 days ago          NotReady            kube-proxy-qhvrn                          kube-system         4                   (default)
dcd2220d19eb9       2 days ago          NotReady            kube-apiserver-k8s-eu-1-master            kube-system         4                   (default)
400848343f530       2 days ago          NotReady            etcd-k8s-eu-1-master                      kube-system         4                   (default)
601101dcb8471       2 days ago          NotReady            kube-controller-manager-k8s-eu-1-master   kube-system         4                   (default)
ed75d5bbdb6a2       2 days ago          NotReady            kube-scheduler-k8s-eu-1-master            kube-system         4                   (default)
d0c99b759eeec       8 days ago          NotReady            coredns-5dd5756b68-gt7xt                  kube-system         0                   (default)
1c740a8a348fa       8 days ago          NotReady            coredns-5dd5756b68-g2bkc                  kube-system         0                   (default)
root@k8s-eu-1-master:~# 

CRI and version:

root@k8s-eu-1-master:~# kubectl describe node k8s-eu-1-master
Name:               k8s-eu-1-master
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-eu-1-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 17 Oct 2023 13:35:04 +0200
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-eu-1-master
  AcquireTime:     <unset>
  RenewTime:       Thu, 26 Oct 2023 16:32:35 +0200
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 26 Oct 2023 16:28:36 +0200   Tue, 17 Oct 2023 13:35:03 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 26 Oct 2023 16:28:36 +0200   Tue, 17 Oct 2023 13:35:03 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 26 Oct 2023 16:28:36 +0200   Tue, 17 Oct 2023 13:35:03 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 26 Oct 2023 16:28:36 +0200   Thu, 26 Oct 2023 12:03:20 +0200   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  yy.yyy.yyy.yy
  Hostname:    k8s-eu-1-master
Capacity:
  cpu:                10
  ephemeral-storage:  2061040144Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             61714440Ki
  pods:               110
Allocatable:
  cpu:                10
  ephemeral-storage:  1899454593566
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             61612040Ki
  pods:               110
System Info:
  Machine ID:                 20539bd7c301cf97deb69a1d652e2613
  System UUID:                d2ca271d-2f5b-4b7f-8282-5031297d3a19
  Boot ID:                    02f5756a-2b89-46b2-a4e3-8109ab741c01
  Kernel Version:             5.15.0-86-generic
  OS Image:                   Ubuntu 22.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.24
  Kubelet Version:            v1.28.2
  Kube-Proxy Version:         v1.28.2
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-7sf87                      100m (1%)     0 (0%)      50Mi (0%)        0 (0%)         3h7m
  kube-system                 coredns-5dd5756b68-g2bkc                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9d
  kube-system                 coredns-5dd5756b68-gt7xt                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9d
  kube-system                 etcd-k8s-eu-1-master                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9d
  kube-system                 kube-apiserver-k8s-eu-1-master             250m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
  kube-system                 kube-controller-manager-k8s-eu-1-master    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
  kube-system                 kube-proxy-qhvrn                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
  kube-system                 kube-scheduler-k8s-eu-1-master             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (9%)   0 (0%)
  memory             290Mi (0%)  340Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>

Maybe the issue is somehow related to kubelet (a misconfiguration or something else),
because when running just the kubelet command I get a few errors:

root@k8s-eu-1-master:~# kubelet
I1027 10:51:38.735449   21117 server.go:467] "Kubelet version" kubeletVersion="v1.28.2"
I1027 10:51:38.735593   21117 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1027 10:51:38.736115   21117 server.go:630] "Standalone mode, no API client"
I1027 10:51:38.751951   21117 server.go:518] "No api server defined - no events will be sent to API server"
I1027 10:51:38.752001   21117 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I1027 10:51:38.752773   21117 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1027 10:51:38.753672   21117 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
I1027 10:51:38.753855   21117 topology_manager.go:138] "Creating topology manager with none policy"
I1027 10:51:38.753887   21117 container_manager_linux.go:301] "Creating device plugin manager"
I1027 10:51:38.753971   21117 state_mem.go:36] "Initialized new in-memory state store"
I1027 10:51:38.754080   21117 kubelet.go:399] "Kubelet is running in standalone mode, will skip API server sync"
I1027 10:51:38.755819   21117 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.24" apiVersion="v1"
I1027 10:51:38.757690   21117 volume_host.go:74] "KubeClient is nil. Skip initialization of CSIDriverLister"
W1027 10:51:38.758597   21117 csi_plugin.go:189] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
W1027 10:51:38.758620   21117 csi_plugin.go:266] Skipping CSINode initialization, kubelet running in standalone mode
I1027 10:51:38.759603   21117 server.go:1232] "Started kubelet"
I1027 10:51:38.759832   21117 kubelet.go:1579] "No API server defined - no node status update will be sent"
I1027 10:51:38.760245   21117 server.go:194] "Starting to listen read-only" address="0.0.0.0" port=10255
I1027 10:51:38.760603   21117 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
E1027 10:51:38.761713   21117 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
E1027 10:51:38.761891   21117 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I1027 10:51:38.762199   21117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I1027 10:51:38.762860   21117 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
I1027 10:51:38.763059   21117 server.go:462] "Adding debug handlers to kubelet server"
E1027 10:51:38.763274   21117 server.go:852] "Failed to start healthz server" err="listen tcp 127.0.0.1:10248: bind: address already in use"
I1027 10:51:38.763468   21117 volume_manager.go:291] "Starting Kubelet Volume Manager"
I1027 10:51:38.763825   21117 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
I1027 10:51:38.764049   21117 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
E1027 10:51:38.765247   21117 server.go:179] "Failed to listen and serve" err="listen tcp 0.0.0.0:10250: bind: address already in use"

What do I have to check, and how, to understand why more than one kubelet is running?
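The two bind failures in the log above (127.0.0.1:10248 for healthz and 0.0.0.0:10250 for the main server) already point at the cause: another process, most likely the kubelet that kubeadm set up as a systemd service, is holding the kubelet's default ports, so running kubelet again by hand collides with it. As a quick, kubelet-independent check, a minimal Go sketch like the following (the port numbers are simply the kubelet defaults: 10248 healthz, 10250 serving, 10255 read-only) reports which of them are already taken:

package main

import (
	"fmt"
	"net"
)

// Try to bind each of the kubelet's default ports; a failure with
// "address already in use" means some other process owns that port.
func main() {
	for _, port := range []string{"10248", "10250", "10255"} {
		ln, err := net.Listen("tcp", ":"+port)
		if err != nil {
			fmt.Printf("port %s: already in use (%v)\n", port, err)
			continue
		}
		ln.Close()
		fmt.Printf("port %s: free\n", port)
	}
}

If a port shows up as in use, stopping the systemd-managed kubelet (or simply not starting a second copy by hand) resolves the conflict.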

Kubelet: Error while dialing dial unix: missing address

I am trying to create K8's cluster with containerd but kubelet is failing to run.

Versions:

Os : Deb 11 bullseye
kubeadm : 1.25.0
kubectl : 1.25.0
kubelet : 1.25.0
containerd : 1.6.8

KubeletConfiguration:

kubeadm config print init-defaults --component-configs KubeletConfiguration

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

When I run "sudo kubelet":

sudo kubelet
I0905 19:08:14.454517 51922 server.go:413] "Kubelet version" kubeletVersion="v1.25.0"
I0905 19:08:14.454617 51922 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0905 19:08:14.455190 51922 server.go:576] "Standalone mode, no API client"
I0905 19:08:14.494132 51922 server.go:464] "No api server defined - no events will be sent to API server"
I0905 19:08:14.494158 51922 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0905 19:08:14.494359 51922 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0905 19:08:14.494439 51922 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0905 19:08:14.494474 51922 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0905 19:08:14.494492 51922 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
I0905 19:08:14.494530 51922 state_mem.go:36] "Initialized new in-memory state store"
I0905 19:08:14.494583 51922 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="" URL="unix://"
W0905 19:08:14.495051 51922 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial unix: missing address"
E0905 19:08:14.495223 51922 run.go:74] "command failed" err="failed to run Kubelet: unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix: missing address""
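
For context on the "missing address" error: the util_unix.go:104 log line above shows endpoint="" URL="unix://", i.e. this standalone run was given no --container-runtime-endpoint, so the CRI client tries to dial an empty unix socket address and fails. The sketch below (not the kubelet's actual code) shows the same kind of dial against a CRI endpoint, assuming the containerd socket path used in the kubeadm config above (/var/run/containerd/containerd.sock) and the v1 CRI API:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The value the kubelet would normally receive via --container-runtime-endpoint.
	endpoint := "unix:///var/run/containerd/containerd.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, endpoint,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		log.Fatalf("dialing CRI endpoint %q: %v", endpoint, err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version RPC failed: %v", err)
	}
	fmt.Printf("runtime %s, API version %s\n", resp.RuntimeName, resp.RuntimeApiVersion)
}

With an empty endpoint the same dial produces exactly the "transport: Error while dialing dial unix: missing address" error seen above; passing --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock, or running the kubelet with the kubeadm-generated configuration, avoids it.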

kubelet v1.18.0 always fails to run

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sun 2020-04-05 05:33:40 EDT; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 4759 ExecStart=/usr/bin/kubelet (code=exited, status=255)
 Main PID: 4759 (code=exited, status=255)

Apr 05 05:33:40 master kubelet[4759]: F0405 05:33:40.511159    4759 server.go:274] failed to run Kubelet: misconfigurati...ystemd"
Apr 05 05:33:40 master systemd[1]: Unit kubelet.service entered failed state.
Apr 05 05:33:40 master systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Apr 05 05:33:40 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 05 05:33:40 master kubelet[4759]: F0405 05:33:40.511159    4759 server.go:274] failed to run Kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Apr 05 05:33:40 master systemd[1]: Unit kubelet.service entered failed state.
Apr 05 05:33:40 master systemd[1]: kubelet.service failed.
[root@master ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
#Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"
#Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
#Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
[root@master ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"

Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podxx.slice/crio-xx.scope

kubelet 1.20.12
crio 1.20.5
#systemctl status kubelet -l
Nov 10 11:50:23 k8sm01 kubelet[29772]: E1110 11:50:23.784299 29772 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ac557d7d2316fa37c7fde53ab78837.slice/crio-93632fe92af9f256ed6eb70b770e1ace809e413a2c6a09c26411d7751a5a8dd1.scope: Error finding container 93632fe92af9f256ed6eb70b770e1ace809e413a2c6a09c26411d7751a5a8dd1: Status 404 returned error &{%!s(*http.body=&{0xc000eca4e0 false false {0 0} false false false }) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) %!s(func(error) error=0x7759c0) %!s(func() error=0x775940)}

Feature Request: configure CNI in KubeletConfiguration as opposed to choosing based on lexicographic order

In minikube we noticed that during kubeadm init, Kubernetes picks up the first CNI in alphabetical order, which can cause some unexpected issues with choosing CNIs.

based on this comment from @afbjorklund kubernetes/minikube#10788 (comment)

If there are multiple CNI configuration files in the directory, the kubelet
uses the configuration file that comes first by name in lexicographic order.
https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni

Would it be possible to have a flag to pass to kubeadm init to specify the CNI explicitly?
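
For reference, the behaviour quoted from the docs is easy to reproduce outside the kubelet. The sketch below is a simplified illustration (not the kubelet's actual implementation) of the "first configuration file in lexicographic order" rule, assuming the conventional /etc/cni/net.d directory and .conf/.conflist/.json extensions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

// firstCNIConfig picks the CNI network config the way the docs describe:
// list the config directory, keep files with recognised extensions, sort
// the names lexicographically and take the first one.
func firstCNIConfig(confDir string) (string, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return "", err
	}
	var candidates []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			candidates = append(candidates, e.Name())
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no CNI config files in %s", confDir)
	}
	sort.Strings(candidates)
	return filepath.Join(confDir, candidates[0]), nil
}

func main() {
	conf, err := firstCNIConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet would use:", conf)
}

So a file named 00-something.conflist wins over 10-calico.conflist purely because of its name, which is how an unexpected CNI can end up selected.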

kubelet service not running on worker node (EKS setup)

systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-01-01 20:09:05 UTC; 2s ago
Docs: https://github.com/kubernetes/kubernetes
Process: 29398 ExecStart=/usr/bin/kubelet --cloud-provider aws --config /etc/kubernetes/kubelet/kubelet-config.json --allow-privileged=true --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime docker --network-plugin cni $KUBELET_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Process: 29397 ExecStartPre=/sbin/iptables -P FORWARD ACCEPT (code=exited, status=0/SUCCESS)
Main PID: 29398 (code=exited, status=255)

Jan 01 20:09:05 ip-192-168-101-196.ec2.internal systemd[1]: Unit kubelet.service entered failed state.
Jan 01 20:09:05 ip-192-168-101-196.ec2.internal systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@ip-192-168-101-196 /]# journalctl -xe see kubelet start describe logs
Failed to add match 'see': Invalid argument
Failed to add filters: Invalid argument

Note: on the API server node, running kubectl get nodes returns a "No resources found" error.

Please assist me in fixing this issue.

Kubelet fails to retrieve secret with empty name

Hi,

I'm currently seeing a weird issue in kubelet logs across multiple clusters, and I'm unsure what else I should look into.

Some details about the clusters:

  • Managed cloud provider clusters, EKS and GKE - Happens in both
  • Happening across multiple versions, at least in v1.17.9-eks-4c6976 and v1.15.12-gke.20.

In a cluster with 330 nodes I'm seeing roughly 3 million of these messages per half hour:

E0225 09:23:21.876843    1230 reflector.go:125] object-"seahorse"/"": Failed to list *v1.Secret: secrets is forbidden: User "system:node:$node_SA_identifier" cannot list resource "secrets" in API group "" in the namespace "seahorse": No Object name found
E0225 09:23:21.758400    1309 reflector.go:125] object-"arcadia"/"": Failed to list *v1.Secret: secrets is forbidden: User "system:node:$node_SA_identifier" cannot list resource "secrets" in API group "" in the namespace "arcadia": No Object name found
E0225 09:23:21.726618    1272 reflector.go:125] object-"inception"/"": Failed to list *v1.Secret: secrets is forbidden: User "system:node:$node_SA_identifier" cannot list resource "secrets" in API group "" in the namespace "inception": No Object name found
E0225 09:23:21.709821    1301 reflector.go:125] object-"istio-system"/"": Failed to list *v1.Secret: secrets is forbidden: User "system:node:$node_SA_identifier" cannot list resource "secrets" in API group "" in the namespace "istio-system": No Object name found

I've looked through the list of all secrets and none of them has an empty name. I wouldn't expect that to be the case, but thought I'd check anyway.

Is this a known issue?

Thank you for your time, let me know if you require further detail.
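
One thing worth ruling out: the empty name in object-"seahorse"/"" suggests the kubelet's watch-based secret manager is being asked to fetch a secret whose name is literally empty, which usually traces back to a pod spec referencing a secret with name: "" (for example in imagePullSecrets, a secret volume, or envFrom). A rough client-go sketch for hunting such references (an illustration, assuming kubeconfig access; it only checks a few of the possible reference fields) is:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// imagePullSecrets entries with an empty name
		for _, ips := range p.Spec.ImagePullSecrets {
			if ips.Name == "" {
				fmt.Printf("%s/%s: empty imagePullSecrets name\n", p.Namespace, p.Name)
			}
		}
		// secret volumes with an empty secretName
		for _, v := range p.Spec.Volumes {
			if v.Secret != nil && v.Secret.SecretName == "" {
				fmt.Printf("%s/%s: volume %q has empty secretName\n", p.Namespace, p.Name, v.Name)
			}
		}
		// envFrom secretRef entries with an empty name
		for _, c := range p.Spec.Containers {
			for _, ef := range c.EnvFrom {
				if ef.SecretRef != nil && ef.SecretRef.Name == "" {
					fmt.Printf("%s/%s: container %q envFrom has empty secret name\n", p.Namespace, p.Name, c.Name)
				}
			}
		}
	}
}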

Kubelet goes into a cyclic restart loop when an inconsistent container list is received from the runtime service

Kubelet 1.19.3
When a node join is started with kubeadm, a PodSandbox is created for Multus and then dies for an unknown reason. The problem is that removal of this PodSandbox container is skipped, because its container ID is not found in the kubelet's pod list (ContainerStatus[]).

Later, when the container manager queries runtimeService.ListContainers(nil) and runtimeService.ListPodSandbox(nil) and loops over the containers, one of them still references the dead PodSandbox, which is no longer in the PodSandbox list returned by the runtime service. This leads to a fatal kubelet crash. Because there is no working logic to clean up a reference to a non-existent PodSandbox from a container returned by runtimeService.ListContainers(nil), the kubelet starts to crash in a loop.
kubelet[5992]: I1217 11:30:42.639790    5992 kubelet.go:1898] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"68224015-de33-4879-a229-b8eee8538b89", Type:"ContainerDied", Data:"894f35dca3eda57adef28b69acd0607efdeb34e8814e87e196bc163305576028"}
2020-12-17T09:30:42.640070+00:00 base-image-2 kubelet[5992]: W1217 11:30:42.639799    5992 pod_container_deletor.go:79] Container "894f35dca3eda57adef28b69acd0607efdeb34e8814e87e196bc163305576028" not found in pod's containers
2020-12-17T09:30:43.234857+00:00 base-image-2 kubelet[5992]: I1217 11:30:43.232179    5992 generic.go:155] GenericPLEG: 68224015-de33-4879-a229-b8eee8538b89/894f35dca3eda57adef28b69acd0607efdeb34e8814e87e196bc163305576028: exited -> non-existent
kubelet.go:1325] Failed to start ContainerManager failed to build map of initial containers from runtime: no PodsandBox found with Id '894f35dca3eda57adef28b69acd0607efdeb34e8814e87e196bc163305576028'

Workaround: add a runtimeService.RemoveContainer call for this PodSandbox's container in the container manager:

func buildContainerMapFromRuntime(runtimeService internalapi.RuntimeService) (containermap.ContainerMap, error) {
	// Map every PodSandbox ID reported by the runtime to its pod UID.
	podSandboxMap := make(map[string]string)
	podSandboxList, _ := runtimeService.ListPodSandbox(nil)
	for _, p := range podSandboxList {
		podSandboxMap[p.Id] = p.Metadata.Uid
	}

	containerMap := containermap.NewContainerMap()
	containerList, _ := runtimeService.ListContainers(nil)
	for _, c := range containerList {
		if _, exists := podSandboxMap[c.PodSandboxId]; !exists {
			// Workaround (line added): remove the container whose PodSandbox
			// no longer exists, so the kubelet does not hit the same orphan
			// and crash again on every restart.
			runtimeService.RemoveContainer(c.Id)
			return nil, fmt.Errorf("no PodsandBox found with Id '%s'", c.PodSandboxId)
		}
		containerMap.Add(podSandboxMap[c.PodSandboxId], c.Metadata.Name, c.Id)
	}

	return containerMap, nil
}

Should NodeStageVolume be called when a pod is rescheduled?

version v1.20.11

I'm developing a CSI driver, and when I delete a pod (which is then rescheduled) I find that the NodeStageVolume interface is called again.
As the spec says, NodeStageVolume is a volume-scoped (staging) interface:

   CreateVolume +------------+ DeleteVolume
 +------------->|  CREATED   +--------------+
 |              +---+----^---+              |
 |       Controller |    | Controller       v
+++         Publish |    | Unpublish       +++
|X|          Volume |    | Volume          | |
+-+             +---v----+---+             +-+
                | NODE_READY |
                +---+----^---+
               Node |    | Node
              Stage |    | Unstage
             Volume |    | Volume
                +---v----+---+
                |  VOL_READY |
                +---+----^---+
               Node |    | Node
            Publish |    | Unpublish
             Volume |    | Volume
                +---v----+---+
                | PUBLISHED  |
                +------------+

Figure 6: The lifecycle of a dynamically provisioned volume, from
creation to destruction, when the Node Plugin advertises the
STAGE_UNSTAGE_VOLUME capability.

The CSI controller logs show that CreateVolume is not called when the pod starts again after I delete it. What should I do when NodeStageVolume is called? (The remote filesystem has already been mounted to the global path, and it is hard for the container to check whether the remote filesystem is already mounted.)
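
On the behaviour itself: the CSI spec requires NodeStageVolume (and NodeUnstageVolume) to be idempotent, and after a pod is deleted and rescheduled the kubelet may call NodeStageVolume again on the same node while the global mount still exists, without any new CreateVolume. A common way to handle this is to have NodeStageVolume check whether the staging path is already a mount point and return success if it is. A minimal sketch, assuming the container-storage-interface Go bindings and k8s.io/mount-utils (the nodeServer type and mountRemoteFS helper are illustrative, not from any particular driver):

package driver

import (
	"context"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	mount "k8s.io/mount-utils"
)

// nodeServer is a hypothetical node-plugin type; only the staging-path
// idempotency check is sketched here.
type nodeServer struct {
	mounter mount.Interface
}

// NodeStageVolume stages the volume at the staging target path. It first
// checks whether the path is already a mount point, so a repeat call after
// the pod is deleted and rescheduled onto the same node is a no-op.
func (ns *nodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	staging := req.GetStagingTargetPath()
	if staging == "" {
		return nil, status.Error(codes.InvalidArgument, "staging target path missing")
	}
	if err := os.MkdirAll(staging, 0o750); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	notMnt, err := ns.mounter.IsLikelyNotMountPoint(staging)
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	if !notMnt {
		// Already staged by an earlier call - return success (idempotent).
		return &csi.NodeStageVolumeResponse{}, nil
	}

	// mountRemoteFS is a placeholder for the driver's real mount logic.
	if err := mountRemoteFS(req.GetVolumeId(), staging); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	return &csi.NodeStageVolumeResponse{}, nil
}

// mountRemoteFS is purely illustrative; a real driver would mount the
// remote filesystem (NFS, etc.) onto stagingPath here.
func mountRemoteFS(volumeID, stagingPath string) error {
	return nil
}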

kubelet fails to restart when the NFS server is offline.

Hi team,

I'd like to report a current issue with our k8s cluster.

Prerequisites

  1. Start the NFS server on machine1.
  2. Mount the NFS shared disk of machine1 to machine2, mount -o nfsvers=4,rw,tcp,retry=0,timeo=${mount_timeo:-6000} 172.10.10.10:/opt/backup/ /opt/backup/elasticsearch/repo
  3. Stop the NFS server on machine 1.
  4. Restart kubelet on machine 2.

The Kubernetes version is v1.25.3. In this case, access to the shared disk on machine 2 is blocked, which I guess is why kubelet fails to start. Therefore, I suggest that where OS library calls are used to access such system resources, a timeout be applied. Could a timeout setting be added to avoid startup failure in this case?
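
To illustrate the timeout suggestion: any syscall that touches a hard NFS mount (stat, open, statfs) can block indefinitely while the server is down, so a startup path that performs such a call hangs with it. A small sketch of the kind of guard being asked for, wrapping the check in a goroutine with a deadline (the path is the one from this report; the 5-second timeout is an arbitrary example value):

package main

import (
	"fmt"
	"os"
	"time"
)

// statWithTimeout runs os.Stat in a goroutine so a hung NFS mount cannot
// block the caller past the given deadline. Note this only protects the
// caller: the goroutine itself may stay blocked until the syscall returns.
func statWithTimeout(path string, timeout time.Duration) (os.FileInfo, error) {
	type result struct {
		info os.FileInfo
		err  error
	}
	ch := make(chan result, 1)
	go func() {
		info, err := os.Stat(path)
		ch <- result{info, err}
	}()
	select {
	case r := <-ch:
		return r.info, r.err
	case <-time.After(timeout):
		return nil, fmt.Errorf("stat %s: timed out after %s (NFS server unreachable?)", path, timeout)
	}
}

func main() {
	if _, err := statWithTimeout("/opt/backup/elasticsearch/repo", 5*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("mount is responsive")
}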

Kubelet E1228 RunPodSandbox from runtime service failed

OS: CentOS 7.4
Kubernetes Version 1.20.01
CRI-O 1.20.0

Cluster initialized with kubeadm init --config=config.yml.

config.yml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "10.224.0.0/24"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node01   Ready    control-plane,master   9h    v1.20.1   172.16.0.1    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   cri-o://1.20.0
[root@node01 ~]# k get po --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-744cfdf676-qdcrd   0/1     ContainerCreating   0          18m
kube-system   calico-node-2mnxp                          1/1     Running             0          18m
kube-system   coredns-74ff55c5b-bd2ts                    0/1     ContainerCreating   0          9h
kube-system   coredns-74ff55c5b-f2wlh                    0/1     ContainerCreating   0          9h
kube-system   etcd-node01                                1/1     Running             0          9h
kube-system   kube-apiserver-node01                      1/1     Running             0          9h
kube-system   kube-controller-manager-node01             1/1     Running             0          9h
kube-system   kube-proxy-jq7cx                           1/1     Running             0          9h
kube-system   kube-scheduler-node01                      1/1     Running             0          9h

Kubelet logs

Dec 28 07:24:34 node01 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.314434    6272 server.go:416] Version: v1.20.1
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.314927    6272 server.go:837] Client rotation is on, will bootstrap in background
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.316990    6272 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 28 07:24:34 node01 kubelet[6272]: I1228 07:24:34.318071    6272 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328077    6272 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328294    6272 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328303    6272 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328365    6272 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328371    6272 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328374    6272 container_manager_linux.go:315] Creating device plugin manager: true
Dec 28 07:24:39 node01 kubelet[6272]: W1228 07:24:39.328640    6272 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328668    6272 remote_runtime.go:62] parsed scheme: ""
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328673    6272 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328689    6272 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328698    6272 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 28 07:24:39 node01 kubelet[6272]: W1228 07:24:39.328725    6272 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328732    6272 remote_image.go:50] parsed scheme: ""
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328735    6272 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328740    6272 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328743    6272 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328761    6272 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.328777    6272 kubelet.go:273] Watching apiserver
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.329353    6272 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Dec 28 07:24:39 node01 kubelet[6272]: I1228 07:24:39.345198    6272 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.20.0, apiVersion: v1alpha1
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.638795    6272 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Dec 28 07:24:45 node01 kubelet[6272]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.640141    6272 server.go:1176] Started kubelet
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.640254    6272 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.642684    6272 server.go:148] Starting to listen on 0.0.0.0:10250
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.643375    6272 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.643927    6272 server.go:409] Adding debug handlers to kubelet server.
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.645578    6272 volume_manager.go:271] Starting Kubelet Volume Manager
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.647212    6272 desired_state_of_world_populator.go:142] Desired state populator starts to run
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657581    6272 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657629    6272 status_manager.go:158] Starting to sync pod status with apiserver
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.657643    6272 kubelet.go:1799] Starting kubelet main sync loop.
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.657663    6272 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.746645    6272 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.224.0.0/24
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.755141    6272 kubelet_node_status.go:71] Attempting to register node node01
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.755355    6272 kubelet_network.go:77] Setting Pod CIDR:  -> 10.224.0.0/24
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.762098    6272 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.779842    6272 kubelet_node_status.go:109] Node node01 was previously registered
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.780613    6272 kubelet_node_status.go:74] Successfully registered node node01
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787743    6272 cpu_manager.go:193] [cpumanager] starting with none policy
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787753    6272 cpu_manager.go:194] [cpumanager] reconciling every 10s
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787770    6272 state_mem.go:36] [cpumanager] initializing new in-memory state store
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787871    6272 state_mem.go:88] [cpumanager] updated default cpuset: ""
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787944    6272 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.787959    6272 policy_none.go:43] [cpumanager] none policy: Start
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.789531    6272 setters.go:577] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-12-28 07:24:45.7895105 +0100 CET m=+11.578152801 LastTransitionTime:2020-12-28 07:24:45.7895105 +0100 CET m=+11.578152801 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Dec 28 07:24:45 node01 kubelet[6272]: W1228 07:24:45.794173    6272 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.795919    6272 plugin_manager.go:114] Starting Kubelet Plugin Manager
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962322    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962426    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962457    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962477    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962535    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962574    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962630    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962672    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: I1228 07:24:45.962721    6272 topology_manager.go:187] [topologymanager] Topology Admit Handler
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.973287    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-scheduler-node01_kube-system(9be8cb4627e7e5ad4c3f8acabd4b49b3)": pods "kube-scheduler-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.976972    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-node01_kube-system(62167925d1ac26070e568a81a11be1b5)": pods "kube-apiserver-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.977042    6272 kubelet.go:1635] Failed creating a mirror pod for "etcd-node01_kube-system(e25ea21632f580335cac4f07009e0473)": pods "etcd-node01" already exists
Dec 28 07:24:45 node01 kubelet[6272]: E1228 07:24:45.977154    6272 kubelet.go:1635] Failed creating a mirror pod for "kube-controller-manager-node01_kube-system(6a237e4472e8c04619dd54b3dc80f073)": pods "kube-controller-manager-node01" already exists
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.049900    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-ca-certs") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150114    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-kubeconfig") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150156    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume") pod "coredns-74ff55c5b-f2wlh" (UID: "332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150171    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-78l9h" (UniqueName: "kubernetes.io/secret/15ae4814-32a6-4b85-82f4-6d8b18940736-calico-node-token-78l9h") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150184    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/e25ea21632f580335cac4f07009e0473-etcd-certs") pod "etcd-node01" (UID: "e25ea21632f580335cac4f07009e0473")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150194    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-k8s-certs") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150202    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-ca-certs") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150212    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-net-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150251    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-slf5w" (UniqueName: "kubernetes.io/secret/b504d6d5-9171-4cca-a6e7-cd8501842d7c-calico-kube-controllers-token-slf5w") pod "calico-kube-controllers-744cfdf676-qdcrd" (UID: "b504d6d5-9171-4cca-a6e7-cd8501842d7c")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150263    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-qs5ct" (UniqueName: "kubernetes.io/secret/b85e2da8-6c7e-41f1-918c-89f2f4954e72-kube-proxy-token-qs5ct") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150271    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/e25ea21632f580335cac4f07009e0473-etcd-data") pod "etcd-node01" (UID: "e25ea21632f580335cac4f07009e0473")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150280    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sysfs" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-sysfs") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150288    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-etc-pki") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150296    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-flexvolume-dir") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150304    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-lib-modules") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150313    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-bin-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150322    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9be8cb4627e7e5ad4c3f8acabd4b49b3-kubeconfig") pod "kube-scheduler-node01" (UID: "9be8cb4627e7e5ad4c3f8acabd4b49b3")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150332    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b85e2da8-6c7e-41f1-918c-89f2f4954e72-xtables-lock") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150341    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b85e2da8-6c7e-41f1-918c-89f2f4954e72-lib-modules") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150350    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/62167925d1ac26070e568a81a11be1b5-etc-pki") pod "kube-apiserver-node01" (UID: "62167925d1ac26070e568a81a11be1b5")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150359    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8l5tz" (UniqueName: "kubernetes.io/secret/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-coredns-token-8l5tz") pod "coredns-74ff55c5b-f2wlh" (UID: "332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150368    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-xtables-lock") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150377    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-policysync") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150387    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-flexvol-driver-host") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150397    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b85e2da8-6c7e-41f1-918c-89f2f4954e72-kube-proxy") pod "kube-proxy-jq7cx" (UID: "b85e2da8-6c7e-41f1-918c-89f2f4954e72")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150406    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume") pod "coredns-74ff55c5b-bd2ts" (UID: "c59e0124-1fa5-4cc3-87af-93544cd6ec69")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150414    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6a237e4472e8c04619dd54b3dc80f073-k8s-certs") pod "kube-controller-manager-node01" (UID: "6a237e4472e8c04619dd54b3dc80f073")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150428    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-log-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-cni-log-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150436    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8l5tz" (UniqueName: "kubernetes.io/secret/c59e0124-1fa5-4cc3-87af-93544cd6ec69-coredns-token-8l5tz") pod "coredns-74ff55c5b-bd2ts" (UID: "c59e0124-1fa5-4cc3-87af-93544cd6ec69")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150445    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-var-lib-calico") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150454    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-host-local-net-dir") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150466    6272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/15ae4814-32a6-4b85-82f4-6d8b18940736-var-run-calico") pod "calico-node-2mnxp" (UID: "15ae4814-32a6-4b85-82f4-6d8b18940736")
Dec 28 07:24:46 node01 kubelet[6272]: I1228 07:24:46.150471    6272 reconciler.go:157] Reconciler: start to sync state
Dec 28 07:24:47 node01 kubelet[6272]: I1228 07:24:47.041746    6272 request.go:655] Throttling request took 1.0778311s, request: GET:https://172.16.0.1:6443/api/v1/namespaces/kube-system/pods/etcd-node01
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206272    6272 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206327    6272 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206338    6272 kuberuntime_manager.go:755] createPodSandbox for pod "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" failed: rpc error: code = Unknown desc = container create failed: time="2020-12-28T07:24:47+01:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
Dec 28 07:24:47 node01 kubelet[6272]: container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.206366    6272 pod_workers.go:191] Error syncing pod b504d6d5-9171-4cca-a6e7-cd8501842d7c ("calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)"), skipping: failed to "CreatePodSandbox" for "calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-kube-controllers-744cfdf676-qdcrd_kube-system(b504d6d5-9171-4cca-a6e7-cd8501842d7c)\" failed: rpc error: code = Unknown desc = container create failed: time=\"2020-12-28T07:24:47+01:00\" level=error msg=\"container_linux.go:349: starting container process caused \\\"error adding seccomp rule for syscall socket: requested action matches default action of filter\\\"\"\ncontainer_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\"\n"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.251629    6272 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.251687    6272 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume podName:c59e0124-1fa5-4cc3-87af-93544cd6ec69 nodeName:}" failed. No retries permitted until 2020-12-28 07:24:47.7516687 +0100 CET m=+13.540311001 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59e0124-1fa5-4cc3-87af-93544cd6ec69-config-volume\") pod \"coredns-74ff55c5b-bd2ts\" (UID: \"c59e0124-1fa5-4cc3-87af-93544cd6ec69\") : failed to sync configmap cache: timed out waiting for the condition"
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.252821    6272 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 28 07:24:47 node01 kubelet[6272]: E1228 07:24:47.252868    6272 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume podName:332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb nodeName:}" failed. No retries permitted until 2020-12-28 07:24:47.7528523 +0100 CET m=+13.541494601 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb-config-volume\") pod \"coredns-74ff55c5b-f2wlh\" (UID: \"332c0792-ca1a-4a27-bfd5-ed17b6b1e7bb\") : failed to sync configmap cache: timed out waiting for the condition"

kubelet v1.18.4 ppc64le musl libc gets exception

Steps to reproduce:

  1. Compile kubelet v1.18.4 with golang 1.14.4
  2. Run kubelet

Expected results

No error

Actual results

$ sudo kubelet
I0626 12:31:19.173178   25297 server.go:417] Version: v1.18.4
I0626 12:31:19.173388   25297 plugins.go:100] No cloud provider specified.
W0626 12:31:19.173407   25297 server.go:560] standalone mode, no API client
W0626 12:31:19.178562   25297 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
W0626 12:31:19.178914   25297 server.go:474] No api server defined - no events will be sent to API server.
I0626 12:31:19.178977   25297 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0626 12:31:19.179495   25297 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
fatal error: missing deferreturn

runtime stack:
runtime.throw(0x1394064e, 0x13)
	/usr/lib/go/src/runtime/panic.go:1116 +0x5c
runtime.addOneOpenDeferFrame.func1.1(0x7fff8e01f7e8, 0x0, 0x164145a0)
	/usr/lib/go/src/runtime/panic.go:753 +0x258
runtime.gentraceback(0x1004fc44, 0xc000bbf310, 0x0, 0xc000000180, 0x0, 0x0, 0x7fffffff, 0x7fff8e01f8b8, 0x0, 0x0, ...)
	/usr/lib/go/src/runtime/traceback.go:334 +0xea0
runtime.addOneOpenDeferFrame.func1()
	/usr/lib/go/src/runtime/panic.go:721 +0x8c
runtime.systemstack(0x0)
	/usr/lib/go/src/runtime/asm_ppc64x.s:269 +0x94
runtime.mstart()
	/usr/lib/go/src/runtime/proc.go:1041

goroutine 1 [running]:
runtime.systemstack_switch()
	/usr/lib/go/src/runtime/asm_ppc64x.s:216 +0x10 fp=0xc000bbf1f0 sp=0xc000bbf1d0 pc=0x10068f20
runtime.addOneOpenDeferFrame(0xc000000180, 0x1004fc44, 0xc000bbf310)
	/usr/lib/go/src/runtime/panic.go:720 +0x7c fp=0xc000bbf240 sp=0xc000bbf1f0 pc=0x1003649c
panic(0x130d14a0, 0x163c1f20)
	/usr/lib/go/src/runtime/panic.go:929 +0xdc fp=0xc000bbf310 sp=0xc000bbf240 pc=0x10036adc
runtime.panicmem(...)
	/usr/lib/go/src/runtime/panic.go:212
runtime.sigpanic()
	/usr/lib/go/src/runtime/signal_unix.go:695 +0x3f4 fp=0xc000bbf350 sp=0xc000bbf310 pc=0x1004fc44
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/resource.(*Quantity).String(0x0, 0x138b5520, 0x0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go:601 +0x28 fp=0xc000bbf410 sp=0xc000bbf370 pc=0x104cf068
fmt.(*pp).handleMethods(0xc000ac16c0, 0xc000000076, 0x101)
	/usr/lib/go/src/fmt/print.go:630 +0x28c fp=0xc000bbf6b0 sp=0xc000bbf410 pc=0x100e8bfc
fmt.(*pp).printValue(0xc000ac16c0, 0x138b5520, 0xc000c88060, 0x196, 0x76, 0x5)
	/usr/lib/go/src/fmt/print.go:727 +0x211c fp=0xc000bbf8a0 sp=0xc000bbf6b0 pc=0x100eba7c
fmt.(*pp).printValue(0xc000ac16c0, 0x132a6780, 0xc000c88060, 0x199, 0x76, 0x4)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bbfa90 sp=0xc000bbf8a0 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x13573520, 0xc000c88040, 0x199, 0x76, 0x3)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bbfc80 sp=0xc000bbfa90 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x12d40ec0, 0xc0002e32c0, 0x97, 0x76, 0x2)
	/usr/lib/go/src/fmt/print.go:869 +0x3dc fp=0xc000bbfe70 sp=0xc000bbfc80 pc=0x100e9d3c
fmt.(*pp).printValue(0xc000ac16c0, 0x136725a0, 0xc0002e3280, 0x99, 0x76, 0x1)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bc0060 sp=0xc000bbfe70 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x13860f20, 0xc0002e3200, 0x99, 0x76, 0x0)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bc0250 sp=0xc000bc0060 pc=0x100eb06c
fmt.(*pp).printArg(0xc000ac16c0, 0x13860f20, 0xc0002e3200, 0x76)
	/usr/lib/go/src/fmt/print.go:716 +0x2a8 fp=0xc000bc02f8 sp=0xc000bc0250 pc=0x100e91f8
fmt.(*pp).doPrintf(0xc000ac16c0, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1)
	/usr/lib/go/src/fmt/print.go:1030 +0x140 fp=0xc000bc0408 sp=0xc000bc02f8 pc=0x100ec5a0
fmt.Fprintf(0x141063e0, 0xc000388310, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1, 0x1, 0x1391c4be, 0x13860f20)
	/usr/lib/go/src/fmt/print.go:204 +0x58 fp=0xc000bc0480 sp=0xc000bc0408 pc=0x100e5cc8
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).printf(0x164cd8e0, 0xc000000000, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:703 +0x98 fp=0xc000bc0510 sp=0xc000bc0480 pc=0x1030e868
k8s.io/kubernetes/vendor/k8s.io/klog.Infof(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1201
k8s.io/kubernetes/pkg/kubelet/cm.NewContainerManager(0x141ae680, 0xc0002a1660, 0x141d6ee0, 0xc000a90b10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:271 +0x560 fp=0xc000bc0a20 sp=0xc000bc0510 pc=0x1151f730
k8s.io/kubernetes/cmd/kubelet/app.run(0xc0008ba000, 0xc000501500, 0x7fff8def1440, 0xc00007db00, 0xc0000d4a80, 0x1, 0x1)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:715 +0xd60 fp=0xc000bc19d8 sp=0xc000bc0a20 pc=0x12a62ac0
k8s.io/kubernetes/cmd/kubelet/app.Run(0xc0008ba000, 0xc000501500, 0x7fff8def1440, 0xc00007db00, 0xc0000d4a80, 0x0, 0x1016d30c)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:421 +0xfc fp=0xc000bc1b38 sp=0xc000bc19d8 pc=0x12a615cc
k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000fb180, 0xc0001121a0, 0x0, 0x0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:273 +0x51c fp=0xc000bc1d10 sp=0xc000bc1b38 pc=0x12a675bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000fb180, 0xc0001121a0, 0x0, 0x0, 0xc0000fb180, 0xc0001121a0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830 +0x208 fp=0xc000bc1df8 sp=0xc000bc1d10 pc=0x128b7b38
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000fb180, 0x161c17ede727a2d3, 0x164cd2a0, 0x1003989c)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x258 fp=0xc000bc1ee0 sp=0xc000bc1df8 pc=0x128b8528
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
main.main()
	_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xdc fp=0xc000bc1f50 sp=0xc000bc1ee0 pc=0x12a6860c
runtime.main()
	/usr/lib/go/src/runtime/proc.go:203 +0x214 fp=0xc000bc1fc0 sp=0xc000bc1f50 pc=0x10039914
runtime.goexit()
	/usr/lib/go/src/runtime/asm_ppc64x.s:884 +0x4 fp=0xc000bc1fc0 sp=0xc000bc1fc0 pc=0x1006b644

goroutine 19 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x164cd8e0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1010 +0x78
created by k8s.io/kubernetes/vendor/k8s.io/klog.init.0
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:411 +0xe0

goroutine 88 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x14115940, 0xc000a90b70, 0xc00030df20)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:288 +0x98
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:286 +0x68

goroutine 82 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalHandler.func1(0xc0000d4a80)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:38 +0x38
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalHandler
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:37 +0xd8

goroutine 45 [sleep]:
time.Sleep(0x3b9aca00)
	/usr/lib/go/src/runtime/time.go:188 +0xc4
k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1.(*metricsRecorder).run(0xc000371f20)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1/metrics_recorder.go:87 +0x40
created by k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1.newMetricsRecorder
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1/metrics_recorder.go:59 +0xf4

goroutine 32 [syscall]:
os/signal.signal_recv(0x0)
	/usr/lib/go/src/runtime/sigqueue.go:147 +0xf8
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:23 +0x24
created by os/signal.Notify.func1
	/usr/lib/go/src/os/signal/signal.go:127 +0x4c

goroutine 57 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000698820)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0xd8
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x64

goroutine 80 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x13ab54e0, 0x14105a20, 0xc00044e150, 0x1, 0xc0001020c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x120
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x13ab54e0, 0x12a05f200, 0x0, 0xc000126901, 0xc0001020c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x13ab54e0, 0x12a05f200)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x50
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x88

goroutine 87 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc0007324c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:207 +0x58
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:75 +0xc4

Extra info

os: alpine 3.12 (ppc64le)
go 1.14.4
libc: musl-c

Kubelet service starts and then enters failed state

NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-75f8564758-92ws7      1/1     Running   0          24h
kube-system   coredns-75f8564758-z9xn8      1/1     Running   0          24h
kube-system   kube-flannel-ds-amd64-2j4mw   1/1     Running   0          24h
kube-system   kube-flannel-ds-amd64-5tmhp   0/1     Pending   0          24h
kube-system   kube-flannel-ds-amd64-rqwmz   1/1     Running   0          24h
kube-system   kube-proxy-6v24w              1/1     Running   0          24h
kube-system   kube-proxy-jgdw7              0/1     Pending   0          24h
kube-system   kube-proxy-qppnk              1/1     Running   0          24h

These 2 pods are stuck in the Pending state. I run systemctl restart kubelet; the service stays active for about 5 seconds and then enters the failed state. Here are the errors from journalctl -u kubelet:

cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids

kubelet.service: main process exited, code=exited, status=255/n/a
Unit kubelet.service entered failed state.
kubelet.service failed.

Every time I restart the kubelet, one of its listening ports changes.

[root@k8s-node01 ~]# netstat -tunlp | grep kubelet
tcp 0 0 127.0.0.1:39404 0.0.0.0:* LISTEN 8859/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8859/kubelet
tcp6 0 0 :::10250 :::* LISTEN 8859/kubelet

[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-node01 ~]# netstat -tunlp | grep kubelet
tcp 0 0 127.0.0.1:45414 0.0.0.0:* LISTEN 8859/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 8859/kubelet
tcp6 0 0 :::10250 :::* LISTEN 8859/kubelet

[root@k8s-node01 ~]# ps -aux| grep 8859
root 8859 2.1 0.4 2048112 75488 ? Ssl 11:02 0:30 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

[root@k8s-node01 ~]# kubelet
I0520 11:29:42.875586 17830 server.go:410] Version: v1.16.4

Has anyone encountered this before? How do I fix it?

Statefulset/Deployment pod going to unknown state

Expected Behavior

Expected statefulset/deployment pod to be in running and healthy state after reboot of K8s worker nodes.

Current Behavior

Statefulset/Deployment pod going to unknown state with event: "Normal SandboxChanged 13s (x10 over 2m13s) kubelet Pod sandbox changed, it will be killed and re-created."

Before reboot:
-----------------
root@k8s-control-120-1666125150:~# kubectl get sc,sts,pvc,pv,pods -owide -n vsan-stretch-7887 
NAME                                           PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nginx-sc-default   csi.vsphere.vmware.com   Delete          Immediate           false                  103s

NAME                   READY   AGE    CONTAINERS   IMAGES
statefulset.apps/web   3/3     103s   nginx        registry.k8s.io/nginx-slim:0.8

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE    VOLUMEMODE
persistentvolumeclaim/pvc-hhtnd   Bound    pvc-9f2f9423-da75-43c1-90e5-796563dec65b   2Gi        RWO            nginx-sc-default   56s    Filesystem
persistentvolumeclaim/www-web-0   Bound    pvc-ccb45b9b-d224-474a-b673-4537412fe5dc   1Gi        RWO            nginx-sc-default   102s   Filesystem
persistentvolumeclaim/www-web-1   Bound    pvc-1e807aba-7aa8-475f-a2a9-da51f70347f2   1Gi        RWO            nginx-sc-default   89s    Filesystem
persistentvolumeclaim/www-web-2   Bound    pvc-82e8301d-8fbf-4116-8cad-8298446810c6   1Gi        RWO            nginx-sc-default   77s    Filesystem

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS       REASON   AGE    VOLUMEMODE
persistentvolume/pvc-1e807aba-7aa8-475f-a2a9-da51f70347f2   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-1   nginx-sc-default            88s    Filesystem
persistentvolume/pvc-81a2bc10-44bb-4081-9d84-904d0e5f0fc7   2Gi        RWO            Delete           Bound    vsan-stretch-8081/pvc-q8qf9   nginx-sc-default            42m    Filesystem
persistentvolume/pvc-82e8301d-8fbf-4116-8cad-8298446810c6   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-2   nginx-sc-default            75s    Filesystem
persistentvolume/pvc-9f2f9423-da75-43c1-90e5-796563dec65b   2Gi        RWO            Delete           Bound    vsan-stretch-7887/pvc-hhtnd   nginx-sc-default            53s    Filesystem
persistentvolume/pvc-b7ef9db7-032c-471c-a13b-29593d12f72d   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-1   nginx-sc-default            43m    Filesystem
persistentvolume/pvc-ccb45b9b-d224-474a-b673-4537412fe5dc   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-0   nginx-sc-default            101s   Filesystem
persistentvolume/pvc-e7e94375-6988-4838-885f-219091dff175   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-0   nginx-sc-default            43m    Filesystem
persistentvolume/pvc-f9de880f-1773-45d5-9ad7-a032f52faf7d   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-2   nginx-sc-default            43m    Filesystem

NAME                                                                  READY   STATUS    RESTARTS   AGE    IP            NODE                      NOMINATED NODE   READINESS GATES
pod/deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b-t6624   1/1     Running   0          52s    10.244.3.13   k8s-node-50-1666125193    <none>           <none>
pod/web-0                                                             1/1     Running   0          102s   10.244.5.10   k8s-node-667-1666125222   <none>           <none>
pod/web-1                                                             1/1     Running   0          89s    10.244.3.12   k8s-node-50-1666125193    <none>           <none>
pod/web-2                                                             1/1     Running   0          77s    10.244.4.14   k8s-node-500-1666125207   <none>           <none>

After reboot:
----------------
root@k8s-control-120-1666125150:~# kubectl get sc,sts,pvc,pv,pods -owide -n vsan-stretch-7887 
NAME                                           PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nginx-sc-default   csi.vsphere.vmware.com   Delete          Immediate           false                  5m34s

NAME                   READY   AGE     CONTAINERS   IMAGES
statefulset.apps/web   2/3     5m34s   nginx        registry.k8s.io/nginx-slim:0.8

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE     VOLUMEMODE
persistentvolumeclaim/pvc-hhtnd   Bound    pvc-9f2f9423-da75-43c1-90e5-796563dec65b   2Gi        RWO            nginx-sc-default   4m47s   Filesystem
persistentvolumeclaim/www-web-0   Bound    pvc-ccb45b9b-d224-474a-b673-4537412fe5dc   1Gi        RWO            nginx-sc-default   5m33s   Filesystem
persistentvolumeclaim/www-web-1   Bound    pvc-1e807aba-7aa8-475f-a2a9-da51f70347f2   1Gi        RWO            nginx-sc-default   5m20s   Filesystem
persistentvolumeclaim/www-web-2   Bound    pvc-82e8301d-8fbf-4116-8cad-8298446810c6   1Gi        RWO            nginx-sc-default   5m8s    Filesystem

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS       REASON   AGE     VOLUMEMODE
persistentvolume/pvc-1e807aba-7aa8-475f-a2a9-da51f70347f2   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-1   nginx-sc-default            5m19s   Filesystem
persistentvolume/pvc-81a2bc10-44bb-4081-9d84-904d0e5f0fc7   2Gi        RWO            Delete           Bound    vsan-stretch-8081/pvc-q8qf9   nginx-sc-default            46m     Filesystem
persistentvolume/pvc-82e8301d-8fbf-4116-8cad-8298446810c6   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-2   nginx-sc-default            5m6s    Filesystem
persistentvolume/pvc-9f2f9423-da75-43c1-90e5-796563dec65b   2Gi        RWO            Delete           Bound    vsan-stretch-7887/pvc-hhtnd   nginx-sc-default            4m44s   Filesystem
persistentvolume/pvc-b7ef9db7-032c-471c-a13b-29593d12f72d   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-1   nginx-sc-default            47m     Filesystem
persistentvolume/pvc-ccb45b9b-d224-474a-b673-4537412fe5dc   1Gi        RWO            Delete           Bound    vsan-stretch-7887/www-web-0   nginx-sc-default            5m32s   Filesystem
persistentvolume/pvc-e7e94375-6988-4838-885f-219091dff175   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-0   nginx-sc-default            47m     Filesystem
persistentvolume/pvc-f9de880f-1773-45d5-9ad7-a032f52faf7d   1Gi        RWO            Delete           Bound    vsan-stretch-8081/www-web-2   nginx-sc-default            46m     Filesystem

NAME                                                                  READY   STATUS    RESTARTS        AGE     IP            NODE                      NOMINATED NODE   READINESS GATES
pod/deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b-t6624   0/1     Unknown   0               4m43s   <none>        k8s-node-50-1666125193    <none>           <none>
pod/web-0                                                             1/1     Running   1 (2m29s ago)   5m33s   10.244.5.12   k8s-node-667-1666125222   <none>           <none>
pod/web-1                                                             1/1     Running   1 (2m28s ago)   5m20s   10.244.3.15   k8s-node-50-1666125193    <none>           <none>
pod/web-2                                                             0/1     Unknown   0               5m8s    <none>        k8s-node-500-1666125207   <none>           <none>

root@k8s-control-120-1666125150:~# kubectl describe pod -n vsan-stretch-7887 deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b-t6624 
Name:           deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b-t6624
Namespace:      vsan-stretch-7887
Priority:       0
Node:           k8s-node-50-1666125193/10.191.186.200
Start Time:     Wed, 19 Oct 2022 08:25:02 +0000
Labels:         app=test
                pod-template-hash=f87f7879b
Annotations:    <none>
Status:         Running
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b
Containers:
  write-pod:
    Container ID:  containerd://662b0ff6e6c6e12aac239c9eabf3e8c96fc4ff8e98fea16c9d7abde55bf5f2e9
    Image:         harbor-repo.vmware.com/csi_ci/busybox:1.35
    Image ID:      harbor-repo.vmware.com/csi_ci/busybox@sha256:505e5e20edbb5f2ac0abe3622358daf2f4a4c818eea0498445b7248e39db6728
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      trap exit TERM; while true; do sleep 1; done
    State:          Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Wed, 19 Oct 2022 08:25:07 +0000
      Finished:     Wed, 19 Oct 2022 08:27:16 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/volume1 from volume1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zrhn2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  volume1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-hhtnd
    ReadOnly:   false
  kube-api-access-zrhn2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               4m50s                 default-scheduler        Successfully assigned vsan-stretch-7887/deployment-ea472bfc-abed-4105-a83f-876f81b887ef-f87f7879b-t6624 to k8s-node-50-1666125193
  Normal   SuccessfulAttachVolume  4m50s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9f2f9423-da75-43c1-90e5-796563dec65b"
  Normal   Pulled                  4m46s                 kubelet                  Container image "harbor-repo.vmware.com/csi_ci/busybox:1.35" already present on machine
  Normal   Created                 4m46s                 kubelet                  Created container write-pod
  Normal   Started                 4m46s                 kubelet                  Started container write-pod
  Warning  NodeNotReady            3m57s                 node-controller          Node is not ready
  Warning  FailedMount             2m15s                 kubelet                  MountVolume.MountDevice failed for volume "pvc-9f2f9423-da75-43c1-90e5-796563dec65b" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name csi.vsphere.vmware.com not found in the list of registered CSI drivers
  Normal   SandboxChanged          13s (x10 over 2m13s)  kubelet                  Pod sandbox changed, it will be killed and re-created.

Steps to Reproduce (for bugs)

  1. Create a statefulset , deployment
  2. Expect all statefulset and deployment pods to be in running and healthy state.
  3. Reboot all k8s worker nodes.
  4. Once the k8s worker nodes are up and Running, check the status of statefulset pods and deployments pod.
  5. Expect sts and deployment pods to be up and running

Context

Disaster recovery scenarios with k8s 1.24 in a containerd environment are failing with this issue. This was working fine in k8s 1.23 with dockershim.

Your Environment

Kubelet failed to create a pod on windows node

I have two nodes.

NAME              STATUS   ROLES    AGE    VERSION
ubuntu1804l-001   Ready    master   14d    v1.18.6
windowsserv-0     Ready    <none>   127m   v1.18.6

kubelet has been started on the windowsserv-0 machine.

On the Windows machine, I applied the following YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webadmin-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      webadmin: web
  template:
    metadata:
      labels:
        webadmin: web
    spec:
      containers:
      - name: webadmin-site
        image: XXXX

to create a deployment on the windows node.
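
As an aside, in a cluster that mixes Linux and Windows nodes, the pod template usually also pins scheduling to the Windows node. A minimal sketch of that addition, using the standard kubernetes.io/os node label; everything else is taken from the manifest above:

    spec:
      nodeSelector:
        kubernetes.io/os: windows   # well-known node label; keeps this pod off Linux nodes
      containers:
      - name: webadmin-site
        image: XXXX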

When creating the pod, kubelet reports the following log messages:

W0821 17:37:03.003768 99524 pod_container_deletor.go:77] Container "dee4daa76a9e60e0e68af75597092aa5cff517c7021a6ef7579f77f662f2a163" not found in pod's containers

W0821 17:37:03.071774 99524 helpers.go:289] Unable to create pod sandbox due to conflict. Attempting to remove sandbox "dee4daa76a9e60e0e68af75597092aa5cff517c7021a6ef7579f77f662f2a163"

E0821 17:37:03.108764 99524 remote_runtime.go:200] CreateContainer in sandbox "62ff282461eba2fae24a66b7d38ccca43b224c74320dbb5a0a4659b4c4446eb7" from runtime service failed: rpc error: code = Unknown desc = Error response from daemon: Conflict. The container name "/k8s_webadmin-site_webadmin-app-757c7455cf-nms75_default_7ac60567-f9e2-4c04-aead-c6957200c961_0" is already in use by container "dee4daa76a9e60e0e68af75597092aa5cff517c7021a6ef7579f77f662f2a163". You have to remove (or rename) that container to be able to reuse that name.

E0821 17:37:03.109762 99524 kuberuntime_manager.go:801] container start failed: CreateContainerError: Error response from daemon: Conflict. The container name "/k8s_webadmin-site_webadmin-app-757c7455cf-nms75_default_7ac60567-f9e2-4c04-aead-c6957200c961_0" is already in use by container "dee4daa76a9e60e0e68af75597092aa5cff517c7021a6ef7579f77f662f2a163". You have to remove (or rename) that container to be able to reuse that name.

E0821 17:37:03.113766 99524 pod_workers.go:191] Error syncing pod 7ac60567-f9e2-4c04-aead-c6957200c961 ("webadmin-app-757c7455cf-nms75_default(7ac60567-f9e2-4c04-aead-c6957200c961)"), skipping: failed to "StartContainer" for "webadmin-site" with CreateContainerError: "Error response from daemon: Conflict. The container name "/k8s_webadmin-site_webadmin-app-757c7455cf-nms75_default_7ac60567-f9e2-4c04-aead-c6957200c961_0" is already in use by container "dee4daa76a9e60e0e68af75597092aa5cff517c7021a6ef7579f77f662f2a163". You have to remove (or rename) that container to be able to reuse that name."

These failure messages keep being generated while the pod is being created.

So I listed the docker containers using "docker ps" during the pod creation period. It seems that the kubelet keeps creating and removing containers and the pod's status is CreateContainerError.

Use internal cluster registry to pull image

Hi community,

Not sure if this is the right place to raise this idea/question but will give it a try.

Problem

  • I have 3 multi-node clusters on prem which don't have internet access. (Each cluster's config is federated by GitOps approaches.)
  • I have an internal registry in each cluster (1 per cluster, 3 in total). The registry runs as a pod (e.g. Harbor).

Question/Idea
In a federated environment each cluster has a different domain (e.g. myregistry-1.com, myregistry-2.com, myregistry-3.com), so each registry has a different FQDN. To deploy the same applications into 3 different clusters/regions, I need to change the image FQDNs accordingly.

Is it possible to have a common DNS name, such as myregistry.local, for all clusters so that the kubelet can pull images from the local registry, which is basically a pod?

PS: This is different from configuring a local registry, because my registry runs as a pod and the kubelet doesn't know anything about it.
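
For what it's worth, one common GitOps workaround for per-cluster registry FQDNs (short of a shared DNS name) is a kustomize overlay that rewrites the image reference per cluster. A minimal sketch; the registry hostnames, repository paths, and image name below are placeholders:

# kustomization.yaml for the cluster-1 overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base                                   # shared manifests that reference a neutral image name
images:
- name: myregistry.local/webapp                # image name as written in the base manifests (placeholder)
  newName: myregistry-1.com/library/webapp     # this cluster's registry FQDN (placeholder)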

About selection of cgroup driver

It seems the "systemd" cgroup driver cannot work with the kubelet under the following conditions:

1. Created a cgroup named k8s-cgroup with mkdir under /sys/fs/cgroup/xxx
2. Set "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json
3. Set cgroupRoot: /k8s-cgroup in config.yaml when running kubeadm join xxx

kubeadm join failed because the kubelet failed to start:
failed to run Kubelet: invalid configuration: cgroup-root ["k8s-cgroup"] doesn't exist

But if I remove "exec-opts": ["native.cgroupdriver=systemd"] from /etc/docker/daemon.json, so that both Docker and Kubernetes use the "cgroupfs" driver, the cgroup k8s-cgroup can be found and kubeadm join succeeds.

Is this a known limitation for the failing configuration above, or is something wrong with my usage?
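
For reference, these are the two settings that have to agree, sketched below with field names from the KubeletConfiguration v1beta1 API and the cgroup path from the report above. Note that with the systemd driver, cgroups are expected to be managed as systemd slices, so a directory created by a plain mkdir under /sys/fs/cgroup is generally not visible to it; that would be consistent with the "cgroup-root doesn't exist" error, while the cgroupfs driver happily uses the mkdir-created path.

# /var/lib/kubelet/config.yaml (or the KubeletConfiguration passed to kubeadm join via --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # must match Docker's "native.cgroupdriver" exec-opt
cgroupRoot: /k8s-cgroup      # with cgroupDriver: systemd this likely needs to be a slice systemd manages, not a bare mkdir'd directory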

typo in kubelet --help

kubelet --help prints

Other than from an PodSpec from the apiserver…

an PodSpec → a PodSpec

windows pod error while adding to cni network: error while ProvisionEndpoint: Cannot create a file when that file already exists.

Expected Behavior

We want to run Docker Windows Hyper-V isolation mode in Kubernetes.
Both Hyper-V isolation mode and process isolation mode container pods should start correctly in Kubernetes, with a pod IP and the right network configuration.

Current Behavior

Everything works fine for Windows process isolation mode pods in Kubernetes, but Windows Hyper-V isolation mode pods get no pod IP and keep restarting.
Below is what I got from "kubectl get pod -o wide"; the IP column is empty, and the pod keeps restarting and has already restarted 51 times:
hyperv-0 0/1 CrashLoopBackOff 51 110m <none> jf53120cp-hv

In kubelet log, we see:
E0315 00:04:48.592295 11324 cni.go:366] Error adding hyperv-0/ded8068e29d711f9e996fdc753a99a1eb421509decb0035709f6e58291be8a23 to network flannel/flannel.4096: error while ProvisionEndpoint(ded8068e29d711f9e996fdc753a99a1eb421509decb0035709f6e58291be8a23_flannel.4096,D3CBB652-3DCC-486F-BBB5-690D54C96F18,ded8068e29d711f9e996fdc753a99a1eb421509decb0035709f6e58291be8a23): Cannot create a file when that file already exists. E0315 00:04:48.592295 11324 cni_windows.go:59] error while adding to cni network: error while ProvisionEndpoint(ded8068e29d711f9e996fdc753a99a1eb421509decb0035709f6e58291be8a23_flannel.4096,D3CBB652-3DCC-486F-BBB5-690D54C96F18,ded8068e29d711f9e996fdc753a99a1eb421509decb0035709f6e58291be8a23): Cannot create a file when that file already exists.

The only difference between the 2 isolation mode pods is the annotation "experimental.windows.kubernetes.io/isolation-type: hyperv", which tells the kubelet which Windows container isolation mode to use.
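
For context, a minimal sketch of where that annotation sits on a pod spec (the pod name, container name, and image below are placeholders; the annotation string is the one quoted above):

apiVersion: v1
kind: Pod
metadata:
  name: hyperv-example            # placeholder name
  annotations:
    experimental.windows.kubernetes.io/isolation-type: hyperv   # request Hyper-V isolation (feature-gated)
spec:
  nodeSelector:
    kubernetes.io/os: windows     # run on a Windows node
  containers:
  - name: app                                            # placeholder
    image: mcr.microsoft.com/windows/nanoserver:1809     # placeholder image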

Context

We are trying the Windows Hyper-V isolation mode feature gate in Kubernetes. Process isolation mode works fine, but pods cannot get a pod IP when started in Hyper-V isolation mode.

Your Environment

  • Flannel version: flannel:v0.13.0-nanoserver
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.4.13
  • Kubernetes version (if used): 1.20.2
  • Operating System and version: Windows Server 2019
  • Link to your project (optional):

Pod stuck at every steps of deployment - need to restart kubelet

Hi team,

I'd like to report a current issue with our k8s cluster: we cannot deploy/schedule any pods onto any nodes automatically.

We have a cluster including 3 worker nodes and 1 control plane. The Kubernetes version is v1.18.9 and we have been running the cluster normally for months.

Recently, we've encountered an issue where pods get stuck at every step of the lifecycle (Scheduling, ContainerCreating, Terminating, ...) and we have to manually restart the kubelet service at each step to make them move on.

For example, we triggered a Gitlab pipeline which created a new runner pod.

Describing the pod, we could see it was successfully assigned to a node. However, it got stuck at this step forever (or at least long enough to make the GitLab pipeline time out).
MicrosoftTeams-image (7)

We checked the kubelet logs on the node and found nothing special. Even "kubectl get events" on master returns nothing.

Then we restarted the kubelet with systemctl restart kubelet and the pod moved to "ContainerCreating" quickly
MicrosoftTeams-image (8)

But it got stuck at this step again, and once more we had to restart the kubelet on the node.

The pod went "Running" after the restart.
MicrosoftTeams-image (9)

Our attempts to identify the cause:

  • Tried restarting Docker on every machine. It also got stuck and we had to remove all containers from /var/lib/docker/containers
  • Tried restarting the kubelet after the Docker restart
  • Restarting all the nodes brings the cluster back to a normal state, but this issue has happened twice this week.
  • We have launched a virtual NFS server (a k8s deployment) recently and we don't know if this can be the problem.

Could anyone please help me identify the root cause? Please feel free to ask for more information if needed.
Thank you

enableSystemLogHandler not working

https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/

enableSystemLogHandler (bool): enables system logs via the web interface host:port/logs/. Default: true

Visiting that URL gives me an 'Unauthorized' response even if I specify my bearer token (extracted from the kubectl config file ~/.kube/config):

curl -H "Authorization: Bearer ${my_token}" "https://${my_k8s_worker_node}:${kubelet_port:-10250}/logs/"
Unauthorized

IMO the docs should mention how to access system logs via that web interface.
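
A likely explanation, assuming the kubelet runs with authorization mode Webhook (commonly the case on kubeadm-provisioned clusters): requests to the kubelet's /logs/ endpoint are authorized against the nodes/log subresource, so the identity behind the bearer token also needs RBAC for that subresource, not just a valid token. A minimal sketch of such a grant; the object names and user name are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-log-reader                  # placeholder
rules:
- apiGroups: [""]
  resources: ["nodes/log", "nodes/proxy"]   # /logs/ maps to nodes/log; nodes/proxy covers other kubelet endpoints
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-log-reader                  # placeholder
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-log-reader
subjects:
- kind: User
  name: my-user                             # placeholder: the identity behind the bearer token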

Kubelet /stats/summary not working in EKS

Hi,

The kubelet endpoint is not working in the EKS setup. The kubelet is listening on port 10250, but when the
endpoint ( http://x.x.x.x:10250/stats/summary ) is accessed from Python code it throws the error: Failed to establish a new connection: [Errno 111] Connection refused

When the same endpoint is accessed from the terminal, the request succeeds but no data is returned.
PFA
Screenshot from 2021-10-27 18-05-08
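
A note that may explain part of this: a "Connection refused" usually means the request never reached the kubelet at all (for example a security group or the wrong address), and even once it does, port 10250 serves HTTPS only and requires the caller to be authenticated and authorized; by default /stats/summary is checked as a "get" on the nodes/stats subresource. A hedged sketch of the relevant kubelet settings, using KubeletConfiguration v1beta1 field names:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false       # anonymous requests are rejected
  webhook:
    enabled: true        # bearer tokens are validated via TokenReview against the API server
authorization:
  mode: Webhook          # /stats/summary is authorized as a "get" on the nodes/stats subresource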

Any help would be much appreciated.

Regards,
NImmsy

kubelet v1.18.4 ppc64le musl-c gets an exception on startup

Steps to reproduce:

  1. Compile kubelet with golang 1.14.4
  2. Run kubelet

Expected results

No error

Actual results

$ sudo kubelet
I0626 12:31:19.173178   25297 server.go:417] Version: v1.18.4
I0626 12:31:19.173388   25297 plugins.go:100] No cloud provider specified.
W0626 12:31:19.173407   25297 server.go:560] standalone mode, no API client
W0626 12:31:19.178562   25297 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
W0626 12:31:19.178914   25297 server.go:474] No api server defined - no events will be sent to API server.
I0626 12:31:19.178977   25297 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0626 12:31:19.179495   25297 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
fatal error: missing deferreturn

runtime stack:
runtime.throw(0x1394064e, 0x13)
	/usr/lib/go/src/runtime/panic.go:1116 +0x5c
runtime.addOneOpenDeferFrame.func1.1(0x7fff8e01f7e8, 0x0, 0x164145a0)
	/usr/lib/go/src/runtime/panic.go:753 +0x258
runtime.gentraceback(0x1004fc44, 0xc000bbf310, 0x0, 0xc000000180, 0x0, 0x0, 0x7fffffff, 0x7fff8e01f8b8, 0x0, 0x0, ...)
	/usr/lib/go/src/runtime/traceback.go:334 +0xea0
runtime.addOneOpenDeferFrame.func1()
	/usr/lib/go/src/runtime/panic.go:721 +0x8c
runtime.systemstack(0x0)
	/usr/lib/go/src/runtime/asm_ppc64x.s:269 +0x94
runtime.mstart()
	/usr/lib/go/src/runtime/proc.go:1041

goroutine 1 [running]:
runtime.systemstack_switch()
	/usr/lib/go/src/runtime/asm_ppc64x.s:216 +0x10 fp=0xc000bbf1f0 sp=0xc000bbf1d0 pc=0x10068f20
runtime.addOneOpenDeferFrame(0xc000000180, 0x1004fc44, 0xc000bbf310)
	/usr/lib/go/src/runtime/panic.go:720 +0x7c fp=0xc000bbf240 sp=0xc000bbf1f0 pc=0x1003649c
panic(0x130d14a0, 0x163c1f20)
	/usr/lib/go/src/runtime/panic.go:929 +0xdc fp=0xc000bbf310 sp=0xc000bbf240 pc=0x10036adc
runtime.panicmem(...)
	/usr/lib/go/src/runtime/panic.go:212
runtime.sigpanic()
	/usr/lib/go/src/runtime/signal_unix.go:695 +0x3f4 fp=0xc000bbf350 sp=0xc000bbf310 pc=0x1004fc44
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/resource.(*Quantity).String(0x0, 0x138b5520, 0x0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go:601 +0x28 fp=0xc000bbf410 sp=0xc000bbf370 pc=0x104cf068
fmt.(*pp).handleMethods(0xc000ac16c0, 0xc000000076, 0x101)
	/usr/lib/go/src/fmt/print.go:630 +0x28c fp=0xc000bbf6b0 sp=0xc000bbf410 pc=0x100e8bfc
fmt.(*pp).printValue(0xc000ac16c0, 0x138b5520, 0xc000c88060, 0x196, 0x76, 0x5)
	/usr/lib/go/src/fmt/print.go:727 +0x211c fp=0xc000bbf8a0 sp=0xc000bbf6b0 pc=0x100eba7c
fmt.(*pp).printValue(0xc000ac16c0, 0x132a6780, 0xc000c88060, 0x199, 0x76, 0x4)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bbfa90 sp=0xc000bbf8a0 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x13573520, 0xc000c88040, 0x199, 0x76, 0x3)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bbfc80 sp=0xc000bbfa90 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x12d40ec0, 0xc0002e32c0, 0x97, 0x76, 0x2)
	/usr/lib/go/src/fmt/print.go:869 +0x3dc fp=0xc000bbfe70 sp=0xc000bbfc80 pc=0x100e9d3c
fmt.(*pp).printValue(0xc000ac16c0, 0x136725a0, 0xc0002e3280, 0x99, 0x76, 0x1)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bc0060 sp=0xc000bbfe70 pc=0x100eb06c
fmt.(*pp).printValue(0xc000ac16c0, 0x13860f20, 0xc0002e3200, 0x99, 0x76, 0x0)
	/usr/lib/go/src/fmt/print.go:810 +0x170c fp=0xc000bc0250 sp=0xc000bc0060 pc=0x100eb06c
fmt.(*pp).printArg(0xc000ac16c0, 0x13860f20, 0xc0002e3200, 0x76)
	/usr/lib/go/src/fmt/print.go:716 +0x2a8 fp=0xc000bc02f8 sp=0xc000bc0250 pc=0x100e91f8
fmt.(*pp).doPrintf(0xc000ac16c0, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1)
	/usr/lib/go/src/fmt/print.go:1030 +0x140 fp=0xc000bc0408 sp=0xc000bc02f8 pc=0x100ec5a0
fmt.Fprintf(0x141063e0, 0xc000388310, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1, 0x1, 0x1391c4be, 0x13860f20)
	/usr/lib/go/src/fmt/print.go:204 +0x58 fp=0xc000bc0480 sp=0xc000bc0408 pc=0x100e5cc8
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).printf(0x164cd8e0, 0xc000000000, 0x13a0cb08, 0x3b, 0xc000bc06b8, 0x1, 0x1)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:703 +0x98 fp=0xc000bc0510 sp=0xc000bc0480 pc=0x1030e868
k8s.io/kubernetes/vendor/k8s.io/klog.Infof(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1201
k8s.io/kubernetes/pkg/kubelet/cm.NewContainerManager(0x141ae680, 0xc0002a1660, 0x141d6ee0, 0xc000a90b10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go:271 +0x560 fp=0xc000bc0a20 sp=0xc000bc0510 pc=0x1151f730
k8s.io/kubernetes/cmd/kubelet/app.run(0xc0008ba000, 0xc000501500, 0x7fff8def1440, 0xc00007db00, 0xc0000d4a80, 0x1, 0x1)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:715 +0xd60 fp=0xc000bc19d8 sp=0xc000bc0a20 pc=0x12a62ac0
k8s.io/kubernetes/cmd/kubelet/app.Run(0xc0008ba000, 0xc000501500, 0x7fff8def1440, 0xc00007db00, 0xc0000d4a80, 0x0, 0x1016d30c)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:421 +0xfc fp=0xc000bc1b38 sp=0xc000bc19d8 pc=0x12a615cc
k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000fb180, 0xc0001121a0, 0x0, 0x0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:273 +0x51c fp=0xc000bc1d10 sp=0xc000bc1b38 pc=0x12a675bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000fb180, 0xc0001121a0, 0x0, 0x0, 0xc0000fb180, 0xc0001121a0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830 +0x208 fp=0xc000bc1df8 sp=0xc000bc1d10 pc=0x128b7b38
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000fb180, 0x161c17ede727a2d3, 0x164cd2a0, 0x1003989c)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x258 fp=0xc000bc1ee0 sp=0xc000bc1df8 pc=0x128b8528
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
main.main()
	_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xdc fp=0xc000bc1f50 sp=0xc000bc1ee0 pc=0x12a6860c
runtime.main()
	/usr/lib/go/src/runtime/proc.go:203 +0x214 fp=0xc000bc1fc0 sp=0xc000bc1f50 pc=0x10039914
runtime.goexit()
	/usr/lib/go/src/runtime/asm_ppc64x.s:884 +0x4 fp=0xc000bc1fc0 sp=0xc000bc1fc0 pc=0x1006b644

goroutine 19 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x164cd8e0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:1010 +0x78
created by k8s.io/kubernetes/vendor/k8s.io/klog.init.0
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/klog.go:411 +0xe0

goroutine 88 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x14115940, 0xc000a90b70, 0xc00030df20)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:288 +0x98
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:286 +0x68

goroutine 82 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalHandler.func1(0xc0000d4a80)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:38 +0x38
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalHandler
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:37 +0xd8

goroutine 45 [sleep]:
time.Sleep(0x3b9aca00)
	/usr/lib/go/src/runtime/time.go:188 +0xc4
k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1.(*metricsRecorder).run(0xc000371f20)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1/metrics_recorder.go:87 +0x40
created by k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1.newMetricsRecorder
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1/metrics_recorder.go:59 +0xf4

goroutine 32 [syscall]:
os/signal.signal_recv(0x0)
	/usr/lib/go/src/runtime/sigqueue.go:147 +0xf8
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:23 +0x24
created by os/signal.Notify.func1
	/usr/lib/go/src/os/signal/signal.go:127 +0x4c

goroutine 57 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000698820)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0xd8
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x64

goroutine 80 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x13ab54e0, 0x14105a20, 0xc00044e150, 0x1, 0xc0001020c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x120
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x13ab54e0, 0x12a05f200, 0x0, 0xc000126901, 0xc0001020c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x13ab54e0, 0x12a05f200)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x50
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x88

goroutine 87 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc0007324c0)
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:207 +0x58
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
	/home/vagrant/aports/testing/kubernetes/src/kubernetes-1.18.4/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:75 +0xc4

Extra info

os: alpine 3.12 (ppc64le)
go 1.14.4
libc: musl-c
