
certified-kubernetes-administrator-course's People

Contributors

abdul125, anujgupta09, cell5, chirangaalwis, csheremeta, dinesh-grammer, erauner12, fduthilleul, fireflycons, fj-greger, fprojetto, gauravbansal17, guruvishna04, huntex, k2-kk, ksemele, m-ayman, mmumshad, mohlatif227, poojapatel-iit, rahulsoni43, ruwgxo, sajiyah-salat, srinivas-kk, sujinsjlee, tej-singh-rana, tintnc, vasil-shaikh, vpalazhi, xr09


certified-kubernetes-administrator-course's Issues

Sentence in Authentication.md does not make sense

This sentence makes no sense: "Different users that may be accessing the cluster security of end users who access the applications deployed on the cluster is managed by the applications themselves internally."

https://github.com/kodekloudhub/certified-kubernetes-administrator-course/blob/master/docs/07-Security/03-Authentication.md#different-users-that-may-be-accessing-the-cluster-security-of-end-users-who-access-the-applications-deployed-on-the-cluster-is-managed-by-the-applications-themselves-internally

Minor correction

Hello,

There is a minor correction needed in docs/07-Security/11-Certificate-API.md at line 64.

Lecture request for certified-kubernetes-administrator-with-practice-tests

Hi Sir/Ma'am,

I have purchased the [Certified Kubernetes Administrator (CKA) with Practice Tests] course on Udemy
(https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/).

According to the official documentation, the current Kubernetes version is v1.28
(Kubernetes v1.28)
(https://v1-28.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)

(screenshot attached)

but the Udemy course is only updated up to v1.27.
(screenshot attached)

Please check and update the course if a newer version is available.

Thanks and regards,
[email protected]

Typo

Section 07 - Security, Lecture 142. TLS Basics, 4:33 in the video
SSH into the server using the private key:
ssh -i id_ras user1@server1 should be ssh -i id_rsa user1@server1

Windows 10 issues

Hello Team - I got the error "[email protected]: Permission denied (publickey)." and it was solved with the environment variable below.

Resolved with set VAGRANT_PREFER_SYSTEM_BIN=0

Also, I had my RTFM moment when installing Vagrant because of Hyper-V; if you could capture this in the prerequisites section it would save time for others who run into it. Most people on a Windows 10 laptop taking a k8s course will have Docker installed, which requires the Hyper-V component, and anyone trying this lab will hit this issue if they didn't read the Vagrant documentation, which says to remove Hyper-V.

Thanks
Gopinath T

Typo

Hello,

There is a minor typo in docs/07-Security/15-API-Groups.md.

kube-controller-manager-kubemaster, kube-scheduler-kubemaster and kube-apiserver-kubemaster failing after fresh install

Hi.

I've followed the steps given on the page, aligned with the video from the course.

After performing kubeadm init with the parameters mentioned, the cluster does not stabilize and the control-plane resources crash.

(screenshot attached)

vagrant@kubemaster:~$ kubectl logs kube-controller-manager-kubemaster -n kube-system

I0201 22:23:51.509281 1 serving.go:348] Generated self-signed cert in-memory
I0201 22:23:52.452616 1 controllermanager.go:189] "Starting" version="v1.28.6"
I0201 22:23:52.452692 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0201 22:23:52.456475 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0201 22:23:52.456779 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0201 22:23:52.457621 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0201 22:23:52.458048 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...

I0201 22:23:52.458526 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0201 22:23:53.140435 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0201 22:24:12.687007 1 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager
I0201 22:24:12.689194 1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kubemaster_750b6d10-4871-4b0a-bcb8-71d780d4ad17 became leader"
I0201 22:24:12.701894 1 shared_informer.go:311] Waiting for caches to sync for tokens
I0201 22:24:12.721898 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
I0201 22:24:12.722154 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
I0201 22:24:12.722195 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
I0201 22:24:12.722910 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
I0201 22:24:12.722950 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
W0201 22:24:12.723035 1 shared_informer.go:593] resyncPeriod 17h15m46.280444347s is smaller than resyncCheckPeriod
[... same message]
I0201 22:24:12.751832 1 stateful_set.go:161] "Starting stateful set controller"
I0201 22:24:12.751845 1 shared_informer.go:311] Waiting for caches to sync for stateful set
I0201 22:24:12.755064 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
I0201 22:24:12.755080 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0201 22:24:12.755099 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0201 22:24:12.756616 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
I0201 22:24:12.756628 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0201 22:24:12.756663 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0201 22:24:12.757965 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
I0201 22:24:12.757989 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0201 22:24:12.758003 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0201 22:24:12.759225 1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
I0201 22:24:12.759319 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
I0201 22:24:12.759330 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
[...]
I0201 22:24:23.707653 1 shared_informer.go:318] Caches are synced for cidrallocator
I0201 22:24:23.709265 1 shared_informer.go:318] Caches are synced for PVC protection
I0201 22:24:23.718266 1 shared_informer.go:318] Caches are synced for ephemeral
I0201 22:24:23.732303 1 shared_informer.go:318] Caches are synced for endpoint
I0201 22:24:23.735974 1 shared_informer.go:318] Caches are synced for PV protection
I0201 22:24:23.738394 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
I0201 22:24:23.743347 1 shared_informer.go:318] Caches are synced for disruption
I0201 22:24:23.749414 1 shared_informer.go:318] Caches are synced for endpoint_slice
I0201 22:24:23.806862 1 shared_informer.go:318] Caches are synced for namespace
I0201 22:24:23.825031 1 shared_informer.go:318] Caches are synced for resource quota
I0201 22:24:23.865760 1 shared_informer.go:318] Caches are synced for service account
I0201 22:24:23.878232 1 shared_informer.go:318] Caches are synced for HPA
I0201 22:24:23.915455 1 shared_informer.go:318] Caches are synced for resource quota
I0201 22:24:24.247272 1 shared_informer.go:318] Caches are synced for garbage collector
I0201 22:24:24.247302 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
[...]
I0201 22:28:17.019009 1 serving.go:348] Generated self-signed cert in-memory
I0201 22:28:17.596780 1 controllermanager.go:189] "Starting" version="v1.28.6"
I0201 22:28:17.596861 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0201 22:28:17.598891 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0201 22:28:17.599058 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0201 22:28:17.599178 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0201 22:28:17.599205 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
I0201 22:28:17.599276 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"

Then everything starts to crash.

(screenshot attached)

This is a fresh install, with no documents or config files modified, on an Ubuntu node using Vagrant and VirtualBox.

Something seems to be wrong in the definition files.

Only kubelet is found as a systemd service; there is no service status for kube-apiserver or kube-controller-manager.

(screenshot attached)

If we try to run the kube-apiserver binary directly:

vagrant@kubemaster:~$ sudo /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/11/fs/usr/local/bin/kube-apiserver
W0201 22:31:14.306767 5712 options.go:293] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0201 22:31:14.651329 5712 serving.go:342] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
I0201 22:31:14.651377 5712 options.go:220] external host was not specified, using 10.0.2.15
W0201 22:31:14.651387 5712 authentication.go:527] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
E0201 22:31:14.652413 5712 run.go:74] "command failed" err="[--etcd-servers must be specified, service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]"

The same for the etcd binary:

vagrant@kubemaster:~$ sudo /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/25/fs/usr/local/bin/etcd
{"level":"warn","ts":"2024-02-01T22:33:36.962898Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-02-01T22:33:36.963002Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/25/fs/usr/local/bin/etcd"]}
{"level":"warn","ts":"2024-02-01T22:33:36.963031Z","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"}
{"level":"warn","ts":"2024-02-01T22:33:36.963255Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-02-01T22:33:36.963399Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2024-02-01T22:33:36.964032Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["http://localhost:2379"]}
{"level":"info","ts":"2024-02-01T22:33:36.964304Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"default","data-dir":"default.etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
{"level":"info","ts":"2024-02-01T22:33:36.964409Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"default","data-dir":"default.etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
{"level":"warn","ts":"2024-02-01T22:33:36.964435Z","caller":"etcdmain/etcd.go:146","msg":"failed to start etcd","error":"listen tcp 127.0.0.1:2379: bind: address already in use"}
{"level":"fatal","ts":"2024-02-01T22:33:36.964462Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 127.0.0.1:2379: bind: address already in use","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:250"}

Logs from kube-apiserver:

vagrant@kubemaster:~$ kubectl logs kube-apiserver-kubemaster -n kube-system
I0201 22:33:19.667044 1 options.go:220] external host was not specified, using 192.168.56.11
I0201 22:33:19.668036 1 server.go:148] Version: v1.28.6
I0201 22:33:19.668055 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0201 22:33:20.611715 1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
I0201 22:33:20.629162 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0201 22:33:20.629522 1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0201 22:33:20.629952 1 instance.go:298] Using reconciler: lease
I0201 22:33:20.667856 1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0201 22:33:20.667896 1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0201 22:33:20.862887 1 handler.go:275] Adding GroupVersion v1 to ResourceManager
I0201 22:33:20.863192 1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0201 22:33:21.155913 1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
I0201 22:33:21.182668 1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0201 22:33:21.182725 1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0201 22:33:21.182734 1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0201 22:33:21.183279 1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0201 22:33:21.183319 1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0201 22:33:21.184046 1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
I0201 22:33:21.184843 1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
W0201 22:33:21.184882 1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
W0201 22:33:21.184887 1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
I0201 22:33:21.186135 1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
W0201 22:33:21.186182 1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.

And logs from the scheduler:

vagrant@kubemaster:~$ kubectl logs kube-scheduler-kubemaster -n kube-system
I0201 22:33:22.632553 1 serving.go:348] Generated self-signed cert in-memory
I0201 22:33:22.993418 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.6"
I0201 22:33:22.993440 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0201 22:33:22.997795 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0201 22:33:22.997880 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0201 22:33:22.997914 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0201 22:33:22.997949 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0201 22:33:22.998001 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0201 22:33:22.998010 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0201 22:33:22.998030 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0201 22:33:22.998034 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0201 22:33:23.098750 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0201 22:33:23.098767 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0201 22:33:23.098751 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0201 22:33:23.100038 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-scheduler...
I0201 22:33:40.693893 1 server.go:238] "Requested to terminate, exiting"

The system goes into an endless error/crash loop for these resources until the connection is lost:

vagrant@kubemaster:~$ kubectl get pods -A -w
The connection to the server 192.168.56.11:6443 was refused - did you specify the right host or port?

I've restarted the Vagrant VM, and all resources are running again (until the errors and crashes return; even in this "OK" state, see below that there are no systemd services for kube-apiserver, kube-scheduler and controller-manager).

(screenshot attached)

While the system is running, we can see the processes shown below:

(screenshot attached)

When all resources die due to the error/crash loop, only the controller process is still running:

(screenshot attached)

A little more context from the pod descriptions and their logs:
kube-scheduler-kubemaster

kubectl describe pod -n kube-system kube-scheduler-kubemaster
Name: kube-scheduler-kubemaster
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: kubemaster/192.168.56.11
Start Time: Fri, 02 Feb 2024 01:58:32 +0000
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash: 0670fe8668c8dd769b1e2391a17b95af
kubernetes.io/config.mirror: 0670fe8668c8dd769b1e2391a17b95af
kubernetes.io/config.seen: 2024-02-01T04:17:34.586974879Z
kubernetes.io/config.source: file
Status: Running
SeccompProfile: RuntimeDefault
IP: 192.168.56.11
IPs:
IP: 192.168.56.11
Controlled By: Node/kubemaster
Containers:
kube-scheduler:
Container ID: containerd://c269368833b3e53e1f6cd414c0b4e5ed90c26235b7d6826c2093bea3dc28d0df
Image: registry.k8s.io/kube-scheduler:v1.28.6
Image ID: registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39
Port:
Host Port:
Command:
kube-scheduler
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf
--bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 02 Feb 2024 02:07:22 +0000
Finished: Fri, 02 Feb 2024 02:07:29 +0000
Ready: False
Restart Count: 218
Requests:
cpu: 100m
Liveness: http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
Startup: http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
Environment:
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoExecute op=Exists
Events:
Type Reason Age From Message


Normal Created 21h kubelet Created container kube-scheduler
Normal Started 21h kubelet Started container kube-scheduler
Normal Pulled 21h kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Normal Created 21h (x10 over 21h) kubelet Created container kube-scheduler
Normal Started 21h (x10 over 21h) kubelet Started container kube-scheduler
Warning Unhealthy 18h (x13 over 19h) kubelet Startup probe failed: Get "https://127.0.0.1:10259/healthz": net/http: TLS handshake timeout
Warning Unhealthy 12h (x6 over 21h) kubelet Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
Normal SandboxChanged 12h (x95 over 21h) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 11h (x1704 over 21h) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-kubemaster_kube-system(0670fe8668c8dd769b1e2391a17b95af)
Normal Pulled 11h (x118 over 21h) kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Normal Killing 11h (x106 over 21h) kubelet Stopping container kube-scheduler
Normal Started 6h42m (x3 over 6h45m) kubelet Started container kube-scheduler
Normal Created 6h20m (x7 over 6h45m) kubelet Created container kube-scheduler
Normal Pulled 6h (x11 over 6h45m) kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Normal SandboxChanged 4h (x26 over 6h45m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Killing 4h (x25 over 6h42m) kubelet Stopping container kube-scheduler
Warning BackOff 3h47m (x554 over 6h43m) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-kubemaster_kube-system(0670fe8668c8dd769b1e2391a17b95af)
Warning Unhealthy 3h37m kubelet Startup probe failed: Get "https://127.0.0.1:10259/healthz": net/http: TLS handshake timeout
Warning Unhealthy 3h37m kubelet Startup probe failed: Get "https://127.0.0.1:10259/healthz": read tcp 127.0.0.1:41450->127.0.0.1:10259: read: connection reset by peer
Normal Pulled 3h37m (x3 over 3h44m) kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Normal Started 3h37m (x3 over 3h44m) kubelet Started container kube-scheduler
Normal Created 3h37m (x3 over 3h44m) kubelet Created container kube-scheduler
Normal Killing 3h37m (x2 over 3h38m) kubelet Stopping container kube-scheduler
Normal SandboxChanged 3h37m (x3 over 3h44m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 3h33m (x28 over 3h38m) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-kubemaster_kube-system(0670fe8668c8dd769b1e2391a17b95af)
Normal Started 3h21m (x3 over 3h25m) kubelet Started container kube-scheduler
Normal Created 3h21m (x3 over 3h25m) kubelet Created container kube-scheduler
Warning Unhealthy 165m kubelet Startup probe failed: Get "https://127.0.0.1:10259/healthz": net/http: TLS handshake timeout
Normal SandboxChanged 104m (x16 over 3h25m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 100m (x23 over 3h25m) kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Warning BackOff 89m (x349 over 3h23m) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-kubemaster_kube-system(0670fe8668c8dd769b1e2391a17b95af)
Normal Killing 61m (x28 over 3h23m) kubelet Stopping container kube-scheduler
Normal Created 4m34s (x3 over 9m23s) kubelet Created container kube-scheduler
Normal Pulled 4m34s (x3 over 9m23s) kubelet Container image "registry.k8s.io/kube-scheduler:v1.28.6" already present on machine
Normal Started 4m33s (x3 over 9m23s) kubelet Started container kube-scheduler
Normal Killing 4m33s (x2 over 4m56s) kubelet Stopping container kube-scheduler
Normal SandboxChanged 4m32s (x3 over 9m23s) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 4m26s (x11 over 5m31s) kubelet Back-off restarting failed container kube-scheduler in pod kube-scheduler-kubemaster_kube-system(0670fe8668c8dd769b1e2391a17b95af)
Warning Unhealthy 2m2s kubelet Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused

Controller Manager

kubectl describe pod -n kube-system kube-controller-manager-kubemaster
Name: kube-controller-manager-kubemaster
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: kubemaster/192.168.56.11
Start Time: Fri, 02 Feb 2024 01:58:32 +0000
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash: 2db9bd12f78f5220150a5d8d383647fc
kubernetes.io/config.mirror: 2db9bd12f78f5220150a5d8d383647fc
kubernetes.io/config.seen: 2024-02-01T04:17:34.586985919Z
kubernetes.io/config.source: file
Status: Running
SeccompProfile: RuntimeDefault
IP: 192.168.56.11
IPs:
IP: 192.168.56.11
Controlled By: Node/kubemaster
Containers:
kube-controller-manager:
Container ID: containerd://496b2213622d3e3259cf4aaaaccfedf17a8fbc8d3ae4311b7e8a8d3483d55196
Image: registry.k8s.io/kube-controller-manager:v1.28.6
Image ID: registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e
Port:
Host Port:
Command:
kube-controller-manager
--allocate-node-cidrs=true
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1
--client-ca-file=/etc/kubernetes/pki/ca.crt
--cluster-cidr=10.244.0.0/16
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/12
--use-service-account-credentials=true
State: Running
Started: Fri, 02 Feb 2024 02:07:19 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 02 Feb 2024 02:05:15 +0000
Finished: Fri, 02 Feb 2024 02:05:52 +0000
Ready: True
Restart Count: 214
Requests:
cpu: 200m
Liveness: http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
Startup: http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
Environment:
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
flexvolume-dir:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoExecute op=Exists
Events:
Type Reason Age From Message


Normal Created 21h kubelet Created container kube-controller-manager
Normal Started 21h kubelet Started container kube-controller-manager
Normal Pulled 21h kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine
Warning Unhealthy 21h kubelet Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Normal Started 21h (x3 over 21h) kubelet Started container kube-controller-manager
Normal Created 13h (x95 over 21h) kubelet Created container kube-controller-manager
Normal Killing 13h (x85 over 21h) kubelet Stopping container kube-controller-manager
Normal Pulled 13h (x99 over 21h) kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine
Normal SandboxChanged 13h (x88 over 21h) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 11h (x1736 over 21h) kubelet Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubemaster_kube-system(2db9bd12f78f5220150a5d8d383647fc)
Normal Started 6h38m (x3 over 6h45m) kubelet Started container kube-controller-manager
Normal SandboxChanged 6h33m (x5 over 6h45m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Created 6h20m (x7 over 6h45m) kubelet Created container kube-controller-manager
Normal Pulled 4h56m (x22 over 6h45m) kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine
Normal Killing 4h50m (x18 over 6h39m) kubelet Stopping container kube-controller-manager
Warning BackOff 3h47m (x452 over 6h43m) kubelet Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubemaster_kube-system(2db9bd12f78f5220150a5d8d383647fc)
Normal Killing 3h42m (x3 over 3h43m) kubelet Stopping container kube-controller-manager
Normal Pulled 3h42m (x3 over 3h44m) kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine
Normal Created 3h42m (x3 over 3h44m) kubelet Created container kube-controller-manager
Normal Started 3h42m (x3 over 3h44m) kubelet Started container kube-controller-manager
Normal SandboxChanged 3h42m (x4 over 3h44m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 3h33m (x52 over 3h43m) kubelet Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubemaster_kube-system(2db9bd12f78f5220150a5d8d383647fc)
Warning BackOff 125m (x352 over 3h24m) kubelet Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubemaster_kube-system(2db9bd12f78f5220150a5d8d383647fc)
Normal Killing 80m (x20 over 3h23m) kubelet Stopping container kube-controller-manager
Normal Created 80m (x26 over 3h25m) kubelet Created container kube-controller-manager
Normal Started 80m (x26 over 3h25m) kubelet Started container kube-controller-manager
Normal Pulled 80m (x26 over 3h25m) kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine
Normal SandboxChanged 80m (x21 over 3h25m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Killing 9m27s kubelet Stopping container kube-controller-manager
Normal SandboxChanged 9m26s (x2 over 9m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Created 5m9s (x3 over 9m32s) kubelet Created container kube-controller-manager
Normal Started 5m9s (x3 over 9m31s) kubelet Started container kube-controller-manager
Warning BackOff 3m3s (x13 over 9m26s) kubelet Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubemaster_kube-system(2db9bd12f78f5220150a5d8d383647fc)
Normal Pulled 2m50s (x4 over 9m32s) kubelet Container image "registry.k8s.io/kube-controller-manager:v1.28.6" already present on machine

And the kube-apiserver:

kubectl describe pod -n kube-system kube-apiserver-kubemaster
Name: kube-apiserver-kubemaster
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: kubemaster/192.168.56.11
Start Time: Fri, 02 Feb 2024 01:58:32 +0000
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.56.11:6443
kubernetes.io/config.hash: 35f64f3e5140428757af4d2c695db0fa
kubernetes.io/config.mirror: 35f64f3e5140428757af4d2c695db0fa
kubernetes.io/config.seen: 2024-02-01T04:17:20.302433239Z
kubernetes.io/config.source: file
Status: Running
SeccompProfile: RuntimeDefault
IP: 192.168.56.11
IPs:
IP: 192.168.56.11
Controlled By: Node/kubemaster
Containers:
kube-apiserver:
Container ID: containerd://a668208122107812d5316847b3b7f27a1be7f9bc7d928d457da5bda433a47c66
Image: registry.k8s.io/kube-apiserver:v1.28.6
Image ID: registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68
Port:
Host Port:
Command:
kube-apiserver
--advertise-address=192.168.56.11
--allow-privileged=true
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-issuer=https://kubernetes.default.svc.cluster.local
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Running
Started: Fri, 02 Feb 2024 02:06:56 +0000
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Fri, 02 Feb 2024 02:03:14 +0000
Finished: Fri, 02 Feb 2024 02:06:12 +0000
Ready: True
Restart Count: 207
Requests:
cpu: 250m
Liveness: http-get https://192.168.56.11:6443/livez delay=10s timeout=15s period=10s #success=1 #failure=8
Readiness: http-get https://192.168.56.11:6443/readyz delay=0s timeout=15s period=1s #success=1 #failure=3
Startup: http-get https://192.168.56.11:6443/livez delay=10s timeout=15s period=10s #success=1 #failure=24
Environment:
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoExecute op=Exists
Events:
Type Reason Age From Message


Normal Created 21h kubelet Created container kube-apiserver
Normal Started 21h kubelet Started container kube-apiserver
Normal Pulled 21h kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Normal SandboxChanged 21h kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 21h kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Normal Created 21h kubelet Created container kube-apiserver
Normal Started 21h kubelet Started container kube-apiserver
Warning Unhealthy 21h (x2 over 21h) kubelet Liveness probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 18h (x9 over 20h) kubelet Startup probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 18h (x11 over 20h) kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 15h (x242 over 21h) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 12h (x93 over 21h) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 11h (x80 over 21h) kubelet Stopping container kube-apiserver
Warning Unhealthy 11h (x1101 over 21h) kubelet Readiness probe failed: Get "https://192.168.56.11:6443/readyz": dial tcp 192.168.56.11:6443: connect: connection refused
Warning BackOff 11h (x2320 over 21h) kubelet Back-off restarting failed container kube-apiserver in pod kube-apiserver-kubemaster_kube-system(35f64f3e5140428757af4d2c695db0fa)
Normal Pulled 6h45m (x2 over 6h45m) kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Normal Created 6h45m (x2 over 6h45m) kubelet Created container kube-apiserver
Normal Started 6h45m (x2 over 6h45m) kubelet Started container kube-apiserver
Warning Unhealthy 6h43m kubelet Liveness probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 6h1m (x95 over 6h43m) kubelet Readiness probe failed: Get "https://192.168.56.11:6443/readyz": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 5h52m (x20 over 6h36m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 4h50m (x97 over 6h36m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 4h (x9 over 6h45m) kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Normal SandboxChanged 4h (x26 over 6h45m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Killing 3h48m (x26 over 6h45m) kubelet Stopping container kube-apiserver
Warning BackOff 3h47m (x588 over 6h45m) kubelet Back-off restarting failed container kube-apiserver in pod kube-apiserver-kubemaster_kube-system(35f64f3e5140428757af4d2c695db0fa)
Normal SandboxChanged 3h44m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3h44m kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Normal Created 3h44m kubelet Created container kube-apiserver
Normal Started 3h44m kubelet Started container kube-apiserver
Warning Unhealthy 3h38m (x16 over 3h38m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 3h38m (x6 over 3h38m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 3h32m (x2 over 3h35m) kubelet Stopping container kube-apiserver
Normal SandboxChanged 3h25m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Started 3h25m kubelet Started container kube-apiserver
Normal Created 3h25m kubelet Created container kube-apiserver
Warning Unhealthy 3h23m (x2 over 3h24m) kubelet Liveness probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 3h23m (x18 over 3h24m) kubelet Readiness probe failed: Get "https://192.168.56.11:6443/readyz": dial tcp 192.168.56.11:6443: connect: connection refused
Normal Pulled 155m (x11 over 3h25m) kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Warning Unhealthy 155m kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": net/http: TLS handshake timeout
Warning Unhealthy 154m kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": read tcp 192.168.56.11:44012->192.168.56.11:6443: read: connection reset by peer
Warning Unhealthy 133m (x73 over 3h22m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 124m (x16 over 3h24m) kubelet Stopping container kube-apiserver
Warning BackOff 62m (x467 over 3h23m) kubelet Back-off restarting failed container kube-apiserver in pod kube-apiserver-kubemaster_kube-system(35f64f3e5140428757af4d2c695db0fa)
Warning Unhealthy 62m kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": read tcp 192.168.56.11:57034->192.168.56.11:6443: read: connection reset by peer
Warning Unhealthy 9m1s (x2 over 9m11s) kubelet Startup probe failed: Get "https://192.168.56.11:6443/livez": dial tcp 192.168.56.11:6443: connect: connection refused
Normal SandboxChanged 8m52s (x2 over 9m27s) kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 8m48s (x5 over 8m52s) kubelet Back-off restarting failed container kube-apiserver in pod kube-apiserver-kubemaster_kube-system(35f64f3e5140428757af4d2c695db0fa)
Normal Created 8m36s (x2 over 9m27s) kubelet Created container kube-apiserver
Normal Pulled 8m36s (x2 over 9m27s) kubelet Container image "registry.k8s.io/kube-apiserver:v1.28.6" already present on machine
Normal Started 8m35s (x2 over 9m26s) kubelet Started container kube-apiserver
Normal Killing 5m47s (x2 over 9m22s) kubelet Stopping container kube-apiserver
Warning Unhealthy 5m40s (x7 over 5m46s) kubelet Readiness probe failed: Get "https://192.168.56.11:6443/readyz": dial tcp 192.168.56.11:6443: connect: connection refused
Warning Unhealthy 3m38s (x2 over 5m47s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500

If we check those ports on the Vagrant node:
(screenshot attached)

I'm trying to track down the possible errors, but could you guys perform a fresh install and validate?
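
One common cause of this exact symptom (control-plane pods restarting every few minutes after a fresh kubeadm init on Ubuntu with containerd) is a cgroup-driver mismatch: the kubelet uses the systemd driver while containerd is left on its default cgroupfs driver. A rough check and fix, assuming containerd's default config path:

sudo grep SystemdCgroup /etc/containerd/config.toml          # should print: SystemdCgroup = true
# if it is false or missing, regenerate the config with systemd cgroups enabled:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd kubelet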

Video content and picture for kube-proxy are wrong

On this page there is a graphic showing how kube-proxy works:
https://github.com/kodekloudhub/certified-kubernetes-administrator-course/blob/master/docs/02-Core-Concepts/09-Kube-Proxy.md

The same appears in the video content at around 2:47.

This graphic has wrong IP addresses: the service has IP 10.96.0.12, the pod on the left side has 10.32.0.14 and the one on the right side has 10.32.0.15.

So the one on the left side is wrong, as it shows 10.32.0.15 but it should be .14.

vagrant issue

Not able to find the private keys for the Vagrant VMs to connect to. These were present earlier under certified-kubernetes-administrator-course\.vagrant\machines\kubemaster\virtualbox but seem to have been removed or replaced. I need help setting up the cluster for kubeadm practice.
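
Vagrant can report where the per-machine private key currently lives, so the path does not have to be guessed (this is standard Vagrant behaviour, not specific to this repo):

vagrant ssh-config kubemaster     # the IdentityFile line shows the private key path
vagrant ssh kubemaster            # or let Vagrant handle the SSH connection entirely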

Question: What is the difference between setting the node's address to loopback versus its local Ethernet IP?

https://github.com/kodekloudhub/certified-kubernetes-administrator-course/blob/master/ubuntu/vagrant/setup-hosts.sh#L5

The provisioning code above does this:

# /etc/hosts

# change from 
# 127.0.1.1	cluster1-master1	cluster1-master1
# to
10.0.2.15 cluster1-master1 cluster1-master1.local

What is the difference between cluster1-master1 resolving to 127.0.1.1 and resolving to 10.0.2.15 locally?
Also, what is the meaning of listing the same name twice for one IP (127.0.1.1 cluster1-master1 cluster1-master1)?
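
For context, a minimal way to see what the script changes (assuming the default names from the course Vagrantfile): with the stock 127.0.1.1 entry the hostname resolves to loopback, so any component that advertises "its own" address ends up advertising an IP other nodes cannot reach; after setup-hosts.sh rewrites the entry, the name resolves to a routable address.

getent hosts cluster1-master1    # before: 127.0.1.1 cluster1-master1
getent hosts cluster1-master1    # after setup-hosts.sh: 10.0.2.15 cluster1-master1 cluster1-master1.local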

Mistake in LAB Labels and Selectors Question#4

Question#4

Identify the POD which is part of the prod environment, the finance BU and of frontend tier?

The BU in uppercase is wrong in the question; it should be lowercase, otherwise the CLI does not find anything:

controlplane ~ ➜  kubectl get all --selector env=prod,BU=finance,tier=frontend
No resources found in default namespace.

While when it is in lowercase:

controlplane ~ ➜  kubectl get all --selector env=prod,bu=finance,tier=frontend
NAME              READY   STATUS    RESTARTS   AGE
pod/app-1-zzxdf   1/1     Running   0          3m58s

It is also written in lowercase in the Hints tab.

Please fix it.

Some errors in Practice Test Role Based Access Controls

In step 10 of Role Based Access Controls practice test, it says

The dev-user is trying to get details about the dark-blue-app pod in the blue namespace. Investigate and fix the issue.

We have created the required roles and rolebindings, but something seems to be wrong.

However, that namespace does not exist, let alone the role + rolebinding. I tried the scenario about 3 times, and it did this each time. This was roughly the first week of December.

I looked in a few of the usual folders where some of the YAML is stored, such as /var/ and /opt, but did not see any.

BTW I was able to complete the question by creating the namespace, role, and rolebinding, and guessing the correct role requirements.
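
A rough reconstruction of the objects the step apparently expects (the role name and verbs are guesses, since the lab never created them):

kubectl create namespace blue
kubectl create role dev-user-role --namespace=blue --verb=get,list,watch --resource=pods
kubectl create rolebinding dev-user-binding --namespace=blue --role=dev-user-role --user=dev-user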

There was an earlier step (perhaps step 5 or 6) in the same test with a smaller issue: it said that a user had been created and to check out the corresponding info... but the role did not exist. Luckily that question was multiple choice (actually just 2 choices), so it was easy to complete.

The final step, 11, possibly had a minor issue: it said that the API group should be apps and extensions, whereas it only needs to be one of those two, the one that pertains to the version of Kubernetes being used.

Note: I emailed support@kodekloud about this and heard back that it is being looked at, but I have since found out that I can submit issues here, which makes more sense since technically I'm reporting a problem, not asking for support :)

Typo

In the README.md file "08-Security" should be "08-Storage"

Unable to SSH into the nodes

Hi, plain SSH from a terminal on the host OS is not working. Can you please look into this?

rswarnka@rswarnka:~$ sudo ssh [email protected]
The authenticity of host '192.168.50.2 (192.168.50.2)' can't be established.
ECDSA key fingerprint is SHA256:SAPR79xExzIzGf1TO+5hBdUNhhvVTTruhZipYrxEQbI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.50.2' (ECDSA) to the list of known hosts.
[email protected]: Permission denied (publickey).
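
The boxes are provisioned with Vagrant-generated keys rather than passwords, so a plain ssh to the VM's IP is expected to fail unless the key is supplied explicitly. A sketch of the usual workaround (the machine name kubenode01 and the key path are illustrative; check vagrant ssh-config for the real values):

vagrant ssh kubenode01                                                            # simplest: let Vagrant pick the key
ssh -i .vagrant/machines/kubenode01/virtualbox/private_key vagrant@192.168.50.2   # or pass the key manually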

macOS Monterey - unable to provision VMs using vagrant up

There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 95 of file VBoxManageHostonly.cpp
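
On macOS this error usually means the VirtualBox kernel extensions were blocked or not loaded. A commonly suggested sequence (the script path is for VirtualBox 6.x and may differ between versions):

sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart   # reload the VirtualBox kernel extensions
# then approve Oracle's kernel extension under System Preferences > Security & Privacy and retry:
vagrant up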

Vagrant Issue

I am getting the following error:
(screenshot attached)

Does someone know how to solve this?
I am working on Windows.
Kind regards.

LAB - kubeadm init doesn't work well

Section 11 - installing kubernetes the kubeadm way

In the lab, the following command doesn't work well - even with my IP...

kubeadm init --apiserver-cert-extra-sans=controlplane --apiserver-advertise-address 10.2.223.3 --pod-network-cidr=10.244.0.0/16

I have:

ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.187.196.6 netmask 255.255.255.0 broadcast 10.187.196.255

and so I run:

root@controlplane:~# kubeadm init --apiserver-advertise-address=10.187.196.6 --apiserver-cert-extra-sans=controlplane --pod-network-cidr=10.244.0.0/16

which gives me:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

[init] Using Kubernetes version: v1.22.0
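
The kubeadm output above already hints at the next step: check whether the kubelet is actually running on the lab node and what it is logging before re-running the init. A generic troubleshooting sequence, not specific to this lab:

systemctl status kubelet                             # should be active (running)
journalctl -xeu kubelet --no-pager | tail -n 50      # the kubelet usually logs why the static pods cannot start
kubeadm reset -f                                     # clean up a half-finished init before retrying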

Lecture Note for CKA 0200

Lecture Note for CKA 0200, page 31

In the my-custom-scheduler YAML file, the last two lines do not appear to be on new lines properly.
