NAME READY STATUS RESTARTS AGE
anetd-cf8jz 1/1 Running 0 13m
anetd-q5vzr 1/1 Running 0 13m
anetd-rzk7g 1/1 Running 0 13m
antrea-controller-horizontal-autoscaler-7b69d9bfd7-f82m6 0/1 Pending 0 13m
event-exporter-gke-7bf6c99dcb-grmz7 0/2 Pending 0 13m
filestore-node-4vd54 3/3 Running 0 13m
filestore-node-86dbn 3/3 Running 0 13m
filestore-node-dssdr 3/3 Running 0 13m
fluentbit-gke-f9hh9 2/2 Running 0 13m
fluentbit-gke-m2hqb 2/2 Running 0 13m
fluentbit-gke-wscl5 2/2 Running 0 13m
gke-metadata-server-2q8q5 1/1 Running 0 13m
gke-metadata-server-5xgg5 1/1 Running 0 13m
gke-metadata-server-hmz6s 1/1 Running 0 13m
hubble-generate-certs-init-64mnp 0/1 Pending 0 13m
hubble-relay-677f85b964-v2cxd 0/2 Pending 0 14m
konnectivity-agent-autoscaler-5d9dbcc6d8-swvst 0/1 Pending 0 14m
konnectivity-agent-fb695849d-6ks95 0/1 Pending 0 13m
konnectivity-agent-fb695849d-hdq7q 0/1 Pending 0 14m
konnectivity-agent-fb695849d-qvck9 0/1 Pending 0 13m
kube-dns-7f58849488-rngxv 0/3 Pending 0 13m
kube-dns-7f58849488-rtb7g 0/3 Pending 0 14m
kube-dns-autoscaler-84b8db4dc7-4qpmx 0/1 Pending 0 13m
l7-default-backend-d86c96845-6mhrm 0/1 Pending 0 14m
metrics-server-v0.5.2-8569bc4cf9-rt26w 0/2 Pending 0 14m
netd-74jz8 1/1 Running 0 13m
netd-ckswg 1/1 Running 0 13m
netd-k6pzk 1/1 Running 0 13m
pdcsi-node-csvx5 2/2 Running 0 13m
pdcsi-node-n46x7 2/2 Running 0 13m
pdcsi-node-xvqkx 2/2 Running 0 13m
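Only the per-node DaemonSet pods (anetd, netd, filestore-node, fluentbit-gke, gke-metadata-server, pdcsi-node) are Running; everything else is stuck in Pending. The anetd pods are the Dataplane V2 (Cilium) agents, so as a first check I can look at which node each one landed on (plain kubectl, nothing cluster-specific assumed beyond the kube-system namespace):

# show the node each Cilium agent (anetd) pod is running on
kubectl -n kube-system get pods -o wide | grep anetd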
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16m (x2 over 16m) default-scheduler no nodes available to schedule pods
Normal NotTriggerScaleUp 16m cluster-autoscaler pod didn't trigger scale-up:
Warning FailedScheduling 16m default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Normal NotTriggerScaleUp 95s (x84 over 15m) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}
Warning FailedScheduling 9s (x3 over 11m) default-scheduler 0/3 nodes are available: 3 node(s) had untolerated taint {node.cilium.io/agent-not-ready: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Error from server (BadRequest): pod hubble-relay-677f85b964-v2cxd does not have a host assigned
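The scheduling events all point at the node.cilium.io/agent-not-ready taint, which the Cilium agent is supposed to remove once it comes up. A quick way to check whether that taint is still sitting on the nodes (standard kubectl, no cluster-specific assumptions):

# list each node together with the keys of its taints
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'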
dataplane_v2_enabled = true
enable_dpv2_hubble = true
machine_type = "e2-standard-2"
preemptible = false
disk_size_gb = 40
initial_node_count = 3
min_nodes = 3
max_nodes = 6
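So the node pool is created with Dataplane V2 and Hubble enabled (the settings above). To confirm the created cluster really has the advanced datapath turned on, something like the following should work (CLUSTER_NAME and ZONE are placeholders; as far as I know the relevant field in the describe output is networkConfig.datapathProvider):

# should print ADVANCED_DATAPATH when Dataplane V2 is enabled
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format="value(networkConfig.datapathProvider)"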
NAME STATUS ROLES AGE VERSION
gke-cluster-nodepool-d5a1f7ad-cf52 Ready <none> 26m v1.27.3-gke.100
gke-cluster-nodepool-d5a1f7ad-gm5c Ready <none> 26m v1.27.3-gke.100
gke-cluster-nodepool-d5a1f7ad-pwhp Ready <none> 26m v1.27.3-gke.100
So it seems like the nodes are up and running in my zonal cluster.
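Since the nodes show Ready but the agent-not-ready taint apparently never gets cleared, the next things I can try are below (the pod and node names are the ones from the listings above; removing the taint by hand is only a workaround, since normally the agent clears it itself once it is ready):

# check whether the Cilium agent on one node ever finished initializing
kubectl -n kube-system logs anetd-cf8jz --tail=50

# last-resort workaround: drop the taint manually on one node
kubectl taint nodes gke-cluster-nodepool-d5a1f7ad-cf52 node.cilium.io/agent-not-ready-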