Comments (13)
Here is my cloud controller manager log:
W0920 19:47:10.135265 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0920 19:47:10.165727 1 controllermanager.go:108] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
W0920 19:47:10.166171 1 authentication.go:55] Authentication is disabled
I0920 19:47:10.166187 1 insecure_serving.go:49] Serving insecurely on [::]:10253
I0920 19:47:10.168184 1 node_controller.go:89] Sending events to api server.
I0920 19:47:10.172232 1 controllermanager.go:264] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
I0920 19:47:10.174551 1 pvlcontroller.go:107] Starting PersistentVolumeLabelController
I0920 19:47:10.174661 1 controller_utils.go:1025] Waiting for caches to sync for persistent volume label controller
I0920 19:47:10.175088 1 service_controller.go:183] Starting service controller
I0920 19:47:10.175444 1 controller_utils.go:1025] Waiting for caches to sync for service controller
I0920 19:47:10.283980 1 controller_utils.go:1032] Caches are synced for service controller
I0920 19:47:10.284146 1 service_controller.go:636] Detected change in list of current cluster nodes. New node set: map[node3:{}]
I0920 19:47:10.284516 1 service_controller.go:644] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0920 19:47:10.303447 1 controller_utils.go:1032] Caches are synced for persistent volume label controller
I0920 19:52:34.005342 1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"traefik", UID:"b7692cdf-bd0e-11e8-a7d8-722135fbe584", APIVersion:"v1", ResourceVersion:"2482", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0920 19:52:34.514342 1 loadbalancers.go:269] error getting node addresses for bastion: could not get private ip: <nil>
I0920 19:53:41.191494 1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"traefik", UID:"b7692cdf-bd0e-11e8-a7d8-722135fbe584", APIVersion:"v1", ResourceVersion:"2482", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer
from digitalocean-cloud-controller-manager.
Hi @EnigmaCurry! As of today, the cloud controller manager assumes droplets have private networking enabled. Based on those logs I'm assuming you don't have it enabled.
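Since the CCM resolves each droplet's private address, one way to verify that private networking is actually enabled is to query DigitalOcean's droplet metadata service from the droplet itself. This is a sketch; interface naming (`eth1`) varies by image and droplet generation:

```shell
# Run on the droplet. The metadata service reports the private IPv4 address
# when private networking is enabled; an empty response or 404 suggests it is not.
curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address
echo

# Cross-check locally: eth1 traditionally carries the private address on
# DigitalOcean droplets (the interface name is an assumption here).
ip -4 addr show dev eth1 2>/dev/null | grep -w inet
```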
related discussion around this: #85
Hi Andrew, I am using private networking:
[root@dev kubelab]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready master,node 1h v1.11.2 10.132.126.106 <none> Ubuntu 18.04.1 LTS 4.15.0-30-generic docker://17.3.2
node2 Ready master,node 1h v1.11.2 10.132.126.118 <none> Ubuntu 18.04.1 LTS 4.15.0-30-generic docker://17.3.2
node3 Ready node 1h v1.11.2 10.132.126.120 <none> Ubuntu 18.04.1 LTS 4.15.0-30-generic docker://17.3.2
Can you share your output from kubectl get node node1 -o yaml?
apiVersion: v1
kind: Node
metadata:
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"com.digitalocean.csi.dobs":"111007088"}'
    flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"52:85:2a:dc:49:ed"}'
    flannel.alpha.coreos.com/backend-type: vxlan
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
    flannel.alpha.coreos.com/public-ip: 165.227.176.137
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-09-20T19:33:17Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: node1
    node-role.kubernetes.io/master: "true"
    node-role.kubernetes.io/node: "true"
  name: node1
  resourceVersion: "9019"
  selfLink: /api/v1/nodes/node1
  uid: 06666026-bd0c-11e8-93c5-e22cee2822ba
spec:
  podCIDR: 10.233.64.0/24
status:
  addresses:
  - address: 10.132.126.106
    type: InternalIP
  - address: node1
    type: Hostname
  allocatable:
    cpu: 800m
    ephemeral-storage: "46663523866"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 1438960Ki
    pods: "110"
  capacity:
    cpu: "1"
    ephemeral-storage: 50633164Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 2041360Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: 2018-09-20T21:04:32Z
    lastTransitionTime: 2018-09-20T19:33:17Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-09-20T21:04:32Z
    lastTransitionTime: 2018-09-20T19:33:17Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-09-20T21:04:32Z
    lastTransitionTime: 2018-09-20T19:33:17Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-09-20T21:04:32Z
    lastTransitionTime: 2018-09-20T19:33:17Z
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: 2018-09-20T21:04:32Z
    lastTransitionTime: 2018-09-20T19:34:28Z
    message: kubelet is posting ready status. AppArmor enabled
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - gcr.io/google-containers/hyperkube-amd64@sha256:f380059a8090b5d29da8d99844af3ac4a015514e9c8bed05cc78d92aa3f80837
    - gcr.io/google-containers/hyperkube-amd64:v1.11.2
    sizeBytes: 625914512
  - names:
    - gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
    - gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.0
    sizeBytes: 122460923
  - names:
    - nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35
    - nginx:1.13
    sizeBytes: 108958610
  - names:
    - gcr.io/kubernetes-helm/tiller@sha256:2a3dd484ecfcf9343994e0f6c2af0a6faf1af7f7e499905793643f91e90edcb3
    - gcr.io/kubernetes-helm/tiller:v2.10.0
    sizeBytes: 68964748
  - names:
    - quay.io/calico/kube-controllers@sha256:a6d6b1a01e773792465254f72056e69ebd007cafda086b3c763e9ffbe5093bfe
    - quay.io/calico/kube-controllers:v3.2.0-amd64
    sizeBytes: 60252084
  - names:
    - gcr.io/google_containers/cluster-proportional-autoscaler-amd64@sha256:791836c2a2471ecb23a975cf12835f2a35ada3f9d841f3f1a363b83eca99e2aa
    - gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2
    sizeBytes: 50485461
  - names:
    - quay.io/coreos/flannel-cni@sha256:734b4222110980abd0abe74974b6ca36452d26bdd2a20e25f37fdf7fdc2da170
    - quay.io/coreos/flannel-cni:v0.3.0
    - quay.io/coreos/flannel@sha256:6ecef07be35e5e861016ee557f986f89ad8244df47198de379a1bf4e580185df
    - quay.io/coreos/flannel:v0.10.0
    sizeBytes: 44598861
  - names:
    - gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:4f1ab957f87b94a5ec1edc26fae50da2175461f00afecf68940c4aa079bd08a4
    - gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.10
    sizeBytes: 41635309
  - names:
    - gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:bbb2a290a568125b3b996028958eb773f33b5b87a6b37bf38a28f8b62dddb3c8
    - gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10
    sizeBytes: 40372149
  - names:
    - quay.io/coreos/etcd@sha256:1f433f387d7d0ff283e9abfdd493a461175bee0c372deca68c5633eb7ce7ee56
    - quay.io/coreos/etcd:v3.2.18
    sizeBytes: 37232444
  - names:
    - digitalocean/do-csi-plugin@sha256:ccda85cecb6a0fccd8492acff11f4d3071036ff97f1f3226b9dc3995d9f372da
    - digitalocean/do-csi-plugin:v0.2.0
    sizeBytes: 19856073
  - names:
    - gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516
    - gcr.io/google_containers/pause-amd64:3.0
    sizeBytes: 746888
  nodeInfo:
    architecture: amd64
    bootID: ae55b6b4-cbd3-4945-a59e-a6d9755c7ec9
    containerRuntimeVersion: docker://17.3.2
    kernelVersion: 4.15.0-30-generic
    kubeProxyVersion: v1.11.2
    kubeletVersion: v1.11.2
    machineID: d7eeccf9bac74ce591026432119bfb93
    operatingSystem: linux
    osImage: Ubuntu 18.04.1 LTS
    systemUUID: D7EECCF9-BAC7-4CE5-9102-6432119BFB93
Not sure if it matters, but my cluster droplets are not in my default DO project. However, I do note that the load balancer is spawned in my default project.
I also noticed that this droplet is not registered correctly. If it were, its droplet ID would be in the node.Spec.ProviderID field.
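A quick way to spot this across the whole cluster is to print each node's providerID. On a node the DO CCM has initialized, the field looks like `digitalocean://<droplet-id>`; an empty column means the node was registered without the external cloud provider:

```shell
# List every node with its providerID (blank = not initialized by the CCM).
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID
```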
Did you happen to run the CCM on an existing cluster, or was it a fresh cluster? i.e. was --cloud-provider=external set when the kubelet was registered?
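One way to check this on a node is to look for the flag in the kubelet's configuration. The grep targets below are assumptions; where the flag lives depends on how the cluster was provisioned:

```shell
# Look for --cloud-provider in the usual kubelet config locations
# (systemd drop-ins and /etc/kubernetes are common, but not guaranteed).
grep -r -- '--cloud-provider' /etc/systemd/system/kubelet.service.d/ /etc/kubernetes/ 2>/dev/null
```

When the kubelet registers with `--cloud-provider=external`, the node gets the `node.cloudprovider.kubernetes.io/uninitialized` taint, which the external CCM removes after initializing the node and setting spec.providerID. Without the flag, that handoff never happens.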
I've reproduced this last night on one cluster, and today on a new cluster.
My project linked above bootstraps from scratch.
It looks like kubespray does not set --cloud-provider by default; I can try applying that setting on a fresh cluster.
Please do; not setting --cloud-provider=external when the kubelet registers will definitely lead to unexpected behaviours :)
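For kubespray specifically, the setting lives in the inventory group vars rather than on the kubelet directly. A minimal sketch, assuming a kubespray release from this era (the file path may differ between versions):

```yaml
# inventory/<cluster>/group_vars/all/all.yml
# Tells kubespray to start the kubelet with --cloud-provider=external
# so an external CCM (like digitalocean-cloud-controller-manager) can
# initialize the nodes.
cloud_provider: external
```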
Hmm, the flannel config is now hanging when I use that, but I think I'm on the right track. I'll close this, keep working on it, and reopen it if I can reproduce again using --cloud-provider=external.
Thanks!