
digitalocean-cloud-controller-manager's Issues

Use correct format for Node.Spec.ProviderID

Kubernetes uses the format providerName://node-id for Node.Spec.ProviderID. Functions like NodeAddressesByProviderID should expect the providerID to be in this format. This isn't an issue for the time being since the cloud controller manager doesn't even set Node.Spec.ProviderID yet, but since it will in the future we should plan accordingly (see the related issue below).

See this for the expected format: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L73-L84

Related:
kubernetes/kubernetes#49836
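
For reference, a minimal Go sketch of building and validating a providerID in the digitalocean://12345 form could look like this (an illustration only, not the project's actual implementation):

package main

import (
    "fmt"
    "strconv"
    "strings"
)

const providerPrefix = "digitalocean://"

// buildProviderID renders a droplet ID in the providerName://node-id format.
func buildProviderID(dropletID int) string {
    return fmt.Sprintf("%s%d", providerPrefix, dropletID)
}

// dropletIDFromProviderID extracts the droplet ID again, rejecting values
// that do not carry the expected prefix.
func dropletIDFromProviderID(providerID string) (int, error) {
    if !strings.HasPrefix(providerID, providerPrefix) {
        return 0, fmt.Errorf("unexpected providerID format: %s", providerID)
    }
    return strconv.Atoi(strings.TrimPrefix(providerID, providerPrefix))
}

func main() {
    id := buildProviderID(12345)
    fmt.Println(id) // digitalocean://12345

    dropletID, err := dropletIDFromProviderID(id)
    fmt.Println(dropletID, err) // 12345 <nil>
}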

LoadBalancer doesn't add any droplets (crashes on a droplet not used in the cluster)

Hi!

I have just created a new cluster with kubeadm and added do-ccm. Creating a load balancer failed: it got stuck while adding droplets to the load balancer:

kubectl -n kube-system logs -f digitalocean-cloud-controller-manager-56bd986844-jn286
W0708 16:30:39.020273       1 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0708 16:30:39.048026       1 controllermanager.go:105] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
W0708 16:30:39.051714       1 authentication.go:55] Authentication is disabled
I0708 16:30:39.051824       1 insecure_serving.go:44] Serving insecurely on [::]:10253
I0708 16:30:39.054121       1 node_controller.go:86] Sending events to api server.
I0708 16:30:39.055122       1 pvlcontroller.go:107] Starting PersistentVolumeLabelController
I0708 16:30:39.055139       1 controller_utils.go:1019] Waiting for caches to sync for persistent volume label controller
I0708 16:30:39.055951       1 controllermanager.go:258] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
I0708 16:30:39.056171       1 service_controller.go:183] Starting service controller
I0708 16:30:39.056192       1 controller_utils.go:1019] Waiting for caches to sync for service controller
I0708 16:30:39.155327       1 controller_utils.go:1026] Caches are synced for persistent volume label controller
I0708 16:30:39.156331       1 controller_utils.go:1026] Caches are synced for service controller
I0708 16:30:39.156534       1 service_controller.go:636] Detected change in list of current cluster nodes. New node set: map[worker1:{} worker2:{} worker3:{}]
I0708 16:30:39.163151       1 service_controller.go:644] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I0708 16:52:36.106744       1 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"worker2", UID:"efd803d9-82bf-11e8-ac1a-e21a8d63bb7d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Deleting Node worker2 because it's not present according to cloud provider' Node worker2 event: DeletingNode
I0708 16:53:46.963070       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"d456d2d9-82ce-11e8-ac1a-e21a8d63bb7d", APIVersion:"v1", ResourceVersion:"14539", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
I0708 16:53:46.963132       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"d456d2d9-82ce-11e8-ac1a-e21a8d63bb7d", APIVersion:"v1", ResourceVersion:"14539", FieldPath:""}): type: 'Normal' reason: 'Type' NodePort -> LoadBalancer
E0708 16:53:47.768379       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
E0708 16:53:47.768421       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
I0708 16:53:59.166272       1 service_controller.go:636] Detected change in list of current cluster nodes. New node set: map[worker1:{} worker3:{}]
E0708 16:53:59.377640       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
E0708 16:53:59.377865       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
I0708 16:54:53.863448       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"d456d2d9-82ce-11e8-ac1a-e21a8d63bb7d", APIVersion:"v1", ResourceVersion:"14539", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer
E0708 16:54:56.402307       1 service_controller.go:660] External error while updating load balancer: PUT https://api.digitalocean.com/v2/load_balancers/93b9c153-a39f-415c-8bd4-763ffac1e6e5: 422 (request "c66e6aab-3784-4edd-91ad-cbfc57705882") Load Balancer can't be updated while it processes previous actions.
I0708 16:54:56.402554       1 service_controller.go:644] Successfully updated 2 out of 3 load balancers to direct traffic to the updated set of nodes
I0708 16:54:56.402706       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"d456d2d9-82ce-11e8-ac1a-e21a8d63bb7d", APIVersion:"v1", ResourceVersion:"14539", FieldPath:""}): type: 'Warning' reason: 'LoadBalancerUpdateFailed' Error updating load balancer with new hosts map[worker1:{} worker3:{}]: PUT https://api.digitalocean.com/v2/load_balancers/93b9c153-a39f-415c-8bd4-763ffac1e6e5: 422 (request "c66e6aab-3784-4edd-91ad-cbfc57705882") Load Balancer can't be updated while it processes previous actions
E0708 16:56:37.022723       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
E0708 16:56:37.022771       1 loadbalancers.go:269] error getting node addresses for host.example.com: could not get private ip: <nil>
I0708 16:56:38.566833       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"d456d2d9-82ce-11e8-ac1a-e21a8d63bb7d", APIVersion:"v1", ResourceVersion:"14539", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts

host.example.com is a different droplet on the same account (there are a total of 5 droplets: 1 master, 3 workers, and host.example.com).

Also, after running kubeadm join and starting kubelet, I modified the kubelet config and restarted the service. I don't know if this is okay/enough to make do-ccm work properly?

Error fetching by providerID

  • My node name and droplet name are the same
  • My DO access token is created

I'm getting this error:

E0530 19:14:36.524264       1 node_controller.go:327] NodeAddress: Error fetching by providerID: providerID cannot be empty string Error fetching by NodeName: instance not found
I0530 19:14:36.524291       1 node_controller.go:392] Successfully initialized node rancher-1.briggs.io with cloud provider

I used the release file from here: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/releases/v0.1.6.yml

slack channel

Is there a slack channel or something where I could ask questions?

I'm mostly wondering if I can use this right now, and whether it is capable of automatic node scaling.

Is it meant for the general public or only for DigitalOcean's internal use?

Watch for CSI actions until they are completed

Currently, after provisioning, deleting, attaching, or otherwise operating on a volume, we don't wait until the action is completed. We should have a waitForAction function that checks (via backoff) that the action is indeed completed.
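
A rough sketch of what such a helper could look like, assuming the godo Actions.Get call and its action status constants, polling with a capped exponential backoff:

package csi

import (
    "context"
    "fmt"
    "time"

    "github.com/digitalocean/godo"
)

// waitForAction polls the Actions API until the action leaves the
// in-progress state, doubling the wait between polls up to a cap.
func waitForAction(ctx context.Context, client *godo.Client, actionID int) error {
    backoff := time.Second
    for {
        action, _, err := client.Actions.Get(ctx, actionID)
        if err != nil {
            return err
        }

        switch action.Status {
        case godo.ActionCompleted:
            return nil
        case godo.ActionInProgress:
            // not done yet, keep polling below
        default:
            return fmt.Errorf("action %d failed with status %q", actionID, action.Status)
        }

        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(backoff):
        }
        if backoff < 30*time.Second {
            backoff *= 2
        }
    }
}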

installing cluster with kubeadm

Hi!

I have 5 nodes and am trying to install Kubernetes with the CCM like this:

# Common setup:
ssh-keygen
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common zsh git curl wget docker.io kubelet kubeadm kubectl
sh -c "$(wget https://raw.githubusercontent.com/Richard87/oh-my-zsh/master/tools/install.sh -O -)"
systemctl enable docker

# Master
kubeadm init --pod-network-cidr 10.244.0.0/16

# admin
scp [email protected]:/etc/kubernetes/admin.conf ~/.kube/config
kubectl create secret generic digitalocean --from-literal "access-token=XXXXXXXXXXXXXXXXXXXXXXX" -n kube-system
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/digitalocean/digitalocean-cloud-controller-manager/master/releases/v0.1.6.yml

# Workers
add `--cloud-provider=external` to config `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
kubeadm join

The worker kubelet log:

-- Logs begin at Mon 2018-07-09 07:58:55 UTC. --
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: watch of *v1.Service ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Unexpected watch close - watch lasted less than a second and no items received
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Unexpected watch close - watch lasted less than a second and no items received
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Unexpected watch close - watch lasted less than a second and no items received
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:379] Unable to update node status: update node status exceeds retry count
kubelet: E0709: eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Unexpected watch close - watch lasted less than a second and no items received
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Unexpected watch close - watch lasted less than a second and no items received
kubelet: W0709: reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: watch of *v1.Service ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Unexpected watch close - watch lasted less than a second and no items received
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:391] Error updating node status, will retry: error getting node "worker4": nodes "worker4" not found
kubelet: E0709: kubelet_node_status.go:379] Unable to update node status: update node status exceeds retry count

(all the nodes are showing exactly the same messages)

# Droplets:

ID           Name                      Public IPv4        Private IPv4     Public IPv6                                Memory    VCPUs    Disk    Region    Image               Status    Tags
90704279     vps.example.com           255.255.255.255                     XXXX:XXXX:XXX:00D0:0000:0000:0CCE:1001     4096      2        80      fra1      CentOS 7.4 x64      active    
100908325    kubernetes.example.com    255.255.255.255    10.133.28.214                                               2048      2        40      ams3      Ubuntu 18.04 x64    active    
100908351    worker3.example.com       255.255.255.255    10.133.56.25                                                2048      2        40      ams3      Ubuntu 18.04 x64    active    
100908352    worker1.example.com       255.255.255.255    10.133.57.7                                                 2048      2        40      ams3      Ubuntu 18.04 x64    active    
100908353    worker2.example.com       255.255.255.255    10.133.57.53                                                2048      2        40      ams3      Ubuntu 18.04 x64    active    
100994275    worker4.example.com       255.255.255.255    10.133.60.122                                               2048      2        40      ams3      Ubuntu 18.04 x64    active    

Healthcheck Path

Hey

Is there a way to define the healthcheck path for the load balancers?

My problem is that the root is protected by Basic Auth, so it always fails the health check. There is a specific /health endpoint I need to call to see if the app is alive or not.

Adding to running cluster

Hello

I tried adding digitalocean-cloud-controller-manager last night, but it didn't work out.
The cluster was created with bootkube and the manifest is updated to match v0.6.2.

Here are the steps I did:

  1. Add --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname to the kube-apiserver.
  2. Deploy digitalocean-cloud-controller-manager
  3. Add --cloud-provider=external to kube-apiserver, kube-controller-manager and the kubelets
  4. Change --hostname-override to --hostname-override=%H
  5. Restart all the kubelets
  6. Observe that digitalocean-cloud-controller-manager isn't adding labels like region or any other labels; the node.cloudprovider.kubernetes.io/uninitialized taint isn't removed either.

Did I miss something, or is updating a running cluster not supported?

The cluster ended up crashing (too many kubelets restarted at the same time, and digitalocean-cloud-controller-manager then removed the node, so the pod was evicted). In the end I rolled back to the previous configuration, but I tried getting digitalocean-cloud-controller-manager working for a few hours first.

HTTPS redirect not working

I tried creating a LoadBalancer based off the example here: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/http-nginx-with-redirect.yml

It seems as though having service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true" has no impact on the created LoadBalancer. When I go to my DO admin portal the SSL setting is set to "No redirect". If I edit that setting in the portal, then redirect works as expected. Seems like it is a problem with how the LoadBalancer is provisioned.

Cyclic dependency on network and ccm

Hi, I'm trying to set up a digitalocean-ccm based cluster with kubespray. With cloud-provider set to 'external', I'm unable to start any pods until ccm is ready (makes sense). However, I cannot start ccm until my network is up and running. So it's a deadlock.

  • If I start flannel first, flannel never starts, due to the taint of cloud-provider=external
  • If I start ccm first, it never starts, due to nodes in NotReady state (because network is not ready.)

I've applied the following config:

  • kubelet --cloud-provider=external
  • hostnames == droplet names, and no override-hostname used on kubelet.

Now kubespray has this notion of 'cloud' networking, which I have also tried (instead of 'flannel'). In this case, ccm does indeed start up, and my nodes get initialized successfully! But I do not get a cluster subnet (10.233.0.0) set up. I assume DO-ccm simply doesn't implement assigning pod IPs the way flannel does? I cannot install helm into such a cluster; it just times out trying to reach the 10.233.0.1 IP address.

Can you provide any details about tested cluster networking configs? I've tried 'cloud', 'flannel', and 'calico', but to no avail. Do you think I need to add a toleration so flannel can start even in the uninitialized state?

Feature Request : Allow name prefix on created resources

At the moment, when for example a load balancer is provisioned through CCM, it gets assigned a name that is bound to be unique.

This approach works beautifully for a purely machine-based operator. But since there is no way to prefix a resource name, it leads to confusion when looking at the DigitalOcean dashboard. As such it would be beneficial to allow a prefix to be set (a prefix, not an override).

For example: let's say a load balancer in the current scheme would get a UID of qwerty123; with a prefix it could become something like staging-api-qwerty123, which is both unique for automation reasons and readable to human operators.

This behavior can be added through an annotation, or could take the name of the k8s service managing it as the prefix.

With regard to verification, it would need to take into account any limitations DigitalOcean places on the length of the name that a resource such as a load balancer can have.

LoadBalancer does not attach cluster droplets

I have a service definition that requests a LoadBalancer, and so the cloud-controller-manager spins up the Load Balancer instance, and my service receives the correct external IP address.

However, no droplets are attached to the Load Balancer as viewed from the control panel. If I manually attach each node of my cluster to the Load Balancer, then everything works, and proxying to my k8s service works fine.

My setup:

  • version 0.1.7 of digitalocean-cloud-controller-manager
  • kubernetes 1.11.2 deployed via kubespray
  • flannel networking using private networking

Here are my reproducible steps: https://github.com/EnigmaCurry/kubelab/blob/kubelab/README.md

Set droplet tags as kubernetes labels

I think it can be very useful to set up the cloud-controller-manager to apply the tags attached to a droplet as labels on the corresponding node.

This feature would give me the automation I need to build a more complex cluster topology; a rough sketch of the idea is below.
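
For illustration only, here is a sketch of how droplet tags could be mapped onto node labels; the tags.digitalocean.com/ label prefix is purely hypothetical and not an existing convention of this project:

package main

import (
    "fmt"

    "github.com/digitalocean/godo"
)

// labelsFromDropletTags turns droplet tags into node label key/value pairs.
// The label key prefix here is made up for the example.
func labelsFromDropletTags(droplet *godo.Droplet) map[string]string {
    labels := make(map[string]string, len(droplet.Tags))
    for _, tag := range droplet.Tags {
        labels["tags.digitalocean.com/"+tag] = "true"
    }
    return labels
}

func main() {
    d := &godo.Droplet{Tags: []string{"worker", "pool-a"}}
    fmt.Println(labelsFromDropletTags(d)) // map[tags.digitalocean.com/pool-a:true tags.digitalocean.com/worker:true]
}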

How to enable HTTP to HTTPS redirect?

I've been looking through the provided annotations and I can't seem to find one that toggles the 'HTTP to HTTPS' redirect option on DO load balancers.

( I went through this file : https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/do/loadbalancers.go )

At the moment my setup is like this (Terraform based):

...
resource "kubernetes_service" "srv" {
        metadata {
                name = "..."
                namespace = "${var.m_namespace}"
                annotations {
                        "service.beta.kubernetes.io/do-loadbalancer-protocol" = "http"
                        "service.beta.kubernetes.io/do-loadbalancer-algorithm" = "round_robin"
                        "service.beta.kubernetes.io/do-loadbalancer-tls-ports" = "443"
                        "service.beta.kubernetes.io/do-loadbalancer-certificate-id" = "${var.m_certificate_id}"
                }
        }

        spec {
                selector {
                        app = "..."
                }
                port = [
                        {
                                name = "http"
                                port = 80
                                protocol = "TCP"
                                target_port = 8080
                        }, {
                                name = "https"
                                port = 443
                                protocol = "TCP"
                                target_port =  8080
                        }
                ]

                type = "LoadBalancer"
        }
}
...

What would be the expected annotation for this (if it is available)? Or what would be an alternative? For now I'm making the adjustment manually once the load balancer has been provisioned, but that's really not the expected way of working for me :/

Loving the module btw guys :D good job !

E2E Testing

We should start to consider adding an E2E test suite. In terms of tooling I personally don't have a preference, but using a combination of kubeadm/terraform would make the most sense to me.

running kubelets with flag --cloud-provider=external

Hi guys, thank you for the great work here!
My setup:

  • Ubuntu 16.04
  • kubectl and kubeadm version: 1.10.5
  • One master and one worker node

After running kubectl apply -f digitalocean-cloud-controller-manager/releases/v0.1.5.yml I get this error in the logs of the digitalocean-cloud-controller-manager pod:

E0624 10:30:04.614299       1 node_controller.go:166] NodeAddress: Error fetching by providerID: providerID cannot be empty string Error fetching by NodeName: could not get private ip: <nil>
E0624 10:30:05.688043       1 node_controller.go:166] NodeAddress: Error fetching by providerID: providerID cannot be empty string Error fetching by NodeName: could not get private ip: <nil>

Hopefully answering these questions will help me and others in the future, as things are not clear from the documentation provided.

  1. How do I enforce the --cloud-provider=external flag that is mentioned in the README file? I have tried adding this flag to KUBELET_KUBECONFIG_ARGS in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file on my server:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--cloud-provider=external --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf "
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
....

After updating this file I run sudo kubeadm init --pod-network-cidr=10.244.0.0/16, but unfortunately kubelet starts with these flags instead (retrieved from the ps command):
/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki

  2. Do I understand correctly that one needs to have the cloud-config file set up? Is there a template DigitalOcean users could use?

Stuck in "pending"

Hey

Apologies if this is the wrong place for this. I have the following Service:

kind: Service
apiVersion: v1
metadata:
  name: tesla-proxy-loadbalancer
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  selector:
    app: tesla-proxy-web
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000

The Service is stuck in "pending", and there has been no LB created in DO.

This is my first Service creation in a few weeks, but all the previous ones worked flawlessly. Nothing has changed on my end since then (no upgrades etc).

Any ideas? There do not seem to be any errors happening at all.

Loadbalancers are created, but the creation fails

I tried creating 9 clusters with the digitalocean-cloud-controller-manager. On each of the clusters I create, I have one service that is of type LoadBalancer.

About half of the creations failed with the message in the UI "Load Balancer creation failed. Please try again.". I checked to see if there was some kind of issue with the load balancer service on the status page, but it was reporting no issues. To try and get some more information I then created my own container with some extra logs and ran that image instead. This let me see the request being made, as well as the error from the creation. Whether the load balancer was created successfully or errored, the object passed to the request client was the same.

The error in the cloud controller manager logs on failure is error while waiting for loadbalancer a4ce101f25e9e11e88f2deea6cc6cd46: Get https://api.digitalocean.com/v2/load_balancers/5837275f-de0f-4b10-8111-444b954d2207: context deadline exceeded

Load balancer definition

{
    Name:"a4e206c825ea811e88f2deea6cc6cd46", 
    Algorithm:"round_robin", 
    Region:"nyc1", 
    ForwardingRules:[godo.ForwardingRule{
        EntryProtocol:"tcp", 
        EntryPort:1234, 
        TargetProtocol:"tcp", 
        TargetPort:32638, 
        CertificateID:"", 
        TlsPassthrough:false
    }], 
    HealthCheck:godo.HealthCheck{
        Protocol:"tcp", 
        Port:32638, 
        Path:"", 
        CheckIntervalSeconds:3, 
        ResponseTimeoutSeconds:5, 
        HealthyThreshold:5, 
        UnhealthyThreshold:3
    }, 
    StickySessions:godo.StickySessions{
        Type:"none", 
        CookieName:"", 
        CookieTtlSeconds:0
    }, 
    DropletIDs:[94768619], 
    Tag:"", 
    RedirectHttpToHttps:false
}

(screenshot attached: 2018-05-23, 11:02 AM)

Feature request - Hooking into firewalls when creating load balancers

Hello, please feel free to close this; I'm just not sure where else to ask 🙂

This is working fine for me with a vanilla install of kubicorn, it's extremely simple to launch.

The only thing that seems to be missing is a step to update the firewalls which have been applied to your k8s nodes so that a rule is added to allow the load balancer to talk to your nodes on the port the new service is listening on. This is a manual step at the moment. Would digitalocean-cloud-controller-manager be the right place to add this hook in, or would a separate service be required?

Handle case where node name is an IP address

A lot of DO Kubernetes clusters override the kubelet node name with the node's private IP; this guarantees node names are unique and that the name can be used to reach the node.

DO CCM should consider cases where the node name is in the form of an IP address; a rough sketch of the check is below.
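
A minimal sketch, assuming the lookup simply needs to branch on whether the node name parses as an IP (the returned lookup keys are illustrative only):

package main

import (
    "fmt"
    "net"
)

// dropletLookupKey decides how a node name should be matched against the
// DigitalOcean API: by private IP when the name is an IP address, by droplet
// name otherwise.
func dropletLookupKey(nodeName string) string {
    if ip := net.ParseIP(nodeName); ip != nil {
        return "private-ip:" + ip.String()
    }
    return "name:" + nodeName
}

func main() {
    fmt.Println(dropletLookupKey("10.133.57.7"))         // private-ip:10.133.57.7
    fmt.Println(dropletLookupKey("worker1.example.com")) // name:worker1.example.com
}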

unexpected providerID format

I am following CCM deployment instruction in https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/getting-started.md.

After deployment, load balancing and node addressing work great, but node labelling is not working.

kubectl get no

root@kube-1-master-2:~# kubectl get no kube-1-worker-1 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 10.130.13.103
    csi.volume.kubernetes.io/nodeid: '{"com.digitalocean.csi.dobs":"109470133"}'
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: 10.130.13.103/16
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-09-11T10:23:33Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: kube-1-worker-1
  name: kube-1-worker-1
  resourceVersion: "350167"
  selfLink: /api/v1/nodes/kube-1-worker-1
  uid: bc946296-b5ac-11e8-964e-daca43c1fec8
spec:
  podCIDR: 192.168.3.0/24
status:
  addresses:
  - address: kube-1-worker-1
    type: Hostname
  - address: 10.130.13.103
    type: InternalIP
  - address: <HIDDEN>
    type: ExternalIP
  allocatable:
    cpu: "1"
    ephemeral-storage: "23249247399"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 913620Ki
    pods: "110"
  capacity:
    cpu: "1"
    ephemeral-storage: 25227048Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 1016020Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: 2018-09-13T15:46:05Z
    lastTransitionTime: 2018-09-11T10:23:33Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-09-13T15:46:05Z
    lastTransitionTime: 2018-09-11T10:23:33Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-09-13T15:46:05Z
    lastTransitionTime: 2018-09-11T10:23:33Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-09-13T15:46:05Z
    lastTransitionTime: 2018-09-11T10:23:33Z
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: 2018-09-13T15:46:05Z
    lastTransitionTime: 2018-09-13T13:55:04Z
    message: kubelet is posting ready status. AppArmor enabled
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d4d0f5416c26444fb318c1bf7e149b70c7d0e5089e129827b7dccfad458701ca
    - quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
    sizeBytes: 414090450
  - names:
    - quay.io/calico/node@sha256:a35541153f7695b38afada46843c64a2c546548cd8c171f402621736c6cf3f0b
    - quay.io/calico/node:v3.1.3
    sizeBytes: 248202699
  - names:
    - k8s.gcr.io/kube-proxy-amd64@sha256:6a8d6e8d1674cb26167d85bebbb953e93993b81bbbf7e00c2985e61e0c7c2062
    - k8s.gcr.io/kube-proxy-amd64:v1.11.2
    sizeBytes: 97772380
  - names:
    - quay.io/calico/cni@sha256:ed172c28bc193bb09bce6be6ed7dc6bfc85118d55e61d263cee8bbb0fd464a9d
    - quay.io/calico/cni:v3.1.3
    sizeBytes: 68849270
  - names:
    - digitalocean/digitalocean-cloud-controller-manager@sha256:c59c83fb1a5ef73b255de12245b17debe181a66c31fc828ea1b722a162ef7966
    - digitalocean/digitalocean-cloud-controller-manager:v0.1.7
    sizeBytes: 68295557
  - names:
    - huseyinbabal/node-example@sha256:caa0bb831c88be08d342c05fe8fa223516dbef33ebadf8cae9e7c27d55370d9d
    - huseyinbabal/node-example:latest
    sizeBytes: 66276303
  - names:
    - quay.io/coreos/flannel@sha256:60d77552f4ebb6ed4f0562876c6e2e0b0e0ab873cb01808f23f55c8adabd1f59
    - quay.io/coreos/flannel:v0.9.1
    sizeBytes: 51338831
  - names:
    - quay.io/k8scsi/csi-attacher@sha256:44b7d518e00d437fed9bdd6e37d3a9dc5c88ca7fc096ed2ab3af9d3600e4c790
    - quay.io/k8scsi/csi-attacher:v0.3.0
    sizeBytes: 46929442
  - names:
    - quay.io/k8scsi/csi-provisioner@sha256:d45e03c39c1308067fd46d69d8e01475cc0c9944c897f6eded4df07e75e5d3fb
    - quay.io/k8scsi/csi-provisioner:v0.3.0
    sizeBytes: 46848737
  - names:
    - quay.io/k8scsi/driver-registrar@sha256:b9b8b0d2e7e3bcf1fda1776c4bee216f70a51345c3b62af7248c10054143755d
    - quay.io/k8scsi/driver-registrar:v0.3.0
    sizeBytes: 44650528
  - names:
    - quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
    - quay.io/coreos/flannel:v0.10.0-amd64
    sizeBytes: 44598861
  - names:
    - digitalocean/do-csi-plugin@sha256:ccda85cecb6a0fccd8492acff11f4d3071036ff97f1f3226b9dc3995d9f372da
    - digitalocean/do-csi-plugin:v0.2.0
    sizeBytes: 19856073
  - names:
    - gokul93/hello-world@sha256:4cf553f69fbb1c331a1ac8f3b6dc3a2d92276e27e55b79c049aec6b841f904ac
    - gokul93/hello-world:latest
    sizeBytes: 10319652
  - names:
    - gcr.io/google-samples/hello-app@sha256:c62ead5b8c15c231f9e786250b07909daf6c266d0fcddd93fea882eb722c3be4
    - gcr.io/google-samples/hello-app:1.0
    sizeBytes: 9860419
  - names:
    - gcr.io/google_containers/defaultbackend@sha256:865b0c35e6da393b8e80b7e3799f777572399a4cff047eb02a81fa6e7a48ed4b
    - gcr.io/google_containers/defaultbackend:1.4
    sizeBytes: 4844064
  - names:
    - busybox@sha256:cb63aa0641a885f54de20f61d152187419e8f6b159ed11a251a09d115fdff9bd
    - busybox:latest
    sizeBytes: 1162769
  - names:
    - k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
    - k8s.gcr.io/pause:3.1
    sizeBytes: 742472
  nodeInfo:
    architecture: amd64
    bootID: 0b982393-75c4-4a64-a14b-29978a591d9d
    containerRuntimeVersion: docker://17.3.3
    kernelVersion: 4.4.0-131-generic
    kubeProxyVersion: v1.11.2
    kubeletVersion: v1.11.2
    machineID: a38c62498ffb47cab90b37d7b4f0b586
    operatingSystem: linux
    osImage: Ubuntu 16.04.5 LTS
    systemUUID: A38C6249-8FFB-47CA-B90B-37D7B4F0B586

kubectl logs

root@kube-1-master-2:~# kubectl logs digitalocean-cloud-controller-manager-79cff6f759-99kj9 -n kube-system
W0913 14:17:38.001579       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0913 14:17:38.042439       1 controllermanager.go:108] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
W0913 14:17:38.043324       1 authentication.go:55] Authentication is disabled
I0913 14:17:38.043391       1 insecure_serving.go:49] Serving insecurely on [::]:10253
I0913 14:17:38.045003       1 node_controller.go:89] Sending events to api server.
I0913 14:17:38.047221       1 controllermanager.go:264] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
I0913 14:17:38.048342       1 pvlcontroller.go:107] Starting PersistentVolumeLabelController
I0913 14:17:38.048421       1 controller_utils.go:1025] Waiting for caches to sync for persistent volume label controller
I0913 14:17:38.048509       1 service_controller.go:183] Starting service controller
I0913 14:17:38.048533       1 controller_utils.go:1025] Waiting for caches to sync for service controller
I0913 14:17:38.174388       1 controller_utils.go:1032] Caches are synced for service controller
I0913 14:17:38.174832       1 service_controller.go:636] Detected change in list of current cluster nodes. New node set: map[kube-1-worker-1:{}]
I0913 14:17:38.174908       1 service_controller.go:644] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0913 14:17:38.195621       1 controller_utils.go:1032] Caches are synced for persistent volume label controller
E0913 14:17:38.766837       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:17:39.911414       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:17:40.954761       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:17:42.048235       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345
E0913 14:22:44.119502       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:22:45.211726       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:22:46.313092       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:22:47.447328       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345
E0913 14:27:49.184234       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:27:50.954856       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:27:52.838915       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:27:53.946188       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345
E0913 14:32:55.680519       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:32:57.436294       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:32:58.453036       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:32:59.578324       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345
E0913 14:38:02.240903       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:38:04.694336       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:38:06.557360       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:38:07.886014       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345
E0913 14:43:08.918326       1 node_controller.go:161] unexpected providerID format: 109037323, format should be: digitalocean://12345
E0913 14:43:09.997752       1 node_controller.go:161] unexpected providerID format: 109155407, format should be: digitalocean://12345
E0913 14:43:11.095733       1 node_controller.go:161] unexpected providerID format: 109172105, format should be: digitalocean://12345
E0913 14:43:12.158490       1 node_controller.go:161] unexpected providerID format: 109470133, format should be: digitalocean://12345

Node labels and addresses - Not working

After deploying the DigitalOcean CCM, I can create and access load balancers for services of type LoadBalancer.

But I can't get the node labels "instance type" and "region".
The "External IP" address is also not set on the node.

DigitalOcean CCM - 0.1.6
Kubernetes version - v1.10.4
Container Linux by CoreOS 1745.7.0 (Rhyolite) 4.14.48-coreos-r2

Cluster Creation with terraform - kubeadm - DO CCM
https://github.com/kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform

Thanks in advance

Verify version compatibility with Kubernetes

The latest version of DO CCM should support the latest stable Kubernetes and the 2 major versions before it. At the moment we only advertise that the latest version of CCM supports the latest stable version of Kubernetes, because those are the only versions we tested. We should verify that DO CCM is compatible with the latest stable version and the 2 major versions behind it.

Replace logging package "glog"

"glog" is heavily based on some assumptions for Google infrastructure and is not a well built package due the usage of the flag package. This caused some problems already in other upstream projects, some examples:

https://github.com/heptio/contour/blob/master/cmd/contour/contour.go#L43
coredns/coredns#1597

I recommend replacing it with a KV logger such as log (https://godoc.org/github.com/go-kit/kit/log), zap (https://github.com/uber-go/zap) or logrus (https://github.com/Sirupsen/logrus).
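
For illustration, key/value-style logging with logrus (one of the candidates above) looks roughly like this; this isn't a statement about which logger the project should pick:

package main

import (
    log "github.com/sirupsen/logrus"
)

func main() {
    // Structured fields instead of glog's formatted strings.
    log.WithFields(log.Fields{
        "droplet_id": 12345,
        "region":     "fra1",
    }).Info("synced load balancer")
}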

Service type LoadBalancer lacks balancer IP

Users of Kubernetes will need the external load balancer address for any Service of type LoadBalancer.
The cloud provider doesn't populate the IP since the load balancer hasn't been provisioned yet.

We should probably wait for the balancer to be active before writing the service's IP.
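
A sketch of what that wait could look like, assuming the godo LoadBalancers.Get call and the "active"/"errored" status strings reported by the DO API:

package do

import (
    "context"
    "fmt"
    "time"

    "github.com/digitalocean/godo"
)

// waitForLoadBalancerIP polls the load balancer until it reports "active"
// and then returns its IP, so the Service status is only written once the
// address actually exists.
func waitForLoadBalancerIP(ctx context.Context, client *godo.Client, lbID string) (string, error) {
    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()

    for {
        lb, _, err := client.LoadBalancers.Get(ctx, lbID)
        if err != nil {
            return "", err
        }

        switch lb.Status {
        case "active":
            return lb.IP, nil
        case "errored":
            return "", fmt.Errorf("load balancer %s entered errored state", lbID)
        }

        select {
        case <-ctx.Done():
            return "", ctx.Err()
        case <-ticker.C:
        }
    }
}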

CSI: Add ErrorLog interceptor for the gRPC servers

We have multiple methods in the plugin that might return an error. We should add an interceptor that logs the returned error of every method automatically. I haven't looked into it yet, but I believe something like this might already exist.
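
grpc-go does allow this through server interceptors; a minimal sketch of a unary interceptor that logs any error a handler returns:

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
)

// errorLogInterceptor wraps every unary handler and logs the error it returns.
func errorLogInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    resp, err := handler(ctx, req)
    if err != nil {
        log.Printf("method %s failed: %v", info.FullMethod, err)
    }
    return resp, err
}

func main() {
    // Register the interceptor when constructing the gRPC server;
    // the CSI services would then be registered on srv as usual.
    srv := grpc.NewServer(grpc.UnaryInterceptor(errorLogInterceptor))
    _ = srv
}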

Use stringData for Kubernetes Secret containing DO Token.

The current examples use data in Kubernetes Secrets, which requires you to base64-encode your secrets. Kubernetes now supports stringData, which is backwards compatible: the data is still ultimately consumed as base64, but we can provide it as a plain string instead.

In #49, we had a discussion around how we should handle newline characters when we base64-encode the DO token; this would address it.

More details here kubernetes/kubernetes#19575.

Question on how to access cluster through NodePort

Hi there - beta DO Kubernetes user here - love the ease of use in setting up the cluster.

One thing that I was having trouble with was accessing the cluster at a NodePort. Sorry for the ignorance.

I've been using k8s on DO for a while now with my own droplets. I use 1 master and 2 nodes. What I'll do is have a service that exposes a specific NodePort, which I can then access by visiting $DROPLET_IP:$NODEPORT in the browser. I can do that with the IPs of any of the nodes in the cluster. Works great. I can even spin up a load balancer and point a domain to that exact port.

With the DO k8s beta, the host URL I'm given cannot be used the same way. It just gives an error when I try to access it at the port: $HOSTURL:$NODEPORT.

I see that the docs mention to use the host URL and not the IPs because the IPs can change. That's fine. But how can I access the NodePort in this case? Below is an example of a service that works great with my own cluster:

apiVersion: v1
kind: Service
metadata:
  name: $appName
  namespace: $appName
spec:
  type: NodePort
  ports:
  - port: 4000
    nodePort: 30001
  selector:
    app: $appName
    tier: backend

Also, when trying to access AWS RDS from my cluster, I need to set up firewall access in my RDS dashboard, so what do I put for the IPs of the droplets in my cluster so they are let through? Is the host something I can use here? It is the value I have to put in the Source field below. I guess the main question I have is how to provide the server's IP address to various services that need it.

(screenshot attached: AWS RDS Source field, 2018-10-22, 10:20 AM)

Thank you

Add label according to node's pool

Currently nodes do not have a label that shows which pool they are part of. It would be helpful to have this information in order to schedule resources better.

Block storage support?

Hello

I'm just wondering if there is any ETA for block storage support?

The README says "In the future, it may implement:"

Regards Kristian Klausen

Investigate: Move load balancer from one k8s cluster to another

From kubernetes docs:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer

Some cloud providers allow the loadBalancerIP to be specified. In those cases, the load-balancer will be created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, an ephemeral IP will be assigned to the loadBalancer. If the loadBalancerIP is specified, but the cloud provider does not support the feature, the field will be ignored.

Ran a quick test and we are not respecting loadBalancerIP.
