
Comments (16)

klausenbusk commented on May 15, 2024

> 👍 this can also work with kubeadm.

kubeadm requires installing Docker, adding the Kubernetes Debian repository, and installing kubelet, kubeadm, and kubectl (not sure kubectl is required).
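
For reference, the kubeadm side is roughly this on Debian (a sketch; the repository URL and package names follow the upstream install docs of the time and may have changed since):

# Sketch only: repo URL/key per the kubeadm install docs circa this thread.
apt-get update && apt-get install -y apt-transport-https curl docker.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
    > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl  # kubectl possibly optional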

bootkube, on the other hand, works with CoreOS, where Docker is already installed, but requires rendering the manifests with bootkube render, copying the required files to the server, and adding a kubelet.service systemd unit.
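
And the bootkube flow, sketched (flag names are from bootkube's quickstart and may vary by version; <master-ip> is a placeholder):

# Sketch of the bootkube flow described above.
bootkube render --asset-dir=assets --api-servers=https://<master-ip>:443
scp -r assets kubelet.service core@<master-ip>:
# then, on the master, once kubelet.service is running:
bootkube start --asset-dir=assets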

I'm not sure what the best solution is in the long run, but I prefer bootkube, as we are using it in production: it supports multi-master (not required for e2e), works with CoreOS out of the box (not required for e2e), and supports running self-hosted (kubeadm has beta support for that now).

If kubernetes-retired/bootkube#803 and kubernetes-retired/bootkube#804 get merged, we could probably just submodule the hacks/quickstart directory and wouldn't need to "fork"/copy the quickstart script; see the sketch below. We would of course still need logic to create the droplets, but that is doable.
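
Note that git submodules track whole repositories rather than single directories, so in practice it would look roughly like this (paths illustrative; the bootkube repo URL is an assumption):

# Hypothetical layout: pin the whole bootkube repo, reference the subdirectory.
git submodule add https://github.com/kubernetes-incubator/bootkube vendor/bootkube
# then call the quickstart scripts from the pinned checkout:
./vendor/bootkube/hack/quickstart/init-master.sh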

> @klausenbusk can I assign this issue to you?

Fine with me.

> is it something you can work on in the near future?

I can probably take a stab at it in the upcoming Christmas holidays.


xmudrii commented on May 15, 2024

On an unrelated note, does anybody know how to use the cloud controller with the Terraform provider?

I believe I need to set the --cloud-provider flag for the kubelet, but adding it to ExecStart here doesn't seem to work.

Am I missing something?
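
For clarity, what I mean is adding it to the kubelet unit's ExecStart, roughly like this (excerpt only):

ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --cloud-provider=external \
  ... (remaining kubelet flags unchanged)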


andrewsykim commented on May 15, 2024

I did some initial work in kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform#39 for cloud controller integration, but it needs to be updated to work with v1.8, which is pending review in kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform#47. You can use that PR as a reference; let me know if anything else is unclear.


bhcleek commented on May 15, 2024

@xmudrii Here's a patch to https://github.com/kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform that works for me. It includes https://github.com/kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform#39 and https://github.com/kubernetes-digitalocean-terraform/kubernetes-digitalocean-terraform#47, plus one other change (the release file to use).


bhcleek commented on May 15, 2024

@xmudrii Are you planning to work on this issue? I was planning to jump on it next, but I don't want to duplicate work if you're already working on it.


xmudrii commented on May 15, 2024

@bhcleek Thanks for the answer. No, not for now; feel free to take it. :)


bhcleek commented on May 15, 2024

I forgot to paste the patch. Here you go:

diff --git a/01-master.yaml b/01-master.yaml
index 97cf2f5..2bbce16 100644
--- a/01-master.yaml
+++ b/01-master.yaml
@@ -36,12 +36,14 @@ write_files:
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
           --anonymous-auth=false \
           --client-ca-file=/etc/kubernetes/ssl/ca.pem \
-          --api-servers=http://127.0.0.1:8080 \
+          --kubeconfig=/etc/kubernetes/kubelet-kubeconfig.yaml \
+          --require-kubeconfig \
           --network-plugin-dir=/etc/kubernetes/cni/net.d \
+          --register-schedulable=false \
+          --cloud-provider=external \
           --container-runtime=docker \
           --allow-privileged=true \
           --pod-manifest-path=/etc/kubernetes/manifests \
-          --hostname-override=$private_ipv4 \
           --cluster-dns=${DNS_SERVICE_IP} \
           --cluster-domain=cluster.local \
           --node-labels=kubernetes.io/role=master \
@@ -52,6 +54,23 @@ write_files:
 
         [Install]
         WantedBy=multi-user.target
+  - path: "/etc/kubernetes/kubelet-kubeconfig.yaml"
+    permissions: "0755"
+    content: |
+      apiVersion: v1
+      kind: Config
+      clusters:
+      - name: local
+        cluster:
+          server: http://127.0.0.1:8080
+      users:
+      - name: kubelet
+      contexts:
+      - context:
+          cluster: local
+          user: kubelet
+        name: kubelet-context
+      current-context: kubelet-context
   - path: "/etc/kubernetes/manifests/kube-apiserver.yaml"
     permissions: "0755"
     content: |
@@ -79,6 +98,8 @@ write_files:
             - --service-cluster-ip-range=${SERVICE_IP_RANGE}
             - --secure-port=443
             - --storage-backend=etcd2
+            - --cloud-provider=external
+            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
             - --advertise-address=$private_ipv4
             - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
             - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
@@ -161,6 +182,7 @@ write_files:
             - --master=http://127.0.0.1:8080
             - --leader-elect=true
             - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
+            - --cloud-provider=external
             - --root-ca-file=/etc/kubernetes/ssl/ca.pem
             livenessProbe:
               httpGet:
diff --git a/02-worker.yaml b/02-worker.yaml
index 44ae8af..0a4be27 100644
--- a/02-worker.yaml
+++ b/02-worker.yaml
@@ -37,13 +37,13 @@ write_files:
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
           --anonymous-auth=false \
           --client-ca-file=/etc/kubernetes/ssl/ca.pem \
-          --api-servers=https://${MASTER_HOST} \
+          --require-kubeconfig \
           --network-plugin-dir=/etc/kubernetes/cni/net.d \
           --container-runtime=docker \
           --register-node=true \
           --allow-privileged=true \
+          --cloud-provider=external \
           --pod-manifest-path=/etc/kubernetes/manifests \
-          --hostname-override=$private_ipv4 \
           --cluster-dns=${DNS_SERVICE_IP} \
           --cluster-domain=cluster.local \
           --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
@@ -104,6 +104,7 @@ write_files:
         clusters:
         - name: local
           cluster:
+            server: https://${MASTER_HOST}
             certificate-authority: /etc/kubernetes/ssl/ca.pem
         users:
         - name: kubelet
diff --git a/05-do-secret.yaml b/05-do-secret.yaml
new file mode 100644
index 0000000..9d89afe
--- /dev/null
+++ b/05-do-secret.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: digitalocean
+  namespace: kube-system
+data:
+  # insert your base64 encoded DO access token here, ensure there's no trailing newline:
+  # to base64 encode your token run:
+  #      echo -n "abc123abc123doaccesstoken" | base64
+  access-token: "$DO_ACCESS_TOKEN_BASE64"
diff --git a/deploy.tf b/deploy.tf
index 9b31387..01f195e 100644
--- a/deploy.tf
+++ b/deploy.tf
@@ -23,7 +23,7 @@ variable "ssh_private_key" {
 
 variable "number_of_workers" {}
 variable "hyperkube_version" {
-    default = "v1.7.3_coreos.0"
+    default = "v1.8.0_coreos.0"
 }
 
 variable "prefix" {
@@ -435,3 +435,16 @@ resource "null_resource" "deploy_microbot" {
 EOF
     }
 }
+
+resource "null_resource" "deploy_digitalocean_cloud_controller_manager" {
+    depends_on = ["null_resource.setup_kubectl"]
+    provisioner "local-exec" {
+        command = <<EOF
+            TOKEN=$(echo "${var.do_token}" | tr -d '\n' | base64)
+            sed -e "s/\$DO_ACCESS_TOKEN_BASE64/$TOKEN/" < ${path.module}/05-do-secret.yaml > ./secrets/05-do-secret.rendered.yaml
+            until kubectl get pods 2>/dev/null; do printf '.'; sleep 5; done
+            kubectl create -f ./secrets/05-do-secret.rendered.yaml
+            kubectl create -f https://raw.githubusercontent.com/digitalocean/digitalocean-cloud-controller-manager/master/releases/v0.1.2.yml
+EOF
+    }
+}
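
To try it, save the diff above as ccm.patch in a checkout of the repo and apply it:

# assuming the diff above was saved as ccm.patch
git apply ccm.patch
terraform plan    # review, then terraform apply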


klausenbusk commented on May 15, 2024

What is the status of this?

We use a modified version of the bootkube quickstart script.

So we could tweak the quickstart script and create a simple script which creates the droplets. Something like:

ssh-keygen # to tmp file
<pull token from env DO_TOKEN, and configure doctl>
ID="$(head -10 /dev/urandom | sha512sum | cut -b 1-30)"
trap "doctl remove all droplet with tag ID=$ID" EXIT

doctl compute droplet create master --image coreos-stable --size 2gb --wait --tag "$ID" --ssh-keys <tmp ssh file>
./init-master.sh <ip>

# Add DO CCM here or in init-master..

doctl compute droplet create worker1 --image coreos-stable --size 2gb --wait --tag "$ID" --ssh-keys <tmp ssh file>
./init-node.sh <ip> <kube-config>

doctl compute droplet create worker2 --image coreos-stable --size 2gb --wait --tag "$ID" --ssh-keys <tmp ssh file>
./init-node.sh <ip> <kube-config>
until kubectl get node == 3 ready nodes; # TODO: Add timeout
  sleep 5
done

# Run e2e test

What do you think? Or is Terraform preferred?
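
A slightly more concrete, runnable sketch of the droplet part (doctl flags from memory; region, size, and key names are placeholders):

#!/usr/bin/env bash
set -euo pipefail
# Sketch only: assumes DO_TOKEN is exported and doctl is installed.
ssh-keygen -t ed25519 -N '' -f /tmp/e2e-key
doctl auth init --access-token "$DO_TOKEN"
FP="$(doctl compute ssh-key import e2e-key --public-key-file /tmp/e2e-key.pub \
      --format FingerPrint --no-header)"
ID="$(head -c 32 /dev/urandom | sha512sum | cut -b 1-30)"
# delete everything tagged with our run ID on exit
trap 'doctl compute droplet delete --tag-name "$ID" --force' EXIT

for name in master worker1 worker2; do
    doctl compute droplet create "$name" --image coreos-stable --size 2gb \
        --region fra1 --tag-name "$ID" --ssh-keys "$FP" --wait
done
MASTER_IP="$(doctl compute droplet list --format Name,PublicIPv4 --no-header \
             | awk '$1 == "master" { print $2 }')"
./init-master.sh "$MASTER_IP"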


andrewsykim commented on May 15, 2024

+1 for using simple bash scripts or creating droplets using godo directly in the test setup.


klausenbusk commented on May 15, 2024

> +1 for using simple bash scripts or creating droplets using godo directly in the test setup.

I have opened a new bootkube issue: kubernetes-retired/bootkube#800, as I think we need a toleration for the node.cloudprovider.kubernetes.io/uninitialized taint.
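
The toleration would be something like the standard one from the running-cloud-controller docs, added to the bootstrap control-plane pod specs:

tolerations:
  # allow scheduling while the node still carries the uninitialized taint
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule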

> # Run e2e test

Something like:

for node in nodes; do
    #check
    failure-domain.beta.kubernetes.io/region == doctl get node region
    beta.kubernetes.io/instance-type == doctl get type
    InternalIP == doctl get private ipv4
    ExternalIP == doctl get public ipv4
done
doctl delete node worker2
timeout 120 until kubectl get node worker2 gone; do
    sleep 5
done
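
In runnable form, the node checks could look like this (a sketch; assumes node names equal droplet names):

# compare Kubernetes' view of each node with doctl's view of the droplet
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
    region="$(kubectl get node "$node" \
        -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/region}')"
    external_ip="$(kubectl get node "$node" \
        -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')"
    read -r _ do_region do_ip < <(doctl compute droplet list \
        --format Name,Region,PublicIPv4 --no-header | awk -v n="$node" '$1 == n')
    [ "$region" = "$do_region" ] && [ "$external_ip" = "$do_ip" ] \
        || { echo "mismatch for $node"; exit 1; }
done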

For load balancer e2e I think something like:

# Maybe ignore https example for now?
for manifest in examples/loadbalancers/*.yml; do
    kubectl create -f $manifest
    timeout 120 until kubectl get loadbalancer ready; do
        sleep 5
    done
    timeout 120 until curl success; do
        sleep 5
    done
    kubectl delete -f $manifest
    timeout 120 until kubectl get loadbalancer gone; do
        sleep 5
    done
    doctl check loadbalancer is gone
done
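
And the load balancer loop in runnable form, for a single manifest (file and service names are hypothetical):

# wait for an external IP, curl it, then clean up; repeat per example manifest
kubectl create -f examples/loadbalancers/http-nginx.yml
lb_ip=""
for _ in $(seq 1 24); do   # ~120s timeout
    lb_ip="$(kubectl get svc http-nginx \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    [ -n "$lb_ip" ] && break
    sleep 5
done
[ -n "$lb_ip" ] || { echo "load balancer never became ready"; exit 1; }
curl --fail --max-time 10 "http://$lb_ip/" > /dev/null
kubectl delete -f examples/loadbalancers/http-nginx.yml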


andrewsykim commented on May 15, 2024

👍 this can also work with kubeadm. @klausenbusk can I assign this issue to you? Is it something you can work on in the near future?


klausenbusk commented on May 15, 2024

With bootkube at least, we seem to end up in a chicken-and-egg situation: kube-apiserver needs the host IP when starting (kubernetes-retired/bootkube#453). I could probably remove --advertise-address from the manifests, but then we would probably end up in a new chicken-and-egg situation (due to TLS bootstrapping): https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#chicken-and-egg

I think kubeadm also uses "TLS bootstrapping": https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md

Edit: I'm not sure TLS bootstrapping is used for the initial node, so I'm not sure this is even a problem.


klausenbusk commented on May 15, 2024

Tracked upstream here (external cloud provider + TLS bootstrapping): kubernetes/kubernetes#55633


peterver commented on May 15, 2024

@xmudrii you could also simply do as we do and run an Ansible playbook from Terraform that then provisions the CCM :)

...
# Execute all playbooks
resource "null_resource" "execute_playbooks" {
        depends_on      = ["null_resource.write_digitalocean_token"]
        triggers {
                rendered_inventory = "${template_file.ansible_inventory.rendered}"
        }
        provisioner "local-exec" {
                command = <<EOF
                    export ANSIBLE_HOST_KEY_CHECKING=False
                    ...
                    ansible-playbook -v -i ${var.output_path}/inventory ${path.module}/playbooks/myplaybook.yml
                    ...
EOF
        }
}
...
---
- hosts: master
  become: yes
  tasks:

##### Secrets

    - name: 'Copy digitalocean secret yml to master node'
      copy:
        src: "{{ ath_output_path }}/do_secret.rendered.yml"
        dest: /home/valkyrie/do_secret.yml
        owner: valkyrie
        mode: 0755

    - name: 'Install DO secret'
      become: yes
      become_user: valkyrie
      shell: kubectl create -f $HOME/do_secret.yml >> /home/valkyrie/do_secrets_installed.txt
      args:
        creates: /home/valkyrie/do_secrets_installed.txt

##### Cloud Controller Manager (CCM)

    - name: 'Copy CCM yml to master node'
      copy:
        src: ./assets/do_ccm.yml
        dest: /home/valkyrie/do_ccm.yml
        owner: valkyrie
        mode: 0755

    - name: 'Check if CCM was installed'
      stat:
        path: /home/valkyrie/ccm_installed.txt
      register: ccm_installed

    - name: 'Patch CCM'
      become: yes
      become_user: valkyrie
      shell: kubectl replace --force -f $HOME/do_ccm.yml
      when: ccm_installed.stat.exists == True

    - name: 'Install CCM'
      become: yes
      become_user: valkyrie
      shell: kubectl create -f $HOME/do_ccm.yml >> /home/valkyrie/ccm_installed.txt
      when: ccm_installed.stat.exists == False

##### Cloud Storage Interface (CSI)

    - name: 'Copy CSI yml to master node'
      copy:
        src: ./assets/do_csi.yml
        dest: /home/valkyrie/do_csi.yml
        owner: valkyrie
        mode: 0755

    - name: 'Check if CSI was installed'
      stat:
        path: /home/valkyrie/csi_installed.txt
      register: csi_installed

    - name: 'Install CSI'
      become: yes
      become_user: valkyrie
      shell: kubectl create -f $HOME/do_csi.yml >> /home/valkyrie/csi_installed.txt
      when: csi_installed.stat.exists == False


timoreimann commented on May 15, 2024

@andrewsykim has this been implemented by #148, or is there more to do?


timoreimann commented on May 15, 2024

Implemented.

