cluster-api-provider-gcp's Introduction

Kubernetes Cluster API Provider GCP

Kubernetes-native declarative infrastructure for GCP.

What is the Cluster API Provider GCP?

The Cluster API brings declarative Kubernetes-style APIs to cluster creation, configuration and management. The API itself is shared across multiple cloud providers allowing for true Google Cloud hybrid deployments of Kubernetes.

Documentation

Please see our book for in-depth documentation.

Quick Start

Check out our Cluster API Quick Start to create your first Kubernetes cluster on Google Cloud Platform using Cluster API.


Support Policy

This provider's versions are compatible with the following versions of Cluster API:

                               Cluster API v1alpha3 (v0.3.x)   Cluster API v1alpha4 (v0.4.x)   Cluster API v1beta1 (v1.0.x)
Google Cloud Provider v0.3.x   ✓
Google Cloud Provider v0.4.x                                   ✓
Google Cloud Provider v1.0.x                                                                   ✓

This provider's versions are able to install and manage Kubernetes 1.15 through 1.22; Google Cloud Provider v0.3.x, v0.4.x and v1.0.x each support a subset of that range.

Each version of Cluster API for Google Cloud will attempt to support at least two versions of Kubernetes, e.g., Cluster API for GCP v0.1 may support Kubernetes 1.13 and Kubernetes 1.14.

NOTE: As the versioning for this project is tied to the versioning of Cluster API, future modifications to this policy may be made to more closely align with other providers in the Cluster API ecosystem.


Getting Involved and Contributing

Are you interested in contributing to cluster-api-provider-gcp? We, the maintainers and the community, would love your suggestions, support and contributions! The maintainers of the project can be contacted anytime to learn more about how to get involved.

Before starting with a contribution, please go through the prerequisites of the project.

To set up the development environment, check out the development guide.

In the interest of getting new people involved, we have issues marked as good first issue. Although these issues have a smaller scope, they are very helpful in getting acquainted with the codebase. For more, see the issue tracker. If you're unsure where to start, feel free to reach out and discuss.

See also: Our own contributor guide and the Kubernetes community page.

We also encourage ALL active community participants to act as if they are maintainers, even if you don't have 'official' written permissions. This is a community effort and we are here to serve the Kubernetes community. If you have an active interest and you want to get involved, you have real power!

Office hours

  • Join the SIG Cluster Lifecycle Google Group for access to documents and calendars.
  • Participate in the conversations on Kubernetes Discuss
  • Provider implementers office hours (CAPI)
    • Weekly on Wednesdays @ 10:00 am PT (Pacific Time) on Zoom
    • Previous meetings: [ notes | recordings ]
  • Cluster API Provider GCP office hours (CAPG)
    • Monthly on the first Thursday @ 09:00 am PT (Pacific Time) on Zoom
    • Previous meetings: [ notes | recordings ]

Other ways to communicate with the contributors

Please check in with us in the #cluster-api-gcp channel on Slack.

GitHub Issues

Bugs

If you think you have found a bug, please follow the instructions below.

  • Please spend a small amount of time doing due diligence on the issue tracker; your issue might be a duplicate.
  • Get the logs from the custom controllers and paste them in the issue.
  • Open a bug report.
  • Remember that users might search for the issue in the future, so please give it a meaningful title to help others.
  • Feel free to reach out to the community on Slack.

Tracking new features

We also have an issue tracker to track features. If you have a feature idea that could make Cluster API Provider GCP even better, follow these steps.

  • Open a feature request.
  • Remember that users might search for the issue in the future, so please give it a meaningful title to help others.
  • Clearly define the use case with concrete examples. Example: type this and cluster-api-provider-gcp does that.
  • Some of our larger features will require some design. If you would like to include a technical design in your feature, please go ahead.
  • After the new feature is well understood and the design is agreed upon, we can start coding the feature. We would love for you to code it. So please open up a WIP (work in progress) PR and happy coding!

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

cluster-api-provider-gcp's People

Contributors

akshay196, alexander-demicev, aniruddha2000, cpanato, damdo, dependabot[bot], detiber, dims, evanfreed, fiunchinho, jayesh-srivastava, justinsb, k4leung4, k8s-ci-robot, kahun, kcoronado, meobilivang, mkjelland, nader-ziada, prajyot-parab, prksu, richardcase, richardchen331, roberthbailey, rsmitty, salasberryfin, sayantani11, snehala27, spew, vincepri


cluster-api-provider-gcp's Issues

Rebase E2E tests to use Cluster API (from kube-up)

From @rsdcastro on January 29, 2018 23:45

Benefits, as per @roberthbailey:

  • More test cycles (we will get hundreds of cluster create / delete cycles per day, hook into the upgrade test infra, etc).
  • It will also force us to build enough configurability to do all of the edge cases demanded by the tests
  • This can be used to convince ourselves that the API is solid and flexible (e.g. if we end up with unreadable templating like kube-up or k8s-anywhere, we haven't succeeded).

Copied from original issue: kubernetes-retired/kube-deploy#547

Support External IPs

I'm interested in having some defined external IPs that I can attach to my master instances, so that I can know how to reach them ahead of time. I'll be following this issue with a PR that provides code to do this so we can get started talking about how folks feel about it.

Use a pre-built node and master image

From @rsdcastro on April 17, 2018 18:52

From @karan on January 24, 2018 19:18

Currently, spinning up nodes and masters can take ~minutes. Most of this time is due to apt-get installs. We can cache these by using a pre-built image, and then simply run kubeadm join. This will make node startups faster.

Similar for master.

https://github.com/kubernetes/kube-deploy/blob/master/cluster-api-gcp/cloud/google/templates.go#L159

Copied from original issue: kubernetes-retired/kube-deploy#514

Copied from original issue: kubernetes-sigs/cluster-api#97

Cannot create VMs for Machines. /root/.kube/config missing

I've created a cluster from HEAD. The initial machines as defined in machines.yaml were created successfully. However, when I tried to create a MachineSet [1], the VMs for the underlying Machines were not created. Based on the logs, it seems there is a problem while generating the kubeadm token:

E1130 12:14:10.410996       1 machineactuator.go:785] unable to create token: exit status 1 [failed to load admin kubeconfig [open /root/.kube/config: no such file or directory]    

[1]

apiVersion: "cluster.k8s.io/v1alpha1"
kind: MachineSet
metadata:
  name: ms3
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      providerConfig:
        value:
          apiVersion: "gceproviderconfig/v1alpha1"
          kind: "GCEMachineProviderConfig"
          roles:
          - Node
          zone: "us-central1-f"
          machineType: "n1-standard-1"
          os: "ubuntu-1604-lts"
          disks:
          - initializeParams:
              diskSizeGb: 30
              diskType: "pd-standard"
      versions:
        kubelet: 1.12.0

Remove ginkgo code (introduced by kubebuilder migration)

We previously went through and removed all of the ginkgo code. Then it came back. There is only a tiny bit of it in this repo and the sooner we take it out the easier it will be to do so.

This should be relatively easy, even for someone new to the project.

/cc @krousey

Reduce man in the middle for SSH

Instead of setting StrictHostKeyChecking to no, have a check that fetches the host key if there is no host key yet, and remove that flag. This reduces the possibility of man-in-the-middle attacks from every connection to just the first one for the machine actuator.
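
If the actuator eventually moves to the golang.org/x/crypto/ssh library (see the issues further down), the same trust-on-first-use idea could be expressed as a host key callback. A minimal sketch, with an in-memory store and names that are purely illustrative:

package main

import (
	"bytes"
	"fmt"
	"net"

	"golang.org/x/crypto/ssh"
)

// tofuCallback remembers the first host key it sees for each host and rejects
// any later connection that presents a different key, so only the very first
// connection is exposed to a man-in-the-middle.
func tofuCallback(store map[string]ssh.PublicKey) ssh.HostKeyCallback {
	return func(hostname string, remote net.Addr, key ssh.PublicKey) error {
		if known, ok := store[hostname]; ok {
			if !bytes.Equal(known.Marshal(), key.Marshal()) {
				return fmt.Errorf("host key for %s changed since first contact", hostname)
			}
			return nil
		}
		store[hostname] = key // first contact: record and trust the key
		return nil
	}
}

func main() {
	// In the machine actuator the store would be persisted (e.g. in a Secret
	// or a known_hosts file) rather than kept in memory.
	_ = tofuCallback(map[string]ssh.PublicKey{})
	fmt.Println("trust-on-first-use callback constructed")
}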

GCE machine actuator should collect current state

From @rsdcastro on April 17, 2018 11:53

From @jessicaochen on January 11, 2018 17:08

The machine actuator should get current state from the actual sources (e.g. the kubelet version from the machine, VM info from GCE) instead of a cached last-applied state. This will help with cases like the state changing under the covers.

https://github.com/kubernetes/kube-deploy/blob/master/cluster-api-gcp/cloud/google/instancestatus.go#L13

Copied from original issue: kubernetes-retired/kube-deploy#482

Copied from original issue: kubernetes-sigs/cluster-api#99
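
For the GCE half of this, a minimal sketch of reading live instance state with the google.golang.org/api/compute/v1 client instead of trusting a cached last-applied value; the project, zone and instance name are placeholders that would come from the provider config:

package main

import (
	"context"
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials, as the controller would in-cluster.
	svc, err := compute.NewService(ctx)
	if err != nil {
		panic(err)
	}

	// Read what GCE actually reports for the VM rather than what was last applied.
	inst, err := svc.Instances.Get("my-project", "us-central1-f", "gce-node-abc").Context(ctx).Do()
	if err != nil {
		panic(err)
	}
	fmt.Println(inst.Name, inst.Status, inst.MachineType)

	// The kubelet version would similarly be read from the machine itself
	// (e.g. over SSH or from the Node object) rather than from cached state.
}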

No need for prips on the node

Currently the node's bootstrap script computes the DNS server IP using the prips tool, by finding the 10th IP of the service CIDR.

This is kinda silly. It would be much easier to compute it server-side in the machine-controller. The machine controller could also get it from the kube-dns service, if the cluster is already running. (I'm not really sure why the kubelet doesn't do this - maybe it would introduce a circular dependency!)

Anyway, now that there can be multiple service IP blocks, this logic seems particularly fragile.
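
For illustration, a minimal sketch of computing that address server-side, assuming the usual convention of taking the 10th address of the services CIDR:

package main

import (
	"fmt"
	"net"
)

// nthIP returns the n-th address inside the given CIDR (n=0 is the network
// address), which is all the node script currently uses prips for.
func nthIP(cidr string, n int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 CIDRs are handled in this sketch")
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	// Add n to the network address, byte by byte with carry.
	for i := len(out) - 1; i >= 0 && n > 0; i-- {
		sum := int(out[i]) + n
		out[i] = byte(sum % 256)
		n = sum / 256
	}
	if !ipnet.Contains(out) {
		return nil, fmt.Errorf("offset falls outside %s", cidr)
	}
	return out, nil
}

func main() {
	ip, err := nthIP("10.96.0.0/12", 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.10
}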

Document alignment with supported Cluster API and Kubernetes versions

Let's add a section to the README that describes which versions of Cluster API work with which versions of this repository.

Let's also add a section that describes which versions of Kubernetes this provider is able to provision.

Example:
This provider's versions are compatible with the following versions of Cluster API:

                     Cluster API 0.1   Cluster API 0.2
  XYZ Provider 0.1
  XYZ Provider 0.2

This provider's versions are able to install and manage the following versions of Kubernetes:

                     Kubernetes 1.11   Kubernetes 1.12   Kubernetes 1.13   Kubernetes 1.14
  XYZ Provider 0.1
  XYZ Provider 0.2

/kind documentation
/priority important-soon

More secure transport mechanism for custom certificate authorities

In PR-153, initial support was added for custom CAs when using gcp-deployer. The transport mechanism for bringing a custom CA onto the cluster is the instance metadata. This approach is not as secure as we would like, since every user of the instance can access the private key through the metadata.

This issue tracks the need for a better approach that does not expose secrets to unprivileged users.

If minikube already exists then create cluster doesn't work

./bin/clusterctl create cluster --provider google -c cmd/clusterctl/examples/google/out/cluster.yaml -m cmd/clusterctl/examples/google/out/machines.yaml -p cmd/clusterctl/examples/google/out/provider-components.yaml -a cmd/clusterctl/examples/google/out/addons.yaml --minikube="kubernetes-version=v1.12.4" --v=4
I0104 11:07:43.406268 28287 machineactuator.go:813] Using the default GCP client
I0104 11:07:43.408657 28287 plugins.go:39] Registered cluster provisioner "google"
I0104 11:07:43.412794 28287 createbootstrapcluster.go:28] Creating bootstrap cluster
I0104 11:07:43.412829 28287 minikube.go:58] Running: minikube [start --bootstrapper=kubeadm --kubernetes-version=v1.12.4]
I0104 11:08:36.802657 28287 minikube.go:62] Ran: minikube [start --bootstrapper=kubeadm --kubernetes-version=v1.12.4] Output: Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health ....Kubectl is now configured to use the cluster.
Loading cached images from config file.

Everything looks great. Please enjoy minikube!
I0104 11:08:36.806728 28287 clusterdeployer.go:95] Applying Cluster API stack to bootstrap cluster
I0104 11:08:36.806756 28287 applyclusterapicomponents.go:27] Applying Cluster API Provider Components
I0104 11:08:36.806776 28287 clusterclient.go:521] Waiting for kubectl apply...
I0104 11:08:37.436939 28287 clusterclient.go:550] Waiting for Cluster v1alpha resources to become available...
I0104 11:08:37.447889 28287 clusterclient.go:563] Waiting for Cluster v1alpha resources to be listable...
I0104 11:08:37.461941 28287 clusterdeployer.go:100] Provisioning target cluster via bootstrap cluster
I0104 11:08:41.478073 28287 applycluster.go:37] Creating cluster object test1-jtpdx in namespace "default"
I0104 11:08:45.488807 28287 createbootstrapcluster.go:37] Cleaning up bootstrap cluster.
I0104 11:08:45.488838 28287 minikube.go:58] Running: minikube [delete]
I0104 11:08:46.652070 28287 minikube.go:62] Ran: minikube [delete] Output: Deleting local Kubernetes cluster...
Machine deleted.
F0104 11:08:46.652839 28287 create_cluster.go:64] unable to create cluster "test1-jtpdx" in bootstrap cluster: error creating cluster in namespace default: clusters.cluster.k8s.io "test1-jtpdx" already exists

Add support for multi-project clusters on GCE

From @mkjelland on May 18, 2018 22:27

Once PR #183 is merged, a single project for a cluster is defined in the GCEClusterProviderConfig. Only a single project is supported. This issue is to allow the GCEClusterProviderConfig to take a list of projects that are supported for the cluster. Then each machine will need to define its own Project field in GCEMachineProviderConfig, which will be required to be in the list in GCEClusterProviderConfig.

Provisioning of networks, firewall rules and service account permissions will be impacted by this change.

Copied from original issue: kubernetes-sigs/cluster-api#191
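
A minimal sketch of the validation this implies; the function and field names are illustrative, not existing code:

package main

import "fmt"

// validateMachineProject checks that a machine's project is one of the
// projects listed in the cluster's provider config, as described above.
func validateMachineProject(clusterProjects []string, machineProject string) error {
	for _, p := range clusterProjects {
		if p == machineProject {
			return nil
		}
	}
	return fmt.Errorf("project %q is not in the cluster's project list %v", machineProject, clusterProjects)
}

func main() {
	clusterProjects := []string{"team-a-prod", "team-a-net"}
	fmt.Println(validateMachineProject(clusterProjects, "team-a-prod")) // <nil>
	fmt.Println(validateMachineProject(clusterProjects, "other-proj"))  // error
}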

Cannot create cluster at HEAD

I cannot create a new cluster using HEAD of this repo. The first issue I hit is kubernetes-sigs/cluster-api#533. I've managed to get the bootstrap cluster to work by a combination of updating minikube to the newest version and passing --minikube "kubernetes-version=v1.12.0" to clusterctl. However, I still hit the same problem in the target cluster, which is running K8s 1.9.4. A dummy change of the version anywhere to 1.12.1 did not help (I suspect the startup script should be updated somehow).

Any tips on how to proceed would be welcome.

Support using custom static pod specs

We want to add support for the use case of a power user who wants to specify custom static pod specs for their control plane when starting up a machine instead of using the defaults from kubeadm init. Note that this does not eliminate the option of using kubeadm.

An idea for implementing this (a minimal sketch follows the list):

  • Use a component setup ConfigMap to map a control plane version to the set of static pod specs to use in bootstrapping.
  • The machine controller will read from the ConfigMap and handle putting the files into /etc/kubernetes/manifests.
  • If the machine controller doesn't find this ConfigMap, then it will assume the user will bootstrap the machine with kubeadm init.
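
A minimal sketch of the machine-controller side, assuming a hypothetical ConfigMap whose keys are manifest file names and whose values are static pod specs for a given control plane version (the ConfigMap naming convention and namespace are assumptions, not an agreed design):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const manifestsDir = "/etc/kubernetes/manifests"

// writeStaticPodSpecs looks up a ConfigMap for the requested control plane
// version and writes each entry into the static pod manifests directory.
// It reports whether custom specs were used; if not, the caller falls back
// to a plain `kubeadm init`.
func writeStaticPodSpecs(ctx context.Context, client kubernetes.Interface, version string) (bool, error) {
	name := "static-pods-" + version // hypothetical naming convention
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// Treat any lookup failure as "no custom specs" in this sketch.
		return false, nil
	}
	for file, spec := range cm.Data {
		if err := os.WriteFile(filepath.Join(manifestsDir, file), []byte(spec), 0600); err != nil {
			return false, err
		}
	}
	return true, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	used, err := writeStaticPodSpecs(context.Background(), client, "v1.13.0")
	fmt.Println("custom static pods used:", used, "err:", err)
}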

unable to apply cluster api stack to bootstrap cluster

I am following the exact guidelines as shown in
https://github.com/kubernetes-sigs/cluster-api-provider-gcp#getting-started
and cluster creation fails
[screenshot attached]

Please let me know how to proceed. I haven't done anything fancy, just followed the guidelines. Below is some useful information.

bsingarayan@bsingarayan-mbp ~/g/s/s/c/c/c/e/g/out> minikube version
minikube version: v0.32.0

I manually brought up minikube and applied the provider-components.yaml file and it had the same issue.
[screenshot attached]

Below is the provider-components.yaml file for reference
provider-components.yaml.txt

BTW, I also discussed this issue in the Slack (cluster-api) thread and was advised to create an issue here.

Thanks

prompted for SSH key on GCE

While setting up my new cluster, I was prompted for my SSH key which I would usually use to log into GCE VMs.

I0707 17:58:31.742852   61559 clusterdeployer.go:225] Getting internal cluster kubeconfig.
I0707 17:58:31.742879   61559 clusterdeployer.go:341] Waiting for kubeconfig on gce-master-mpg2g to become ready...
Enter passphrase for key '/Users/craigbox/.ssh/google_compute_engine': 
I0707 17:59:39.438545   61559 clusterdeployer.go:341] Waiting for kubeconfig on gce-master-mpg2g to become ready...
Enter passphrase for key '/Users/craigbox/.ssh/google_compute_engine': 
I0707 17:59:49.436231   61559 clusterdeployer.go:341] Waiting for kubeconfig on gce-master-mpg2g to become ready...
Enter passphrase for key '/Users/craigbox/.ssh/google_compute_engine': 
I0707 18:00:03.792973   61559 loader.go:357] Config loaded from file /var/folders/nn/6bcypbkx16b0_45yhpv_yw3m0072tf/T/806424865
I0707 18:00:03.794215   61559 clusterdeployer.go:162] Applying Cluster API stack to internal cluster
I0707 18:00:03.794233   61559 clusterdeployer.go:294] Applying Cluster API APIServer

(I eventually ssh-add'd it in another window)

Support additional metadata for instances

As I've been getting familiar with the GCP cluster-api provider, I've noticed that there is really only a "startup-script" that can get passed in as a metadata item for the instances that get created. I'd love to be able to squirrel away other metadata as desired. I've got a PR that I'll be chasing this issue with that provides this functionality.
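
A minimal sketch of how extra key/value pairs could be merged alongside the existing startup-script item when building the instance's metadata; the helper and the extra map are illustrative, not the PR's actual implementation:

package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// buildMetadata combines the startup script with arbitrary extra key/value
// pairs into the metadata structure passed on instance insertion.
func buildMetadata(startupScript string, extra map[string]string) *compute.Metadata {
	items := []*compute.MetadataItems{
		{Key: "startup-script", Value: &startupScript},
	}
	for k, v := range extra {
		v := v // take a copy so every item points at its own string
		items = append(items, &compute.MetadataItems{Key: k, Value: &v})
	}
	return &compute.Metadata{Items: items}
}

func main() {
	md := buildMetadata("#!/bin/bash\necho hello", map[string]string{"owner": "team-a"})
	for _, item := range md.Items {
		fmt.Println(item.Key, "=", *item.Value)
	}
}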

Fetch Kubeconfig From Master via SSH

Currently fetching kubeconfig for master is a per-provider implementation in the deployer. However, we could have a generic default implementation if we do the following:

  1. Have the deployer generate/get a SSH key from the user
  2. Have the deployer provide said SSH key to the machine controller (perhaps a mounted secret?)
  3. Have the machine actuator apply the SSH key to provisioned machines
  4. The deployer can then use the SSH key to access the master and get the kubeconfig (Note that it can get the IP from the cluster object #158)

Having a provider-specific way to get kubeconfig should be an option and not a requirement.

Move from exec.Command("ssh") to golang/x/crypto/ssh

gcp-deployer is making use of command line ssh as an exec.Command. This is an artifact of moving quickly to build out gcp-deployer and using exec.Command on "gcloud ssh".

As per @medinatiger, we should move to the golang.org/x/crypto/ssh library to give us finer control over the interaction, remove the need to turn off host checking, and add flexibility.
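
A minimal sketch of running a remote command through golang.org/x/crypto/ssh instead of exec'ing the ssh binary. The address, user, key path and the admin.conf example are placeholders, and host key verification is deliberately elided; a real controller would plug in a known-hosts or trust-on-first-use callback as discussed in the earlier SSH issue:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens an SSH session and returns the command's stdout. This is the
// kind of helper that could replace the scattered exec.Command("ssh", ...) calls.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// For brevity only; a real controller should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	var out bytes.Buffer
	session.Stdout = &out
	if err := session.Run(cmd); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Example use: fetch the admin kubeconfig from a master, which the deployer currently does over command-line SSH.
	kubeconfig, err := runRemote("10.0.0.2:22", "ubuntu", "/home/me/.ssh/id_rsa",
		"sudo cat /etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(kubeconfig), "bytes of kubeconfig fetched")
}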

Allow configuration of network and subnetwork in GCE cloud provider

From @mkjelland on April 16, 2018 22:46

Currently the machine actuator hard codes the network and subnetwork that it creates the instances in. Eventually we want to be able to provision networks/subnetworks for the cluster to live in. For now, we should allow users to specify the network and subnetwork names where they want the cluster to be created.

This issue includes:

  • Split GCEProviderConfig into cluster specific and machine specific configuration
  • Add networkName and subNetworkName fields to the GCE cluster provider config type (see the sketch at the end of this issue)
  • Update instance insertion code in GCE machine actuator to create instances in network/subnetwork found in cluster.spec.cloudProvider.networkName and cluster.spec.cloudProvider.subNetworkName
  • Update cloud-config files to use specified network/subnetwork
  • Default to current behavior if network name is not passed in

Not included in this issue:

  • Generating a network/subnetwork if one does not exist with the specified name. For now, throw an error

Copied from original issue: kubernetes-sigs/cluster-api#72
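
A minimal sketch of what the cluster-level config could look like after the split, with the two new fields; the struct shape and field names are illustrative, not the API that was eventually merged:

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GCEClusterProviderConfig is a sketch of cluster-scoped GCE settings after
// splitting GCEProviderConfig into cluster- and machine-specific parts.
type GCEClusterProviderConfig struct {
	metav1.TypeMeta `json:",inline"`

	// Project is the GCP project the cluster runs in.
	Project string `json:"project"`

	// NetworkName is the name of an existing network to place instances in.
	// When empty, the actuator falls back to today's behavior.
	NetworkName string `json:"networkName,omitempty"`

	// SubNetworkName is the name of an existing subnetwork within NetworkName.
	// When empty, the actuator falls back to today's behavior.
	SubNetworkName string `json:"subNetworkName,omitempty"`
}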

apt-key timeouts cause node to fail to start

I observed a node that failed to join the cluster, because apt-key failed.

+ apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv-keys F76221572C52609D
Executing: /tmp/tmp.k1GjT3orwh/gpg.1.sh --keyserver
hkp://keyserver.ubuntu.com
--recv-keys
F76221572C52609D
gpg: requesting key 2C52609D from hkp server keyserver.ubuntu.com
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

Wondering if we should pivot to Go now instead of wasting time trying to harden this.

Machine's spec got truncated when creating via kubectl

Here is the Machine definition, copied from the example/out/ directory.

apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  generateName: gce-node-
  labels:
    set: node
  spec:
    providerConfig:
      value:
        apiVersion: "gceproviderconfig/v1alpha1"
        kind: "GCEMachineProviderConfig"
        roles:
        - Node
        zone: "us-central1-f"
        machineType: "n1-standard-1"
        os: "ubuntu-1604-lts"
        disks:
        - initializeParams:
            diskSizeGb: 30
            diskType: "pd-standard"
    versions:
      kubelet: 1.12.0

I tried to create this Machine using kubectl create -f mymachine.yaml. The object is created but its content is truncated. Calling kubectl describe on this machine returns:

Namespace:    default
Labels:       set=node
Annotations:  <none>
API Version:  cluster.k8s.io/v1alpha1
Kind:         Machine
Metadata:
  Creation Timestamp:  2018-12-14T13:14:07Z
  Finalizers:
    machine.cluster.k8s.io
  Generate Name:     gce-node-
  Generation:        2
  Resource Version:  1650209
  Self Link:         /apis/cluster.k8s.io/v1alpha1/namespaces/default/machines/gce-node-kzg2n
  UID:               2340b3c5-ffa2-11e8-bbf6-42010a800003
Spec:
  Metadata:
    Creation Timestamp:  <nil>
  Provider Config:
  Versions:
    Kubelet:
Events:       <none>

Both kubelet version and provider config got somehow truncated.

Generated addons.yaml file is incorrect and cannot be applied

The generated addons.yaml file cannot be applied (either by passing to clusterctl or via kubectl apply). Output:

storageclass.storage.k8s.io/standard created
serviceaccount/glbc created
deployment.extensions/l7-default-backend created
configmap/ingress-controller-config created
service/default-http-backend created
deployment.extensions/l7-lb-controller created
secret/glbc-gcp-key created
Error from server (NotFound): error when creating "examples/google/out/addons.yaml": the server could not find the requested resource
Error from server (NotFound): error when creating "examples/google/out/addons.yaml": the server could not find the requested resource

The generated addons.yaml, when passed to clusterctl, causes cluster creation to fail.

Cluster Controller Should Update Cluster APIEndpoints

Currently it is the responsibility of the deployer to update cluster APIEndpoints after the control plane is provisioned. This should really live in the cluster controller so that the logic works across updates and is consistent.

Cannot create cluster using clusterctl: unable to create master machine: timed out waiting for the condition

I am unable to create a cluster with clusterctl using this repo from HEAD. I am getting:
F0928 10:27:27.798129 159895 create_cluster.go:61] unable to create master machine: timed out waiting for the condition

Increasing the logging level to 5 doesn't provide much more information; I see a lot of checks:
I0928 10:19:06.558359 159895 clusterclient.go:433] Waiting for Machine gce-master-4btlq to become ready...

There is no trace on the GCE side of any attempt to create a VM (no operations logged, etc.).

Doesn't work with latest minikube

It looks like minikube v0.30.0 doesn't run a recent enough version of the k8s API to work with the cluster API libraries being used here. Specifically, when following the directions in the README, you'll hit this error when running clusterctl create cluster:

F1128 14:25:40.581898    8057 create_cluster.go:65] unable to update bootstrap cluster endpoint: unable to update cluster endpoint: the server could not find the requested resource (put clusters.cluster.k8s.io test1-b70jo)

It looks like the code here is what fails. The cluster exists, it just can't be updated via this API.

minikube v0.30.0 is running kubernetes v1.10.0. I was able to get past this bug by using https://github.com/kubernetes-sigs/kubeadm-dind-cluster and v1.12 instead.

gcp-provider-controller-manager goes into crashloop

While trying 35cbb1e, I faced this issue:

kubectl get pods --all-namespaces
NAMESPACE             NAME                                    READY     STATUS             RESTARTS   AGE
cluster-api-system    cluster-api-controller-manager-0        1/1       Running            0          12m
gcp-provider-system   gcp-provider-controller-manager-0       0/1       CrashLoopBackOff   7          12m
kube-system           coredns-576cbf47c7-jbpph                1/1       Running            0          12m
kube-system           etcd-minikube                           1/1       Running            0          11m
kube-system           kube-addon-manager-minikube             1/1       Running            0          11m
kube-system           kube-apiserver-minikube                 1/1       Running            0          11m
kube-system           kube-controller-manager-minikube        1/1       Running            0          11m
kube-system           kube-proxy-9zn2h                        1/1       Running            0          12m
kube-system           kube-scheduler-minikube                 1/1       Running            0          11m
kube-system           kubernetes-dashboard-5bb6f7c8c6-22vcx   1/1       Running            0          12m
kube-system           storage-provisioner                     1/1       Running            0          12m

I saw that gcp-provider-controller-manager-0 is complaining about the manager location:

Warning Failed 7m (x5 over 9m) kubelet, minikube Error: failed to start container "manager": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/manager\": stat /manager: no such file or directory": unknown

Perhaps the gcp-provider-controller-manager image has not been updated for this change (7329c8d). Below is the image I get even if I force pull:

gcr.io/cluster-api-provider-gcp/gcp-cluster-api-controller   latest              901dd3ddca1f        2 months ago        188MB

This image still has the binary in the old location:

docker run -it --entrypoint sh gcr.io/cluster-api-provider-gcp/gcp-cluster-api-controller:latest
# ls
manager
# pwd
/root
# cd /
# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# exit

Unify ssh command

There are multiple command execs in the codebase all doing roughly the same thing with SSH. This issue tracks unifying all of those commands.

Note that there is also the usage of the "gcloud compute ssh" command which is a bit different and it may not be desirable to unify this.

Create ProviderStatus and GCEProviderStatus Objects

Service account and firewall rule state should be stored in the cluster.status.providerStatus field. In order to make that possible the cluster.status.providerStatus field needs to be of type ProviderStatus and we need to create a new object called GCEProviderStatus.

Once the objects are created, move all cluster provider state into this field for GCE.
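
A minimal sketch of the new status object; the field names are illustrative and only the overall shape matters:

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GCEProviderStatus is a sketch of the provider-specific state that would be
// serialized into cluster.status.providerStatus for GCE.
type GCEProviderStatus struct {
	metav1.TypeMeta `json:",inline"`

	// ServiceAccount records the email of the service account created for the cluster.
	ServiceAccount string `json:"serviceAccount,omitempty"`

	// FirewallRules lists the names of firewall rules the cluster controller has created.
	FirewallRules []string `json:"firewallRules,omitempty"`
}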
