
vultr-cloud-controller-manager's Introduction

Kubernetes Cloud Controller Manager for Vultr

The Vultr Cloud Controller Manager (CCM) provides a fully supported experience of Vultr features in your Kubernetes cluster.

  • Node resources are assigned their respective Vultr instance hostnames, Region, PlanID, and public/private IPs.
  • Node resources are put into their proper state if they are shut down or removed. This allows Kubernetes to properly reschedule pods.
  • Vultr LoadBalancers are automatically deployed when a LoadBalancer service is deployed.

This plugin is in active development and you can track progress in the Milestones.

Getting Started

More information about running the Vultr cloud controller manager can be found here.

Examples can also be found here

Note: do not modify Vultr load balancers manually

When a load balancer is created through the CCM (LoadBalancer service type), you should not modify it directly. Any manual changes will eventually be reverted, because the CCM continually validates and reconciles the load balancer's state.

Any changes to the load balancer should be made through the service object.
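
A minimal sketch of what that means in practice: the Vultr LB settings live in annotations on the Service, and you change them there. The annotation names are taken from the examples that appear later in this document; the service name, selector, and ports are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    # change LB behavior here and let the CCM reconcile it
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 8080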

Development

Go minimum version 1.19.0

The vultr-cloud-controller-manager uses go modules for its dependencies.

Building the Binary

Since the vultr-cloud-controller-manager is meant to run inside a Kubernetes cluster, you will need to build the binary for Linux.

GOOS=linux GOARCH=amd64 go build -o dist/vultr-cloud-controller-manager .

or by using our Makefile

make build-linux

This will build the binary and output it to a dist folder.

Note: if you wish to build the binary for the OS you are using, you can run make build.

Building the Docker Image

To build a Docker image of the vultr-cloud-controller-manager, you can use the docker-build target in the Makefile. Note that it requires two variables:

  • VERSION
  • REGISTRY (Docker Hub registry name)

An example:

VERSION=v0.1.0 REGISTRY=vultr make docker-build

or, if you choose to run it manually:

docker build . -t vultr-cloud-controller-manager

Running the image

docker run -ti vultr/vultr-cloud-controller-manager

Deploying to a kubernetes cluster

You will need to make sure that your Kubernetes cluster is configured to interact with an external cloud provider.

More can be read about this in the Running Cloud Controller documentation.

To deploy the versioned CCM that Vultr provides, you will need to apply two YAML files to your cluster, which can be found here.

  • Secret.yml takes the region ID in which your cluster is deployed and your API key (see the sketch after this list).

  • v0.X.X.yml is a preconfigured set of Kubernetes resources that will help get the CCM installed.
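
A rough sketch of what Secret.yml amounts to. The secret name, namespace, and key names (api-key, region) shown here are assumptions based on the example referenced above; treat the Secret.yml shipped with the release as the authoritative source.

apiVersion: v1
kind: Secret
metadata:
  name: vultr-ccm        # assumed name; check the released Secret.yml
  namespace: kube-system # assumed namespace
stringData:
  api-key: "your-vultr-api-key"
  region: "ewr"          # Vultr region ID your cluster is deployed in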

vultr-cloud-controller-manager's People

Contributors

ddymko · dependabot[bot] · happytreees · mondragonfx · optik-aper · reubit · tlitovsk · vultj

vultr-cloud-controller-manager's Issues

[Feature] - load balancer label annotation

Please add the possibility to set a load balancer label using an annotation like "service.beta.kubernetes.io/vultr-loadbalancer-label"; otherwise the load balancer is simply created with its ID as the name.
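
A hypothetical sketch of how the requested annotation might be used; the annotation name below is only the one proposed in this issue and is not necessarily implemented in the CCM.

apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    # proposed in this issue; not necessarily supported by the CCM
    service.beta.kubernetes.io/vultr-loadbalancer-label: "production-web-lb"
spec:
  type: LoadBalancer
  ports:
    - port: 80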

CSI and block storage

Hi! Is a CSI driver, either in the controller manager or separate, planned? Thanks!

[BUG] - Using annotations results in no ssl

Describe the bug
For some reason, using the described annotations makes my endpoints serve HTTP instead of HTTPS. I don't have enough knowledge to explain or understand this properly, but using them as per the docs results in net::ERR_SSL_PROTOCOL_ERROR, and Let's Encrypt throws "Server only speaks HTTP, not TLS".

    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: 'http'
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: '/health'
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-check-interval: '10'
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-response-timeout: '5'
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-unhealthy-threshold: '5'
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-healthy-threshold: '5'

To Reproduce
Use the caddy ingress helm-chart with the annotations above

Expected behavior
Not using the annotations makes the endpoint work fine.

[Feature] - Migrate to API v2

Migrate the CCM over to GoVultr v2. This will introduce pagination, consistent typing, and allow us to remove a good chunk of code that was required for the CCM to work with v1 of the API.

[BUG] - cloud-controller-manager example - region is mentioned but missing

Describe the bug
In the cloud-controller-manager example, the region value is mentioned on line 7:
"# Replace the api-key and region with proper values"
However, there is no setting to configure that value.

To Reproduce
Steps to reproduce the behavior:

  1. Go to cloud-controller-manager.yml, line 7.
  2. There is no line in the file where a region variable exists, as opposed to the API key.

Expected behavior
A region value should exist in the file and should be replaced, like the API key value.

Screenshots
Not required.

Desktop (please complete the following information where applicable):
Not required.

Additional context

None

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called Private vulnerability reporting, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

[BUG] - Error syncing load balancer

Describe the bug
Apologies if this is not the correct place to log this, but I'm also logging it with ingress-nginx.
A load balancer completely failed today for no obvious reason and I couldn't access the cluster, so I destroyed it and rebuilt it (it's using the Terraform Condor plugin).
When bringing everything back up (same versions as prior to destroying it), I'm now getting the error below when trying to bring up a load balancer through ingress-nginx:

Error syncing load balancer: failed to ensure load balancer: error getting the provider ID : providerID cannot be an empty string

The logs in the Vultr CCM pod are spamming this constantly

I0114 16:22:49.318052       1 event.go:291] "Event occurred" object="default/ingress-nginx-controller" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
E0114 16:22:49.466674       1 controller.go:275] error processing service default/ingress-nginx-controller (will retry): failed to ensure load balancer: error getting the provider ID  : providerID cannot be an empty string

Has anything changed that could cause this in the last 6 months? (that's how long it's been since I brought up the original cluster)
Or is there any way I can debug to find out what providerID is? (I've done lots of searching and cannot see any reference to it in either package)

To Reproduce
Steps to reproduce the behavior:

  1. Bring up a kubernetes cluster in terraform using Condor
  2. Install the Helm chart from ingress-nginx
    helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace default --values=values.yml
  3. Watch the services for ingress-nginx-controller - it will remain in the "Pending" state
  4. See that the event log shows the error message given above
  5. Check the Vultr CCM pod, see the above error messages

Expected behavior
A load balancer to be provisioned

Screenshots
N/A

Desktop (please complete the following information where applicable):
N/A

Additional context

Contents of values.yml in the reproduction steps:

controller:
  config:
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "true"
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/vultr-loadbalancer-proxy-protocol: "true"

Again, sorry if this is the wrong place to post this, please let me know if there's a better place to ask.

[Feature] - Allow for disabling load balancer creation

Is your feature request related to a problem? Please describe.

There are many scenarios in which you might not want the Vultr CCM to create a load balancer. Even if you have a different k8s load-balancing solution, the CCM will create a Vultr load balancer in addition to the one you actually want.

An example use case is that Vultr has fantastic BGP support, and you may want to use MetalLB instead of a third-party node balancer. In this case, you likely still want the other features of the CCM, such as putting the node into the proper state for pod scheduling.

Describe the solution you'd like

A way to turn off load balancer creation, either with an opt-out or opt-in service annotation, or even a global flag that can be set on the cluster.
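
A hypothetical sketch of the opt-out variant; the annotation name below is invented purely to illustrate the proposal and does not exist in the CCM.

apiVersion: v1
kind: Service
metadata:
  name: metallb-managed
  annotations:
    # hypothetical opt-out annotation illustrating the request; not implemented
    service.beta.kubernetes.io/vultr-loadbalancer-exclude: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80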

Describe alternatives you've considered

The alternative is to just not run the CCM, which works okay, but isn't the ideal scenario.

[Question] - Is there any way to automatically scale a Kubernetes cluster so it is reflected in Vultr billing?

Hi,

I had a question regarding Vultr in relation to on-demand pricing. How might one automatically adjust Vultr resource allocation to reflect actual demand, possibly making use of Kubernetes autoscaling to do so?

It seems like the node pool size is fixed, so the Kubernetes autoscaling won't affect the actual underlying resource allocation and the associated Vultr billing cost. I was hoping to simulate something a bit like AWS Lambda functions, where idle time is basically free.

Cheers!

[BUG] - load balancer creation fails if a HTTP health-check-port is defined in the annotations

I created an LB using a k8s service definition; however, if I set the annotation "service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-port", the CCM complains about a missing port in the service, even though it is set.

"Event occurred" object="ingress-nginx/ingress-nginx-lb-service" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: provided health check port 10254 does not exist for service ingress-nginx/ingress-nginx-lb-service"

But as you can see in this service definition, that port is configured.

As a workaround, I now use TCP and the k8s standard port (which is in fact simply the NodePort that is mapped to nginx port 80), but this is not a reliable configuration.
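
A sketch of the configuration the reporter expects to work, using the annotation named in this issue and the port from the error message above; the service name, selector, and ports are otherwise placeholders.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb-service
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
    # this port is declared in spec.ports below, yet the CCM reports it as missing
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-port: "10254"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: healthz
      port: 10254
      targetPort: 10254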

[BUG] - loadBalancerIP not assigning reserved IP

Describe the bug
When creating a service of the LoadBalancer type, I would expect there to be either an annotation I can pass or a loadBalancerIP field to use a previously reserved IP address.

To Reproduce
Steps to reproduce the behavior:
Create a service with the loadBalancerIP field set.
A newly created IP is assigned instead.

Expected behavior
The previously reserved IP should be attached to the load balancer instead of a newly created one.
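
A sketch of the intent behind this report: loadBalancerIP is a standard Kubernetes Service field, and the question is whether the Vultr CCM honours it (or offers an equivalent annotation). The IP below is an example address only.

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  loadBalancerIP: "203.0.113.10"   # previously reserved IP (example address)
  ports:
    - port: 80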

[Feature] - Support v4 and v6 PTR records by annotation

For standalone LBs, Vultr allows configuring PTR records for both IPv4 and IPv6. For LBs provisioned and managed by vultr-cloud-controller-manager, this capability should be surfaced via annotations.

For v4, this is a simple scalar value.

For v6, the console allows creating multiple addresses. As I'm new to Vultr, I'm not clear on the use case yet (because while creating PTRs for IPv6 addresses other than the one presented does properly resolve, it doesn't cause the LB to bind to this address), but a format similar to firewall-rules would make sense, where you have a semicolon delimited list defining multiple addresses. Each element of the list can be in the form <64-bit suffix>,<hostname> or, for the LB's primary assigned IP, simply <hostname>.
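
A hypothetical sketch of the proposed annotations; the names and the semicolon-delimited v6 format below are only the suggestion made in this issue, not an implemented API.

apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    # proposed, not implemented: PTR record for the LB's IPv4 address
    service.beta.kubernetes.io/vultr-loadbalancer-v4-ptr: "lb.example.com"
    # proposed, not implemented: semicolon-delimited list; each element is
    # "<64-bit suffix>,<hostname>" or just "<hostname>" for the primary IPv6
    service.beta.kubernetes.io/vultr-loadbalancer-v6-ptr: "lb.example.com;0:0:0:1,alt.example.com"
spec:
  type: LoadBalancer
  ports:
    - port: 443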

Thanks!

[BUG] - The image is missing on Docker Hub

Describe the bug

It's not pushed to Docker Hub:
Normal | BackOff (4) | 6 mins ago | Back-off pulling image "vultr/vultr-cloud-controller-manager:v0.9.0"

Expected behavior
The image should be available on Docker Hub.

[Feature] - Use different front end and back end protocol in Forwarding Rules

Is your feature request related to a problem? Please describe.
The front end and back end protocols are always the same as defined at https://github.com/vultr/vultr-cloud-controller-manager/blob/master/vultr/loadbalancers.go#L603
In the Vultr Load Balancer, it is possible to use HTTPS on the LB Protocol and HTTP on the Instance Protocol.
I want to use SSL termination at the Vultr Load Balancer so my application doesn't have to care about the SSL certificates. It works on a manually created Load Balancer where the configuration is HTTPS on the LB Protocol and HTTP on the Instance Protocol.
Currently, the backend protocol is always HTTPS. Therefore, I have to use an additional NGINX reverse proxy to handle the HTTPS. It is unnecessary in my opinion.
Another solution is to use the NGINX ingress controller. That means I can't take advantage of the SSL Termination at the Load Balancer and my nodes have to perform the SSL termination.

Describe the solution you'd like
Work the same as a manually configured Vultr Load Balancer (HTTPS on the LB protocol and HTTP on the instance protocol).
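
A hypothetical sketch of what this request amounts to: setting the instance-side (backend) protocol independently of the LB-side protocol. The backend-protocol annotation below is invented here to illustrate the proposal and does not exist in the CCM; whether "https" is an accepted value for the existing protocol annotation is part of what this issue is asking about.

apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    # existing annotation: protocol used on the LB (frontend) side
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "https"
    # hypothetical annotation: protocol used towards the instances (backend)
    service.beta.kubernetes.io/vultr-loadbalancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8080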
