
external-dns's People

Contributors

21h, abursavich, alejandrojnm, badliveware, dependabot[bot], dsalisbury, ericrrath, fred78290, hjacobs, jgrumboe, johngmyers, k8s-ci-robot, kbhandari, linki, megum1n, mloiseleur, njuettner, pg2000, raffo, renehernandez, seanmalloy, sewci0, sheerun, skoef, stevehipwell, szuecs, tariq1890, thomask33, timja, vinny-sabatini


external-dns's Issues

Support custom TTLs

Especially for k8s Services it would be good to be able to set a TTL lower than 300 seconds.
When a node goes down, it could take up to five minutes for DNS to be updated.

EDIT
just realized that the load balancer's IP address is used, so my argument is not valid. Nevertheless, a custom TTL would still be a good idea ;)

consider changing controller annotation's value to "external-dns"

By annotating an object with the controller that should process it, the user can opt in to or out of being processed by external-dns.

external-dns.alpha.kubernetes.io/controller: mate

However, this controller's identifier was defined as dns-controller, but I think it makes more sense to use the name of this project, external-dns. That is, leaving a resource unannotated or annotating it with external-dns would opt it in.

external-dns.alpha.kubernetes.io/controller: external-dns

Currently, you would have to use

external-dns.alpha.kubernetes.io/controller: dns-controller

Come up with an Interface for internal storage

We need some way of keeping track of what records external-dns is responsible for in order to:

  • support running alongside already present legacy records
  • allow for easy migration to external-dns
  • run multiple instances of external-dns concurrently

In mate we used co-located TXT records, with a value identifying the instance of mate, to keep track of ownership. This worked well but added some complexity. Maybe there's another way.

@justinsb mentioned keeping track of that data as annotations or via a ConfigMap. This wouldn't work with multiple clusters targeting the same zones, though.

Anyway, maybe we can come up with an interface/abstraction so that this storage becomes configurable:

  • back external-dns with a ConfigMap so that it can decide which records it owns and which are legacy/manually created records (would not support concurrent instances across clusters, absent other restrictions)
  • back it with storage in the DNS provider (TXT record, S3 bucket, separate etcd cluster 😮, federated control plane's storage, what have you...) to support multiple instances of external-dns across different clusters.
  • null storage for when you just don't care, i.e. every record is automatically under external-dns jurisdiction.
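To make the storage pluggable, the options above could hide behind a small interface. A minimal Go sketch, where Storage, OwnedRecords, and Claim are all hypothetical names for illustration, not an existing API:

```go
package main

import "fmt"

// Endpoint is a provider-independent DNS record (illustrative shape).
type Endpoint struct {
	DNSName string
	Target  string
}

// Storage abstracts ownership tracking; implementations could be
// TXT-record based, ConfigMap based, or a no-op "null" storage.
type Storage interface {
	// OwnedRecords returns the records this external-dns instance owns.
	OwnedRecords() ([]Endpoint, error)
	// Claim marks records as owned by this instance.
	Claim(records []Endpoint) error
}

// NullStorage: every record is automatically under external-dns jurisdiction.
type NullStorage struct{ records []Endpoint }

func (s *NullStorage) OwnedRecords() ([]Endpoint, error) { return s.records, nil }
func (s *NullStorage) Claim(records []Endpoint) error {
	s.records = append(s.records, records...)
	return nil
}

func main() {
	var s Storage = &NullStorage{}
	s.Claim([]Endpoint{{DNSName: "foo.example.org", Target: "1.2.3.4"}})
	owned, _ := s.OwnedRecords()
	fmt.Println(len(owned))
}
```

A TXT-record or ConfigMap implementation would satisfy the same interface, so the controller never needs to know which storage is in use.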

GitHub repo description & topics

Just a reminder to hunt down some admin to fix the GitHub repo description. My proposal:

Configure external DNS servers (Route53, Cloud DNS and others) for Kubernetes ingress and services

I think the repo description should be meaningful and attract the main potential users (which right now are users of cloud providers, as we have no other implementations at the start).

Increase logging output

external-dns currently doesn't log anything if it's not running in dry-run mode. Change that!

Add policy flag to control lifecycle behaviour of DNS records

Add a --policy flag to control how external-dns deals with the lifecycle of DNS records.

Possible values could be:

  • create-only: only create records, don't update or delete
  • update-only: only update existing records, never create or delete anything
  • upsert-only: create and update records as needed, never remove records without replacement
  • crud or sync: full authority over records, create, update and delete records as needed
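A sketch of how such a policy switch could filter the computed changes before they reach the DNS provider (the Changes type and ApplyPolicy function are illustrative names, not existing code):

```go
package main

import "fmt"

// Changes holds the record changes computed by the diffing step
// (illustrative shape; plain record names stand in for full records).
type Changes struct {
	Create, Update, Delete []string
}

// ApplyPolicy filters the computed changes according to the --policy flag.
func ApplyPolicy(policy string, c Changes) Changes {
	switch policy {
	case "create-only":
		return Changes{Create: c.Create}
	case "update-only":
		return Changes{Update: c.Update}
	case "upsert-only":
		return Changes{Create: c.Create, Update: c.Update}
	default: // "sync" (or "crud"): full authority over records
		return c
	}
}

func main() {
	c := Changes{Create: []string{"a"}, Update: []string{"b"}, Delete: []string{"c"}}
	// upsert-only drops the deletions but keeps creates and updates.
	fmt.Println(len(ApplyPolicy("upsert-only", c).Delete))
}
```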

/cc @ideahitme

Project structure and file system layout

Proposal and brainstorm kickoff for the general project structure. This shouldn't be too technical but rather serve as a way to capture the components involved.

external-dns/
  cmd/external-dns/        <= place for external-dns binary
  producers/               <= place for producer implementations (kubernetes, fake, ...)
  consumers/               <= place for consumer implementations (route53, googledns, ...)
  controller/              <= place for orchestrator that connects producers and consumers
  plan/                    <= place for code related to the plan object
  util/                    <= place for commonly shared code
  vendor/                  <= place for vendored libraries (maybe wrong with bazel)

Questions

  • Different naming for producers and consumers? e.g. sources and targets?

Single broken entity breaks the whole batch

If a single DNS record cannot be created, e.g. when a desired DNS name doesn't match the managed zone, the operation fails and the whole batch is rolled back. We should tolerate individual records failing.
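One possible approach, sketched with hypothetical names (submit stands in for the real provider call): apply records one by one and collect per-record failures instead of rolling back the batch:

```go
package main

import (
	"errors"
	"fmt"
)

// submit is a stand-in for a provider call that may reject a single record.
func submit(record string) error {
	if record == "outside-managed-zone.example.net" {
		return errors.New("name does not match managed zone")
	}
	return nil
}

// ApplyAll tries every record and aggregates failures so that one
// broken entity no longer breaks the whole batch.
func ApplyAll(records []string) (applied []string, failed map[string]error) {
	failed = map[string]error{}
	for _, r := range records {
		if err := submit(r); err != nil {
			failed[r] = err
			continue
		}
		applied = append(applied, r)
	}
	return applied, failed
}

func main() {
	ok, bad := ApplyAll([]string{"a.example.org", "outside-managed-zone.example.net"})
	fmt.Println(len(ok), len(bad))
}
```

The trade-off is losing the provider's batch atomicity, so the failures need to be logged and retried on the next sync loop.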

Setup basic integration test on GCP

I'll try to set up a basic integration test for Services on Google Cloud Platform.

This will run in its own Google project, get a valid DNS name, and spin up a cluster running external-dns inside. At some point we may be able to run it wherever the CNCF runs the Kubernetes integration tests?

Annotations strategy

We'll support several annotations from the beginning, some of them are supposed to be deprecated soon, others will be added over time as new requirements come up.

@justinsb threw in an idea where we store JSON as the value of an annotation, which can then be used to encode more complicated data to support more advanced but less common use cases.

We could design our annotations around a single annotation holding JSON. This would be the source of truth for our controller. Keys can be added over time to support new features (CNAME, TTL, provider-specific settings). Normally one would use the simpler annotations with plain-text values (e.g. external-dns/hostname: foo), but internally external-dns would convert the simpler annotations into the source-of-truth struct before processing.

This way, most users could just use the simple annotations that are exposed, while more advanced users could set the JSON-based annotation directly to access everything that external-dns provides.

This can also help with rolling out backward-incompatible changes and new versions. New features could be made available solely via keys in the JSON before being promoted to top-level annotations if they prove useful.
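A rough sketch of the conversion, assuming a hypothetical external-dns.alpha.kubernetes.io/config annotation and Config struct (both names are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config is the hypothetical source-of-truth struct behind the annotations.
type Config struct {
	Hostname string `json:"hostname"`
	TTL      int    `json:"ttl,omitempty"`
}

// parseAnnotations prefers the advanced JSON annotation and falls back
// to the simple plain-text hostname annotation (annotation keys are
// illustrative, not the final names).
func parseAnnotations(ann map[string]string) (Config, error) {
	if raw, ok := ann["external-dns.alpha.kubernetes.io/config"]; ok {
		var c Config
		err := json.Unmarshal([]byte(raw), &c)
		return c, err
	}
	return Config{Hostname: ann["external-dns.alpha.kubernetes.io/hostname"]}, nil
}

func main() {
	c, _ := parseAnnotations(map[string]string{
		"external-dns.alpha.kubernetes.io/config": `{"hostname":"foo.example.org","ttl":60}`,
	})
	fmt.Println(c.Hostname, c.TTL)
}
```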

@justinsb @ideahitme @iterion Any thoughts?

Run on kube?

Hey, is there a way to run this as a sidecar / recurring job in kube itself?

The One with the Plan

Background

When developing mate we found that one of the more difficult parts was calculating the diff from the current state to the desired state of DNS records. In mate, this responsibility was part of each consumer implementation, although much of it could have been shared between consumers, and it was difficult to test.

Glossary:

  • producer: source of desired dns records
  • consumer: adapter to the specific dns provider
  • controller: glue between producers and consumers

Proposal

We propose an intermediate object that captures the transition needed to go from the current to the desired state of DNS records in a DNS-provider-independent way. This object would then be used as input to the consumers, so the diffing logic is not part of each of them.

This makes the consumer implementations easier to write (fewer responsibilities) and the intermediate object easier to test (fewer external dependencies). I would like to call this object Plan, as it's a sort of execution plan.

A plan would be constructed from two input lists:

  • a desired list of records
  • a current list of records

And it would return three lists as output:

  • a list of records to create
  • a list of records to update (including old and new records)
  • a list of records to delete

This plan object can be tested in isolation and many combinations and edge cases can easily be added to the test suite.

The plan would then be passed to a consumer object which would "just execute" the actions against the DNS provider. This also makes each consumer easier to test, as one just needs to provide a Plan and verify that the consumer runs the right actions against the DNS provider. Any errors that may happen (e.g. because a record to delete doesn't exist) can simply be surfaced and don't necessarily need special handling.

The flow would be the following:

  • controller asks producer to return a list of desired dns records
  • controller asks consumer for the current list of dns records
  • controller passes list from producer (desired) and list from consumer (current) to plan calculator and retrieves a Plan
  • controller passes Plan to consumer for execution

We chose to represent the necessary actions as three lists as it seems to be the most compatible way of defining it between dns providers. For instance, AWS Route53 doesn't differentiate between creates and updates so it could just merge them into an upsert list before processing. On the other hand, Google CloudDNS doesn't support updates natively and one would have to convert the updates list to a combination of delete+create first. Other consumers will be different but we think those three lists capture all the information they would require.
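The diffing described above could be sketched roughly like this (Record, Change, and CalculatePlan are illustrative names, and for simplicity records are keyed on DNS name only):

```go
package main

import "fmt"

// Record is a provider-independent DNS record (illustrative shape).
type Record struct {
	DNSName string
	Target  string
}

// Change pairs the old and new version of an updated record.
type Change struct{ Old, New Record }

// Plan captures the transition from current to desired state
// as the three output lists described above.
type Plan struct {
	Create []Record
	Update []Change
	Delete []Record
}

// CalculatePlan diffs desired against current, keyed by DNS name.
func CalculatePlan(desired, current []Record) Plan {
	cur := map[string]Record{}
	for _, r := range current {
		cur[r.DNSName] = r
	}
	var p Plan
	seen := map[string]bool{}
	for _, d := range desired {
		seen[d.DNSName] = true
		if c, ok := cur[d.DNSName]; !ok {
			p.Create = append(p.Create, d) // not present yet
		} else if c.Target != d.Target {
			p.Update = append(p.Update, Change{Old: c, New: d}) // target changed
		}
	}
	for _, c := range current {
		if !seen[c.DNSName] {
			p.Delete = append(p.Delete, c) // no longer desired
		}
	}
	return p
}

func main() {
	desired := []Record{{"new.example.org", "1.1.1.1"}, {"web.example.org", "2.2.2.2"}}
	current := []Record{{"web.example.org", "9.9.9.9"}, {"old.example.org", "3.3.3.3"}}
	p := CalculatePlan(desired, current)
	fmt.Println(len(p.Create), len(p.Update), len(p.Delete))
}
```

A Route53 consumer could then merge Create and Update into one upsert list, while a CloudDNS consumer could expand Update into delete+create pairs.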

Inspired by work we had planned for mate: linki/mate#46

Add a dry-run flag

Very useful for the DNS provider side. Just display what would have been done. This is not intended to replace the fake DNS provider but rather to run a provider safely.

Add detailed documentation how to use external-dns

  • GCE
    • with type=LoadBalancer services
    • with GLBC ingress controller
    • with nginx-ingress-controller
    • with voyager ingress controller
    • with traefik ingress controller
  • on AWS
    • with type=LoadBalancer services
    • with skipper-ingress-controller
    • with nginx-ingress-controller
    • with voyager ingress controller
    • with traefik ingress controller
  • on Bare metal (no cloud load balancer)
    • with nginx-ingress-controller

Contribution guideline

Improve the contribution guideline docs. They should include a manual covering creating PRs, PR labelling, and updating the changelog.

Add a way to provide external-dns with a template to generate service FQDNs

It would be great to have the option of providing external-dns with a template string like {{.Namespace}}-{{.Name}}.example.com to generate DNS records instead of using the annotation on the service.
Using such a template, one can manage multiple zones that contain all services without altering the annotation on each service. Also, as far as I understand, one service can currently only map to exactly one FQDN.
Using templating, I can launch one external-dns instance per zone without touching my Kubernetes service definitions.
zalando-mate supports this via --kubernetes-format, which was a great fit for our use case.
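Go's standard text/template package would cover this directly; a minimal sketch (fqdnFromTemplate and the Service struct are illustrative, not existing external-dns code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Service holds the fields exposed to the FQDN template (illustrative).
type Service struct {
	Name, Namespace string
}

// fqdnFromTemplate renders a template string such as
// "{{.Namespace}}-{{.Name}}.example.com" for a given service.
func fqdnFromTemplate(tmpl string, svc Service) (string, error) {
	t, err := template.New("fqdn").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, svc); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	fqdn, _ := fqdnFromTemplate("{{.Namespace}}-{{.Name}}.example.com",
		Service{Name: "web", Namespace: "prod"})
	fmt.Println(fqdn) // prod-web.example.com
}
```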

Read LB address from infrastructure-managed resource

I'm opening this issue to find out if our use-case is supported.

We use Ingress objects and the nginx-ingress-controller to route external traffic coming from ALBs inside the cluster. Currently we have our own approach to updating Route53 with the address of the ALB.
However, we'd like to move to external-dns and have it update Route53 with the address of the ALB, but without leaking the ALB address into the application manifests.

Not sure how feasible that is. One solution could be to get the ALB address from a TPR?

Make every flag configurable via ENV vars

Be twelve-factor. This also makes it possible to transparently inject env vars into pods and have them picked up in the future, e.g. with PodInjectionPolicies.

drone does a good job here

$ docker run drone/drone:0.5 server --help
OPTIONS:
   --debug							start the server in debug mode [$DRONE_DEBUG]
   --broker-debug						start the broker in debug mode [$DRONE_BROKER_DEBUG]
   --server-addr ":8000"					server address [$DRONE_SERVER_ADDR]
   --server-cert 						server ssl cert [$DRONE_SERVER_CERT]

Requirements for initial usable version

Late braindump, take with a grain of salt.

In- and out-of-scope items for the first "working" version (time frame: before KubeCon):

  • basic sync loop
    • watch can be done later
  • support for Service and Ingress hostnames
    • Node can come later
  • support for simple new annotations
    • i.e. hostname + controller
    • legacy + more advanced can come later
  • single cluster, single zone support
    • zone auto-detection can come later
    • multi-cluster single-zone conflicts out of scope
  • support for tolerating conflicting records in target zone
    • i.e. do not mess with existing legacy records in zone
    • leads to: basic ownership model must exist, has to work across pod restarts
  • CNAME record support for Route53
    • A should be easy to add but probably not needed
    • ALIAS out of scope if too difficult
  • A record support for Google CloudDNS
    • CNAME easy but probably not needed
  • Good unit test coverage
    • basic integration test would be great, though

@ideahitme @justinsb @iterion let me know if you agree or want to add/remove stuff.

Set up Travis CI

We should set up Travis at some point. This should include various tests.

AFAIK the standard way of doing this for k8s-related projects is to include some of the bash/python scripts from https://github.com/kubernetes/kubernetes/tree/master/hack. They provide a way to check that the boilerplate is set correctly for all relevant files (license header at the top of the file), plus all the Go tool verifications (vet, fmt, lint, etc.) and unit/integration/e2e tests.

Basically, after we include all of the above, everything can be run with a slightly modified version of https://github.com/kubernetes/kubernetes/blob/master/hack/verify-all.sh

Opening this issue for now and we can add all of this once actual development starts :)

About using bazel

During the meeting @justinsb had the idea of using bazel for building.

Let's discuss what the benefits are and how we can best tackle this.

Thanks!

Not really an issue at all. Just wanted to quickly say "Thanks" for the work you are doing. The project is shaping up nicely and with some of the 0.2 stuff I think we can start testing it out in some of our lower environments.

Support DNS records for ingresses without rules

If my Ingress does not contain any rules but only a default backend, no records are created. It would be great to implement an alternative way to create records when no rules are set.
For example, the approach mentioned in #142 could be a solution here. As a result we'd have a consistent way to handle Services and Ingresses.
We could use both approaches at the same time (use the template AND create records for every rule, if any exist), or just fall back to the template method from #142 if no rules are defined.
Maybe a more sophisticated opt-in/opt-out configuration using annotations could be useful.

Repository Set up

  • Makefile
  • Code of Conduct
  • CONTRIB.md
  • Apache License

We just need most of these dropped in, we can always change them later.

Provide a way to specify the source

Currently external-dns is hard-coded to watch Services only. We should allow watching Ingress as well. Here's one way:

  • To keep the controller simple it should still take a single Source.
  • Create a MultiSource object that implements Source and wraps other Sources
  • One such multi-source would be the "Kubernetes" source, which fans out to the Service and Ingress sources
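A minimal sketch of this fan-out (Source, Endpoint, and MultiSource are illustrative shapes, not the actual interfaces):

```go
package main

import "fmt"

// Endpoint represents a desired DNS record (illustrative shape).
type Endpoint struct{ DNSName string }

// Source produces the desired DNS records.
type Source interface {
	Endpoints() ([]Endpoint, error)
}

// MultiSource fans out to multiple wrapped Sources while itself
// implementing Source, so the controller keeps taking a single Source.
type MultiSource struct{ children []Source }

func (m *MultiSource) Endpoints() ([]Endpoint, error) {
	var all []Endpoint
	for _, c := range m.children {
		eps, err := c.Endpoints()
		if err != nil {
			return nil, err
		}
		all = append(all, eps...)
	}
	return all, nil
}

// fakeSource is a stand-in for the real Service and Ingress sources.
type fakeSource struct{ eps []Endpoint }

func (f fakeSource) Endpoints() ([]Endpoint, error) { return f.eps, nil }

func main() {
	kubernetes := &MultiSource{children: []Source{
		fakeSource{[]Endpoint{{"svc.example.org"}}}, // Service source
		fakeSource{[]Endpoint{{"ing.example.org"}}}, // Ingress source
	}}
	eps, _ := kubernetes.Endpoints()
	fmt.Println(len(eps))
}
```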

Add a ROADMAP

TL;DR add a ROADMAP file

Our first release won't be the fully usable thing we intend to achieve (and need for ourselves). However, we want to release early and not wait until everything is perfectly done.

Users visiting the project after the first release should have a clear understanding of where the project is right now, what they can expect, and in which order to expect it.

Add Kubernetes CustomResourceDefinition Source

Add a source that lists/watches objects of a specific CustomResourceDefinition.

We could make a CRD the central source for DNS entries. And then just create those objects as needed. This would allow other components to declare DNS entries as well.

Clarify whether to use alpha annotations

I was wondering whether we should use alpha annotations while external-dns is still unstable.

Looking at other projects, I think it makes sense to target alpha.external-dns.kubernetes.io for now so that we can change the annotations at any time if we discover they aren't suitable. I know that we discussed the desired annotations for quite some time, but you never know if they'll really work out.

Expose metrics through Prometheus endpoint

A lot of Kubernetes users run Prometheus to monitor their clusters, and many core Kubernetes components offer a /metrics endpoint. It might be useful to expose some info, even if it's just the basic Go metrics provided by the Go Prometheus client package.
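Even before pulling in the Prometheus client library, a first cut could serve a counter in Prometheus' text exposition format using only the standard library (the metric name external_dns_sync_total and the handler are illustrative):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// syncCount is an illustrative counter of completed sync loops.
var syncCount uint64

// renderMetrics formats the counter in Prometheus' text exposition format.
func renderMetrics(count uint64) string {
	return fmt.Sprintf("# TYPE external_dns_sync_total counter\nexternal_dns_sync_total %d\n", count)
}

// metricsHandler serves the metrics on /metrics, like core k8s components do.
func metricsHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, renderMetrics(atomic.LoadUint64(&syncCount)))
}

func main() {
	atomic.AddUint64(&syncCount, 2)
	// In a real binary this would be http.ListenAndServe on a fixed port;
	// a test server keeps the example self-contained.
	srv := httptest.NewServer(http.HandlerFunc(metricsHandler))
	defer srv.Close()
	resp, _ := http.Get(srv.URL + "/metrics")
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```

Switching to the official client library later would add the default Go runtime metrics for free.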
