kubernetes-sigs / external-dns
Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
License: Apache License 2.0
CoreDNS is an alternative for on-prem deployments where google/aws are not available. It has some support as a Federation DNS backend now: https://github.com/kubernetes/kubernetes/blob/master/federation/pkg/dnsprovider/providers/coredns/coredns.go
Can it be added as a valid external DNS source as well?
Especially for k8s services it would be good to be able to set a lower TTL than 300 seconds.
When one node goes down it could take up to five minutes until DNS is updated.
EDIT
Just realized that the load balancer's IP address is used, so my argument is not valid. Nevertheless, a custom TTL would still be a good idea ;)
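For illustration, a per-resource TTL could be exposed via an annotation. The annotation key and value format below are hypothetical, just to sketch the idea:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.org
    # hypothetical: override the default 300s TTL for this record
    external-dns.alpha.kubernetes.io/ttl: "60"
```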
Add a source that lists/watches for Ingress objects.
That got lost somehow :) @ideahitme
By annotating objects with the controller that should process them, users can opt in to or out of being processed by external-dns:

external-dns.alpha.kubernetes.io/controller: mate

However, this controller's identifier was defined to be dns-controller, but I think it makes more sense to use the name of this project, external-dns. Meaning, leaving a resource unannotated or annotating it with external-dns would opt it in:

external-dns.alpha.kubernetes.io/controller: external-dns

Currently, you would have to use:

external-dns.alpha.kubernetes.io/controller: dns-controller
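For example, opting a resource in explicitly could look like this (using the annotation key discussed above):

```yaml
metadata:
  annotations:
    # processed by external-dns (the proposed default identifier)
    external-dns.alpha.kubernetes.io/controller: external-dns
```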
We need some way of keeping track of what records external-dns is responsible for in order to:

* not interfere with records that were not created by external-dns
* allow multiple instances of external-dns to run concurrently

In mate we used co-located TXT records with a value identifying the instance of mate to keep track of it. This worked well but turned out to add some additional complexity. Maybe there's another way.
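A minimal sketch of the co-located TXT record idea. The "heritage" value format and function names below are assumptions for illustration, not a decided spec:

```go
package main

import (
	"fmt"
	"strings"
)

// ownershipValue encodes which external-dns instance owns a record.
// The "heritage" format here is an assumption for illustration only.
func ownershipValue(owner string) string {
	return fmt.Sprintf("\"heritage=external-dns,external-dns/owner=%s\"", owner)
}

// ownedBy reports whether a co-located TXT record value claims ownership
// for the given instance identifier.
func ownedBy(txtValue, owner string) bool {
	v := strings.Trim(txtValue, "\"")
	for _, part := range strings.Split(v, ",") {
		if part == "external-dns/owner="+owner {
			return true
		}
	}
	return false
}

func main() {
	v := ownershipValue("cluster-a")
	fmt.Println(v, ownedBy(v, "cluster-a"), ownedBy(v, "cluster-b"))
}
```

Because the ownership marker lives in the zone itself, this scheme would also work with multiple clusters targeting the same zone, unlike a per-cluster ConfigMap.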
@justinsb mentioned keeping track of that data as annotations or via a ConfigMap. This wouldn't work with multiple clusters targeting the same zones, though.
Anyways, maybe we are able to come up with an interface/abstraction so we can make this storage configurable:

* external-dns with a ConfigMap so that it can decide what records it has and which are legacy/manually created records (would not support concurrent instances across clusters, given there aren't any other restrictions)
* external-dns instances cooperating across different clusters.
* external-dns limited to records within its jurisdiction.

Afaik kops-dns-controller is using the Kubernetes dnsprovider lib: https://github.com/kubernetes/kubernetes/tree/master/federation/pkg/dnsprovider
It probably makes sense to investigate whether we should use that library to keep us in sync with the Kubernetes main repository and help us speed up development. If we find it to be unusable or a blocker, we could consider implementing our own dnsprovider, like we did in mate.
Just a reminder to hunt down some admin to fix the GitHub repo description. My proposal:
Configure external DNS servers (Route53, Cloud DNS and others) for Kubernetes ingress and services
I think the repo description should be meaningful and attract the main potential users (which are users of cloud providers right now as we have no other implementation at the start).
First thing to do for this new Incubator project: create a design doc (docs/design
folder as @justinsb proposed) with all planned features and the interface to the user (annotations and Ingress resource).
We can use the contents of the "External DNS" Google Doc as a start.
external-dns
currently doesn't log anything if it's not running in dry-run
mode. Change that!
We should create a chart as soon as we have a suitable version.
Use a --policy flag to control how external-dns deals with the lifecycle of DNS records.
Possible values could be:

* create-only: only create records, don't update or delete
* update-only: only update existing records, never create nor delete anything else
* upsert-only: create and update records as needed, never remove records without replacement
* crud or sync: full authority over records; create, update and delete records as needed

/cc @ideahitme
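The proposed policy semantics could be sketched like this. Type and function names are hypothetical; only the flag values match the proposal above:

```go
package main

import "fmt"

// Changes holds the record names a sync run wants to apply, grouped by action.
type Changes struct {
	Create, Update, Delete []string
}

// applyPolicy filters desired changes according to the --policy value.
// A sketch of the proposed semantics, not the final implementation.
func applyPolicy(policy string, c Changes) Changes {
	switch policy {
	case "create-only":
		return Changes{Create: c.Create}
	case "update-only":
		return Changes{Update: c.Update}
	case "upsert-only":
		return Changes{Create: c.Create, Update: c.Update}
	default: // "sync"/"crud": full authority, including deletes
		return c
	}
}

func main() {
	c := Changes{Create: []string{"a.example.org"}, Delete: []string{"b.example.org"}}
	fmt.Println(applyPolicy("upsert-only", c).Delete) // deletes are filtered out
}
```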
Proposal and brainstorm kickoff for the general project structure. This shouldn't be too technical but rather serve as a way to capture the components involved.
external-dns/
cmd/external-dns/ <= place for external-dns binary
producers/ <= place for producer implementations (kubernetes, fake, ...)
consumers/ <= place for consumer implementations (route53, googledns, ...)
controller/ <= place for orchestrator that connects producers and consumers
plan/ <= place for code related to the plan object
util/ <= place for commonly shared code
vendor/ <= place for vendored libraries (maybe wrong with bazel)
Are producers and consumers the right names? What about, e.g., sources and targets?

If a single DNS record cannot be created, e.g. when a desired DNS name doesn't match the managed zone, the whole batch will fail and be rolled back. We should tolerate individual records failing.
I'll try to set up a basic integration test for Services on Google Cloud Platform.
This will run in its own Google project, get a valid DNS name, and spin up a cluster running external-dns inside. At some point we may be able to run it wherever CNCF runs the Kubernetes integration tests.
We'll support several annotations from the beginning, some of them are supposed to be deprecated soon, others will be added over time as new requirements come up.
@justinsb threw in an idea where we store JSON as the value of an annotation, which can then be used to encode more complicated data in order to support more advanced but less common use cases.
We could design our annotations around a single annotation holding JSON. This would be the source of truth for our controller. Keys can be added over time to support new features (CNAME, TTL, provider-specific stuff). Normally one would use the simpler annotations with plain text values (e.g. external-dns/hostname: foo), but internally external-dns would convert the simpler annotations to the source-of-truth struct before processing.
This way, most users could just use the simple annotations that are exposed, but more advanced users could directly set the JSON-based annotation to have access to everything that external-dns provides.
This can also help with solving backwards-incompatible changes and with rolling out new versions. New features could be made available solely via some keys in the JSON before being promoted to top-level annotations if they prove to be useful.
@justinsb @ideahitme @iterion Any thoughts?
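A sketch of the fallback idea: prefer the JSON annotation, otherwise build the source-of-truth struct from the simple annotation. The annotation keys and struct fields are assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dnsConfig is a hypothetical source-of-truth struct; keys and annotation
// names below are assumptions, not a decided schema.
type dnsConfig struct {
	Hostname string `json:"hostname"`
	TTL      int    `json:"ttl,omitempty"`
}

// configFromAnnotations prefers the JSON annotation and falls back to the
// simple plain-text hostname annotation.
func configFromAnnotations(ann map[string]string) (dnsConfig, error) {
	var cfg dnsConfig
	if raw, ok := ann["external-dns/config"]; ok {
		err := json.Unmarshal([]byte(raw), &cfg)
		return cfg, err
	}
	cfg.Hostname = ann["external-dns/hostname"]
	return cfg, nil
}

func main() {
	cfg, _ := configFromAnnotations(map[string]string{
		"external-dns/config": `{"hostname":"foo.example.org","ttl":60}`,
	})
	fmt.Println(cfg.Hostname, cfg.TTL)
}
```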
Hey, is there a way to run this as a sidecar / recurring job in kube itself?
When A records with multiple target IPs are set up on GCP, external-dns is unable to remove them and fails.
Also required to support services with type=NodePort
When developing mate we found that one of the more difficult parts was calculating the diff from a current state to a desired state of DNS records. In mate, this responsibility was part of each consumer implementation, even though much of it could have been shared between consumers, and it was difficult to test.
Glossary:
We propose an intermediate object that captures the transition needed to go from a current to a desired state of DNS records in a dns-provider-independent way. This object would then be used as input to the consumers, and the diffing logic would no longer be part of each of them.
Hence, making the consumer implementations easier to write (less responsibilities) and allowing the intermediate object to be easier to test (less external dependencies). I would like to call this object Plan
as it's a sort of execution plan.
A plan would be constructed from two input lists: the current records and the desired records.
And it would return three lists as output: records to create, records to update, and records to delete.
This plan object can be tested in isolation and many combinations and edge cases can easily be added to the test suite.
The plan would then be passed to a consumer object which would "just execute" the actions according to the dns provider. This also makes each consumer easier to test as one just needs to provide a Plan
and see that the consumer runs the right actions against the DNS provider. Any errors that may happen (e.g. because a record to delete doesn't exist) can be surfaced as errors and don't necessarily need to be handled.
The flow would be the following:

1. Ask a producer to return a list of desired DNS records.
2. Ask a consumer for the current list of DNS records.
3. Pass the list from the producer (desired) and the list from the consumer (current) to the plan calculator and retrieve a Plan.
4. Pass the Plan to the consumer for execution.

We chose to represent the necessary actions as three lists as it seems to be the most compatible way of defining it between DNS providers. For instance, AWS Route53 doesn't differentiate between creates and updates, so it could just merge them into an upsert list before processing. On the other hand, Google CloudDNS doesn't support updates natively, and one would have to convert the updates list to a combination of delete+create first. Other consumers will be different, but we think those three lists capture all the information they would require.
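The plan calculation above can be sketched as follows. Record fields, the key used for matching, and the function name are assumptions; a real implementation would also consider record types, TTLs, etc.:

```go
package main

import "fmt"

// Record is a minimal, provider-independent DNS record; fields are illustrative.
type Record struct {
	Name, Target string
}

// Plan captures the transition from current to desired state as three lists.
type Plan struct {
	Create, Update, Delete []Record
}

// Calculate diffs current against desired by record name. A sketch only.
func Calculate(current, desired []Record) Plan {
	cur := map[string]Record{}
	for _, r := range current {
		cur[r.Name] = r
	}
	var p Plan
	seen := map[string]bool{}
	for _, d := range desired {
		seen[d.Name] = true
		if c, ok := cur[d.Name]; !ok {
			p.Create = append(p.Create, d)
		} else if c.Target != d.Target {
			p.Update = append(p.Update, d)
		}
	}
	for _, c := range current {
		if !seen[c.Name] {
			p.Delete = append(p.Delete, c)
		}
	}
	return p
}

func main() {
	p := Calculate(
		[]Record{{"old.example.org", "1.2.3.4"}, {"keep.example.org", "1.1.1.1"}},
		[]Record{{"new.example.org", "5.6.7.8"}, {"keep.example.org", "2.2.2.2"}},
	)
	fmt.Println(len(p.Create), len(p.Update), len(p.Delete))
}
```

A Route53 consumer could merge Create and Update into one upsert list; a CloudDNS consumer could expand each Update into a delete+create pair.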
Inspired by work we had planned for mate
: linki/mate#46
Very useful for the DNS provider side. Just display what would have been done. This is not intended to replace the fake DNS provider but rather to run a provider safely.
type=LoadBalancer services

We have to set up a pipeline that builds and pushes our Docker image.
I would like to give Google's Container Builder a try: https://cloudplatform.googleblog.com/2017/03/Google-Cloud-Container-Builder-a-fast-and-flexible-way-to-package-your-software.html?m=1
Improve the contribution guideline docs. Should include guidance on creating PRs, PR labelling, and updating the changelog.
Go through these resources and make sure external-dns
is in good shape
relevant PR #144
As suggested by @linki it might make sense to checkout the client-go informer
library to fetch/watch the list of services/ingress resources. A pretty clean example can be found here:
It is currently used in the main Kubernetes repo and in a few of the /contrib projects as well.
It'll be great to have the possibility to provide external-dns with a template string like {{.Namespace}}-{{.Name}}.example.com to generate DNS records instead of using the annotation on the service.
Using such a template, one can manage multiple zones which contain all services without altering the annotation for each service. Also, as far as I understand, currently one service can only map to exactly one FQDN.
Using the templating, I can launch one external-dns instance per zone without messing around with my Kubernetes service definitions.
zalando-mate supports this via --kubernetes-format, which was a great fit for our use case.
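A sketch of such templating with Go's text/template; the function name and struct fields are assumptions, only the template syntax matches the proposal:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// fqdnFromTemplate renders a DNS name from a template string like
// "{{.Namespace}}-{{.Name}}.example.com".
func fqdnFromTemplate(tmpl, namespace, name string) (string, error) {
	t, err := template.New("fqdn").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = t.Execute(&buf, struct{ Namespace, Name string }{namespace, name})
	return buf.String(), err
}

func main() {
	fqdn, _ := fqdnFromTemplate("{{.Namespace}}-{{.Name}}.example.com", "prod", "api")
	fmt.Println(fqdn) // prod-api.example.com
}
```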
Currently it's not possible to opt-out of being processed by external-dns.
Add an implementation using AWS Route53
I'm opening this issue to find out if our use-case is supported.
We use ingress
objects and nginx-ingress-controller
to route external traffic coming from ALBs inside the cluster. Currently we have our own approach of updating route53 with the address of the ALB.
However we'd like to move to external-dns
and have it update route53 with the address of the ALB, but without leaking the ALB-address into the application manifests.
Not sure how feasible that is. One solution could be to get the ALB-address from a TPR?
Be twelve-factor. This also allows transparently injecting env vars into pods and having them picked up in the future, e.g. with PodInjectionPolicies.
drone does a good job here
$ docker run drone/drone:0.5 server --help
OPTIONS:
--debug start the server in debug mode [$DRONE_DEBUG]
--broker-debug start the broker in debug mode [$DRONE_BROKER_DEBUG]
--server-addr ":8000" server address [$DRONE_SERVER_ADDR]
--server-cert server ssl cert [$DRONE_SERVER_CERT]
Late braindump, take with a grain of salt.
In- and out-of-scope items for the first "working" version (time frame: before KubeCon).
@ideahitme @justinsb @iterion let me know if you agree or want to add/remove stuff.
Add a source that lists/watches for Service objects.
We should set up Travis at some point. This should include various tests.
AFAIK the standard way of doing it for k8s-related projects is to include some of the bash/python scripts from https://github.com/kubernetes/kubernetes/tree/master/hack. They provide a way to check that the boilerplate is set correctly for all relevant files (license agreement at the top of the file), plus all Go tool verifications (vet, fmt, lint, etc.), plus unit/integration/e2e tests.
Basically, after we include all of the above, everything can be run with a slightly modified version of https://github.com/kubernetes/kubernetes/blob/master/hack/verify-all.sh
Opening this issue for now and we can add all of this once actual development starts :)
Not really an issue at all. Just wanted to quickly say "Thanks" for the work you are doing. The project is shaping up nicely and with some of the 0.2 stuff I think we can start testing it out in some of our lower environments.
Add a source that generates fake values (for testing).
Add a source that lists/watches for Node objects.
If my ingress does not contain any rules but only a default backend no records are created. It would be great to implement an alternative way to create records if no rules are set.
For example the way mentioned in #142 can be a solution here. As a result we have a consistent way to handle services and ingresses.
We can just use both ways at the same time (use the template AND create records for every rule if one exists) or just fallback to the template method from #142 if no rules are defined.
Maybe a more sophisticated opt in/out configuration using annotations could be useful.
We just need most of these dropped in, we can always change them later.
Currently external-dns is hard-coded to watch Services only. We should allow watching Ingress as well. Here's one way: define a Source interface, create a MultiSource object that implements Source and wraps other Sources, and implement Source for both Services and Ingresses.
TL;DR add a ROADMAP file
Our first release won't be the fully usable thing we intend to achieve (and need for ourselves). However, we want to release early and not wait until everything is perfectly done.
Users visiting the project after the first release should have a clear understanding where the project is right now and what and in which order they can expect certain things.
Please put the following in the README:
Example: https://github.com/kubernetes-incubator/service-catalog#kubernetes-incubator
cc @calebamiles
Add a source that lists/watches for objects of a specific CustomResourceDefinition.
We could make a CRD the central source for DNS entries. And then just create those objects as needed. This would allow other components to declare DNS entries as well.
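As a sketch, such a CRD-backed entry might look like this; the kind, API group, and fields below are purely illustrative, not a decided schema:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1  # hypothetical group/version
kind: DNSEndpoint                        # hypothetical kind
metadata:
  name: example-record
spec:
  endpoints:
  - dnsName: foo.example.org
    recordType: A
    recordTTL: 180
    targets:
    - 1.2.3.4
```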
I was wondering if we should rather use alpha annotations while external-dns is still unstable?
Looking at other projects, I think it makes sense to target alpha.external-dns.kubernetes.io for now so that we can change them at any time if we discover they aren't suitable. I know that we discussed the desired annotations for quite some time, but you never know if they really work out.
Add an implementation using Google CloudDNS.
A lot of kubernetes users use prometheus for monitoring their cluster. And, many core kubernetes components offer a /metrics
endpoint. It might be useful to expose some info, even if it's just the basic go metrics that are provided by the go prometheus package.
We entirely skipped integration tests for mate
. Let's not make the same mistake again.
https://github.com/StackExchange/dnscontrol looks interesting and may be worth integrating with external-dns.