
Registrator

Service registry bridge for Docker.


Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. Registrator supports pluggable service registries, which currently includes Consul, etcd and SkyDNS 2.

Full documentation is available at http://gliderlabs.com/registrator

Getting Registrator

Get the latest release, master, or any version of Registrator via Docker Hub:

$ docker pull gliderlabs/registrator:latest

The :latest tag always points to the latest release. There is also a :master tag, as well as version tags for pinning to specific releases.

Using Registrator

The quickest way to see Registrator in action is our Quickstart tutorial. Otherwise, jump to the Run Reference in the User Guide. Typically, running Registrator looks like this:

$ docker run -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      consul://localhost:8500

CLI Options

Usage of /bin/registrator:
  /bin/registrator [options] <registry URI>

  -cleanup=false: Remove dangling services
  -deregister="always": Deregister exited services "always" or "on-success"
  -internal=false: Use internal ports instead of published ones
  -ip="": IP for ports mapped to the host
  -resync=0: Frequency with which services are resynchronized
  -retry-attempts=0: Max retry attempts to establish a connection with the backend. Use -1 for infinite retries
  -retry-interval=2000: Interval (in milliseconds) between retry attempts
  -tags="": Append tags for all registered services
  -ttl=0: TTL for services (default is no expiry)
  -ttl-refresh=0: Frequency with which service TTLs are refreshed
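
As a sketch of how these options combine (all values here are illustrative; adjust the registry URI and flags to your setup), note that options go before the registry URI:

```shell
docker run -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -cleanup -resync=60 -tags=production -retry-attempts=5 \
      consul://localhost:8500
```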

Contributing

Pull requests are welcome! We recommend getting feedback before starting by opening a GitHub issue or discussing in Slack.

Also check out our Developer Guide on Contributing Backends and Staging Releases.

Sponsors and Thanks

Big thanks to Weave for sponsoring, Michael Crosby for skydock, and the Consul mailing list for inspiration.

For a full list of sponsors, see SPONSORS.

License

MIT

Contributors

ashmckenzie, avinson, blalor, ch3lo, cnf, derekelkins, dminkovsky, dsouzajude, enpassant, etopian, hawka, hookenz, jmeichle, johnydays, josegonzalez, lex-r, mattatcha, mgood, mikljohansson, mvanholsteijn, nicofuccella, nirped, progrium, rifelpet, sheldonh, sisir-chowdhury, sttts, thenathanblack, virtuald, wixyvir


Issues

Add service check support for dynamically allocated ports

In some cases it's useful to deploy containers and allow the Docker daemon to select which ports to publish, such as with docker run -d -p :5000 ... or docker run -d -P ... during a rolling deployment. I'm wondering if it would be possible to have Registrator account for this when registering checks.

This probably isn't the right approach, but I imagined some sort of magic variable interpolation in the SERVICE_CHECK_SCRIPT envvar:

docker run -d -p :80 -e SERVICE_CHECK_SCRIPT="curl -sS http://localhost:$PORT_TCP_80" nginx

Connection refused

docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h node1 progrium/registrator consul://ipaddress:8500

fcb44e6877b12d45c1ddbdf48eaeb210ca632bfbbfd695be96d736330987f10d
[root@babulinux4 sysconfig]# docker logs fcb44e6877b12d45c1ddbdf48eaeb210ca632bfbbfd695be96d736330987f10d
2014/10/07 00:33:51 registrator: Using consul registry backend at consul://:8500
2014/10/07 00:33:51 registrator: dial unix /tmp/docker.sock: connection refused
lxc-start: The container failed to start.
lxc-start: Additional information can be obtained by setting the --logfile and --log-priority options.

How do I fix this?
Thanks.

Maybe a stupid question

Hello,

First, many thanks for all the work done! :)

I have a question about the "IP" field: it's always the "registrator -ip=" value or the Consul IP; I don't see anywhere a way to use the Docker internal IP (172.17.x.x).
Is there any way to register the container IP address (docker inspect -f '{{ .NetworkSettings.IPAddress }}' "name or id of container")?

Currently I'm using etcd/skydns/skydock, but I would love to use only one solution with Consul (in brief, I would love to see skydock=registrator).

Thanks in advance
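
Worth noting here: the -internal flag in the CLI options above makes Registrator use internal (container-side) ports instead of published ones; whether it also registers the container's 172.17.x.x address depends on the Registrator version, so treat this sketch as something to verify against your build:

```shell
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -internal \
      etcd://172.17.42.1:4001/services
```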

Registrator registers its own IP instead of service container's IP

Summary

Using the skydns2 ServiceRegistry, registrator registers the IP of its own container for services that come up.

Steps to replicate

etcd is empty:

$ etcdctl ls /
/coreos.com

Start registrator

$ ETCD="$(ifconfig docker0 | grep 'inet ' | awk '{ print $2}'):4001"
$ docker run \
    --name registrator \
    -v /var/run/docker.sock:/tmp/docker.sock \
    -h $HOSTNAME \
    -d progrium/registrator skydns2://"$ETCD"/skydns.local
> 0a0223c76fca1706c5fe16362f57629cabfdcd601adc391c7aaba9b19469a785

Logs:

$ docker logs -f registrator
2014/10/09 03:56:16 registrator: Using skydns2 registry backend at skydns2://172.17.42.1:4001/skydns.local
2014/10/09 03:56:16 registrator: ignored: 0a0223c76fca no published ports
2014/10/09 03:56:16 registrator: Listening for Docker events...
# more later

Start, let's say, redis:

docker run --name redis -p 6379:6379 --rm redis
# redis runs

In the registrator log follow we see appended:

2014/10/09 04:04:56 registrator: added: abdd65515759 ip-172-30-0-76:redis:6379

and the redis container has the following IP:

$ docker inspect redis | grep Address
        "IPAddress": "172.17.0.100",

but in etcd:

$ etcdctl ls /skydns/local/skydns/redis
/skydns/local/skydns/redis/ip-172-30-0-76:redis:6379
$ etcdctl get /skydns/local/skydns/redis/ip-172-30-0-76:redis:6379
{"host":"172.17.0.99","port":6379}

Now let's say we restart registrator and look at the redis registration in etcd again:

$ docker stop registrator
registrator
$ docker start registrator
registrator
$ etcdctl get /skydns/local/skydns/redis/ip-172-30-0-76:redis:6379                                                                                          
{"host":"172.17.0.101","port":6379}

So what's happening is that registrator is registering its container IP for services, instead of the host's IP.

$ docker inspect registrator | grep Address
        "IPAddress": "172.17.0.101",

This causes skydns to report that redis is running in the registrator container. The behavior I expected was that etcdctl get /skydns/local/skydns/redis/ip-172-30-0-76:redis:6379 would produce the IP of the redis host. The current behavior causes skydns2 to resolve to the IP of the registrator container.

Thank you very much for reading and comments.

backends should set a ttl?

I keep ending up with various invalid keys when rebooting a system, presumably because of an ordering problem on shutdown. However, you could imagine that if a host had a hard shutdown, then the keys associated with containers on the host would never go invalid either.

Currently (at least with etcd backend), registrator does not set a TTL, so the key lives forever and is never removed. A TTL (or some expiration mechanism) should be set on the keys so they expire.
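
For reference, the -ttl and -ttl-refresh flags from the CLI options above implement exactly this kind of expiry. A sketch (values illustrative; -ttl-refresh must be smaller than -ttl so keys are refreshed before they lapse):

```shell
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -ttl=60 -ttl-refresh=30 \
      etcd://172.17.42.1:4001/services
```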

registrator container ip address, not target container ip address, in etcd

I am running two jarvis-stream containers on a host, with registrator:latest running. It seems Registrator enters its own container IP address, not the jarvis-stream container IP addresses, in etcd.

core@coreos04 ~ $ etcdctl get /services/jarvis-stream-8085/coreos04:stoic_franklin:8085
172.17.0.59:49169
core@coreos04 ~ $ etcdctl get /services/jarvis-stream-8085/coreos04:naughty_meitner:8085
172.17.0.59:49171

The following are the processes for the jarvis-stream containers.

core@coreos04 ~ $ ps -ef|grep 8085
root 16673 4199 0 21:25 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49169 -container-ip 172.17.0.56 -container-port 8085
root 16855 4199 0 21:31 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49171 -container-ip 172.17.0.57 -container-port 8085
core 17117 15477 0 21:53 pts/1 00:00:00 grep --colour=auto 8085

The registrator should enter the values 172.17.0.56:49169 and 172.17.0.57:49171 in etcd.

The following are the containers running on the testing host.

core@coreos04 ~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b59e5dcc37e6 progrium/registrator:latest "/bin/registrator et 11 seconds ago Up 11 seconds sad_pasteur
90129210c70a coreos04.plaxo.com:5000/release/jarvis-stream:0.0.1.3 "/usr/bin/supervisor 16 minutes ago Up 16 minutes 0.0.0.0:49170->22/tcp, 0.0.0.0:49171->8085/tcp naughty_meitner
df50931ebdac coreos04.plaxo.com:5000/release/jarvis-stream:0.0.1.2 "/usr/bin/supervisor 22 minutes ago Up 22 minutes 0.0.0.0:49168->22/tcp, 0.0.0.0:49169->8085/tcp stoic_franklin
0d4ff32db3bc atcol/docker-registry-ui:latest ""/bin/sh -c 'sed - 21 hours ago Up 21 hours 0.0.0.0:8080->8080/tcp distracted_hypatia
dfb18df7ca31 registry:latest "docker-registry" 2 days ago Up 45 hours 0.0.0.0:5000->5000/tcp csv_docker_new
54dc1df1c59f ubuntu:latest "/bin/bash" 5 days ago Exited (0) 5 days ago registry_web

container registered with name:tag in consul

Sometimes containers are registered in the format name:image-tag, like cadvisor:latest. I think it's related to how the container is launched, i.e. using google/cadvisor:latest instead of google/cadvisor. This is the command that I use to launch cAdvisor:

sudo docker run --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --name=cadvisor google/cadvisor:latest


Some containers registration is skipped (race condition)

I have an issue where a container starting right after the registrator container is sometimes skipped. Looking at the code, I see that this can theoretically happen in the window between fetching/registering existing containers and adding listeners for new ones.
Maybe it would help if the event listener were registered right before fetching existing containers?

No containers added when running as a service

When I run registrator using supervisord or as a system-level service with an initscript, it will poll Docker once at start and add any new containers, but any subsequent starts are ignored.

Only when running in the foreground does it work.

get Container ID from registrator data

While working with CoreOS, I ran into this issue where fleetctl destroy would not remove any stopped containers.

Later, I learned the cause for this:

Containers are handled quite fine with fleet. It's just edge cases like --name that causes you to do some workarounds because of the way it's handled in Dockerland.

I already replied to this with questions on how to properly stop/destroy a container.

However, I have a feeling I should be able to do this with Registrator, but haven't been able to do so yet.

What I'm running into is that I need to convert this:

ExecStart=/usr/bin/docker run --name xxx ....
ExecStop=/usr/bin/docker stop xxx

into something where I can drop --name and ExecStop uses the container ID to stop the container, instead of using the --name value.

Using jq (which is also awesome), I could parse the Consul data stored by Registrator like this:

$ curl -s $COREOS_PRIVATE_IPV4:8500/v1/catalog/service/consul | jq '.[0]'
{
  "Node": "portal-2",
  "Address": "172.17.8.102",
  "ServiceID": "consul",
  "ServiceName": "consul",
  "ServiceTags": [],
  "ServicePort": 8300
}

$ curl -s $COREOS_PRIVATE_IPV4:8500/v1/catalog/service/consul | jq '.[0] | .ServicePort'
8300

But this doesn't give me the container ID. The container ID is already printed in the logs:

2014/12/17 08:14:56 registrator: added: 24f4ffb25f6a portal-3:consul:53:udp
2014/12/17 08:14:56 registrator: added: 24f4ffb25f6a portal-3:consul:8300
2014/12/17 08:14:56 registrator: added: 24f4ffb25f6a portal-3:consul:8301

so having this in Registrator would be really nice.

Having said that, if you - as an expert on Docker/CoreOS/discovery - have any insights into this, and perhaps a better way to solve this issue, then I'm all ears :) 👍

Forcing IP address doesn't work

I'm having trouble forcing the IP address of the services to be the host's address rather than the consul container's. I'm launching the registrator container like this:

docker run --rm -v /var/run/docker.sock:/tmp/docker.sock --name registrator.1 --link consul.1:consul progrium/registrator -ip 192.168.140.2 consul://consul:8500

... yet all the services are defined with the Consul container's IP anyway. It actually took me a while to realize that the IP and port of the services corresponded to the host rather than the containers themselves.

Am I doing something wrong?

Ability to exclude ports or containers

I would like the ability to exclude specific ports or an entire Docker container. I'm considering running this alongside progrium/docker-consul, and when doing so you end up with Consul containing a very verbose set of ports.
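
One possible shape for this, assuming an ignore mechanism like the SERVICE_IGNORE environment variable proposed in a later issue below (hypothetical here; verify that your Registrator build supports it):

```shell
# Hypothetical: SERVICE_IGNORE skips the whole container;
# a per-port variant like SERVICE_8301_IGNORE could skip one port.
docker run -d -e SERVICE_IGNORE=true --name consul-agent progrium/docker-consul
```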

setting Attrs with SERVICE_<metadata>

I am using Docker with consul and tried this out the other day. I am particularly interested in adding custom metadata to the Attrs data structure, however I couldn't get this to work.

I tried setting random metadata env variables such as SERVICE_RELEASE=12345 and SERVICE_SERVERCLASS=SEARCH in the Dockerfile, and the env variables are visible from inside the running container. However, after querying the Consul services I did not see any Attrs.

Overwriting tags via the env variable SERVICE_TAGS did work, however, so I think I am doing it right.

Any known issues with this? I don't know Go, so I didn't peek into the source code.

Consul container's IP address for all services

Maybe I'm missing something, but I think something strange is happening. I have a test setup with docker-consul, registrator and mongodb. I would expect registrator to register the mongodb service with the IP address of its container. In fact, when querying Consul for the service, it gives me the IP address of the Consul container.

Here's what I've done, from a new docker host on Ubuntu 13.10

Consul:

sudo docker run -d \
    -p 8500:8500 \
    --name consul \
    -h node1 \
    progrium/consul -server -bootstrap

Registrator:

sudo docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    --name registrator \
    --link consul:consul \
    progrium/registrator consul://consul:8500

MongoDB:

sudo docker run -d -p 27017:27017 --name mongodb dockerfile/mongodb

Then, I queried Consul's HTTP API for mongodb

http://localhost:8500/v1/catalog/service/mongodb-27017

which returned

[{"Node":"node1","Address":"172.17.0.4","ServiceID":"node1:mongodb:27017","ServiceName":"mongodb-27017","ServiceTags":null,"ServicePort":27017}]

As you can see, the IP address it returns is 172.17.0.4. This is not the IP address of the mongodb container.

-> sudo docker inspect mongodb | grep IPAddress
        "IPAddress": "172.17.0.13",

Can you help me out with this?

Docker instances without ports

I believe that being able to register Docker instances without exported ports (suitably tagged) should be one of the things that Registrator supports.

Use cases for such containers would be portless instances that merely fetch data (web spiders, webhook senders, queue pullers and pushers of all types). You'd want to know that they exist even though they don't have exposed external ports.

This would allow for monitoring and configuration management, etc.

It should also be noted that consul (at least), supports portless services.

[Feature] Add container launched with --net=host

I have a couple of containers that are launched with --net=host. By default Registrator only adds containers with exposed ports to Consul, but it would be very helpful to support containers using the host network option, maybe by adding an env variable (like the exposed port) from which Registrator gets the values to register in Consul.

Consul services seem to be registered under wrong node

I have one container running, and the registrator on a host. I started the registrator with the following:

docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h 10.10.2.55 progrium/registrator consul://10.10.1.130:8500

Where 10.10.1.130 is a remote Consul server and 10.10.2.55 is the host machine. For some reason Registrator registers the services on this box as if they were on 10.10.1.130 instead of 10.10.2.55. In the registrator logs I do see the following:

registrator: added: 2b330163ada4 10.10.2.55:high_sinoussi:8080

Any ideas? This may be operator error. Does it matter what the hostname of the containers running on this host is?

skydns2 backend broken for multiple ports

The skydns backend is broken for multiple ports.

Let's say you have a web container that publishes ports 80 and 443. We should be writing keys like these:

/skydns/docker/web/web1/_tcp/_http   <- {"host":"10.1.42.80","port":80}
/skydns/docker/web/web1/_tcp/_https  <- {"host":"10.1.42.80","port":443}
/skydns/docker/web/web1              <- {"host":"10.1.42.80"}

This would support A record lookups of web1.web.docker, and SRV lookups of _http._tcp.web1.web.docker and _https._tcp.web1.web.docker.

It's not clear to me whether we should also support SRV lookups against the SERVICE_NAME as well, e.g. _http._tcp.web.docker.

I'd like to listen to some feedback before I work on a patch.
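
Written out by hand with the etcd v2 CLI, the proposed layout would look like this (a sketch mirroring the keys above):

```shell
etcdctl set /skydns/docker/web/web1/_tcp/_http  '{"host":"10.1.42.80","port":80}'
etcdctl set /skydns/docker/web/web1/_tcp/_https '{"host":"10.1.42.80","port":443}'
etcdctl set /skydns/docker/web/web1             '{"host":"10.1.42.80"}'
```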

Consul Key-value Store doesn't work (fig.yml file attached)

Hi,
First, thanks a lot for this project! It's one of the reasons we're considering switching to Docker! :)

I've experimented with the Consul Service Catalog option and it works great! :)

The reason I'm considering the Consul Key-Value Store option is to simplify local development by allowing processes running on the host to be registered with Consul (without Consul installed on my host).

As I said, the Consul Key-Value Store option doesn't work: when I log in to the console @ 8500 I don't see any entries, and I also can't "ping redis" from any "docker run --rm -it ubuntu:14.04 /bin/bash" container.

I'm using boot2docker and my fig.yml file is:

consul:
  ports:
    - "8400:8400"
    - "8500:8500"
    - "172.17.42.1:53:53/udp"
  image: progrium/consul
  hostname: node1
  name: consul
  command: -server -bootstrap -advertise 172.17.42.1

registrator:
  links:
    - consul:consul
  hostname: boot2docker
  image: progrium/registrator
  command: consul:///services
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock

redis:
  image: redis
  ports:
    - "6379:6379"
  name: redis

Status check failing with dial unix /var/run/docker.sock: no such file or directory

docker run -p 8001:8001 -d -e 'SERVICE_TAGS=www' \
    -e 'SERVICE_CHECK_HTTP=/status' \
    -e 'SERVICE_CHECK_INTERVAL=15s' --name status.0 status

results in a check (screenshot omitted) that fails with the following error:

Get http:///var/run/docker.sock/v1.14/images/a38aa7eb7912/json: dial unix /var/run/docker.sock: no such file or directory2014/12/29 18:38:51 Post http:///var/run/docker.sock/v1.14/containers/create: dial unix /var/run/docker.sock: no such file or directory

Environment:

OSX 10.10
boot2docker
docker-consul backend
docker 1.4.1

Build fails

src/github.com/progrium/registrator/bridge.go:157: published[0].HostIp undefined (type docker.PortBinding has no field or method HostIp)

HostIp should be HostIP

Registrator not listening to docker events - Cent OS 7

Consul is running fine

Registrator is running fine, but it's not listening to docker events.

ran registrator as :

docker run -d -h 10.153.6.131 -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://10.153.6.131:8500

also tried

docker run -d -h 10.153.6.131 -P -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://10.153.6.131:8500

docker logs 73acf7b7f687b6c02e35ccfa6e2a41e42eea18a27292befd87d65fc665d20320
2014/10/07 21:39:49 registrator: Using consul registry backend at consul://10.153.6.131:8500

But no "Listening for Docker events". Due to this, container services are not recognized.

Please help.

Launching instances with shipyard doesn't get containers registered on consul and causes errors in registrator logs

When deploying containers with shipyard (which uses cpu / mem parameters) registrator (either deployed from command line or via shipyard itself) is unable to parse new containers and register them on consul.

Example:

Registrator is executed with:

docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h 10.10.4.203 progrium/registrator consul://10.10.4.203:8500

Running redis with:

docker run -d --name redis.0 -P dockerfiles/redis

Makes the redis container show up in consul and registrator logs show:
2014/10/24 17:12:49 registrator: Listening for Docker events...
2014/10/24 17:13:58 registrator: added: 59d4cb348421 10.10.4.203:redis.0:6379

Deploying the same redis instance from shipyard, with cpu: 0.1 and mem: 94 causes:

2014/10/24 17:12:49 registrator: unable to inspect container: a6ea5530f519 json: cannot unmarshal number 1.00663296e+08 into Go value of type int64

Deploying the same redis instance from shipyard but setting cpu: 0.0 and mem: 0.0 (which causes it to skip setting those values) succeeds:

$ docker logs cdcaf513965c
2014/10/24 17:20:03 registrator: Using consul registry backend at consul://10.10.4.203:8500
2014/10/24 17:20:03 registrator: Listening for Docker events...
2014/10/24 17:21:14 registrator: added: 4158db669660 10.10.4.203:goofy_darwin:6379

Here you can see the docker inspect output of the redis instance when deployed with cpu/mem: 0.0, as opposed to "cpus": 0.1, "memory": 96:

https://gist.github.com/pythianliappis/51db463731cbaa1f2901

The belief is that there is a bug in the Go Docker client used by Registrator, fsouza/go-dockerclient, while parsing this field.

Shipyard/citadel/interlock and other projects use the Go Docker client from Sam Alba:

https://github.com/samalba/dockerclient/blob/master/types.go#L9

and do not seem to have an issue.

Ideas on what's wrong here more than welcome!

At the moment the workaround is to set cpu: 0.0 and mem: 0.0 in shipyard for the deployed container.

Add set of command line arguments to easily read out from consul/etcd the registered services

My use case is CoreOS systemd units, though this is a more general request. If I stick Registrator on all of my cluster hosts, then when I bring things up and down they get into etcd without any mess. This is pretty sweet.

Now I need to somehow pass the registered container addresses into my other containers when those containers start. I think users would generally like to be able to do simple things like 'docker run $(registrator /some/service/key) my_container' and have that just work, instead of figuring out what weird etcdctl + sed madness is needed to pass arguments to the container.

Arguably, one should probably integrate with etcd/consul directly. However, some services can't do that yet… I just realized that's what ambassadord is for. Hm. I still feel this type of functionality could be useful.
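
A hedged sketch of the kind of helper being requested, using plain etcdctl (the key path /some/service/key is the hypothetical one from the paragraph above, and SERVICE_ADDR is an illustrative variable name):

```shell
# Hypothetical wrapper: read a value Registrator wrote into etcd and
# hand it to a new container as an environment variable.
SERVICE_ADDR=$(etcdctl get /some/service/key)
docker run -e SERVICE_ADDR="$SERVICE_ADDR" my_container
```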

Latest automated build is stale/broken

When I try to run the latest:

$ docker run --rm --name registrator -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator -ip ${COREOS_PRIVATE_IPV4} skydns2://172.17.42.1:4001/skydns.local                                          
2014/10/21 17:34:45 registrator: Forcing host IP to skydns2://172.17.42.1:4001/skydns.local
2014/10/21 17:34:45 registrator: -ttl must be greater than -refresh-interval
$

This log message is from the penultimate commit at the time of this issue posting (06551c1), with master not having this log. This appears to be the result of the binary not being rebuilt/recommitted to the repo.

registrator not picking up docker events

I start a consul container locally w/ the following command:

sudo docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h 10.0.0.6 progrium/consul -server -bootstrap-expect 1

(I have also tried w/o the -h flag), it appears to start up fine, I can get to http://10.0.0.6:8500/...

I then start up a registrator container w/ the following command:

sudo docker run -v /var/run/docker.sock:/tmp/docker.sock progrium/registrator consul://10.0.0.6:8500

it appears to start up fine and gets to the
2015/01/27 15:12:00 registrator: Listening for Docker events....
I also see it sync w/ the consul service in the consul logs on the consul container

I then fire up any other docker container (ubuntu) with environment variables set, and I don't see registrator pick up on it.

I have verified that docker is firing events using the docker events command

`2015-01-27T10:08:36.000000000-05:00 f60a0a8c4de2e2caf43f3c042b2a308d742f4f32f5292edd3a13ec8e69a2a6e5: (from ubuntu:latest) create

2015-01-27T10:08:36.000000000-05:00 f60a0a8c4de2e2caf43f3c042b2a308d742f4f32f5292edd3a13ec8e69a2a6e5: (from ubuntu:latest) start`

doing an lsof on /var/run/docker.sock on the host machine shows that the docker process is listening on this socket.

Any insights into what the issue is or what I'm doing wrong? It seems like this should be a fairly straightforward setup.

Inconsistencies when host has multiple IP addresses

Our Docker hosts have two IP addresses that containers might get ports forwarded from: an internal (LAN) one and a public one.
So far we were only using the LAN IPs, and since our Consul agents are bound to those IPs as well, Registrator did its job perfectly.
However, if we map container ports from the public IPs, the services are registered in Consul with the IP of the agent, which is the LAN one.
This is because Registrator uses consulapi.AgentServiceRegistration, which registers the service as belonging to an existing agent (node).
With Consul K/V or etcd (I guess, not tested) the correct IP is registered.

Obviously, Consul DNS resolution for those services is incorrect.
I see 3 ways to address this:

  • Completely skip service registration for the container based on an environment variable (SERVICE_IGNORE). This doesn't really solve the problem, but it might be useful for other purposes and is quite easy to implement. I'll be submitting a PR shortly.
  • Don't register services for port mappings with a service.IP that's different from the host's IP. Again, it doesn't completely solve the issue, but at least we would be able to keep registering other services that bind to the LAN IP for the same container.
  • Register services mapped to an IP that's different from the host's IP as External Services (http://www.consul.io/docs/guides/external.html)

I'll try and submit a PR for the second option as well.
Just need to dive into Golang a bit more.

The third option would be the best one, but it implies the creation of a new node.
I'm not sure how it could be named so that it could be reused for other services bound to that same IP.
Perhaps it could simply be named after the IP (e.g. 8-8-8-8).

I'm open to suggestions.

How to register existing containers

I'm attempting to run a SkyDNS2-based deployment with a bunch of web app containers fronted by Nginx. After launching SkyDNS and Registrator, my expectation was that it would sync its state with Etcd, loading all existing containers before watching for new events. This doesn't appear to be the case, though.

Is it possible to have Registrator load all currently-running containers into SkyDNS before it begins watching for new events?
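
One option worth verifying against your version: the -resync flag from the CLI options above re-registers all running containers on an interval, on top of the initial sync Registrator performs at startup. A sketch (interval illustrative):

```shell
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -resync=300 \
      skydns2://172.17.42.1:4001/skydns.local
```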

Nodes registration on vagrant are not correct

Hi
Trying to execute registrator om my vagrant :
Executing consul server :
docker run -d --name consul -h consul -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp --volume=/data:/data --volume=/etc/timezone:/etc/timezone:ro --volume=/etc/localtime:/etc/localtime:ro myd-vm00941.hpswlabs.adapps.hp.com:5000/consul:0.3.1 -server -bootstrap

Executing Registrator:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h $HOSTNAME progrium/registrator consul://$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' consul):$(docker port consul 8500 | awk -F ':' '{print $2}')

Executing postgres

docker run -d --name postgresql-node-1 -h postgresql-node-1 -p 5432:5432 -p 9200:9200 --volume=/etc/timezone:/etc/timezone:ro --volume=/etc/localtime:/etc/localtime:ro myd-vm00941.hpswlabs.adapps.hp.com:5000/postgresql:9.3

When I open Consul I see something wrong:

  1. I have only one node, consul
  2. The postgresql IP is the same as the consul IP

Please assist; it really looks like my mistake :)

Thanks.

[doc] hint for a less verbose http-check for consul-haproxy users

Thank you for this great tool !

Just to report that the suggested check-http causes Consul's service check 'output' to be synced by the Consul agent every 5 minutes. Indeed, the output contains a timestamp, so it changes on every check.

This can be problematic when using consul-haproxy, as each sync is detected as a change and leads to an haproxy reload. As the number of services increases, you end up with a very high, possibly unhealthy, number of haproxy reloads per minute.

The check-http script already allows for a simple workaround; unsure if it's worth a mention in the doc?

SERVICE_80_CHECK_HTTP="/health/endpoint/path --dump-header /dev/null"

Do not deregister a service with non-zero exit code

Maybe I'm missing something, but registrator deregisters services when the process exits, without any regard for the exit code. I'd prefer that a service only be deregistered if it exited gracefully, so that I can use Consul's health check system to inform me of a failed service. But again, maybe I'm missing something.
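
The -deregister flag in the CLI options above addresses this case; a sketch (the on-success value keeps services registered after non-zero exits):

```shell
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -deregister=on-success \
      consul://localhost:8500
```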

Implement service reconciliation on registrator start

This idea came out of #1, and I think it needs to be implemented (and I may do it myself). In its most simple implementation, Registrator should examine the backend on startup and remove all services it previously managed; if they're still relevant, they'll get added as part of the normal startup process.

In reality, that will be too simplistic, so figuring out what's already registered and removing what should not be registered will be necessary.

consul api usage seems out of date?

The AgentServiceRegistration bits seem to be out of date. Are there any thoughts on updating the usage? Do you consider this an upstream problem, or is there a way to include the Consul bits directly from Hashicorp's public GitHub without breaking things?

I'm in a bit of a pickle and would be happy to help with testing.

Proposal: Host networking; Support adding services with arbitrary ports even if not exposed

Hi,

I have a few services using host networking for various reasons. Registrator won't register those services because it can't know their external ports. Therefore I would suggest the following notation:

SERVICES_EXT=2342,1.2.3.4:1234(/udp)

In the host networking case, this would expose the named ports as if they were published by Docker. All other env variables can still be used (SERVICE_2342_...).
The port type is optional and defaults to tcp. The address is optional and defaults to the host's IP, as resolved or specified.

What do you think? I'm happy to implement this, if this is something you want.
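A hypothetical invocation under this proposal - note that `SERVICES_EXT` does not exist yet; the syntax is exactly as sketched above:

```shell
# Hypothetical: SERVICES_EXT is the proposed variable, not an existing
# Registrator feature. Declares a tcp port 2342 and a udp port 1234 bound
# to an explicit address, as if Docker had published them.
docker run -d --net=host \
    -e "SERVICES_EXT=2342,1.2.3.4:1234/udp" \
    -e "SERVICE_2342_NAME=my-host-net-service" \
    my-image
```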

Consul TTL Health Check checkID?

I'd like to be able to implement Consul TTL health checks when running containers by specifying SERVICE_CHECK_TTL=30s at runtime. I'm wondering if there is a straightforward way for the container to be aware of the checkId for its particular registration, so that, inside the container, I can call /v1/agent/check/pass/<checkId> against the Consul API (upstream; the Consul agent is not running in the container) to affirm the container's health.
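For what it's worth, Consul auto-derives the ID of a check registered alongside a service as `service:<service-id>`, and Registrator's service IDs follow a `<hostname>:<container-name>:<port>` pattern, so the pass URL can often be reconstructed inside the container. A sketch - both conventions here are assumptions worth verifying against `/v1/agent/checks` on your agent:

```python
# Sketch: reconstruct the Consul TTL-check "pass" URL for a service
# registered by Registrator. Assumes Consul's "service:<service-id>"
# check-ID convention and Registrator's "<host>:<container>:<port>"
# service-ID pattern -- verify both against /v1/agent/checks.
def check_pass_url(hostname, container_name, port, agent="http://localhost:8500"):
    service_id = "%s:%s:%s" % (hostname, container_name, port)
    return "%s/v1/agent/check/pass/service:%s" % (agent, service_id)

print(check_pass_url("node1", "redis", 6379))
# http://localhost:8500/v1/agent/check/pass/service:node1:redis:6379
```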

Re-Register services on Consul restart

This is a minor yet annoying issue.
I've had a situation where, due to network overload, the Consul agents disconnected from the Consul server and did not reconnect until each agent was restarted - which is an issue in itself...

The issue with Registrator is that it did not recognize that the Consul container had restarted, so it also had to be restarted for the running containers on that host to be re-registered with the Consul server.

It would be really great if Registrator recognized the restart (easy enough, since it listens on the Docker socket) and took appropriate action.
(Of course, it can still just be restarted...)

registrator gives skydns the wrong IP address?

Trying out registrator with etcd + skydns2 and seeing odd behavior. I'm running on CoreOS (alpha channel), and I have skydns, redis, and registrator running.

core@core-01 ~ $ docker ps --no-trunc
CONTAINER ID                                                       IMAGE                          COMMAND                                                    CREATED             STATUS              PORTS                          NAMES
1aa182faf3ade7be9551755634980553ff136a67979376d0a2acde9b4af22fed   redis:latest                   "/entrypoint.sh redis-server"                              7 minutes ago       Up 5 minutes        0.0.0.0:6379->6379/tcp         redis
4f0c303cb025a023234b250d9e3b7bdb142eb65a69a6219f4cf93807e9703b04   progrium/registrator:latest    "/bin/registrator skydns2://10.1.42.1:4001/skydns.local"   27 minutes ago      Up 27 minutes                                      registrator
33f0c27f0ca7cbfd5072c8535e2528bb6d2f0a603985bcf67ae5d6ff379afb5c   skynetservices/skydns:latest   "skydns --addr=0.0.0.0:53"                                 35 minutes ago      Up 34 minutes       53/tcp, 10.1.42.1:53->53/udp   skydns

registrator sees it and registers the redis server.

core@core-01 ~ $ etcdctl get /skydns/local/skydns/redis/core-01:redis:6379
{"host":"10.1.0.7","port":6379}

But that IP address is the IP address of the registrator container, not the redis container.

core@core-01 ~ $ docker inspect redis | grep IPAddress
        "IPAddress": "10.1.0.18",
core@core-01 ~ $ docker inspect registrator | grep IPAddress
        "IPAddress": "10.1.0.7",

I'm sure I'm doing something dumb as a newb. Any suggestions?

Also, if I have a 3-node Vagrant cluster, how would I get registrator to register the public IP with skydns, so I can access the redis service from another node in the cluster?
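The wrong-IP symptom above typically comes from Registrator picking up its own container's address; running it with host networking avoids that, and the `-ip` option (see the CLI usage) overrides the IP registered for host-mapped ports, which also addresses the multi-node question. A sketch, assuming the CoreOS public IP variable is set:

```shell
# Run with host networking so Registrator resolves host addresses, and
# register the node's public IP for host-mapped ports so other cluster
# members can reach the service.
docker run -d --net=host \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -ip "$COREOS_PUBLIC_IPV4" \
      skydns2://10.1.42.1:4001/skydns.local
```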

Docker random ports not stored by registrator

It seems that Registrator does not handle randomly assigned public-facing ports properly.

This (using CoreOS/Systemd):

docker run --name portal-development --publish 3000

And this:

docker run --rm --name consul --hostname %H               \
           --volume /var/run/consul:/data                 \
           --publish ${COREOS_PRIVATE_IPV4}:8300:8300     \
           --publish ${COREOS_PRIVATE_IPV4}:8301:8301     \
           --publish ${COREOS_PRIVATE_IPV4}:8301:8301/udp \
           --publish ${COREOS_PRIVATE_IPV4}:8302:8302     \
           --publish ${COREOS_PRIVATE_IPV4}:8302:8302/udp \
           --publish ${COREOS_PRIVATE_IPV4}:8400:8400     \
           --publish ${COREOS_PRIVATE_IPV4}:8500:8500     \
           --publish ${COREOS_PUBLIC_IPV4}:53:53/udp      \
           progrium/consul -server                        \
             -bootstrap-expect 3                          \
             -advertise $COREOS_PRIVATE_IPV4

Result in the following Registrator log:

$ docker logs registrator
2014/12/16 21:32:58 registrator: Using consul registry backend at consul://172.17.8.103:8500
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:53:udp
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8300
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8301
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8301:udp
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8302
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8302:udp
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8400
2014/12/16 21:33:00 registrator: added: a036eb00e236 portal-3:consul:8500
2014/12/16 21:33:00 registrator: Listening for Docker events...
2014/12/16 22:19:43 registrator: ignored 13c3fe7e348a port 3000 not published on host
2014/12/16 22:26:39 registrator: added: 9e7bf86936d3 portal-3:portal-development:3000

And Consul returns the following details:

$ curl 172.17.8.103:8500/v1/catalog/services
{
  "consul":[],
  "consul-53":["udp"],
  "consul-8300":[],
  "consul-8301":["udp"],
  "consul-8302":["udp"],
  "consul-8400":[],
  "consul-8500":[],
  "portal-development":[]
}

While the actual portal-development container does have a public-facing port available - just one that was randomly assigned by Docker, because --publish 3000 was used without a fixed host port in front of it:

$ docker ps
CONTAINER ID        IMAGE                         COMMAND                CREATED             STATUS              PORTS                                                                                                                                                                                                                                NAMES
9e7bf86936d3        portal-development:latest     "./script/docker-ent   23 minutes ago      Up 23 minutes       0.0.0.0:49153->3000/tcp                                                                                                                                                                                                              portal-development     
a036eb00e236        progrium/consul:latest        "/bin/start -server    About an hour ago   Up About an hour    172.17.8.103:53->53/udp, 172.17.8.103:8300->8300/tcp, 172.17.8.103:8301->8301/tcp, 172.17.8.103:8301->8301/udp, 172.17.8.103:8302->8302/udp, 172.17.8.103:8302->8302/tcp, 172.17.8.103:8400->8400/tcp, 172.17.8.103:8500->8500/tcp   consul                 
3e6a4d94c8f8        progrium/registrator:latest   "/bin/registrator co   About an hour ago   Up About an hour                                                                                                                                                                                                                                         registrator            

As you can see, portal-development was assigned a random public-facing port: 0.0.0.0:49153->3000/tcp (in fact, this app is reachable in the browser via the CoreOS public IP plus the random port).

I'd like to do two things:

  1. keep this port random, so I can spin up multiple app containers, without worrying about conflicting ports
  2. get the details of this port so I can use consul-template to manage a dynamically generated nginx config file.
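For the second point, once the random host port is registered, a consul-template fragment along these lines can render the nginx upstream; a sketch, assuming the service is registered under the name portal-development:

```
# nginx upstream rendered by consul-template; Registrator publishes the
# randomly assigned host port as the service port.
upstream portal {
  {{ range service "portal-development" }}
  server {{ .Address }}:{{ .Port }};
  {{ end }}
}
```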

health check interval seems ignored

NB: cross-posted in the Consul Google group and progrium/docker-consul#49, for consistency.

Regardless of the interval, be it global or port-specific, the value seems to be ignored. Instead, the effective interval appears to be somewhere between 300 and 310 seconds.

E.g.

docker run -d -P --name nginx \
-e "SERVICE_80_CHECK_HTTP=/webhook/health" \
-e "SERVICE_80_CHECK_INTERVAL=15s" nginx

Where the endpoint /webhook/health prints the output of uptime -- giving me a nice timestamp.

Not recognizing containers

Testing docker-consul and registrator out on a Vagrant box (provisioned with Ubuntu 14.04 and Docker 1.2.0), and it seems like registrator simply keeps missing new containers.

I'm running the commands manually, exactly as in the README for both projects (with the exception of the -d flag for docker-consul). And yet firing up the redis container (or any other container, for that matter) completely escapes registrator's notice (despite ports being opened).

docker-consul command:
docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap

registrator command:
docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h node1 progrium/registrator consul:

redis command:
docker run -d --name redis.0 -p 10000:6379 dockerfile/redis

The only logs I see from registrator, no matter how often I try, are its initialization log indicating the Consul server and the log saying it is ignoring itself due to no ports.

lsof shows docker using the /var/run/docker.sock socket as well.

Using the exact same commands above, it is also failing to register running locally on my laptop (with same OS version and docker version).

Support services without published ports

I was hoping to use Registrator to add/remove SkyDNS entries for containers whose ports I don't publish to the host. I have a number of web apps running in containers. Nginx is also in a container, which publishes its own ports to the host. Nginx in turn proxies requests to the back-end containers. Because Nginx and its containers are on the same network, it isn't necessary for the containers to publish their ports to the host, and I'd rather not add that just because Registrator wants it.

What are folks' thoughts on adding a case for registering services for containers that don't publish any ports? I'm a bit surprised that ports need to be published at all, since that doesn't appear necessary for containers communicating with other containers. At least, if I do:

docker run --name postgres postgres:latest

then:

docker run --name testapp --link postgres:postgres testapp

it isn't necessary for postgres to publish ports to the host in order for testapp to reach it at postgres.

Thanks.
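For reference, the current CLI exposes an `-internal` option that registers container-internal ports and addresses instead of host-published ones, which covers this case; a sketch of the invocation (the etcd address in the SkyDNS URI is illustrative):

```shell
# -internal registers unpublished container ports/IPs instead of
# host-published ones, so linked-only services still get DNS entries.
docker run -d \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
      -internal \
      skydns2://localhost:4001/skydns.local
```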

Notifications

Hello! I have been using Registrator with Consul and love it. I'm starting on a tool to send Docker events to Hipchat, and thought an optional 'Notifications' feature might fit into Registrator, since it's already running and listening to the Docker socket. My thought is that it would be a generic notifications lib with various chat client plugins (Hipchat/Slack/IRC/etc.) - basically the way Registrator already works.

Is this something you'd like to see in Registrator? If so, I'll build it as a component and send over a PR in the near future. If not, I'll just build a separate thing :)

Thanks!

No error on invalid (etcd) url

I may have time to debug this further, but some error handling is missing:

./registrator etcd://google.com/services

Zookeeper backend

Thought I'd file an issue for this so that I, and anyone else interested, have an issue to watch to be notified if anyone attempts it.
