
interlock's Introduction

Interlock

Dynamic, event-driven extension system using Swarm. Extensions include HAProxy and Nginx for dynamic load balancing.

The recommended release is ehazlett/interlock:1.4.0

Quickstart

For a quick start with Compose, see the Swarm Example.

Documentation

To get started with Interlock, view the Documentation.

Building

To build a local copy of Interlock, you must have the following:

  • Go 1.5+
  • The Go vendor experiment enabled

You can use the Makefile to build the binary. For example:

make build

This will build the binary in cmd/interlock/interlock.

There is also a Docker image target in the Makefile; you can build it with make image. Note: you will need at least Docker 17.05 to build the image, as we take advantage of multi-stage builds.
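
As a rough sketch, a local build-and-package run might look like the following, using only the targets described above:

# build the static binary; output lands in cmd/interlock/interlock
make build

# package the binary into a Docker image (requires Docker 17.05+ for multi-stage builds)
make image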

License

Licensed under the Apache License, Version 2.0. See LICENSE for full license text.

interlock's People

Contributors

allencloud, anton44eg, arhea, bfirsh, bsedin, crashsystems, ehazlett, etoews, evlos, krishamoud, mbentley, nathanleclaire, normalfaults, owen, rgbkrk, sebgoa, vortec, wolfgang42, wr0ngway


interlock's Issues

How can I do session stickiness with Interlock?

How does session stickiness work with Interlock? Is it configured by default?
And what about changing the balancing algorithm? I can't find any good documentation on how to use these things.
If you can create some, that will be of great use.

I can prepare documentation myself if you can give me a basic guideline.

Regards,
ManMohan Vyas
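
Not an authoritative answer, but one way to see what balancing configuration Interlock has generated is to inspect the rendered HAProxy config inside the Interlock container (the /proxy.conf path is the same one referenced in other issues on this page):

# <interlock-container> is a placeholder for your Interlock container name or ID
docker exec -it <interlock-container> cat /proxy.conf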

Interlock crash report

FYI... happened during (as far as I'm aware) normal operation. Restarting the container brought it back up.

time="2015-06-30T14:24:29Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1" 
panic: runtime error: slice bounds out of range

goroutine 28 [running]:
runtime.panic(0x7d5840, 0xc1b2af)
        /usr/src/go/src/pkg/runtime/panic.c:279 +0xf5
main.(*EventHandler).Handle(0xc20802a038, 0xc2083c0000, 0xc208004360, 0x0, 0x0, 0x0)
        /go/src/github.com/ehazlett/interlock/interlock/handler.go:25 +0x296
main.*EventHandler.Handle·fm(0xc2083c0000, 0xc208004360, 0x0, 0x0, 0x0)
        /go/src/github.com/ehazlett/interlock/interlock/manager.go:55 +0x58
github.com/samalba/dockerclient.(*DockerClient).getEvents(0xc2080308a0, 0xc208001100, 0xc208004360, 0x0, 0x0, 0x0)
        /go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:262 +0x3e0
created by github.com/samalba/dockerclient.(*DockerClient).StartMonitorEvents
        /go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:243 +0x83

goroutine 16 [chan receive, 126 minutes]:
main.waitForInterrupt()
        /go/src/github.com/ehazlett/interlock/interlock/commands.go:23 +0x155
main.cmdStart(0xc208062180)
        /go/src/github.com/ehazlett/interlock/interlock/commands.go:88 +0xb0d
github.com/codegangsta/cli.Command.Run(0x827db0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/codegangsta/cli/command.go:113 +0xeb2
github.com/codegangsta/cli.(*App).Run(0xc2080272b0, 0xc208004060, 0x6, 0x6, 0x0, 0x0)
        /go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/codegangsta/cli/app.go:156 +0xaea
main.main()
        /go/src/github.com/ehazlett/interlock/interlock/main.go:98 +0x622

goroutine 19 [finalizer wait, 4 minutes]:
runtime.park(0x417a10, 0xc32350, 0xc1d549)
        /usr/src/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0xc32350, 0xc1d549)
        /usr/src/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
        /usr/src/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
        /usr/src/go/src/pkg/runtime/proc.c:1445

goroutine 20 [syscall, 129 minutes]:
os/signal.loop()
        /usr/src/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
        /usr/src/go/src/pkg/os/signal/signal_unix.go:27 +0x32

goroutine 536 [chan receive, 73 minutes]:
github.com/ehazlett/interlock/plugins/haproxy.func·002()
        /go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:233 +0x4d
created by github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.Init
        /go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:241 +0x68


goroutine 1774 [select, 7 minutes]:
net/http.(*persistConn).readLoop(0xc2080ec8f0)
        /usr/src/go/src/pkg/net/http/transport.go:868 +0x829
created by net/http.(*Transport).dialConn
        /usr/src/go/src/pkg/net/http/transport.go:600 +0x93f

Backend tries connecting to 0.0.0.0:<port>

I'm trying Interlock on a brand-new CoreOS installation.

docker run -p 80:8080 ehazlett/interlock -swarm tcp://10.1.42.1:2375
docker run -ti -P -d --hostname www.example.com -e INTERLOCK_DATA='{"alias_domains": ["foo.com"], "port": 8080, "warm": true}' ehazlett/go-demo

Now I visit /haproxy?stats, and it reports the backends for www_example_com_0 and foo_com_0 as DOWN. cat /proxy.conf in the interlock container reveals the following line:

server foo_com_0 0.0.0.0:49155 check inter 5000

For some reason, Interlock is picking up 0.0.0.0 instead of the correct IP address, and I don't know enough about how Docker works to figure out why.
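
As a diagnostic sketch (not a fix), you can check which port bindings Docker actually reports for the demo container; if the host IP in the binding is 0.0.0.0, that is what Interlock is seeing:

# show the published port mappings (container name is a placeholder)
docker port <go-demo-container>

# or dump the raw bindings from the inspect output
docker inspect --format '{{json .NetworkSettings.Ports}}' <go-demo-container>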

Basic tutorial on Shipyard integration

Hi. Could you make a simple tutorial for using the Interlock extension image with Shipyard?
What I personally would find most helpful is a tutorial on how to set up a simple website which load balances over two identical docker containers (maybe some variance for the purpose of demonstration).
In other words, could you recreate this video (https://www.youtube.com/watch?v=pLX3QF17Sj0) but with the new Shipyard and the Interlock extension image? Either a video or a write-up would be fantastic. Thanks.

docker-compose support

I don't think that it's possible to add the plugin option to a docker-compose.yml file. Maybe you can use environment variables instead of custom arguments?

Interlock dies if swarm node disappears

I believe we had discussed this previously but I figured it would be worth tracking that interlock will die if a swarm node goes away, likely due to the error that occurs in swarm:

time="2015-02-16T02:12:43Z" level=error msg="Flagging node as dead. Updated state failed: 500 Internal Server Error: engine is shutdown\n" id="ZJKA:BBJB:KVQU:T2ID:KPMV:4V64:PYDN:UN4S:TH3Q:4PC6:W5HG:ONQK" name=docker2
2015/02/16 02:12:58 Event decoding failed: unexpected EOF

I'd expect that the problem is occurring during the unexpected EOF error, so I'd be led to believe that this is actually an issue with Swarm and how it handles nodes going away.

Interlock handles the Swarm manager going away just fine, but it chokes on this. I've run into it while upgrading the Docker engine on my backend hosts: when the engine is restarted, Interlock dies.
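
Until this is fixed, a possible workaround (not a fix in Interlock itself) is to let Docker restart Interlock automatically when it dies, using the same --restart flag that appears in other run commands on this page; the Swarm URL and port below are placeholders:

docker run -d --restart=always -p 80:8080 ehazlett/interlock \
    --swarm-url tcp://<swarm-manager>:3376 --plugin haproxy start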

No connection to Swarm

Hi there,
I'm using Shipyard v3 and want to use interlock/haproxy. I managed to get it running, but only directly via Docker:
docker run -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock -e "HAPROXY_PROXY_BACKEND_OVERRIDE_ADDRESS=MY_PUBLIC_IP" -d ehazlett/interlock --plugin haproxy start

When I'm using the "stock" Shipyard setup, how can I connect to Swarm?
docker run -p 80:80 -d ehazlett/interlock --swarm-url ?URL? --plugin haproxy start
I tried several combinations like localhost:2375 and 127.0.0.1:2375, with both http and tcp, but the log keeps saying things like "connection refused".

Thanks in advance
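
Not a definitive answer, but note that localhost/127.0.0.1 inside the Interlock container refers to the container itself, not the host. Two approaches that appear in other reports on this page are pointing --swarm-url at a host IP the container can actually reach, or mounting the Docker socket as in your working command; the IP below is a placeholder:

# point at the Swarm/Docker endpoint via a reachable IP
docker run -p 80:80 -d ehazlett/interlock --swarm-url tcp://<host-ip>:2375 --plugin haproxy start

# or mount the Docker socket, as in the command that already works for you
docker run -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock -d ehazlett/interlock --plugin haproxy start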

shipyard extension install fails with "Tag v1 not found in repository ehazlett/interlock"

Hello,

It seems that a recent change in the Docker registry publication broke the Shipyard extension install.

Here is the output of the install:

shipyard cli> shipyard add-extension --url https://raw.githubusercontent.com/shipyard/shipyard-extensions/master/routing/interlock.conf
configuring interlock (https://github.com/ehazlett/interlock for more info)
enter value for container argument -shipyard-url: http://example.deom
enter value for container argument -shipyard-service-key: KEY_REPLACED
FATA[0041] error adding extension: Tag v1 not found in repository ehazlett/interlock

HAProxy spawning multiple processes when too many events occur in a row

When performing a docker rm -f <container> and then immediately doing a docker run to replace it (as in an app deployment), Interlock does not seem to be able to keep up with the events. As a result, multiple HAProxy instances end up running when the remove and create happen 10 times in a row. In this case, I end up with 4 HAProxy pids running in the container:

UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                15334               10629               1                   16:56               ?                   00:00:02            /usr/local/bin/interlock --swarm-url=tcp://54.152.22.32:3376 --swarm-tls-ca-cert=/etc/docker/ca.pem --swarm-tls-cert=/etc/docker/ser
ver.pem --swarm-tls-key=/etc/docker/server-key.pem --debug --plugin=haproxy start
root                15785               15334               0                   16:57               ?                   00:00:00            /usr/sbin/haproxy -D -f /proxy.conf -p /proxy.pid -sf 21
root                15793               15334               0                   16:57               ?                   00:00:00            /usr/sbin/haproxy -D -f /proxy.conf -p /proxy.pid -sf 25
root                15794               15334               0                   16:57               ?                   00:00:00            /usr/sbin/haproxy -D -f /proxy.conf -p /proxy.pid -sf 25
root                15797               15334               0                   16:57               ?                   00:00:00            /usr/sbin/haproxy -D -f /proxy.conf -p /proxy.pid -sf 36

Adding a delay, it ends up working as expected:

UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                16476               10629               0                   17:02               ?                   00:00:00            /usr/local/bin/interlock --swarm-url=tcp://54.152.22.32:3376 --swarm-tls-ca-cert=/etc/docker/ca.pem --swarm-tls-cert=/etc/docker/ser
ver.pem --swarm-tls-key=/etc/docker/server-key.pem --debug --plugin=haproxy start
root                17442               16476               0                   17:06               ?                   00:00:00            /usr/sbin/haproxy -D -f /proxy.conf -p /proxy.pid -sf 58

This is when using the latest interlock:
time="2015-04-11T17:09:27Z" level="info" msg="interlock running version=0.2.1 (cd41ba3)"
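
For reference, the "adding a delay" workaround above can be as simple as sleeping briefly between the remove and create steps of the deployment loop; this is only a sketch (names and image are placeholders), not a change in Interlock itself:

# redeploy each instance with a short pause so Interlock can keep up with the events
for name in app-1 app-2 app-3; do
    docker rm -f "$name"
    docker run -d --name "$name" -P <app-image>
    sleep 5
done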

No ports exposed on a Swarm cluster

Hi, I am getting the warning below:

time="2015-06-25T20:53:53Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-06-25T20:53:53Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"
time="2015-06-25T20:53:53Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] 8b53f9427351: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"
time="2015-06-25T20:53:53Z" level="warning" msg="[haproxy] c433e7535019: no ports exposed"

This is all run in a Swarm cluster. I launched Interlock with:

docker run -d -p 8080:8080 ehazlett/interlock --swarm-url tcp://10.1.8.43:2375 --plugin haproxy start

This is the output from docker inspect on one of those containers:

[{
    "AppArmorProfile": "",
    "Args": [
        "-ip",
        "10.1.8.52",
        "consul://10.1.8.42:8500"
    ],
    "Config": {
        "AttachStderr": true,
        "AttachStdin": false,
        "AttachStdout": true,
        "Cmd": [
            "-ip",
            "10.1.8.52",
            "consul://10.1.8.42:8500"
        ],
        "CpuShares": 0,
        "Cpuset": "",
        "Domainname": "crakmedia.lan",
        "Entrypoint": [
            "/bin/registrator"
        ],
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "ExposedPorts": null,
        "Hostname": "aws038",
        "Image": "jmaitrehenry/registrator",
        "Labels": {},
        "MacAddress": "",
        "Memory": 0,
        "MemorySwap": 0,
        "NetworkDisabled": false,
        "OnBuild": null,
        "OpenStdin": false,
        "PortSpecs": null,
        "StdinOnce": false,
        "Tty": false,
        "User": "",
        "Volumes": null,
        "WorkingDir": ""
    },
    "Created": "2015-06-25T16:31:18.262059841Z",
    "Driver": "devicemapper",
    "ExecDriver": "native-0.2",
    "ExecIDs": null,
    "HostConfig": {
        "Binds": [
            "/var/run/docker.sock:/tmp/docker.sock"
        ],
        "CapAdd": null,
        "CapDrop": null,
        "CgroupParent": "",
        "ContainerIDFile": "",
        "CpuShares": 0,
        "CpusetCpus": "",
        "Devices": [],
        "Dns": null,
        "DnsSearch": null,
        "ExtraHosts": null,
        "IpcMode": "",
        "Links": null,
        "LogConfig": {
            "Config": null,
            "Type": "json-file"
        },
        "LxcConf": [],
        "Memory": 0,
        "MemorySwap": 0,
        "NetworkMode": "bridge",
        "PidMode": "",
        "PortBindings": {},
        "Privileged": false,
        "PublishAllPorts": false,
        "ReadonlyRootfs": false,
        "RestartPolicy": {
            "MaximumRetryCount": 0,
            "Name": "no"
        },
        "SecurityOpt": null,
        "Ulimits": null,
        "VolumesFrom": null
    },
    "HostnamePath": "/var/lib/docker/containers/c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b/hostname",
    "HostsPath": "/var/lib/docker/containers/c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b/hosts",
    "Id": "c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b",
    "Image": "3670a23094d05bbf18d54c285e0a1f96af9076ff18b1ac073a6e0e00bc50c5c9",
    "LogPath": "/var/lib/docker/containers/c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b/c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b-json.log",
    "MountLabel": "",
    "Node": {
        "ID": "SUG7:SFJX:FAGB:32TC:YEJY:Q5FS:2AIU:TKPG:4JTC:LN7J:2ZPR:Q5DE",
        "IP": "10.1.8.52",
        "Addr": "10.1.8.52:2375",
        "Name": "aws038.crakmedia.lan",
        "Cpus": 4,
        "Memory": 8070311936,
        "Labels": {
            "executiondriver": "native-0.2",
            "kernelversion": "3.10.0-229.4.2.el7.x86_64",
            "operatingsystem": "CentOS Linux 7 (Core)",
            "storagedriver": "devicemapper"
        }
    },
    "Name": "/aws038.crakmedia.lan",
    "NetworkSettings": {
        "Bridge": "docker0",
        "Gateway": "172.17.42.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.8",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "LinkLocalIPv6Address": "fe80::42:acff:fe11:8",
        "LinkLocalIPv6PrefixLen": 64,
        "MacAddress": "02:42:ac:11:00:08",
        "PortMapping": null,
        "Ports": {}
    },
    "Path": "/bin/registrator",
    "ProcessLabel": "",
    "ResolvConfPath": "/var/lib/docker/containers/c433e7535019d8019bb39c15af01625f2e18d59f92ac57f9966b8c78771fc89b/resolv.conf",
    "RestartCount": 0,
    "State": {
        "Dead": false,
        "Error": "",
        "ExitCode": 0,
        "FinishedAt": "0001-01-01T00:00:00Z",
        "OOMKilled": false,
        "Paused": false,
        "Pid": 11847,
        "Restarting": false,
        "Running": true,
        "StartedAt": "2015-06-25T16:31:19.459395099Z"
    },
    "Volumes": {
        "/tmp/docker.sock": "/run/docker.sock"
    },
    "VolumesRW": {
        "/tmp/docker.sock": true
    }
}
]

The swarm cluster is using Consul and Registrator. Containers are being launched with:

 docker-compose -f deploy-blue.yml scale skeletonserviceblue=4

The service definition is:

skeletonserviceblue:
  extends:
    file: deploy-common.yml
    service: skeletonservice
  image:
    crakmedia/skeleton-service:blue

with contents from deploy-common.yml, like:

skeletonservice:
  ports:
   - "80"
   - "443"
  volumes:
    - /var/log/skeleton-service:/var/log
    - /var/log/skeleton-service/nginx:/var/log/nginx
    - /var/log/skeleton-service/php-fpm:/var/log/php-fpm
    - /var/lib/php/session:/tmp/php-session

Any ideas what might be causing this?
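
One observation (hedged): the container inspected above is the Registrator container, whose ExposedPorts is null and whose Ports map is empty, so the "no ports exposed" warning is expected for it. The containers you actually want proxied need published ports, e.g. via -P (as in the go-demo example earlier on this page) or explicit -p mappings; the image and hostname below are placeholders:

# publish all exposed ports so Interlock/HAProxy can find an upstream
docker run -d -P --hostname www.example.com <your-service-image>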

Cannot start the latest interlock with --swarm-url

I'm consistently getting an error about the --swarm-url parameter to interlock.
I'm using the latest docker, docker-machine, docker-swarm and interlock (interlock:1.0.0 or interlock:ng)

docker run ehazlett/interlock:ng --swarm-url $DOCKER_HOST --swarm-tls-ca-cert=/etc/docker/ca.pem --swarm-tls-cert=/etc/docker/server.pem --swarm-tls-key=/etc/docker/server-key.pem --plugin nginx start

results in an "incorrect usage" message followed by this:

time="2016-02-28T11:02:48Z" level=fatal msg="flag provided but not defined: -swarm-url" 

Note the single dash before swarm-url.
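
Hedged suggestion: the :ng image appears to use a different flag set than the older plugin-based releases, so listing the flags it actually supports is a reasonable first step:

# print the supported flags and commands for the image you are running
docker run --rm ehazlett/interlock:ng --help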

panic: runtime error: index out of range

This was working fine. Now when I try to run it, I get the following:

panic: runtime error: index out of range

goroutine 16 [running]:
runtime.panic(0x6f4ee0, 0x8ee83c)
/usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
main.(_Manager).GenerateProxyConfig(0xc20800ebd0, 0xc208040500, 0x0, 0x0, 0x0)
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/manager.go:209 +0x25a9
main.(_Manager).UpdateConfig(0xc20800ebd0, 0x0, 0x0, 0x0)
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/manager.go:272 +0xb3
main.(*Manager).Run(0xc20800ebd0, 0x0, 0x0)
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/manager.go:379 +0xb3
main.main()
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/main.go:134 +0x92b

goroutine 19 [finalizer wait]:
runtime.park(0x419190, 0x8f2890, 0x8f0ae9)
/usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x8f2890, 0x8f0ae9)
/usr/local/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
/usr/local/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 20 [syscall]:
os/signal.loop()
/usr/local/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
/usr/local/go/src/pkg/os/signal/signal_unix.go:27 +0x32

goroutine 29 [runnable]:
net/http.(_persistConn).readLoop(0xc2080442c0)
/usr/local/go/src/pkg/net/http/transport.go:868 +0x829
created by net/http.(_Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:600 +0x93f

goroutine 22 [chan receive]:
main.(_Manager).reconnectOnFail(0xc20800ebd0)
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/manager.go:348 +0x44
created by main.(_Manager).connect
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/manager.go:325 +0x1e4

goroutine 24 [IO wait]:
net.runtime_pollWait(0x7fced77a8910, 0x72, 0x0)
/usr/local/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(_pollDesc).Wait(0xc208024680, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(_pollDesc).WaitRead(0xc208024680, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(_netFD).Read(0xc208024620, 0xc208077000, 0x1000, 0x1000, 0x0, 0x7fced77a7418, 0xb)
/usr/local/go/src/pkg/net/fd_unix.go:242 +0x34c
net.(_conn).Read(0xc208038078, 0xc208077000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
net/http.noteEOFReader.Read(0x7fced77a8a70, 0xc208038078, 0xc2080441b8, 0xc208077000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:1203 +0x72
net/http.(_noteEOFReader).Read(0xc2080406a0, 0xc208077000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
:124 +0xca
bufio.(_Reader).fill(0xc2080046c0)
/usr/local/go/src/pkg/bufio/bufio.go:97 +0x1b3
bufio.(_Reader).ReadSlice(0xc2080046c0, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/bufio/bufio.go:298 +0x22c
net/http.readLine(0xc2080046c0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/chunked.go:111 +0x59
net/http.(_chunkedReader).beginChunk(0xc20800f1d0)
/usr/local/go/src/pkg/net/http/chunked.go:48 +0x45
net/http.(_chunkedReader).Read(0xc20800f1d0, 0xc208092000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/chunked.go:78 +0xc0
net/http.(_body).readLocked(0xc208042900, 0xc208092000, 0x200, 0x200, 0x54a272, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transfer.go:577 +0x81
net/http.(_body).Read(0xc208042900, 0xc208092000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transfer.go:572 +0x11a
net/http.(_bodyEOFSignal).Read(0xc208042940, 0xc208092000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:1126 +0x254
encoding/json.(_Decoder).readValue(0xc208026340, 0x14, 0x0, 0x0)
/usr/local/go/src/pkg/encoding/json/stream.go:124 +0x557
encoding/json.(_Decoder).Decode(0xc208026340, 0x63ca80, 0xc2080380a0, 0x0, 0x0)
/usr/local/go/src/pkg/encoding/json/stream.go:44 +0x7a
github.com/samalba/dockerclient.(_DockerClient).getEvents(0xc208040540, 0xc208034a10, 0xc2080041e0, 0x0, 0x0, 0x0)
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:259 +0x44d
created by github.com/samalba/dockerclient.(_DockerClient).StartMonitorEvents
/var/lib/jenkins/workspace/interlock/src/github.com/ehazlett/interlock/controller/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:243 +0x83

goroutine 27 [select]:
net/http.(_persistConn).readLoop(0xc208044160)
/usr/local/go/src/pkg/net/http/transport.go:868 +0x829
created by net/http.(_Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:600 +0x93f

goroutine 28 [select]:
net/http.(_persistConn).writeLoop(0xc208044160)
/usr/local/go/src/pkg/net/http/transport.go:885 +0x38f
created by net/http.(_Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:601 +0x957

goroutine 30 [select]:
net/http.(_persistConn).writeLoop(0xc2080442c0)
/usr/local/go/src/pkg/net/http/transport.go:885 +0x38f
created by net/http.(_Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:601 +0x957

How to compile?

Not really an issue, but does anyone have directions on how to build/compile this repo? It could just be my lack of familiarity with Go, but a wiki page would be appreciated. I'm uncertain what my GOPATH should be set to, how to fetch dependencies, etc. Running make or make add-deps with GOPATH set to the repo's directory fails to find packages:

mconway@wrongway ~/workspace/interlock(master)$ echo $GOPATH
/Users/mconway/workspace/interlock
mconway@wrongway ~/workspace/interlock(master)$ make add-deps
godep: cannot find package "_/Users/mconway/workspace/interlock" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/_/Users/mconway/workspace/interlock (from $GOROOT)
    /Users/mconway/workspace/interlock/src/_/Users/mconway/workspace/interlock (from $GOPATH)
godep: cannot find package "github.com/samalba/dockerclient" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/samalba/dockerclient (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/samalba/dockerclient (from $GOPATH)
godep: error loading dependencies
make: *** [add-deps] Error 1
mconway@wrongway ~/workspace/interlock(master)$ make
commands.go:15:2: cannot find package "github.com/Sirupsen/logrus" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/Sirupsen/logrus (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/Sirupsen/logrus (from $GOPATH)
commands.go:16:2: cannot find package "github.com/codegangsta/cli" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/codegangsta/cli (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/codegangsta/cli (from $GOPATH)
commands.go:17:2: cannot find package "github.com/ehazlett/interlock" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock (from $GOPATH)
commands.go:18:2: cannot find package "github.com/ehazlett/interlock/plugins" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/plugins (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/plugins (from $GOPATH)
plugins.go:5:2: cannot find package "github.com/ehazlett/interlock/plugins/example" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/plugins/example (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/plugins/example (from $GOPATH)
plugins.go:6:2: cannot find package "github.com/ehazlett/interlock/plugins/haproxy" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/plugins/haproxy (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/plugins/haproxy (from $GOPATH)
plugins.go:7:2: cannot find package "github.com/ehazlett/interlock/plugins/nginx" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/plugins/nginx (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/plugins/nginx (from $GOPATH)
plugins.go:8:2: cannot find package "github.com/ehazlett/interlock/plugins/stats" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/plugins/stats (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/plugins/stats (from $GOPATH)
commands.go:19:2: cannot find package "github.com/ehazlett/interlock/version" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/ehazlett/interlock/version (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/ehazlett/interlock/version (from $GOPATH)
handler.go:8:2: cannot find package "github.com/samalba/dockerclient" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/samalba/dockerclient (from $GOROOT)
    /Users/mconway/workspace/interlock/src/github.com/samalba/dockerclient (from $GOPATH)
make: *** [build] Error 1
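
Not official documentation, but a sketch of the conventional Go 1.5-era workspace layout that makes these import paths resolve: the repository needs to live under $GOPATH/src/github.com/ehazlett/interlock rather than being used as the GOPATH itself.

# assumes GOPATH is a dedicated workspace directory, e.g. ~/go
export GOPATH="$HOME/go"
mkdir -p "$GOPATH/src/github.com/ehazlett"
git clone https://github.com/ehazlett/interlock.git "$GOPATH/src/github.com/ehazlett/interlock"
cd "$GOPATH/src/github.com/ehazlett/interlock"
make build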

Unable to reconnect to engine issue

11:26 InAnimaT : ehazlett: that config seems to work:) however, interlock wont even start if one of the nodes isn't contactable: dial tcp: lookup stan.arbor.net: no such host
11:27 InAnimaT : which is kinda meh, since the proxy should still be alive and contact the other nodes if available
11:29 ehazlett : hmm ok -- well it expects that you aren't adding downed hosts but yeah i think i can update to check for that and skip
11:30 InAnimaT : ok, i can file an issue if you'd like to track it
11:30 InAnimaT : my dns hasn't updated yet for one of the hosts
11:30 InAnimaT : so ideally, if it could keep trying every x minutes
....
20:21 InAnimaT : sigh
20:21 InAnimaT : ok, so i started the interlock container hours ago, when all hosts had come back up
20:22 InAnimaT : since then, the container has stayed running
20:22 InAnimaT : the boxes have gone down and come back a couple times as i was working on them
20:22 InAnimaT : interlock now doesn't seem to be seeing anything on the event stream
20:22 InAnimaT : since its not doing anything
20:22 InAnimaT : tried -p 9001:9001 as well as the variable and nothing
20:24 InAnimaT : just sitting at http://tty0.in/view/2f77d53e

Ideally, Interlock should continuously try to reconnect to a specified engine if it goes down. Also, if one of the engines is down, this should never stop the container from starting, although it should probably warn.

Remote ip not resolved correctly after haproxy

When using Interlock and HAProxy, the client's remote IP is not resolved correctly. For example, in PHP code:
["REMOTE_ADDR"] returns the proxy IP (for example 172.17.42.1)
["HTTP_X_FORWARDED_FOR"] returns the Docker host's internal IP, in my case 192.168.1.200
Even the HTTP header does not give a correct value; ["HTTP_X_FORWARDED_FOR"] again shows the Docker host's internal IP, 192.168.1.200.

The expected result is that the client's real IP is resolved.

error

Getting this error while running Interlock with the haproxy plugin:
time="2015-06-11T03:09:14Z" level="info" msg="using tls for communication with swarm"
time="2015-06-11T03:09:17Z" level="info" msg="interlock running version=0.2.6 (043cb24)"
time="2015-06-11T03:09:17Z" level="debug" msg="connecting to swarm on tcp://10.6.33.65:3376"
time="2015-06-11T03:09:17Z" level="info" msg="loading plugin name=haproxy version=0.1"
time="2015-06-11T03:09:17Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-06-11T03:09:17Z" level="debug" msg="[haproxy] update request received"
time="2015-06-11T03:09:17Z" level="debug" msg="[haproxy] generating proxy config"
time="2015-06-11T03:09:18Z" level="info" msg="[haproxy] aaron.java-test.dev: upstream=10.6.33.65:32768 container=java-getting-started"
time="2015-06-11T03:09:18Z" level="debug" msg="[haproxy] adding alias java-test.dev for 47dc7d4cdb20"
time="2015-06-11T03:09:18Z" level="debug" msg="[haproxy] adding host name=aaron_java-test_dev domain=aaron.java-test.dev"
time="2015-06-11T03:09:18Z" level="debug" msg="[haproxy] adding host name=java-test_dev domain=java-test.dev"
time="2015-06-11T03:09:18Z" level="debug" msg="[haproxy] jobs: 0"
time="2015-06-11T03:09:18Z" level="debug" msg="[haproxy] reload triggered"
time="2015-06-11T03:09:18Z" level="error" msg="[haproxy] error reloading: exit status 1"

docker run --name interlock --restart="always" -p 80:8080 -p 443:8443 -d -v /certs:/certs/:ro -e HAPROXY_SSL_CERT=/certs/server.pem ehazlett/interlock --swarm-url tcp://:3376 --swarm-tls-ca-cert=/certs/ca.pem --swarm-tls-cert=/certs/server.pem --swarm-tls-key=/certs/server.key --debug --plugin haproxy start

any ideas?
@ehazlett
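
A hedged diagnostic: since the reload exits with status 1, validating the generated config with HAProxy's check mode from inside the container may surface the underlying error; the container name is a placeholder:

# validate the generated proxy config without reloading
docker exec -it <interlock-container> haproxy -c -f /proxy.conf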

HAProxy not starting until an event occurs

When using ehazlett/interlock:0.2.1, I am seeing behavior where, upon starting Interlock, it doesn't start haproxy until an event occurs. I am using Swarm version 0.1.0 (2acbea1).

time="2015-04-11T15:57:31Z" level="info" msg="[stats] sending stats every 10 seconds"
time="2015-04-11T15:57:31Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-04-11T15:57:33Z" level="info" msg="using tls for communication with swarm"
time="2015-04-11T15:57:33Z" level="info" msg="interlock running version=0.2.1 (cd41ba3)"
time="2015-04-11T15:57:33Z" level="info" msg="loaded plugin name=haproxy version=0.1"
time="2015-04-11T15:57:33Z" level="info" msg="[stats] sending stats every 10 seconds"
time="2015-04-11T15:57:33Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-04-11T16:00:29Z" level="info" msg="[stats] sending stats every 10 seconds"
time="2015-04-11T16:00:29Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-04-11T16:00:29Z" level="info" msg="[stats] sending stats every 10 seconds"
time="2015-04-11T16:00:29Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-04-11T16:00:29Z" level="info" msg="[stats] sending stats every 10 seconds"
time="2015-04-11T16:00:29Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.84.118.113:49483"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.84.118.113:49482"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.84.118.113:49481"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49510"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.84.118.113:49480"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49509"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49508"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49507"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49506"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] docker.demo.dckr.org: upstream=54.152.55.59:49512"
time="2015-04-11T16:00:29Z" level="info" msg="[haproxy] proxy reloaded and ready"

Here is my run command if it might make any difference:

docker run -d -p 8181:8080 -v /etc/docker:/etc/docker:ro --name interlock-0.2.1 -e HAPROXY_STATS_USER=user -e HAPROXY_STATS_PASSWORD=password ehazlett/interlock:0.2.1 --swarm-url=tcp://54.152.22.32:3376 --swarm-tls-ca-cert=/etc/docker/ca.pem --swarm-tls-cert=/etc/docker/server.pem --swarm-tls-key=/etc/docker/server-key.pem --plugin=haproxy start

Might I be doing something odd here to prevent it from starting initially?
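
As a rough workaround (not a fix), triggering any container event after Interlock starts should cause it to generate and load the proxy config, since the logs above show the reload only happens when an event is dispatched; the container name is a placeholder:

# restart (or start/stop) any container in the cluster to emit an event Interlock will react to
docker restart <any-app-container>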

Interlock + shipyard example is not working

Environment: Mac OS with single boot2docker vm running (docker 1.7.1).
Scenario:

  1. Use shipyard-deploy to start shipyard: http://shipyard-project.com/docs/quickstart/
  2. Create local engine for boot2docker vm in UI as described: http://shipyard-project.com/docs/quickstart/
  3. Login to shipyard-cli and run interlock as described in https://asciinema.org/a/13318
  4. Run "go-demo" as described https://asciinema.org/a/13318

Result: When accessing http://interlock.host:80/ I get response "503 Service Unavailable. No server is available to handle this request.".

Also tried:

  1. Destroying, re-running, and restarting interlock and go-demo.
  2. Noticed that Shipyard in http://shipyard-project.com/docs/quickstart/ exposes port 8080, which I considered conflicting with go-demo (it exposes 8080 too and runs on the same boot2docker host), so I reinstalled Shipyard manually to use another port. However, same result: a 503 error.

/haproxy?stats does show a table, but without any backends.

When checking haproxy config I get:
bash$ docker exec -it sleepy_ritchie cat /proxy.conf

managed by interlock

global
maxconn 0
pidfile proxy.pid

defaults
mode http
retries 3
option redispatch
option httplog
option dontlognull
option http-server-close
option forwardfor
timeout connect 0
timeout client 0
timeout server 0

frontend http-default
bind *:8080
monitor-uri /haproxy?monitor
stats enable
stats uri /haproxy?stats
stats refresh 5s

interlock-extension for shipyard does not work

hi,

This image does not work when installed as an extension via Shipyard.
The output of docker logs <id> shows:

flag provided but not defined: -shipyard-url

According to your Swarm merge 11 days ago, it appears you removed the Shipyard config in config.go; there are even more files that remove the Shipyard code.

Interlock crashes when member container stops

I have 10 web front ends. I docker stop one of the members, and Interlock crashes:

time="2015-09-16T19:08:59Z" level="info" msg="[haproxy] web.docker.demo: upstream=10.0.10.15:809 container=www9" 
time="2015-09-16T19:08:59Z" level="info" msg="[haproxy] web.docker.demo: upstream=10.0.10.15:808 container=www8" 
fatal error: unexpected signal during runtime execution
[signal 0xb code=0x1 addr=0x63 pc=0x7f7eb69494fc]

runtime stack:
runtime.gothrow(0x8cdad0, 0x2a)
/usr/src/go/src/runtime/panic.go:503 +0x8e
runtime.sigpanic()
/usr/src/go/src/runtime/sigpanic_unix.go:14 +0x5e

goroutine 28 [syscall, locked to thread]:
runtime.cgocall_errno(0x401930, 0xc20801c4d0, 0x0)
/usr/src/go/src/runtime/cgocall.go:130 +0xf5 fp=0xc20801c490 sp=0xc20801c468
net._C2func_getaddrinfo(0x7f7eb0000a00, 0x0, 0xc20801c5c8, 0xc20801c518, 0xc200000000, 0x0, 0x0)
/usr/src/go/src/net/:26 +0x55 fp=0xc20801c4d0 sp=0xc20801c490
net.cgoLookupIPCNAME(0xc20800b8c7, 0x2f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/cgo_unix.go:96 +0x1c5 fp=0xc20801c600 sp=0xc20801c4d0
net.cgoLookupIP(0xc20800b8c7, 0x2f, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc20800ad27)
/usr/src/go/src/net/cgo_unix.go:148 +0x65 fp=0xc20801c658 sp=0xc20801c600
net.lookupIP(0xc20800b8c7, 0x2f, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/lookup_unix.go:64 +0x5f fp=0xc20801c6a0 sp=0xc20801c658
net.func·026(0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/lookup.go:79 +0x55 fp=0xc20801c708 sp=0xc20801c6a0
net.(*singleflight).doCall(0xc54df0, 0xc20800bc80, 0xc20800b8c7, 0x2f, 0xc20802b0a0)
/usr/src/go/src/net/singleflight.go:91 +0x2f fp=0xc20801c7b8 sp=0xc20801c708
runtime.goexit()
/usr/src/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc20801c7c0 sp=0xc20801c7b8
created by net.(*singleflight).DoChan
/usr/src/go/src/net/singleflight.go:84 +0x42b

goroutine 1 [chan receive, 13 minutes]:
main.waitForInterrupt()
/go/src/github.com/ehazlett/interlock/interlock/commands.go:23 +0x1dc
main.cmdStart(0xc20807e0c0)
/go/src/github.com/ehazlett/interlock/interlock/commands.go:88 +0xd44
github.com/codegangsta/cli.Command.Run(0x85d0d0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/codegangsta/cli/command.go:113 +0x1038
github.com/codegangsta/cli.(*App).Run(0xc208034680, 0xc20800a000, 0x6, 0x6, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/codegangsta/cli/app.go:156 +0xcf7
main.main()
/go/src/github.com/ehazlett/interlock/interlock/main.go:98 +0x90f

goroutine 5 [syscall, 15 minutes]:
os/signal.loop()
/usr/src/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
/usr/src/go/src/os/signal/signal_unix.go:27 +0x35

goroutine 7 [chan receive, 15 minutes]:
main.(*Manager).reconnectOnFail(0xc20800cde0)
/go/src/github.com/ehazlett/interlock/interlock/manager.go:72 +0x47
created by main.(*Manager).connect
/go/src/github.com/ehazlett/interlock/interlock/manager.go:49 +0x272

goroutine 8 [chan receive, 15 minutes]:
main.func·002()
/go/src/github.com/ehazlett/interlock/interlock/manager.go:101 +0x52
created by main.(*Manager).Run
/go/src/github.com/ehazlett/interlock/interlock/manager.go:104 +0x95

goroutine 20 [select]:
net/http.(*persistConn).roundTrip(0xc208074840, 0xc20802b310, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1082 +0x7ad
net/http.(*Transport).RoundTrip(0xc208064240, 0xc2080356c0, 0xc20800b380, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:235 +0x558
net/http.send(0xc2080356c0, 0x7f7eb84d9dc0, 0xc208064240, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:219 +0x4fc
net/http.(*Client).send(0xc20800cf30, 0xc2080356c0, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:142 +0x15b
net/http.(*Client).doFollowingRedirects(0xc20800cf30, 0xc2080356c0, 0x906c88, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:367 +0xb25
net/http.(*Client).Do(0xc20800cf30, 0xc2080356c0, 0xc, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:174 +0xa4
github.com/samalba/dockerclient.(*DockerClient).doRequest(0xc20801e7e0, 0x8508b0, 0x3, 0xc2080c1710, 0x23, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:79 +0x3a2
github.com/samalba/dockerclient.(*DockerClient).InspectContainer(0xc20801e7e0, 0xc2080d0b80, 0xc, 0xc20807d960, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:143 +0x215
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.GenerateProxyConfig(0xc20800cdb0, 0xc2080740b0, 0xc20801e7e0, 0x0, 0x53026e, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:310 +0x34e
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.updateConfig(0xc20800cdb0, 0xc2080740b0, 0xc20801e7e0, 0x0, 0xc2080d0000, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:487 +0x48
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.handleUpdate(0xc20800cdb0, 0xc2080740b0, 0xc20801e7e0, 0x0, 0xc2080d0000, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:220 +0xf2
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.HandleEvent(0xc20800cdb0, 0xc2080740b0, 0xc20801e7e0, 0x0, 0xc2080d0000, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:238 +0x128
github.com/ehazlett/interlock/plugins/haproxy.(*HaproxyPlugin).HandleEvent(0xc2080760a0, 0xc2080d0000, 0x0, 0x0)
<autogenerated>:5 +0xbc
github.com/ehazlett/interlock/plugins.DispatchEvent(0xc20800cdb0, 0xc20801e7e0, 0xc2080d0000, 0xc20800a2a0)
/go/src/github.com/ehazlett/interlock/plugins/plugins.go:48 +0x532
created by main.(*EventHandler).Handle
/go/src/github.com/ehazlett/interlock/interlock/handler.go:27 +0x313

goroutine 10 [IO wait]:
net.(*pollDesc).Wait(0xc2080108b0, 0x72, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080108b0, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc208010850, 0xc20800f000, 0x1000, 0x1000, 0x0, 0x7f7eb84d9a88, 0xc2080c46d0)
/usr/src/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208038080, 0xc20800f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/src/go/src/net/net.go:121 +0xdc
net/http.noteEOFReader.Read(0x7f7eb84db3a0, 0xc208038080, 0xc2080743c8, 0xc20800f000, 0x1000, 0x1000, 0x74ede0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1270 +0x6e
net/http.(*noteEOFReader).Read(0xc20801eae0, 0xc20800f000, 0x1000, 0x1000, 0x2, 0x0, 0x0)
<autogenerated>:125 +0xd4
bufio.(*Reader).fill(0xc20800acc0)
/usr/src/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc20800acc0, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/bufio/bufio.go:295 +0x257
net/http/internal.readLine(0xc20800acc0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/internal/chunked.go:110 +0x5a
net/http/internal.(*chunkedReader).beginChunk(0xc20800cf00)
/usr/src/go/src/net/http/internal/chunked.go:47 +0x46
net/http/internal.(*chunkedReader).Read(0xc20800cf00, 0xc2080ae000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/internal/chunked.go:77 +0xbb
net/http.(*body).readLocked(0xc2080367c0, 0xc2080ae000, 0x200, 0x200, 0xffffffff, 0x0, 0x0)
/usr/src/go/src/net/http/transfer.go:584 +0x7a
net/http.(*body).Read(0xc2080367c0, 0xc2080ae000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transfer.go:579 +0x115
net/http.(*bodyEOFSignal).Read(0xc208036800, 0xc2080ae000, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1193 +0x285
encoding/json.(*Decoder).readValue(0xc20802c000, 0xc207ffc7fa, 0x0, 0x0)
/usr/src/go/src/encoding/json/stream.go:124 +0x5e1
encoding/json.(*Decoder).Decode(0xc20802c000, 0x72d6e0, 0xc208038050, 0x0, 0x0)
/usr/src/go/src/encoding/json/stream.go:44 +0x7b
github.com/samalba/dockerclient.(*DockerClient).getEvents(0xc20801e7e0, 0xc20802b280, 0xc20800a2a0, 0x0, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:258 +0x3b7
created by github.com/samalba/dockerclient.(*DockerClient).StartMonitorEvents
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:243 +0x86

goroutine 14 [select, 15 minutes]:
net/http.(*persistConn).readLoop(0xc208074370)
/usr/src/go/src/net/http/transport.go:928 +0x9ce
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:660 +0xc9f

goroutine 16 [IO wait]:
net.(*pollDesc).Wait(0xc208010840, 0x72, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc208010840, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080107e0, 0xc2080ab000, 0x1000, 0x1000, 0x0, 0x7f7eb84d9a88, 0xc2081084b0)
/usr/src/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208038090, 0xc2080ab000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/src/go/src/net/net.go:121 +0xdc
net/http.noteEOFReader.Read(0x7f7eb84db3a0, 0xc208038090, 0xc208074318, 0xc2080ab000, 0x1000, 0x1000, 0x7a07e0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1270 +0x6e
net/http.(*noteEOFReader).Read(0xc20801eb80, 0xc2080ab000, 0x1000, 0x1000, 0xc208012000, 0x0, 0x0)
<autogenerated>:125 +0xd4
bufio.(*Reader).fill(0xc20800ade0)
/usr/src/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).Peek(0xc20800ade0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/bufio/bufio.go:132 +0xf0
net/http.(*persistConn).readLoop(0xc2080742c0)
/usr/src/go/src/net/http/transport.go:842 +0xa4
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:660 +0xc9f

goroutine 17 [syscall, 15 minutes, locked to thread]:
runtime.goexit()
/usr/src/go/src/runtime/asm_amd64.s:2232 +0x1

goroutine 15 [select, 15 minutes]:
net/http.(*persistConn).writeLoop(0xc208074370)
/usr/src/go/src/net/http/transport.go:945 +0x41d
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:661 +0xcbc

goroutine 18 [select]:
net/http.(*persistConn).writeLoop(0xc2080742c0)
/usr/src/go/src/net/http/transport.go:945 +0x41d
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:661 +0xcbc

goroutine 21 [select]:
net/http.(*persistConn).roundTrip(0xc2080746e0, 0xc20802ba80, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1082 +0x7ad
net/http.(*Transport).RoundTrip(0xc208064240, 0xc208035930, 0xc20800be00, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:235 +0x558
net/http.send(0xc208035930, 0x7f7eb84d9dc0, 0xc208064240, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:219 +0x4fc
net/http.(*Client).send(0xc20800cf30, 0xc208035930, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:142 +0x15b
net/http.(*Client).doFollowingRedirects(0xc20800cf30, 0xc208035930, 0x906c88, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:367 +0xb25
net/http.(*Client).Do(0xc20800cf30, 0xc208035930, 0xc, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:174 +0xa4
github.com/samalba/dockerclient.(*DockerClient).doRequest(0xc20801e7e0, 0x8508b0, 0x3, 0xc2080c1fb0, 0x23, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:79 +0x3a2
github.com/samalba/dockerclient.(*DockerClient).InspectContainer(0xc20801e7e0, 0xc2080d1e00, 0xc, 0x0, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:143 +0x215
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.GenerateProxyConfig(0xc20800cdb0, 0xc208074160, 0xc20801e7e0, 0x0, 0x53026e, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:310 +0x34e
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.updateConfig(0xc20800cdb0, 0xc208074160, 0xc20801e7e0, 0x0, 0xc2080d0300, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:487 +0x48
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.handleUpdate(0xc20800cdb0, 0xc208074160, 0xc20801e7e0, 0x0, 0xc2080d0300, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:220 +0xf2
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.HandleEvent(0xc20800cdb0, 0xc208074160, 0xc20801e7e0, 0x0, 0xc2080d0300, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:238 +0x128
github.com/ehazlett/interlock/plugins/haproxy.(*HaproxyPlugin).HandleEvent(0xc208076180, 0xc2080d0300, 0x0, 0x0)
<autogenerated>:5 +0xbc
github.com/ehazlett/interlock/plugins.DispatchEvent(0xc20800cdb0, 0xc20801e7e0, 0xc2080d0300, 0xc20800a2a0)
/go/src/github.com/ehazlett/interlock/plugins/plugins.go:48 +0x532
created by main.(*EventHandler).Handle
/go/src/github.com/ehazlett/interlock/interlock/handler.go:27 +0x313

goroutine 22 [select]:
net/http.(*persistConn).roundTrip(0xc2080742c0, 0xc208108470, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1082 +0x7ad
net/http.(*Transport).RoundTrip(0xc208064240, 0xc208035a00, 0xc2080f6420, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:235 +0x558
net/http.send(0xc208035a00, 0x7f7eb84d9dc0, 0xc208064240, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:219 +0x4fc
net/http.(*Client).send(0xc20800cf30, 0xc208035a00, 0x5e, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:142 +0x15b
net/http.(*Client).doFollowingRedirects(0xc20800cf30, 0xc208035a00, 0x906c88, 0x0, 0x0, 0x0)
time="2015-09-16T19:08:59Z" level="info" msg="[haproxy] web.docker.demo: upstream=10.0.10.15:809 container=www9" 
time="2015-09-16T19:08:59Z" level="info" msg="[haproxy] web.docker.demo: upstream=10.0.10.15:807 container=www7" 
/usr/src/go/src/net/http/client.go:367 +0xb25
net/http.(*Client).Do(0xc20800cf30, 0xc208035a00, 0xc, 0x0, 0x0)
/usr/src/go/src/net/http/client.go:174 +0xa4
github.com/samalba/dockerclient.(*DockerClient).doRequest(0xc20801e7e0, 0x8508b0, 0x3, 0xc2080f45a0, 0x23, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:79 +0x3a2
github.com/samalba/dockerclient.(*DockerClient).InspectContainer(0xc20801e7e0, 0xc2080368c0, 0xc, 0x0, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/dockerclient.go:143 +0x215
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.GenerateProxyConfig(0xc20800cdb0, 0xc208074210, 0xc20801e7e0, 0x0, 0x53026e, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:310 +0x34e
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.updateConfig(0xc20800cdb0, 0xc208074210, 0xc20801e7e0, 0x0, 0xc2080d0580, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:487 +0x48
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.handleUpdate(0xc20800cdb0, 0xc208074210, 0xc20801e7e0, 0x0, 0xc2080d0580, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:220 +0xf2
github.com/ehazlett/interlock/plugins/haproxy.HaproxyPlugin.HandleEvent(0xc20800cdb0, 0xc208074210, 0xc20801e7e0, 0x0, 0xc2080d0580, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/plugins/haproxy/haproxy.go:238 +0x128
github.com/ehazlett/interlock/plugins/haproxy.(*HaproxyPlugin).HandleEvent(0xc208076260, 0xc2080d0580, 0x0, 0x0)
<autogenerated>:5 +0xbc
github.com/ehazlett/interlock/plugins.DispatchEvent(0xc20800cdb0, 0xc20801e7e0, 0xc2080d0580, 0xc20800a2a0)
/go/src/github.com/ehazlett/interlock/plugins/plugins.go:48 +0x532
created by main.(*EventHandler).Handle
/go/src/github.com/ehazlett/interlock/interlock/handler.go:27 +0x313

goroutine 33 [select]:
net.lookupIPDeadline(0xc20800bda7, 0x2f, 0xecd8bb569, 0x21e7a5c9, 0xc557c0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/lookup.go:82 +0x6cb
net.resolveInternetAddr(0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0xecd8bb569, 0x21e7a5c9, 0xc557c0, 0x0, 0x0, 0x0, ...)
/usr/src/go/src/net/ipsock.go:285 +0x49b
net.resolveAddr(0x857530, 0x4, 0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0xecd8bb569, 0xc221e7a5c9, 0xc557c0, 0x0, ...)
/usr/src/go/src/net/dial.go:110 +0x378
net.(*Dialer).Dial(0xc208037100, 0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/dial.go:158 +0xf6
net.DialTimeout(0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0x6fc23ac00, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/dial.go:150 +0xe6
github.com/samalba/dockerclient.func·001(0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/utils.go:19 +0x7c
net/http.(*Transport).dial(0xc208064240, 0x85d5b0, 0x3, 0xc20800bda7, 0x34, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:479 +0x84
net/http.(*Transport).dialConn(0xc208064240, 0x0, 0xc20800bda0, 0x4, 0xc20800bda7, 0x34, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:564 +0x1678
net/http.func·019()
/usr/src/go/src/net/http/transport.go:520 +0x42
created by net/http.(*Transport).getConn
/usr/src/go/src/net/http/transport.go:522 +0x335

goroutine 27 [select]:
net.lookupIPDeadline(0xc20800b8c7, 0x2f, 0xecd8bb569, 0x21db260c, 0xc557c0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/lookup.go:82 +0x6cb
net.resolveInternetAddr(0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0xecd8bb569, 0x21db260c, 0xc557c0, 0x0, 0x0, 0x0, ...)
/usr/src/go/src/net/ipsock.go:285 +0x49b
net.resolveAddr(0x857530, 0x4, 0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0xecd8bb569, 0x21db260c, 0xc557c0, 0x0, ...)
/usr/src/go/src/net/dial.go:110 +0x378
net.(*Dialer).Dial(0xc2080d1bc0, 0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/dial.go:158 +0xf6
net.DialTimeout(0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0x6fc23ac00, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/dial.go:150 +0xe6
github.com/samalba/dockerclient.func·001(0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/ehazlett/interlock/interlock/Godeps/_workspace/src/github.com/samalba/dockerclient/utils.go:19 +0x7c
net/http.(*Transport).dial(0xc208064240, 0x85d5b0, 0x3, 0xc20800b8c7, 0x34, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:479 +0x84
net/http.(*Transport).dialConn(0xc208064240, 0x0, 0xc20800b8c0, 0x4, 0xc20800b8c7, 0x34, 0xc20801f000, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:564 +0x1678
net/http.func·019()
/usr/src/go/src/net/http/transport.go:520 +0x42
created by net/http.(*Transport).getConn
/usr/src/go/src/net/http/transport.go:522 +0x335

goroutine 31 [IO wait]:
net.(*pollDesc).Wait(0xc2080116b0, 0x72, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080116b0, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc208011650, 0xc2080bc000, 0x1000, 0x1000, 0x0, 0x7f7eb84d9a88, 0xc208108ea0)
/usr/src/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208038170, 0xc2080bc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/src/go/src/net/net.go:121 +0xdc
net/http.noteEOFReader.Read(0x7f7eb84db3a0, 0xc208038170, 0xc208074738, 0xc2080bc000, 0x1000, 0x1000, 0x7a07e0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1270 +0x6e
net/http.(*noteEOFReader).Read(0xc20801f220, 0xc2080bc000, 0x1000, 0x1000, 0xc208012000, 0x0, 0x0)
<autogenerated>:125 +0xd4
bufio.(*Reader).fill(0xc20800bd40)
/usr/src/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).Peek(0xc20800bd40, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/bufio/bufio.go:132 +0xf0
net/http.(*persistConn).readLoop(0xc2080746e0)
/usr/src/go/src/net/http/transport.go:842 +0xa4
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:660 +0xc9f

goroutine 34 [chan receive]:
net/http.func·016()
/usr/src/go/src/net/http/transport.go:507 +0x65
created by net/http.func·017
/usr/src/go/src/net/http/transport.go:513 +0xba

goroutine 29 [IO wait]:
net.(*pollDesc).Wait(0xc208011720, 0x72, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc208011720, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc2080116c0, 0xc2080ba000, 0x1000, 0x1000, 0x0, 0x7f7eb84d9a88, 0xc208109890)
/usr/src/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208038168, 0xc2080ba000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/src/go/src/net/net.go:121 +0xdc
net/http.noteEOFReader.Read(0x7f7eb84db3a0, 0xc208038168, 0xc208074898, 0xc2080ba000, 0x1000, 0x1000, 0x7a07e0, 0x0, 0x0)
/usr/src/go/src/net/http/transport.go:1270 +0x6e
net/http.(*noteEOFReader).Read(0xc20801f1c0, 0xc2080ba000, 0x1000, 0x1000, 0xc208012000, 0x0, 0x0)
<autogenerated>:125 +0xd4
bufio.(*Reader).fill(0xc20800bce0)
/usr/src/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).Peek(0xc20800bce0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/src/go/src/bufio/bufio.go:132 +0xf0
net/http.(*persistConn).readLoop(0xc208074840)
/usr/src/go/src/net/http/transport.go:842 +0xa4
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:660 +0xc9f

goroutine 30 [select]:
net/http.(*persistConn).writeLoop(0xc208074840)
/usr/src/go/src/net/http/transport.go:945 +0x41d
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:661 +0xcbc

goroutine 32 [select]:
net/http.(*persistConn).writeLoop(0xc2080746e0)
/usr/src/go/src/net/http/transport.go:945 +0x41d
created by net/http.(*Transport).dialConn
/usr/src/go/src/net/http/transport.go:661 +0xcbc

goroutine 35 [chan receive]:
net/http.func·016()
/usr/src/go/src/net/http/transport.go:507 +0x65
created by net/http.func·017
/usr/src/go/src/net/http/transport.go:513 +0xba

feature request

Is there any way to have the members of the pool named after the container names?
Right now, if the pool is named foo.mydomain.com, the members show as foo_mydomain_dom_0, _1, etc.

It would be much more helpful if it showed the actual names of the containers instead.
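
For illustration, a minimal sketch in Go of what the server line template could look like if it used the container name; the upstream type and its ContainerName field are hypothetical and do not exist in interlock today:

package main

import (
	"os"
	"text/template"
)

// upstream is a hypothetical stand-in for the data interlock hands to its
// haproxy template; ContainerName is not a real interlock field.
type upstream struct {
	ContainerName string
	Addr          string
	CheckInterval int
}

func main() {
	// Name each backend server after its container instead of the sanitized
	// domain plus an index (foo_mydomain_com_0, _1, ...).
	tmpl := template.Must(template.New("server").Parse(
		"server {{ .ContainerName }} {{ .Addr }} check inter {{ .CheckInterval }}\n"))
	tmpl.Execute(os.Stdout, upstream{"foo_web_1", "192.168.99.100:32768", 5000})
}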

{"port":80} directive does not pick up the right port when more then one port is published and containers are added and/or removed

docker run -p 80:80 --name interlock --restart=always --hostname interlock -e INTERLOCK_DATA='{"port":80}' -e HAPROXY_PROXY_BACKEND_OVERRIDE_ADDRESS=192.168.99.100 -e HAPROXY_CLIENT_TIMEOUT=2000000 -e HAPROXY_SERVER_TIMEOUT=2000000 ehazlett/interlock -s tcp://192.168.99.100:2375 --plugin haproxy start

does not always pick up port 80 of a container

log snippet:

works and publishes correct: ( chance ~ 1 in 10 )

"�2016-02-29T11:34:37.669957771Z time="2016-02-29T11:34:37Z" level="info" msg="[haproxy] []: upstream=:30001 container="

but when a container is added or removed the following pops up:

2016-02-29T12:15:05.495103290Z time="2016-02-29T12:15:05Z" level="info" msg="[haproxy] : upstream=:30003 container="

port 30001 is port 80, 30003 is MySQL and 30002 is ssh

We see similar behaviour when ports are dynamically allocated and with fewer ports published (e.g. 80 & ssh); with only one published port this does work as expected.
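
For reference, a minimal sketch in Go (hypothetical types, not interlock's actual code) of the selection the {"port":80} directive is expected to perform: pick the published host port that maps to the requested container port, rather than whichever mapping happens to be listed first:

package main

import "fmt"

// portMapping is a hypothetical stand-in for one published port
// (container port -> host port) on a container.
type portMapping struct {
	ContainerPort int
	HostPort      int
}

// hostPortFor picks the host port mapped to wantContainerPort, falling back
// to the first mapping only when no match exists.
func hostPortFor(ports []portMapping, wantContainerPort int) int {
	if len(ports) == 0 {
		return 0
	}
	for _, p := range ports {
		if p.ContainerPort == wantContainerPort {
			return p.HostPort
		}
	}
	return ports[0].HostPort
}

func main() {
	ports := []portMapping{
		{ContainerPort: 22, HostPort: 30002},   // ssh
		{ContainerPort: 3306, HostPort: 30003}, // mysql
		{ContainerPort: 80, HostPort: 30001},   // http
	}
	fmt.Println(hostPortFor(ports, 80)) // 30001, regardless of listing order
}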

docker machine, compose, swarm haproxy (or nginx) working example

I'm having trouble getting the reverse proxy to work with Docker Swarm.

I followed this tutorial https://blog.docker.com/2015/02/orchestrating-docker-with-machine-swarm-and-compose/

So I'm using docker-machine, docker-swarm, and scaling with docker-compose.

And I have my docker swarm running with 3 swarm nodes / agents.
I can access my public http site on all 3 individual IP addresses for the swarm agents but cannot figure out how to reverse proxy them to be accessible through ONE public, specified IP address.

My docker compose file looks like:

web:
  build: .
  ports:
    - "3000"
  links:
    - droppriceCode
droppriceCode:
  image: lukemadera/dropprice-code

docker ps outputs this:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                                 NAMES
04de9f99bda9        droppricecompose_web   "/bin/sh -c 'node run"   About an hour ago   Up 25 minutes       3002/tcp, 27017/tcp, 104.236.87.160:32781->3000/tcp   swarm-master/04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_droppricecompose_web_3
3ed097d392be        droppricecompose_web   "/bin/sh -c 'node run"   5 hours ago         Up 25 minutes       3002/tcp, 104.236.204.109:3000->3000/tcp              swarm-02/droppricecompose_web_2
3c8f88b7832b        droppricecompose_web   "/bin/sh -c 'node run"   5 hours ago         Up 25 minutes       3002/tcp, 104.236.204.109:80->3000/tcp                swarm-02/droppricecompose_web_1
8d74da04e2c3        mongo                  "/entrypoint.sh mongo"   31 hours ago        Up 31 hours         27017/tcp                                             swarm-master/droppricecompose_mongo_1

So I tried this docker haproxy plugin:
I run docker run -p 3000:80 -d -v /etc/docker:/etc/docker ehazlett/interlock --swarm-url $DOCKER_HOST --plugin haproxy start

It ran the first time but I couldn't access any public webserver. I tried again, but now it always gives an "unable to find a node with port 3000 available" error.

All my servers are on digitalocean:

  • main server I'm running all the above commands from: 104.236.133.134
  • swarm master is 104.236.87.160
  • and then 3 separate swarm nodes with their own ip addresses.

I think I'm getting lost in all the IP addresses and ports; I'm not sure which ones to use where.

Again everything seems to be working except the final step of linking / proxying all the individual swarm node ip addresses into one public ip address. I can access each node individually just fine.
Any help would be greatly appreciated.
Thanks!

cnt.Engine.Addr = unix:///docker.sock

Hi ehazlett,

I found it's still not working when deployed in our production environment. I investigated as much as I could, and found that the proxy.conf inside the interlock container is missing some values.

# managed by interlock
global

    maxconn 0
    pidfile proxy.pid

defaults
    mode http
    retries 3
    option redispatch
    option httplog
    option dontlognull
    timeout connect 0
    timeout client 0
    timeout server 0

frontend http-default
    bind *:8080
    monitor-uri /haproxy?monitor

    stats enable
    stats uri /haproxy?stats
    stats refresh 5s
    acl is_foo_dev hdr_beg(host) foo.dev
    use_backend foo_dev if is_foo_dev

backend foo_dev
    http-response add-header X-Request-Start %Ts.%ms
    balance roundrobin
    option forwardfor


    server foo_dev_0 :49156 check inter 5000

It seems the host value from the controller is empty, according to

hostAddrUrl, err := url.Parse(cnt.Engine.Addr)
if err != nil {
    logger.Warnf("%s: unable to parse engine addr: %s", cntId, err)
    continue
}
host := hostAddrUrl.Host
hostParts := strings.Split(hostAddrUrl.Host, ":")
if len(hostParts) != 1 {
    host = hostParts[0]
}
if len(cnt.Ports) == 0 {
    logger.Warnf("%s: no ports exposed", cntId)
    continue
}
portDef := cnt.Ports[0]
addr := fmt.Sprintf("%s:%d", host, portDef.Port)
if interlockData.Port != 0 {
    for _, p := range cnt.Ports {
        if p.ContainerPort == interlockData.Port {
            addr = fmt.Sprintf("%s:%d", host, p.Port)
        }
    }
}
up := &interlock.Upstream{
    Addr:          addr,
    CheckInterval: checkInterval,
}

And the corresponding template line:

server {{ $host.Name }}_{{ $i }} {{ $up.Addr }} check inter {{ $up.CheckInterval }}

I think cnt.Engine.Addr is unix:///docker.sock in this situation, so the parsed host comes out empty, which haproxy does not accept.
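
For illustration, a minimal sketch in Go of one possible workaround (an assumption on my part, not interlock's current behaviour): fall back to an operator-supplied address, such as the HAPROXY_PROXY_BACKEND_OVERRIDE_ADDRESS variable mentioned in an earlier issue, whenever the engine address has no usable network host:

package main

import (
	"fmt"
	"net"
	"net/url"
	"os"
)

// backendHost derives a usable host for the haproxy backend from an engine
// address, falling back to an override address when the engine address has
// no network host (e.g. unix:///docker.sock).
func backendHost(engineAddr string) string {
	u, err := url.Parse(engineAddr)
	if err != nil || u.Scheme == "unix" || u.Host == "" {
		return os.Getenv("HAPROXY_PROXY_BACKEND_OVERRIDE_ADDRESS")
	}
	host := u.Host
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h // strip the engine's API port
	}
	return host
}

func main() {
	fmt.Println(backendHost("unix:///docker.sock"))       // override address (empty if unset)
	fmt.Println(backendHost("tcp://192.168.99.101:2376")) // 192.168.99.101
}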

Please help me, thank you :)

"No server is available to handle this request"

Hi,

I filled in "foo" as the hostname and "aoi.im" as the domain when creating the container. (I have also tried putting "foo.aoi.im" in the domain field and leaving the hostname blank.) I added an A record with hostname "foo" and IP "127.0.0.1" on the domain "aoi.im". I can access the website inside the container via "127.0.0.1:[container-port]" and "foo.aoi.im:[container-port]", but I get "No server is available to handle this request" when I access "foo.aoi.im" directly.

I also found that the status of this container on the haproxy status page is "DOWN".

Please help me.

Thank you and have a nice day!

v1.0.0 will not work with UCP/Swarm and overlay network

Hi there,

I have spent the last few days trying to get interlock v1.0.0 to work on our UCP/Swarm cluster but with no luck.

Using the overlay network seems to break the generation of the configuration files.
Example Compose file:

version: "2"

services:

  webapp:
    image: 'blah:latest'
    environment:
      - "constraint:node==production-frontend*"
    expose:
      - '80'
    labels:
      - "interlock.hostname=www"
      - "interlock.domain=blah.com"
    depends_on:
      - interlock

  interlock:
      image: blah/interlock:1.0.0
      command: run -c /bin/config.toml
      ports:
          - 8080
      volumes:
          - haproxy:/etc
          - /var/run/docker.sock:/var/run/docker.sock
          - /var/lib/docker/discovery_certs:/var/lib/docker/discovery_certs
      environment:
        - "constraint:node==production-frontend-1"

  haproxy:
      image: haproxy:latest
      ports:
          - 80:80
      labels:
          - "interlock.ext.name=haproxy"
      volumes:
          - haproxy:/usr/local/etc/haproxy
      environment:
        - "constraint:node==production-frontend-1"
      depends_on:
        - interlock


networks:
  default:
    external:
      name: overlay_network

volumes:
  haproxy:

Toml config file:


listenAddr = ":8080"
dockerURL = "tcp://<UCP/SWARM IP>:2376"
TLSCACert = "/var/lib/docker/discovery_certs/ca.pem"
TLSCert = "/var/lib/docker/discovery_certs/cert.pem"
TLSKey = "/var/lib/docker/discovery_certs/key.pem"
AllowInsecure = false
EnableMetrics = true

[[extensions]]
name = "haproxy"
configPath = "/etc/haproxy.cfg"
pidPath = "/etc/haproxy.pid"
maxConn = 1024
port = 80
adminUser = "admin"
adminPass = "interlock"

This generates the following haproxy.conf


backend www_blah_com
    http-response add-header X-Request-Start %Ts.%ms
    balance roundrobin



    server blah_webapp_4 : check inter 5000
    server  blah_webapp_3 : check inter 5000
    server  blah_webapp_2 : check inter 5000
    server  blah_webapp_1 : check inter 5000

I believe it should be something like:



backend www_blah_com
    http-response add-header X-Request-Start %Ts.%ms
    balance roundrobin



    server blah_webapp_4  1.2.3.4:80 check inter 5000
    server  blah_webapp_3 2.3.4.5:80 check inter 5000

  etc etc....

As a result, haproxy shows all nodes as down with "Layer 4 connection problem".

If I manually run curl against blah_webapp_4 it returns the webpage just fine, so there has to be a configuration issue somewhere.

Interlock stops processing further events on swarm container error

When running interlock with the nginx plugin against a swarm cluster, I am running into what looks like a race condition: interlock tries to query a container that doesn't exist, swarm throws an error stating that the container doesn't exist, and interlock then stops processing further events. Here is a gif of the error in action (interlock is on top, swarm on the bottom):

[animated gif: interlock-swarm]

I am running swarm built from master on all of my swarm nodes as well as the master, with interlock:latest (0.2.6).
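
For illustration, a minimal sketch in Go (hypothetical types, not interlock's actual handler) of the behaviour I would expect: if inspecting the container referenced by an event fails, log the error and keep consuming events instead of stopping:

package main

import (
	"errors"
	"log"
)

// event and inspectFunc are hypothetical stand-ins for a docker event and
// the container-inspect call.
type event struct{ ContainerID string }
type inspectFunc func(id string) error

// handleEvents keeps draining the channel even when a single inspect fails,
// e.g. because the container was already removed by the time the event arrives.
func handleEvents(events <-chan event, inspect inspectFunc) {
	for ev := range events {
		if err := inspect(ev.ContainerID); err != nil {
			log.Printf("skipping event for %s: %v", ev.ContainerID, err)
			continue // do not stop processing further events
		}
		// ... regenerate the proxy config and reload here ...
	}
}

func main() {
	events := make(chan event, 2)
	events <- event{"abc123"}
	events <- event{"def456"}
	close(events)
	handleEvents(events, func(id string) error {
		if id == "abc123" {
			return errors.New("no such container")
		}
		return nil
	})
}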

not picking up changes

Sometimes haproxy does not pick up changes that happen.
I will add a new node to a pool or remove one and it will not show up;
I have to restart the container to see the changes.

I don't see anything in the logs, so I'm not sure what to give you to help.

set timeout server

Is there any way to set 'timeout server' as a backend option in the INTERLOCK_DATA options?
I know I can set it as an environment variable for the default values, but that affects all pools.

thanks!

Bad certificate error when used on a local Swarm created with Machine

Hi Evan,

When deploying this compose file on a VirtualBox Swarm created with Machine, I got some error from interlock regarding bad certificate.

Creation of the Swarm on VirtualBox

# KV Store
docker-machine create -d virtualbox consul
eval "$(docker-machine env consul)"
docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap

# Swarm master
docker-machine create \
-d virtualbox \
--swarm --swarm-image="swarm" --swarm-master \
--swarm-discovery="consul://$(docker-machine ip consul):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
demo0

# Swarm node
docker-machine create -d virtualbox \
--swarm --swarm-image="swarm:1.0.0-rc2" \
--swarm-discovery="consul://$(docker-machine ip consul):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
demo1

docker-compose.yml

interlock:
  image: ehazlett/interlock
  ports:
    - "80:80"
  volumes:
    - /home/docker/.docker:/etc/docker  # Certs created by Machine are in /home/docker/.docker folder, right ?
  command: "--swarm-url tcp://192.168.99.101:2376 --swarm-tls-ca-cert=/etc/docker/ca.pem --swarm-tls-cert=/etc/docker/cert.pem --swarm-tls-key=/etc/docker/key.pem --plugin haproxy start"
db:
  image: mongo:3.0
  command: "--storageEngine wiredTiger"
  environment:
    - "constraint:node==demo0"
api:
  image: lucj/message-app:v3
  environment:
    - MONGO_URL=mongodb://testscale_db_1:27017/messageApp
    - INTERLOCK_DATA={"hostname":"api","domain":"message-app.com"}

Compose ran with:

docker-compose --x-networking --x-network-driver=overlay up

Connection Errors:

interlock_1 | time="2015-11-19T11:20:14Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
interlock_1 | time="2015-11-19T11:20:14Z" level="warning" msg="Get https://192.168.99.101:2376/v1.15/containers/json?all=0&size=0: remote error: bad certificate"
interlock_1 | time="2015-11-19T11:20:14Z" level="error" msg="[haproxy] error reloading: exit status 1"
interlock_1 | time="2015-11-19T11:20:14Z" level="warning" msg="error receiving events; attempting to reconnect"
interlock_1 | time="2015-11-19T11:20:14Z" level="error" msg="Get https://192.168.99.101:2376/v1.15/events: remote error: bad certificate"

Any idea what I'm missing?

How to handle service upgrades?

I'm curious if Interlock does anything to support atomically swapping out versions of an application. For instance, suppose you have 1 container running version A of your application. If you start another container running version A, it should be added to the same backend pool, so HAProxy can balance between them.

However, if you then deploy a container running version B, you likely don't want it added to the same backend pool as the version As. Doing so would cause problems with the wrong versions of assets being requested (if a request for /s/B/main.js was received by a backend running version A, it would return 404, or worse, return version A, which may then be cached by the CDN), and behavior flapping back and forth from one request to another.

I think the thing to do would be to atomically remove the A backend pool, and replace it with a B backend pool, when a version B container is seen.

Does Interlock address this? I took a quick look through the code, and didn't see anything, but it is quite possible I missed it. If you think it is the right approach, I can try to take a stab at a PR implementing my idea.
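
For illustration, a minimal sketch in Go of the idea above (hypothetical, not something interlock does today): group containers by hostname plus an application version label, and only publish the pool for the newest version, so old and new containers never share a backend:

package main

import (
	"fmt"
	"sort"
)

// container is a hypothetical stand-in carrying the hostname and an
// application version label.
type container struct {
	Hostname string
	Version  string
	Addr     string
}

// latestPools keeps, for each hostname, only the containers with the greatest
// version seen, so a new version atomically replaces the old backend pool.
func latestPools(cnts []container) map[string][]string {
	latest := map[string]string{}
	for _, c := range cnts {
		if c.Version > latest[c.Hostname] {
			latest[c.Hostname] = c.Version
		}
	}
	pools := map[string][]string{}
	for _, c := range cnts {
		if c.Version == latest[c.Hostname] {
			pools[c.Hostname] = append(pools[c.Hostname], c.Addr)
		}
	}
	for _, addrs := range pools {
		sort.Strings(addrs)
	}
	return pools
}

func main() {
	pools := latestPools([]container{
		{"app.example.com", "A", "10.0.0.1:3000"},
		{"app.example.com", "A", "10.0.0.2:3000"},
		{"app.example.com", "B", "10.0.0.3:3000"},
	})
	fmt.Println(pools) // only the version B backend remains
}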

Backend not removing when using 'docker stop' on a container

I am starting a container like this:
docker run -i -t -d -p 80 -h test1.docker1.mbentley.net --name test1 mbentley/ubuntu-nginx

I can see it adds to the haproxy config and serves. Then I stop it:
docker stop test1

I can see that haproxy itself detects that the backend is down, but the stop doesn't trigger a config re-read and reload of haproxy as expected.

Starting the same container back up and then doing a docker kill test1 works as expected. I've reproduced this a number of times. I haven't looked at the docker event stream to see what exactly is going on or why it might not be reporting but it is certainly odd to see that behavior.

TLS to backends

Would it be possible to include an additional parameter for the backends to allow SSL to the backends? Maybe something like "tls_backend": true passed in INTERLOCK_DATA, so that the communication stream to the backends is encrypted? That would be great, thanks!

haproxy bug: ordering of ssl / verify none / sni causes proxy.conf to fail parsing

Hey there!

It looks like the ordering of ssl / verify / SNI in the haproxy server stanza causes the proxy.conf to not be parsed.

Tested: interlock 0.3.3 -- but I think this bug will also be present in interlock 1.0.0 because the ordering is the same.

Configuration:

docker run -e INTERLOCK_DATA='{"ssl_backend_tls_verify": "none", "ssl_backend": true}' --hostname foo.bar.baz -p 3000 dtr.foo/test/test-repo:latest

docker run -p 80:80 -p 443:443 -v /etc/docker:/etc/docker:ro -v /root:/ssl:ro -e HAPROXY_SSL_CERT=/ssl/wildcard.certs ehazlett/interlock --swarm-url tcp://swarm-path:4000 --plugin haproxy --swarm-tls-ca-cert /etc/docker/ca.pem --swarm-tls-cert /etc/docker/cert.pem --swarm-tls-key /etc/docker/key.pem start

Once test/test-repo has started up, the following messages are observed in the ehazlett/interlock logs:

time="2016-03-03T20:32:08Z" level=error msg="error reloading haproxy: exit status 1"
time="2016-03-03T20:32:08Z" level=error msg="[haproxy] error reloading: exit status 1"
time="2016-03-03T20:32:08Z" level=warning msg="error receiving events; attempting to reconnect"

When I connect to the interlock container:

# cat /proxy.conf
(config elided)
server stoic_shaw 3.101.113.47:32777 check inter 5000 ssl sni req.hdr(Host) verify none

# haproxy -d -c -f proxy.conf
[WARNING] 062/205826 (31) : config : log format ignored for frontend 'http-default' since it has no log address.
[ALERT] 062/205826 (31) : Proxy 'XXX', server 'stoic_shaw' [proxy.conf:40] verify is enabled by default but no CA file specified. If you're running on a LAN where you're certain to trust the server's certificate, please set an explicit 'verify none' statement on the 'server' line, or use 'ssl-server-verify none' in the global section to disable server-side verifications by default.
[WARNING] 062/205826 (31) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear.
[ALERT] 062/205826 (31) : Fatal errors found in configuration.

The issue seems to be the server line: server stoic_shaw 3.101.113.47:32777 check inter 5000 ssl sni req.hdr(Host) verify none

If I change the ordering to put verify none before sni, as such: server stoic_shaw 3.101.113.47:32777 check inter 5000 ssl verify none sni req.hdr(Host), interlock starts & runs as expected.
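
For reference, a minimal sketch in Go (not interlock's actual template code) of assembling the server options in an order haproxy accepts, with verify placed before sni:

package main

import "fmt"

// buildServerLine assembles the haproxy server options with "ssl verify none"
// emitted before the sni option, mirroring the manual fix described above.
func buildServerLine(name, addr string, checkInterval int, sslBackend bool, verify string) string {
	line := fmt.Sprintf("server %s %s check inter %d", name, addr, checkInterval)
	if sslBackend {
		line += " ssl"
		if verify != "" {
			line += " verify " + verify // verify must come before sni
		}
		line += " sni req.hdr(Host)"
	}
	return line
}

func main() {
	fmt.Println(buildServerLine("stoic_shaw", "3.101.113.47:32777", 5000, true, "none"))
}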

This is a head scratcher; reading the HAProxy docs seems to indicate the difference in ordering should be irrelevant...

I'm going to send a message to the HAProxy mailing list & dig on the code there.

If you would like, I can send in a fix to Interlock to reorder the config files. interlock:0.3.3 & interlock:master best?

thanks for some cool software!

Multi host networking...

Problem

The "multihost" branch provided a mechanism to get Interlock to forward requests via the internal network to the Web container. It also relaxed the need to specify an External Port for the Web container.

Given that Multi-host networking is now "GA", what's the recommended way to do this now?

Support version X.Y instead of just X.Y.Z

It would help to be able to retrieve the latest 1.0.x version by using

docker run ehazlett/interlock:1.0

This currently doesn't work, as the full ehazlett/interlock:1.0.0 tag is needed.

make build-container fails due to missing golang base image

Dockerfile.build (used to compile in a container) references a golang:1.5-cross image, which is not an official image (the latest cross image is 1.4.3). As this is not being cross-compiled, it should probably use the latest golang base image.
