pipework's Introduction

⚠️ WARNING: this project is not maintained.

It was written in the early days of Docker, when people needed a way to "plumb" Docker containers into arbitrary network topologies. If you want to use it today (post-2020), you can, but it's at your own risk. Small contributions (of a few lines) are welcome, but I don't have the time to review and test bigger contributions, so don't expect any new features or significant fixes (for instance, if Docker changes the way it handles container networking, this will break, and I will not fix it).

Proceed at your own risk! :)

Pipework

Software-Defined Networking for Linux Containers

Pipework lets you connect containers together in arbitrarily complex scenarios. Pipework uses cgroups and namespaces, and works with "plain" LXC containers (created with lxc-start) as well as with the awesome Docker.

Table of Contents generated with DocToc

Things to note

vCenter / vSphere / ESX / ESXi

If you use vCenter / vSphere / ESX / ESXi, set (or ask your administrator to set) the Network Security Policies of the vSwitch as follows:

  • Promiscuous mode: Accept
  • MAC address changes: Accept
  • Forged transmits: Accept

After starting the guest OS and creating a bridge, you might also need to fine-tune the br1 interface as follows:

  • brctl stp br1 off (to disable the STP protocol and prevent the switch from disabling ports)
  • brctl setfd br1 2 (to reduce the time taken by the br1 interface to go from blocking to forwarding state)
  • brctl setmaxage br1 0

Virtualbox

If you use VirtualBox, you will have to update your VM network settings. Open the settings panel for the VM, go to the "Network" tab, and pull down the "Advanced" settings. Here, the "Adapter Type" should be pcnet (the full name is something like "PCnet-FAST III") instead of the default e1000 (Intel PRO/1000). Also, "Promiscuous Mode" should be set to "Allow All".

If you don't do that, bridged containers won't work, because the virtual NIC will filter out all packets with a different MAC address. If you are running VirtualBox in headless mode, the command line equivalent of the above is modifyvm --nicpromisc1 allow-all. If you are using Vagrant, you can add the following to the config for the same effect:

config.vm.provider "virtualbox" do |v|
  v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']
  v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']
end

Note: it looks like some operating systems (e.g. CentOS 7) do not support pcnet anymore. You might want to use the virtio-net (Paravirtualized Network) interface with those.
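
If you are running headless (or simply prefer the command line), a rough equivalent with VBoxManage might look like this; the VM name is a placeholder, and virtio selects the Paravirtualized Network adapter mentioned in the note above:

VBoxManage modifyvm "<vm name>" --nictype1 virtio --nicpromisc1 allow-all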

Docker

Before using Pipework, please ask on the docker-user mailing list if there is a "native" way to achieve what you want to do without Pipework.

In the long run, Docker will allow complex scenarios, and Pipework should become obsolete.

If there is really no other way to plumb your containers together with the current version of Docker, then okay, let's see how we can help you!

The following examples show what Pipework can do for you and your containers.

LAMP stack with a private network between the MySQL and Apache containers

Let's create two containers, running the web tier and the database tier:

APACHE=$(docker run -d apache /usr/sbin/httpd -D FOREGROUND)
MYSQL=$(docker run -d mysql /usr/sbin/mysqld_safe)

Now, bring superpowers to the web tier:

pipework br1 $APACHE 192.168.1.1/24

This will:

  • create a bridge named br1 in the docker host;
  • add an interface named eth1 to the $APACHE container;
  • assign IP address 192.168.1.1 to this interface,
  • connect said interface to br1.

Now (drum roll), let's do this:

pipework br1 $MYSQL 192.168.1.2/24

This will:

  • not create a bridge named br1, since it already exists;
  • add an interface named eth1 to the $MYSQL container;
  • assign IP address 192.168.1.2 to this interface,
  • connect said interface to br1.

Now, both containers can ping each other on the 192.168.1.0/24 subnet.
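
To quickly verify connectivity (assuming a reasonably recent Docker with docker exec, and ping available in the image), something like this should work:

docker exec $APACHE ping -c 1 192.168.1.2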

Docker integration

Pipework can resolve Docker container names. If the container ID that you gave to Pipework cannot be found, Pipework will try to resolve it with docker inspect. This makes it even simpler to use:

docker run --name web1 -d apache
pipework br1 web1 192.168.12.23/24

Peeking inside the private network

Want to connect to those containers using their private addresses? Easy:

ip addr add 192.168.1.254/24 dev br1

Voilà!

Setting container internal interface

By default, pipework creates a new interface called eth1 inside the container. If you want a different interface name (e.g. eth2, so that pipework can set up more than one interface), use:

pipework br1 -i eth2 ...

Note: for InfiniBand IPoIB interfaces, the default interface name is ib0, not eth1.

Setting host interface name

By default, pipework creates a host-side interface with a fixed prefix and a random suffix. If you would like to specify this interface name, use the -l flag (for local):

pipework br1 -i eth2 -l hostapp1 ...

Using a different netmask

The IP addresses given to pipework are passed directly to the ip addr tool, so you can append a subnet size using traditional CIDR notation.

For example:

pipework br1 $CONTAINERID 192.168.4.25/20

Don't forget that all containers should use the same subnet size; pipework is not clever enough to take the subnet size you specified for the first container and reuse it for the other containers.
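
For instance, to keep a /20 consistent across two containers on the same bridge (the container IDs here are placeholders):

pipework br1 $CONTAINERID1 192.168.4.25/20
pipework br1 $CONTAINERID2 192.168.4.26/20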

Setting a default gateway

If you want outbound traffic (i.e. when the container connects to the outside world) to go through the interface managed by Pipework, you need to change the container's default route.

This can be useful in some use cases, like traffic shaping, or if you want the container to use a specific outbound IP address.

This can be automated by Pipework, by adding the gateway address after the IP address and subnet mask:

pipework br1 $CONTAINERID 192.168.4.25/20@192.168.4.1

Connect a container to a local physical interface

Let's pretend that you want to run two Hipache instances, listening on real interfaces eth2 and eth3, using specific (public) IP addresses. Easy!

pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24

Note that this will use macvlan subinterfaces, so you can actually put multiple containers on the same physical interface. If you don't want to virtualize the interface, you can use the --direct-phys option to namespace an interface exclusively to a container without using a macvlan bridge.

pipework --direct-phys eth1 $CONTAINERID 192.168.1.2/24

This is useful for assigning SR-IOV VFs to containers, but be aware of added latency when using the NIC to switch packets between containers on the same host.

Use MAC address to specify physical interface

If you connect a local physical interface and give it a specific name inside the container, pipework will also rename the physical interface itself, so this behaviour is not idempotent:

pipework --direct-phys eth1 -i container0 $CONTAINERID 0/0
# second call would fail because physical interface eth1 has been renamed

We can use the interface's MAC address to identify it the same way every time (udev network rules use a similar method for persistent interface naming):

pipework --direct-phys mac:00:f3:15:4a:42:c8 -i container0 $CONTAINERID 0/0

Let the Docker host communicate over macvlan interfaces

If you use macvlan interfaces as shown in the previous paragraph, you will notice that the host will not be able to reach the containers over their macvlan interfaces. This is because traffic going in and out of macvlan interfaces is segregated from the "root" interface.

If you want to enable that kind of communication, no problem: just create a macvlan interface in your host, and move the IP address from the "normal" interface to the macvlan interface.

For instance, on a machine where eth0 is the main interface, and has address 10.1.1.123/24, with gateway 10.1.1.254, you would do this:

ip addr del 10.1.1.123/24 dev eth0
ip link add link eth0 dev eth0m type macvlan mode bridge
ip link set eth0m up
ip addr add 10.1.1.123/24 dev eth0m
route add default gw 10.1.1.254

Then, you would start a container and assign it a macvlan interface the usual way:

CID=$(docker run -d ...)
pipework eth0 $CID 10.1.1.234/24@10.1.1.254

Wait for the network to be ready

Sometimes, you want the extra network interface to be up and running before starting your service. A dirty (and unreliable) solution would be to add a sleep command before starting your service; but that could break in "interesting" ways if the server happens to be a bit slower at some point.

There is a better option: add the pipework script to your Docker image, and before starting the service, call pipework --wait. It will wait until the eth1 interface is present and in UP operational state, then exit gracefully.

If you need to wait on an interface other than eth1, pass the -i flag like this:

pipework --wait -i ib0
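
For example, a minimal entrypoint sketch (the pipework path and the httpd command are assumptions and will differ in your image) could look like this:

#!/bin/sh
# Block until the pipework-managed interface exists and is UP,
# then hand over to the actual service.
/usr/local/bin/pipework --wait
exec /usr/sbin/httpd -D FOREGROUND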

Add the interface without an IP address

If for some reason you want to set the IP address from within the container, you can use 0/0 as the IP address. The interface will be created, connected to the network, and assigned to the container, but without configuring an IP address:

pipework br1 $CONTAINERID 0/0
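
You could then configure the address from inside the container yourself; for example (assuming iproute2 is installed in the container and you have a Docker version with docker exec):

docker exec $CONTAINERID ip addr add 192.168.1.3/24 dev eth1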

Add a dummy interface

If for some reason you want a dummy interface inside the container, you can add it like any other interface. Just set the host interface to the keyword dummy. All other options - IP, CIDR, gateway - function as normal.

pipework dummy $CONTAINERID 192.168.21.101/24@192.168.21.1

Of course, a gateway does not mean much in the context of a dummy interface, but there it is.

DHCP

You can use DHCP to obtain the IP address of the new interface. Just specify the name of the DHCP client that you want to use instead of an IP address; for instance:

pipework eth1 $CONTAINERID dhclient

You can specify the following DHCP clients:

  • dhclient
  • udhcpc
  • dhcpcd
  • dhcp

The first three are "normal" DHCP clients, and have to be installed on your host for the corresponding option to work. The last one works differently: it runs a DHCP client in a separate Docker container sharing its network namespace with your container. This lets you use DHCP without worrying about installing the right DHCP client on your host. It uses the Docker busybox image and its embedded udhcpc client.

The value of $CONTAINERID is provided to the DHCP client to use as the hostname in the DHCP request. Depending on the configuration of your network's DHCP server, this may let other machines on the network reach the container using $CONTAINERID as a hostname; in that case, passing a container name rather than a container ID is probably more appropriate.
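
For example, running with a meaningful container name and then handing that name to pipework (web1 is just an illustration):

docker run --name web1 -d apache
pipework eth1 web1 dhcp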

You need three things for this to work correctly:

  • obviously, a DHCP server (in the example above, a DHCP server should be listening on the network to which we are connected on eth1);
  • a DHCP client (either udhcpc, dhclient, or dhcpcd) must be installed on your Docker host (you don't have to install it in your containers, but it must be present on the host), unless you specify dhcp as the client, in which case the Docker busybox image should be available;
  • the underlying network must support bridged frames.

The last item might be particularly relevant if you are trying to bridge your containers with a WPA-protected WiFi network. I'm not 100% sure about this, but I think that the WiFi access point will drop frames originating from unknown MAC addresses; meaning that you have to go through extra hoops if you want it to work properly.

It works fine on plain old wired Ethernet, though.

Lease Renewal

All of the DHCP options (udhcpc, dhcp, dhclient, dhcpcd) exit or are killed by pipework once they have obtained a lease. This prevents zombie DHCP client processes from lingering after a container exits.

However, if the container runs longer than the life of the lease, the lease will eventually expire; since no DHCP client is left to renew it, the container ends up stuck without a valid IP address.

To resolve this problem, you can keep the DHCP client running. The method depends on the DHCP client you use.

  • dhcp: see the next section DHCP Options
  • dhclient: use DHCP client dhclient-f
  • udhcpc: use DHCP client udhcpc-f
  • dhcpcd: not yet supported.

Note: if you use this option, you are responsible for finding and killing those DHCP client processes later. pipework is a one-shot script; it is not intended to manage long-running processes for you.

In order to find the processes, you can look for pidfiles in the following locations:

  • dhcp: see the next section DHCP Options
  • dhclient: pidfiles in /var/run/dhclient.$GUESTNAME.pid
  • udhcpc: pidfiles in /var/run/udhcpc.$GUESTNAME.pid
  • dhcpcd: not yet supported

$GUESTNAME is the name or ID of the guest as you passed it to pipework on instantiation.
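
For example, to clean up after a guest named web1 that was started with the dhclient-f client (using the pidfile location listed above):

kill "$(cat /var/run/dhclient.web1.pid)"
rm -f /var/run/dhclient.web1.pid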

DHCP Options

You can specify extra DHCP options to be passed to the DHCP client by adding them with a colon. For instance:

pipework eth1 $CONTAINERID dhcp:-f

This tells Pipework to set up the interface using the DHCP client of the Docker busybox image, and to pass -f as an extra flag to that DHCP client. This flag instructs the client to stay in the foreground instead of going to the background. Let's see what this means.

Without this flag, a new container is started, in which the DHCP client is executed. The DHCP client obtains a lease, then goes to the background. When it goes to the background, PID 1 in this container exits, causing the whole container to be terminated. As a result, the "pipeworked" container has its IP address, but the DHCP client is gone. On the up side, you don't have any cleanup to do; on the other hand, the DHCP lease will not be renewed, which could be problematic if you have short leases and the server and other clients don't validate their leases before using them.

With this flag, a new container is started, it runs the DHCP client just like before; but when it obtains the lease, it remains in the foreground. As a result, the lease will be properly renewed. However, when you terminate the "pipeworked" container, you should also take care of removing the container that runs the DHCP client. This can be seen as an advantage if you want to reuse this network stack even if the initial container is terminated.

Specify a custom MAC address

If you need to specify the MAC address to be used (either by the macvlan subinterface or the veth interface), no problem. Just add it on the command line, as the last argument:

pipework eth0 $(docker run -d haproxy) 192.168.1.2/24 26:2e:71:98:60:8f

This can be useful if your network environment requires whitelisting your hardware addresses (some hosting providers do that), or if you want to obtain a specific address from your DHCP server. Also, some projects like Orchestrator rely on static MAC-IPv6 bindings for DHCPv6:

pipework br0 $(docker run -d zerorpcworker) dhcp fa:de:b0:99:52:1c

Note: if you generate your own MAC addresses, try to remember these two simple rules:

  • the lowest bit of the first byte should be 0, otherwise, you are defining a multicast address;
  • the second lowest bit of the first byte should be 1, otherwise, you are using a globally unique (OUI enforced) address.

In other words, if your MAC address is ?X:??:??:??:??:??, X should be 2, 6, a, or e. You can check Wikipedia if you want even more details.

If you want a consistent MAC address across container restarts, but don't want to have to keep track of the messy MAC addresses, ask pipework to generate an address for you based on a specified string, e.g. the hostname. This guarantees a consistent MAC address:

pipework eth0 <container> dhcp U:<some_string>

pipework will take some_string and hash it using MD5. It then takes the first 40 bits of the MD5 hash, prepends the locally administered prefix 0x02, and uses the result as the MAC address.

For example, if your unique string is "myhost.foo.com", then the MAC address will always be 02:72:6c:cd:9b:8d.
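
As an illustration of the scheme described above (this is a bash sketch; pipework's exact hashing input may differ slightly), you could derive such an address like this:

STRING="myhost.foo.com"
HASH=$(printf '%s' "$STRING" | md5sum | cut -c1-10)   # first 40 bits = 10 hex digits
printf '02:%s:%s:%s:%s:%s\n' "${HASH:0:2}" "${HASH:2:2}" "${HASH:4:2}" "${HASH:6:2}" "${HASH:8:2}"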

This is particularly useful in the case of DHCP, where you might want the container to stop and start, but always get the same address. Most DHCP servers will keep giving you a consistent IP address if the MAC address is consistent.

Note: Setting the MAC address of an IPoIB interface is not supported.

Virtual LAN (VLAN)

If you want to attach the container to a specific VLAN, the VLAN ID can be specified using the [MAC]@VID notation in the MAC address parameter.

Note: VLAN attachment is currently only supported for containers to be attached to either an Open vSwitch bridge or a physical interface. Linux bridges are currently not supported.

The following will attach the zerorpcworker container to the Open vSwitch bridge ovsbr0, on VLAN ID 10.

pipework ovsbr0 $(docker run -d zerorpcworker) dhcp @10

Control Routes

If you want to add/delete/replace routes in the container, you can run any iproute2 route command via pipework.

All you have to do is set the interface to be route, followed by the container ID or name, followed by the route command.

Here are some examples.

pipework route $CONTAINERID add 10.0.5.6/24 via 192.168.2.1
pipework route $CONTAINERID replace default via 10.2.3.78

Everything after the container ID (or name) is passed as arguments to ip route inside the container's namespace. See the iproute2 man page for the full syntax.

Control Rules

If you want to add/delete/replace IP rules in the container, you can do the same thing with ip rule that you can with ip route.

Specify the interface to be rule, followed by the container ID or name, followed by the rule command.

Here are some examples, to specify a route table:

pipework rule $CONTAINERID add from 172.19.0.2/32 table 1
pipework rule $CONTAINERID add to 172.19.0.2/32 table 1

Note that for these rules to work you first need to execute the following in your container:

echo "1 admin" >> /etc/iproute2/rt_tables

You can read more on using route tables, specifically to setup multiple NICs with different default gateways, here: https://kindlund.wordpress.com/2007/11/19/configuring-multiple-default-routes-in-linux/
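
Putting it together, a hedged example (the 172.19.0.1 gateway is a placeholder for whatever gateway should handle that traffic) might route everything from 172.19.0.2 through table 1:

pipework rule $CONTAINERID add from 172.19.0.2/32 table 1
pipework route $CONTAINERID add default via 172.19.0.1 table 1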

Control tc

If you want to use tc from within the container namespace, you can do so with the command pipework tc $CONTAINERID <tc_args>.

For example, to simulate 30% packet loss on eth0 within the container:

pipework tc $CONTAINERID qdisc add dev eth0 root netem loss 30%
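
Other netem parameters work the same way; for instance, to add artificial latency instead of packet loss:

pipework tc $CONTAINERID qdisc add dev eth0 root netem delay 100ms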

Support Open vSwitch

If you want to attach a container to the Open vSwitch bridge, no problem.

ovs-vsctl list-br
ovsbr0
pipework ovsbr0 $(docker run -d mysql /usr/sbin/mysqld_safe) 192.168.1.2/24

If the Open vSwitch bridge doesn't exist, it will be created automatically.

Support InfiniBand IPoIB

Passing an IPoIB interface to a container is supported. The IPoIB device is created as a virtual device, similar to how macvlan devices work. The interface also supports setting a partition key (pkey) for the created virtual device.

The following will attach a container to ib0

pipework ib0 $CONTAINERID 10.10.10.10/24

The following will do the same but connect it to ib0 with pkey 0x8001

pipework ib0 $CONTAINERID 10.10.10.10/24 @8001

Gratuitous ARP

If arping is installed, it will be used to send a gratuitous ARP reply to the container's neighbors. This can be useful if the container doesn't emit any network traffic at all, and seems unreachable (but suddenly becomes reachable after it generates some traffic).

Note, however, that Ubuntu/Debian distributions contain two different arping packages. The one you want is iputils-arping.

Cleanup

When a container is terminated (the last process of the net namespace exits), the network interfaces are garbage collected. The interface in the container is automatically destroyed, and the interface in the docker host (part of the bridge) is then destroyed as well.

Integrating pipework with other tools

@dreamcat4 has built an amazing fork of pipework that can be integrated with other tools in the Docker ecosystem, like Compose or Crane. It can be used "one shot", to create a bunch of network connections between containers; it can run in the background as a daemon, watching the Docker events API and automatically invoking pipework when containers are started; and it can also expose pipework itself through an API.

For more info, check the dreamcat4/pipework image on the Docker Hub.

About this file

This README file is currently the only documentation for pipework. When updating it (specifically, when adding/removing/moving sections), please update the table of contents. This can be done very easily by just running:

docker-compose up

This will build a container with doctoc and run it to regenerate the table of contents. That's it!

pipework's People

Contributors

alexamirante, better0332, cpuguy83, deitch, dineshs-altiscale, discordianfish, gashev, haggaie, hansode, hookenz, imadhsissou, jamesguthrie, jefferai, jimmcslim, johngmyers, jpetazzo, kojiromike, lukaspustina, mckelvin, milesrichardson, oldgregg, palfrey, peerxu, rafal-prasal, rakurai, spheresh, tomcsi, tsaikd, whiteanthrax, ymaclook

pipework's Issues

support for fedora

I use Fedora; there's no package for udhcpc, so I changed to dhclient, but it seems this tool does not work with Fedora.

/usr/local/sbin/pipework: line 177: 13867 Segmentation fault      ip link add link $IFNAME dev $GUEST_IFNAME type macvlan mode bridge

Using DHCP on CoreOS

pipework uses udhcpc for DHCP leases, but udhcpc does not seem to be available in CoreOS. Does anyone have experience with pipework on CoreOS and using DHCP?

Osx not working

It would be great if pipework could work on Mac.
I am getting this error:

./pipework: line 81: /proc/mounts: No such file or directory
Could not locate cgroup mount point.

I tried creating /proc/mounts but that leaves me with:

Could not locate cgroup mount point.

Is there any way pipework can work on OSX?

Access tun device inside docker container

Hi,

Thanks a lot for pipework, it is truly very useful! I have an unusual setup, in which I am running an OpenVPN client inside a docker container. This creates a tun device inside the container. I now want to bridge this tun interface with a bridge that exists on the host device, in order to transmit data out of this VPN connection. Is this possible?

Thanks!

DOCKERPID is always 0 when used with systemd

Hi, if I run this systemd unit, the DOCKERPID is always 0 and the IP setup fails.
If I run it manually, everything works as expected.

 [Unit]
Description = Jenkins Instance
After= network.service

[Service]
ExecStart=/bin/bash -c '/usr/bin/docker run --rm --net=none --name jenkins index.example.com:5000/jenkins-test:1 /run'
ExecStartPost=/bin/bash -c '/srv/tools/pipework eno1 -i eth0 jenkins 10.61.9.23/24@10.61.9.1'
ExecStop=/usr/bin/docker kill jenkins
Restart=always

[Install]
WantedBy=local.target

here is the journal output, i modified pipework to run with set -x

Jun 05 20:26:33 example.com bash[14517]: ++ docker inspect '--format={{ .State.Pid }}' jenkins
Jun 05 20:26:33 example.com docker[3295]: 2014/06/05 20:26:33 GET /v1.11/containers/jenkins/json
Jun 05 20:26:33 example.com docker[3295]: [df0243d7] +job inspect(jenkins, container)
Jun 05 20:26:33 example.com docker[3295]: [df0243d7] -job inspect(jenkins, container) = OK (0)
Jun 05 20:26:33 example.com bash[14517]: + DOCKERPID=0
Jun 05 20:26:33 example.com bash[14517]: + '[' 0 = '<no value>' ']'
Jun 05 20:26:33 example.com bash[14517]: + '[' 10.61.9.23/24@10.61.9.1 = dhcp ']'
Jun 05 20:26:33 example.com bash[14517]: + echo 10.61.9.23/24@10.61.9.1
Jun 05 20:26:33 example.com bash[14517]: + grep -q /
Jun 05 20:26:33 example.com bash[14517]: + echo 10.61.9.23/24@10.61.9.1
Jun 05 20:26:33 example.com bash[14517]: + grep -q @
Jun 05 20:26:33 example.com bash[14517]: ++ echo 10.61.9.23/24@10.61.9.1
Jun 05 20:26:33 example.com bash[14517]: ++ cut -d@ -f2
Jun 05 20:26:33 example.com bash[14517]: + GATEWAY=10.61.9.1
Jun 05 20:26:33 example.com bash[14517]: ++ echo 10.61.9.23/24@10.61.9.1
Jun 05 20:26:33 example.com bash[14517]: ++ cut -d@ -f1
Jun 05 20:26:33 example.com bash[14517]: + IPADDR=10.61.9.23/24
Jun 05 20:26:33 example.com bash[14517]: + '[' 0 ']'
Jun 05 20:26:33 example.com bash[14517]: + NSPID=0
Jun 05 20:26:33 example.com bash[14517]: + '[' '!' -d /var/run/netns ']'
Jun 05 20:26:33 example.com bash[14517]: + '[' -f /var/run/netns/0 ']'
Jun 05 20:26:33 example.com bash[14517]: + ln -s /proc/0/ns/net /var/run/netns/0
Jun 05 20:26:33 example.com bash[14517]: ln: failed to create symbolic link '/var/run/netns/0': File exists
Jun 05 20:26:33 example.com systemd[1]: jenkins.service: control process exited, code=exited status=1

ideas?

regards f0

ipv6 support

I'm not seeing this mentioned anywhere, but is IPv6 supported?

problem accessing first container added to internal bridge

After using pipework to set up a network between containers created with
docker, I was not able to access the first container to which an interface
was added with pipework.

My configuration is docker version 0.7.6 running on a Ubuntu 13.10 host.

After creating 3 containers, I ran the following commands on the Ubuntu host:
$ pipework br1 $CONTAINER_A 172.80.0.1/24
$ pipework br1 $CONTAINER_B 172.80.0.2/24
$ pipework br1 $CONTAINER_C 172.80.0.3/24
$ ip addr add 172.80.0.254/24 dev br1

Next, pings from the Ubuntu host to 172.80.0.2 and 172.80.0.3 were successful
while a ping to 172.80.0.1 failed.

I am FAR from an expert on Linux internals/networking, so what I did was add "set -x" to the script and see what was different about the commands run for each container. What I discovered was that a command like:
ip link set vethl20041eth1 master br1
was run for the 2nd and 3rd containers but not for the first.

I was able to fix the problem by adding the line "BRTYPE=linux" as shown in the
code fragment below. This may not be the general solution, but it seems to fix the problem for my configuration.

code fragment from pipework

=====================

# First step: determine type of first argument (bridge, physical interface...)

if [ -d /sys/class/net/$IFNAME ]
then
    ... deleted code ...
else
    case "$IFNAME" in
    br*)
        IFTYPE=bridge
        BRTYPE=linux          # FIX FOR MY CONFIG
        # ================================
        ;;
    *)
        echo "I do not know how to setup interface $IFNAME."
        exit 1
        ;;
    esac
fi

Docker + plex + pipework , stuck with connection issues

Hi,

First, thanks a lot for your work on pipework, it is great 👍
I wasn't sure how to contact you about an issue with pipework in my specific situation, so I ended up describing it in this GitHub issue.

I'm working on a docker + plex image, docker-plex. All the work on pipework integration is committed on a pipework branch in the git repository.
As plex needs to access my local network, I was trying to run my container with pipework to create an eth1 network interface with a private IP address on my local network.

However, I couldn't figure out how to make all the pieces communicate.

  1. First I started my container with pipework --wait to wait for eth1 to become available before launching plex.
  2. Then I followed the pipework readme file, and on the host server I launched:
    sudo pipework wlan0 $CONTAINERID 192.168.111.16/[email protected].

It exited successfully and the eth1 interface was set up with the given IP address.
I also checked that the default gateway was correctly set, which was the case.

However, I couldn't make the container communicate with the outside world or with other computers on the private network. It wasn't reachable from other servers either.
The only working network traffic was on the network set up by docker with the docker0 bridge (172.17.42.1 IP address on the host).

Can you help me make it all work?
Did I miss something here?

How to set coreos host to communicate via macvlan to talk to containers

Hello,
I realize this is probably more of a CoreOS issue, but I can't find anyone in that community who can answer this. My host machines running docker are CoreOS. How would I structure my unit files for the networking such that I can both provision those host interfaces with DHCP and allow them to communicate with the containers?

Thanks!
-Brett

/16 network?

How do I have the bridge on a /16 rather than a /24?

Help configuring Docker Containers with DHCP addresses.

I have a server (Ubuntu 12.04.4) with a single static IP address on my corporate network:
eth0 - a.b.c.d

I want to run multiple docker containers on this machine - I would like each of them to be assigned a DHCP address from the corporate DHCP server.

I am rather confused about how to go about this...
I thought I could start the container and then use pipework to do the following:
pipework eth0 <CONTAINER_ID> dhcp

However this gives me:
mkdir: cannot create directory `/var/run/netns': Permission denied

I started to look around at other options:
Should I replace eth0 on my server(host) with a Bridged Ethernet setup:
http://www.havetheknowhow.com/Configure-the-server/Network-Bridge.html

I could then tell docker to use a pre-created bridge and then try pipework.

I also looked at using lxc-conf:
docker run -i -t -lxc-conf='lxc.network.type=veth' -lxc-conf='lxc.network.link=eth0' -lxc-conf='lxc.network.name=eth0' -lxc-conf='lxc.network.ipv4=0.0.0.0' -lxc-conf='lxc.network.flags=up'

But that did not appear to work either.
Any help or advice much appreciated.

Dan

"name" too long after several pipework runs

Hi,

after running pipework several times, the generated interface name gets too long:

while true; do docker kill hubward; docker rm hubward; ID=`docker run -d -name hubward busybox sleep 30`; sudo ./pipework eth0 $ID 192.168.101.1/22; done
...
Error: argument "macvlan10088eth1" is wrong: "name" too long

(running ubuntu precise)

Does not work here. Ubuntu 13.10 on a VM

1- First I start container#1

  $ docker run -i -t test /bin/bash

2- then container#2

$ docker run -i -t test /bin/bash

**** Containers based on image: stackbrew/ubuntu:13.10

3- On host terminal run:

  $ pipework br1 7187a6f56000 192.168.1.1/24
  $ pipework br1 62bb18c85858 192.168.1.2/24

4- Interfaces & route setting inside containers:

container#1

root@62bb18c85858:/# ifconfig 
eth0      Link encap:Ethernet  HWaddr 86:46:72:d0:9f:38  
          inet addr:172.17.0.4  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::8446:72ff:fed0:9f38/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr a2:2d:fa:5d:61:d0  
          inet addr:192.168.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::a02d:faff:fe5d:61d0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@62bb18c85858:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0
192.168.1.0     *               255.255.255.0   U     0      0        0 eth1

container#2

root@7187a6f56000:/# ifconfig 
eth0      Link encap:Ethernet  HWaddr 42:a0:0c:94:ec:35  
          inet addr:172.17.0.5  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::40a0:cff:fe94:ec35/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:690 (690.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr e2:f9:e0:60:82:77  
          inet addr:192.168.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::e0f9:e0ff:fe60:8277/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@7187a6f56000:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0
192.168.1.0     *               255.255.255.0   U     0      0        0 eth1

host interfaces

root@UbuntuServer:~# ifconfig 
br1       Link encap:Ethernet  HWaddr ca:0c:bc:d3:91:9e  
          inet6 addr: fe80::4ce3:c9ff:fe70:6e47/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:536 (536.0 B)  TX bytes:648 (648.0 B)

docker0   Link encap:Ethernet  HWaddr fe:8b:22:36:61:39  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::a8a1:bff:feb5:56e1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:41 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2522 (2.5 KB)  TX bytes:1152 (1.1 KB)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:8c:fb:58  
          inet addr:192.168.0.32  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe8c:fb58/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1539 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1366 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:171156 (171.1 KB)  TX bytes:210873 (210.8 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:76 errors:0 dropped:0 overruns:0 frame:0
          TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5868 (5.8 KB)  TX bytes:5868 (5.8 KB)

lxcbr0    Link encap:Ethernet  HWaddr c2:83:ee:7f:5e:51  
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::c083:eeff:fe7f:5e51/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

veth623UAK Link encap:Ethernet  HWaddr fe:8b:22:36:61:39  
          inet6 addr: fe80::fc8b:22ff:fe36:6139/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:816 (816.0 B)  TX bytes:858 (858.0 B)

vethUR1YOL Link encap:Ethernet  HWaddr fe:e2:51:5d:37:28  
          inet6 addr: fe80::fce2:51ff:fe5d:3728/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:816 (816.0 B)  TX bytes:1506 (1.5 KB)

vethl2088 Link encap:Ethernet  HWaddr ca:0c:bc:d3:91:9e  
          inet6 addr: fe80::c80c:bcff:fed3:919e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

vethl2260 Link encap:Ethernet  HWaddr 72:a3:56:80:76:71  
          inet6 addr: fe80::70a3:56ff:fe80:7671/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

Then:

I can ping each container on subnet 172.17.0.0:

$  ping 172.17.0.4 
   PING 172.17.0.4 (172.17.0.4) 56(84) bytes of data.
   64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.258 ms
.
.

I cannot ping containers on subnet 192.168.1.0:

$ ping 192.168.1.2
   PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
   From 192.168.1.1 icmp_seq=1 Destination Host Unreachable
.
.

Help with configuring DHCP on pipework

Hi all,

I'm currently running docker 1.2 on CentOS 7, and have been trying (unsuccessfully) to assign an IP address to a container using DHCP and pipework.

The command i use to start the container is:

docker run --privileged=true -i -t -d --name <container_name> centos /bin/bash

The pipework command is:

pipework eth0 <container_name> dhcp

As soon as I enter this command, I get the response "I do not know how to setup interface eth0".

What am I doing wrong? I'm extremely new to docker, pipework, and network stuff in general, so please forgive my lack of knowledge.

pipework on CentOS 6.5

This came up on the Docker mailing list a while back; posting it here just for reference:

pipework doesn't seem to be able to map new Ethernet devices into containers on stock CentOS 6.5.

The error is as follows:

# pipework br0 <containerID> <IP>/24
Object "netns" is unknown, try "ip help".

My understanding is that the CentOS 6.5 kernel gained network-namespace functionality, but that the bundled "ip" command did not.

The Internets claim that upgrading to the OpenStack RDO version of the "iproute" package may resolve this issue. I haven't tested this personally.

I would expect the same problem to affect RHEL 6.5 users too?

This is a reasonably-common stock environment that works with Docker but not Pipework. Hopefully this issue will either lead to pipework working, or to people discovering and testing the OpenStack RDO workaround.

Verbose output

I found it useful to see what the script does as it does it; so, I added a --verbose flag (with shortcut -v). PR coming in a few seconds.

Containers can lose their DHCP lease when dhclient is used

As a result of 0f59dc4, dhclient is killed immediately after container start-up, which means that when the DHCP lease expires, the container will never renew it, opening up the IP for reassignment (and removing dynamically created DNS entries, if any). I'm not sure why only dhclient is being killed and the other DHCP clients aren't. Is it possible to not kill dhclient and instead solve the original contributor's problem some other way?

[Q] How does the creation of a veth pair for docker work

Hi,

thank you very much for this great script. I'm working on something quite similar but only focused on docker and openvswitch. For this scenario I need to create a variable number of veth pairs for a container.
Would you be so kind as to explain how the script gathers the relevant data from a docker container to create the veth pair? From what I understand, it's basically these 3 lines that create the pair:
LOCAL_IFNAME=pl$NSPID$CONTAINER_IFNAME
GUEST_IFNAME=pg$NSPID$CONTAINER_IFNAME
ip link add name $LOCAL_IFNAME type veth peer name $GUEST_IFNAME

I would be really thankful!

[edit]:
On a related matter: do you know whether it is possible to add a veth link directly between two containers?

DHCP and docker inspect

All,

I am trying to get multiple containers running on a single host.
I need to be able to SSH into each of these containers (as they are Jenkins slaves).
I have had some luck in starting a container and then using pipework to assign a DHCP address.
The problem is that the NetworkSettings (shown when doing a docker inspect) are not displaying the IPAddress that is being given to the container by Pipework and DHCP.

Instead I just see the IPAddress the container was started within.
Do I need to do something so that the container correctly reports its IPAddress via docker inspect or REST API?

I need the REST API to return the DHCP address as I use the REST API to query for containers and then SSH to them

Any help much appreciated
Dan

dhcpcd empties resolv.conf on the host

Thanks for creating this handy script! Unfortunately I have one issue preventing me from using it successfully.
I run Docker on a CoreOS cluster which has dhcpcd installed. Assigning an IP from DHCP to a docker container is successful the first time. However, I discovered that dhcpcd also writes to /etc/resolv.conf on the host machine. It basically empties the file, and leaves a comment stating that dhcpcd changed the file.

My understanding is that ip netns exec would allow the execution of a namespace unaware application in the namespace. Still it seems that dhcpcd is able to "break out" of the namespace and change the configuration of the host machine.

Do you have any pointers for me how to prevent this?

[ $DHCP_CLIENT = "dhcpcd" ] && ip netns exec $NSPID $DHCP_CLIENT -q $CONTAINER_IFNAME -h $GUESTNAME

README/Code discrepancy

Hi,

In the readme, as example code on binding to a physical interface, you have:

pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5

However, running something similar, I get:

# ./pipework eth1 $(docker run -d -t -i --name=test2 ubuntu:14.04 /bin/bash) 192.168.10.72
The IP address should include a netmask.
Maybe you meant 192.168.10.72/24 ?

Is one of these incorrect? Thanks!

When does this get obsoleted?

Fantastic script... using happily... but I'm wondering if/when we get similar functionality in core? Are there any issues or PRs I should be watching? Thanks!

Gratuitous arping fails and causes pipework to return 1

The set -e option causes pipework to return 1 after the gratuitous arping fails. My system fails to get an arping reply when running the following manually (copied from set -ex output of pipework):

ip netns exec 7703 arping -A -c 1 -I eth1 192.168.20.2

Tcpdump on the parent interface and remote hosts across a real switch shows the request, but no reply. When I run the arping on the host's parent device the following succeeds:

arping -c 1 -I vlan20 192.168.20.2

Anyone else seeing this?

For reference pipework invocation is as follows on Ubuntu 14.04:

pipework vlan20 <container> 192.168.20.2/24

Name too long with -i option

If we provide a -i option (container interface) that's longer than 8 characters, pipework throws the following error: Error: argument "peer" is wrong: "name" too long

This is because the name is longer than 15 characters, and ip link add name does not allow that.

Can we have another option that allows the caller to provide the entire interface name, so the caller has greater flexibility in naming the interfaces?

--wait blocks forever on physical interfaces

Hi,

when using pipework on a physical interface (i.e. not a bridge), pipework --wait blocks forever because /sys/class/net/eth1/operstate is always 'unknown'.

The interface is available and can be used just fine. Not sure why operstate is unknown though.

This happened on 3.11.0-14-generic / Ubuntu saucy.

Containers on different hosts

I was thinking that pipework might help me setup the required "bridging" for the following setup:

3 hosts (host 1 runs Apache httpd in a container, host 2 runs a JBoss server in a container, host 3 runs a JBoss server in a container)
Apache and JBoss is configured for clustering using mod_cluster (ajp protocol).
If I run all 3 containers on the same docker host it works but on separate hosts it does not work.
Could it be a multicast issue or is it more likely a pure routing issue?
Any ideas on how pipework might be used to solve this?

Containers cannot be connected without an iptables tweak

I'm trying to connect two containers with pipework. I run the containers with

sudo docker run --privileged -d --dns 127.0.0.1 -h my_host -t image:tag /usr/bin/wait_script

The /usr/bin/wait_script is on the container image and performs a pipework --wait and runs dnsmasq. The containers are set up so that the IPs are 192.168.100.1 and 192.168.100.2.

After starting the containers I do:
$$ sudo pipework br1 cid $IP/24
where the $IP is the container IP.

In the containers I see:
$$ ifconfig
eth0 Link encap:Ethernet HWaddr 8E:33:7E:B9:8B:FC
inet addr:172.17.0.111 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::8c33:7eff:feb9:8bfc/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:12 errors:0 dropped:2 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:948 (948.0 b) TX bytes:648 (648.0 b)

eth1 Link encap:Ethernet HWaddr EE:38:2B:A8:27:7F
inet addr:192.168.100.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::ec38:2bff:fea8:277f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:62 errors:0 dropped:0 overruns:0 frame:0
TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9066 (8.8 KiB) TX bytes:1950 (1.9 KiB)
$$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.127
192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.2

I don't know if the default route has something to do with it, but it probably doesn't, since when I change it, it doesn't work either.

In the host I get:
$$ ifconfig
br1 Link encap:Ethernet HWaddr 0a:e3:28:c1:ba:9f
inet6 addr: fe80::a8:7dff:fecc:4d10/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1860 (1.8 KB)

docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:307930 errors:0 dropped:0 overruns:0 frame:0
TX packets:335908 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16359857 (16.3 MB) TX bytes:889708726 (889.7 MB)

eth0 Link encap:Ethernet HWaddr 40:16:7e:64:d5:5d
inet addr:192.168.2.11 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::4216:7eff:fe64:d55d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2058289 errors:0 dropped:0 overruns:0 frame:0
TX packets:985402 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1812426206 (1.8 GB) TX bytes:179195358 (179.1 MB)

If you notice, br1 doesn't have an inet addr with an IPv4 address. The containers cannot communicate with each other through the pipework IPs (192.168.100.x), but they can through the default 172.17.42.x addresses, which are linked to the docker0 bridge.

My scripts are based on nicolasff docker-cassandra scripts: https://github.com/nicolasff/docker-cassandra/blob/master/start-cluster.sh

Can you give me some pointers on what's wrong?

Support default gateway

Adding support to override the default_gateway in the container.

Currently all traffic in the container is masqueraded to the bridge interface and in some use cases where there is a need to apply traffic shaping rules, it is a good idea to override the default gateway.

"pipework hostinterface guest ipaddr/subnet[@default_gateway] [macaddr]"

Issues with assignment to physical interface

I am having issues with pipework in regards to connecting containers to a physical interface.

In this example case, I have a VMware VM with two eth interfaces (eth0, eth1).

eth0      Link encap:Ethernet  HWaddr 00:0c:29:ec:d4:2b  
          inet addr:172.16.35.133  Bcast:172.16.35.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feec:d42b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2081 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1617 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:171216 (171.2 KB)  TX bytes:263210 (263.2 KB)

eth1      Link encap:Ethernet  HWaddr 00:0c:29:ec:d4:35  
          inet addr:192.168.102.212  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feec:d435/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:182600 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71522 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:273477200 (273.4 MB)  TX bytes:4328088 (4.3 MB)

If, for example, I do:

pipework eth1 94e1fe92ddac 192.168.102.50/24

The container gets the 192.168.102.50 IP and can ping itself at that address, but the host cannot ping the container at that address. Needless to say, the actual intended use doesn't work either: the container cannot be pinged from another host on 192.168.102.0/24.

I can provide more details or do further tests if this is inconclusive, but if there's already something obvious in what I have described above, I'd love to hear it.

Re-assigning IP to existing container kills host / Segfault

Environment
Ubuntu 14.04 LTS (GNU/Linux 3.13.0-30-generic x86_64)
Docker 1.1.1
Latest Pipework

Method to replicate issue:

Bring up a container with an ip using pipework:

ie. root@docker:~# pipework eth0 -i eth1 0b3 1.1.1.1/[email protected]

Then run the same command again after the container is up:

root@docker:~# pipework eth0 -i eth1 0b3 1.1.1.1/[email protected]
Segmentation fault

It segfaults the container, leaves docker in a disk-wait state, and hoses the physical host. The only way to get the host to do anything is to hard boot it. It will not respond to a restart inside of Linux.

help setting Docker DHCP+DNS for my network

I keep finding partial answers to this, so maybe someone here will know:
I am trying to set up a home network, with Ubuntu running Docker, and a container running DNSMASQ (or ISC DHCP + BIND9) to serve the network. The Docker host (Ubuntu) will be wired to the cable modem and to a wireless access point.
so,

  • the DNSMASQ container needs to be able to serve the real network
  • preferably, the Ubuntu host will get its IP from DNSMASQ after it boots and starts the container, but I can live with having a static IP on it.
  • DNSMASQ needs to be able to route outside.
  • another container will run IPtables and will serve as the gateway. its gateway will be the modem.
    wow, that sounds like a lot now that i type it. any other approach to getting this done is welcome too. i already thought of running a VM on libvirt to serve the same purpose.

bonus: have multiple networks based on VLANs, each with its own DHCP container.

wow.

final arping never returns 0 (for me at least)

I've not found a situation where this arping won't cause the script, run with -e, to catch the non-zero return code from arping and exit with a non-zero code (i.e. pipework always returns 1 for me even when it works):

ip netns exec $NSPID arping -c 1 -A -I $CONTAINER_IFNAME $IPADDR > /dev/null 2>&1

does it work differently / return 0 for you?

PPTP client in container

Hi @jpetazzo, I am having problem with PPTP client in docker container. Here is the full scenario moby/moby#8269 .

Until Docker fixes the issue (if it really is an issue), can you please show me the best method to use a PPTP client inside a container using pipework? Thank you so much!

Can this be used across VMs?

So if I have VM1 and VM2, where VM1 is hosting Apache and VM2 is hosting MySQL as containers: do I just need to add a route between the bridges? I am trying to figure out how to keep containers in their own "VLAN".

Does not work with libcontainer

So far the issue just seems to be that pipework can't find the cgroup the container is in via line 87: NN=$(find "$CGROUPMNT" -name "$DOCKERID" | wc -l)
And indeed the containers don't seem to have anything in there anymore.

This is about as far as I've gotten. Looked at some of the libcontainer code to figure this out but nothing really stood out to me.

Pipework and after reboot

After the host server reboots, the docker containers that were running previously are brought back up, but they come up without the pipework networking previously in place or the IPs assigned via pipework.

I would like to know of ways to keep the mapping [docker containers to pipework-assigned IPs] so that it can be restored after a reboot.

In certain scenarios, the set of docker containers would change [meaning some of the containers are retired and new ones added].

Thanks for any suggestions in this regard.

Solving RTNETLINK error

I'm getting this error trying to run pipework - it's not a pipework problem AFAICT, but do you have any ideas? An old CentOS kernel, or do I need a different version of iptables or something? TIA!

(grave_brattain is my container name):
sudo ./repos/pipework/pipework br1 grave_brattain 172.17.6.100/24
RTNETLINK answers: Operation not supported

$ uname -a
Linux blade-1-4a.dssunnyvale.lan 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

iptables --version
iptables v1.4.7

$ sudo docker version
Client version: 0.7.2
Go version (client): go1.1.2
Git commit (client): 28b162e/0.7.2
Server version: 0.7.2
Git commit (server): 28b162e/0.7.2
Go version (server): go1.1.2
Last stable version: 0.7.6, please update docker

Missing bridge-utils dependency

On a vanilla Ubuntu install with Docker 0.9, pipework produces the following error:

pipework: line 152: brctl: command not found

The brctl command is part of the bridge-utils package in Ubuntu, which was a dependency of LXC. With Docker 0.9 no longer depending on LXC, it is no longer installed.
I recommend putting a note in the README notifying Docker 0.9 users that Pipework requires bridge-utils.

pleth* and veth* interfaces should be ignored by NetworkManager

When using pipework to create a lot of virtual interfaces (~100), NetworkManager (on Ubuntu and Fedora) ends up just flipping out over all the new connections. I thought it was supposed to ignore the veth interfaces created by Docker, but it sure seems to be trying to connect on all the veth* and pleth* interfaces. Is there an easy way to have NetworkManager ignore these by wildcard? Is that out of scope for what the pipework script should do?

Add a way to specify attributes of eth1

Pipework works very well. The only catch (for me, anyway) is that there's no way to specify attributes of the generated eth1. For example, I want to use eth1 from OpenStack running in Docker. OpenStack requires that its "public" interface run in promiscuous mode (http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/compute-configuring-guest-network.html). Since Pipework is run after the container has started, I have no way of setting eth1 to promiscuous mode without logging into the container and doing it manually.

I got around this by adding:

ip netns exec $NSPID ip link set eth1 promisc on

to the end of the pipework script. This works fine, but wouldn't it be cool to be able to specify a set of attributes to apply to eth1 as it's created?

Using DHCP leads to multiple default routes, broken DNS resolution

When I set up a pipework interface to give my container an external DHCP IP on the same network as my host system, I end up with two default routes in the container. One is the docker-supplied default route over eth0. The other is the DHCP supplied default route over eth1. This breaks DNS for some reason (wireshark shows the requests go out and come back to the host, but never get into the container...)

Deleting the default route over eth0 allows DNS to work again and fixes the issue:

ip netns exec $NSPID route del 0.0.0.0 eth0

Perhaps pipework could check for a new default route on the pipework-managed interface, and delete the default route via eth0 if it finds a new one?
