An experiment implementing Open vSwitch in Docker environments, based on the following documentation:
http://containertutorials.com/network/ovs_docker.html
Build the openvswitch image:
cd myovs
./build.sh
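The contents of build.sh are not shown here; for this layout it would typically be a one-line docker build. A sketch under that assumption, using the image tag that run.sh below expects:

```shell
# Hypothetical build.sh: build the openvswitch image from the
# Dockerfile in myovs/ and tag it "myovs" (the name run.sh uses).
docker build -t myovs .
```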
Then run the run.sh script:
$ cat run.sh
docker run -it -v /proc:/proc -v /var/run/docker.sock:/var/run/docker.sock --rm --name vswitch --privileged -d myovs
docker run -d --name container1 --network none -it --rm dockersecplayground/alpine_networking
docker run -d --name container2 --network none -it --rm dockersecplayground/alpine_networking
./run.sh
In this way:
- an openvswitch container runs in privileged mode (it bind-mounts the host's /proc and the Docker socket /var/run/docker.sock)
- a container named container1 runs without a network
- a container named container2 runs without a network
Going inside one of the containers shows that no network interface is attached:
./go-in-container-1.sh
ifconfig
(no interfaces)
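The go-in-* helper scripts are not listed above; presumably they just wrap docker exec to open a shell in the named container. A sketch under that assumption:

```shell
# Hypothetical go-in-container-1.sh: open a shell in container1.
docker exec -it container1 sh

# Hypothetical go-in-switch.sh: open a shell in the vswitch container.
docker exec -it vswitch bash
```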
Go inside the openvswitch container:
./go-in-switch.sh
No bridges have been created yet:
ovs-vsctl list-br
Create an OVS bridge:
ovs-vsctl add-br ovs-br1
ifconfig ovs-br1 173.16.1.1 netmask 255.255.255.0 up
bash-4.4# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
ovs-br1 Link encap:Ethernet HWaddr 46:CF:28:F0:5E:4A
inet addr:173.16.1.1 Bcast:173.16.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Now connect the containers to the OVS bridge:
ovs-docker add-port ovs-br1 eth1 container1 --ipaddress=173.16.1.2/24
ovs-docker add-port ovs-br1 eth1 container2 --ipaddress=173.16.1.3/24
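The two addresses assigned above sit in the same /24 network as the bridge address 173.16.1.1, so the containers can reach each other through the bridge without any routing. A quick sanity check that the first three octets match:

```shell
# With a 255.255.255.0 (/24) mask, two hosts are on the same subnet
# only if the first three octets of their addresses are identical.
net1=$(echo 173.16.1.2 | cut -d. -f1-3)
net2=$(echo 173.16.1.3 | cut -d. -f1-3)
if [ "$net1" = "$net2" ]; then
    echo "same /24 subnet"
fi
```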
Try the connection between the containers:
./go-in-container-1.sh
ping 173.16.1.3
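To tear the experiment down, the ovs-docker wrapper also provides a del-port subcommand, and the bridge itself can be removed with ovs-vsctl. A sketch, run inside the switch container:

```shell
# Detach each container's eth1 from the bridge, then delete the bridge.
ovs-docker del-port ovs-br1 eth1 container1
ovs-docker del-port ovs-br1 eth1 container2
ovs-vsctl del-br ovs-br1
```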