
containernet's Introduction

Containernet

Containernet is a fork of the famous Mininet network emulator that allows Docker containers to be used as hosts in emulated network topologies. This enables interesting functionality for building networking/cloud emulators and testbeds. Containernet is actively used by the research community, focusing on experiments in the fields of cloud computing, fog computing, network function virtualization (NFV) and multi-access edge computing (MEC). One example is the NFV multi-PoP infrastructure emulator that was created by the SONATA-NFV project and is now part of the OpenSource MANO (OSM) project.

Features

  • Add and remove Docker containers in Mininet topologies
  • Connect Docker containers to topology (to switches, other containers, or legacy Mininet hosts)
  • Execute commands inside containers by using the Mininet CLI
  • Dynamic topology changes
    • Add hosts/containers to a running Mininet topology
    • Connect hosts/Docker containers to a running Mininet topology
    • Remove hosts, Docker containers, and links from a running Mininet topology
  • Resource limitation of Docker containers
    • CPU limitation with Docker CPU share option
    • CPU limitation with Docker CFS period/quota options
    • Memory/swap limitation
    • Change CPU/mem limitations at runtime!
  • Expose container ports and set environment variables of containers through Python API
  • Traffic control links (delay, bw, loss, jitter)
  • Automated installation based on Ansible playbook

Installation

Containernet comes with two installation and deployment options.

Option 1: Bare-metal installation

This option is the most flexible. Your machine should run Ubuntu 20.04 LTS and Python 3.

First install Ansible:

sudo apt-get install ansible

Then clone the repository:

git clone https://github.com/containernet/containernet.git

Finally run the Ansible playbook to install required dependencies:

sudo ansible-playbook -i "localhost," -c local containernet/ansible/install.yml

After the installation finishes, you should be able to get started.

Option 2: Nested Docker deployment

Containernet can be executed within a privileged Docker container (nested container deployment). There is also a pre-built Docker image available on Docker Hub.

Attention: Container resource limitations, e.g. CPU share limits, are not supported in the nested container deployment. Use bare-metal installations if you need those features.

You can build the container locally:

docker build -t containernet/containernet .

or alternatively pull the latest pre-built container:

docker pull containernet/containernet

You can then directly start the default containernet example:

docker run --name containernet -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock containernet/containernet

or run an interactive container and drop to the shell:

docker run --name containernet -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock containernet/containernet /bin/bash

Get started

Using Containernet is very similar to using Mininet.

Running a basic example

Make sure you are in the containernet directory. You can start an example topology with some empty Docker containers connected to the network:

sudo python3 examples/containernet_example.py

After launching the emulated network, you can interact with the involved containers through Mininet's interactive CLI. You can for example:

  • use containernet> d1 ifconfig to see the config of container d1
  • use containernet> d1 ping -c4 d2 to ping between containers

You can exit the CLI using containernet> exit.

Running a client-server example

Let's simulate a webserver and a client making requests. For that, we need a server and client image. First, change into the containernet/examples/basic_webserver directory.

Containernet already provides a simple Python server for testing purposes. To build the server image, just run

docker build -f Dockerfile.server -t test_server:latest .

If you have not added your user to the docker group as described here, you will need to prepend sudo.
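
For reference, adding your user to the docker group is typically done with the standard Docker command below (you need to log out and back in for it to take effect):

sudo usermod -aG docker $USER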

We further need a basic client to make a curl request. Containernet provides that as well. Please run

docker build -f Dockerfile.client -t test_client:latest .

Now that we have a server and client image, we can create hosts using them. You can either check out the topology script demo.py first or run it directly:

sudo python3 demo.py

If everything worked, you should be able to see the following output:

Execute: client.cmd("time curl 10.0.0.251")
Hello world.
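
For orientation, a minimal topology along these lines could look like the sketch below. It reuses API calls shown elsewhere in this README and the image names built above; it is illustrative and may differ from the bundled demo.py.

from mininet.net import Containernet
from mininet.node import Controller
from mininet.log import info, setLogLevel

setLogLevel('info')

net = Containernet(controller=Controller)
net.addController('c0')
# use the images built in the previous steps
server = net.addDocker('server', ip='10.0.0.251', dimage="test_server:latest")
client = net.addDocker('client', ip='10.0.0.252', dimage="test_client:latest")
s1 = net.addSwitch('s1')
net.addLink(server, s1)
net.addLink(client, s1)
net.start()
info(client.cmd("time curl 10.0.0.251"))  # should print "Hello world."
net.stop()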

Customizing topologies

You can also add hosts with resource restrictions or mounted volumes:

# ... (inside a Containernet topology script; the `Docker` class used below comes from mininet.node)

d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty")
# CFS period/quota: d2 may use at most 25000/50000 = 50% of one CPU
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty", cpu_period=50000, cpu_quota=25000)
# equivalent to addDocker, using the generic addHost API with the Docker host class
d3 = net.addHost('d3', ip='11.0.0.253', cls=Docker, dimage="ubuntu:trusty", cpu_shares=20)
# mount the host's root filesystem read-write at /mnt/vol1 inside the container
d4 = net.addDocker('d4', dimage="ubuntu:trusty", volumes=["/:/mnt/vol1:rw"])

# ...
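
Resource limits can also be adjusted while the emulation is running. The snippet below is only a rough sketch: the helper names updateCpuLimit/updateMemoryLimit and their parameters are assumptions based on the wiki documentation.

# Sketch: adjust limits of a running container at runtime
# (method names/parameters are assumptions, see the Containernet wiki)
d2.updateCpuLimit(cpu_quota=30000, cpu_period=50000)  # allow up to 60% of one CPU
d2.updateMemoryLimit(mem_limit=268435456)             # ~256 MB memory limit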

Documentation

Containernet's documentation can be found in the GitHub wiki. The documentation for the underlying Mininet project can be found on the Mininet website.

Research

Containernet has been used for a variety of research tasks and networking projects. If you use Containernet, let us know!

Cite this work

If you use Containernet for your work, please cite the following publication:

M. Peuster, H. Karl, and S. v. Rossem: MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. (2016)

Bibtex:

@inproceedings{peuster2016medicine,
    author={M. Peuster and H. Karl and S. van Rossem},
    booktitle={2016 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)},
    title={MeDICINE: Rapid prototyping of production-ready network services in multi-PoP environments},
    year={2016},
    volume={},
    number={},
    pages={148-153},
    doi={10.1109/NFV-SDN.2016.7919490},
    month={Nov}
}

Publications

Other projects and links

There is an extension of Containernet called vim-emu, which is a full-featured multi-PoP emulation platform for NFV scenarios. Vim-emu was developed as part of the SONATA-NFV project and is now hosted by the OpenSource MANO project.

For running Mininet or Containernet distributed in a cluster, check out MaxiNet.

You can also find an alternative/teaching-focused approach for Container-based Network Emulation by TU Dresden in their repository.

Contact

Support

If you have any questions, please use GitHub's issue system.

Contribute

Your contributions are very welcome! Please fork the GitHub repository and create a pull request.

Please make sure to test your code using

sudo make test

Lead developer

Manuel Peuster

containernet's People

Contributors

ablu, adferguson, backb1, bocon13, caustt, cdburkard, cgeoffroy, digiou, electricalboy, ggee, greenscreen23, jonaswre, jufil, lantz, mpeuster, mrcdb, pantuza, pichuang, rafaelsche, richartkeil, rlane, setchring, souvikdas95, ssikdar1, stauchert, stevenvanrossem, vitalivanov, yeasy, zlorb, zlorber


containernet's Issues

Problem with using default ENTRYPOINTS

From gitter:

I've got a problem with Containernet not starting the container CMD. Building the container with the docker CLI works fine. This is a recent problem. On an old VM (old build) it was working fine.
 
The problem container is the "quagga" one
r1 = net.addDocker('r1', ip='10.0.17.1/24', dimage="nbuonoslab/quagga:1.0")

Manuel Peuster @mpeuster Apr 19 13:44
when did you pull the old working version?
I think you have to manually call r1.start() after the addDocker() command to make it work again
see this commit containernet/containernet@83b5250

nnikodimov @nnikodimov Apr 19 15:35
was 3-4 months ago
adding r1.start() after net.start() gives an error
Error during image inspection of nbuonoslab/quagga:1.0:r1: running CMD: [u'/usr/lib/quagga/start.sh', '> /dev/pts/0 2>&1', '&']

nnikodimov @nnikodimov Apr 19 17:30
running this (/usr/lib/quagga/start.sh > /dev/pts/0 2>&1 &) on a started container works

nnikodimov @nnikodimov Apr 19 19:05
any idea how to fix the r1.start()?

nnikodimov @nnikodimov Apr 21 06:58
my docker image is using ENTRYPOINT actually, does it make any difference to how I should call r1.start()?
r1.start()
ubuntu@nbuonos:~$ sudo docker inspect -f '{{.Config.Entrypoint}}' nbuonoslab/quagga:1.0
[/usr/lib/quagga/start.sh]

Manuel Peuster @mpeuster Apr 22 22:47
Maybe just use r1.cmd(....) to call the start.sh directly, after net.start() is done

nnikodimov @nnikodimov Apr 24 20:48
it doesn't work with r1.cmd() either...anyway I went back to an old node.py version of Containernet. Seems like the change you did could be very container build specific...

Manuel Peuster @mpeuster Apr 24 20:55
Ok, interesting. Anyway, thanks for letting me know. I am quite busy these days, but if this comes up with more containers, I will spend some time investigating it further. May I ask you to quickly create an issue, so we do not forget? https://github.com/containernet/containernet/issues

"switch <switch name> start" command doesn't work

If I first use the command "switch <switch name> stop", the subsequent command "switch <switch name> start" does not work and outputs the following message:

(s1 exited - ignoring cmd('ovs-vsctl', '-- --id=@s1c0 create Controller 
target=\\"tcp:127.0.0.1:6653\\" max_backoff=1000 -- --id=@s1-listen create Controller 
target=\\"ptcp:6634\\" max_backoff=1000 -- --if-exists del-br s1 -- add-br s1 -- set bridge s1 
controller=[@s1c0,@s1-listen] other_config:datapath-id=0000000000000001 fail_mode=secure 
other-config:disable-in-band=true -- add-port s1 s1-eth1 -- set Interface s1-eth1 
ofport_request=1 -- add-port s1 s1-eth2 -- set Interface s1-eth2 ofport_request=2'))

After reading the node.py and cli.py files, I found a workaround, which is to insert the following code into OVSSwitch.start:

self.startShell()
self.mountPrivateDirs()

The code above is from Node.__init__
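
A hedged sketch of how that workaround might be wired in, assuming Mininet's OVSSwitch/Node APIs (the exact guard and placement may need adjustment for your version):

from mininet.node import OVSSwitch

class RestartableOVSSwitch(OVSSwitch):
    """OVSSwitch variant that re-creates its shell if it was stopped via the CLI."""
    def start(self, controllers):
        if self.shell is None:      # shell was torn down by "switch s1 stop"
            self.startShell()       # workaround from the issue above
            self.mountPrivateDirs()
        return OVSSwitch.start(self, controllers)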

ImportError: cannot import name Containernet

Hi. I'm trying to implement containernet_example.py. I installed Containernet according to option 1. Now when I want to run containernet_example.py. I face this error:

sudo python containernet_example.py
Traceback (most recent call last):
  File "containernet_example.py", line 5, in <module>
    from mininet.net import Containernet
ImportError: cannot import name Containernet

What's wrong?

containernet_example.py cannot connect to remote controller

Hi, I'm running this example -> containernet/examples/containernet_example.py
In this case the OpenFlow switches get connected to the default SDN controller which is added in the script as net.addController(c0). But, I wish to get the switches connected to a remote custom SDN controller like Ryu. In mininet it's quite simple, the following command is used to specify the remote controller while launching the custom topology:
sudo mn --custom <path_to_custom_topology> --topo <topo_name> --controller=remote --switch ovs,protocols=OpenFlow13
The above command tells mininet to look for the controller on the default address (127.0.0.1) and port
(6633).
I have Ryu installed in the same VM as containernet. For my experiment, I first run a simple switch Ryu app in a separate terminal. Then I launch containernet_example.py in another terminal. Because I wanted to get rid of the default controller, I commented out the net.addController(c0) line in containernet_example.py and looked for an appropriate flag/argument which I could pass while executing it from the CLI, in addition to the usual sudo python <topo.py>. I couldn't find any such flag/argument. As an alternative, I ran containernet_example.py as a custom mininet topology using the --custom option mentioned earlier (see top), but with no luck.

Then I referred to the controllers.py example, but even there the script does not seem to look for a remote controller to connect to; it rather creates a new one via net.addController().
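
For what it's worth, a remote controller can usually be attached from the topology script itself using Mininet's standard RemoteController class, roughly as sketched below (the address and port are just the defaults mentioned above):

from mininet.net import Containernet
from mininet.node import RemoteController

net = Containernet()
# point the emulated switches at an external controller, e.g. a Ryu instance
c0 = net.addController('c0', controller=RemoteController, ip='127.0.0.1', port=6633)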

SIGINT causes containernet CLI to hang when invoking Docker Host commands from CLI

System: Ubuntu 16.04
Docker: Latest

I tried a lot of ways to fix it, but it always fails. It seems that even when SIGINT is not actually sent to the node's STDIN (I tried blocking it with an empty override of Node.sendInt), the polling still halts in CLI.waitForNode. Forcing node.waiting to False helps in breaking out of the current command, but invoking it again hangs. I doubt SIGINT is even reaching the container's shell. Upon reattaching to the Docker container from outside containernet, I noticed that a ping command still existed in the process list and was active under these conditions. I don't know what's preventing SIGINT from reaching the container shell...

Under normal hosts, this doesn't happen. SIGINT causes proper breakdown of CLI.waitForNode.

Please investigate. Thanks.

For instance,

containernet> d1 ping d2 -c 30
Press CTRL + C

Pass arguments to containers?

Do you have any recommendations for passing environment variables to containers through containernet?

I'm trying to run the equivalent of
docker run -w /opt -itd -e CONTROLLER_IP=$CONTROLLER_IP --name test test/test

Being able to point to a container that's already running with the required parameters would also solve the problem.
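
The README's feature list mentions setting environment variables and exposing ports through the Python API. A sketch of what that typically looks like is shown below; the keyword names (environment, ports, port_bindings) follow docker-py's conventions and should be double-checked against the Containernet wiki:

# Sketch: pass environment variables and port mappings when adding the container
# (keyword names are assumptions based on the docker-py API)
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="test/test",
                   environment={"CONTROLLER_IP": "10.0.0.100"},
                   ports=[8080], port_bindings={8080: 8080})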

docker rename does not work: "The container name "/mn.d8" is already in use by container"

root@cool-delight-2:~/containernet/examples# python containernet_example1.py
*** Adding controller
*** Adding docker containers
d8: kwargs {'ip': '10.0.1.252'}
Traceback (most recent call last):
File "containernet_example1.py", line 16, in
d8 = net.addDocker('d8', ip='10.0.1.252', dimage="containernet_example:ubuntu1804")
File "build/bdist.linux-x86_64/egg/mininet/net.py", line 1058, in addDocker
File "build/bdist.linux-x86_64/egg/mininet/net.py", line 242, in addHost
File "build/bdist.linux-x86_64/egg/mininet/node.py", line 822, in init
File "/usr/local/lib/python2.7/dist-packages/docker/api/container.py", line 441, in create_container
return self.create_container_from_config(config, name)
File "/usr/local/lib/python2.7/dist-packages/docker/api/container.py", line 452, in create_container_from_config
return self._result(res, True)
File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 216, in _result
self._raise_for_status(response)
File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 212, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/local/lib/python2.7/dist-packages/docker/errors.py", line 30, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error: Conflict for url: http+docker://localunixsocket/v1.24/containers/create?name=mn.d8 ("Conflict. The container name "/mn.d8" is already in use by container "da38c8afcd5fc4ac71817893c55fcfe8904cb37c76edcf8dded31170c77243f1". You have to remove (or rename) that container to be able to reuse that name.")
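
As a side note, the conflicting container left over from a previous run can normally be removed with the standard Docker CLI (or by running Containernet's cleanup) before starting the script again:

sudo docker rm -f mn.d8
sudo mn -c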

Import Error: No module named docker

Hi,
I am planning to make use of containernet for my master thesis. I will use containernet to run Docker containers, each running a specific application per host. However, I am not able to succeed in the installation process.
Both commands below complain that docker is not installed.

  1. sudo ansible-playbook -i "localhost," -c local install.yml
  2. sudo python setup.py install.

I have already tried to install docker using pip install docker command. I see it as an already open issue. Please let me know if anyone has found a work around for this.

:~/containernet$ sudo python setup.py install
Traceback (most recent call last):
File "setup.py", line 11, in
from mininet.net import CONTAINERNET_VERSION
File "/home/basavaraj/containernet/mininet/net.py", line 103, in
from mininet.node import ( Node, Docker, Host, OVSKernelSwitch,
File "/home/basavaraj/containernet/mininet/node.py", line 60, in
import docker
ImportError: No module named docker

Keeping the default `CMD` or `ENTRYPOINT` field of a Docker image is not possible

self.dcmd = dcmd if dcmd is not None else "/bin/bash"

sets the command to bash which basically overrides any CMD field from the Dockerfile. It is also not possible to use ENTRYPOINT since something like:

ENTRYPOINT ["/usr/sbin/sshd", "-D"]

will lead to:

        "Path": "/usr/sbin/sshd",
        "Args": [
            "-D",
            "/bin/bash"
        ],

thus: /usr/sbin/sshd -D /bin/bash which breaks the ENTRYPOINT command.

A simple fix would be to simply drop the forced default of /bin/bash (I could contribute a fix for this); however, I am not sure whether that would break other use cases.

What would be the best way to resolve this?

bash: docker: command not found

Steps to reproduce issue:

  1. Install containernet on Linux PC (Ubuntu 16.04).
  2. Create a network topology with containers connected to switches, like the one in containernet_example.py. The switches connect to a remote controller.
  3. Launch the topology and open a terminal for one of the containers by typing "xterm d1" on the containernet console.
  4. In the new terminal window, type docker
    OUTPUT:
    bash: docker: command not found

Telnet required for iperf tests

I've been testing the bandwidth limiting functionality of links between containers, unfortunately the net.iperf((d1,d2)) utility requires telnet to exist on both containers.

This isn't mentioned anywhere in the documentation here and isn't in the test for a compatible docker image.

I was using this code to run the test:

from mininet.net import Containernet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel
setLogLevel('info')

net = Containernet(controller=Controller)
info('*** Adding controller\n')
net.addController('c0')
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty")
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty")
info('*** Adding switches\n')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
info('*** Creating links\n')
net.addLink(d1, s1)
net.addLink(s1, s2, cls=TCLink, delay='100ms', bw=1)
net.addLink(s2, d2)
info('*** Starting network\n')
net.start()
info('*** Testing connectivity\n')
net.ping([d1, d2])
net.iperf((d1,d2), port=23)

info('*** Stopping network')
net.stop()

This can be fixed by installing the iperf, telnet and telnetd packages for ubuntu specifically:

FROM ubuntu:xenial

RUN apt update && apt install -y \
        net-tools \
        iputils-ping \
        iproute2 \
        telnet telnetd \
        iperf

However for this you need to also start the openbsd-inetd service for each container.

Taken together the following version of the test works:

from mininet.net import Containernet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel
setLogLevel('info')

net = Containernet(controller=Controller)
info('*** Adding controller\n')
net.addController('c0')
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="cjen1/iperf")
d1.cmd("service openbsd-inetd start")
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="cjen1/iperf")
d2.cmd("service openbsd-inetd start")
info('*** Adding switches\n')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
info('*** Creating links\n')
net.addLink(d1, s1)
net.addLink(s1, s2, cls=TCLink, delay='100ms', bw=1)
net.addLink(s2, d2)
info('*** Starting network\n')
net.start()
info('*** Testing connectivity\n')
net.ping([d1, d2])
net.iperf((d1,d2), port=23)

info('*** Stopping network')
net.stop()

Probably the best thing to do would be to either try and upstream a fix to iperf that doesn't require telnet, or alternatively change the documentation.

Sometimes when shutting down dockernet just freezes

It just happens when using a lot of Docker hosts; I can't find a common factor that leads to this. Sometimes it happens and sometimes it does not.

*** Stopping 25 links
..

^CTraceback (most recent call last):
  File "./topology.py", line 140, in <module>
    topoCreator(sys.argv[1], sys.argv[2], sys.argv[3])
  File "./topology.py", line 129, in topoCreator
    net.stop()
  File "build/bdist.linux-x86_64/egg/mininet/net.py", line 583, in stop
  File "build/bdist.linux-x86_64/egg/mininet/link.py", line 482, in stop
  File "build/bdist.linux-x86_64/egg/mininet/link.py", line 477, in delete
  File "build/bdist.linux-x86_64/egg/mininet/link.py", line 200, in delete
  File "build/bdist.linux-x86_64/egg/mininet/link.py", line 64, in cmd
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 357, in cmd
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 344, in waitOutput
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 776, in monitor
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 220, in read

Add LXD support

Containernet supports Docker already.
It would be great if we would also support LXD as another container solution.

Would translate to a new host class and a couple of new wrapper methods, like:

class LxdHost(Host):
    pass

addLxd(...):
    pass

# and so on

Should not be too complicated to implement. But certainly needs some effort.

Docker not routing as 'ip route get' command shows

Hi all,

I have an environment of 5 routers, 4 of them have 2 interfaces and are connected to:

  1. the fifth router,
  2. one client.

The topology can be summed up as follows, with r5 actually having 3 other interfaces:


client                          router1                        router5
+--------+                      +--------+                     +--------+
|        |                      |        |10.1.1.2    10.1.1.1 |        |
|        |10.0.1.100  10.0.1.101| r1-eth2+----------s2---------+r5-eth1 |
|        +-----------s1---------+r1-eth1 |                     |        |
+--------+                      +--------+                     +--------+

/proc/sys/net/ipv4/ip_forward is set to 1 in both router1 (r1) and router2 (r5).
No iptables set neither in r1 nor r5.
The client is a host, r1 and r5 are Docker containers, s1 and s2 are switches.

The problem is that I cannot ping the client (or any other client) from r5. I can though from r1. My packets are stuck in r1. I cannot see them while looking at r1-eth1 with tcpdump, I can only see ARP requests on r1-eth2.

On r5 I have the following:

root@r5:/# ip route get to  10.0.1.100
10.0.1.100 dev r5-eth1 src 10.1.1.1 
    cache 

On r1 I can see the following:

root@r1:/# ip route get to 10.0.1.100 from 10.1.1.1 iif r1-eth2
10.0.1.100 from 10.1.1.1 dev r1-eth1 
    cache  iif r1-eth2

Do you have any idea why my ping from r5 to the client is not passing through r1?

`CMD` compatibility enhancements

Currently the container seems to keep running even though the RUN command failed.
Additionally, the output performed by the command seems to get lost, rendering docker logs useless.

The install.sh can not work properly

Environment:

  • System: Ubuntu Server 16.04.3 LTS
  • Node: Hyper-V VM in Windows 10 1703
  • Containernet: Latest

Issue

When I was trying to execute ansible-playbook -i "localhost," -c local install.yml as noted in the Installation section of the README.md, git complained about this:

Cloning into 'openflow'...
error: inflate: data stream error (incorrect data check)
fatal: pack has bad object at offset 454242: inflate returned -3
fatal: index-pack failed", "stdout": "Detected Linux distribution: Ubuntu 16.04 xenial amd64

causing the build of the component to fail.

This command does not seem to function properly:

git clone git://github.com/mininet/openflow

When using git clone git://github.com/mininet/openflow.git instead, it worked well.

Feature : multiple containers run as a single host

Hi,
I have been investigating the integration of Mininet and Docker containers for broader emulation use cases, such as fog computing, which is my research topic. I am preparing a proposal and would like to ask whether containernet is willing to add a new feature, and to gather any comments.

Background

The current approach of containernet is to optionally replace an original Mininet network namespace/host with a Docker container acting as a host. For simple NFV orchestration this may be sufficient, but more advanced NFV scenarios may need a "real host" that is capable of running multiple VNF instances. Also, some researchers/users may want to deploy multiple applications running in a single host. In the current architecture, this is still complex to do.

As the original Mininet network namespaces are replaced with Docker containers, some of the commands supported by Mininet may not be equivalently supported if the pulled images lack a specific executable binary, such as ping. Also, because of the shared file system, keeping original Mininet hosts is still convenient for evaluating agent-based mechanisms.

So the main target is running multiple Docker containers alongside, and possibly without replacing, the original Mininet host.

As the distributed emulation of MaxiNet shows, I think containernet has the potential for a more comprehensive emulation thanks to the increased performance.

Objective

  1. To support multiple Docker containers "running" in a single host without interfering with normal usage.
  2. To clarify the scenario of Docker containers plus an original host, we can name it an "abstraction node" here. We should have an additional mechanism for the CPU and memory limitation of Docker containers running in the "abstraction nodes".
  3. Also, the original network namespace, which is represented as a host, should be restricted within the abstraction node as well.

Abstraction node

In the abstraction node scenario, the specific host along with its multiple containers is seen as a single identity.

  1. Users can set the resource limitation for a specific abstraction node.
  2. Users can set the resource limitation for a specific container running in the abstraction node.
  3. Users can choose not to set a resource limitation for a specific container running in the abstraction node, but the maximum resource usage of the containers running in that abstraction node will not exceed the limitation set for it.

The Docker containers in an abstraction node may have specific IP addresses or be reached using port forwarding.

Extends Cgroups and Linux Traffic Control

Cgroups have a hierarchical structure, and we can easily mount them with parent-child relations using simple file operations. The main implementation concept is that a cgroup mounted under a specific parent cgroup "inherits" the settings from that parent cgroup.

Conveniently, Docker already supports the --cgroup-parent flag, so we can easily have the running container mounted under the specific host cgroup we want.

sudo docker run -it --cgroup-parent=/h1 ubuntu /bin/bash
  • But how to implement the internal restriction of each Docker container should still be discussed.

Also, Linux Traffic Control should optionally be limited within the abstraction node concept (maybe simply implemented via the Mininet API).
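
For illustration, the same --cgroup-parent idea expressed through docker-py might look roughly like the sketch below; the cgroup path /h1 and the use of the low-level APIClient are assumptions:

import docker

# low-level docker-py client; cgroup_parent is part of the container's HostConfig
client = docker.APIClient()
host_config = client.create_host_config(cgroup_parent="/h1")
container = client.create_container(image="ubuntu", command="/bin/bash",
                                     tty=True, host_config=host_config)
client.start(container)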

Containers networks in/across abstraction nodes

There are multiple solutions for bridging additional containers with a Mininet host; I briefly discuss three scenarios below.

1. Sharing network stacks with docker container

We can simply run additional Docker containers that share the host Docker container's network stack. This is the simplest way, and the multiple containers sharing a network can use simple port forwarding to reach each other's services.

sudo docker run -it --network container:<container> ubuntu /bin/bash

This makes localhost communication efficient and simplifies multi-host container networking.

E.g., if we want to access an application running in an additional container attached to a specific host:

mininet$ h1 curl -qa h2:<specific-port>

2. Sharing network stacks with original network namespace

Similar to the scenario above, but the implementation may require changes to the Docker source code, since Docker currently doesn't support attaching to a plain network namespace for sharing network stacks.

3. Additional docker0 bridge / openvswitch bridges with original network

In this scenario, the network topology is similar to the Docker default environment on a normal host. We need to patch veths and bridges (Linux bridge or Open vSwitch), write the in-namespace NAT rules, and may need to optionally solve multi-host container networking due to the private container network in each host.

But performance may be degraded due to the additional bridge.

Prototype Implementation

I have implemented a not-production-ready prototype with cgroup-parent mounting and container network scenario 3 as described above.

https://github.com/tz70s/docker-mn

Thanks for reviewing, I'm glad to have any comments.

`sudo mn -c` does not work?

Does anyone encounter similar issues as the following one?

When running the example, i.e., containernet_example.py, I got the following error, since I forgot to install the controller.

*** Adding controller
*** Adding docker containers
d1: kwargs {'ip': '10.0.0.251'}
d1: update resources {'cpu_quota': -1}
d2: kwargs {'ip': '10.0.0.252'}
d2: update resources {'cpu_quota': -1}
*** Adding switches
*** Creating links
(1.00Mbit 100ms delay) (1.00Mbit 100ms delay) (1.00Mbit 100ms delay) (1.00Mbit 100ms delay) *** Starting network
*** Configuring hosts
d1 d2 
*** Starting controller
c0 Cannot find required executable controller.
Please make sure that it is installed and available in your $PATH:
(/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin)

So I ran sudo mn -c to clean up; however, it failed to work, and I got the following error:

Traceback (most recent call last):
  File "/usr/local/bin/mn", line 4, in <module>
    __import__('pkg_resources').run_script('mininet==2.0', 'mn')
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 658, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1445, in run_script
    exec(script_code, namespace, namespace)
  File "/usr/local/lib/python2.7/dist-packages/mininet-2.0-py2.7.egg/EGG-INFO/scripts/mn", line 23, in <module>
    
  File "build/bdist.linux-x86_64/egg/mininet/clean.py", line 18, in <module>
  File "/usr/lib/python2.7/dist-packages/iptc/__init__.py", line 10, in <module>
    from iptc.ip4tc import (is_table_available, Table, Chain, Rule, Match, Target,
  File "/usr/lib/python2.7/dist-packages/iptc/ip4tc.py", line 13, in <module>
    from .xtables import (XT_INV_PROTO, NFPROTO_IPV4, XTablesError, xtables,
  File "/usr/lib/python2.7/dist-packages/iptc/xtables.py", line 819, in <module>
    _throw = _lib_xtwrapper.throw_exception
AttributeError: 'NoneType' object has no attribute 'throw_exception'

Default interface has multiple IPs assigned.

Issue description

When adding link(s) to a host and setting the IP of the created interface, the default IP is still added to the default interface when starting the network.

Code to reproduce the issue

h1 = net.addHost('h1')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
link = net.addLink(h1, s1)
link.intf1.setIP('10.10.10.1', '24')
link = net.addLink(h1, s2)
link.intf1.setIP('10.10.10.2', '24')

What's the expected result?

  • Address of 10.10.10.1/24 on interface 0
  • Address of 10.10.10.2/24 on interface 1

What's the actual result?

  • Addresses of 10.0.0.1/8 and 10.10.10.1/24 on interface 0
  • Address of 10.10.10.2/24 on interface 1

Additional details

Setting the IP when adding the host fixes this:

h1 = net.addHost('h1', ip='10.10.10.1/24')


Does Containernet support a cluster environment?

Does Containernet support a cluster environment? If yes, how does Containernet manage Mininet/Docker containers? Like Mininet Cluster (using SSH connections between cluster nodes to manage Mininet elements), like MaxiNet (using Pyro4 to manage Mininet elements), or like Docker Swarm (managing all Docker containers and Mininet elements)?

Got problem while running in Debian

Hello,
I want to use containernet in a Debian system.
The docker and Mininet work as normal.
However, when I run "sudo python examples/containernet_example.py", it says

*** Adding controller
*** Adding docker containers
d1: kwargs {'ip': '10.0.0.251'}
d1: update resources {'cpu_quota': -1}
Traceback (most recent call last):
  File "examples/containernet_example.py", line 16, in <module>
    d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty")
  File "build/bdist.linux-x86_64/egg/mininet/net.py", line 1058, in addDocker
  File "build/bdist.linux-x86_64/egg/mininet/net.py", line 242, in addHost
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 837, in __init__
  File "build/bdist.linux-x86_64/egg/mininet/node.py", line 1110, in update_resources
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py", line 35, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py", line 21, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/docker/api/container.py", line 1185, in update_container
    return self._result(res, True)
  File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 216, in _result
    self._raise_for_status(response)
  File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 212, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/docker/errors.py", line 30, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error for url: http+docker://localunixsocket/v1.24/containers/b6069cc1fd3c4c01d0377de16193e86f1a52a7b7ca8a01ac8d032b6194ea22cd/update ("Cannot update container b6069cc1fd3c4c01d0377de16193e86f1a52a7b7ca8a01ac8d032b6194ea22cd: docker-runc did not terminate sucessfully: failed to write -1 to cpu.cfs_quota_us: open /sys/fs/cgroup/cpu,cpuacct/docker/b6069cc1fd3c4c01d0377de16193e86f1a52a7b7ca8a01ac8d032b6194ea22cd/cpu.cfs_quota_us: permission denied : unknown")
I have already used sudo to run the program, so why does it say permission denied?
Thank you

Packet loss on default links at larger numbers of containers

I am using the nested docker development environment.

I am using the following script to generate large simple networks and test them.

from mininet.net import Containernet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel
setLogLevel('info')

net = Containernet(controller=Controller)
info('*** Adding controller\n')
net.addController('c0')

info('*** Adding switches\n')
s1 = net.addSwitch('s1')
#s2 = net.addSwitch('s2')

info('*** Adding docker containers and adding links\n')
n=50
dockers = [net.addDocker('d' + str(i+1), ip='10.0.0.'+str(i+1), dimage='ubuntu:trusty') for i in range(n)]
for d in dockers:
    net.addLink(d,s1)

info('*** Starting network\n')
net.start()

info('*** Testing connectivity\n')
net.pingAll()

info('*** Stopping network')
net.stop()

When testing with large n (~100), no nodes can ping any other nodes. For smaller numbers of nodes the failure is more gradual (with 45 nodes, only 2 fail to connect). The setup has no connectivity failures with 30 or fewer nodes.

This issue has persisted over restarts as well.

This is occurring on the nested docker which has version 2.3.0d5 of mininet (mn --version).

Thanks for any help that can be offered.

containernet installation error- SyntaxError: invalid syntax\nmake

TASK [built and install Containernet (using Mininet installer)] ****************

fatal: [localhost]: FAILED! => {"changed": true, "cmd": "containernet/util/install.sh", "delta": "0:00:06.775768", "end": "2019-06-14 22:07:36.368097", "failed": true, "rc": 2, "start": "2019-06-14 22:07:29.592329", "stderr": "Traceback (most recent call last):\n File "setup.py", line 11, in \n from mininet.net import CONTAINERNET_VERSION\n File "/home/sauryadeep/containernet/mininet/net.py", line 103, in \n from mininet.node import ( Node, Docker, LibvirtHost, Host, OVSKernelSwitch,\n File "/home/sauryadeep/containernet/mininet/node.py", line 809\n <<<<<<< HEAD\n ^\nSyntaxError: invalid syntax\nmake: *** [install] Error 1", "stdout": "Detected Linux distribution: Ubuntu 16.04 xenial amd64\nUbuntu\nInstalling all packages except for -eix (doxypy, ivs, nox-classic)...\nInstall Mininet-compatible kernel if necessary\nHit:1 http://in.archive.ubuntu.com/ubuntu xenial InRelease\nGet:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]\nHit:3 https://download.docker.com/linux/ubuntu xenial InRelease\nGet:4 http://in.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]\nHit:5 http://in.archive.ubuntu.com/ubuntu xenial-backports InRelease\nFetched 218 kB in 1s (115 kB/s)\nReading package lists...\nReading package lists...\nBuilding dependency tree...\nReading state information...\nlinux-image-4.15.0-45-generic is already the newest version (4.15.0-45.4816.04.1)."Installing Mininet core", "/containernet ~", "install mnexec /usr/bin", "install mn.1 mnexec.1 /usr/share/man/man1", "python setup.py install", "Makefile:50: recipe for target 'install' failed"], "warnings": []}

Creating containers on the swarm using mininet.

Hi,

I want to scale the network by creating a swarm. Currently the containers get deployed only on the host machine. Is it possible to deploy the containers over a swarm, as a service, while being part of the Mininet network?

Thanks

Issue with running containernet_example.py

Hello ,
I used the second option to install the current version of containernet; however, when I tried to run containernet_example.py I got the following message:

Traceback (most recent call last):
File "containernet_example.py", line 5, in
from mininet.net import Containernet
ImportError: cannot import name Containernet

==============================
I will appreciate any suggestion to solve this problem, as I need to run Mininet hosts as separate Docker containers.
Kind regards,
Ali

Missing configuration for extra network interfaces in CentOS-based Docker containers

Hello,

I am working on a porting of Containernet for CentOS hosts, as documented by #59.

The basic unit tests all pass on ubuntu:trusty based Docker containers, but they fail on centos:centos7 instances. The Docker containers are indeed instantiated, but they are missing the network configuration on the extra interfaces defined by Containernet. I have tried to attach to one of the containers, and the extra interface is DOWN. The Docker default bridged network works well, though.

Did you experiment with containers based on other distros than Ubuntu? Any suggestions?

mininet interrupted unexpectedly

I created a simple net with two Docker hosts and ran ping between them. It exited automatically after a while.

64 bytes from 10.0.0.173: icmp_seq=3747 ttl=64 time=24.8 ms
64 bytes from 10.0.0.173: icmp_seq=3748 ttl=64 time=24.7 ms
Traceback (most recent call last):
  File "./create-net.py", line 82, in <module>
    
  File "./create-net.py", line 45, in main
    net.stop()
  File "/home/ubuntu/trungth/bgp-rabbitmq/containernet/mininet/cli.py", line 71, in __init__
    self.run()
  File "/home/ubuntu/trungth/bgp-rabbitmq/containernet/mininet/cli.py", line 104, in run
    self.cmdloop()
  File "/usr/lib/python2.7/cmd.py", line 142, in cmdloop
    stop = self.onecmd(line)
  File "/usr/lib/python2.7/cmd.py", line 220, in onecmd
    return self.default(line)
  File "/home/ubuntu/trungth/bgp-rabbitmq/containernet/mininet/cli.py", line 423, in default
    self.waitForNode( node )
  File "/home/ubuntu/trungth/bgp-rabbitmq/containernet/mininet/cli.py", line 441, in waitForNode
    bothPoller.poll()
select.error: (4, 'Interrupted system call')

Cannot specify private registry with port

In node.py, the splitting of the dimage assumes that if there is a ":" in the name, it must specify the tag of the image. In my case I use a private registry with a non-default port, which needs to be specified in the dimage as well, but this breaks the splitting.

I propose replacing

repo, tag = imagename.split(":")

with

# If there are two ':', the first specifies a port; otherwise it must be a tag
slices = imagename.split(":")
repo = ":".join(slices[0:-1])
tag = slices[-1]

It has the downside of forcing you to specify the version in case you also want to specify the port, but it is better than breaking.
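
For context, the kind of image reference that breaks the current parsing (and that the proposed change would support) is a registry host with a port; the registry name below is purely illustrative:

d1 = net.addDocker('d1', dimage="registry.example.com:5000/ubuntu:trusty")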

sudo mn -c

Hello,

I am using containernet for my master thesis. Before installing containernet, I already had Mininet 2.5 installed on my device. After installing containernet, none of the mn commands work for me. Whenever I use mn -c to clean up the topology, an error is thrown as below.

Traceback (most recent call last):
File "/usr/local/bin/mn", line 4, in
import('pkg_resources').run_script('mininet==2.0', 'mn')
File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line 719, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line 1511, in run_script
exec(script_code, namespace, namespace)
File "/usr/local/lib/python2.7/dist-packages/mininet-2.0-py2.7.egg/EGG-INFO/scripts/mn", line 23, in

File "build/bdist.linux-x86_64/egg/mininet/clean.py", line 18, in
File "build/bdist.linux-x86_64/egg/iptc/init.py", line 10, in
File "build/bdist.linux-x86_64/egg/iptc/ip4tc.py", line 13, in
File "build/bdist.linux-x86_64/egg/iptc/xtables.py", line 830, in
AttributeError: 'NoneType' object has no attribute 'throw_exception'

Issue with container CLI

I launched a topology with containers connected to switches. In order to install an application on one of the containers, I first get access to its CLI by executing 'xterm d1' on the containernet console. However, I'm unable to install any applications through the apt-get install command. The container does have internet connectivity, as it can ping google.com. Moreover, when I type 'docker' on the command line, I get
-> bash: docker: command not found

python containernet_container_test.py error: AttributeError: 'Client' object has no attribute 'api'

(py27) ss@ss-VirtualBox:~/containernet/examples$ sudo python containernet_container_test.py
*** Adding controller
*** Adding docker containers
Traceback (most recent call last):
File "containernet_container_test.py", line 19, in
d1 = net.addDocker('d1', dimage="ubuntu:trusty")
File "build/bdist.linux-x86_64/egg/mininet/net.py", line 1058, in addDocker
File "build/bdist.linux-x86_64/egg/mininet/net.py", line 242, in addHost
File "build/bdist.linux-x86_64/egg/mininet/node.py", line 775, in init
AttributeError: 'Client' object has no attribute 'api'

SendCmd

Could you provide an example of how to use SendCmd, or how to send commands directly to a particular Docker container?
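
In standard Mininet (and hence on Containernet host objects), sendCmd starts a command asynchronously and waitOutput collects its output later. A minimal sketch, assuming d1 is a node created with addDocker:

# start the command without blocking
d1.sendCmd('ping -c 4 10.0.0.252')
# ... do other work ...
output = d1.waitOutput()  # block until the command finishes and return its output
print(output)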

Need a different cmd() for Docker hosts

The default Mininet way of executing commands in hosts involves changing the prompt to a sentinel value (chr(127)) and monitoring the output until this sentinel is encountered.
This is not a very Docker-compatible approach and creates issues (e.g. #1). It seems that the way standard Mininet monitors the stdout of the hosts is not thread safe; I have seen it block (on the poll() or read() used later on) in:

def monitor( self, timeoutms=None, findPid=True ):

Therefore I changed it to use the docker-py function to execute shell commands in Docker containers, but this was overridden by: 9a18adc

@souvikdas95 is there any special reason why you opted to go back to the Mininet default way? What gets broken if we use the docker-py exec_start() instead of the Mininet default cmd()?

I would like to use the docker-py supported way, it seems a more elegant and stable solution to monitor the containers' output, but maybe I do not see the consequences...
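
For reference, the docker-py exec calls discussed here (exec_create/exec_start) are used roughly as sketched below; the container name mn.d1 follows Containernet's mn.<name> naming convention and is an assumption:

import docker

client = docker.APIClient()
# create an exec instance inside the (already running) container and run it
exec_id = client.exec_create('mn.d1', 'ifconfig')
output = client.exec_start(exec_id)
print(output)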

Containernet with NAT

Hi guys, I want to access the outside world (could be internet or my physical LAN) from the containers created using Containernet. In fact, I'm able to access the Internet/LAN from them, yet it is done through the Docker interface, which is not useful in my case. What I want is to reach them using Containernet deployment, something similar to :
Container -----> Switch -----> Internet/LAN

And here, of course, I'm able to control the link (between the container and the switch) through Containernet (using TCLink). Could I use NAT here, and if so, how can I do it?

Thanks.
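
A possible direction, assuming Containernet behaves like upstream Mininet here, is Mininet's built-in NAT node, roughly as in the sketch below (untested with Docker hosts):

# attach a NAT node to the topology and install default routes on the hosts
net.addNAT().configDefault()
net.start()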

CentOS container build broken

Hi @mrcdb

the container build of the CentOS container seems to be broken. See: https://travis-ci.org/containernet/containernet/builds/365030580#L2645


TASK [install basic packages] **************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Task' object has no attribute 'async_val'
fatal: [localhost]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
	to retry, use: --limit @/containernet/ansible/install_centos.retry
PLAY RECAP *********************************************************************
localhost                  : ok=4    changed=3    unreachable=0    failed=1   
The command '/bin/sh -c ansible-playbook -i "localhost," -c local --skip-tags "notindocker" install_centos.yml' returned a non-zero code: 2
The command "docker build -t containernet/containernet:centos7 -f Dockerfile.centos ." exited with 2.

Any idea how to fix this?
