
lagopus's Introduction

Lagopus Software Switch

Lagopus software switch is yet another OpenFlow 1.3 software switch implementation. Lagopus is designed to leverage multi-core CPUs for high-performance packet processing and forwarding with DPDK. Many network protocol formats are supported, such as Ethernet, VLAN, QinQ, MAC-in-MAC, MPLS, and PBB. In addition, tunnel protocol processing is supported for overlay-type networking with GRE, VxLAN, and GTP.

How to use Lagopus vswitch

Supported hardware

Lagopus can run on Intel x86 servers and virtual machines.

  • CPU
    • Intel Xeon, Core, Atom
  • NIC
  • Memory: 2GB or more

Supported distributions

  • Linux
    • Ubuntu 16.04, Ubuntu 18.04
    • CentOS 7
  • FreeBSD 10
  • NetBSD

Support

The official Lagopus site is https://lagopus.github.io/.

Development

Your contributions are very welcome; submit patches via GitHub pull requests. If you find a bug, let us know via GitHub issues.

License

All of the code is freely available under the Apache 2.0 license.

lagopus's People

Contributors

cglewis, cl4u2, falcon8823, hibitomo, hidetai, hirokazutakahashi, junkich, justinvreeland, sako1847, susami


lagopus's Issues

Port Modification & OFPPC_NO_PACKET_IN ?

Please look at the lines marked "<==Attention" below.

7.3.4.3 Port Modification Message
The controller uses the OFPT_PORT_MOD message to modify the behavior of the port:
/* Modify behavior of the physical port. */   <==Attention
struct ofp_port_mod {
    struct ofp_header header;
    uint32_t port_no;
    uint8_t pad[4];
    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
                                      configurable. This is used to
                                      sanity-check the request, so it must
                                      be the same as returned in an
                                      ofp_port struct. */
    uint8_t pad2[2];               /* Pad to 64 bits. */
    uint32_t config;               /* Bitmap of OFPPC_* flags. */
    uint32_t mask;                 /* Bitmap of OFPPC_* flags to be changed. */
    uint32_t advertise;            /* Bitmap of OFPPF_*. Zero all bits to prevent
                                      any action taking place. */
    uint8_t pad3[4];               /* Pad to 64 bits. */
};
OFP_ASSERT(sizeof(struct ofp_port_mod) == 40);

7.2.1 Port Structures
/* Flags to indicate behavior of the physical port. These flags are   <==Attention
 * used in ofp_port to describe the current configuration. They are
 * used in the ofp_port_mod message to configure the port's behavior. */
enum ofp_port_config {
    OFPPC_PORT_DOWN = 1 << 0,    /* Port is administratively down. */
    OFPPC_NO_RECV = 1 << 2,      /* Drop all packets received by port. */
    OFPPC_NO_FWD = 1 << 5,       /* Drop packets forwarded to port. */
    OFPPC_NO_PACKET_IN = 1 << 6  /* Do not send packet-in msgs for port. */
};
..
The OFPPC_NO_PACKET_IN bit indicates that packets on that port that generate a table
miss should never trigger a packet-in message to the controller.
...

We think that the OFP Port Modification message configures a physical port. If OFPPC_NO_PACKET_IN is set on that physical port, packets arriving on the port that generate a table miss should never trigger a packet-in message. If this is true, the implementation in Lagopus 0.1.1 is not correct. We just want to make sure whether we are right or not.

Memory leak when merge Write actions

The following patch fixes the problem:

 +++ b/src/dataplane/ofproto/datapath.c
@@ -1276,6 +1276,8 @@ execute_action_set(struct lagopus_packet *pkt, struct action_list *actions) {
       break;
     }
   }
+  clear_action_set(pkt);
+  pkt->flags &= (uint32_t)~PKT_FLAG_HAS_ACTION;
   return rv;
 }

Cannot process VLAN packet in Lagopus with DPDK on Virtualbox 5.1

Hi,
I have a problem processing VLAN packets with Lagopus in the following environment.

  • Software Version
    Lagopus: commit id 7097920
    Virtualbox: 5.1.0r108711
    Host OS: Ubuntu 14.04
    Guest OS: Ubuntu 16.04
  • Network configuration
The connection between the Lagopus VM and the other VMs:
    VM1 - <br0> - Lagopus-VM - <br1> - VM2

br0 and br1 are bridge interfaces on the host OS.
Lagopus's port on br0 is port 1, and its port on br1 is port 2.

I set up Lagopus as described in Lagopus book Chapter 10, and carried out the
following tests.

test 1

This test only shows a case where Lagopus works correctly, for comparison with test 2.

VM1 has interface eth1 with IP address of 192.168.60.10.
VM2 has VLAN interface eth1.10 with IP address of 192.168.60.11.
An ARP entry for the opposite VM is statically set up with the arp -s command.

After ping from the VM1 to 192.168.60.11, tcpdump on VM1 shows the following logs:

06:12:37.489328 08:00:27:81:e4:23 (oui Unknown) > 08:00:27:e5:35:f9 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 10, p 0, ethertype IPv4, 192.168.60.10 > 192.168.60.11: ICMP echo request, id 6651, seq 19, length 64

The output of show flow is as follows:

Lagosh> show flow
[
    {
        "name": "bridge01", 
        "tables": [
            {
                "table": 0, 
                "flows": [
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281478268061280, 
                        "dl_type": "arp", 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281476760496845, 
                        "dl_type": 35138, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281475396296919, 
                        "dl_type": 35020, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 30, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 28991922934407109, 
                        "in_port": 1, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "push_vlan": 33024
                                    }, 
                                    {
                                        "vlan_vid": 4106
                                    }, 
                                    {
                                        "output": 2
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 5, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281479020282374, 
                        "dl_type": "arp", 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }
                ]
            }
        ], 
        "is-enabled": true
    }
]

test 2

This test shows a case where Lagopus does not work correctly.

VM1 and VM2 each have a VLAN interface (eth1.10), with IP addresses 192.168.60.10 and 192.168.60.11 respectively.
After a ping from VM1 to VM2, tcpdump on VM1 and br1 shows no output.

  • show flow
Lagosh> show flow
[
    {
        "name": "bridge01", 
        "tables": [
            {
                "table": 0, 
                "flows": [
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281478268061280, 
                        "dl_type": "arp", 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281476760496845, 
                        "dl_type": 35138, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 40000, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281475396296919, 
                        "dl_type": 35020, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 30, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 28991922934407109, 
                        "in_port": 1, 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": 2
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }, 
                    {
                        "priority": 5, 
                        "idle_timeout": 0, 
                        "hard_timeout": 0, 
                        "send_flow_rem": null, 
                        "cookie": 281479020282374, 
                        "dl_type": "arp", 
                        "actions": [
                            {
                                "apply_actions": [
                                    {
                                        "output": "controller"
                                    }
                                ]
                            }
                        ], 
                        "stats": {
                            "packet_count": 0, 
                            "byte_count": 0
                        }
                    }
                ]
            }
        ], 
        "is-enabled": true
    }
]

The result of show interface after the ping is as follows. All counters are zero before the ping.

Lagosh> show interface
[
    {
        "name": "interface01", 
        "rx-packets": 31, 
        "rx-bytes": 3286, 
        "rx-dropped": 0, 
        "rx-errors": 0, 
        "tx-packets": 0, 
        "tx-bytes": 0, 
        "tx-dropped": 0, 
        "tx-errors": 0, 
        "is-enabled": true
    }, 
    {
        "name": "interface02", 
        "rx-packets": 0, 
        "rx-bytes": 0, 
        "rx-dropped": 0, 
        "rx-errors": 0, 
        "tx-packets": 31, 
        "tx-bytes": 3162, 
        "tx-dropped": 0, 
        "tx-errors": 0, 
        "is-enabled": true
    }
]

In addition, the result of show port is as follows. I think it is strange that the port state is always link-down, but I'm not sure whether it is related to the above problem.

Lagosh> show port
[
    {
        "name": "port01", 
        "config": [], 
        "state": [
            "link-down"
        ], 
        "curr-features": [
            "other"
        ], 
        "supported-features": [], 
        "is-enabled": true
    }, 
    {
        "name": "port02", 
        "config": [], 
        "state": [
            "link-down"
        ], 
        "curr-features": [
            "other"
        ], 
        "supported-features": [], 
        "is-enabled": true
    }
]

Regards,

ofp barrier reply error

I used "Lagopus 0.1.1" and "Ryu" to test the OFP barrier request. The test environment is described below:
Ryu sends a barrier request to Lagopus every second. We expect Lagopus to send one barrier reply per barrier request, but only the first barrier reply is sent; Lagopus does not send the subsequent barrier replies.

OFP_VERSIONS missing bug?

Hi,
I'm not sure if this is a bug, but when I use Ryu without specifying the following value in the Ryu application code (e.g. as in simple_switch_13.py):
OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

the switch does not work properly. In detail, if I dump the flows with lagosh, I can see that rules are installed correctly but they don't match any packet.

If I specify OFP_VERSIONS inside Ryu, everything works fine.
I'm not sure how Ryu works internally, but I think this is a problem in Lagopus, because the same code works fine when executed against OVS.

How should I add code for new actions?

I want to add new actions such as OFPAT_PUSH_GTPU and OFPAT_POP_GTPU. Which files in the project should I add code to?

I have added code in the following places:

  • ofp_action.c: in switch (tlv.type), add case OFPAT_PUSH_GTPU and case OFPAT_POP_GTPU
  • openflow13.h: in enum ofp_action_type, add OFPAT_PUSH_GTPU = 28 and OFPAT_POP_GTPU = 29
  • openflow13packet.c: in ofp_action_type_str(enum ofp_action_type val), add case OFPAT_PUSH_GTPU and case OFPAT_POP_GTPU
  • group.c: in get_group_features, add (1 << OFPAT_PUSH_GTPU) and (1 << OFPAT_POP_GTPU)
  • flowdb.c: in the static int action_type[] array, add OFPAT_PUSH_GTPU and OFPAT_POP_GTPU; in flow_dump / dump_actions, add case OFPAT_PUSH_GTPU and case OFPAT_POP_GTPU
  • flowdb_show.c: in show_action, add case OFPAT_PUSH_GTPU and case OFPAT_POP_GTPU

Please help to develop new actions :)

lagopus long run test and lagopus crash

I have done some performance tests.
I installed 6 flows into the Lagopus software switch via the Ryu controller.
The flows are as follows:

  1. priority=65535, in_port=1, output=2
  2. priority=65535, in_port=2, output=1
  3. priority=65535, in_port=3, output=4
  4. priority=65535, in_port=4, output=3
  5. priority=65535, in_port=5, output=6
  6. priority=65535, in_port=6, output=5

Then I sent 420000 pps at each port.
During an overnight test, Lagopus crashed.

The error message from syslog:
[ 24.819854] eth4: no IPv6 routers present
[ 25.602598] eth5: no IPv6 routers present
[ 26.688945] eth1: no IPv6 routers present
[ 26.928578] eth3: no IPv6 routers present
[ 26.928587] eth2: no IPv6 routers present
[ 27.311958] eth0: no IPv6 routers present
[ 30.778522] eth6: no IPv6 routers present
[ 76.734722] igb: eth2 NIC Link is Down
[ 78.995184] igb: eth3 NIC Link is Down
[ 82.106314] igb: eth1 NIC Link is Down
[ 84.179127] igb: eth0 NIC Link is Down
[ 85.972543] igb: eth4 NIC Link is Down
[ 87.266405] igb: eth5 NIC Link is Down
[ 94.035609] igb 0000:00:14.0: setting latency timer to 64
[ 94.035818] igb 0000:00:14.0: irq 43 for MSI/MSI-X
[ 94.035827] igb 0000:00:14.0: irq 44 for MSI/MSI-X
[ 94.035837] igb 0000:00:14.0: irq 45 for MSI/MSI-X
[ 94.035845] igb 0000:00:14.0: irq 46 for MSI/MSI-X
[ 94.035853] igb 0000:00:14.0: irq 47 for MSI/MSI-X
[ 94.035861] igb 0000:00:14.0: irq 48 for MSI/MSI-X
[ 94.035869] igb 0000:00:14.0: irq 49 for MSI/MSI-X
[ 94.035877] igb 0000:00:14.0: irq 50 for MSI/MSI-X
[ 94.035885] igb 0000:00:14.0: irq 51 for MSI/MSI-X
[ 94.466975] igb 0000:00:14.1: setting latency timer to 64
[ 94.467061] igb 0000:00:14.1: irq 52 for MSI/MSI-X
[ 94.467071] igb 0000:00:14.1: irq 53 for MSI/MSI-X
[ 94.467079] igb 0000:00:14.1: irq 54 for MSI/MSI-X
[ 94.467087] igb 0000:00:14.1: irq 55 for MSI/MSI-X
[ 94.467095] igb 0000:00:14.1: irq 56 for MSI/MSI-X
[ 94.467103] igb 0000:00:14.1: irq 57 for MSI/MSI-X
[ 94.467112] igb 0000:00:14.1: irq 58 for MSI/MSI-X
[ 94.467119] igb 0000:00:14.1: irq 59 for MSI/MSI-X
[ 94.467127] igb 0000:00:14.1: irq 60 for MSI/MSI-X
[ 94.898278] igb 0000:00:14.2: setting latency timer to 64
[ 94.898369] igb 0000:00:14.2: irq 61 for MSI/MSI-X
[ 94.898379] igb 0000:00:14.2: irq 62 for MSI/MSI-X
[ 94.898389] igb 0000:00:14.2: irq 63 for MSI/MSI-X
[ 94.898397] igb 0000:00:14.2: irq 64 for MSI/MSI-X
[ 94.898406] igb 0000:00:14.2: irq 65 for MSI/MSI-X
[ 94.898414] igb 0000:00:14.2: irq 66 for MSI/MSI-X
[ 94.898422] igb 0000:00:14.2: irq 67 for MSI/MSI-X
[ 94.898430] igb 0000:00:14.2: irq 68 for MSI/MSI-X
[ 94.898439] igb 0000:00:14.2: irq 69 for MSI/MSI-X
[ 95.329652] igb 0000:00:14.3: setting latency timer to 64
[ 95.329746] igb 0000:00:14.3: irq 70 for MSI/MSI-X
[ 95.329755] igb 0000:00:14.3: irq 71 for MSI/MSI-X
[ 95.329764] igb 0000:00:14.3: irq 72 for MSI/MSI-X
[ 95.329772] igb 0000:00:14.3: irq 73 for MSI/MSI-X
[ 95.329781] igb 0000:00:14.3: irq 74 for MSI/MSI-X
[ 95.329789] igb 0000:00:14.3: irq 75 for MSI/MSI-X
[ 95.329797] igb 0000:00:14.3: irq 76 for MSI/MSI-X
[ 95.329805] igb 0000:00:14.3: irq 77 for MSI/MSI-X
[ 95.329814] igb 0000:00:14.3: irq 78 for MSI/MSI-X
[ 95.777156] igb 0000:02:00.0: setting latency timer to 64
[ 95.777315] igb 0000:02:00.0: irq 79 for MSI/MSI-X
[ 95.777334] igb 0000:02:00.0: irq 80 for MSI/MSI-X
[ 95.777351] igb 0000:02:00.0: irq 81 for MSI/MSI-X
[ 95.777367] igb 0000:02:00.0: irq 82 for MSI/MSI-X
[ 95.777383] igb 0000:02:00.0: irq 83 for MSI/MSI-X
[ 96.663699] igb 0000:03:00.0: setting latency timer to 64
[ 96.663861] igb 0000:03:00.0: irq 86 for MSI/MSI-X
[ 96.663881] igb 0000:03:00.0: irq 87 for MSI/MSI-X
[ 96.663898] igb 0000:03:00.0: irq 88 for MSI/MSI-X
[ 96.663915] igb 0000:03:00.0: irq 89 for MSI/MSI-X
[ 96.663931] igb 0000:03:00.0: irq 90 for MSI/MSI-X
[ 156.550497] Use MSIX interrupt by default
[ 157.980021] KNI: ######## DPDK kni module loading ########
[ 157.980156] KNI: loopback disabled
[ 157.980159] KNI: ######## DPDK kni module loaded ########
[ 172.736692] igb_uio 0000:00:14.0: setting latency timer to 64
[ 172.736876] igb_uio 0000:00:14.0: irq 43 for MSI/MSI-X
[ 172.737081] uio device registered with irq 2b
[ 178.064345] igb_uio 0000:00:14.1: setting latency timer to 64
[ 178.064556] igb_uio 0000:00:14.1: irq 44 for MSI/MSI-X
[ 178.064745] uio device registered with irq 2c
[ 183.675867] igb_uio 0000:00:14.2: setting latency timer to 64
[ 183.676169] igb_uio 0000:00:14.2: irq 45 for MSI/MSI-X
[ 183.676457] uio device registered with irq 2d
[ 189.314814] igb_uio 0000:00:14.3: setting latency timer to 64
[ 189.315030] igb_uio 0000:00:14.3: irq 46 for MSI/MSI-X
[ 189.315218] uio device registered with irq 2e
[ 195.589132] igb_uio 0000:02:00.0: setting latency timer to 64
[ 195.589564] igb_uio 0000:02:00.0: irq 47 for MSI/MSI-X
[ 195.589793] uio device registered with irq 2f
[ 200.545612] igb_uio 0000:03:00.0: setting latency timer to 64
[ 200.546136] igb_uio 0000:03:00.0: irq 48 for MSI/MSI-X
[ 200.546327] uio device registered with irq 30
[22453.681909] worker_0[8801]: segfault at 0 ip 00007fa186c815b5 sp 00007fa101befba0 error 4 in liblagopus_dataplane.so.0.0.0[7fa186c6f000+f3000]

Thx.

lagopus 0.2 IPv4 forward performance issue

Dear all,
I'm running Lagopus 0.2 with two Intel ixgbe NICs for testing.
I tried both the 0.1.1 and 0.2 versions.
With version 0.1.1 I can get ~10 Gbps forwarding (packet size = 1500) with specific CPU cores assigned.
With version 0.2 I used the same settings, but I only get ~400 Mbps. :(

Did I miss anything?

BR,
mark

My environment is as follows:

/// install DPDK-2.0.0 ///

mark@Dell-T110:~/lagopus$ ./install-dpdk.sh
[sudo] password for mark:

Network devices using DPDK-compatible driver

0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=

Network devices using kernel driver

0000:02:00.0 '82571EB Gigabit Ethernet Controller' if=p1p1 drv=e1000e unused=igb_uio
0000:02:00.1 '82571EB Gigabit Ethernet Controller' if=p1p2 drv=e1000e unused=igb_uio
0000:03:00.0 '82571EB Gigabit Ethernet Controller' if=p3p1 drv=e1000e unused=igb_uio
0000:03:00.1 '82571EB Gigabit Ethernet Controller' if=p3p2 drv=e1000e unused=igb_uio
0000:04:00.0 'NetXtreme BCM5722 Gigabit Ethernet PCI Express' if=em1 drv=tg3 unused=igb_uio
0000:05:00.0 '82574L Gigabit Network Connection' if=p4p1 drv=e1000e unused=igb_uio Active

Other network devices

Set hugepagesize=1024 of 2MB page Creating /mnt/huge and mounting as hugetlbfs

/// lagopus.conf ///

mark@Dell-T110:~/lagopus$ more /usr/local/etc/lagopus/lagopus.conf
channel channel01 create -dst-addr 10.1.9.51 -protocol tcp

controller controller01 create -channel channel01 -role equal -connection-type main

interface interface01 create -type ethernet-dpdk-phy -port-number 0

interface interface02 create -type ethernet-dpdk-phy -port-number 1

interface interface03 create -type ethernet-rawsock -device p3p1 -port-number 3

interface interface04 create -type ethernet-rawsock -device p3p2 -port-number 4

port port01 create -interface interface01
port port02 create -interface interface02

port port03 create -interface interface03

port port04 create -interface interface04

bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -port port03 3 -port port04 4 -dpid 0xB

bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -dpid 0xB
bridge bridge01 enable

/// start lagopus ///

mark@Dell-T110:~/lagopus$ sudo lagopus -d -l /var/log/lagopus.log -- -cf -n2 -- --rx '(0,0,1),(1,0,1)' --tx '(0,2),(1,2)' --w 3
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0x1200000 bytes
EAL: Virtual area found at 0x7f393a400000 (size = 0x1200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f393a000000 (size = 0x200000)
EAL: Ask a virtual area of 0x29400000 bytes
EAL: Virtual area found at 0x7f3910a00000 (size = 0x29400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f3910600000 (size = 0x200000)
EAL: Ask a virtual area of 0x55000000 bytes
EAL: Virtual area found at 0x7f38bb400000 (size = 0x55000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f38bb000000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f38bac00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f38ba800000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~2393985 KHz
EAL: Master core 0 is ready (tid=42d95840)
PMD: ENICPMD trace: rte_enic_pmd_init
EAL: Core 3 is ready (tid=b8ffc700)
EAL: Core 2 is ready (tid=b97fd700)
EAL: Core 1 is ready (tid=b9ffe700)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: 0000:01:00.0 not managed by VFIO driver, skipping
EAL: PCI memory mapped at 0x7f393b600000
EAL: PCI memory mapped at 0x7f393b680000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: 0000:01:00.1 not managed by VFIO driver, skipping
EAL: PCI memory mapped at 0x7f393b684000
EAL: PCI memory mapped at 0x7f393b704000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: probe driver: 8086:105e rte_em_pmd
EAL: 0000:02:00.0 not managed by VFIO driver, skipping
EAL: 0000:02:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL: probe driver: 8086:105e rte_em_pmd
EAL: 0000:02:00.1 not managed by VFIO driver, skipping
EAL: 0000:02:00.1 not managed by UIO driver, skipping
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 8086:105e rte_em_pmd
EAL: 0000:03:00.0 not managed by VFIO driver, skipping
EAL: 0000:03:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:03:00.1 on NUMA socket -1
EAL: probe driver: 8086:105e rte_em_pmd
EAL: 0000:03:00.1 not managed by VFIO driver, skipping
EAL: 0000:03:00.1 not managed by UIO driver, skipping
EAL: PCI device 0000:05:00.0 on NUMA socket -1
EAL: probe driver: 8086:10d3 rte_em_pmd
EAL: 0000:05:00.0 not managed by VFIO driver, skipping
EAL: 0000:05:00.0 not managed by UIO driver, skipping
Initialization completed.
NIC RX ports:
port 0 (queue 0)
port 1 (queue 0)

I/O lcore 1 (socket 0):
RX ports:
port 0 (queue 0)
port 1 (queue 0)
Output rings:
0x7f393a0a8600

Worker 0: lcore 3 (socket 0):
Input rings:
0x7f393a0a8600
Output rings per TX port
port 0 (0x7f393a0aa680)
port 1 (0x7f393a0ac700)

NIC TX ports:
0 1

I/O lcore 2 (socket 0):
Input rings per TX port
port 0
worker 0, 0x7f393a0aa680
port 1
worker 0, 0x7f393a0ac700

Ring sizes:
NIC RX = 1024
Worker in = 1024
Worker out = 1024
NIC TX = 1024
Burst sizes:
I/O RX (rd = 144, wr = 144)
Worker (rd = 144, wr = 144)
I/O TX (rd = 144, wr = 144)

Initializing NIC port 0 ...
Initializing NIC port 0 RX queue 0 ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f393aeebe00 hw_ring=0x7f393a0ae780 dma_addr=0x36eae780
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Initializing NIC port 0 TX queue 0 ...
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f393aee7cc0 hw_ring=0x7f393a0be800 dma_addr=0x36ebe800
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
Initializing NIC port 1 ...
Initializing NIC port 1 RX queue 0 ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f393aee5740 hw_ring=0x7f393a0ce800 dma_addr=0x36ece800
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Initializing NIC port 1 TX queue 0 ...
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f393aee1600 hw_ring=0x7f393a0de880 dma_addr=0x36ede880
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
Logical core 1 (I/O) main loop.
Logical core 2 (I/O) main loop.
Logical core 3 (worker 0) main loop.

Large multipart reply collapsed

※ Please excuse me for writing in Japanese.

Hello Lagopus developers.
I use Lagopus to test my own controller on Travis CI, and I am very grateful that it also runs on Travis.

This time, while writing tests for handling multipart messages with OFPMPF_REPLY_MORE, it looked like malformed multipart reply messages were being sent from Lagopus to the controller.

The problem may be in my controller, but I would appreciate it if you could take a look. Please comment if any additional information is needed.

My environment is as follows.

Software:

Lagopus configuration:

channel channel01 create -dst-addr 127.0.0.1 -protocol tcp

controller controller01 create -channel channel01 -role master -connection-type main

bridge bridge01 create -controller controller01 -dpid 0x1
bridge bridge01 enable

About the application:

※ Building requires an internet connection; running requires Erlang/OTP 19.

$ wget https://packages.erlang-solutions.com/erlang/esl-erlang/FLAVOUR_1_general/esl-erlang_19.1-3~ubuntu~xenial_amd64.deb
$ sudo dpkg -i esl-erlang_19.1-3~ubuntu~xenial_amd64.deb
$ git clone https://github.com/shun159/gyges
$ cd gyges
$ make compile

Run ./start_dev.sh, and the issue can be reproduced with the following code in the erl shell.

Reproduction code:

  • flow_stats_reply
    • Packets No. 171-200 in the capture file
ok = gyges:start_listener(6633, 16).
FlowMods = lists:map(fun(X) ->
                               Instr = [{goto_table, 1}],
                               Match = [ofproto_oxm:new(openflow_basic, metadata, <<X:64>>),
                                        ofproto_oxm:new(openflow_basic, eth_type, <<16#0800:16>>),
                                        ofproto_oxm:new(openflow_basic, ipv4_src, <<10,10,10,1>>),
                                        ofproto_oxm:new(openflow_basic, ip_dscp, <<1:8>>),
                                        ofproto_oxm:new(openflow_basic, ip_proto, <<6:8>>),
                                        ofproto_oxm:new(openflow_basic, tcp_dst, <<X:16>>)],
                               ofproto_v4:flow_mod_add(0, [{table_id, 1},
                                                           {match, Match},
                                                           {instructions, Instr}])
                       end, lists:seq(1, 1000)).
ofc_dp_conn:send_msgs({"00:00:00:00:00:00:00:01", 0}, FlowMods).
{ok, FlowStats} = ofc_dp_conn:get_flows({"00:00:00:00:00:00:00:01", 0}, [{table_id, 1}]).
  • table_features_reply
    • Packets No. 637-655 in the capture file
ok = gyges:start_listener(6633, 16).
FlowMods2 = lists:map(fun(X) ->
                               Instr = [{goto_table, 1}],
                               Match = [ofproto_oxm:new(openflow_basic, metadata, <<X:64>>),
                                        ofproto_oxm:new(openflow_basic, eth_type, <<16#0800:16>>),
                                        ofproto_oxm:new(openflow_basic, ipv4_src, <<10,10,10,1>>),
                                        ofproto_oxm:new(openflow_basic, ip_dscp, <<1:8>>),
                                        ofproto_oxm:new(openflow_basic, ip_proto, <<6:8>>),
                                        ofproto_oxm:new(openflow_basic, tcp_dst, <<X:16>>)],
                               ofproto_v4:flow_mod_add(0, [{table_id, X},
                                                           {match, Match},
                                                           {instructions, Instr}])
                       end, lists:seq(1, 254)).
ofc_dp_conn:send_msgs({"00:00:00:00:00:00:00:01", 0}, FlowMods2).
ofc_dp_conn:get_table_features({"00:00:00:00:00:00:00:01", 0}). 

Capture file:

Sometimes, interface statistics do not work.

Environment:

  • Lagopus commit cb6cb41
  • Intel DPDK 1.7.1
  • Ubuntu 14.04.1 amd64
  • eth0/eth1 used Intel PRO/1000 PT Dual Port (Intel 82571EB)

The topology is below. All ports are connected at 1GbE.

| pkt-gen tx (em0)|---|(eth0) lagopus (eth1)|---|(em1) pkt-gen rx |

Right after lagopus starts, both the OpenFlow statistics and the interface statistics are zero.

$ sudo lagopus -l /tmp/lagopus.log -- -c3 -n1 -- -p3
$ sudo lagosh << _EOL_
    show bridge-domains
    show controller 127.0.0.1
    show flow
    show flowcache
    show interface all
_EOL_
bridge: br0
  datapnath id: 34157.a9:95:7a:10:af:41
  max packet buffers: 65535, number of tables: 255
  capabilities: flow_stats on, table_stats on, port_stats on, group_stats on
                ip_reasm off, queue_stats on, port_blocked off
  fail-mode: standalone-mode (default)
port: eth0: ifindex 0, OpenFlow Port 1
port: eth1: ifindex 1, OpenFlow Port 2
Controller 127.0.0.1
 Datapath ID:       000000007a10af41
 Connection status: Connected
Bridge: br0
 Table id: 0
  priority=0,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=0,byte_count=0,in_port=1 actions=output:2
  priority=0,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=0,byte_count=0,in_port=2 actions=output:1
Bridge: br0
  nentries: 0
  hit:      0
  miss:     0
eth0:
 Description:
  OpenFlow Port: 1
  Hardware Address: 00:15:17:df:5d:62
  PCI Address: 0000:01:00.0
  Config: no restricted
  State: LINK UP, LIVE
 Statistics:
  rx_packets: 0
  tx_packets: 0
  rx_bytes:   0
  tx_bytes:   0
  rx_dropped: 0
  tx_dropped: -1
  rx_error:   0
  tx_error:   0
eth1:
 Description:
  OpenFlow Port: 2
  Hardware Address: 00:15:17:df:5d:63
  PCI Address: 0000:01:00.1
  Config: no restricted
  State: LINK UP, LIVE
 Statistics:
  rx_packets: 0
  tx_packets: 0
  rx_bytes:   0
  tx_bytes:   0
  rx_dropped: 0
  tx_dropped: -1
  rx_error:   0
  tx_error:   0

Send short frames (64 bytes at 14.8 Mpps) for 180 seconds.

The OpenFlow statistics count up, but the interface statistics do not.

$ sudo lagosh << _EOL_
    show bridge-domains
    show controller 127.0.0.1
    show flow
    show flowcache
    show interface all
_EOL_
bridge: br0
  datapnath id: 34157.a9:95:7a:10:af:41
  max packet buffers: 65535, number of tables: 255
  capabilities: flow_stats on, table_stats on, port_stats on, group_stats on
                ip_reasm off, queue_stats on, port_blocked off
  fail-mode: standalone-mode (default)
port: eth0: ifindex 0, OpenFlow Port 1
port: eth1: ifindex 1, OpenFlow Port 2
Controller 127.0.0.1
 Datapath ID:       000000007a10af41
 Connection status: Connected
Bridge: br0
 Table id: 0
  priority=0,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=123370280,byte_count=7402216800,in_port=1 actions=output:2
  priority=0,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=0,byte_count=0,in_port=2 actions=output:1
Bridge: br0
  nentries: 1
  hit:      123370279
  miss:     1
eth0:
 Description:
  OpenFlow Port: 1
  Hardware Address: 00:15:17:df:5d:62
  PCI Address: 0000:01:00.0
  Config: no restricted
  State: LINK UP, LIVE
 Statistics:
  rx_packets: 0
  tx_packets: 0
  rx_bytes:   0
  tx_bytes:   0
  rx_dropped: 0
  tx_dropped: -1
  rx_error:   0
  tx_error:   0
eth1:
 Description:
  OpenFlow Port: 2
  Hardware Address: 00:15:17:df:5d:63
  PCI Address: 0000:01:00.1
  Config: no restricted
  State: LINK UP, LIVE
 Statistics:
  rx_packets: 0
  tx_packets: 0
  rx_bytes:   0
  tx_bytes:   0
  rx_dropped: 0
  tx_dropped: -1
  rx_error:   0
  tx_error:   0

Even without forwarding, I expect rx_packets to increase.

When I repeat the same test, the interface statistics are sometimes updated.

Do I need additional configuration for interface statistics to work?

Malformed multipart reply

Lagopus sends an incorrect ofp_flow_stats_reply message when the contents of the OpenFlow tables do not fit into a single OpenFlow message.

Take a look at function s_flow_stats_list_encode in src/agent/ofp_flow_handler.c, which serializes a list of flows into a list of openflow messages. flow_stats_head is initialized to point to the end of the last message in the list -- this pointer is used to set a proper flow stat size after it is completely serialized.

The problem is that each of ofp_flow_stats_encode_list, ofp_match_list_encode, and ofp_instruction_list_encode can push a new message onto the list when the size of the current message exceeds OFP_PACKET_MAX_SIZE. In that case, flow_stats_head will point to junk.

We could parse the compound multipart messages after I changed the code as follows:

-      /* flow_stats head pointer. */
-      flow_stats_head = pbuf_putp_get(*pbuf);

       /* encode flow_stats */
       res = ofp_flow_stats_encode_list(pbuf_list, pbuf, &(flow_stats->ofp));
       if (res == LAGOPUS_RESULT_OK) {
+        /* flow_stats head pointer. */
+        flow_stats_head = pbuf_putp_get(*pbuf) - sizeof(struct ofp_flow_stats);

However, I am not sure my fix is compliant with the OpenFlow specification, because it allows the switch to send parts of a single flow stat in two different OpenFlow messages.

oftest failed

Hi,
I tried to perform a conformance test on Lagopus v0.1.2 using oftest; below is my testbed environment:

VirtualBox 4.1.12

VM1 - lagopus:
eth0: 10.0.0.1
eth1: 82540EM intnet=lagopus_port1
eth2: 82540EM intnet=lagopus_port2
eth3: 82540EM intnet=lagopus_port3
eth4: 82540EM intnet=lagopus_port4
eth1 to eth4 are configured as port 1 - 4 of lagopus switch
( I've tried using 82545EM, but lagopus wouldn't start )

VM2 - oftest
eth0: 10.0.0.2
eth1: 82545EM intnet=lagopus_port1
eth2: 82545EM intnet=lagopus_port2
eth3: 82545EM intnet=lagopus_port3
eth4: 82545EM intnet=lagopus_port4

As you can see, eth1 - eth4 in VM1 and VM2 are connected correspondingly by setting respective intnet.

I can successfully fire up lagopus with
lagopus -d -- -c3 -n1 -- -p0xF

Before the test starts, I configured the port mapping in oftest.
Now, when I start oftest by executing:
./oft --platform=remote -p6633 -V1.3

Most of the tests failed, which should not have happened.

Even worse, when I perform basic.PacketOut test:
./oft --platform=remote -p6633 -V1.3 basic.PacketOut

lagopus just crashes, giving me:
1249 Segmentation fault (core dumped)

Any ideas? Thanks in advance.

Action won't work when matching an IP field

My system env:
Ryu 3.26
Lagopus 0.2.2

The following JSON shows the flow information from the Ryu controller.

{"1": [{"actions": ["OUTPUT:1"], "idle_timeout": 0, "cookie": 0, "packet_count": 1065067730, "hard_timeout": 0, "byte_count": 132472983116, "length": 104, "duration_nsec": 327728096, "priority": 0, "duration_sec": 1280, "table_id": 0, "flags": 0, "match": {"dl_type": 2048, "nw_src": "192.85.1.3", "in_port": 3}}]}

Checking from the controller, I can see the match count keeps growing, but the Lagopus switch does not perform the action that the flow defines. I checked the Lagopus switch in debug mode, and it shows the following message.

I/O RX 2 in (NIC port 1): NIC drop ratio = 0.00 avg burst size = 1.02

If I remove the nw_src match field, the flow works.
This problem also happens in 0.2.1.

about watch group issue

I traced the code in ./src/dataplane/mgr/group.c.

Below is the original code. On line 150, should it be changed to "return bucket;"?

struct bucket *
group_live_bucket(struct bridge *bridge,
                  struct group *group) {
  struct group_table *group_table;
  struct bucket *bucket, *rv;
  struct group *a_group;

  group_table = bridge->group_table;

  TAILQ_FOREACH(bucket, &group->bucket_list, entry) {
    if (port_liveness(bridge, bucket->ofp.watch_port) == true) {
      return bucket;
    }
    if (bucket->ofp.watch_group == OFPG_ANY) {
      continue;
    }
    a_group = group_table_lookup(group_table, bucket->ofp.watch_group);
    if (a_group == NULL) {
      continue;
    }
    rv = group_live_bucket(bridge, a_group);
    if (rv != NULL) {
      return rv; /* ??? return bucket; */
    }
  }
  return NULL;
}

request switch description with a zero xid

Hi,

There is a problem in channel_multipart_get in src/agent/channel.c. It does not check whether the message in the i-th position of the channel storage is currently in use, and it returns the first vacant position when we search for OFPMP_DESC multipart requests with xid == 0, even if the matching message is at some later position. The following change fixes the issue:

  for (i = 0; i < CHANNEL_SIMULTANEOUS_MULTIPART_MAX; i++) {
    if (channel->multipart[i].used
        && channel->multipart[i].ofp_header.xid == xid_header->xid
        && channel->multipart[i].multipart_type == mtype) {
      /* found */
      break;
    }
  }

I am also bothered by multipart_free.
Shouldn't it set m->used to 0 when it cleans up a multipart request?

barrier_request sent to lagopus but no barrier_reply returned

Lagopus runs on a 4-core Atom CPU and an 8-core Atom CPU.
We start lagopus with the following command:

lagopus -d -- -cff -n2 -- -p3f

We set 6 flows on our Lagopus software switch via the Ryu controller.
The flows are as follows:

  1. priority=65535, in_port=1, output=2
  2. priority=65535, in_port=2, output=1
  3. priority=65535, in_port=3, output=4
  4. priority=65535, in_port=4, output=3
  5. priority=65535, in_port=5, output=6
  6. priority=65535, in_port=6, output=5

port 1 connects to traffic generator port 16
port 2 connects to traffic generator port 17
port 3 connects to traffic generator port 18
port 4 connects to traffic generator port 19

We run the Ryu test.
The Ryu test always reports a barrier-reply timeout error.

ryu-manager --test-switch-target 0dae4829047d5d02 --test-switch-tester bab59e78f --test-switch-dir of13/action/00_OUTPUT.json tester.py
loading app tester.py
loading app ryu.controller.ofp_handler
instantiating app tester.py of OfTester
target_dpid=0dae4829047d5d02
tester_dpid=0000000bab59e78f
Test files directory = of13/action/00_OUTPUT.json
instantiating app ryu.controller.ofp_handler of OFPHandler
--- Test start ---
waiting for switches connection...
dpid=0000000bab59e78f : Join tester SW.
dpid=0dae4829047d5d02 : Join target SW.
action: 00_OUTPUT
ethernet/ipv4/tcp-->'actions=output:2' ERROR
Failed to add flows: barrier request timeout.
ethernet/ipv6/tcp-->'actions=output:2' ERROR
Failed to add flows: barrier request timeout.
ethernet/arp-->'actions=output:2' ERROR
Failed to add flows: barrier request timeout.
--- Test end ---
--- Test report ---
Failed to add flows(3)
action: 00_OUTPUT ethernet/ipv4/tcp-->'actions=output:2'
action: 00_OUTPUT ethernet/ipv6/tcp-->'actions=output:2'
action: 00_OUTPUT ethernet/arp-->'actions=output:2'

OK(0) / ERROR(3)
Terminated

======= messages in syslog ==========
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./channel.c:515:channel_start: Connecting to OpenFlow controller (null):6633
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./session.c:95:bind_default: host:0.0.0.0, service:0
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./session.c:57:socket_buffer_size_set: SO_SNDBUF buffer size is 131070
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./session.c:59:socket_buffer_size_set: SO_RCVBUF buffer size is 131070
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./session.c:131:connect_default: host:10.0.119.2, service:6633
Sep 5 09:58:58 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:58 UTC 2014][INFO ][3322:0x00007fb15fb34700:ofp_handler]:./channel.c:668:channel_hello_confirm: channel_hello_confirm in
Sep 5 09:58:59 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:59 UTC 2014][WARN ][3322:0x00007fb15fb34700:ofp_handler]:./ofp_barrier_handler.c:227:ofp_barrier_reply_handle: Not found channel.
Sep 5 09:58:59 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:59 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:790:s_dequeue: Not found.
Sep 5 09:58:59 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:58:59 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:932:s_ofph_thread_main: Not found.
Sep 5 09:59:02 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:02 UTC 2014][WARN ][3322:0x00007fb15fb34700:ofp_handler]:./ofp_barrier_handler.c:227:ofp_barrier_reply_handle: Not found channel.
Sep 5 09:59:02 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:02 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:790:s_dequeue: Not found.
Sep 5 09:59:02 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:02 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:932:s_ofph_thread_main: Not found.
Sep 5 09:59:06 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:06 UTC 2014][WARN ][3322:0x00007fb15fb34700:ofp_handler]:./ofp_barrier_handler.c:227:ofp_barrier_reply_handle: Not found channel.
Sep 5 09:59:06 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:06 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:790:s_dequeue: Not found.
Sep 5 09:59:06 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:06 UTC 2014][ERROR][3322:0x00007fb15fb34700:ofp_handler]:./ofp_handler.c:932:s_ofph_thread_main: Not found.
Sep 5 09:59:08 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:08 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./channel.c:540:channel_stop: channel_stop() is called
Sep 5 09:59:08 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:08 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./channel.c:544:channel_stop: closing the socket 49
Sep 5 09:59:08 localhost /usr/sbin/lagopus[3322]: [Fri Sep 05 09:59:08 UTC 2014][INFO ][3322:0x00007fb160335700:agent]:./channel.c:560:channel_stop: switch to SWITCH_MODE_STANDALONE

channel_free() return BUSY issue

I tried using the channel_mgr_channel_delete() function to remove a connection, but sometimes I got a "LAGOPUS_RESULT_BUSY" response.

Maybe a "channel->refs" check should be added in channel_mgr_channel_delete(), between "channel_disable()" and "channel_free()"?

After I modified the code, the issue has not appeared so far:

channel_mgr_channel_delete() {
  ...
  channel_disable(chan);
  while (chan->refs > 0) {
    sleep(1);
  }
  ret = channel_free(chan);
  ...
}
I think the Lagopus 0.2 version also has this issue, in the "channel_delete_internal()" function.

thanks

Race condition in ofcache

As far as I understand, each DPDK worker thread uses its own local cache to speed up packet processing, and these local caches are cleared up on each change in the flow db.

Take a look at the function register_cache_bank in src/dataplane/ofproto/ofcache.c.
I could not find any locks guarding the flow cache, so it can be cleared by an external thread at any time.
For example, the cache could be cleared right after 'lagopus_hashmap_add'. That invalidates the list pointer, and add_cache_list could then result in memory corruption.

I could easily reproduce memory corruption and a subsequent crash before the 2.5.0 release.
Although I cannot reproduce it right now, the problem still seems to be there.

This issue will probably require some global changes, so I am not proposing a hotfix.

Lagopus switch with Ryu controller firewall application

Hello, I found a problem with the Lagopus switch and the Ryu controller firewall application.

Lagopus version is 0.2.5
DPDK version is 2.2.0
The Ryu controller runs ryu-manager ryu.app.rest_firewall

I want two PCs connected to the Lagopus switch to be able to ping each other. The flow entries and the controller messages are below:

Ryu controller:

curl http://localhost:8080/firewall/rules/25a6024a363e7de6
[{"access_control_list": [{"rules": [{"priority": 1, "dl_type": "IPv4", "nw_proto": "ICMP", "nw_dst": "10.1.1.1/255.255.255.255", "nw_src": "10.1.1.2/255.255.255.255", "rule_id": 1, "actions": "DENY"}, {"priority": 1, "dl_type": "IPv4", "nw_proto": "ICMP", "nw_dst": "10.1.1.2/255.255.255.255", "nw_src": "10.1.1.1/255.255.255.255", "rule_id": 2, "actions": "DENY"}]}], "switch_id": "25a6024a363e7de6"}]

Lagopus

sudo lagosh
show flow
Bridge: br0
Table id: 0
priority=65534,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=3,byte_count=180,arp actions=output:normal

priority=1,idle_timeout=0,hard_timeout=0,flags=0,cookie=1,packet_count=0,byte_count=0,ip,ip_proto=1,ipv4_src=10.1.1.2/0xffffffff,ipv4_dst=10.1.1.1/0xffffffff actions=output:normal

priority=1,idle_timeout=0,hard_timeout=0,flags=0,cookie=2,packet_count=0,byte_count=0,ip,ip_proto=1,ipv4_src=10.1.1.1/0xffffffff,ipv4_dst=10.1.1.2/0xffffffff actions=output:normal

priority=0,idle_timeout=0,hard_timeout=0,flags=0,cookie=0,packet_count=46,byte_count=5220 actions=output:controller

In the Ryu controller the actions are DENY, but in the Lagopus switch the flow entries' actions are normal.
Is there any method to solve this? Thank you!

QUICKSTART.md: lagopus.conf must be under /<prefix>/etc/lagopus/ and not /etc/lagopus/

lagopus.conf must be under /<prefix>/etc/lagopus/ and not /etc/lagopus/.
Note: <prefix> is set by the --prefix option.

Maybe we should update QUICKSTART.md?

Current text in Lagopus 0.1.2:

    150      % sudo cp lagopus/samples/lagopus.conf /etc/lagopus/
    151      % sudo vi /etc/lagopus/lagopus.conf

Should be

    % sudo cp lagopus/samples/lagopus.conf /<prefix>/etc/lagopus/
    % sudo vi /<prefix>/etc/lagopus/lagopus.conf
    Note:
        --prefix=<prefix>
        For example, /usr/local/etc/lagopus/ if exec was installed at /usr/local/sbin/lagopus.

lagopus cannot start normally

Hi,
We have two Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz processors, one on socket 0 and one on socket 1 (24 cores in total), 48 GB of memory, and 28 Intel NICs on board, but we cannot start lagopus.

The command starts only 24 of the NICs, but it fails in the end.
sudo /usr/sbin/lagopus -d -- -cfff -n 4 -- -pffffff

The following is our log.

user@debian:~$ sudo /usr/sbin/lagopus -d -- -cfff -n 4 -- -pffffff
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 0 on socket 1
EAL: Detected lcore 7 as core 1 on socket 1
EAL: Detected lcore 8 as core 2 on socket 1
EAL: Detected lcore 9 as core 3 on socket 1
EAL: Detected lcore 10 as core 4 on socket 1
EAL: Detected lcore 11 as core 5 on socket 1
EAL: Detected lcore 12 as core 0 on socket 0
EAL: Detected lcore 13 as core 1 on socket 0
EAL: Detected lcore 14 as core 2 on socket 0
EAL: Detected lcore 15 as core 3 on socket 0
EAL: Detected lcore 16 as core 4 on socket 0
EAL: Detected lcore 17 as core 5 on socket 0
EAL: Detected lcore 18 as core 0 on socket 1
EAL: Detected lcore 19 as core 1 on socket 1
EAL: Detected lcore 20 as core 2 on socket 1
EAL: Detected lcore 21 as core 3 on socket 1
EAL: Detected lcore 22 as core 4 on socket 1
EAL: Detected lcore 23 as core 5 on socket 1
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 24 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0x7b400000 bytes
EAL: Virtual area found at 0x7f7cf1a00000 (size = 0x7b400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f7cf1400000 (size = 0x400000)
EAL: Ask a virtual area of 0x2000000 bytes
EAL: Virtual area found at 0x7f7cef200000 (size = 0x2000000)
EAL: Ask a virtual area of 0xc00000 bytes
EAL: Virtual area found at 0x7f7cee400000 (size = 0xc00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7cee000000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f7ced600000 (size = 0x800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7ced200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7cece00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7ceca00000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f7cec000000 (size = 0x800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7cebc00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7ceb800000 (size = 0x200000)
EAL: Ask a virtual area of 0x7f000000 bytes
EAL: Virtual area found at 0x7f7c6c600000 (size = 0x7f000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7c6c200000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f7c6bc00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f7c6b600000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7c6b200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7c6ae00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7c6aa00000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: Requesting 1024 pages of size 2MB from socket 1
EAL: TSC frequency is ~2000000 KHz
EAL: Master core 0 is ready (tid=71ac3800)
EAL: Core 7 is ready (tid=671e1700)
EAL: Core 8 is ready (tid=669e0700)
EAL: Core 9 is ready (tid=661df700)
EAL: Core 10 is ready (tid=659de700)
EAL: Core 11 is ready (tid=651dd700)
EAL: Core 6 is ready (tid=679e2700)
EAL: Core 5 is ready (tid=681e3700)
EAL: Core 4 is ready (tid=689e4700)
EAL: Core 3 is ready (tid=691e5700)
EAL: Core 2 is ready (tid=699e6700)
EAL: Core 1 is ready (tid=6a1e7700)
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71a6d000
EAL:   PCI memory mapped at 0x7f7d71a69000
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71a49000
EAL:   PCI memory mapped at 0x7f7d71a45000
EAL: PCI device 0000:05:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71a25000
EAL:   PCI memory mapped at 0x7f7d71a21000
EAL: PCI device 0000:05:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71a01000
EAL:   PCI memory mapped at 0x7f7d719fd000
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d719dd000
EAL:   PCI memory mapped at 0x7f7d719d9000
EAL: PCI device 0000:07:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d719b9000
EAL:   PCI memory mapped at 0x7f7d719b5000
EAL: PCI device 0000:07:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71995000
EAL:   PCI memory mapped at 0x7f7d71991000
EAL: PCI device 0000:07:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d71971000
EAL:   PCI memory mapped at 0x7f7d7196d000
EAL: PCI device 0000:0c:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   PCI memory mapped at 0x7f7d7194d000
EAL:   PCI memory mapped at 0x7f7d71949000
EAL: PCI device 0000:0d:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   PCI memory mapped at 0x7f7d71929000
EAL:   PCI memory mapped at 0x7f7d71925000
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cfa0000
EAL:   PCI memory mapped at 0x7f7d71b0e000
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cf80000
EAL:   PCI memory mapped at 0x7f7d6cf7c000
EAL: PCI device 0000:83:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cf5c000
EAL:   PCI memory mapped at 0x7f7d6cf58000
EAL: PCI device 0000:83:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cf38000
EAL:   PCI memory mapped at 0x7f7d6cf34000
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cf14000
EAL:   PCI memory mapped at 0x7f7d6cf10000
EAL: PCI device 0000:85:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cef0000
EAL:   PCI memory mapped at 0x7f7d6ceec000
EAL: PCI device 0000:85:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cecc000
EAL:   PCI memory mapped at 0x7f7d6cec8000
EAL: PCI device 0000:85:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6cea8000
EAL:   PCI memory mapped at 0x7f7d6cea4000
EAL: PCI device 0000:87:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6ce84000
EAL:   PCI memory mapped at 0x7f7d6ce80000
EAL: PCI device 0000:87:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6ce60000
EAL:   PCI memory mapped at 0x7f7d6ce5c000
EAL: PCI device 0000:87:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6ce3c000
EAL:   PCI memory mapped at 0x7f7d6ce38000
EAL: PCI device 0000:87:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7d6ce18000
EAL:   PCI memory mapped at 0x7f7d6ce14000
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7cf19e0000
EAL:   PCI memory mapped at 0x7f7cf19dc000
EAL: PCI device 0000:89:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f7cf19bc000
EAL:   PCI memory mapped at 0x7f7cf19b8000
EAL: PCI device 0000:89:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:89:00.2 not managed by UIO driver, skipping
EAL: PCI device 0000:89:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:89:00.3 not managed by UIO driver, skipping
PANIC in app_init_rings_tx():
Algorithmic error (no I/O core to handle TX of port 16)
12: [/usr/sbin/lagopus() [0x40b865]]
11: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f7d6ece8ead]]
10: [/usr/sbin/lagopus() [0x40c79c]]
9: [/usr/sbin/lagopus() [0x40c154]]
8: [/usr/lib/liblagopus_util.so.0(lagopus_module_initialize_all+0x76) [0x7f7d6f91ff19]]
7: [/usr/lib/liblagopus_util.so.0(+0x1f986) [0x7f7d6f91f986]]
6: [/usr/lib/liblagopus_dataplane.so.0(datapath_initialize+0x41) [0x7f7d700cd555]]
5: [/usr/lib/liblagopus_dataplane.so.0(lagopus_datapath_init+0xc5) [0x7f7d700f7ba5]]
4: [/usr/lib/liblagopus_dataplane.so.0(app_init+0x18) [0x7f7d700f78c1]]
3: [/usr/lib/liblagopus_dataplane.so.0(app_init_rings_tx+0xe5) [0x7f7d701010a2]]
2: [/usr/sbin/lagopus(__rte_panic+0xc3) [0x40b6f5]]
1: [/usr/sbin/lagopus() [0x4a2463]]
user@debian:~$ 

flow modify OFPBAC_MATCH_INCONSISTENT issue

In the OFP spec 1.3.4, chapter 6.4 (page 42):
"....If a set-field action in a flow mod message does not have its prerequisites included in the match
or prior actions of the flow entry, for example, a set IPv4 address action with a match wildcarding
the Ethertype, the switch must reject the flow mod and immediately return an ofp_error_msg with
OFPET_BAD_ACTION type and OFPBAC_MATCH_INCONSISTENT code. "

We add a flow with match(ether_type = IPv6) & set_field(IPv4_Src = "192.168.1.1"). It should return OFPET_BAD_ACTION type and OFPBAC_MATCH_INCONSISTENT code, but Lagopus 0.1.1 does not return an error.

Memory leak while dumping flow statistics

Function channel_send_packet_list in src/agent/channel.c has a memory leak.
The following change fixes the issue:

     while ((pbuf = TAILQ_FIRST(&pbuf_list->tailq)) != NULL) {
       TAILQ_REMOVE(&pbuf_list->tailq, pbuf, entry);
       channel_send_packet_nolock(channel, pbuf);
+      pbuf_free(pbuf);
     }
     channel_unlock(channel);
     res = LAGOPUS_RESULT_OK;

Lagopus with ryu_firewall.py

Hi Everyone,

I am doing an experiment with my mini-server running
Lagopus 0.2.9
ryu 4.7
DPDK version is 2.2.0

I installed Lagopus 0.2.9 with hybrid mode enabled, following QUICKSTART.md.
$ cd lagopus
$ ./configure --with-dpdk-dir=${RTE_SDK} --enable-hybrid=yes
$ make
$ sudo make install

I want to implement a firewall in my network topology by running rest_firewall.py through ryu-manager, but it doesn't work. After running rest_firewall.py on the Ryu controller, I execute these commands:

Enable firewall on switch dpid=1

$ curl -X PUT http://localhost:8080/firewall/module/enable/0000000000000001

Install rules for ICMP connectivity.

$ curl -X POST -d '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP"}' http://localhost:8080/firewall/rules/0000000000000001
$ curl -X POST -d '{"nw_src": "10.0.0.2/32", "nw_dst": "10.0.0.1/32", "nw_proto": "ICMP"}' http://localhost:8080/firewall/rules/0000000000000001

Even though I have already set the rules to allow ping between the hosts (10.0.0.1 and 10.0.0.2), I still cannot ping. Below are the flow entries of the Lagopus switch:

Lagosh> show flow
[
{
"name": "bridge02",
"tables": [
{
"table": 0,
"flows": [
{
"priority": 65534,
"idle_timeout": 0,
"hard_timeout": 0,
"cookie": 0,
"dl_type": "arp",
"actions": [
{
"apply_actions": [
{
"output": "normal"
}
]
}
],
"stats": {
"packet_count": 18,
"byte_count": 1080
}
},
{
"priority": 1,
"idle_timeout": 0,
"hard_timeout": 0,
"cookie": 1,
"dl_type": "ip",
"nw_proto": 1,
"nw_src": "10.0.0.1/255.255.255.255",
"nw_dst": "10.0.0.2/255.255.255.255",
"actions": [
{
"apply_actions": [
{
"output": "normal"
}
]
}
],
"stats": {
"packet_count": 0,
"byte_count": 0
}
},
{
"priority": 1,
"idle_timeout": 0,
"hard_timeout": 0,
"cookie": 2,
"dl_type": "ip",
"nw_proto": 1,
"nw_src": "10.0.0.2/255.255.255.255",
"nw_dst": "10.0.0.1/255.255.255.255",
"actions": [
{
"apply_actions": [
{
"output": "normal"
}
]
}
],
"stats": {
"packet_count": 0,
"byte_count": 0
}
},
{
"priority": 0,
"idle_timeout": 0,
"hard_timeout": 0,
"cookie": 0,
"actions": [
{
"apply_actions": [
{
"output": "controller"
}
]
}
],
"stats": {
"packet_count": 417,
"byte_count": 82929
}
}
]
}
],
"is-enabled": true
}
]

And I configured my Lagopus as below:

Configure# show lagopus.conf
log {
syslog;
ident lagopus;
debuglevel 1;
packetdump "";
}
datastore {
addr 0.0.0.0;
port 12345;
protocol tcp;
tls false;
}
agent {
channelq-size 1000;
channelq-max-batches 1000;
}
tls {
cert-file /usr/local/etc/lagopus/catls.pem;
private-key /usr/local/etc/lagopus/key.pem;
certificate-store /usr/local/etc/lagopus;
trust-point-conf /usr/local/etc/lagopus/check.conf;
}
snmp {
master-agentx-socket tcp:localhost:705;
ping-interval-second 10;
}
interface {
interface01 {
type ethernet-dpdk-phy;
port-number 0;
mtu 1500;
ip-addr 127.0.0.1;
}
interface02 {
type ethernet-dpdk-phy;
port-number 1;
mtu 1500;
ip-addr 127.0.0.1;
}
interface03 {
type ethernet-dpdk-phy;
port-number 2;
mtu 1500;
ip-addr 127.0.0.1;
}
}
port {
port01 {
interface interface01;
}
port02 {
interface interface02;
}
port03 {
interface interface03;
}
}
channel {
channel02 {
dst-addr 127.0.0.1;
dst-port 6633;
local-addr 0.0.0.0;
local-port 0;
protocol tcp;
}
}
controller {
controller02 {
channel channel02;
role equal;
connection-type main;
}
}
bridge {
bridge02 {
dpid 1;
controller controller02;
port port01 1;
port port02 2;
port port03 3;
fail-mode secure;
flow-statistics true;
group-statistics true;
port-statistics true;
queue-statistics true;
table-statistics true;
reassemble-ip-fragments false;
max-buffered-packets 65535;
max-ports 225;
max-tables 225;
max-flows 4294967295;
packet-inq-size 1000;
packet-inq-max-batches 1000;
up-streamq-size 1000;
up-streamq-max-batches 1000;
down-streamq-size 1000;
down-streamq-max-batches 1000;
block-looping-ports false;
}
}

But if I use Open vSwitch, it works just fine. I use the commands below to create the network topology:
$ sudo mn --topo single,3 --mac --switch ovsk --controller remote -x
$ sudo ovs-vsctl set Bridge s1 protocols=OpenFlow13

After I set the rules above, just like in the Lagopus experiment, I am able to ping between the hosts.
So my questions are:
Why does rest_firewall.py not work with Lagopus?
How do I make rest_firewall.py work with Lagopus?

If you need more information related to the experiment, please do let me know.

Regards,
Hong Panha

Memory leak at the flow cache

Hi,

I've encountered weird growth of used memory while generating flows with random headers. I've tried to find the cause of the problem in the sources, and it seems there is a bug in the caching module (src/dataplane/ofproto/ofcache.c).

Lagopus implements caching with the help of hash tables.
Each key of these tables points to a doubly linked list of cache entries.
However, elements of this dynamic structure are never deallocated on hash reset.
Thereby, the amount of free memory is reduced on each swap of the cache bank.

I have introduced the following function to clear cache entry list:

static void free_cache_list(struct cache_list *list) {
  struct cache_entry *entry;
  while (!TAILQ_EMPTY(&list->entries)) {
    entry = TAILQ_FIRST(&list->entries);
    TAILQ_REMOVE(&list->entries, entry, next);
    free(entry);
  }
  free(list);
}

and used it to enable proper memory deallocation:

@@ -127,7 +137,7 @@ init_flowcache_bank(int kvs_type, int bank) {
     default:
       lagopus_hashmap_create(&cache->hashmap,
                              LAGOPUS_HASHMAP_TYPE_ONE_WORD,
-                             free);
+                             free_cache_list);
       break;

A little extra effort is needed when using rte_hash:

 #if RTE_VERSION >= RTE_VERSION_NUM(2, 1, 0, 0)
-    case FLOWCACHE_RTE_HASH:
-      rte_hash_reset(cache->hash);
+    case FLOWCACHE_RTE_HASH: {
+      const void *next_key;
+      struct cache_list *list;
+      uint32_t iter = 0;
+
+      while (rte_hash_iterate(cache->hash, &next_key, (void **) &list, &iter) >= 0) {
+        free_cache_list(list);
+      }
+      rte_hash_reset(cache->hash);
       break;
+    }
 #endif /* RTE_VERSION */

Lagopus cannot transmit packets

Hi,
We have two Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz processors on socket 0 and socket 1 (24 cores in total), 48GB of memory, and 28 Intel NICs on board.

We set up hugepages in DPDK's NUMA mode, with 1024 pages assigned to each CPU.

We only initialize 16 ports for the Lagopus software switch:

sudo /usr/sbin/lagopus -d -- -cffffff -n 4 -- -pffff

We can start Lagopus, but it outputs some traffic (874 packets) and then nothing.
We set 6 flows on our Lagopus software switch via a Ryu controller.
The flows are as follows:

  1. priority=65535, in_port=1, output=2
  2. priority=65535, in_port=2, output=1
  3. priority=65535, in_port=3, output=4
  4. priority=65535, in_port=4, output=3
  5. priority=65535, in_port=5, output=6
  6. priority=65535, in_port=6, output=5

port 1 connects to our traffic generator port 16
port 2 connects to our traffic generator port 17
port 3 connects to our traffic generator port 18
port 4 connects to our traffic generator port 19

When we start to send traffic to port 1, port 2 will output some traffic (874 packets) and then output nothing.

Lagopus is still alive but outputs no traffic. When we repeat the steps, no traffic is output at all; only the first attempt outputs any traffic.

The final result is as follows: you can see that Port 16 is still sending traffic, but Port 17 has no traffic output.

The following is our log:

=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2014.10.09 16:50:35 =~=~=~=~=~=~=~=~=~=~=~=
sudo /usr/sbin/lagopus -d -- -cffffff -n 4 -- -pffff
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 0 on socket 1
EAL: Detected lcore 7 as core 1 on socket 1
EAL: Detected lcore 8 as core 2 on socket 1
EAL: Detected lcore 9 as core 3 on socket 1
EAL: Detected lcore 10 as core 4 on socket 1
EAL: Detected lcore 11 as core 5 on socket 1
EAL: Detected lcore 12 as core 0 on socket 0
EAL: Detected lcore 13 as core 1 on socket 0
EAL: Detected lcore 14 as core 2 on socket 0
EAL: Detected lcore 15 as core 3 on socket 0
EAL: Detected lcore 16 as core 4 on socket 0
EAL: Detected lcore 17 as core 5 on socket 0
EAL: Detected lcore 18 as core 0 on socket 1
EAL: Detected lcore 19 as core 1 on socket 1
EAL: Detected lcore 20 as core 2 on socket 1
EAL: Detected lcore 21 as core 3 on socket 1
EAL: Detected lcore 22 as core 4 on socket 1
EAL: Detected lcore 23 as core 5 on socket 1
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f75cc600000 (size = 0x200000)
EAL: Ask a virtual area of 0x7fc00000 bytes
EAL: Virtual area found at 0x7f754c800000 (size = 0x7fc00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f754c400000 (size = 0x200000)
EAL: Ask a virtual area of 0x7fc00000 bytes
EAL: Virtual area found at 0x7f74cc600000 (size = 0x7fc00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f74cc000000 (size = 0x400000)
EAL: Ask a virtual area of 0x80000000 bytes
EAL: Virtual area found at 0x7f744be00000 (size = 0x80000000)
EAL: Ask a virtual area of 0x80000000 bytes
EAL: Virtual area found at 0x7f73cbc00000 (size = 0x80000000)
EAL: Requesting 2048 pages of size 2MB from socket 0
EAL: Requesting 2048 pages of size 2MB from socket 1
EAL: TSC frequency is ~2000000 KHz
EAL: Master core 0 is ready (tid=d1b8f800)
EAL: Core 1 is ready (tid=cd085700)
EAL: Core 3 is ready (tid=ca3ce700)
EAL: Core 2 is ready (tid=cabcf700)
EAL: Core 5 is ready (tid=c93cc700)
EAL: Core 6 is ready (tid=c8bcb700)
EAL: Core 4 is ready (tid=c9bcd700)
EAL: Core 7 is ready (tid=c3fff700)
EAL: Core 9 is ready (tid=c2ffd700)
EAL: Core 11 is ready (tid=c1ffb700)
EAL: Core 14 is ready (tid=c07f8700)
EAL: Core 15 is ready (tid=bfff7700)
EAL: Core 17 is ready (tid=beff5700)
EAL: Core 18 is ready (tid=be7f4700)
EAL: Core 10 is ready (tid=c27fc700)
EAL: Core 21 is ready (tid=bcff1700)
EAL: Core 22 is ready (tid=bc7f0700)
EAL: Core 13 is ready (tid=c0ff9700)
EAL: Core 20 is ready (tid=bd7f2700)
EAL: Core 8 is ready (tid=c37fe700)
EAL: Core 12 is ready (tid=c17fa700)
EAL: Core 19 is ready (tid=bdff3700)
EAL: Core 23 is ready (tid=bbfef700)
EAL: Core 16 is ready (tid=bf7f6700)
Initializing the PMD driver ...
EAL: PCI device 0000:09:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   0000:09:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:0a:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   0000:0a:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1b39000
EAL:   PCI memory mapped at 0x7f75d1b35000
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1b15000
EAL:   PCI memory mapped at 0x7f75d1b11000
EAL: PCI device 0000:83:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1af1000
EAL:   PCI memory mapped at 0x7f75d1aed000
EAL: PCI device 0000:83:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1acd000
EAL:   PCI memory mapped at 0x7f75d1ac9000
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1aa9000
EAL:   PCI memory mapped at 0x7f75d1aa5000
EAL: PCI device 0000:85:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1a85000
EAL:   PCI memory mapped at 0x7f75d1a81000
EAL: PCI device 0000:85:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1a61000
EAL:   PCI memory mapped at 0x7f75d1a5d000
EAL: PCI device 0000:85:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1a3d000
EAL:   PCI memory mapped at 0x7f75d1a39000
EAL: PCI device 0000:87:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d1a19000
EAL:   PCI memory mapped at 0x7f75d1a15000
EAL: PCI device 0000:87:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75d19f5000
EAL:   PCI memory mapped at 0x7f75d19f1000
EAL: PCI device 0000:87:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc865000
EAL:   PCI memory mapped at 0x7f75d1bda000
EAL: PCI device 0000:87:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc845000
EAL:   PCI memory mapped at 0x7f75cc841000
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc821000
EAL:   PCI memory mapped at 0x7f75cc81d000
EAL: PCI device 0000:89:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc5e0000
EAL:   PCI memory mapped at 0x7f75d1bd6000
EAL: PCI device 0000:89:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc5c0000
EAL:   PCI memory mapped at 0x7f75d19ed000
EAL: PCI device 0000:89:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7f75cc5a0000
EAL:   PCI memory mapped at 0x7f75d19e9000
Initializing NIC port 0 ...
Initializing NIC port 0 RX queue 0 ...
Initializing NIC port 0 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status....................................Port 0 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 1 ...
Initializing NIC port 1 RX queue 0 ...
Initializing NIC port 1 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................Port 1 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 2 ...
Initializing NIC port 2 RX queue 0 ...
Initializing NIC port 2 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status........................................Port 2 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 3 ...
Initializing NIC port 3 RX queue 0 ...
Initializing NIC port 3 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.......................................Port 3 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 4 ...
Initializing NIC port 4 RX queue 0 ...
Initializing NIC port 4 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................Port 4 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 5 ...
Initializing NIC port 5 RX queue 0 ...
Initializing NIC port 5 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................Port 5 Link Up - speed 1000 Mbps - full-duplex
Initializing NIC port 6 ...
Initializing NIC port 6 RX queue 0 ...
Initializing NIC port 6 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 6 Link Down
Initializing NIC port 7 ...
Initializing NIC port 7 RX queue 0 ...
Initializing NIC port 7 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 7 Link Down
Initializing NIC port 8 ...
Initializing NIC port 8 RX queue 0 ...
Initializing NIC port 8 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 8 Link Down
Initializing NIC port 9 ...
Initializing NIC port 9 RX queue 0 ...
Initializing NIC port 9 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 9 Link Down
Initializing NIC port 10 ...
Initializing NIC port 10 RX queue 0 ...
Initializing NIC port 10 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 10 Link Down
Initializing NIC port 11 ...
Initializing NIC port 11 RX queue 0 ...
Initializing NIC port 11 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 11 Link Down
Initializing NIC port 12 ...
Initializing NIC port 12 RX queue 0 ...
Initializing NIC port 12 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 12 Link Down
Initializing NIC port 13 ...
Initializing NIC port 13 RX queue 0 ...
Initializing NIC port 13 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 13 Link Down
Initializing NIC port 14 ...
Initializing NIC port 14 RX queue 0 ...
Initializing NIC port 14 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 14 Link Down
Initializing NIC port 15 ...
Initializing NIC port 15 RX queue 0 ...
Initializing NIC port 15 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 15 Link Down
Initialization completed.
NIC RX ports:
  port 0 (queue 0)
  port 1 (queue 0)
  port 2 (queue 0)
  port 3 (queue 0)
  port 4 (queue 0)
  port 5 (queue 0)
  port 6 (queue 0)
  port 7 (queue 0)
  port 8 (queue 0)
  port 9 (queue 0)
  port 10 (queue 0)
  port 11 (queue 0)
  port 12 (queue 0)
  port 13 (queue 0)
  port 14 (queue 0)
  port 15 (queue 0)

I/O lcore 1 (socket 0):
 RX ports:
  port 0 (queue 0)
  port 1 (queue 0)
  port 2 (queue 0)
  port 3 (queue 0)
  port 4 (queue 0)
  port 5 (queue 0)
  port 6 (queue 0)
  port 7 (queue 0)
  port 8 (queue 0)
  port 9 (queue 0)
  port 10 (queue 0)
  port 11 (queue 0)
  port 12 (queue 0)
  port 13 (queue 0)
  port 14 (queue 0)
  port 15 (queue 0)
 Output rings:
  0x7f75cc7cb140
  0x7f75cc7cd1c0
  0x7f75cc7cf240
  0x7f75cc7d12c0
  0x7f75cc7d3340
  0x7f75cc7d53c0
  0x7f75cc7d7440
  0x7f75cc7d94c0
  0x7f75cc7db540

Worker 0: lcore 3 (socket 0):
 Input rings:
  0x7f75cc7cb140
 Output rings per TX port
  port 0 (0x7f75cc7dd5c0)
  port 1 (0x7f75cc7df640)
  port 2 (0x7f75cc7e16c0)
  port 3 (0x7f75cc7e3740)
  port 4 (0x7f75cc7e57c0)
  port 5 (0x7f75cc7e7840)
  port 6 (0x7f75cc7e98c0)
  port 7 (0x7f75cc7eb940)
  port 8 (0x7f75cc7ed9c0)
  port 9 (0x7f75cc7efa40)
  port 10 (0x7f75cc7f1ac0)
  port 11 (0x7f75cc7f3b40)
  port 12 (0x7f75cc7f5bc0)
  port 13 (0x7f75cc7f7c40)
  port 14 (0x7f75cc7f9cc0)
  port 15 (0x7f75cc7fbd40)
Worker 1: lcore 4 (socket 0):
 Input rings:
  0x7f75cc7cd1c0
 Output rings per TX port
  port 0 (0x7f75cc7fddc0)
  port 1 (0x7f754c480080)
  port 2 (0x7f754c482100)
  port 3 (0x7f754c484180)
  port 4 (0x7f754c486200)
  port 5 (0x7f754c488280)
  port 6 (0x7f754c48a300)
  port 7 (0x7f754c48c380)
  port 8 (0x7f754c48e400)
  port 9 (0x7f754c490480)
  port 10 (0x7f754c492500)
  port 11 (0x7f754c494580)
  port 12 (0x7f754c496600)
  port 13 (0x7f754c498680)
  port 14 (0x7f754c49a700)
  port 15 (0x7f754c49c780)
Worker 2: lcore 5 (socket 0):
 Input rings:
  0x7f75cc7cf240
 Output rings per TX port
  port 0 (0x7f754c49e800)
  port 1 (0x7f754c4a0880)
  port 2 (0x7f754c4a2900)
  port 3 (0x7f754c4a4980)
  port 4 (0x7f754c4a6a00)
  port 5 (0x7f754c4a8a80)
  port 6 (0x7f754c4aab00)
  port 7 (0x7f754c4acb80)
  port 8 (0x7f754c4aec00)
  port 9 (0x7f754c4b0c80)
  port 10 (0x7f754c4b2d00)
  port 11 (0x7f754c4b4d80)
  port 12 (0x7f754c4b6e00)
  port 13 (0x7f754c4b8e80)
  port 14 (0x7f754c4baf00)
  port 15 (0x7f754c4bcf80)
Worker 3: lcore 6 (socket 1):
 Input rings:
  0x7f75cc7d12c0
 Output rings per TX port
  port 0 (0x7f754c4bf000)
  port 1 (0x7f754c4c1080)
  port 2 (0x7f754c4c3100)
  port 3 (0x7f754c4c5180)
  port 4 (0x7f754c4c7200)
  port 5 (0x7f754c4c9280)
  port 6 (0x7f754c4cb300)
  port 7 (0x7f754c4cd380)
  port 8 (0x7f754c4cf400)
  port 9 (0x7f754c4d1480)
  port 10 (0x7f754c4d3500)
  port 11 (0x7f754c4d5580)
  port 12 (0x7f754c4d7600)
  port 13 (0x7f754c4d9680)
  port 14 (0x7f754c4db700)
  port 15 (0x7f754c4dd780)
Worker 4: lcore 7 (socket 1):
 Input rings:
  0x7f75cc7d3340
 Output rings per TX port
  port 0 (0x7f754c4df800)
  port 1 (0x7f754c4e1880)
  port 2 (0x7f754c4e3900)
  port 3 (0x7f754c4e5980)
  port 4 (0x7f754c4e7a00)
  port 5 (0x7f754c4e9a80)
  port 6 (0x7f754c4ebb00)
  port 7 (0x7f754c4edb80)
  port 8 (0x7f754c4efc00)
  port 9 (0x7f754c4f1c80)
  port 10 (0x7f754c4f3d00)
  port 11 (0x7f754c4f5d80)
  port 12 (0x7f754c4f7e00)
  port 13 (0x7f754c4f9e80)
  port 14 (0x7f754c4fbf00)
  port 15 (0x7f754c4fdf80)
Worker 5: lcore 8 (socket 1):
 Input rings:
  0x7f75cc7d53c0
 Output rings per TX port
  port 0 (0x7f754c500000)
  port 1 (0x7f754c502080)
  port 2 (0x7f754c504100)
  port 3 (0x7f754c506180)
  port 4 (0x7f754c508200)
  port 5 (0x7f754c50a280)
  port 6 (0x7f754c50c300)
  port 7 (0x7f754c50e380)
  port 8 (0x7f754c510400)
  port 9 (0x7f754c512480)
  port 10 (0x7f754c514500)
  port 11 (0x7f754c516580)
  port 12 (0x7f754c518600)
  port 13 (0x7f754c51a680)
  port 14 (0x7f754c51c700)
  port 15 (0x7f754c51e780)
Worker 6: lcore 9 (socket 1):
 Input rings:
  0x7f75cc7d7440
 Output rings per TX port
  port 0 (0x7f754c520800)
  port 1 (0x7f754c522880)
  port 2 (0x7f754c524900)
  port 3 (0x7f754c526980)
  port 4 (0x7f754c528a00)
  port 5 (0x7f754c52aa80)
  port 6 (0x7f754c52cb00)
  port 7 (0x7f754c52eb80)
  port 8 (0x7f754c530c00)
  port 9 (0x7f754c532c80)
  port 10 (0x7f754c534d00)
  port 11 (0x7f754c536d80)
  port 12 (0x7f754c538e00)
  port 13 (0x7f754c53ae80)
  port 14 (0x7f754c53cf00)
  port 15 (0x7f754c53ef80)
Worker 7: lcore 10 (socket 1):
 Input rings:
  0x7f75cc7d94c0
 Output rings per TX port
  port 0 (0x7f754c541000)
  port 1 (0x7f754c543080)
  port 2 (0x7f754c545100)
  port 3 (0x7f754c547180)
  port 4 (0x7f754c549200)
  port 5 (0x7f754c54b280)
  port 6 (0x7f754c54d300)
  port 7 (0x7f754c54f380)
  port 8 (0x7f754c551400)
  port 9 (0x7f754c553480)
  port 10 (0x7f754c555500)
  port 11 (0x7f754c557580)
  port 12 (0x7f754c559600)
  port 13 (0x7f754c55b680)
  port 14 (0x7f754c55d700)
  port 15 (0x7f754c55f780)
Worker 8: lcore 11 (socket 1):
 Input rings:
  0x7f75cc7db540
 Output rings per TX port
  port 0 (0x7f754c561800)
  port 1 (0x7f754c563880)
  port 2 (0x7f754c565900)
  port 3 (0x7f754c567980)
  port 4 (0x7f754c569a00)
  port 5 (0x7f754c56ba80)
  port 6 (0x7f754c56db00)
  port 7 (0x7f754c56fb80)
  port 8 (0x7f754c571c00)
  port 9 (0x7f754c573c80)
  port 10 (0x7f754c575d00)
  port 11 (0x7f754c577d80)
  port 12 (0x7f754c579e00)
  port 13 (0x7f754c57be80)
  port 14 (0x7f754c57df00)
  port 15 (0x7f754c57ff80)

NIC TX ports:
  0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15

I/O lcore 2 (socket 0):
 Input rings per TX port
 port 0
  worker 0, 0x7f75cc7dd5c0
  worker 1, 0x7f75cc7fddc0
  worker 2, 0x7f754c49e800
  worker 3, 0x7f754c4bf000
  worker 4, 0x7f754c4df800
  worker 5, 0x7f754c500000
  worker 6, 0x7f754c520800
  worker 7, 0x7f754c541000
  worker 8, 0x7f754c561800
 port 1
  worker 0, 0x7f75cc7df640
  worker 1, 0x7f754c480080
  worker 2, 0x7f754c4a0880
  worker 3, 0x7f754c4c1080
  worker 4, 0x7f754c4e1880
  worker 5, 0x7f754c502080
  worker 6, 0x7f754c522880
  worker 7, 0x7f754c543080
  worker 8, 0x7f754c563880
 port 2
  worker 0, 0x7f75cc7e16c0
  worker 1, 0x7f754c482100
  worker 2, 0x7f754c4a2900
  worker 3, 0x7f754c4c3100
  worker 4, 0x7f754c4e3900
  worker 5, 0x7f754c504100
  worker 6, 0x7f754c524900
  worker 7, 0x7f754c545100
  worker 8, 0x7f754c565900
 port 3
  worker 0, 0x7f75cc7e3740
  worker 1, 0x7f754c484180
  worker 2, 0x7f754c4a4980
  worker 3, 0x7f754c4c5180
  worker 4, 0x7f754c4e5980
  worker 5, 0x7f754c506180
  worker 6, 0x7f754c526980
  worker 7, 0x7f754c547180
  worker 8, 0x7f754c567980
 port 4
  worker 0, 0x7f75cc7e57c0
  worker 1, 0x7f754c486200
  worker 2, 0x7f754c4a6a00
  worker 3, 0x7f754c4c7200
  worker 4, 0x7f754c4e7a00
  worker 5, 0x7f754c508200
  worker 6, 0x7f754c528a00
  worker 7, 0x7f754c549200
  worker 8, 0x7f754c569a00
 port 5
  worker 0, 0x7f75cc7e7840
  worker 1, 0x7f754c488280
  worker 2, 0x7f754c4a8a80
  worker 3, 0x7f754c4c9280
  worker 4, 0x7f754c4e9a80
  worker 5, 0x7f754c50a280
  worker 6, 0x7f754c52aa80
  worker 7, 0x7f754c54b280
  worker 8, 0x7f754c56ba80
 port 6
  worker 0, 0x7f75cc7e98c0
  worker 1, 0x7f754c48a300
  worker 2, 0x7f754c4aab00
  worker 3, 0x7f754c4cb300
  worker 4, 0x7f754c4ebb00
  worker 5, 0x7f754c50c300
  worker 6, 0x7f754c52cb00
  worker 7, 0x7f754c54d300
  worker 8, 0x7f754c56db00
 port 7
  worker 0, 0x7f75cc7eb940
  worker 1, 0x7f754c48c380
  worker 2, 0x7f754c4acb80
  worker 3, 0x7f754c4cd380
  worker 4, 0x7f754c4edb80
  worker 5, 0x7f754c50e380
  worker 6, 0x7f754c52eb80
  worker 7, 0x7f754c54f380
  worker 8, 0x7f754c56fb80
 port 8
  worker 0, 0x7f75cc7ed9c0
  worker 1, 0x7f754c48e400
  worker 2, 0x7f754c4aec00
  worker 3, 0x7f754c4cf400
  worker 4, 0x7f754c4efc00
  worker 5, 0x7f754c510400
  worker 6, 0x7f754c530c00
  worker 7, 0x7f754c551400
  worker 8, 0x7f754c571c00
 port 9
  worker 0, 0x7f75cc7efa40
  worker 1, 0x7f754c490480
  worker 2, 0x7f754c4b0c80
  worker 3, 0x7f754c4d1480
  worker 4, 0x7f754c4f1c80
  worker 5, 0x7f754c512480
  worker 6, 0x7f754c532c80
  worker 7, 0x7f754c553480
  worker 8, 0x7f754c573c80
 port 10
  worker 0, 0x7f75cc7f1ac0
  worker 1, 0x7f754c492500
  worker 2, 0x7f754c4b2d00
  worker 3, 0x7f754c4d3500
  worker 4, 0x7f754c4f3d00
  worker 5, 0x7f754c514500
  worker 6, 0x7f754c534d00
  worker 7, 0x7f754c555500
  worker 8, 0x7f754c575d00
 port 11
  worker 0, 0x7f75cc7f3b40
  worker 1, 0x7f754c494580
  worker 2, 0x7f754c4b4d80
  worker 3, 0x7f754c4d5580
  worker 4, 0x7f754c4f5d80
  worker 5, 0x7f754c516580
  worker 6, 0x7f754c536d80
  worker 7, 0x7f754c557580
  worker 8, 0x7f754c577d80
 port 12
  worker 0, 0x7f75cc7f5bc0
  worker 1, 0x7f754c496600
  worker 2, 0x7f754c4b6e00
  worker 3, 0x7f754c4d7600
  worker 4, 0x7f754c4f7e00
  worker 5, 0x7f754c518600
  worker 6, 0x7f754c538e00
  worker 7, 0x7f754c559600
  worker 8, 0x7f754c579e00
 port 13
  worker 0, 0x7f75cc7f7c40
  worker 1, 0x7f754c498680
  worker 2, 0x7f754c4b8e80
  worker 3, 0x7f754c4d9680
  worker 4, 0x7f754c4f9e80
  worker 5, 0x7f754c51a680
  worker 6, 0x7f754c53ae80
  worker 7, 0x7f754c55b680
  worker 8, 0x7f754c57be80
 port 14
  worker 0, 0x7f75cc7f9cc0
  worker 1, 0x7f754c49a700
  worker 2, 0x7f754c4baf00
  worker 3, 0x7f754c4db700
  worker 4, 0x7f754c4fbf00
  worker 5, 0x7f754c51c700
  worker 6, 0x7f754c53cf00
  worker 7, 0x7f754c55d700
  worker 8, 0x7f754c57df00
 port 15
  worker 0, 0x7f75cc7fbd40
  worker 1, 0x7f754c49c780
  worker 2, 0x7f754c4bcf80
  worker 3, 0x7f754c4dd780
  worker 4, 0x7f754c4fdf80
  worker 5, 0x7f754c51e780
  worker 6, 0x7f754c53ef80
  worker 7, 0x7f754c55f780
  worker 8, 0x7f754c57ff80

Ring sizes:
  NIC RX     = 1024
  Worker in  = 1024
  Worker out = 1024
  NIC TX     = 1024
Burst sizes:
  I/O RX (rd = 144, wr = 144)
  Worker (rd = 144, wr = 144)
  I/O TX (rd = 144, wr = 144)

Logical core 1 (I/O) main loop.
Logical core 2 (I/O) main loop.
Logical core 3 (worker 0) main loop.
Logical core 4 (worker 1) main loop.
Logical core 5 (worker 2) main loop.
Logical core 6 (worker 3) main loop.
Logical core 7 (worker 4) main loop.
Logical core 8 (worker 5) main loop.
Logical core 9 (worker 6) main loop.
Logical core 10 (worker 7) main loop.
Logical core 11 (worker 8) main loop.
Adding Physical Port 0
00:0b:ab:58:9a:0a:
Adding Physical Port 1
00:0b:ab:58:9a:0b:
Adding Physical Port 2
00:0b:ab:58:9a:0c:
Adding Physical Port 3
00:0b:ab:58:9a:0d:
Adding Physical Port 4
00:0b:ab:59:2e:02:
Adding Physical Port 5
00:0b:ab:59:2e:03:
Adding Physical Port 6
00:0b:ab:59:2e:04:
Adding Physical Port 7
00:0b:ab:59:2e:05:
Adding Physical Port 8
00:0b:ab:59:2e:46:
Adding Physical Port 9
00:0b:ab:59:2e:47:
Adding Physical Port 10
00:0b:ab:59:2e:48:
Adding Physical Port 11
00:0b:ab:59:2e:49:
Adding Physical Port 12
00:0b:ab:59:2e:0e:
Adding Physical Port 13
00:0b:ab:59:2e:0f:
Adding Physical Port 14
00:0b:ab:59:2e:10:
Adding Physical Port 15
00:0b:ab:59:2e:11:
Assigning port id 0 to bridge br0
Assigning port id 1 to bridge br0
Assigning port id 2 to bridge br0
Assigning port id 3 to bridge br0
Assigning port id 4 to bridge br0
Assigning port id 5 to bridge br0
Assigning port id 6 to bridge br0
Assigning port id 7 to bridge br0
Assigning port id 8 to bridge br0
Assigning port id 9 to bridge br0
Assigning port id 10 to bridge br0
Assigning port id 11 to bridge br0
Assigning port id 12 to bridge br0
Assigning port id 13 to bridge br0
Assigning port id 14 to bridge br0
Assigning port id 15 to bridge br0

table features request(OFPMP_TABLE_FEATURES) issue

In OpenFlow spec 1.3.4: "If the request body contains an array of one or more ofp_table_features structs, the switch will attempt to change its flow tables to match the requested flow table configuration". This means that one or more ofp_table_features structs can be sent in a single multipart request.
(1) A controller wants to send more than one table features struct, for example 3. The message body will be
| ofp_table_features+table_property | ofp_table_features+table_property | ofp_table_features+table_property |

(2) This message is then split across the multipart mechanism, for example into 2 OFP messages.

(3) The message is reassembled by the switch. The switch processes the message and should find that 3 table features structs are in it.

I traced the code in the function ofp_table_features_request_handler(...) in ofp_table_features_handler.c.
It seems that only one ofp_table_features struct is handled; if there is more than one, the code cannot handle it.
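A handler that supports the array form would walk the body using each struct's length field. A minimal sketch follows, with a stand-in header instead of the real ofp_table_features (fields are illustrative; the real wire format is big-endian, host byte order is used here for simplicity):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the leading fields of ofp_table_features. The length
 * field covers the struct plus all of its trailing table properties. */
struct tf_header {
  uint16_t length;
  uint8_t  table_id;
};

/* Count every ofp_table_features-style struct in a reassembled
 * multipart body, instead of decoding only the first one.
 * Returns -1 on a malformed body. */
static int count_table_features(const uint8_t *body, size_t body_len) {
  size_t off = 0;
  int n = 0;
  while (off + sizeof(struct tf_header) <= body_len) {
    struct tf_header h;
    memcpy(&h, body + off, sizeof(h));
    if (h.length < sizeof(struct tf_header) || off + h.length > body_len) {
      return -1;  /* length field is nonsense or overruns the body */
    }
    off += h.length;  /* skip this struct and its properties */
    n++;
  }
  return (off == body_len) ? n : -1;
}
```

The real handler would decode each struct at `off` before advancing, but the loop shape is the point: the current code effectively does one iteration and stops.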

Crash when flows are specified in the DSL file (w/ DPDK)

On Lagopus 0.2 with DPDK and the following DSL file, the switch crashes with a segfault.

interface interface01 create -type ethernet-dpdk-phy -port-number 0
interface interface02 create -type ethernet-dpdk-phy -port-number 1
port port01 create -interface interface01
port port02 create -interface interface02
bridge bridge01 create -port port01 1 -port port02 2 -dpid 0x1
bridge bridge01 enable
flow bridge01 add in_port=1 apply_actions=output:2

This happens in clear_worker_flowcache() at the line clear_all_cache(lp->cache);, because at that time lp->cache is NULL.

Currently I cannot find a quick fix for this; it looks like an architectural issue. Below is an explanation.

DPDK-based flow cache initialization happens during dataplane_start(). But adding the flow eventually accesses flow_cache via datastore_init() that runs BEFORE dataplane.

Here is the trace:

  • main() -> lagopus_mainloop() -> s_do_mainloop() -> s_prologue()
  • In prologue, lagopus_module_init_all() -> datastore_init() -> add_sub_cmd_parse() -> flow_cmd_mod_add_cmd_parse() -> ofp_flow_mod_check_add() -> clear_flowcache()
  • In prologue, lagopus_module_start_all() -> dataplane_start() -> app_lcore_main_loop() -> app_lcore_main_loop_worker() -> init_flowcache()
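Until the initialization order is reworked, a defensive guard would avoid the segfault. A Python sketch of that guard (the real code is C; `lp` and `cache` stand in for the worker-local structures):

```python
def clear_worker_flowcache(lp):
    # Guard: dataplane workers allocate their cache in init_flowcache(),
    # which runs during dataplane_start(); a flow added from the dsl file
    # is parsed in datastore_init(), i.e. before the cache exists.
    if lp is None or lp.cache is None:
        return  # nothing to clear yet
    lp.cache.clear()
```

This does not fix the ordering problem itself, but it makes the early flow add a no-op instead of a NULL dereference.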

lagopus 0.29/dpdk16.07 low throughput

I'm experiencing low throughput (about 0.24 Mpps) with lagopus on 10G interfaces (Intel XL710). I'm using lagopus 0.29 without changes on a server with a 1.7 GHz CPU; 3 cores are used for lagopus. Lagopus is configured with flows that just send the input on port 1 to port 2 and vice versa, and similarly for 8 ports in total.
I'm using MoonGen as a traffic generator on a 2nd server of the same type, generating 2 streams of 3 Gbps each with a packet size of 512 B. This is a packet rate of about 1.5 Mpps. I ran the traffic generator for a couple of seconds, generating about 77M packets in total. Afterwards I checked how often the flows in lagopus had been hit: just 13M packets. And the number of packets returned to the traffic-generator server was just 3M; the receive rate was about 0.24 Mpps.
I'm using the same DPDK drivers for the packet generator as well; when connecting ports of the packet generator directly with a cable, the full 6 Gbps of transmitted traffic is received again.
Are there specific configuration/compile parameters I need to tune to achieve higher throughput?

Crash when using both dpdk and rawsock interfaces

Upon initializing a packet, rawsock interfaces do not set pkt->cache, so it can point to anything.
In my case pkt->cache seemed to point to the local cache of a DPDK worker thread.
(I guess the rawsock packet was allocated from memory previously used to store a DPDK packet.)

As a result, two threads were using the same cache structure at once.
Cache overflow and reallocation in one thread caused memory corruption and crash in the other one.

The problem was gone after I inserted an explicit pkt->cache assignment in src/dataplane/mgr/sock_io.c:

pkt = alloc_lagopus_packet();
pkt->cache = NULL;
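The hazard is buffer recycling: a freed DPDK packet's memory can be handed to the rawsock path with the old cache pointer intact. A Python sketch of the fix, using a hypothetical free-list allocator (not Lagopus's actual one) to show why the per-packet reset matters:

```python
class Packet:
    __slots__ = ("cache", "data")

_free_list = []  # recycled packet buffers, shared across subsystems

def alloc_lagopus_packet():
    # Reset per-packet fields on every allocation so a recycled buffer
    # never inherits another thread's flow-cache pointer.
    pkt = _free_list.pop() if _free_list else Packet()
    pkt.cache = None   # the one-line fix from the report
    pkt.data = b""
    return pkt

def free_lagopus_packet(pkt):
    _free_list.append(pkt)
```

Without the reset, two threads can end up sharing one cache structure, and a reallocation in one corrupts memory in the other, exactly as described above.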

group table sends to ingress port

I've used a group table to implement port mirroring among 2 ports:
~/ofctl_script/add_flow -t group '{"type":"ALL","group_id":8,"buckets":[{"actions":["OUTPUT:1"]},{"actions":["OUTPUT:2"]},{"actions":["OUTPUT:3"]}]}'

~/ofctl_script/add_flow '{"table_id":0,"priority":10,"actions":["GROUP:8"],"match":{"in_port":1}}'
~/ofctl_script/add_flow '{"table_id":0,"priority":10,"actions":["GROUP:8"],"match":{"in_port":2}}'

When sending a packet to port 1, it actually appears on all 3 ports again. But according to the OpenFlow definition, the clone whose egress port corresponds to the ingress port should be discarded. See OF 1.3.2, section 5.6.1:
Required: all: Execute all buckets in the group. This group is used for multicast or broadcast
forwarding. The packet is effectively cloned for each bucket; one packet is processed for each
bucket of the group. If a bucket directs a packet explicitly out the ingress port, this packet clone
is dropped.

Do I miss something?
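For reference, the spec-required behavior can be sketched like this (Python, with buckets reduced to plain output-port lists for illustration; not Lagopus's group-table code):

```python
def all_group_outputs(in_port, buckets):
    """OF1.3 section 5.6.1: an ALL group clones the packet per bucket,
    but a clone that a bucket explicitly outputs to the ingress port
    must be dropped (only the reserved OFPP_IN_PORT may echo it)."""
    outputs = []
    for bucket_ports in buckets:
        for port in bucket_ports:
            if port == in_port:
                continue  # drop this clone instead of echoing it
            outputs.append(port)
    return outputs
```

With the mirroring group above, a packet arriving on port 1 should therefore only leave on ports 2 and 3.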

segfault after sending ping to simple switch (DPDK, v0.2.4)

I was able to ping without a crash when running the Ryu simple switch with v0.2.3.
However, I upgraded lagopus from v0.2.3 to v0.2.4 by simply overwriting it with the following steps.

The only difference from the v0.2.3 build was one line: $ git checkout -b 0.2.3 refs/tags/v0.2.3

Could this be a regression between v0.2.3 and v0.2.4?

$ git clone https://github.com/lagopus/lagopus.git
$ cd lagopus
$ git checkout -b 0.2.4 refs/tags/v0.2.4
$ ./configure
$ make
$ sudo make install

Commands / Logs:

On host running vswitch:
$ ryu-manager --verbose /usr/local/lib/python2.7/dist-packages/ryu/app/simple_switch_13.py
$ sudo lagopus -d -- -c3 -n1 -- -p7

On host connected to lagopus port:
$ sudo ip netns exec host1 ping 10.0.0.2
  >> no ping response, but the flow was added to the vswitch.

On Ryu:
EVENT ofp_event->SimpleSwitch13 EventOFPPacketIn
packet in 1 08:00:27:e9:6a:b5 08:00:27:24:12:72 2
EVENT ofp_event->SimpleSwitch13 EventOFPPacketIn
packet in 1 08:00:27:24:12:72 08:00:27:e9:6a:b5 1

On host running vswitch:
$ tail -f /var/log/syslog
Feb 17 16:17:38 lagopus lagopus[25787]: [Wed Feb 17 16:17:38 JST 2016][DEBUG][25787:0x00007f019affd700:ofp_handler]:./ofp_handler.c:645:s_process_channelq_entry: RECV: OFPT_PACKET_OUT (xid=4258370165)
Feb 17 16:17:38 lagopus lagopus[25787]: [Wed Feb 17 16:17:38 JST 2016][DEBUG][25787:0x00007f019b7fe700:agent]:./ofp_handler.c:253:ofp_handler_get_channelq: called. (retptr: 0x7f019b7f9a00)
Feb 17 16:17:38 lagopus lagopus[25787]: [Wed Feb 17 16:17:38 JST 2016][DEBUG][25787:0x00007f019affd700:ofp_handler]:./ofp_handler.c:645:s_process_channelq_entry: RECV: OFPT_FLOW_MOD (xid=4258370166)
Feb 17 16:17:38 lagopus lagopus[25787]: [Wed Feb 17 16:17:38 JST 2016][DEBUG][25787:0x00007f019affd700:ofp_handler]:./ofp_handler.c:645:s_process_channelq_entry: RECV: OFPT_PACKET_OUT (xid=4258370167)
Feb 17 16:17:38 lagopus kernel: [21751.282781] ofp_dpqueue[25795]: segfault at b9af7280 ip 00007f01c6505730 sp 00007f019bffd900 error 6 in liblagopus_dataplane.so.0.0.0[7f01c6280000+33b000]

Lagopus configuration:

$ cat /usr/local/etc/lagopus/lagopus.dsl
channel channel01 create -dst-addr 127.0.0.1 -protocol tcp
controller controller01 create -channel channel01 -role equal -connection-type main
interface interface01 create -type ethernet-dpdk-phy -port-number 0
interface interface02 create -type ethernet-dpdk-phy -port-number 1
interface interface03 create -type ethernet-dpdk-phy -port-number 2
port port01 create -interface interface01
port port02 create -interface interface02
port port03 create -interface interface03
bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -port port03 3 -dpid 0x1
# bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -port port03 3 -dpid 0x1 -fail-mode standalone
bridge bridge01 enable

Crash on write_actions

Lagopus crashes while processing a packet, matched by the following set of rules:

table=0,actions=goto_table:1
table=1,in_port=3,ip,nw_src=10.10.111.2,nw_dst=10.10.111.1,actions=goto_table:2
table=2,ip,ip_dscp=24,actions=write_actions(set_queue:2,output:2)

The cause lies within src/dataplane/ofproto/datapath.c:

  • lagopus_match_and_action enters an endless loop,
  • dp_openflow_match continuously pushes new entries into pkt->matched_flow,
  • the number of entries exceeds LAGOPUS_DP_PIPELINE_MAX and corrupts memory.

To fix this, it was enough to change one comparison in lagopus_match_and_action so that we break the loop even when the pipeline does not specify a next table to process:

    for (;;) {
      rv = dp_openflow_match(pkt);
      if (rv != LAGOPUS_RESULT_OK) {
        break;
      }
      rv = dp_openflow_do_action(pkt);
+      // the meaning of RV in this context is the following:
+      // RV > 0, it holds the id of the table to go next
+      // RV = 0, everything is ok, pipeline processing finished
+      // RV < 0, pipeline handling should be terminated right now
+      //  (i.e. there was an error, or an explicit output action)

+      if (rv <= LAGOPUS_RESULT_OK) {
-      if (rv < LAGOPUS_RESULT_OK) {
        break;
      }
      if (rv == pkt->table_id) {
        rv = LAGOPUS_RESULT_OK;
        break;
      }
      pkt->table_id = rv;
    }

It may be nice to make lagopus_match_and_action less obscure.
I am also not sure it is OK to push matched flows without any range checking.
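A less obscure version of the loop might look like this (Python pseudocode of the C loop, with the proposed `rv <= OK` termination and `PIPELINE_MAX` as a hypothetical stand-in for LAGOPUS_DP_PIPELINE_MAX):

```python
LAGOPUS_RESULT_OK = 0
PIPELINE_MAX = 254  # stand-in bound for LAGOPUS_DP_PIPELINE_MAX

def match_and_action(pkt, match_fn, action_fn):
    """Run the match/action pipeline until it finishes or fails."""
    while True:
        rv = match_fn(pkt)
        if rv != LAGOPUS_RESULT_OK:
            break
        rv = action_fn(pkt)
        # rv > 0: id of the next table; rv == 0: pipeline finished;
        # rv < 0: error or explicit output, stop immediately.
        if rv <= LAGOPUS_RESULT_OK:
            break
        if rv == pkt["table_id"] or rv > PIPELINE_MAX:
            rv = LAGOPUS_RESULT_OK  # refuse to loop or overrun
            break
        pkt["table_id"] = rv
    return rv
```

The bound check also addresses the range-checking worry: the pipeline can never revisit a table or push entries past the maximum.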

Build failed on Debian (squeeze)

I tried to compile lagopus with gcc 4.8.1.
A compile error occurred with ./configure && make (no -j option):

./sock/sock.c:446:20: error: 'IFLA_STATS64' undeclared (first use in this function)
               case IFLA_STATS64:
                    ^
./sock/sock.c:446:20: note: each undeclared identifier is reported only once for each function it appears in
./sock/sock.c:461:39: error: dereferencing pointer to incomplete type
     stats->ofp.rx_packets = link_stats->rx_packets;
                                       ^
./sock/sock.c:462:39: error: dereferencing pointer to incomplete type
     stats->ofp.tx_packets = link_stats->tx_packets;
                                       ^
./sock/sock.c:463:37: error: dereferencing pointer to incomplete type
     stats->ofp.rx_bytes = link_stats->rx_bytes;
                                     ^
./sock/sock.c:464:37: error: dereferencing pointer to incomplete type
     stats->ofp.tx_bytes = link_stats->tx_bytes;
                                     ^
./sock/sock.c:465:39: error: dereferencing pointer to incomplete type
     stats->ofp.rx_dropped = link_stats->rx_dropped;
                                       ^
./sock/sock.c:466:39: error: dereferencing pointer to incomplete type
     stats->ofp.tx_dropped = link_stats->tx_dropped;
                                       ^
./sock/sock.c:467:38: error: dereferencing pointer to incomplete type
     stats->ofp.rx_errors = link_stats->rx_errors;
                                      ^
./sock/sock.c:468:38: error: dereferencing pointer to incomplete type
     stats->ofp.tx_errors = link_stats->tx_errors;
                                      ^
./sock/sock.c:469:41: error: dereferencing pointer to incomplete type
     stats->ofp.rx_frame_err = link_stats->rx_frame_errors;
                                         ^
./sock/sock.c:470:40: error: dereferencing pointer to incomplete type
     stats->ofp.rx_over_err = link_stats->rx_over_errors;
                                        ^
./sock/sock.c:471:40: error: dereferencing pointer to incomplete type
     stats->ofp.rx_crc_err =  link_stats->rx_crc_errors;
                                        ^
./sock/sock.c:472:39: error: dereferencing pointer to incomplete type
     stats->ofp.collisions = link_stats->collisions;
                                       ^
./sock/sock.c:370:22: warning: unused variable 'sa' [-Wunused-variable]
   struct sockaddr_nl sa;
                      ^
./sock/sock.c: In function 'datapath_thread_loop':
./sock/sock.c:528:5: warning: implicit declaration of function 'vector_max' [-Wimplicit-function-declaration]
     nb_ports = vector_max(dpmgr->ports) + 1;
     ^
./sock/sock.c:528:5: warning: nested extern declaration of 'vector_max' [-Wnested-externs]
./sock/sock.c:528:41: warning: conversion to 'unsigned int' from 'int' may change the sign of the result [-Wsign-conversion]
     nb_ports = vector_max(dpmgr->ports) + 1;
                                         ^
./sock/sock.c: In function 'get_flowcache_statistics':
./sock/sock.c:582:41: warning: unused parameter 'bridge' [-Wunused-parameter]
 get_flowcache_statistics(struct bridge *bridge, struct ofcachestat *st) {

the problem is

IFLA_STATS64 and struct rtnl_link_stats64 are not declared. It seems some declaration or dependency is missing; the kernel headers shipped with squeeze likely predate their introduction.

Problem processing decap of VXLAN packets with Lagopus

Hi,
I have a problem processing decap of VXLAN packets with Lagopus (with DPDK) and Extension Ryu.

●Using Software Version
Lagopus: commit 7097920
Extension Ryu : commit e1e343acc21637ccdaa944a5c51d41f725006278
Hosted OS: Ubuntu 14.04

●Using Network
Host1(Hardware) - 【Metal Cable】 - Lagopus1(Hardware) - 【Metal Cable】 - Lagopus2(Hardware) - 【Metal Cable】 - Host2(Hardware)

●Symptoms
I used an edited version of the sample code (_1) to test decap of the VXLAN header.
With a single match field, the flow is inserted and communication works.
But with a match of multiple conditions (e.g. in_port, vxlan_vni, udp_src, etc.), the flow is not inserted.
(_1) lagopus/test/ryu/tunnel_vxlan.py

●Sample code for Extension Ryu (for Lagopus1)

from ryu.base.app_manager import RyuApp
from ryu.controller.ofp_event import EventOFPSwitchFeatures
from ryu.controller.handler import set_ev_cls
from ryu.controller.handler import CONFIG_DISPATCHER
from ryu.ofproto.ofproto_v1_3 import OFP_VERSION

class TunnelVxlan(RyuApp):
    OFP_VERSIONS = [OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(TunnelVxlan, self).__init__(*args, **kwargs)

    @set_ev_cls(EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        self.logger.info('installing flow')
        self.install_flow(datapath)

    def install_flow(self, datapath):
        ofp = datapath.ofproto
        ofp_parser = datapath.ofproto_parser
        self.install_flow_encap(datapath, ofp, ofp_parser)
        self.install_flow_decap(datapath, ofp, ofp_parser)

    def install_flow_encap(self, datapath, ofp, ofp_parser):
        # encap vxlan
        self.logger.info('installing encap')
        cookie = cookie_mask = 0
        table_id = 0
        idle_timeout = hard_timeout = 0
        buffer_id = ofp.OFP_NO_BUFFER
        priority = 319
        match = ofp_parser.OFPMatch(in_port=1)
        actions = [ofp_parser.OFPActionEncap(201397),
                   ofp_parser.OFPActionSetField(vxlan_vni=1),
                   ofp_parser.OFPActionEncap(131089),
                   ofp_parser.OFPActionSetField(udp_src=5432),
                   ofp_parser.OFPActionSetField(udp_dst=4789),
                   ofp_parser.OFPActionEncap(67584),
                   ofp_parser.OFPActionSetField(ipv4_dst="10.0.0.1"),
                   ofp_parser.OFPActionSetField(ipv4_src="10.0.0.2"),
                   ofp_parser.OFPActionSetNwTtl(64),
                   ofp_parser.OFPActionEncap(0),
                   ofp_parser.OFPActionSetField(eth_dst="00:90:0B:46:57:8B"),
                   ofp_parser.OFPActionSetField(eth_src="00:90:0B:46:58:09"),
                   ofp_parser.OFPActionOutput(4)]
        inst = [ofp_parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
        req = ofp_parser.OFPFlowMod(datapath, cookie, cookie_mask,
                                    table_id, ofp.OFPFC_ADD,
                                    idle_timeout, hard_timeout,
                                    priority, buffer_id,
                                    ofp.OFPP_ANY, ofp.OFPG_ANY,
                                    ofp.OFPFF_SEND_FLOW_REM,
                                    match, inst)
        datapath.send_msg(req)

    def install_flow_decap(self, datapath, ofp, ofp_parser):
        # decap vxlan
        self.logger.info('installing decap')
        cookie = cookie_mask = 0
        table_id = 0
        idle_timeout = hard_timeout = 0
        buffer_id = ofp.OFP_NO_BUFFER
        priority = 320
        match = ofp_parser.OFPMatch(in_port=4,vxlan_vni=1,udp_src=5432)
        actions = [ofp_parser.OFPActionDecap(cur_pkt_type=0, new_pkt_type=67584),
                   ofp_parser.OFPActionDecap(cur_pkt_type=67584, new_pkt_type=131089),
                   ofp_parser.OFPActionDecap(cur_pkt_type=131089, new_pkt_type=201397),
                   ofp_parser.OFPActionDecap(cur_pkt_type=201397, new_pkt_type=0),
                   ofp_parser.OFPActionOutput(1)]
        inst = [ofp_parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
        req = ofp_parser.OFPFlowMod(datapath, cookie, cookie_mask,
                                    table_id, ofp.OFPFC_ADD,
                                    idle_timeout, hard_timeout,
                                    priority, buffer_id,
                                    ofp.OFPP_ANY, ofp.OFPG_ANY,
                                    ofp.OFPFF_SEND_FLOW_REM,
                                    match, inst)
        datapath.send_msg(req)

If you have a resolution to this problem, please let me know.

port status is "link-down" in Lagopus 0.2 w/ rawsock config.

Dears,

I'm testing rawsock in v0.2.
After starting lagopus and entering the "lagosh -c show port" command, the port state is ["link-down"].
Do I need more settings?

BR,
Mark

Logs are shown below.

mark@Dell-T110:~$ lagosh -c show port
port01:
[
{
"supported-features": [],
"state": [
"link-down"
],
"config": [],
"name": "port01",
"curr-features": [
"other"
]
}
]
port02:
[
{
"supported-features": [],
"state": [
"link-down"
],
"config": [],
"name": "port02",
"curr-features": [
"other"
]
}
]
port03:
[
{
"supported-features": [],
"state": [
"link-down"
],
"config": [],
"name": "port03",
"curr-features": [
"other"
]
}
]
port04:
[
{
"supported-features": [],
"state": [
"link-down"
],
"config": [],
"name": "port04",
"curr-features": [
"other"
]
}
]

mark@Dell-T110:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether bc:30:5b:dd:c7:c0 brd ff:ff:ff:ff:ff:ff
3: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:26:55:df:22:32 brd ff:ff:ff:ff:ff:ff
4: p2p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 90:e2:ba:91:2f:2c brd ff:ff:ff:ff:ff:ff
5: p1p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:26:55:df:22:33 brd ff:ff:ff:ff:ff:ff
6: p2p2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 90:e2:ba:91:2f:2d brd ff:ff:ff:ff:ff:ff
7: p3p1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:26:55:df:23:a6 brd ff:ff:ff:ff:ff:ff
8: p3p2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:26:55:df:23:a7 brd ff:ff:ff:ff:ff:ff
9: p4p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 68:05:ca:18:b5:87 brd ff:ff:ff:ff:ff:ff
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff

mark@Dell-T110:~$ more /usr/local/etc/lagopus/lagopus.conf
channel channel01 create -dst-addr 10.1.9.51 -protocol tcp

channel channel02 create -dst-addr 10.1.9.51 -protocol tcp

controller controller01 create -channel channel01 -role equal -connection-type main

controller controller02 create -channel channel02 -role equal -connection-type main

interface interface01 create -type ethernet-dpdk-phy -port-number 0

interface interface02 create -type ethernet-dpdk-phy -port-number 1

interface interface01 create -type ethernet-rawsock -device p1p1 -port-number 0
interface interface02 create -type ethernet-rawsock -device p1p2 -port-number 1
interface interface03 create -type ethernet-rawsock -device p3p1 -port-number 2
interface interface04 create -type ethernet-rawsock -device p3p2 -port-number 3

port port01 create -interface interface01
port port02 create -interface interface02
port port03 create -interface interface03
port port04 create -interface interface04

bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -port port03 3 -port port04 4 -dpid 0xB

bridge bridge01 create -controller controller01 -port port01 1 -port port02 2 -dpid 0xB

bridge bridge02 create -controller controller02 -port port03 1 -port port04 2 -dpid 0xC

bridge bridge01 enable

bridge bridge02 enable

The strange lagopus behavior on 4-core INTEL CPU

The behavior is strange when lagopus runs on a 4-core Atom CPU; on an 8-core Atom CPU, lagopus works fine.
We start lagopus with the following command:

lagopus -d -- -cff -n2 -- -p3f

We set 6 flows on our lagopus softswitch via the Ryu controller.
The flows are as follows:

  1. priority=65535, in_port=1, output=2
  2. priority=65535, in_port=2, output=1
  3. priority=65535, in_port=3, output=4
  4. priority=65535, in_port=4, output=3
  5. priority=65535, in_port=5, output=6
  6. priority=65535, in_port=6, output=5

port 1 connect our traffic generator port 16
port 2 connect our traffic generator port 17
port 3 connect our traffic generator port 18
port 4 connect our traffic generator port 19

When we start sending traffic to port 1, port 2 outputs the traffic. The result looks good.
(screenshot)

But when we start sending traffic to both port 1 and port 2, port 1 outputs most of the traffic while port 2 outputs little or nothing. The behavior is strange.
(screenshot)

We also ran the Ryu test suite.
On the 8-core Atom CPU, lagopus works fine and the passing test cases are similar to those reported on the Ryu web site. But on the 4-core Atom CPU, the Ryu tests always report a barrier-reply timeout error.

The following is our app running Ryu controller.

# Copyright (C) 2011 Nippon Telegraph and Telephone Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet


class SimpleSwitch13(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        self.mac_to_port = {}

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # install table-miss flow entry
        #
        # We specify NO BUFFER to max_len of the output action due to
        # OVS bug. At this moment, if we specify a lesser number, e.g.,
        #128, OVS will send Packet-In with invalid buffer_id and
        # truncated packet data. In that case, we cannot output packets
        # correctly.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
    
        self.logger.info("switch %s connected", datapath.id)
        self.add_flow(datapath, 0, match, actions)

        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        self.add_flow(datapath, 65535, match, actions)

        match = parser.OFPMatch(in_port=2)
        actions = [parser.OFPActionOutput(1)]
        self.add_flow(datapath, 65535, match, actions)

        match = parser.OFPMatch(in_port=3)
        actions = [parser.OFPActionOutput(4)]
        self.add_flow(datapath, 65535, match, actions)

        match = parser.OFPMatch(in_port=4)
        actions = [parser.OFPActionOutput(3)]
        self.add_flow(datapath, 65535, match, actions)

        match = parser.OFPMatch(in_port=5)
        actions = [parser.OFPActionOutput(6)]
        self.add_flow(datapath, 65535, match, actions)

        match = parser.OFPMatch(in_port=6)
        actions = [parser.OFPActionOutput(5)]
        self.add_flow(datapath, 65535, match, actions)

    def add_flow(self, datapath, priority, match, actions):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]

        mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                match=match, instructions=inst)
        datapath.send_msg(mod)

The following is the CPU information:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Genuine Intel(R) CPU @ 2.40GHz
stepping : 8
microcode : 0x118
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch arat epb dtherm tpr_shadow vnmi flexpriority ept vpid smep erms
bogomips : 4800.38
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Genuine Intel(R) CPU @ 2.40GHz
stepping : 8
microcode : 0x118
cpu MHz : 2400.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch arat epb dtherm tpr_shadow vnmi flexpriority ept vpid smep erms
bogomips : 4799.87
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Genuine Intel(R) CPU @ 2.40GHz
stepping : 8
microcode : 0x118
cpu MHz : 2400.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch arat epb dtherm tpr_shadow vnmi flexpriority ept vpid smep erms
bogomips : 4799.87
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Genuine Intel(R) CPU @ 2.40GHz
stepping : 8
microcode : 0x118
cpu MHz : 2400.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch arat epb dtherm tpr_shadow vnmi flexpriority ept vpid smep erms
bogomips : 4799.87
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

PacketOut: packet malformed when plen < 60 (v0.2.1)

Since v0.2.1, Lagopus has been sending malformed packets when the packet length in PacketOut is less than 60 (malformed = the initial 10-20 bytes of the packet are 0x00).

It seems a change in rawsock_send_packet_physical() caused this issue.
I have confirmed that the correct packet is sent out after applying the change below to the v0.2.1 code.

- memset(OS_M_APPEND(m, 60 - plen), 0, (uint32_t)(60 - plen));
+ memset(OS_M_APPEND(m, 60 - plen) + plen, 0, (uint32_t)(60 - plen));

However, I was not sure whether this is the correct fix or whether I should change the OS_M_APPEND macro instead.
I would appreciate it if you could clarify the intent of the macro and the correct (clean) way to fix this.
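The intended on-wire result can be illustrated independently of the mbuf macros. A Python sketch (`pad_frame` is hypothetical) showing only that the zero padding must follow the original payload, whereas the v0.2.1 code zeroed bytes starting at the buffer head:

```python
def pad_frame(data, min_len=60):
    """Pad a frame up to the Ethernet minimum of 60 bytes (without FCS).

    Correct behavior: zeros go at the TAIL, after the payload. The
    buggy memset instead wrote 60 - plen zeros over the first bytes
    of the frame, which matches the 'initial 10-20 bytes are 0x00'
    symptom for typical short PacketOut payloads."""
    if len(data) >= min_len:
        return data
    return data + bytes(min_len - len(data))
```

This is why adding `+ plen` to the memset destination fixes the symptom: it moves the zeroed region from the head of the buffer to the appended tail.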

FYI: changes between 0.2.1 and 0.2.0.

  • src/dataplane/sock/sock.c:
# Lagopus 0.2.1
rawsock_send_packet_physical(struct lagopus_packet *pkt, uint32_t portid) {
  if (pollfd[portid].fd != 0) {
    OS_MBUF *m;
    uint32_t plen;

    m = pkt->mbuf;
    plen = OS_M_PKTLEN(m);
    if (plen < 60) {
      memset(OS_M_APPEND(m, 60 - plen), 0, (uint32_t)(60 - plen));
    }
    (void)write(pollfd[portid].fd, pkt->mbuf->data, OS_M_PKTLEN(pkt->mbuf));
  }
  lagopus_packet_free(pkt);
  return 0;
}
# Lagopus 0.2.0
rawsock_send_packet_physical(struct lagopus_packet *pkt, uint32_t portid) {
  if (pollfd[portid].fd != 0) {
    (void)write(pollfd[portid].fd, pkt->mbuf->data, OS_M_PKTLEN(pkt->mbuf));
  }
  lagopus_packet_free(pkt);
  return 0;
}

lagopus crash when running STP test

We use lagopus 0.1.1 and Ryu to test STP, but Lagopus crashes. Below are the settings and messages:
(a) the command used to start Lagopus
(b) the messages when Lagopus crashed
(c) the Lagopus config
(d) the app running in Ryu

(a) lagopus command: /home/genie/lagopus/src/cmds/lagopus -d -C ./lagopus.conf -- -c3 -n1 -- -p3f

(b) messages when Lagopus crashed:
PANIC in rte_free():
Fatal error: Invalid memory
16: [/lib/libc.so.6(clone+0x6d) [0x7f35752f3b6d]]
15: [/lib/libpthread.so.0(+0x68ca) [0x7f35757908ca]]
14: [/home/genie/lagopus/src/lib/.libs/liblagopus_util.so.0(+0xeaaa) [0x7f3576074aaa]]
13: [/home/genie/lagopus/src/agent/.libs/liblagopus_agent.so.0(+0x6aa86) [0x7f3576530a86]]
12: [/home/genie/lagopus/src/agent/.libs/liblagopus_agent.so.0(+0x6b1a7) [0x7f35765311a7]]
11: [/home/genie/lagopus/src/agent/.libs/liblagopus_agent.so.0(+0x69e13) [0x7f357652fe13]]
10: [/home/genie/lagopus/src/agent/.libs/liblagopus_agent.so.0(ofp_port_mod_handle+0x29c) [0x7f357654c12d]]
9: [/home/genie/lagopus/src/dataplane/.libs/liblagopus_dataplane.so.0(ofp_port_mod_modify+0x5f) [0x7f357683e124]]
8: [/home/genie/lagopus/src/dataplane/.libs/liblagopus_dataplane.so.0(port_config+0x2ba) [0x7f357683df78]]
7: [/home/genie/lagopus/src/dataplane/.libs/liblagopus_dataplane.so.0(lagopus_change_physical_port+0x292) [0x7f35768636d5]]
6: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus(rte_eth_dev_start+0x92) [0x46d3c2]]
5: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus() [0x4595bf]]
4: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus() [0x45e3b1]]
3: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus() [0x46da5e]]
2: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus(__rte_panic+0xc4) [0x404ce4]]
1: [/home/genie/lagopus/src/cmds/.libs/lt-lagopus() [0x47900e]]
Aborted

(c) lagopus config file:
interface {
ethernet {
eth0;
eth1;
eth2;
eth3;
eth4;
eth5;
}
}

bridge-domains {
br0 {
dpid 0.00:00:00:00:99:00;
port {
eth0;
eth1;
}
controller {
192.168.6.136;
}
}
br1 {
dpid 0.00:00:00:00:99:01;
port {
eth2;
eth3;
}
controller {
192.168.6.136;
}
}
br2 {
dpid 0.00:00:00:00:99:02;
port {
eth4;
eth5;
}
controller {
192.168.6.136;
}
}
}
(d) Ryu app for STP:
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib import dpid as dpid_lib
from ryu.lib import stplib
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet

class SimpleSwitchStp13(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
_CONTEXTS = {'stplib': stplib.Stp}

    def __init__(self, *args, **kwargs):
        super(SimpleSwitchStp13, self).__init__(*args, **kwargs)
        self.mac_to_port = {}
        self.stp = kwargs['stplib']

        # Sample of stplib config.
        #  please refer to stplib.Stp.set_config() for details.
        """
        config = {dpid_lib.str_to_dpid('0000080027016c98'):
                     {'bridge': {'priority': 0x9000}},
                  dpid_lib.str_to_dpid('0000a25b2fe13249'):
                     {'bridge': {'priority': 0x9000}},
                  dpid_lib.str_to_dpid('0000080027bc4451'):
                     {'bridge': {'priority': 0x9000}}}
        self.stp.set_config(config)
        """

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # install table-miss flow entry
        #
        # We specify NO BUFFER to max_len of the output action due to
        # OVS bug. At this moment, if we specify a lesser number, e.g.,
        # 128, OVS will send Packet-In with invalid buffer_id and
        # truncated packet data. In that case, we cannot output packets
        # correctly.  The bug has been fixed in OVS v2.1.0.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, 0, match, actions)

    def add_flow(self, datapath, priority, match, actions, buffer_id=None):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        if buffer_id:
            mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
                                    priority=priority, match=match,
                                    instructions=inst)
        else:
            mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                    match=match, instructions=inst)
        datapath.send_msg(mod)

    def delete_flow(self, datapath):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        for dst in self.mac_to_port[datapath.id].keys():
            match = parser.OFPMatch(eth_dst=dst)
            mod = parser.OFPFlowMod(
                datapath, command=ofproto.OFPFC_DELETE,
                out_port=ofproto.OFPP_ANY, out_group=ofproto.OFPG_ANY,
                priority=1, match=match)
            datapath.send_msg(mod)

    @set_ev_cls(stplib.EventPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # If you hit this you might want to increase
        # the "miss_send_length" of your switch
        if ev.msg.msg_len < ev.msg.total_len:
            self.logger.debug("packet truncated: only %s of %s bytes",
                              ev.msg.msg_len, ev.msg.total_len)
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']

        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocols(ethernet.ethernet)[0]

        dst = eth.dst
        if dst == '01:80:c2:00:00:0e':
            return
        src = eth.src

        dpid = datapath.id
        self.mac_to_port.setdefault(dpid, {})

        self.logger.info("packet in %s %s %s %s", dpid, src, dst, in_port)

        # learn a mac address to avoid FLOOD next time.
        self.mac_to_port[dpid][src] = in_port
        #self.logger.info("mac_to_port: %s", mac_to_port);

        if dst in self.mac_to_port[dpid]:
            out_port = self.mac_to_port[dpid][dst]
            #self.logger.info("out_port: %s", out_port);
        else:
            out_port = ofproto.OFPP_FLOOD

        actions = [parser.OFPActionOutput(out_port)]

        # install a flow to avoid packet_in next time
        if out_port != ofproto.OFPP_FLOOD:
            match = parser.OFPMatch(in_port=in_port, eth_dst=dst)
            # verify if we have a valid buffer_id, if yes avoid to send both
            # flow_mod & packet_out
            if msg.buffer_id != ofproto.OFP_NO_BUFFER:
                self.add_flow(datapath, 1, match, actions, msg.buffer_id)
                #self.logger.info("add_flow 1");
                return
            else:
                self.add_flow(datapath, 1, match, actions)
                #self.logger.info("add_flow 2");

        data = None
        if msg.buffer_id == ofproto.OFP_NO_BUFFER:
            data = msg.data

        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        datapath.send_msg(out)

    @set_ev_cls(stplib.EventTopologyChange, MAIN_DISPATCHER)
    def _topology_change_handler(self, ev):
        dp = ev.dp
        dpid_str = dpid_lib.dpid_to_str(dp.id)
        msg = 'Receive topology change event. Flush MAC table.'
        self.logger.debug("[dpid=%s] %s", dpid_str, msg)

        if dp.id in self.mac_to_port:
            self.delete_flow(dp)
            del self.mac_to_port[dp.id]

    @set_ev_cls(stplib.EventPortStateChange, MAIN_DISPATCHER)
    def _port_state_change_handler(self, ev):
        dpid_str = dpid_lib.dpid_to_str(ev.dp.id)
        of_state = {stplib.PORT_STATE_DISABLE: 'DISABLE',
                    stplib.PORT_STATE_BLOCK: 'BLOCK',
                    stplib.PORT_STATE_LISTEN: 'LISTEN',
                    stplib.PORT_STATE_LEARN: 'LEARN',
                    stplib.PORT_STATE_FORWARD: 'FORWARD'}
        self.logger.debug("[dpid=%s][port=%d] state=%s",
                          dpid_str, ev.port_no, of_state[ev.port_state])
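The learning-switch logic in `_packet_in_handler` above reduces to a per-dpid dictionary lookup. A minimal standalone sketch of that decision, with no Ryu dependency (`FLOOD` and `learn_and_forward` are illustrative names, not part of the app):

```python
FLOOD = -1  # stand-in for ofproto.OFPP_FLOOD

def learn_and_forward(mac_to_port, dpid, src, dst, in_port):
    """Learn src -> in_port for this dpid, then return the output
    port for dst (FLOOD while the destination is still unknown)."""
    table = mac_to_port.setdefault(dpid, {})
    table[src] = in_port
    return table.get(dst, FLOOD)

table = {}
# First packet aa -> bb: bb is unknown, so the switch floods...
assert learn_and_forward(table, 1, 'aa', 'bb', 1) == FLOOD
# ...but once bb is seen on port 2, traffic to bb goes unicast.
learn_and_forward(table, 1, 'bb', 'aa', 2)
assert learn_and_forward(table, 1, 'aa', 'bb', 1) == 2
```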

flow entry timeout does not function properly

I insert two flow entries: the first with "idle_timeout": 60, "hard_timeout": 60 and the second with "idle_timeout": 30, "hard_timeout": 30. No traffic comes in during the test. I expect the second entry to time out after 30 seconds and the first after 60 seconds, but that is not what happens: the second entry times out after 90 seconds.
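Per the OpenFlow 1.3 specification, each entry's hard_timeout counts from its own insertion time and idle_timeout from its last matching packet, so with no traffic the 30-second entry should expire first. A minimal sketch of the expected (not the observed) behavior; `expiry_time` is an illustrative helper, not a Lagopus function:

```python
def expiry_time(insert_time, idle_timeout, hard_timeout, last_hit):
    """Earliest removal time of a flow entry per OpenFlow 1.3:
    idle_timeout counts from the last matching packet, hard_timeout
    from insertion; whichever deadline comes first wins.
    A timeout of 0 means 'disabled'."""
    deadlines = []
    if idle_timeout:
        deadlines.append(last_hit + idle_timeout)
    if hard_timeout:
        deadlines.append(insert_time + hard_timeout)
    return min(deadlines)

# Both entries inserted at t=0 with no traffic (last_hit == insert time):
assert expiry_time(0, 60, 60, 0) == 60   # first entry
assert expiry_time(0, 30, 30, 0) == 30   # second entry: 30 s, not 90 s
```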

running lagopus -v on tag v0.2.3 shows 0.2.2, not 0.2.3

The patch version should be identical to the number on the tag (3 in this case).
lagopus_version.h : #define LAGOPUS_VERSION_PATCH 2

$ lagopus -v
Lagopus version 0.2.2-release
[Wed Feb 03 16:48:01 JST 2016][WARN ][1171:0x00007fb1a6a96740:lagopus]:./module.c:201:s_atexit_handler: Module finaliations seems not completed.
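A quick way to spot this mismatch is to compare the tag number with the macro in lagopus_version.h. A sketch, assuming only the macro name shown in the report above (the parsing helper itself is hypothetical):

```python
import re

def patch_from_header(header_text):
    """Extract LAGOPUS_VERSION_PATCH from lagopus_version.h contents."""
    m = re.search(r'#define\s+LAGOPUS_VERSION_PATCH\s+(\d+)', header_text)
    return int(m.group(1)) if m else None

header = '#define LAGOPUS_VERSION_PATCH 2\n'     # as reported above
tag_patch = int('v0.2.3'.rsplit('.', 1)[1])      # 3, from the tag name
assert patch_from_header(header) == 2            # header still says 2
assert patch_from_header(header) != tag_patch    # hence the mismatch
```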

Lagopus cannot start

Hi,
We have two Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz CPUs on socket 0 and socket 1 (24 cores in total), 48 GB of memory, and 28 Intel NICs on board, but we cannot start lagopus.

The command below starts only 26 of the NICs, but it still fails in the end.
sudo lagopus -d -- -cfffffe -n 2 -- --rx '(0,0,1),(1,0,1),(2,0,1),(3,0,1),(4,0,1),(5,0,1),(6,0,1),(7,0,1),(8,0,1),(9,0,1),(10,0,1),(11,0,1),(12,0,1),(13,0,1),(14,0,1),(15,0,1),(16,0,2),(17,0,2),(18,0,2),(19,0,2),(20,0,2),(21,0,2),(22,0,2),(23,0,2),(24,0,2),(25,0,2)' --tx '(0,3),(1,3),(2,3),(3,3),(4,3),(5,3),(6,3),(7,3),(8,3),(9,3),(10,3),(11,3),(12,3),(13,3),(14,3),(15,3),(16,4),(17,4),(18,4),(19,4),(20,4),(21,4),(22,4),(23,4),(24,4),(25,4)' --w 5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
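For reference, the -cfffffe EAL option is a hexadecimal lcore bitmask. A small helper decodes which lcores it enables (this just interprets the bitmask; `decode_coremask` is not a Lagopus or DPDK tool):

```python
def decode_coremask(mask_hex):
    """Return the list of lcore ids enabled by a DPDK -c hex mask."""
    mask = int(mask_hex, 16)
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# 0xfffffe sets bits 1..23: lcore 0 is left free, which matches the
# EAL log below (lcores 1-23 detected, lcore 1 becomes the master).
assert decode_coremask('fffffe') == list(range(1, 24))
```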

The following is our log.
============ PuTTY log 2014.10.08 09:17:15 ============

sudo lagopus -d -- -cfffffe -n 2 -- --rx '(0,0,1),(1,0,1),(2,0,1),(3,0,1),(4,0,1),(5,0,1),(6,0,1),(7,0,1),(8,0,1),(9,0,1),(10,0,1),(11,0,1),(12,0,1),(13,0,1),(14,0,1),(15,0,1),(16,0,2),(17,0,2),(18,0,2),(19,0,2),(20,0,2),(21,0,2),(22,0,2),(23,0,2),(24,0,2),(25,0,2)' --tx '(0,3),(1,3),(2,3),(3,3),(4,3),(5,3),(6,3),(7,3),(8,3),(9,3),(10,3),(11,3),(12,3),(13,3),(14,3),(15,3),(16,4),(17,4),(18,4),(19,4),(20,4),(21,4),(22,4),(23,4),(24,4),(25,4)' --w 5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 0 on socket 1
EAL: Detected lcore 7 as core 1 on socket 1
EAL: Detected lcore 8 as core 2 on socket 1
EAL: Detected lcore 9 as core 3 on socket 1
EAL: Detected lcore 10 as core 4 on socket 1
EAL: Detected lcore 11 as core 5 on socket 1
EAL: Detected lcore 12 as core 0 on socket 0
EAL: Detected lcore 13 as core 1 on socket 0
EAL: Detected lcore 14 as core 2 on socket 0
EAL: Detected lcore 15 as core 3 on socket 0
EAL: Detected lcore 16 as core 4 on socket 0
EAL: Detected lcore 17 as core 5 on socket 0
EAL: Detected lcore 18 as core 0 on socket 1
EAL: Detected lcore 19 as core 1 on socket 1
EAL: Detected lcore 20 as core 2 on socket 1
EAL: Detected lcore 21 as core 3 on socket 1
EAL: Detected lcore 22 as core 4 on socket 1
EAL: Detected lcore 23 as core 5 on socket 1
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fa7b0a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x4800000 bytes
EAL: Virtual area found at 0x7fa7ac000000 (size = 0x4800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fa7aba00000 (size = 0x400000)
EAL: Ask a virtual area of 0x7ae00000 bytes
EAL: Virtual area found at 0x7fa730a00000 (size = 0x7ae00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fa730400000 (size = 0x400000)
EAL: Ask a virtual area of 0x7f800000 bytes
EAL: Virtual area found at 0x7fa6b0a00000 (size = 0x7f800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fa6b0600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fa6b0200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fa6afe00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fa6afa00000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: Requesting 1024 pages of size 2MB from socket 1
EAL: TSC frequency is ~2000001 KHz
EAL: Master core 1 is ready (tid=b5847800)
EAL: Core 2 is ready (tid=af1e7700)
EAL: Core 4 is ready (tid=ae1e5700)
EAL: Core 6 is ready (tid=ad1e3700)
EAL: Core 8 is ready (tid=ac1e1700)
EAL: Core 10 is ready (tid=ab1df700)
EAL: Core 11 is ready (tid=aa9de700)
EAL: Core 12 is ready (tid=aa1dd700)
EAL: Core 14 is ready (tid=a91db700)
EAL: Core 15 is ready (tid=a89da700)
EAL: Core 7 is ready (tid=ac9e2700)
EAL: Core 17 is ready (tid=a37fe700)
EAL: Core 18 is ready (tid=a2ffd700)
EAL: Core 9 is ready (tid=ab9e0700)
EAL: Core 20 is ready (tid=a1ffb700)
EAL: Core 21 is ready (tid=a17fa700)
EAL: Core 23 is ready (tid=a07f8700)
EAL: Core 3 is ready (tid=ae9e6700)
EAL: Core 19 is ready (tid=a27fc700)
EAL: Core 13 is ready (tid=a99dc700)
EAL: Core 5 is ready (tid=ad9e4700)
EAL: Core 22 is ready (tid=a0ff9700)
EAL: Core 16 is ready (tid=a3fff700)
Initializing the PMD driver ...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fa7b57f1000
EAL:   PCI memory mapped at 0x7fa7b57ed000
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fa7b57cd000
EAL:   PCI memory mapped at 0x7fa7b57c9000
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b57a9000
EAL:   PCI memory mapped at 0x7fa7b57a5000
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b5785000
EAL:   PCI memory mapped at 0x7fa7b5781000
EAL: PCI device 0000:06:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b5761000
EAL:   PCI memory mapped at 0x7fa7b575d000
EAL: PCI device 0000:06:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b573d000
EAL:   PCI memory mapped at 0x7fa7b5739000
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b5719000
EAL:   PCI memory mapped at 0x7fa7b5715000
EAL: PCI device 0000:08:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b56f5000
EAL:   PCI memory mapped at 0x7fa7b56f1000
EAL: PCI device 0000:08:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b56d1000
EAL:   PCI memory mapped at 0x7fa7b56cd000
EAL: PCI device 0000:08:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b56ad000
EAL:   PCI memory mapped at 0x7fa7b56a9000
EAL: PCI device 0000:0d:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   PCI memory mapped at 0x7fa7b0d24000
EAL:   PCI memory mapped at 0x7fa7b5892000
EAL: PCI device 0000:0e:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   PCI memory mapped at 0x7fa7b0d04000
EAL:   PCI memory mapped at 0x7fa7b0d00000
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0ce0000
EAL:   PCI memory mapped at 0x7fa7b0cdc000
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0cbc000
EAL:   PCI memory mapped at 0x7fa7b0cb8000
EAL: PCI device 0000:83:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0c98000
EAL:   PCI memory mapped at 0x7fa7b0c94000
EAL: PCI device 0000:83:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0c74000
EAL:   PCI memory mapped at 0x7fa7b0c70000
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0c50000
EAL:   PCI memory mapped at 0x7fa7b0c4c000
EAL: PCI device 0000:85:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0c2c000
EAL:   PCI memory mapped at 0x7fa7b0c28000
EAL: PCI device 0000:85:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0c08000
EAL:   PCI memory mapped at 0x7fa7b0c04000
EAL: PCI device 0000:85:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b09e0000
EAL:   PCI memory mapped at 0x7fa7b09dc000
EAL: PCI device 0000:87:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b09bc000
EAL:   PCI memory mapped at 0x7fa7b09b8000
EAL: PCI device 0000:87:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0998000
EAL:   PCI memory mapped at 0x7fa7b0994000
EAL: PCI device 0000:87:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0974000
EAL:   PCI memory mapped at 0x7fa7b0970000
EAL: PCI device 0000:87:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0950000
EAL:   PCI memory mapped at 0x7fa7b094c000
EAL: PCI device 0000:89:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:89:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:89:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:89:00.1 not managed by UIO driver, skipping
EAL: PCI device 0000:89:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b092c000
EAL:   PCI memory mapped at 0x7fa7b0928000
EAL: PCI device 0000:89:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7fa7b0908000
EAL:   PCI memory mapped at 0x7fa7b0904000
Initializing NIC port 0 ...
Initializing NIC port 0 RX queue 0 ...
Initializing NIC port 0 TX queue 0 ...

Checking link status.....................................................................................................Port 0 Link Down
Initializing NIC port 1 ...
Initializing NIC port 1 RX queue 0 ...
Initializing NIC port 1 TX queue 0 ...

Checking link status.....................................................................................................Port 1 Link Down
Initializing NIC port 2 ...
Initializing NIC port 2 RX queue 0 ...
Initializing NIC port 2 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 2 Link Down
Initializing NIC port 3 ...
Initializing NIC port 3 RX queue 0 ...
Initializing NIC port 3 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 3 Link Down
Initializing NIC port 4 ...
Initializing NIC port 4 RX queue 0 ...
Initializing NIC port 4 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 4 Link Down
Initializing NIC port 5 ...
Initializing NIC port 5 RX queue 0 ...
Initializing NIC port 5 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 5 Link Down
Initializing NIC port 6 ...
Initializing NIC port 6 RX queue 0 ...
Initializing NIC port 6 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 6 Link Down
Initializing NIC port 7 ...
Initializing NIC port 7 RX queue 0 ...
Initializing NIC port 7 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 7 Link Down
Initializing NIC port 8 ...
Initializing NIC port 8 RX queue 0 ...
Initializing NIC port 8 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 8 Link Down
Initializing NIC port 9 ...
Initializing NIC port 9 RX queue 0 ...
Initializing NIC port 9 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 9 Link Down
Initializing NIC port 10 ...
Initializing NIC port 10 RX queue 0 ...
Initializing NIC port 10 TX queue 0 ...

Checking link status.....................................................................................................Port 10 Link Down
Initializing NIC port 11 ...
Initializing NIC port 11 RX queue 0 ...
Initializing NIC port 11 TX queue 0 ...

Checking link status.....................................................................................................Port 11 Link Down
Initializing NIC port 12 ...
Initializing NIC port 12 RX queue 0 ...
Initializing NIC port 12 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 12 Link Down
Initializing NIC port 13 ...
Initializing NIC port 13 RX queue 0 ...
Initializing NIC port 13 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 13 Link Down
Initializing NIC port 14 ...
Initializing NIC port 14 RX queue 0 ...
Initializing NIC port 14 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 14 Link Down
Initializing NIC port 15 ...
Initializing NIC port 15 RX queue 0 ...
Initializing NIC port 15 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 15 Link Down
Initializing NIC port 16 ...
Initializing NIC port 16 RX queue 0 ...
Initializing NIC port 16 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 16 Link Down
Initializing NIC port 17 ...
Initializing NIC port 17 RX queue 0 ...
Initializing NIC port 17 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 17 Link Down
Initializing NIC port 18 ...
Initializing NIC port 18 RX queue 0 ...
Initializing NIC port 18 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 18 Link Down
Initializing NIC port 19 ...
Initializing NIC port 19 RX queue 0 ...
Initializing NIC port 19 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 19 Link Down
Initializing NIC port 20 ...
Initializing NIC port 20 RX queue 0 ...
Initializing NIC port 20 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 20 Link Down
Initializing NIC port 21 ...
Initializing NIC port 21 RX queue 0 ...
Initializing NIC port 21 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 21 Link Down
Initializing NIC port 22 ...
Initializing NIC port 22 RX queue 0 ...
Initializing NIC port 22 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 22 Link Down
Initializing NIC port 23 ...
Initializing NIC port 23 RX queue 0 ...
Initializing NIC port 23 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 23 Link Down
Initializing NIC port 24 ...
Initializing NIC port 24 RX queue 0 ...
Initializing NIC port 24 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 24 Link Down
Initializing NIC port 25 ...
Initializing NIC port 25 RX queue 0 ...
Initializing NIC port 25 TX queue 0 ...
PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.

Checking link status.....................................................................................................Port 25 Link Down
Initialization completed.
NIC RX ports:
  port 0 (queue 0)
  port 1 (queue 0)
  port 2 (queue 0)
  port 3 (queue 0)
  port 4 (queue 0)
  port 5 (queue 0)
  port 6 (queue 0)
  port 7 (queue 0)
  port 8 (queue 0)
  port 9 (queue 0)
  port 10 (queue 0)
  port 11 (queue 0)
  port 12 (queue 0)
  port 13 (queue 0)
  port 14 (queue 0)
  port 15 (queue 0)
  port 16 (queue 0)
  port 17 (queue 0)
  port 18 (queue 0)
  port 19 (queue 0)
  port 20 (queue 0)
  port 21 (queue 0)
  port 22 (queue 0)
  port 23 (queue 0)
  port 24 (queue 0)
  port 25 (queue 0)

I/O lcore 1 (socket 0):
 RX ports:
  port 0 (queue 0)
  port 1 (queue 0)
  port 2 (queue 0)
  port 3 (queue 0)
  port 4 (queue 0)
  port 5 (queue 0)
  port 6 (queue 0)
  port 7 (queue 0)
  port 8 (queue 0)
  port 9 (queue 0)
  port 10 (queue 0)
  port 11 (queue 0)
  port 12 (queue 0)
  port 13 (queue 0)
  port 14 (queue 0)
  port 15 (queue 0)
 Output rings:
  0x7fa7b0bcb140
  0x7fa7b0bcd1c0
  0x7fa7b0bcf240
  0x7fa7b0bd12c0
  0x7fa7b0bd3340
  0x7fa7b0bd53c0
  0x7fa7b0bd7440
  0x7fa7b0bd94c0
  0x7fa7b0bdb540
  0x7fa7b0bdd5c0
  0x7fa7b0bdf640
  0x7fa7b0be16c0
  0x7fa7b0be3740
  0x7fa7b0be57c0
  0x7fa7b0be7840
  0x7fa7b0be98c0
  0x7fa7b0beb940
  0x7fa7b0bed9c0
  0x7fa7b0befa40
I/O lcore 2 (socket 0):
 RX ports:
  port 16 (queue 0)
  port 17 (queue 0)
  port 18 (queue 0)
  port 19 (queue 0)
  port 20 (queue 0)
  port 21 (queue 0)
  port 22 (queue 0)
  port 23 (queue 0)
  port 24 (queue 0)
  port 25 (queue 0)
 Output rings:
  0x7fa7b0bf1ac0
  0x7fa7b0bf3b40
  0x7fa7b0bf5bc0
  0x7fa7b0bf7c40
  0x7fa7b0bf9cc0
  0x7fa7b0bfbd40
  0x7fa7b0bfddc0
  0x7fa7aba80080
  0x7fa7aba82100
  0x7fa7aba84180
  0x7fa7aba86200
  0x7fa7aba88280
  0x7fa7aba8a300
  0x7fa7aba8c380
  0x7fa7aba8e400
  0x7fa7aba90480
  0x7fa7aba92500
  0x7fa7aba94580
  0x7fa7aba96600

Worker 0: lcore 5 (socket 0):
 Input rings:
  0x7fa7b0bcb140
  0x7fa7b0bf1ac0
 Output rings per TX port
  port 0 (0x7fa7aba98680)
  port 1 (0x7fa7aba9a700)
  port 2 (0x7fa7aba9c780)
  port 3 (0x7fa7aba9e800)
  port 4 (0x7fa7abaa0880)
  port 5 (0x7fa7abaa2900)
  port 6 (0x7fa7abaa4980)
  port 7 (0x7fa7abaa6a00)
  port 8 (0x7fa7abaa8a80)
  port 9 (0x7fa7abaaab00)
  port 10 (0x7fa7abaacb80)
  port 11 (0x7fa7abaaec00)
  port 12 (0x7fa7abab0c80)
  port 13 (0x7fa7abab2d00)
  port 14 (0x7fa7abab4d80)
  port 15 (0x7fa7abab6e00)
  port 16 (0x7fa7abab8e80)
  port 17 (0x7fa7ababaf00)
  port 18 (0x7fa7ababcf80)
  port 19 (0x7fa7ababf000)
  port 20 (0x7fa7abac1080)
  port 21 (0x7fa7abac3100)
  port 22 (0x7fa7abac5180)
  port 23 (0x7fa7abac7200)
  port 24 (0x7fa7abac9280)
  port 25 (0x7fa7abacb300)
Worker 1: lcore 6 (socket 1):
 Input rings:
  0x7fa7b0bcd1c0
  0x7fa7b0bf3b40
 Output rings per TX port
  port 0 (0x7fa7abacd380)
  port 1 (0x7fa7abacf400)
  port 2 (0x7fa7abad1480)
  port 3 (0x7fa7abad3500)
  port 4 (0x7fa7abad5580)
  port 5 (0x7fa7abad7600)
  port 6 (0x7fa7abad9680)
  port 7 (0x7fa7abadb700)
  port 8 (0x7fa7abadd780)
  port 9 (0x7fa7abadf800)
  port 10 (0x7fa7abae1880)
  port 11 (0x7fa7abae3900)
  port 12 (0x7fa7abae5980)
  port 13 (0x7fa7abae7a00)
  port 14 (0x7fa7abae9a80)
  port 15 (0x7fa7abaebb00)
  port 16 (0x7fa7abaedb80)
  port 17 (0x7fa7abaefc00)
  port 18 (0x7fa7abaf1c80)
  port 19 (0x7fa7abaf3d00)
  port 20 (0x7fa7abaf5d80)
  port 21 (0x7fa7abaf7e00)
  port 22 (0x7fa7abaf9e80)
  port 23 (0x7fa7abafbf00)
  port 24 (0x7fa7abafdf80)
  port 25 (0x7fa7abb00000)
Worker 2: lcore 7 (socket 1):
 Input rings:
  0x7fa7b0bcf240
  0x7fa7b0bf5bc0
 Output rings per TX port
  port 0 (0x7fa7abb02080)
  port 1 (0x7fa7abb04100)
  port 2 (0x7fa7abb06180)
  port 3 (0x7fa7abb08200)
  port 4 (0x7fa7abb0a280)
  port 5 (0x7fa7abb0c300)
  port 6 (0x7fa7abb0e380)
  port 7 (0x7fa7abb10400)
  port 8 (0x7fa7abb12480)
  port 9 (0x7fa7abb14500)
  port 10 (0x7fa7abb16580)
  port 11 (0x7fa7abb18600)
  port 12 (0x7fa7abb1a680)
  port 13 (0x7fa7abb1c700)
  port 14 (0x7fa7abb1e780)
  port 15 (0x7fa7abb20800)
  port 16 (0x7fa7abb22880)
  port 17 (0x7fa7abb24900)
  port 18 (0x7fa7abb26980)
  port 19 (0x7fa7abb28a00)
  port 20 (0x7fa7abb2aa80)
  port 21 (0x7fa7abb2cb00)
  port 22 (0x7fa7abb2eb80)
  port 23 (0x7fa7abb30c00)
  port 24 (0x7fa7abb32c80)
  port 25 (0x7fa7abb34d00)
Worker 3: lcore 8 (socket 1):
 Input rings:
  0x7fa7b0bd12c0
  0x7fa7b0bf7c40
 Output rings per TX port
  port 0 (0x7fa7abb36d80)
  port 1 (0x7fa7abb38e00)
  port 2 (0x7fa7abb3ae80)
  port 3 (0x7fa7abb3cf00)
  port 4 (0x7fa7abb3ef80)
  port 5 (0x7fa7abb41000)
  port 6 (0x7fa7abb43080)
  port 7 (0x7fa7abb45100)
  port 8 (0x7fa7abb47180)
  port 9 (0x7fa7abb49200)
  port 10 (0x7fa7abb4b280)
  port 11 (0x7fa7abb4d300)
  port 12 (0x7fa7abb4f380)
  port 13 (0x7fa7abb51400)
  port 14 (0x7fa7abb53480)
  port 15 (0x7fa7abb55500)
  port 16 (0x7fa7abb57580)
  port 17 (0x7fa7abb59600)
  port 18 (0x7fa7abb5b680)
  port 19 (0x7fa7abb5d700)
  port 20 (0x7fa7abb5f780)
  port 21 (0x7fa7abb61800)
  port 22 (0x7fa7abb63880)
  port 23 (0x7fa7abb65900)
  port 24 (0x7fa7abb67980)
  port 25 (0x7fa7abb69a00)
Worker 4: lcore 9 (socket 1):
 Input rings:
  0x7fa7b0bd3340
  0x7fa7b0bf9cc0
 Output rings per TX port
  port 0 (0x7fa7abb6ba80)
  port 1 (0x7fa7abb6db00)
  port 2 (0x7fa7abb6fb80)
  port 3 (0x7fa7abb71c00)
  port 4 (0x7fa7abb73c80)
  port 5 (0x7fa7abb75d00)
  port 6 (0x7fa7abb77d80)
  port 7 (0x7fa7abb79e00)
  port 8 (0x7fa7abb7be80)
  port 9 (0x7fa7abb7df00)
  port 10 (0x7fa7abb7ff80)
  port 11 (0x7fa7abb82000)
  port 12 (0x7fa7abb84080)
  port 13 (0x7fa7abb86100)
  port 14 (0x7fa7abb88180)
  port 15 (0x7fa7abb8a200)
  port 16 (0x7fa7abb8c280)
  port 17 (0x7fa7abb8e300)
  port 18 (0x7fa7abb90380)
  port 19 (0x7fa7abb92400)
  port 20 (0x7fa7abb94480)
  port 21 (0x7fa7abb96500)
  port 22 (0x7fa7abb98580)
  port 23 (0x7fa7abb9a600)
  port 24 (0x7fa7abb9c680)
  port 25 (0x7fa7abb9e700)
Worker 5: lcore 10 (socket 1):
 Input rings:
  0x7fa7b0bd53c0
  0x7fa7b0bfbd40
 Output rings per TX port
  port 0 (0x7fa7abba0780)
  port 1 (0x7fa7abba2800)
  port 2 (0x7fa7abba4880)
  port 3 (0x7fa7abba6900)
  port 4 (0x7fa7abba8980)
  port 5 (0x7fa7abbaaa00)
  port 6 (0x7fa7abbaca80)
  port 7 (0x7fa7abbaeb00)
  port 8 (0x7fa7abbb0b80)
  port 9 (0x7fa7abbb2c00)
  port 10 (0x7fa7abbb4c80)
  port 11 (0x7fa7abbb6d00)
  port 12 (0x7fa7abbb8d80)
  port 13 (0x7fa7abbbae00)
  port 14 (0x7fa7abbbce80)
  port 15 (0x7fa7abbbef00)
  port 16 (0x7fa7abbc0f80)
  port 17 (0x7fa7abbc3000)
  port 18 (0x7fa7abbc5080)
  port 19 (0x7fa7abbc7100)
  port 20 (0x7fa7abbc9180)
  port 21 (0x7fa7abbcb200)
  port 22 (0x7fa7abbcd280)
  port 23 (0x7fa7abbcf300)
  port 24 (0x7fa7abbd1380)
  port 25 (0x7fa7abbd3400)
Worker 6: lcore 11 (socket 1):
 Input rings:
  0x7fa7b0bd7440
  0x7fa7b0bfddc0
 Output rings per TX port
  port 0 (0x7fa7abbd5480)
  port 1 (0x7fa7abbd7500)
  port 2 (0x7fa7abbd9580)
  port 3 (0x7fa7abbdb600)
  port 4 (0x7fa7abbdd680)
  port 5 (0x7fa7abbdf700)
  port 6 (0x7fa7abbe1780)
  port 7 (0x7fa7abbe3800)
  port 8 (0x7fa7abbe5880)
  port 9 (0x7fa7abbe7900)
  port 10 (0x7fa7abbe9980)
  port 11 (0x7fa7abbeba00)
  port 12 (0x7fa7abbeda80)
  port 13 (0x7fa7abbefb00)
  port 14 (0x7fa7abbf1b80)
  port 15 (0x7fa7abbf3c00)
  port 16 (0x7fa7abbf5c80)
  port 17 (0x7fa7abbf7d00)
  port 18 (0x7fa7abbf9d80)
  port 19 (0x7fa7abbfbe00)
  port 20 (0x7fa7abbfde80)
  port 21 (0x7fa7abbfff00)
  port 22 (0x7fa7abc01f80)
  port 23 (0x7fa7abc04000)
  port 24 (0x7fa7abc06080)
  port 25 (0x7fa7abc08100)
Worker 7: lcore 12 (socket 0):
 Input rings:
  0x7fa7b0bd94c0
  0x7fa7aba80080
 Output rings per TX port
  port 0 (0x7fa7abc0a180)
  port 1 (0x7fa7abc0c200)
  port 2 (0x7fa7abc0e280)
  port 3 (0x7fa7abc10300)
  port 4 (0x7fa7abc12380)
  port 5 (0x7fa7abc14400)
  port 6 (0x7fa7abc16480)
  port 7 (0x7fa7abc18500)
  port 8 (0x7fa7abc1a580)
  port 9 (0x7fa7abc1c600)
  port 10 (0x7fa7abc1e680)
  port 11 (0x7fa7abc20700)
  port 12 (0x7fa7abc22780)
  port 13 (0x7fa7abc24800)
  port 14 (0x7fa7abc26880)
  port 15 (0x7fa7abc28900)
  port 16 (0x7fa7abc2a980)
  port 17 (0x7fa7abc2ca00)
  port 18 (0x7fa7abc2ea80)
  port 19 (0x7fa7abc30b00)
  port 20 (0x7fa7abc32b80)
  port 21 (0x7fa7abc34c00)
  port 22 (0x7fa7abc36c80)
  port 23 (0x7fa7abc38d00)
  port 24 (0x7fa7abc3ad80)
  port 25 (0x7fa7abc3ce00)
Worker 8: lcore 13 (socket 0):
 Input rings:
  0x7fa7b0bdb540
  0x7fa7aba82100
 Output rings per TX port
  port 0 (0x7fa7abc3ee80)
  port 1 (0x7fa7abc40f00)
  port 2 (0x7fa7abc42f80)
  port 3 (0x7fa7abc45000)
  port 4 (0x7fa7abc47080)
  port 5 (0x7fa7abc49100)
  port 6 (0x7fa7abc4b180)
  port 7 (0x7fa7abc4d200)
  port 8 (0x7fa7abc4f280)
  port 9 (0x7fa7abc51300)
  port 10 (0x7fa7abc53380)
  port 11 (0x7fa7abc55400)
  port 12 (0x7fa7abc57480)
  port 13 (0x7fa7abc59500)
  port 14 (0x7fa7abc5b580)
  port 15 (0x7fa7abc5d600)
  port 16 (0x7fa7abc5f680)
  port 17 (0x7fa7abc61700)
  port 18 (0x7fa7abc63780)
  port 19 (0x7fa7abc65800)
  port 20 (0x7fa7abc67880)
  port 21 (0x7fa7abc69900)
  port 22 (0x7fa7abc6b980)
  port 23 (0x7fa7abc6da00)
  port 24 (0x7fa7abc6fa80)
  port 25 (0x7fa7abc71b00)
Worker 9: lcore 14 (socket 0):
 Input rings:
  0x7fa7b0bdd5c0
  0x7fa7aba84180
 Output rings per TX port
  port 0 (0x7fa7abc73b80)
  port 1 (0x7fa7abc75c00)
  port 2 (0x7fa7abc77c80)
  port 3 (0x7fa7abc79d00)
  port 4 (0x7fa7abc7bd80)
  port 5 (0x7fa7abc7de00)
  port 6 (0x7fa7abc7fe80)
  port 7 (0x7fa7abc81f00)
  port 8 (0x7fa7abc83f80)
  port 9 (0x7fa7abc86000)
  port 10 (0x7fa7abc88080)
  port 11 (0x7fa7abc8a100)
  port 12 (0x7fa7abc8c180)
  port 13 (0x7fa7abc8e200)
  port 14 (0x7fa7abc90280)
  port 15 (0x7fa7abc92300)
  port 16 (0x7fa7abc94380)
  port 17 (0x7fa7abc96400)
  port 18 (0x7fa7abc98480)
  port 19 (0x7fa7abc9a500)
  port 20 (0x7fa7abc9c580)
  port 21 (0x7fa7abc9e600)
  port 22 (0x7fa7abca0680)
  port 23 (0x7fa7abca2700)
  port 24 (0x7fa7abca4780)
  port 25 (0x7fa7abca6800)
Worker 10: lcore 15 (socket 0):
 Input rings:
  0x7fa7b0bdf640
  0x7fa7aba86200
 Output rings per TX port
  port 0 (0x7fa7abca8880)
  port 1 (0x7fa7abcaa900)
  port 2 (0x7fa7abcac980)
  port 3 (0x7fa7abcaea00)
  port 4 (0x7fa7abcb0a80)
  port 5 (0x7fa7abcb2b00)
  port 6 (0x7fa7abcb4b80)
  port 7 (0x7fa7abcb6c00)
  port 8 (0x7fa7abcb8c80)
  port 9 (0x7fa7abcbad00)
  port 10 (0x7fa7abcbcd80)
  port 11 (0x7fa7abcbee00)
  port 12 (0x7fa7abcc0e80)
  port 13 (0x7fa7abcc2f00)
  port 14 (0x7fa7abcc4f80)
  port 15 (0x7fa7abcc7000)
  port 16 (0x7fa7abcc9080)
  port 17 (0x7fa7abccb100)
  port 18 (0x7fa7abccd180)
  port 19 (0x7fa7abccf200)
  port 20 (0x7fa7abcd1280)
  port 21 (0x7fa7abcd3300)
  port 22 (0x7fa7abcd5380)
  port 23 (0x7fa7abcd7400)
  port 24 (0x7fa7abcd9480)
  port 25 (0x7fa7abcdb500)
Worker 11: lcore 16 (socket 0):
 Input rings:
  0x7fa7b0be16c0
  0x7fa7aba88280
 Output rings per TX port
  port 0 (0x7fa7abcdd580)
  port 1 (0x7fa7abcdf600)
  port 2 (0x7fa7abce1680)
  port 3 (0x7fa7abce3700)
  port 4 (0x7fa7abce5780)
  port 5 (0x7fa7abce7800)
  port 6 (0x7fa7abce9880)
  port 7 (0x7fa7abceb900)
  port 8 (0x7fa7abced980)
  port 9 (0x7fa7abcefa00)
  port 10 (0x7fa7abcf1a80)
  port 11 (0x7fa7abcf3b00)
  port 12 (0x7fa7abcf5b80)
  port 13 (0x7fa7abcf7c00)
  port 14 (0x7fa7abcf9c80)
  port 15 (0x7fa7abcfbd00)
  port 16 (0x7fa7abcfdd80)
  port 17 (0x7fa7abcffe00)
  port 18 (0x7fa7abd01e80)
  port 19 (0x7fa7abd03f00)
  port 20 (0x7fa7abd05f80)
  port 21 (0x7fa7abd08000)
  port 22 (0x7fa7abd0a080)
  port 23 (0x7fa7abd0c100)
  port 24 (0x7fa7abd0e180)
  port 25 (0x7fa7abd10200)
Worker 12: lcore 17 (socket 0):
 Input rings:
  0x7fa7b0be3740
  0x7fa7aba8a300
 Output rings per TX port
  port 0 (0x7fa7abd12280)
  port 1 (0x7fa7abd14300)
  port 2 (0x7fa7abd16380)
  port 3 (0x7fa7abd18400)
  port 4 (0x7fa7abd1a480)
  port 5 (0x7fa7abd1c500)
  port 6 (0x7fa7abd1e580)
  port 7 (0x7fa7abd20600)
  port 8 (0x7fa7abd22680)
  port 9 (0x7fa7abd24700)
  port 10 (0x7fa7abd26780)
  port 11 (0x7fa7abd28800)
  port 12 (0x7fa7abd2a880)
  port 13 (0x7fa7abd2c900)
  port 14 (0x7fa7abd2e980)
  port 15 (0x7fa7abd30a00)
  port 16 (0x7fa7abd32a80)
  port 17 (0x7fa7abd34b00)
  port 18 (0x7fa7abd36b80)
  port 19 (0x7fa7abd38c00)
  port 20 (0x7fa7abd3ac80)
  port 21 (0x7fa7abd3cd00)
  port 22 (0x7fa7abd3ed80)
  port 23 (0x7fa7abd40e00)
  port 24 (0x7fa7abd42e80)
  port 25 (0x7fa7abd44f00)
Worker 13: lcore 18 (socket 1):
 Input rings:
  0x7fa7b0be57c0
  0x7fa7aba8c380
 Output rings per TX port
  port 0 (0x7fa7abd46f80)
  port 1 (0x7fa7abd49000)
  port 2 (0x7fa7abd4b080)
  port 3 (0x7fa7abd4d100)
  port 4 (0x7fa7abd4f180)
  port 5 (0x7fa7abd51200)
  port 6 (0x7fa7abd53280)
  port 7 (0x7fa7abd55300)
  port 8 (0x7fa7abd57380)
  port 9 (0x7fa7abd59400)
  port 10 (0x7fa7abd5b480)
  port 11 (0x7fa7abd5d500)
  port 12 (0x7fa7abd5f580)
  port 13 (0x7fa7abd61600)
  port 14 (0x7fa7abd63680)
  port 15 (0x7fa7abd65700)
  port 16 (0x7fa7abd67780)
  port 17 (0x7fa7abd69800)
  port 18 (0x7fa7abd6b880)
  port 19 (0x7fa7abd6d900)
  port 20 (0x7fa7abd6f980)
  port 21 (0x7fa7abd71a00)
  port 22 (0x7fa7abd73a80)
  port 23 (0x7fa7abd75b00)
  port 24 (0x7fa7abd77b80)
  port 25 (0x7fa7abd79c00)
Worker 14: lcore 19 (socket 1):
 Input rings:
  0x7fa7b0be7840
  0x7fa7aba8e400
 Output rings per TX port
  port 0 (0x7fa7abd7bc80)
  port 1 (0x7fa7abd7dd00)
  port 2 (0x7fa7abd7fd80)
  port 3 (0x7fa7abd81e00)
  port 4 (0x7fa7abd83e80)
  port 5 (0x7fa7abd85f00)
  port 6 (0x7fa7abd87f80)
  port 7 (0x7fa7abd8a000)
  port 8 (0x7fa7abd8c080)
  port 9 (0x7fa7abd8e100)
  port 10 (0x7fa7abd90180)
  port 11 (0x7fa7abd92200)
  port 12 (0x7fa7abd94280)
  port 13 (0x7fa7abd96300)
  port 14 (0x7fa7abd98380)
  port 15 (0x7fa7abd9a400)
  port 16 (0x7fa7abd9c480)
  port 17 (0x7fa7abd9e500)
  port 18 (0x7fa7abda0580)
  port 19 (0x7fa7abda2600)
  port 20 (0x7fa7abda4680)
  port 21 (0x7fa7abda6700)
  port 22 (0x7fa7abda8780)
  port 23 (0x7fa7abdaa800)
  port 24 (0x7fa7abdac880)
  port 25 (0x7fa7abdae900)
Worker 15: lcore 20 (socket 1):
 Input rings:
  0x7fa7b0be98c0
  0x7fa7aba90480
 Output rings per TX port
  port 0 (0x7fa7abdb0980)
  port 1 (0x7fa7abdb2a00)
  port 2 (0x7fa7abdb4a80)
  port 3 (0x7fa7abdb6b00)
  port 4 (0x7fa7abdb8b80)
  port 5 (0x7fa7abdbac00)
  port 6 (0x7fa7abdbcc80)
  port 7 (0x7fa7abdbed00)
  port 8 (0x7fa7abdc0d80)
  port 9 (0x7fa7abdc2e00)
  port 10 (0x7fa7abdc4e80)
  port 11 (0x7fa7abdc6f00)
  port 12 (0x7fa7abdc8f80)
  port 13 (0x7fa7abdcb000)
  port 14 (0x7fa7abdcd080)
  port 15 (0x7fa7abdcf100)
  port 16 (0x7fa7abdd1180)
  port 17 (0x7fa7abdd3200)
  port 18 (0x7fa7abdd5280)
  port 19 (0x7fa7abdd7300)
  port 20 (0x7fa7abdd9380)
  port 21 (0x7fa7abddb400)
  port 22 (0x7fa7abddd480)
  port 23 (0x7fa7abddf500)
  port 24 (0x7fa7abde1580)
  port 25 (0x7fa7abde3600)
Worker 16: lcore 21 (socket 1):
 Input rings:
  0x7fa7b0beb940
  0x7fa7aba92500
 Output rings per TX port
  port 0 (0x7fa7abde5680)
  port 1 (0x7fa7abde7700)
  port 2 (0x7fa7abde9780)
  port 3 (0x7fa7abdeb800)
  port 4 (0x7fa7abded880)
  port 5 (0x7fa7abdef900)
  port 6 (0x7fa7abdf1980)
  port 7 (0x7fa7abdf3a00)
  port 8 (0x7fa7abdf5a80)
  port 9 (0x7fa7abdf7b00)
  port 10 (0x7fa7abdf9b80)
  port 11 (0x7fa7abdfbc00)
  port 12 (0x7fa7abdfdc80)
  port 13 (0x7fa730400000)
  port 14 (0x7fa730402080)
  port 15 (0x7fa730404100)
  port 16 (0x7fa730406180)
  port 17 (0x7fa730408200)
  port 18 (0x7fa73040a280)
  port 19 (0x7fa73040c300)
  port 20 (0x7fa73040e380)
  port 21 (0x7fa730410400)
  port 22 (0x7fa730412480)
  port 23 (0x7fa730414500)
  port 24 (0x7fa730416580)
  port 25 (0x7fa730418600)
Worker 17: lcore 22 (socket 1):
 Input rings:
  0x7fa7b0bed9c0
  0x7fa7aba94580
 Output rings per TX port
  port 0 (0x7fa73041a680)
  port 1 (0x7fa73041c700)
  port 2 (0x7fa73041e780)
  port 3 (0x7fa730420800)
  port 4 (0x7fa730422880)
  port 5 (0x7fa730424900)
  port 6 (0x7fa730426980)
  port 7 (0x7fa730428a00)
  port 8 (0x7fa73042aa80)
  port 9 (0x7fa73042cb00)
  port 10 (0x7fa73042eb80)
  port 11 (0x7fa730430c00)
  port 12 (0x7fa730432c80)
  port 13 (0x7fa730434d00)
  port 14 (0x7fa730436d80)
  port 15 (0x7fa730438e00)
  port 16 (0x7fa73043ae80)
  port 17 (0x7fa73043cf00)
  port 18 (0x7fa73043ef80)
  port 19 (0x7fa730441000)
  port 20 (0x7fa730443080)
  port 21 (0x7fa730445100)
  port 22 (0x7fa730447180)
  port 23 (0x7fa730449200)
  port 24 (0x7fa73044b280)
  port 25 (0x7fa73044d300)
Worker 18: lcore 23 (socket 1):
 Input rings:
  0x7fa7b0befa40
  0x7fa7aba96600
 Output rings per TX port
  port 0 (0x7fa73044f380)
  port 1 (0x7fa730451400)
  port 2 (0x7fa730453480)
  port 3 (0x7fa730455500)
  port 4 (0x7fa730457580)
  port 5 (0x7fa730459600)
  port 6 (0x7fa73045b680)
  port 7 (0x7fa73045d700)
  port 8 (0x7fa73045f780)
  port 9 (0x7fa730461800)
  port 10 (0x7fa730463880)
  port 11 (0x7fa730465900)
  port 12 (0x7fa730467980)
  port 13 (0x7fa730469a00)
  port 14 (0x7fa73046ba80)
  port 15 (0x7fa73046db00)
  port 16 (0x7fa73046fb80)
  port 17 (0x7fa730471c00)
  port 18 (0x7fa730473c80)
  port 19 (0x7fa730475d00)
  port 20 (0x7fa730477d80)
  port 21 (0x7fa730479e00)
  port 22 (0x7fa73047be80)
  port 23 (0x7fa73047df00)
  port 24 (0x7fa73047ff80)
  port 25 (0x7fa730482000)

NIC TX ports:
  0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25

I/O lcore 3 (socket 0):
 Input rings per TX port
 port 0
  worker 0, 0x7fa7aba98680
  worker 1, 0x7fa7abacd380
  worker 2, 0x7fa7abb02080
  worker 3, 0x7fa7abb36d80
  worker 4, 0x7fa7abb6ba80
  worker 5, 0x7fa7abba0780
  worker 6, 0x7fa7abbd5480
  worker 7, 0x7fa7abc0a180
  worker 8, 0x7fa7abc3ee80
  worker 9, 0x7fa7abc73b80
  worker 10, 0x7fa7abca8880
  worker 11, 0x7fa7abcdd580
  worker 12, 0x7fa7abd12280
  worker 13, 0x7fa7abd46f80
  worker 14, 0x7fa7abd7bc80
  worker 15, 0x7fa7abdb0980
  worker 16, 0x7fa7abde5680
  worker 17, 0x7fa73041a680
  worker 18, 0x7fa73044f380
 port 1
  worker 0, 0x7fa7aba9a700
  worker 1, 0x7fa7abacf400
  worker 2, 0x7fa7abb04100
  worker 3, 0x7fa7abb38e00
  worker 4, 0x7fa7abb6db00
  worker 5, 0x7fa7abba2800
  worker 6, 0x7fa7abbd7500
  worker 7, 0x7fa7abc0c200
  worker 8, 0x7fa7abc40f00
  worker 9, 0x7fa7abc75c00
  worker 10, 0x7fa7abcaa900
  worker 11, 0x7fa7abcdf600
  worker 12, 0x7fa7abd14300
  worker 13, 0x7fa7abd49000
  worker 14, 0x7fa7abd7dd00
  worker 15, 0x7fa7abdb2a00
  worker 16, 0x7fa7abde7700
  worker 17, 0x7fa73041c700
  worker 18, 0x7fa730451400
 port 2
  worker 0, 0x7fa7aba9c780
  worker 1, 0x7fa7abad1480
  worker 2, 0x7fa7abb06180
  worker 3, 0x7fa7abb3ae80
  worker 4, 0x7fa7abb6fb80
  worker 5, 0x7fa7abba4880
  worker 6, 0x7fa7abbd9580
  worker 7, 0x7fa7abc0e280
  worker 8, 0x7fa7abc42f80
  worker 9, 0x7fa7abc77c80
  worker 10, 0x7fa7abcac980
  worker 11, 0x7fa7abce1680
  worker 12, 0x7fa7abd16380
  worker 13, 0x7fa7abd4b080
  worker 14, 0x7fa7abd7fd80
  worker 15, 0x7fa7abdb4a80
  worker 16, 0x7fa7abde9780
  worker 17, 0x7fa73041e780
  worker 18, 0x7fa730453480
 port 3
  worker 0, 0x7fa7aba9e800
  worker 1, 0x7fa7abad3500
  worker 2, 0x7fa7abb08200
  worker 3, 0x7fa7abb3cf00
  worker 4, 0x7fa7abb71c00
  worker 5, 0x7fa7abba6900
  worker 6, 0x7fa7abbdb600
  worker 7, 0x7fa7abc10300
  worker 8, 0x7fa7abc45000
  worker 9, 0x7fa7abc79d00
  worker 10, 0x7fa7abcaea00
  worker 11, 0x7fa7abce3700
  worker 12, 0x7fa7abd18400
  worker 13, 0x7fa7abd4d100
  worker 14, 0x7fa7abd81e00
  worker 15, 0x7fa7abdb6b00
  worker 16, 0x7fa7abdeb800
  worker 17, 0x7fa730420800
  worker 18, 0x7fa730455500
 port 4
  worker 0, 0x7fa7abaa0880
  worker 1, 0x7fa7abad5580
  worker 2, 0x7fa7abb0a280
  worker 3, 0x7fa7abb3ef80
  worker 4, 0x7fa7abb73c80
  worker 5, 0x7fa7abba8980
  worker 6, 0x7fa7abbdd680
  worker 7, 0x7fa7abc12380
  worker 8, 0x7fa7abc47080
  worker 9, 0x7fa7abc7bd80
  worker 10, 0x7fa7abcb0a80
  worker 11, 0x7fa7abce5780
  worker 12, 0x7fa7abd1a480
  worker 13, 0x7fa7abd4f180
  worker 14, 0x7fa7abd83e80
  worker 15, 0x7fa7abdb8b80
  worker 16, 0x7fa7abded880
  worker 17, 0x7fa730422880
  worker 18, 0x7fa730457580
 port 5
  worker 0, 0x7fa7abaa2900
  worker 1, 0x7fa7abad7600
  worker 2, 0x7fa7abb0c300
  worker 3, 0x7fa7abb41000
  worker 4, 0x7fa7abb75d00
  worker 5, 0x7fa7abbaaa00
  worker 6, 0x7fa7abbdf700
  worker 7, 0x7fa7abc14400
  worker 8, 0x7fa7abc49100
  worker 9, 0x7fa7abc7de00
  worker 10, 0x7fa7abcb2b00
  worker 11, 0x7fa7abce7800
  worker 12, 0x7fa7abd1c500
  worker 13, 0x7fa7abd51200
  worker 14, 0x7fa7abd85f00
  worker 15, 0x7fa7abdbac00
  worker 16, 0x7fa7abdef900
  worker 17, 0x7fa730424900
  worker 18, 0x7fa730459600
 port 6
  worker 0, 0x7fa7abaa4980
  worker 1, 0x7fa7abad9680
  worker 2, 0x7fa7abb0e380
  worker 3, 0x7fa7abb43080
  worker 4, 0x7fa7abb77d80
  worker 5, 0x7fa7abbaca80
  worker 6, 0x7fa7abbe1780
  worker 7, 0x7fa7abc16480
  worker 8, 0x7fa7abc4b180
  worker 9, 0x7fa7abc7fe80
  worker 10, 0x7fa7abcb4b80
  worker 11, 0x7fa7abce9880
  worker 12, 0x7fa7abd1e580
  worker 13, 0x7fa7abd53280
  worker 14, 0x7fa7abd87f80
  worker 15, 0x7fa7abdbcc80
  worker 16, 0x7fa7abdf1980
  worker 17, 0x7fa730426980
  worker 18, 0x7fa73045b680
 port 7
  worker 0, 0x7fa7abaa6a00
  worker 1, 0x7fa7abadb700
  worker 2, 0x7fa7abb10400
  worker 3, 0x7fa7abb45100
  worker 4, 0x7fa7abb79e00
  worker 5, 0x7fa7abbaeb00
  worker 6, 0x7fa7abbe3800
  worker 7, 0x7fa7abc18500
  worker 8, 0x7fa7abc4d200
  worker 9, 0x7fa7abc81f00
  worker 10, 0x7fa7abcb6c00
  worker 11, 0x7fa7abceb900
  worker 12, 0x7fa7abd20600
  worker 13, 0x7fa7abd55300
  worker 14, 0x7fa7abd8a000
  worker 15, 0x7fa7abdbed00
  worker 16, 0x7fa7abdf3a00
  worker 17, 0x7fa730428a00
  worker 18, 0x7fa73045d700
 port 8
  worker 0, 0x7fa7abaa8a80
  worker 1, 0x7fa7abadd780
  worker 2, 0x7fa7abb12480
  worker 3, 0x7fa7abb47180
  worker 4, 0x7fa7abb7be80
  worker 5, 0x7fa7abbb0b80
  worker 6, 0x7fa7abbe5880
  worker 7, 0x7fa7abc1a580
  worker 8, 0x7fa7abc4f280
  worker 9, 0x7fa7abc83f80
  worker 10, 0x7fa7abcb8c80
  worker 11, 0x7fa7abced980
  worker 12, 0x7fa7abd22680
  worker 13, 0x7fa7abd57380
  worker 14, 0x7fa7abd8c080
  worker 15, 0x7fa7abdc0d80
  worker 16, 0x7fa7abdf5a80
  worker 17, 0x7fa73042aa80
  worker 18, 0x7fa73045f780
 port 9
  worker 0, 0x7fa7abaaab00
  worker 1, 0x7fa7abadf800
  worker 2, 0x7fa7abb14500
  worker 3, 0x7fa7abb49200
  worker 4, 0x7fa7abb7df00
  worker 5, 0x7fa7abbb2c00
  worker 6, 0x7fa7abbe7900
  worker 7, 0x7fa7abc1c600
  worker 8, 0x7fa7abc51300
  worker 9, 0x7fa7abc86000
  worker 10, 0x7fa7abcbad00
  worker 11, 0x7fa7abcefa00
  worker 12, 0x7fa7abd24700
  worker 13, 0x7fa7abd59400
  worker 14, 0x7fa7abd8e100
  worker 15, 0x7fa7abdc2e00
  worker 16, 0x7fa7abdf7b00
  worker 17, 0x7fa73042cb00
  worker 18, 0x7fa730461800
 port 10
  worker 0, 0x7fa7abaacb80
  worker 1, 0x7fa7abae1880
  worker 2, 0x7fa7abb16580
  worker 3, 0x7fa7abb4b280
  worker 4, 0x7fa7abb7ff80
  worker 5, 0x7fa7abbb4c80
  worker 6, 0x7fa7abbe9980
  worker 7, 0x7fa7abc1e680
  worker 8, 0x7fa7abc53380
  worker 9, 0x7fa7abc88080
  worker 10, 0x7fa7abcbcd80
  worker 11, 0x7fa7abcf1a80
  worker 12, 0x7fa7abd26780
  worker 13, 0x7fa7abd5b480
  worker 14, 0x7fa7abd90180
  worker 15, 0x7fa7abdc4e80
  worker 16, 0x7fa7abdf9b80
  worker 17, 0x7fa73042eb80
  worker 18, 0x7fa730463880
 port 11
  worker 0, 0x7fa7abaaec00
  worker 1, 0x7fa7abae3900
  worker 2, 0x7fa7abb18600
  worker 3, 0x7fa7abb4d300
  worker 4, 0x7fa7abb82000
  worker 5, 0x7fa7abbb6d00
  worker 6, 0x7fa7abbeba00
  worker 7, 0x7fa7abc20700
  worker 8, 0x7fa7abc55400
  worker 9, 0x7fa7abc8a100
  worker 10, 0x7fa7abcbee00
  worker 11, 0x7fa7abcf3b00
  worker 12, 0x7fa7abd28800
  worker 13, 0x7fa7abd5d500
  worker 14, 0x7fa7abd92200
  worker 15, 0x7fa7abdc6f00
  worker 16, 0x7fa7abdfbc00
  worker 17, 0x7fa730430c00
  worker 18, 0x7fa730465900
 port 12
  worker 0, 0x7fa7abab0c80
  worker 1, 0x7fa7abae5980
  worker 2, 0x7fa7abb1a680
  worker 3, 0x7fa7abb4f380
  worker 4, 0x7fa7abb84080
  worker 5, 0x7fa7abbb8d80
  worker 6, 0x7fa7abbeda80
  worker 7, 0x7fa7abc22780
  worker 8, 0x7fa7abc57480
  worker 9, 0x7fa7abc8c180
  worker 10, 0x7fa7abcc0e80
  worker 11, 0x7fa7abcf5b80
  worker 12, 0x7fa7abd2a880
  worker 13, 0x7fa7abd5f580
  worker 14, 0x7fa7abd94280
  worker 15, 0x7fa7abdc8f80
  worker 16, 0x7fa7abdfdc80
  worker 17, 0x7fa730432c80
  worker 18, 0x7fa730467980
 port 13
  worker 0, 0x7fa7abab2d00
  worker 1, 0x7fa7abae7a00
  worker 2, 0x7fa7abb1c700
  worker 3, 0x7fa7abb51400
  worker 4, 0x7fa7abb86100
  worker 5, 0x7fa7abbbae00
  worker 6, 0x7fa7abbefb00
  worker 7, 0x7fa7abc24800
  worker 8, 0x7fa7abc59500
  worker 9, 0x7fa7abc8e200
  worker 10, 0x7fa7abcc2f00
  worker 11, 0x7fa7abcf7c00
  worker 12, 0x7fa7abd2c900
  worker 13, 0x7fa7abd61600
  worker 14, 0x7fa7abd96300
  worker 15, 0x7fa7abdcb000
  worker 16, 0x7fa730400000
  worker 17, 0x7fa730434d00
  worker 18, 0x7fa730469a00
 port 14
  worker 0, 0x7fa7abab4d80
  worker 1, 0x7fa7abae9a80
  worker 2, 0x7fa7abb1e780
  worker 3, 0x7fa7abb53480
  worker 4, 0x7fa7abb88180
  worker 5, 0x7fa7abbbce80
  worker 6, 0x7fa7abbf1b80
  worker 7, 0x7fa7abc26880
  worker 8, 0x7fa7abc5b580
  worker 9, 0x7fa7abc90280
  worker 10, 0x7fa7abcc4f80
  worker 11, 0x7fa7abcf9c80
  worker 12, 0x7fa7abd2e980
  worker 13, 0x7fa7abd63680
  worker 14, 0x7fa7abd98380
  worker 15, 0x7fa7abdcd080
  worker 16, 0x7fa730402080
  worker 17, 0x7fa730436d80
  worker 18, 0x7fa73046ba80
 port 15
  worker 0, 0x7fa7abab6e00
  worker 1, 0x7fa7abaebb00
  worker 2, 0x7fa7abb20800
  worker 3, 0x7fa7abb55500
  worker 4, 0x7fa7abb8a200
  worker 5, 0x7fa7abbbef00
  worker 6, 0x7fa7abbf3c00
  worker 7, 0x7fa7abc28900
  worker 8, 0x7fa7abc5d600
  worker 9, 0x7fa7abc92300
  worker 10, 0x7fa7abcc7000
  worker 11, 0x7fa7abcfbd00
  worker 12, 0x7fa7abd30a00
  worker 13, 0x7fa7abd65700
  worker 14, 0x7fa7abd9a400
  worker 15, 0x7fa7abdcf100
  worker 16, 0x7fa730404100
  worker 17, 0x7fa730438e00
  worker 18, 0x7fa73046db00
I/O lcore 4 (socket 0):
 Input rings per TX port
 port 16
  worker 0, 0x7fa7abab8e80
  worker 1, 0x7fa7abaedb80
  worker 2, 0x7fa7abb22880
  worker 3, 0x7fa7abb57580
  worker 4, 0x7fa7abb8c280
  worker 5, 0x7fa7abbc0f80
  worker 6, 0x7fa7abbf5c80
  worker 7, 0x7fa7abc2a980
  worker 8, 0x7fa7abc5f680
  worker 9, 0x7fa7abc94380
  worker 10, 0x7fa7abcc9080
  worker 11, 0x7fa7abcfdd80
  worker 12, 0x7fa7abd32a80
  worker 13, 0x7fa7abd67780
  worker 14, 0x7fa7abd9c480
  worker 15, 0x7fa7abdd1180
  worker 16, 0x7fa730406180
  worker 17, 0x7fa73043ae80
  worker 18, 0x7fa73046fb80
 port 17
  worker 0, 0x7fa7ababaf00
  worker 1, 0x7fa7abaefc00
  worker 2, 0x7fa7abb24900
  worker 3, 0x7fa7abb59600
  worker 4, 0x7fa7abb8e300
  worker 5, 0x7fa7abbc3000
  worker 6, 0x7fa7abbf7d00
  worker 7, 0x7fa7abc2ca00
  worker 8, 0x7fa7abc61700
  worker 9, 0x7fa7abc96400
  worker 10, 0x7fa7abccb100
  worker 11, 0x7fa7abcffe00
  worker 12, 0x7fa7abd34b00
  worker 13, 0x7fa7abd69800
  worker 14, 0x7fa7abd9e500
  worker 15, 0x7fa7abdd3200
  worker 16, 0x7fa730408200
  worker 17, 0x7fa73043cf00
  worker 18, 0x7fa730471c00
 port 18
  worker 0, 0x7fa7ababcf80
  worker 1, 0x7fa7abaf1c80
  worker 2, 0x7fa7abb26980
  worker 3, 0x7fa7abb5b680
  worker 4, 0x7fa7abb90380
  worker 5, 0x7fa7abbc5080
  worker 6, 0x7fa7abbf9d80
  worker 7, 0x7fa7abc2ea80
  worker 8, 0x7fa7abc63780
  worker 9, 0x7fa7abc98480
  worker 10, 0x7fa7abccd180
  worker 11, 0x7fa7abd01e80
  worker 12, 0x7fa7abd36b80
  worker 13, 0x7fa7abd6b880
  worker 14, 0x7fa7abda0580
  worker 15, 0x7fa7abdd5280
  worker 16, 0x7fa73040a280
  worker 17, 0x7fa73043ef80
  worker 18, 0x7fa730473c80
 port 19
  worker 0, 0x7fa7ababf000
  worker 1, 0x7fa7abaf3d00
  worker 2, 0x7fa7abb28a00
  worker 3, 0x7fa7abb5d700
  worker 4, 0x7fa7abb92400
  worker 5, 0x7fa7abbc7100
  worker 6, 0x7fa7abbfbe00
  worker 7, 0x7fa7abc30b00
  worker 8, 0x7fa7abc65800
  worker 9, 0x7fa7abc9a500
  worker 10, 0x7fa7abccf200
  worker 11, 0x7fa7abd03f00
  worker 12, 0x7fa7abd38c00
  worker 13, 0x7fa7abd6d900
  worker 14, 0x7fa7abda2600
  worker 15, 0x7fa7abdd7300
  worker 16, 0x7fa73040c300
  worker 17, 0x7fa730441000
  worker 18, 0x7fa730475d00
 port 20
  worker 0, 0x7fa7abac1080
  worker 1, 0x7fa7abaf5d80
  worker 2, 0x7fa7abb2aa80
  worker 3, 0x7fa7abb5f780
  worker 4, 0x7fa7abb94480
  worker 5, 0x7fa7abbc9180
  worker 6, 0x7fa7abbfde80
  worker 7, 0x7fa7abc32b80
  worker 8, 0x7fa7abc67880
  worker 9, 0x7fa7abc9c580
  worker 10, 0x7fa7abcd1280
  worker 11, 0x7fa7abd05f80
  worker 12, 0x7fa7abd3ac80
  worker 13, 0x7fa7abd6f980
  worker 14, 0x7fa7abda4680
  worker 15, 0x7fa7abdd9380
  worker 16, 0x7fa73040e380
  worker 17, 0x7fa730443080
  worker 18, 0x7fa730477d80
 port 21
  worker 0, 0x7fa7abac3100
  worker 1, 0x7fa7abaf7e00
  worker 2, 0x7fa7abb2cb00
  worker 3, 0x7fa7abb61800
  worker 4, 0x7fa7abb96500
  worker 5, 0x7fa7abbcb200
  worker 6, 0x7fa7abbfff00
  worker 7, 0x7fa7abc34c00
  worker 8, 0x7fa7abc69900
  worker 9, 0x7fa7abc9e600
  worker 10, 0x7fa7abcd3300
  worker 11, 0x7fa7abd08000
  worker 12, 0x7fa7abd3cd00
  worker 13, 0x7fa7abd71a00
  worker 14, 0x7fa7abda6700
  worker 15, 0x7fa7abddb400
  worker 16, 0x7fa730410400
  worker 17, 0x7fa730445100
  worker 18, 0x7fa730479e00
 port 22
  worker 0, 0x7fa7abac5180
  worker 1, 0x7fa7abaf9e80
  worker 2, 0x7fa7abb2eb80
  worker 3, 0x7fa7abb63880
  worker 4, 0x7fa7abb98580
  worker 5, 0x7fa7abbcd280
  worker 6, 0x7fa7abc01f80
  worker 7, 0x7fa7abc36c80
  worker 8, 0x7fa7abc6b980
  worker 9, 0x7fa7abca0680
  worker 10, 0x7fa7abcd5380
  worker 11, 0x7fa7abd0a080
  worker 12, 0x7fa7abd3ed80
  worker 13, 0x7fa7abd73a80
  worker 14, 0x7fa7abda8780
  worker 15, 0x7fa7abddd480
  worker 16, 0x7fa730412480
  worker 17, 0x7fa730447180
  worker 18, 0x7fa73047be80
 port 23
  worker 0, 0x7fa7abac7200
  worker 1, 0x7fa7abafbf00
  worker 2, 0x7fa7abb30c00
  worker 3, 0x7fa7abb65900
  worker 4, 0x7fa7abb9a600
  worker 5, 0x7fa7abbcf300
  worker 6, 0x7fa7abc04000
  worker 7, 0x7fa7abc38d00
  worker 8, 0x7fa7abc6da00
  worker 9, 0x7fa7abca2700
  worker 10, 0x7fa7abcd7400
  worker 11, 0x7fa7abd0c100
  worker 12, 0x7fa7abd40e00
  worker 13, 0x7fa7abd75b00
  worker 14, 0x7fa7abdaa800
  worker 15, 0x7fa7abddf500
  worker 16, 0x7fa730414500
  worker 17, 0x7fa730449200
  worker 18, 0x7fa73047df00
 port 24
  worker 0, 0x7fa7abac9280
  worker 1, 0x7fa7abafdf80
  worker 2, 0x7fa7abb32c80
  worker 3, 0x7fa7abb67980
  worker 4, 0x7fa7abb9c680
  worker 5, 0x7fa7abbd1380
  worker 6, 0x7fa7abc06080
  worker 7, 0x7fa7abc3ad80
  worker 8, 0x7fa7abc6fa80
  worker 9, 0x7fa7abca4780
  worker 10, 0x7fa7abcd9480
  worker 11, 0x7fa7abd0e180
  worker 12, 0x7fa7abd42e80
  worker 13, 0x7fa7abd77b80
  worker 14, 0x7fa7abdac880
  worker 15, 0x7fa7abde1580
  worker 16, 0x7fa730416580
  worker 17, 0x7fa73044b280
  worker 18, 0x7fa73047ff80
 port 25
  worker 0, 0x7fa7abacb300
  worker 1, 0x7fa7abb00000
  worker 2, 0x7fa7abb34d00
  worker 3, 0x7fa7abb69a00
  worker 4, 0x7fa7abb9e700
  worker 5, 0x7fa7abbd3400
  worker 6, 0x7fa7abc08100
  worker 7, 0x7fa7abc3ce00
  worker 8, 0x7fa7abc71b00
  worker 9, 0x7fa7abca6800
  worker 10, 0x7fa7abcdb500
  worker 11, 0x7fa7abd10200
  worker 12, 0x7fa7abd44f00
  worker 13, 0x7fa7abd79c00
  worker 14, 0x7fa7abdae900
  worker 15, 0x7fa7abde3600
  worker 16, 0x7fa730418600
  worker 17, 0x7fa73044d300
  worker 18, 0x7fa730482000

Ring sizes:
  NIC RX     = 1024
  Worker in  = 1024
  Worker out = 1024
  NIC TX     = 1024
Burst sizes:
  I/O RX (rd = 144, wr = 144)
  Worker (rd = 144, wr = 144)
  I/O TX (rd = 144, wr = 144)
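The dump above implies a full mesh of rings: 19 workers (0-18), each with 2 input rings and one output ring per NIC TX port (0-25), all sized at 1024 entries. A small Python sketch can sanity-check those counts; the numbers here are derived from this log output only, not from the Lagopus source.

```python
# Ring-mesh sanity check for the log above:
# 19 workers (0-18), 26 NIC TX ports (0-25),
# one output ring per (worker, TX port) pair, 2 input rings per worker.
workers = 19
tx_ports = 26
input_rings_per_worker = 2
ring_entries = 1024                      # "Worker out = 1024" above

output_rings = workers * tx_ports        # one ring per pair
input_rings = workers * input_rings_per_worker

print(output_rings)                      # 494
print(input_rings)                       # 38
# Worst case: mbuf pointers queued in worker->TX rings alone
print(output_rings * ring_entries)       # 505856
```

This explains why the "Output rings per TX port" listings are so long: every worker carries a private ring to every TX port, so the count grows as workers x ports.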

Logical core 2 (I/O) main loop.
Logical core 3 (I/O) main loop.
Logical core 4 (I/O) main loop.
Logical core 5 (worker 0) main loop.
Logical core 6 (worker 1) main loop.
Logical core 7 (worker 2) main loop.
Logical core 8 (worker 3) main loop.
Logical core 9 (worker 4) main loop.
Logical core 10 (worker 5) main loop.
Logical core 11 (worker 6) main loop.
Logical core 12 (worker 7) main loop.
Logical core 13 (worker 8) main loop.
Logical core 14 (worker 9) main loop.
Logical core 15 (worker 10) main loop.
Logical core 16 (worker 11) main loop.
Logical core 17 (worker 12) main loop.
Logical core 18 (worker 13) main loop.
Logical core 19 (worker 14) main loop.
Logical core 20 (worker 15) main loop.
Logical core 21 (worker 16) main loop.
Logical core 22 (worker 17) main loop.
Logical core 23 (worker 18) main loop.
Adding Physical Port 0
00:0b:ab:58:9b:50:
Adding Physical Port 1
00:0b:ab:58:9b:51:
Adding Physical Port 2
00:0b:ab:58:99:fa:
Adding Physical Port 3
00:0b:ab:58:99:fb:
Adding Physical Port 4
00:0b:ab:58:99:fc:
Adding Physical Port 5
00:0b:ab:58:99:fd:
Adding Physical Port 6
00:0b:ab:58:88:90:
Adding Physical Port 7
00:0b:ab:58:88:91:
Adding Physical Port 8
00:0b:ab:58:88:92:
Adding Physical Port 9
00:0b:ab:58:88:93:
Adding Physical Port 10
00:0b:ab:7e:85:5e:
Adding Physical Port 11
00:0b:ab:7e:85:5f:
Adding Physical Port 12
00:0b:ab:58:9a:0a:
Adding Physical Port 13
00:0b:ab:58:9a:0b:
PANIC in rte_free():
Fatal error: Invalid memory
13: [/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fa7b2b29e6d]]
12: [/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7fa7b2fe3b50]]
11: [lagopus() [0x46fb65]]
10: [/usr/lib/liblagopus_dataplane.so.0(app_lcore_main_loop+0x71) [0x7fa7b3e7a4a0]]
9: [/usr/lib/liblagopus_dataplane.so.0(app_lcore_main_loop_io+0x14c) [0x7fa7b3e83900]]
8: [/usr/lib/liblagopus_dataplane.so.0(+0xa24b6) [0x7fa7b3e834b6]]
7: [/usr/lib/liblagopus_dataplane.so.0(update_port_link_status+0x173) [0x7fa7b3e84af4]]
6: [lagopus(rte_eth_dev_start+0x50) [0x465820]]
5: [lagopus() [0x45422a]]
4: [lagopus() [0x459311]]
3: [lagopus() [0x46784e]]
2: [lagopus(__rte_panic+0xc1) [0x4048d8]]
1: [lagopus() [0x471bd3]]
user@debian:~$ 

Lagopus 0.2 can't start: unable to load lagopus.conf

Hi all,

I downloaded and installed v0.2 in my lab,

but Lagopus fails to start when I run the basic command "sudo lagopus -d -- -c3 -n1 -- -p3".

Checking syslog (below), it looks like Lagopus can't load "lagopus.conf" at startup.

c:1068:s_load_conf: file: /usr/local/etc/lagopus/lagopus.conf, {"ret":"NOT_FOUND",#12"data":"name = :{", "line": 1, "file": "/usr/local/etc/lagopus/lagopus.conf"}
Aug 28 12:12:00 vm-LP1 lagopus[11352]: [Fri Aug 28 12:12:00 CST 2015][ERROR][11352:0x00007f368b5ce9c0:lagopus]:./load_conf_module.c:28:initialize_internal: Datastore interp error(s).
Aug 28 12:12:00 vm-LP1 lagopus[11352]: [Fri Aug 28 12:12:00 CST 2015][FATAL][11352:0x00007f368b5ce9c0:lagopus]:./load_conf_module.c:29:initialize_internal: can't load the configuration parameters.

Following the Quickstart guide, I don't have a "lagopus/samples/lagopus.conf" file after unpacking the source code.

I also tried writing my own config file at /usr/local/etc/lagopus/lagopus.conf and pointing Lagopus at it with "lagopus -C", but it still doesn't work.

mark@vm-LP1:~$ more /usr/local/etc/lagopus/lagopus.conf
interface {
    interface01 {
        type ethernet-dpdk-phy;
        port-number 0;
    }
    interface02 {
        type ethernet-dpdk-phy;
        port-number 1;
    }
}
port {
    port01 {
        interface interface01;
    }
    port02 {
        interface interface02;
    }
}
channel {
    channel01 {
        dst-addr 127.0.0.1;
    }
}
controller {
    controller01 {
        channel channel01;
    }
}
bridge {
    bridge0 {
        dpid 1;
        port port01 1;
        port port02 2;
        controller controller01;
    }
}
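A quick way to rule out syntax problems in a brace-delimited config like the one above is to check that the braces balance and that the expected top-level blocks are present. The following is an illustrative Python sketch, not a Lagopus tool; the `top_level_blocks` helper is hypothetical.

```python
# Sanity-check a brace-delimited config: verify braces balance and
# list the top-level block names. Illustrative only; not part of Lagopus.
def top_level_blocks(text):
    depth = 0
    blocks = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.endswith("{"):
            if depth == 0:
                # record the block keyword, e.g. "interface" or "bridge"
                blocks.append(stripped[:-1].strip().split()[0])
            depth += 1
        elif stripped == "}":
            depth -= 1
    if depth != 0:
        raise ValueError("unbalanced braces")
    return blocks

conf = """
interface {
    interface01 {
        type ethernet-dpdk-phy;
        port-number 0;
    }
}
bridge {
    bridge0 {
        dpid 1;
    }
}
"""
print(top_level_blocks(conf))  # ['interface', 'bridge']
```

If the braces balance, the "NOT_FOUND" parse error in the syslog is more likely a format-version mismatch between the config and the Lagopus 0.2 datastore than a simple typo.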

Could you kindly help with this issue?

Thanks
mark

What should I do if I want hosts to connect to the Internet?

Hi all,

I installed Lagopus 0.2, DPDK 1.8.0, and Ryu 3.10, and use two other PCs as hosts.
The hosts can now ping each other, but they cannot reach the Internet.

Does Lagopus support connecting hosts to the Internet?
Can I use Lagopus to replace a real L3 switch today, or do I need to wait for a later version?
If so, what steps do I need to take?

Thanks.
Brian
