cloudnativedataplane / cndp

Cloud Native Data Plane (CNDP) is a collection of user space libraries to accelerate packet processing for cloud applications, using AF_XDP sockets as the primary I/O.

License: BSD 3-Clause "New" or "Revised" License

Shell 0.75% Makefile 0.10% Jinja 0.16% Dockerfile 0.09% Go 5.68% Rust 2.45% C 89.12% Meson 1.33% C++ 0.12% Python 0.19%

cndp's Introduction

CNDP - Cloud Native Data Plane

Overview

Cloud Native Data Plane (CNDP) is a collection of userspace libraries for accelerating packet processing for cloud applications. It aims to provide better performance than that of standard network socket interfaces by taking advantage of platform technologies such as Intel(R) AVX-512, Intel(R) DSA, CLDEMOTE, etc. The I/O layer is primarily built on AF_XDP, an interface that delivers packets straight to userspace, bypassing the kernel networking stack. CNDP provides ways to expose metrics and telemetry with examples to deploy network services on Kubernetes.

CNDP Consumers

  • Cloud Network Function (CNF) and Cloud Application developers: Those who create applications based on CNDP. CNDP hides the low-level I/O, allowing the developer to focus on their application.

  • CNF and Cloud Application consumers: Those who consume the applications developed by the CNF developer. CNDP showcases deployment models for their applications using Kubernetes.

CNDP Characteristics

CNDP follows a set of principles:

  • Functionality: Provide a framework for cloud native developers that offers full control of their application.

  • Usability: Simplify cloud native application development to enable the developer to create applications by providing APIs that abstract the complexities of the underlying system while still taking advantage of acceleration features when available.

  • Interoperability: The CNDP framework is built primarily on top of AF_XDP. Other interfaces, such as memif, are also supported; however, building on AF_XDP ensures an application can move across environments wherever AF_XDP is supported.

  • Portability/stability: CNDP provides ABI stability and a common API to access network interfaces.

  • Performance: Take advantage of platform technologies to accelerate packet processing or fall-back to software when acceleration is unavailable.

  • Observability: Provide observability into the performance and operation of the application.

  • Security: Security for deployment in a cloud environment is critical.

CNDP background

CNDP was created to enable cloud native developers to use AF_XDP and other interfaces in a simple way while providing better performance as compared to standard Linux networking interfaces.

CNDP does not replace DPDK (Data Plane Development Kit), which provides the highest performance for packet processing. DPDK implements user space drivers, bypassing the kernel drivers. This approach of rewriting drivers is one reason DPDK achieves the highest performance for packet processing. DPDK also implements a framework to initialize and set up platform resources, e.g. scanning the PCI bus for devices, allocating memory via hugepages, setting up primary/secondary process support, etc.

In contrast to DPDK, CNDP does not have custom drivers. Instead it expects the kernel drivers to implement AF_XDP, preferably in zero-copy mode. Since there are no PCIe drivers, there is no PCI bus scanning, and CNDP does not require physically contiguous and pinned memory. This simplifies deployment for cloud native applications while gaining the performance benefits provided by AF_XDP.

CNDP notable directories

The following shows a subset of the directory structure.

.
├── ansible                   # Ansible playbook to install in a system(s)
├── containerization          # Container configuration and setup scripts for Docker/K8s
├── doc                       # Documentation APIs, guides, getting started, ...
├── examples                  # Example applications to understand how to use CNDP features
├── lang                      # Language bindings and examples
│   ├── go                    # Go Language bindings to CNDP and tools (WIP)
│   └── rs                    # Rust Language bindings for CNDP/Wireguard (WIP)
├── lib                       # Set of libraries for building CNDP applications
│   ├── cnet                  # Userspace network stack
│   ├── common                # Libraries used by core and applications libraries
│   ├── core                  # Core libraries for CNDP
│   ├── include               # Common headers for CNDP and applications
│   └── usr                   # User set of libraries that are optional for developer
├── test                      # Unit test framework
│   ├── common                # Common test code
│   ├── fuzz                  # Fuzzing (WIP)
│   └── testcne               # Functional unit testing application
├── tools                     # Tools for building CNDP
│   └── vscode                # Configuration files for vscode
└── usrtools                  # Tools for users
    ├── cnectl                # Remote CLI for CNDP applications
    └── txgen                 # Traffic Generator using AF_XDP and CNDP

cndp's People

Contributors

anatolyburakov, badhrinathpa, byron-marohn, capp21, chmodshubham, dependabot[bot], donaldh, elzamath, haoruan, jalalmostafa, jeffreybshaw, keithwiles, leonmatt, manojgop, maryamtahhan, mnjdhl, nwaples, pbanicev, skoikkar, sushmasi, sweeksbigblue, vikasjain85, xiaotia3


cndp's Issues

ubuntu dockerfile should not install packages without apt-get update

When a container image has already cached the first apt-get update, subsequent calls to apt-get install might fail if the repo information changes, as is the case in https://github.com/CloudNativeDataPlane/cndp/blob/main/containerization/docker/ubuntu/Dockerfile#L48. The Dockerfile needs to be updated so the required packages are installed together with an apt-get update, or the packages are moved to the first apt-get update && apt-get install line.
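A minimal sketch of the suggested fix (the package names are placeholders, not the Dockerfile's actual list): running update and install in a single RUN layer means install never executes against stale cached metadata.

```dockerfile
# Sketch: never run "apt-get install" against previously cached metadata.
# Combining update and install in one RUN layer keeps them atomic, and
# removing the lists afterwards keeps the layer small.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential pkg-config \
    && rm -rf /var/lib/apt/lists/*
```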

GTP-u graph node for the CNET networking stack

Difficulty: Fairly hard as one needs to learn CNDP graph node library and GTP-u protocol.

Description:
A GTP-u graph node for CNET, a CNDP graph-node-based network stack (IPv4/UDP/TCP).

  • Create and test a GTP-u graph node for the CNET stack. The CNET stack has a stub version as a starting point: cndp/lib/cnet/gtpu.
  • Graph Node library source and documentation.
    • The graph node library was taken from DPDK with minor changes to work within CNDP.
  • The GTP-u graph node needs to comply with the GTP-u protocol.
  • It needs to be tested and verified to work with the CNET stack.
  • This one can be fairly complex as one needs to learn the graph node library and the GTP-u protocol.

TODO: Add more information.

Will CNDP support jumbo frames, and when will CNDP support pktmbuf chaining?

Hello,

I'm evaluating porting DPDK apps to CNDP. We need jumbo frames (up to 9KB) and we use pktmbuf chaining functions.
It seems that XDP supports jumbo frames using multi-buffer. Is there a plan to support this in CNDP?
Also, when will the pktmbuf chaining capability be available?

Thanks in advance for your answer.

Niry

quicly example no longer working/building

Had issues with quicly/picotls detection, and even after adding workarounds, the build stops with errors indicating the quicly API has changed. I suspect the installed version of quicly is too new and the example code needs to be updated.

[Question] Where to define lport number in fwd.jsonc? - cndpfwd sample application

Hello, I am trying the cndpfwd application in fwd mode,
I'm generating packet using T-Rex Traffic Generator with the src/dst mac address are:
- dest_mac: 40:a6:b7:19:f2:39 src_mac: 40:a6:b7:19:f2:38
My current fwd.jsonc configuration for the 2 ports of the cndpfwd app is:
"lports": {
    "enp175s0f0:0": {
        "pmd": "net_af_xdp",
        "qid": 0,
        "umem": "umem0",
        "region": 0,
        "description": "LAN 0 port"
    },
    "enp175s0f1:0": {
        "pmd": "net_af_xdp",
        "qid": 0,
        "umem": "umem0",
        "region": 1,
        "description": "LAN 1 port"
    }
}
Traffic was generated to port enp175s0f0. The app forwarded the traffic back to this port as it cannot find the corresponding lport as said in the document
"In forward mode (‘fwd’), the destination logical port on which to forward the packet is specified by the last octet of the destination MAC address. It should be a valid lport number 0-N. If the destination port does not exist the packet is sent back out the port on which it was received"

I would like to know how to configure the lport number of the second port (enp175s0f1) so that the cndpfwd app forwards the packet to it based on the last octet (39) of the destination MAC address.

Thanks,
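To illustrate the selection rule quoted above from the cndpfwd docs, here is a hypothetical helper (not a CNDP API, and assuming the octet is used as its plain numeric value): the last octet of the destination MAC picks the lport, and an out-of-range value echoes the packet back out the receive port.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper illustrating the documented cndpfwd rule: in 'fwd'
 * mode the destination lport is the last octet of the destination MAC;
 * if that is not a valid lport (0..num_lports-1), the packet is sent
 * back out the port it was received on. Names here are illustrative. */
int
fwd_dest_lport(const uint8_t dst_mac[6], int rx_lport, int num_lports)
{
    int dest = dst_mac[5]; /* last octet of the destination MAC */

    if (dest >= num_lports)
        return rx_lport; /* no such lport: echo back out the RX port */
    return dest;
}
```

Under that rule, 40:a6:b7:19:f2:39 ends in 0x39 (57), which is not a valid lport in a two-port setup, so the packet is echoed back; a destination MAC ending in :01 would target the second lport.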

Sharable MsgChan for sending messages between processes

Difficulty: Medium

The Message Channel (MsgChan) is a library utilizing lockless rings to send/receive 8-byte messages between threads in a single process. The library creates two lockless rings, one for sending and one for receiving messages. The current usage of Message Channels is to send messages between Go language threads and CNDP C code threads. The library creates and maintains these lockless rings and gives the developer a simple set of APIs to utilize the channel.

The goal of this task is to modify the library to allow sharing the lockless rings between processes. The messages cannot be pointers into other shared memory regions, only offsets into the shared memory. A message can be any 8-byte value and does not need to point to other memory. The lockless rings appear to already be sharable between processes, but the MsgChan structures are not completely sharable.

The implementation should rely on Linux IPC and existing shared memory infrastructure wherever possible.
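The pointer-to-offset requirement can be sketched as follows; this is illustrative code, not part of the MsgChan library.

```c
#include <stdint.h>

/* Illustrative sketch (not CNDP code): when a region is mmap()ed at
 * different virtual addresses in each process, messages must carry
 * offsets relative to the region base rather than raw pointers. */
uint64_t
region_ptr_to_off(const void *base, const void *ptr)
{
    return (uint64_t)((const uint8_t *)ptr - (const uint8_t *)base);
}

void *
region_off_to_ptr(void *base, uint64_t off)
{
    return (uint8_t *)base + off;
}
```

Each process applies region_off_to_ptr() against its own mapping of the same region, and an 8-byte MsgChan message can carry the offset directly.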

lwIP question

Have any evaluations been done on using lwIP as an alternative to the IP stack that was created from scratch?

If so, why was this design chosen instead? Was it due to performance reasons?

IPv6 roadmap

Hello there!

I am interested in finding out more information about the timeline for IPv6 support. If this becomes available, we would be eager to test it out for these use cases:

  1. lw4o6 (RFC 7596) including both:
     • AFTR
     • B4
  2. Keyed IPv6 tunnel (RFC 8159)

We would be happy to contribute this code back as example projects.

Any feedback would be greatly appreciated.

newer versions of libbpf 0.8.0 or libxdp

It appears that, while adding txgen latency support, the packet data offset from pktmbuf_mtod() is not pointing to the start of packet data as expected. libbpf 0.6.0 appears to behave correctly; we need to move to libxdp to verify whether the problem still exists.

Rework the test-cne testing infrastructure

Difficulty: Easy

Enhance the test-cne infrastructure to be more consistent or easier to use. Need all of the tests to look similar and produce similar output.

For example, to show how one test differs from the others: this test uses printf and not the tst_info() or tst_error() type APIs; these need to be cleaned up to use the correct APIs from the tst_ XXX library.

Python CNDP binding layer

Difficulty: Hard

Create a new CNDP binding layer to allow Python scripts to utilize CNDP libraries. The new binding layer needs to be similar to the Go/Rust binding layers. Create the bindings in lang/python. As an example to use the bindings, create a Python application that uses the bindings similar to the Go fwd example.

Preferably the bindings are auto-generated as much as possible.

remove privileged lport flag from rust implementation

The privileged flag was removed for lports in the C CNDP libraries in this PR, and a new flag called xsk_pin_path was added.

The privileged flag is automatically configured if the xsk_pin_path or xsk_uds configurations are set. The same should be done in Rust.

Example Go application using Go binding layer

Difficulty: Medium to easy; may have to learn the Go language.

Create a new example Go application that does some real-world task and uses the CNDP Go binding layer. We currently have a simple loopback/forward-like application called fwd, along with a set of Go test code to verify the Go bindings work.

Pick a reasonable Go application doing some type of work other than simple forwarding. Write the Go code utilizing the CNDP Go binding APIs. New Go binding APIs may need to be added or existing ones updated.

make static_build=1 rebuild fails

Using Ubuntu 20.04, set up using Ansible

"/home/canopus/cndp/tools/cne-build.sh" clean static build
Build environment variables and paths:
CNE_SDK_DIR : /home/canopus/cndp
CNE_TARGET_DIR : usr/local
CNE_BUILD_DIR : /home/canopus/cndp/builddir
CNE_DEST_DIR : /home/canopus/cndp
PKG_CONFIG_PATH : /usr/lib64/pkgconfig
build_path : /home/canopus/cndp/builddir
target_path : /home/canopus/cndp/usr/local

*** Removing '/home/canopus/cndp/builddir' directory

Static build in '/home/canopus/cndp/builddir'
Release build in '/home/canopus/cndp/builddir'
Ninja build in '/home/canopus/cndp/builddir' buildtype='release'
The Meson build system
Version: 0.53.2
Source dir: /home/canopus/cndp
Build dir: /home/canopus/cndp/builddir
Build type: native build
Program cat found: YES (/bin/cat)
Project name: CNDP
Project version: 22.08.0
C++ compiler for the host machine: c++ (gcc 9.4.0 "c++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0")
C++ linker for the host machine: c++ ld.bfd 2.34
C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0")
C linker for the host machine: cc ld.bfd 2.34
Host machine cpu family: x86_64
Host machine cpu: x86_64
Checking for size of "void *" : 8
meson.build:91: WARNING: Consider using the built-in debug option instead of using "-g".
Has header "execinfo.h" : YES
Found pkg-config: /usr/bin/pkg-config (0.29.1)
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency libexecinfo found: NO (tried pkgconfig and cmake)
Run-time dependency numa found: NO (tried pkgconfig and cmake)
Run-time dependency libbsd found: YES 0.10.0
Run-time dependency json-c found: YES 0.13.1
Run-time dependency libpcap found: YES 1.9.1
Run-time dependency libnl-3.0 found: YES 3.4.0
Run-time dependency libnl-cli-3.0 found: YES 3.4.0
Run-time dependency libnl-route-3.0 found: YES 3.4.0
Run-time dependency libxdp found: NO (tried pkgconfig)
Run-time dependency libbpf found: YES 0.3.0
Has header "linux/if_xdp.h" : YES
Has header "bpf/xsk.h" : YES
Has header "bpf/bpf.h" : YES
Run-time dependency libbpf found: YES 0.3.0
Compiler for C supports arguments -muintr: NO
Run-time dependency libdlb found: NO (tried pkgconfig)
Compiler for C supports arguments -Wno-pedantic -Wpedantic: YES
Compiler for C++ supports arguments -Wno-pedantic -Wpedantic: YES
Compiler for C supports arguments -Wcast-qual: YES
Compiler for C++ supports arguments -Wcast-qual: YES
Compiler for C supports arguments -Wdeprecated: YES
Compiler for C++ supports arguments -Wdeprecated: YES
Compiler for C supports arguments -Wformat-nonliteral: YES
Compiler for C++ supports arguments -Wformat-nonliteral: YES
Compiler for C supports arguments -Wformat-security: YES
Compiler for C++ supports arguments -Wformat-security: YES
Compiler for C supports arguments -Wmissing-declarations: YES
Compiler for C++ supports arguments -Wmissing-declarations: YES
Compiler for C supports arguments -Wmissing-prototypes: YES
Compiler for C++ supports arguments -Wmissing-prototypes: NO
Compiler for C supports arguments -Wnested-externs: YES
Compiler for C++ supports arguments -Wnested-externs: NO
Compiler for C supports arguments -Wold-style-definition: YES
Compiler for C++ supports arguments -Wold-style-definition: NO
Compiler for C supports arguments -Wpointer-arith: YES
Compiler for C++ supports arguments -Wpointer-arith: YES
Compiler for C supports arguments -Wsign-compare: YES
Compiler for C++ supports arguments -Wsign-compare: YES
Compiler for C supports arguments -Wstrict-prototypes: YES
Compiler for C++ supports arguments -Wstrict-prototypes: NO
Compiler for C supports arguments -Wundef: YES
Compiler for C++ supports arguments -Wundef: YES
Compiler for C supports arguments -Wwrite-strings: YES
Compiler for C++ supports arguments -Wwrite-strings: YES
Compiler for C supports arguments -Wno-address-of-packed-member -Waddress-of-packed-member: YES
Compiler for C++ supports arguments -Wno-address-of-packed-member -Waddress-of-packed-member: YES
Compiler for C supports arguments -Wno-packed-not-aligned -Wpacked-not-aligned: YES
Compiler for C++ supports arguments -Wno-packed-not-aligned -Wpacked-not-aligned: YES
Compiler for C supports arguments -Wno-missing-field-initializers -Wmissing-field-initializers: YES
Compiler for C++ supports arguments -Wno-missing-field-initializers -Wmissing-field-initializers: YES
Fetching value of define "SSE4_2" : 1
Fetching value of define "AES" : 1
Fetching value of define "PCLMUL" : 1
Fetching value of define "AVX" : 1
Fetching value of define "AVX2" : 1
Fetching value of define "AVX512F" :
Fetching value of define "RDRND" : 1
Fetching value of define "RDSEED" : 1
Fetching value of define "AVX512F" : (cached)
Fetching value of define "AVX512DQ" :
Message: *** IPv4 dump output disabled
Message: *** TCP support disabled
Message: *** Enable support for punting to Linux kernel stack
Message: *** TCP dump output disabled
Configuring cne_build_config.h using configuration
Fetching value of define "AVX2" : 1 (cached)
Fetching value of define "AVX512F" : (cached)
Fetching value of define "AVX512VL" :
Fetching value of define "AVX512CD" :
Fetching value of define "AVX512BW" :
Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
Compiler for C supports arguments -mavx512f -mavx512dq: YES
Message: *** have multiple avx512f and avx512dq
Compiler for C supports arguments -mavx512bw: YES
Compiler for C supports arguments -mavx512bw: YES
Program python3 found: YES (/usr/bin/python3)
Program doxygen found: YES (/usr/bin/doxygen)
Program generate_doxygen.sh found: YES (/home/canopus/cndp/doc/api/generate_doxygen.sh)
Program generate_examples.sh found: YES (/home/canopus/cndp/doc/api/generate_examples.sh)
Program doxy-html-custom.sh found: YES (/home/canopus/cndp/doc/api/doxy-html-custom.sh)
Configuring doxy-api.conf using configuration
Program sphinx-build found: YES (/usr/bin/sphinx-build)
Message: Fuzz tests require clang. Fuzz tests will not be built.
Message: **** Missing Linux source code, not building afxdp_user example:/usr/src/linux-source-5.13.0
Library quicly found: NO
Message: **** Quicly not found at /work/projects/intel/networking.dataplane/quicly
Update meson_options.txt file.
The quic-echo example cannot be built.
Message: DLB test build missing library
Program tools/mklib.sh found: YES (/home/canopus/cndp/tools/mklib.sh)
Program cp found: YES (/bin/cp)
Program cp found: YES (/bin/cp)
Message: >>> Create pkg-config file
Message: <<< Done pkg-config file
Build targets in project: 69

Option default_library is: static [default: shared]
Found ninja-1.10.0 at /usr/bin/ninja
ninja: Entering directory `/home/canopus/cndp/builddir'
[75/309] Linking target lib/core/pmds/net/af_xdp/libpmd_af_xdp.so.
FAILED: lib/core/pmds/net/af_xdp/libpmd_af_xdp.so
cc -o lib/core/pmds/net/af_xdp/libpmd_af_xdp.so 'lib/core/pmds/net/af_xdp/3c1feb5@@pmd_af_xdp@sha/pmd_af_xdp.c.o' -Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group -Wl,-soname,libpmd_af_xdp.so -Wl,--no-as-needed -pthread -lm -ldl -lbsd -ljson-c -lpcap -lnl-3 -lnl-cli-3 -lnl-route-3 lib/core/osal/libcne_osal.a lib/core/log/libcne_log.a lib/core/mmap/libcne_mmap.a lib/core/cne/libcne_cne.a lib/core/kvargs/libcne_kvargs.a lib/core/pktdev/libcne_pktdev.a lib/core/mempool/libcne_mempool.a lib/core/ring/libcne_ring.a lib/core/pktmbuf/libcne_pktmbuf.a lib/common/uds/libcne_uds.a lib/core/xskdev/libcne_xskdev.a /usr/lib/libbpf.a /usr/lib/x86_64-linux-gnu/libelf.a /usr/lib/x86_64-linux-gnu/libz.a -Wl,--end-group '-Wl,-rpath,$ORIGIN/../../../osal:$ORIGIN/../../../log:$ORIGIN/../../../mmap:$ORIGIN/../../../cne:$ORIGIN/../../../kvargs:$ORIGIN/../../../pktdev:$ORIGIN/../../../mempool:$ORIGIN/../../../ring:$ORIGIN/../../../pktmbuf:$ORIGIN/../../../../common/uds:$ORIGIN/../../../xskdev' -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/osal -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/log -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/mmap -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/cne -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/kvargs -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/pktdev -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/mempool -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/ring -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/pktmbuf -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/common/uds -Wl,-rpath-link,/home/canopus/cndp/builddir/lib/core/xskdev
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libelf.a(elf_error.o): relocation R_X86_64_TPOFF32 against `global_error' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/lib/libbpf.a(libbpf.o): relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
[84/309] Compiling C object 'lib/usr/clib/acl/ee741e7@@avx512_tmp@sta/acl_run_avx512.c.o'.
ninja: build stopped: subcommand failed.
make: *** [Makefile:46: rebuild] Error 1

Add docker build cache to github actions cache to improve CI build times

Difficulty: Medium to Hard, depending on understanding of GitHub Actions.

Potential enhancement idea for future work - it seems like it should be possible to cache part of the docker build layers to improve build times using the GitHub Actions cache system. The docker build is far slower than all the other build steps combined.

The tricky part is doing this in a way that doesn't eventually fill up the Actions cache. We were thinking about trying to do this but ran out of time. Here's a decent article with a possible approach - https://evilmartians.com/chronicles/build-images-on-github-actions-with-docker-layer-caching
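One possible shape, sketched with assumed action names and versions (docker/setup-buildx-action and docker/build-push-action, which support a GitHub Actions cache backend via their cache-from/cache-to inputs); this is not taken from this repository's workflows:

```yaml
# Sketch: reuse Docker layers across CI runs via the GHA cache exporter.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
- name: Build image with layer cache
  uses: docker/build-push-action@v5
  with:
    context: .
    file: containerization/docker/ubuntu/Dockerfile
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max   # mode=max also caches intermediate layers
```

The gha backend stores layers in the Actions cache, which GitHub evicts against a per-repository size limit, which helps bound the cache growth mentioned above.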

Maintainers, feel free to close or comment on this if you would be interested in this in the future.

Update the Ansible playbook

Difficulty: Easy to Medium

Update the CNDP Ansible playbook to make sure it supports Fedora installs.

The playbook needs to be updated to support libbpf and libxdp, as the new libxdp contains the XSK APIs which were deprecated in libbpf, but libbpf is still required.

Perform bulk FIB lookups in cndpfwd l3fwd mode

Difficulty: Medium

The l3fwd_fib_lookup function used in the RX loop in the cndpfwd _l3fwd_test case currently performs a single FIB lookup each time it's called, using the cne_fib_lookup_bulk function.

As its name suggests, cne_fib_lookup_bulk is designed to perform n lookups at a time. In its current usage, we call it repeatedly with an n of 1. A more idiomatic use of this function would be to construct the array of IPs to look up in the _l3fwd_test RX loop, perform the lookups in bulk with a single call to cne_fib_lookup_bulk, then iterate on the result set to modify & TX the packets.
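The suggested gather-then-bulk-lookup shape can be sketched like this; the toy lookup below only stands in for cne_fib_lookup_bulk (assumed, like its DPDK counterpart, to resolve n IPv4 addresses per call), so the control flow, not the FIB itself, is the point.

```c
#include <stdint.h>

/* Toy stand-in (NOT cne_fib_lookup_bulk): resolves n IPv4 addresses to
 * next hops in one call. For demo purposes, next hop = top octet. */
static void
toy_fib_lookup_bulk(const uint32_t *ips, uint64_t *next_hops, int n)
{
    for (int i = 0; i < n; i++)
        next_hops[i] = ips[i] >> 24;
}

/* The idiomatic RX-loop shape the issue asks for: gather the destination
 * IPs from the received burst, make ONE bulk lookup call, then iterate
 * the result set to modify & TX each packet. */
void
l3fwd_process_burst(const uint32_t *dst_ips, uint64_t *hops, int n)
{
    toy_fib_lookup_bulk(dst_ips, hops, n); /* one call for the whole burst */
    /* ...then modify & TX each packet using hops[i]... */
}
```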

Performance seems to have dropped in our nightly tests

It appears PR #177, which keeps the FQ full, causes the performance drop on nightly performance tests.

It may be because of the pktmbuf_t alloc/free calls, but we need to figure it out and rework xskdev somewhat to regain performance. The cndpfwd example appeared to work correctly, and it appeared to fix the Go application performance when using rx_burst() requests larger than 64 mbufs. We need to make sure the performance drop is real in the nightly builds and not a false positive or an artifact of the performance setup's configuration.

Minor doc issues with libbpf installation instructions

I followed the CNDP instructions here: https://github.com/CloudNativeDataPlane/cndp/blob/main/INSTALL.md but had an issue with the libbpf step:

Install libbpf from source

https_proxy="https://user:[email protected]:port" git clone https://github.com/libbpf/libbpf.git
cd libbpf
git checkout ...
make -C src
sudo make -C src install

There are 2 issues here. First, I don't think the https_proxy needs to be there. Secondly, this gives a version of libbpf (0.8.0?) which seems incompatible with cndp:

$ make
"/work/cndp/tools/cne-build.sh" build
Build environment variables and paths:
  CNE_SDK_DIR     : /work/cndp
  CNE_TARGET_DIR  : usr/local
  CNE_BUILD_DIR   : /work/cndp/builddir
  CNE_DEST_DIR    : /work/cndp
  PKG_CONFIG_PATH : /usr/lib64/pkgconfig
  build_path      : /work/cndp/builddir
  target_path     : /work/cndp/usr/local

>>> Release build in '/work/cndp/builddir'
>>> Ninja build in '/work/cndp/builddir' buildtype='release'
usage: meson [-h] {setup,configure,dist,install,introspect,init,test,wrap,subprojects,help,rewrite,compile,devenv} ...
meson: error: unrecognized arguments: 
ninja: Entering directory `/work/cndp/builddir'
[10/54] Compiling C object lib/core/xskdev/libcne_xskdev.so.p/xskdev.c.o
FAILED: lib/core/xskdev/libcne_xskdev.so.p/xskdev.c.o 
cc -Ilib/core/xskdev/libcne_xskdev.so.p -Ilib/core/xskdev -I../lib/core/xskdev -Ilib/include -I../lib/include -Ilib/core/osal -I../lib/core/osal -Ilib/core/log -I../lib/core/log -I. -I.. -Ilib/core/cne -I../lib/core/cne -Ilib/common/uds -I../lib/common/uds -Ilib/core/mmap -I../lib/core/mmap -Ilib/core/mempool -I../lib/core/mempool -Ilib/core/pktmbuf -I../lib/core/pktmbuf -I/usr/include/libnl3 -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Werror -O3 -g -Wno-pedantic -Wcast-qual -Wdeprecated -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-address-of-packed-member -Wno-packed-not-aligned -Wno-missing-field-initializers -D_GNU_SOURCE -include cne_build_config.h -march=native -DCC_AVX2_SUPPORT -fPIC -MD -MQ lib/core/xskdev/libcne_xskdev.so.p/xskdev.c.o -MF lib/core/xskdev/libcne_xskdev.so.p/xskdev.c.o.d -o lib/core/xskdev/libcne_xskdev.so.p/xskdev.c.o -c ../lib/core/xskdev/xskdev.c
../lib/core/xskdev/xskdev.c: In function 'configure_busy_poll':
../lib/core/xskdev/xskdev.c:122:5: error: 'xsk_socket__fd' is deprecated: libbpf v0.7+: AF_XDP support deprecated and moved to libxdp [-Werror=deprecated-declarations]
  122 |     int fd            = xsk_socket__fd(rxq->xsk);
      |     ^~~
In file included from ../lib/core/xskdev/xskdev.h:11,
                 from ../lib/core/xskdev/xskdev.c:27:
/usr/include/bpf/xsk.h:257:5: note: declared here
  257 | int xsk_socket__fd(const struct xsk_socket *xsk);
      |     ^~~~~~~~~~~~~~

This is a useful hint buried there but it isn't obvious:

 error: 'xsk_socket__fd' is deprecated: libbpf v0.7+: AF_XDP

In order to get cndp to compile I had to use libbpf 0.6.1. I don't know if that is the preferred version but it worked.

cd libbpf
git checkout v0.6.1
make -C src && sudo make -C src install

Example Go application of packet distributor

In #223, we need some more examples to show the usage of the CNDP Go binding layer.

Here we track a detailed example of a packet distributor: it distributes packets received on an RX queue to different cores for processing, then sends them to a TX queue to transmit.
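The distributor's core decision can be sketched as follows (in C, the project's main language; a real example would wrap the CNDP Go bindings): assign each packet to a worker by flow hash, so packets of one flow stay ordered on one core.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: pick a worker core for a packet by flow hash so
 * the same flow always lands on the same worker. */
uint32_t
pick_worker(uint32_t flow_hash, uint32_t num_workers)
{
    return flow_hash % num_workers;
}

/* Count how many packets of a received burst each worker would get. */
void
distribute_burst(const uint32_t *hashes, size_t n, uint32_t num_workers,
                 size_t *per_worker)
{
    for (size_t i = 0; i < n; i++)
        per_worker[pick_worker(hashes[i], num_workers)]++;
}
```

Workers would then process their assigned packets and enqueue them to the TX queue for transmission.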

service chaining

Creating a chain of virtual network functions (VNFs) is an intriguing topic. Is this something that the Kubernetes stack would take care of?

Are there any methods that could be used to achieve this without compromising performance but in a modular way and not involving Kubernetes?

IMHO this framework does have a great potential to be used outside of Kubernetes.

Sharable Mempool between processes

Difficulty: Hard

Modify the CNDP mempool code to allow multiple processes to share a single mempool. CNDP is a single-process model application type, and we need to be able to share buffers between processes without needing to share the entire process memory space, which is what DPDK does today.

Sharing memory between processes is normally done with the mmap() API to create a sharable region of memory. When sharing memory between processes, you cannot assume the virtual address of the memory space is the same in each process. That means memory pointers are not allowed in the shared memory region; the pointers currently being used need to be converted into offsets into the memory region.

The implementation should rely on Linux IPC and existing shared memory infrastructure wherever possible.

It should be possible to send and receive packets using AF_XDP sockets which use a shared UMEM between two or more processes.

rust unit tests failing

Following the instructions for the Rust APIs in the readme allows a successful build of those APIs, but the unit tests seem to fail when running the ./run_cne_test.sh script:

running 8 tests
test tests::test_register ... ok
test tests::test_load_config ... FAILED
test tests::test_get_port ... FAILED
test tests::test_get_port_details ... FAILED
test tests::test_cne_instance ... FAILED
test tests::test_loopback ... FAILED
test tests::test_rx_burst ... FAILED
test tests::test_tx_burst ... FAILED

failures:

---- tests::test_load_config stdout ----
thread 'tests::test_load_config' panicked at 'assertion failed: cfg.is_ok()', apis/cne/src/lib.rs:80:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- tests::test_get_port stdout ----
thread 'tests::test_get_port' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:118:9

---- tests::test_get_port_details stdout ----
thread 'tests::test_get_port_details' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:152:9

---- tests::test_cne_instance stdout ----
thread 'tests::test_cne_instance' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:99:9

---- tests::test_loopback stdout ----
thread 'tests::test_loopback' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:282:9

---- tests::test_rx_burst stdout ----
thread 'tests::test_rx_burst' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:180:9

---- tests::test_tx_burst stdout ----
thread 'tests::test_tx_burst' panicked at 'assertion failed: ret.is_ok()', apis/cne/src/lib.rs:233:9


failures:
    tests::test_cne_instance
    tests::test_get_port
    tests::test_get_port_details
    tests::test_load_config
    tests::test_loopback
    tests::test_rx_burst
    tests::test_tx_burst

test result: FAILED. 1 passed; 7 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s

error: test failed, to rerun pass `-p cndp-cne --lib`

tools/jsonc_gen.sh Env vars clean up

The AF_XDP device plugin only sets the AFXDP_DEVICES environment variable.

The script needs to be updated to set AFXDP_COPY_MODE and LIST_OF_QIDS appropriately.

Initial work was pushed in this PR.

receiving zero length packets

With a simple Go program (main.go.txt) that loops receiving packets, I am getting zero-length packets, even at a low rate of 100 pps with all packets > 124 bytes.
This is on an interface with just one channel configured, running on Ubuntu (set up with the Ansible scripts).
If I raise the bufcnt to 8, I no longer see these.

Example Rust application using Rust binding layer

Difficulty: Medium; hard if you need to learn Rust.

Create a new Rust example using the Rust binding-layer APIs. This requires creating or porting a Rust application that uses the binding layer, and it needs to do more than simple forwarding. The Rust binding layer has two levels: a low-level API set generated by the Rust bindgen tool, which exposes all of the CNDP APIs to Rust, and a higher-level API set, which is the one applications should use. Adding this application could require updating the high-level APIs or creating new ones.

rust build fail

The Rust build inside the Fedora OCI image is currently failing.
Reproduce with: $ make oci-fed-image

#0 87.91   --- stderr
#0 87.91   thread 'main' panicked at 'Non floating-type complex? Type(_Complex _Float16, kind: Complex, cconv: 100, decl: Cursor( kind: NoDeclFound, loc: builtin definitions, usr: None), canon: Cursor( kind: NoDeclFound, loc: builtin definitions, usr: None)), Type(_Float16, kind: Float16, cconv: 100, decl: Cursor( kind: NoDeclFound, loc: builtin definitions, usr: None), canon: Cursor( kind: NoDeclFound, loc: builtin definitions, usr: None))', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/bindgen-0.59.2/src/ir/context.rs:1992:26
#0 87.91   note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
#0 87.91 warning: build failed, waiting for other jobs to finish...
#0 88.31 error: failed to compile `fwd v0.1.0 (/cndp/lang/rs/examples/fwd)`, intermediate artifacts can be found at `/cndp/lang/rs/target`
------
Dockerfile:76
--------------------
  74 |     # 'fwd' binary will be installed in /cndp/usr/local/bin
  75 |     WORKDIR /cndp/lang/rs
  76 | >>> RUN CNDP_INSTALL_PATH="/cndp" cargo install --root /cndp/usr/local/ --path examples/fwd
  77 |     
  78 |     WORKDIR /cndp
--------------------
ERROR: failed to solve: process "/bin/sh -c CNDP_INSTALL_PATH=\"/cndp\" cargo install --root /cndp/usr/local/ --path examples/fwd" did not complete successfully: exit code: 101
make: *** [Makefile:118: oci-fed-image] Error 1

CNDP performance drops when there is both ingress and egress traffic

I have posted a question on Stack Overflow for which I need the answer:

https://stackoverflow.com/questions/76991150/port-rx-drop-in-intel-corporation-ethernet-controller-xl710-for-40gbe-qsfp?noredirect=1#comment135735741_76991150

I created a CNDP forwarding application and tested it on a 40 Gbps NIC. I found that performance decreases when there is both ingress and egress traffic on the NIC.

With only ingress traffic I can reach 40 Gbps, but with both ingress and egress traffic I get at most 23 Gbps. I don't know why it drops so much.

    "umem0": {
        "bufcnt": 400,
        "bufsz": 2,
        "mtype": "2MB",
        "regions": [
            400
        ],
        "rxdesc": 4,
        "txdesc": 4,
        "cache": 256,
        "description": "UMEM Description 0"
    },

This is my umem configuration.

Any insights will be very helpful.

LPORT_UNPRIVILEGED for UDS should be configured under the hood rather than by the CNDP app

Currently, if we are using the UDS, we set LPORT_UNPRIVILEGED in the example app rather than in the underlying library.

            if (f->xdp_uds) {
                pcfg.xsk_uds = f->xdp_uds;
                lport->flags |= LPORT_UNPRIVILEGED;
            }

It would be better if the lport flag were configured in the core library rather than in the example (more under the hood, rather than making it the user's responsibility to set it). It might be trickier to do through the jcfg libraries, but it is less error-prone in the long term.

cnet: ip4_forward node fib lookup tightly coupled with arp entry

Currently, in the ip4_forward node, we call fib_info_lookup with the ARP FIB table. We need to use the route FIB table in conjunction with the ARP table.

    fi = cnet->arp_finfo;
    /* ... */
    if (unlikely(fib_info_lookup(fi, ip4, (void **)arp, 4) > 0)) {
        /* ... */
If fib_info_lookup relies only on the ARP FIB table and not the route FIB, the ip4_forward node won't know what to do with traffic that has no ARP entry, and will hand it to the arp_request node. The suggestion is to do an ARP lookup first; if that fails, find the next-hop info from the route FIB, and then possibly do an ARP lookup on that next hop.

Rust language bindings and directory structure need cleanup and restructuring

Hi folks,
I recently started looking through the Rust language bindings for CNDP and ran into a few issues. Following the README, things don't work out of the box. In addition, the directory structure is confusing: there is no clear separation between what is an example and what is the set of libraries I can use as part of my own Rust application. I also believe the directory containing the language bindings should be named libcndp-sys (cne is only a small subset of what CNDP has to offer).

The current dir structure is shown below:

.
├── README.md
├── bindings
│   ├── cne
│   │   ├── Cargo.toml
│   │   ├── README.md
│   │   ├── examples
│   │   │   └── loopback
│   │   │       ├── Cargo.toml
│   │   │       ├── run.sh
│   │   │       └── src
│   │   │           └── main.rs
│   │   ├── fwd.jsonc
│   │   ├── run_test.sh
│   │   └── src
│   │       ├── config.rs
│   │       ├── error.rs
│   │       ├── instance.rs
│   │       ├── lib.rs
│   │       ├── packet.rs
│   │       ├── port.rs
│   │       └── util.rs
│   ├── cne-sys
│   │   ├── Cargo.toml
│   │   ├── README.md
│   │   ├── build.rs
│   │   └── src
│   │       ├── bindings
│   │       │   ├── bindings.c
│   │       │   ├── bindings.h
│   │       │   ├── meson.build
│   │       │   └── wrapper.h
│   │       └── lib.rs
│   └── examples
│       └── echo_server
│           ├── Cargo.toml
│           ├── fwd.jsonc
│           ├── run.sh
│           └── src
│               └── main.rs
├── helloworld
│   └── main.rs
├── mmap
│   ├── Cargo.toml
│   └── src
│       └── lib.rs
├── mmap-cndp
│   ├── Cargo.toml
│   ├── build.rs
│   ├── src
│   │   └── lib.rs
│   └── wrapper.h
├── pktfwd
│   ├── Cargo.toml
│   ├── build.rs
│   ├── fwd.jsonc
│   ├── jcfg_parse
│   │   ├── fwd.h
│   │   ├── meson.build
│   │   ├── parse-jsonc.c
│   │   ├── rust_helper.c
│   │   ├── rust_helper.h
│   │   └── stats.c
│   ├── runcmd.sh
│   ├── src
│   │   ├── cndp.rs
│   │   ├── main.rs
│   │   ├── packet.rs
│   │   └── util.rs
│   └── wrapper.h
├── ring_test
│   ├── Cargo.toml
│   ├── build.rs
│   ├── src
│   │   ├── lib.rs
│   │   └── main.rs
│   └── wrapper.h
└── wireguard
    └── patch
        ├── 0001-Integrate-CNDP-Cloud-Native-Data-Plane-with-Wireguar.patch
        ├── 0002-Rename-variable-private-to-priv_-to-fix-build-error.patch
        └── 0003-Remove-extra-argument-from-pktmbuf_dump.patch

Finally, only a handful of APIs have been converted to Rust, and their naming differs from the C libraries, so tracking what lives where is going to be difficult.

Is there a plan to clean up the Rust implementation and port more libraries?

Support hardware offloading on Smart NIC

Hello, I'm interested in XDP and in finding new ways to enhance XDP with hardware offload support.
I ran examples/cndpfwd, but the current version seems to run only in XDP native mode.
Could you show me a way to run and offload XDP on Netronome's Agilio SmartNIC?

Thank you.

Long network device names

We have network interfaces whose names are reaching the maximum device name length.

(pktdev_allocate : 84) ERR: Ethernet device name is too long
(init_lport : 209) ERR: pktdev_allocate(enp24s0f0np0:10, enp24s0f0np0) failed
(pmd_af_xdp_probe : 248) ERR: Failed to init lport
(pktdev_port_setup : 50) ERR: driver probe(enp24s0f0np0:net_af_xdp) failed
2023/11/01 02:32:01 cndp init failed: pktdev_port_setup() failed for lport enp24s0f0np0:10

and

cndp init failed: pktdev_port_setup() failed for lport enp152s0f0np0:0

Is it possible to increase the PKTDEV_NAME_MAX_LEN value from 16?

go bindings test fail

lang/go/bindings/cne$ ./run_test
=== RUN TestStarting
--- PASS: TestStarting (0.33s)
=== RUN TestGetChannel
=== RUN TestGetChannel/TestLoopback
=== RUN TestGetChannel/TestLoopback#01
--- PASS: TestGetChannel (0.00s)
--- PASS: TestGetChannel/TestLoopback (0.00s)
--- PASS: TestGetChannel/TestLoopback#01 (0.00s)
=== RUN TestRxBurst
=== RUN TestRxBurst/TestRxBurst
cne.test: ../lib/core/cne/cne.c:28: cne_id: Assertion `per_thread__cne != NULL' failed.
SIGABRT: abort
PC=0x7fdce9f0000b m=3 sigcode=18446744073709551610
signal arrived during cgo execution

The same thing happens when I run OpenWithConfig() in my own code.

Avoid libbpf-dev dependency for CNDP applications

Currently, CNDP applications that use the xskdev.h header need libbpf-dev as a dependency because xskdev.h includes <bpf/xsk.h>. Is it possible to refactor the xskdev_info_t structure in xskdev.h so that the "xskdev_xxx" data types live in a private structure defined in an internal header file like xskdev_internal.h, with a member such as void *priv in the xskdev_info_t structure?

File: xskdev_internal.h

    #include <bpf/xsk.h>

    typedef struct xskdev_info_internal {
        xskdev_rxq_t rxq;                             /**< RX queue */
        xskdev_txq_t txq;                             /**< TX queue */
        xskdev_get_mbuf_addr_tx_t __get_mbuf_addr_tx; /**< Internal function to set the mbuf address on tx */
        xskdev_get_mbuf_rx_t __get_mbuf_rx;           /**< Internal function to get the mbuf address on rx */
        xskdev_pull_cq_addr_t __pull_cq_addr;         /**< Internal function to pull the completion queue */
    } xskdev_info_internal_t;

File: xskdev.h

    typedef struct xskdev_info {
        /* ... */
        void *priv; /**< points to a struct xskdev_info_internal */
        /* ... */
    } xskdev_info_t;

File: xskdev.c

    #include "xskdev_internal.h"
    #include "xskdev.h"

    xskdev_info_t xskdev;
    struct xskdev_info_internal *xskdev_internal = (struct xskdev_info_internal *)xskdev.priv;

CNDP Test-cne application

Difficulty: Easy

Enhance the test-cne application by improving an existing test or adding a new one for a library not previously covered, e.g. the event/osal/txbuff/... libraries. Some of these tests could also include a performance test to show the performance of the APIs.

Unify error message language for failed allocations

Difficulty: Easy to Medium

I noticed that the error messages printed when calloc returns NULL use different strings in different parts of the codebase.

To see an example of what I'm talking about, try this against the root of the repo:

    for s in "Unable" "Failed" "Allocation"; do grep -rn -A3 calloc | grep "$s"; done

There are probably other contexts where something similar happens. I think the most thorough fix would involve enumerating those contexts and perhaps adding some standards to the coding style guide.
