
ipfixcol2's Introduction

Tip: Are you sure your NetFlow/IPFIX probe is working correctly? Be sure with our other project: 🌊 FlowTest



IPFIXcol2

IPFIXcol2 is a flexible, high-performance NetFlow v5/v9 and IPFIX flow data collector designed to be extensible by plugins. The second generation of the collector includes many design and performance enhancements compared to the original IPFIXcol.

The collector allows you to choose a combination of input, intermediate and output plugins that best suits your needs. Do you need to receive data over UDP/TCP and store it for long-term preservation? Or do you prefer conversion to JSON and processing by other systems? No problem, pick any combination of plugins.

Features:

  • Input, intermediate and output plugins with various options
  • Parallelized design for high performance
  • Support for bidirectional flows (biflow)
  • Support for structured data types (i.e. lists)
  • Built-in support for many Enterprise-Specific Information Elements (Cisco, Netscaler, etc.)

Available plugins

Input plugins - receive NetFlow/IPFIX data. Each can be configured to listen on a specific network interface and a port. Multiple instances of these plugins can run concurrently.

  • UDP - receive NetFlow v5/v9 and IPFIX over UDP
  • TCP - receive IPFIX over TCP
  • FDS File - read flow data from FDS File (efficient long-term storage)
  • IPFIX File - read flow data from IPFIX File

Intermediate plugins - modify, enrich and filter flow records.

  • Anonymization - anonymize IP addresses (in flow records) with Crypto-PAn algorithm

Output plugins - store or forward your flows.

  • FDS File - store all flows in FDS file format (efficient long-term storage)
  • Forwarder - forward flows as IPFIX to one or more subcollectors
  • IPFIX File - store all flows in IPFIX File format
  • JSON - convert flow records to JSON and send/store them
  • JSON-Kafka - convert flow records to JSON and send them to Apache Kafka
  • Viewer - convert IPFIX into plain text and print it on standard output
  • Time Check - flow timestamp check
  • Dummy - simple output module example
  • lnfstore (*) - store all flows in nfdump compatible format for long-term preservation
  • UniRec (*) - send flow records in UniRec format via TRAP communication interface (into Nemea modules)

* Must be installed individually due to extra dependencies

How to install

If you are running a RHEL system or one of its derivatives, the easiest way to get IPFIXcol installed is to use our Copr package repository.

$ dnf install 'dnf-command(copr)'  # Extra step necessary on some systems
$ dnf copr enable @CESNET/IPFIXcol
$ dnf install ipfixcol2

For other systems, follow the build instructions below.

How to build

IPFIXcol is based on the libfds library, which provides functions for IPFIX parsing and manipulation. First of all, install the library. For more information, visit the project website and follow the installation instructions.

Typically, however, the following steps are all that is needed (extra dependencies may be required):

$ git clone https://github.com/CESNET/libfds.git
$ cd libfds
$ mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr
$ make
# make install

Second, install build dependencies of the collector

RHEL/CentOS:

yum install gcc gcc-c++ cmake make python3-docutils zlib-devel librdkafka-devel
# Optionally: doxygen pkgconfig
  • Note: newer systems (e.g. Fedora/CentOS Stream 8) use dnf instead of yum.
  • Note: the package python3-docutils may also be named python-docutils or python2-docutils
  • Note: the package pkgconfig may also be named pkg-config
  • Note: CentOS Stream 8 usually requires additional system repositories to be enabled:
dnf -y install epel-release
dnf config-manager --set-enabled appstream powertools
  • Note: Oracle Linux 8 usually requires additional system repositories to be enabled:
dnf -y install oracle-epel-release-el8
dnf config-manager --set-enabled ol8_appstream ol8_codeready_builder

Debian/Ubuntu:

apt-get install gcc g++ cmake make python3-docutils zlib1g-dev librdkafka-dev
# Optionally: doxygen pkg-config

Finally, build and install the collector:

$ git clone https://github.com/CESNET/ipfixcol2.git
$ cd ipfixcol2
$ mkdir build && cd build && cmake ..
$ make
# make install

How to configure and start IPFIXcol

Before you can start IPFIXcol, you have to prepare a configuration file. The file describes how IPFIXcol is configured at startup, which plugins are used and, for example, where flow data will be stored. The structure of the configuration is described here. Several configuration examples that demonstrate features of the collector are given in the section "Example configuration files".
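For illustration, a minimal startup configuration might look like the following sketch (the UDP port and the plain-text viewer output are examples; exact parameter requirements may differ, so see the linked documentation for all options):

<ipfixcol2>
  <inputPlugins>
    <input>
      <name>UDP collector</name>
      <plugin>udp</plugin>
      <params>
        <localPort>4739</localPort>
      </params>
    </input>
  </inputPlugins>
  <outputPlugins>
    <output>
      <name>Viewer output</name>
      <plugin>viewer</plugin>
      <params/>
    </output>
  </outputPlugins>
</ipfixcol2>

The collector is then started with ipfixcol2 -c startup.xml.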

FAQ

Having trouble? Unable to build or run the collector? Feel free to submit a new issue.

We are open to new ideas! For example, are you missing a specific plugin that could be useful to other users as well? Please share your experiences and thoughts.


Q: My exporter sends flow data over UDP; however, IPFIXcol doesn't process/store any data immediately after start.
A: This is normal behaviour caused by the UDP transport protocol. It may take up to a few minutes until the first record is processed, depending on the template refresh interval of the exporter. For more information, see the documentation of the UDP plugin.
Q: The collector is not able to find a plugin. What should I do?
A: First of all, make sure that the plugin is installed. Some plugins (e.g. UniRec) are optional and must be installed separately. List all available plugins using ipfixcol2 -L and check whether the plugin is on the list. If not, see the plugin page for help. If the problem persists, check whether the plugin is installed in the correct directory. Since plugins may be placed in different locations on different platforms, show the help using ipfixcol2 -h and see the default value of the -p PATH parameter. In some situations, a plugin cannot be loaded (even when properly installed) due to additional dependencies (e.g. a missing library). If this is the issue, run ipfixcol2 -L -v; a message such as WARNING: Configurator (plugin manager): Failed to open file... (some reason) on the first line might help you.
Q: How can I add more IPFIX fields into records?
A: The collector receives flow records captured and prepared by an exporter. IPFIX is a unidirectional protocol, which means that the collector cannot instruct the exporter what to measure or how to behave. If you want to enhance your records, please check the configuration of your exporter.
Q: After a manual build and installation, the collector is unable to start and a message similar to error while loading shared libraries: libfds.so.0: cannot open shared object file: No such file or directory is given.
A: Make sure that libfds is installed properly and that your system is able to locate it. Some systems (e.g. RHEL/CentOS/Fedora), for historical reasons, don't search for shared libraries in the default installation directory where libfds is installed. You can include this directory permanently; for example, if the library is located in /usr/local/lib64, run as administrator echo "/usr/local/lib64" > /etc/ld.so.conf.d/local64.conf && ldconfig, or temporarily extend an environment variable: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64/

ipfixcol2's People

Contributors

bennysim, cejkato2, havraji6, hynekkar, jaroslavh, lukas955, norrisjeremy, sedmicha, thesablecz, thorgrin, xkalaj01, xsedla1o


ipfixcol2's Issues

Json output plugin and nat events

Hi, I am trying to test ipfixcol2 in my environment: an Ubuntu host with NAT (iptables) and ipt_NETFLOW generating NetFlow v9 NAT events.
Example output with the JSON output plugin:
{
"@type": "ipfix.entry",
"en4294967294:id323": ...,
"iana:sourceIPv4Address": "100.79.62.203",
"iana:destinationIPv4Address": "87.255.2.39",
"en4294967294:id225": ...,
"en4294967294:id226": ...,
"iana:sourceTransportPort": 51413,
"iana:destinationTransportPort": 4865,
"en4294967294:id227": 51413,
"en4294967294:id228": 4865,
"iana:protocolIdentifier": "UDP",
"en4294967294:id230": 1
}
How can I configure ipfixcol2 to show the fields decoded as en4294967294:id225, id226, id323 in human-readable form? For example, these fields are present in system/elements/iana.xml in libfds in the correct format.

Kafka compilation failure on ubuntu 16 and throughput issue towards kafka

We can't compile ipfixcol2 with Kafka on Ubuntu 16, as it reports a librdkafka version incompatibility.

We could compile ipfixcol2 with Kafka on Ubuntu 18; however, when running, it produces only 17-20K Kafka records per second even though the input IPFIX record rate was much higher.

With the previous IPFIXcol on Ubuntu 16, we achieved 0.1 million Kafka records per second, while the same setup on Ubuntu 18 produced 17-20K Kafka records per second.

ENHANCEMENT: PEN Fields

Can we have a configuration file for PEN-related fields, like the ipfix-elements.xml of the earlier version?

Kafka configuration issue

In the Kafka configuration, defining a partition results in the error unknown partition. When we set the partition to unassigned, no error was displayed and records were produced to Kafka.

Add support for Musl libc

Hello:
Most new embedded OSes use musl as their C library. Musl does not support the "pthread_rwlockattr_setkind_np()" function or the "PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP" definition.

This is the relevant log in our build system that is failing:

[ 75%] Building CXX object src/plugins/output/json/CMakeFiles/json-output.dir/src/File.cpp.o
cd /mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json && /usr/bin/ccache /opt/toolchains/bin/x86_64-openwrt-linux-musl-g++  -Djson_output_EXPORTS -I/mnt/ipfixcol2/ipfixcol2-2.1.0/include -I/mnt/ipfixcol2/ipfixcol2-2.1.0/src -I/mnt/ipfixcol2/.odedeps/usr/include  -Os -pipe -fno-caller-saves -g3 -rdynamic -fhonour-copts -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro -Wno-error=unused-const-variable -I/opt/toolchains/include -I/mnt/ipfixcol2/.odedeps/usr/include -I/opt/toolchains/x86_64-openwrt-linux-musl/include -fvisibility=hidden -std=gnu++11 -O2 -DNDEBUG -fPIC   -o CMakeFiles/json-output.dir/src/File.cpp.o -c /mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp: In constructor 'File::File(const cfg_file&, ipx_ctx_t*)':
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:113:46: error: 'PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP' was not declared in this scope
     if (pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) != 0) {
                                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:113:9: error: 'pthread_rwlockattr_setkind_np' was not declared in this scope
     if (pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) != 0) {
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:113:9: note: suggested alternative: 'pthread_rwlockattr_setpshared'
     if (pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) != 0) {
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         pthread_rwlockattr_setpshared
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp: In static member function 'static int File::dir_create(ipx_ctx_t*, const string&)':
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:344:45: error: invalid conversion from 'int' to 'const char*' [-fpermissive]
             const char *err_str = strerror_r(errno, buffer, 128);
                                   ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:359:45: error: invalid conversion from 'int' to 'const char*' [-fpermissive]
             const char *err_str = strerror_r(errno, buffer, 128);
                                   ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp: In static member function 'static void* File::file_create(ipx_ctx_t*, const string&, const string&, const time_t&, calg)':
/mnt/ipfixcol2/ipfixcol2-2.1.0/src/plugins/output/json/src/File.cpp:421:41: error: invalid conversion from 'int' to 'const char*' [-fpermissive]
         const char *err_str = strerror_r(errno, buffer, 128);
                               ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
make[2]: *** [src/plugins/output/json/CMakeFiles/json-output.dir/src/File.cpp.o] Error 1
src/plugins/output/json/CMakeFiles/json-output.dir/build.make:158: recipe for target 'src/plugins/output/json/CMakeFiles/json-output.dir/src/File.cpp.o' failed
make[2]: Leaving directory '/mnt/ipfixcol2/ipfixcol2-2.1.0'
make[1]: *** [src/plugins/output/json/CMakeFiles/json-output.dir/all] Error 2
CMakeFiles/Makefile2:650: recipe for target 'src/plugins/output/json/CMakeFiles/json-output.dir/all' failed
make[1]: Leaving directory '/mnt/ipfixcol2/ipfixcol2-2.1.0'
make: *** [all] Error 2
Makefile:129: recipe for target 'all' failed

There is a similar resolved issue for CESNET's libnetconf2 repo: CESNET/libnetconf2#160

It looks like the solution for libnetconf2 was simply to check for that support and then #ifdef the entire code block that uses it: CESNET/libnetconf2@153fe40
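For illustration, a guard along the lines of the libnetconf2 fix might look like this minimal sketch (the HAVE_PTHREAD_RWLOCKATTR_SETKIND_NP macro is assumed to come from a CMake feature check):

#include <pthread.h>

static void rwlock_init(pthread_rwlock_t *lock)
{
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
#ifdef HAVE_PTHREAD_RWLOCKATTR_SETKIND_NP
    /* glibc extension, absent on musl: prefer writers to avoid writer starvation */
    pthread_rwlockattr_setkind_np(&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
#endif
    pthread_rwlock_init(lock, &attr);
    pthread_rwlockattr_destroy(&attr);
}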

Not sure if that would be an appropriate solution for ipfixcol2 or not?

Thanks!

[lnfstore] Enable to configure file prefix/suffix and UTC via plugin config

Hi,

just found ipfixcol2 and tried to replace nfcapd with it in an nfsen system.
Works well so far.
It would however be very helpful to be able to configure the file prefixes and suffix such that, e.g., an nfsen system would pick them up.

To this end, it would be awesome if one could set/override the defaults in
LNF_FILE_PREFIX
and
BF_FILE_PREFIX
and
SUFFIX_MASK
via the plugin config.

Furthermore, the plugin right now seems to convert timestamps to UTC (which is what I would prefer); however, it would be nice if one could toggle this via the config.

Best
Jan

Pattern matching utility to determine output topic in JsonKafka plugin

In the json-kafka output plugin there is no option to define multiple Kafka topics, either to load balance messages across topics or to separate specific IPFIX message categories into specific Kafka topics. For example, I would like to produce IPFIX start-session messages to the topic ipfix-start, session-update messages to the topic ipfix-update, and so on. This feature is very useful when your IPFIX consumers need only a subset of message types and can subscribe to just the topics they need.
To do this, we could define pairs of a pattern regex and a Kafka topic, meaning that if the regex matches, the message is produced to that topic.

Consider the following json-kafka config file:

<outputPlugins>
    <output>
      <name>JSON output</name>
      <plugin>json-kafka</plugin>
      <params>
        <outputs>
          <kafka>
            <name>Send to Kafka</name>
            <brokers>127.0.0.1:9092</brokers>
            <patternTopic>
                    <regex>message-type:1</regex>
                    <topic>ipfix-1</topic>
                    <partition>unassigned</partition>
            </patternTopic>
            <patternTopic>
                    <regex>TCP.{5}8080</regex>
                    <topic>ipfix-2</topic>
                    <partition>1</partition>
            </patternTopic>
          </kafka>
        </outputs>
      </params>
    </output>
  </outputPlugins>

In the patternTopic scope, the regex, topic, and partition fields are defined. The first pattern says that if the received IPFIX message contains message-type:1 then it is produced to the topic ipfix-1, and the second pattern says that if the received IPFIX message matches the regex TCP.{5}8080 then it is produced to the topic ipfix-2.

Note: The following patternTopic is equivalent to the behaviour of the current config file description.

<patternTopic>
        <regex>.*</regex>
        <topic>ipfix</topic>
        <partition>unassigned</partition>
</patternTopic>

Note: Every user could specify their own pattern regexes according to their own IPFIX templates.

IPFIXcol2 unable to translate some IEs

I'm using a Mikrotik as my exporter.
It's sending me a template, and data flows.
Most of the data fields are translated by ipfixcol2/libfds without any issues, but there are a few fields which are not. More specifically, the ones I'm seeing that are not translated are:

  • postNATSourceIPv4Address (225)
  • postNATDestinationIPv4Address (226)
  • postNAPTSourceTransportPort (227)
  • postNAPTDestinationTransportPort (228)

I'm running ipfixcol2 with the JSON output plugin, and it is giving me these 4 fields as:

  • en4294967294:id225: 3339760682
  • en4294967294:id226: 167969767
  • en4294967294:id227: 5683
  • en4294967294:id228: 51769

All 4 IE definitions are in my /usr/etc/libfds/system/elements/iana.xml, so I would think that there should be no problem with them.

I'm using the latest commit of both libfds (dcb27c0ba139d4337bdfc3cc126a7e0454cd1acd) and ipfixcol2 (5515554).

Am I doing something wrong?

Set core affinity to processor threads

In order to reduce OS context-switch overhead, it would be good to set core affinity for the processor thread (the user can isolate a specific core from the OS scheduler and dedicate it to the collector), fixing the core that executes the processor function.
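As a minimal sketch, assuming the glibc-specific pthread_setaffinity_np() API, pinning a thread to one core could look like this (the helper name pin_to_core is illustrative):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single CPU core. */
static int pin_to_core(int core_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}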

Loadbalance the output into multiple files

Hi,

We would like to listen on a single incoming UDP port and load balance the records to multiple JSON outputs. We used odidFilter in our output, but we can't load balance the incoming records across multiple files within a single ODID instance. Is this somehow possible?
Regards

libfds build example uses cmake; CONCAT seems to require cmake3

libfds is failing on my cmake version because of the lack of CONCAT:

# mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DLIBXML2_INCLUDE_DIR=/usr/lib64 -DLIBXML2_LIBRARIES=/usr/lib64/libxml2.so.2.9.1

-- Found LibXml2: /usr/lib64/libxml2.so.2.9.1
-- Maintainer contact for packages is not specified - using a name and email from the git configuration
CMake Error at pkg/CMakeLists.txt:55 (string):
  string does not recognize sub-command CONCAT
# cmake --version
cmake version 2.8.12.2

Happier when moving to cmake3:

#cmake3 --version
cmake3 version 3.13.5
# mkdir build && cd build && cmake3 .. -DCMAKE_INSTALL_PREFIX=/usr -DLIBXML2_INCLUDE_DIR=/usr/lib64 -DLIBXML2_LIBRARIES=/usr/lib64/libxml2.so.2.9.1
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test COMPILER_SUPPORT_GNU11
-- Performing Test COMPILER_SUPPORT_GNU11 - Success
-- Performing Test COMPILER_SUPPORT_GNUXX11
-- Performing Test COMPILER_SUPPORT_GNUXX11 - Success
-- Found LibXml2: /usr/lib64/libxml2.so.2.9.1
--

libfds version...:   0.1.0
Install prefix...:   /usr
Build type.......:   RELEASE
C Compiler.......:   GNU 4.8.5
C Flags..........:    -fvisibility=hidden -std=gnu11 -O2 -DNDEBUG
C++ Compiler.....:   GNU 4.8.5
C++ Flags........:    -fvisibility=hidden -std=gnu++11 -O2 -DNDEBUG
Doxygen..........:

-- Configuring done
-- Generating done
-- Build files have been written to: /root/libfds/build

Number of records in NetFlow v9 Message header doesn't match number of records found in the Message

The following error occurs when running ipfixcol2 -c ipfix.cfg. How can I fix it?

WARNING: UDP input (parser): [12.1.24.1:58633, ODID: 256] Number of records in NetFlow v9 Message header doesn't match number of records found in the Message (expected: 1, found: 0)
WARNING: UDP input (parser): [12.1.24.1:58633, ODID: 256] Unable to convert NetFlow v9 Data Set (FlowSet ID: 256) to IPFIX due to missing NetFlow template. The Data FlowSet and its records will be dropped!

Display Output has <unknown>:<unknown> in it

I am new to ipfixcol2 and the genre as a whole. I am an intern trying to complete a project where I use my Ubuntu CLI server to collect IPFIX TCP data; running the collector returns the values with ...

IPFIX Message header:
Version: 10
Length: 88
Export time: 1691092694
Sequence no.: 0
ODID: 0

Set Header:
Set ID: 2 (Template Set)
Length: 72

  • Template Record (#1)
    Template ID: x
    Field Count: 8
    EN: x ID: x Size: 8 | ":"
    EN: x ID: x Size: 1 | ":"
    EN: x ID: x Size: var. | ":"
    EN: x ID: x Size: var. | ":"
    EN: x ID: x Size: var. | ":"
    EN: x ID: x Size: var. | ":"
    EN: x ID: x Size: 8 | ":"
    EN: x ID: x Size: 1 | ":"

I've replaced the numbers with X's for privacy, although I don't know if that matters. Is this an issue in how I have it set up, or is it due to the IPFIX packets being encrypted before being sent?

Some libstdc++ functions might fail when used in a plugin

The issue is caused by the RTLD_DEEPBIND flag used when loading a plugin. This flag breaks some ODR assumptions required by C++; therefore, some libstdc++ functions might fail.

For example, the following code used in a plugin can cause a segmentation fault:
std::cout << "random text";

The issue can be resolved by removing the flag; however, some 3rd-party libraries used in plugins (for example, librdkafka in the JSON output) might then not work due to symbol collisions.
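For context, a minimal sketch of the loading step in question (the wrapper and flag combination are illustrative, not the collector's exact code):

#include <dlfcn.h>

/* RTLD_DEEPBIND makes the plugin resolve symbols against its own
 * dependencies first; this breaks some C++ ODR assumptions, while
 * removing the flag may reintroduce symbol collisions with third-party
 * libraries such as librdkafka. */
void *load_plugin(const char *path)
{
    return dlopen(path, RTLD_NOW | RTLD_DEEPBIND);
}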

Could NOT find LibXml2 (missing: LIBXML2_LIBRARY LIBXML2_INCLUDE_DIR)

6-srv-v:~# cd ~
6-srv-v:~# git clone https://github.com/CESNET/libfds.git
Cloning into 'libfds'...
remote: Enumerating objects: 3532, done.
remote: Counting objects: 100% (98/98), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 3532 (delta 62), reused 56 (delta 54), pack-reused 3434
Receiving objects: 100% (3532/3532), 2.01 MiB | 1.69 MiB/s, done.
Resolving deltas: 100% (2295/2295), done.
6-srv-v:~# cd libfds
6-srv-v:~/libfds# mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test COMPILER_SUPPORT_GNU11
-- Performing Test COMPILER_SUPPORT_GNU11 - Success
-- Performing Test COMPILER_SUPPORT_GNUXX11
-- Performing Test COMPILER_SUPPORT_GNUXX11 - Success
-- Setting build type to 'Release' as none was specified.
CMake Error at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:165 (message):
  Could NOT find LibXml2 (missing: LIBXML2_LIBRARY LIBXML2_INCLUDE_DIR)
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:458 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.18/Modules/FindLibXml2.cmake:104 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  src/CMakeLists.txt:1 (find_package)


-- Configuring incomplete, errors occurred!
See also "/root/libfds/build/CMakeFiles/CMakeOutput.log".
6-srv-v:~/libfds/build# make
make: *** No targets specified and no makefile found.  Stop.
6-srv-v:~/libfds/build# make install
make: *** No rule to make target 'install'.  Stop.

6-srv-v:~/libfds/build# apt-get install libxml2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libxml2 is already the newest version (2.9.10+dfsg-6.7+deb11u2).
The following packages were automatically installed and are no longer required:
  bsdmainutils cpp-8 fdisk libapt-inst2.0 libapt-pkg5.0 libasan5 libbind9-161 libboost-iostreams1.67.0 libboost-system1.67.0 libclass-accessor-perl libcroco3 libcwidget3v5
  libdns-export1104 libdns1104 libdns1110 libevent-core-2.1-6 libevent-pthreads-2.1-6 libfdisk1 libgail-common libgail18 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libhogweed4 libicu63
  libio-string-perl libip4tc0 libip6tc0 libiptc0 libirs161 libisc-export1100 libisc1100 libisc1105 libisccc161 libisccfg163 libisl19 libjson-c3 libllvm7 liblwres161 libmpdec2 libnettle6
  libnftables0 libparse-debianchangelog-perl libperl5.28 libprocps7 libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3.7-minimal libpython3.7-stdlib libreadline5
  libreadline7 libsub-name-perl linux-image-4.19.0-13-amd64 perl-modules-5.28 postgresql-client-9.4 postgresql-client-common postgresql-common python-pkg-resources python-setuptools
  python2 python2-minimal python2.7 python2.7-minimal python3-asn1crypto python3-future python3-mock python3-pbr python3.7-minimal x11proto-input-dev x11proto-kb-dev
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
6-srv-v:~/libfds/build# y
bash: y: command not found
6-srv-v:~/libfds/build# apt-get reinstall libxml2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  bsdmainutils cpp-8 fdisk libapt-inst2.0 libapt-pkg5.0 libasan5 libbind9-161 libboost-iostreams1.67.0 libboost-system1.67.0 libclass-accessor-perl libcroco3 libcwidget3v5
  libdns-export1104 libdns1104 libdns1110 libevent-core-2.1-6 libevent-pthreads-2.1-6 libfdisk1 libgail-common libgail18 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libhogweed4 libicu63
  libio-string-perl libip4tc0 libip6tc0 libiptc0 libirs161 libisc-export1100 libisc1100 libisc1105 libisccc161 libisccfg163 libisl19 libjson-c3 libllvm7 liblwres161 libmpdec2 libnettle6
  libnftables0 libparse-debianchangelog-perl libperl5.28 libprocps7 libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3.7-minimal libpython3.7-stdlib libreadline5
  libreadline7 libsub-name-perl linux-image-4.19.0-13-amd64 perl-modules-5.28 postgresql-client-9.4 postgresql-client-common postgresql-common python-pkg-resources python-setuptools
  python2 python2-minimal python2.7 python2.7-minimal python3-asn1crypto python3-future python3-mock python3-pbr python3.7-minimal x11proto-input-dev x11proto-kb-dev
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Need to get 692 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://httpredir.debian.org/debian stable/main amd64 libxml2 amd64 2.9.10+dfsg-6.7+deb11u2 [692 kB]
Fetched 692 kB in 2s (354 kB/s)
(Reading database ... 67916 files and directories currently installed.)
Preparing to unpack .../libxml2_2.9.10+dfsg-6.7+deb11u2_amd64.deb ...
Unpacking libxml2:amd64 (2.9.10+dfsg-6.7+deb11u2) over (2.9.10+dfsg-6.7+deb11u2) ...
Setting up libxml2:amd64 (2.9.10+dfsg-6.7+deb11u2) ...
Processing triggers for libc-bin (2.31-13+deb11u3) ...
6-srv-v:~/libfds/build# mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr
CMake Error at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:165 (message):
  Could NOT find LibXml2 (missing: LIBXML2_LIBRARY LIBXML2_INCLUDE_DIR)
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:458 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.18/Modules/FindLibXml2.cmake:104 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  src/CMakeLists.txt:1 (find_package)


-- Configuring incomplete, errors occurred!
See also "/root/libfds/build/CMakeFiles/CMakeOutput.log".

Could not find LibFds

Hello everyone! I am installing IPFIXcol2 and I have just installed libfds successfully. But when I run cmake in the ipfixcol2 repository, it shows an error. Please help me resolve this problem! Thanks!

Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
Could NOT find LibFds (missing: FDS_LIBRARY FDS_INCLUDE_DIR) (Required is
at least version "0.2.0")
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
CMakeModules/FindLibFds.cmake:40 (find_package_handle_standard_args)
src/CMakeLists.txt:12 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/dino/ipfixcol2/build/CMakeFiles/CMakeOutput.log".

using hiredis in output plugin

Hi,
I want to change Printer.cpp in the JSON output plugin to use Redis. In order to use hiredis, the -lhiredis flag should be added to the compiler options. Unfortunately, I'm a newbie in C/C++, so I don't know how to include this flag so that the compiler recognizes the hiredis library. (One possible way is sketched after the code below.)

My changes to Printer.cpp so far:
#include <cstdio>
#include <hiredis/hiredis.h>

// Connect to a local Redis instance and issue a PING as a smoke test
redisContext *c = redisConnect("127.0.0.1", 6379);
redisReply *reply = (redisReply *) redisCommand(c, "PING");
printf("REDIS PING: %s\n", reply->str);
freeReplyObject(reply);
redisFree(c);
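A note on the linking question: one possible way to add -lhiredis is to extend the plugin's CMake target, along these lines (the json-output target name is taken from the build logs elsewhere on this page and may differ):

# in src/plugins/output/json/CMakeLists.txt
target_link_libraries(json-output hiredis)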

Thanks.

CSV output format

Hello!
If you save to a file in JSON format, a lot of traffic is generated and it takes up a lot of space (even with compression). It would be very nice to be able to save to a file in CSV format.
Thank you!

IPFIX elements

Hi Lukas,

I am opening this issue because when I receive IPFIX packets I cannot see all the elements defined by IANA. I have checked that the ipfix-elements.xml entries in ipfixcol are all set, and in this new version they seem to be set by default too. I am trying to build a flow evaluator and make decisions based on the analyzed data, but I cannot achieve this if I cannot see the fields I need. Could it be an implementation error? (I have got the same result for both collectors, version 1 and 2.) Why, for example, doesn't flowStartMicroseconds appear in the IPFIX packet? It would be useful for calculating bandwidth based on the octetDeltaCount field.

Could you help me, please?

Thanks a lot.

BR

adding geoip module

One of the best intermediate modules of IPFIXcol was the GeoIP module; would you please add it again?
Also, is it possible to run ipfixcol2 in a cluster for collecting a huge number of flows (300K flow sets per second)?

Support hosts by name in json output params

Problem

ipfixcol2 doesn't support hostnames for the JSON output plugin, only IP addresses. I manually fixed this by short-circuiting the check_ip function to true in the /src/plugins/output/json/Config.cpp file. The Sender.cpp code uses the getaddrinfo() function, which supports both hostnames and IP addresses, so no code change other than not calling check_ip() is needed to support this. This may be by design, but is there a particular reason why you would not want to allow hostnames?
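For reference, a minimal sketch showing that getaddrinfo() accepts a hostname as well as an IP address (the hostname and port below are examples):

#include <netdb.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    /* Works for "192.0.2.1" as well as "collector.example.com" */
    int ret = getaddrinfo("collector.example.com", "4739", &hints, &res);
    if (ret != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(ret));
        return 1;
    }
    /* ... connect() using res->ai_addr ... */
    freeaddrinfo(res);
    return 0;
}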

Justification

I built a Docker image of ipfixcol2, and Docker inter-container networking relies on hostnames when the IP address is not known at configuration time.

Floating point exception for IPFIX output

I'm getting a floating point exception for this version and configuration:

Version:      2.2.1
GIT hash:     db9a5d4
Build type:   Release
Architecture: x86-64 (little endian)
Compiler:     GNU 7.4.0
Copyright (C) 2018 CESNET z.s.p.o.

My full config file

<ipfixcol2>
  <!-- Input plugins -->
  <inputPlugins>
    <input>
      <name>TCP collector</name>
      <plugin>tcp</plugin>
      <params>
        <!-- List on port 4739 -->
        <localPort>4739</localPort>
        <!-- Bind to all local adresses -->
        <localIPAddress></localIPAddress>
      </params>
    </input>
  </inputPlugins>

  <!-- Output plugins -->
  <outputPlugins>
    <output>
      <name>JSON output</name>
      <plugin>json</plugin>
      <params>
        <tcpFlags>formatted</tcpFlags>
        <timestamp>unix</timestamp>
        <protocol>raw</protocol>
        <ignoreUnknown>false</ignoreUnknown>
        <ignoreOptions>false</ignoreOptions>
        <nonPrintableChar>true</nonPrintableChar>
        <detailedInfo>true</detailedInfo>
        <templateInfo>true</templateInfo>

        <!-- Output methods -->
        <outputs>
          <!-- Store as files into /tmp/ipfixcol/... -->
          <file>
            <name>Store to files</name>
<!--            <path>/tmp/ipfixcol2/</path> -->
                        <path>/home/ubuntu/ipfixcol2</path>
            <prefix>json.</prefix>
            <timeWindow>3000</timeWindow>
            <timeAlignment>no</timeAlignment>
          </file>
        </outputs>
      </params>
    </output>
    <output>
      <name>IPFIX output</name>
      <plugin>ipfix</plugin>
      <params>
        <filename>/home/ubuntu/ipfixcol2/data.ipfix</filename>
        <useLocalTime>false</useLocalTime>
        <windowSize>0</windowSize>
        <alignWindows>true</alignWindows>
        <preserveOriginal>false</preserveOriginal>
        <rotateOnExportTime>false</rotateOnExportTime>
      </params>
    </output>
  </outputPlugins>
</ipfixcol2>

This is what I get:

INFO: TCP collector: New exporter connected from '172.16.10.163'.
INFO: TCP collector (parser): [172.16.10.163:35188, ODID: 0] New connection detected!
DEBUG: TCP collector (parser): [172.16.10.163:35188, ODID: 0] Processing an IPFIX Message (Seq. number 0)
DEBUG: TCP collector (parser): [172.16.10.163:35188, ODID: 0] Processing a definition of Template ID 257 ...
INFO: TCP collector (parser): [172.16.10.163:35188, ODID: 0] A definition of the Template ID 257 has been accepted.
INFO: IPFIX output: [ODID: 0] '172.16.10.163:35188' has been granted access to write to the file with the given ODID.
Floating point exception (core dumped)

The problem seems to be in the windowSize argument; when I increase it to 1000, it works just fine. However, the documentation says that 0 is the default value, so it should work, right?

TLS Support

Good morning,

I was wondering whether there is still support for TCP over TLS for the input plugins, as in IPFIXcol v1.

Thank you

Handling fragmented packets

We are doing DPI and using subTemplateMultiLists, so we increased the NetFlow packet size to 2840, which is greater than the interface MTU of 1500. The NetFlow packets are therefore fragmented by the time they reach ipfixcol2. In our scenario, these decoded IPFIX messages are further forwarded to Kafka.

With the change of the NetFlow packet size to 2840, we observed a significant drop in message count in Kafka.

Does this mean ipfixcol2 does not handle fragmented packets?

Any help is highly appreciated.

unable to start ipfixcol2

Good day everyone. I have already followed all the installation instructions, but when I try to use the ipfixcol2 command, it is not recognized by the server. I guess I'm missing something, but I don't know what it might be. I appreciate your kind help with this.

error during compiling on arch

hello,
[ 82%] Building CXX object src/tools/fdsdump/src/common/CMakeFiles/common_obj.dir/filelist.cpp.o
/tmp/ipfixcol2/src/tools/fdsdump/src/common/filelist.cpp: In member function ‘void fdsdump::FileList::add_files(const std::string&)’:
/tmp/ipfixcol2/src/tools/fdsdump/src/common/filelist.cpp:68:20: error: ‘runtime_error’ is not a member of ‘std’
68 | throw std::runtime_error("glob() failed: GLOB_ABORTED");
| ^~~~~~~~~~~~~
/tmp/ipfixcol2/src/tools/fdsdump/src/common/filelist.cpp:46:1: note: ‘std::runtime_error’ is defined in header ‘<stdexcept>’; did you forget to ‘#include <stdexcept>’?
45 | #include "filelist.hpp"
+++ |+#include <stdexcept>
46 |
/tmp/ipfixcol2/src/tools/fdsdump/src/common/filelist.cpp:70:20: error: ‘runtime_error’ is not a member of ‘std’
70 | throw std::runtime_error("glob() failed: " + std::to_string(ret));
| ^~~~~~~~~~~~~
/tmp/ipfixcol2/src/tools/fdsdump/src/common/filelist.cpp:70:20: note: ‘std::runtime_error’ is defined in header ‘<stdexcept>’; did you forget to ‘#include <stdexcept>’?
make[2]: *** [src/tools/fdsdump/src/common/CMakeFiles/common_obj.dir/build.make:118: src/tools/fdsdump/src/common/CMakeFiles/common_obj.dir/filelist.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1104: src/tools/fdsdump/src/common/CMakeFiles/common_obj.dir/all] Error 2
make: *** [Makefile:136: all] Error 2

lnfstore plugin not found

I've installed the lnfstore plugin and I can see it in /usr/local/lib/ipfixcol2/. I've also added LD_LIBRARY_PATH just in case, and I've recompiled ipfixcol2 just in case. However, when I run it, it gives me the following error:

ERROR: Configurator: Collector failed to start: Unable to find the 'lnfstore' plugin.

And running
ipfixcol2 -L -v
indeed does not show this plugin

INPUT PLUGINS

  • Name : dummy
    Description: Example plugin that generates messages.
    Path: /usr/local/lib/ipfixcol2/libdummy-input.so
    Version: 2.0.0

  • Name : fds
    Description: Input plugin for FDS File format.
    Path: /usr/local/lib/ipfixcol2/libfds-input.so
    Version: 2.0.0

  • Name : ipfix
    Description: Input plugin for IPFIX File format
    Path: /usr/local/lib/ipfixcol2/libipfix-input.so
    Version: 2.0.0

  • Name : tcp
    Description: Input plugins for IPFIX/NetFlow v5/v9 over Transmission Control Protocol.
    Path: /usr/local/lib/ipfixcol2/libtcp-input.so
    Version: 2.0.0

  • Name : udp
    Description: Input plugins for IPFIX/NetFlow v5/v9 over User Datagram Protocol.
    Path: /usr/local/lib/ipfixcol2/libudp-input.so
    Version: 2.1.0

INTERMEDIATE PLUGINS

  • Name : anonymization
    Description: IPv4/IPv6 address anonymization plugin
    Path: /usr/local/lib/ipfixcol2/libanonymization-intermediate.so
    Version: 2.0.0

OUTPUT PLUGINS

  • Name : dummy
    Description: Example output plugin.
    Path: /usr/local/lib/ipfixcol2/libdummy-output.so
    Version: 2.2.0

  • Name : fds
    Description: Flow Data Storage output plugin
    Path: /usr/local/lib/ipfixcol2/libfds-output.so
    Version: 2.0.0

  • Name : forwarder
    Description: Forward flow records as IPFIX to one or more subcollectors.
    Path: /usr/local/lib/ipfixcol2/libforwarder-output.so
    Version: 1.0.0

  • Name : ipfix
    Description: IPFIX output plugin
    Path: /usr/local/lib/ipfixcol2/libipfix-output.so
    Version: 2.0.0

  • Name : json
    Description: Conversion of IPFIX data into JSON format
    Path: /usr/local/lib/ipfixcol2/libjson-output.so
    Version: 2.2.0

  • Name : json-kafka
    Description: Conversion of IPFIX data into JSON format
    Path: /usr/local/lib/ipfixcol2/libjson-kafka-output.so
    Version: 2.2.0
    Notes:

    • Deep bind (RTLD_DEEPBIND) required
  • Name : timecheck
    Description: The plugin checks that timestamp elements in flows are relatively recent.
    Path: /usr/local/lib/ipfixcol2/libtimecheck-output.so
    Version: 2.0.0

  • Name : viewer
    Description: Output plugin for printing information about incoming IPFIX messages.
    Path: /usr/local/lib/ipfixcol2/libviewer-output.so
    Version: 2.0.0

Test xml configuration and exit

Is there a way to validate an XML configuration file other than ipfixcol2 -c <FILE>? I'm running timeout 1s ipfixcol2 -c <FILE> and catching RC 124, but that seems inelegant and error-prone (a slow read due to HDD spin-up comes to mind).
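For what it's worth, the workaround described above can be wrapped in a small shell sketch (the config path is an example):

# RC 124 means ipfixcol2 was still running after 1 second,
# i.e. the configuration at least parsed and the collector started.
timeout 1s ipfixcol2 -c startup.xml
[ $? -eq 124 ] && echo "config OK" || echo "config failed"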

Kafka connection

Hey guys,

I am having trouble to run ipfixcol2 JSON-kafka plugin.

Setup is like this:
Computer A:

Zookeeper with Kafka broker on standard port 9092.
IP address: 192.168.1.2

Computer B:

ipfixcol2 with json-kafka
softflowd - reporting data to localhost:4739
IP address 192.168.1.144

Here you can see that ipfixcol is keeping the connection open (screenshot not included).

But none of the data is being transferred.

Those are my settings, almost 1:1 with the example file.

<!--
  Receive flow data simultaneously over TCP and UDP and store them on a local
  drive in a nfdump compatible format (multiple instances of the same input
  plugin).
-->
<ipfixcol2>
  <!-- Input plugins -->
  <inputPlugins>
    <input>
      <name>TCP collector</name>
      <plugin>tcp</plugin>
      <params>
        <!-- Listen on port 4739 -->
        <localPort>4739</localPort>
        <!-- Bind to all local addresses -->
        <localIPAddress></localIPAddress>
      </params>
    </input>

    <input>
      <name>UDP collector</name>
      <plugin>udp</plugin>
      <params>
        <!-- Listen on port 4739 -->
        <localPort>4739</localPort>
        <!-- Bind to all local addresses -->
        <localIPAddress></localIPAddress>
      </params>
    </input>
  </inputPlugins>

  <!-- Output plugins -->
  <outputPlugins>
    <output>
      <name>JSON output</name>
      <plugin>json</plugin>
        <params>
          <tcpFlags>formatted</tcpFlags>
          <timestamp>formatted</timestamp>
          <protocol>formatted</protocol>
          <ignoreUnknown>true</ignoreUnknown>
          <ignoreOptions>true</ignoreOptions>
          <nonPrintableChar>true</nonPrintableChar>
          <octetArrayAsUint>true</octetArrayAsUint>
          <numericNames>false</numericNames>
          <splitBiflow>false</splitBiflow>
          <detailedInfo>false</detailedInfo>
          <templateInfo>false</templateInfo>

          <outputs>
              <kafka>
                  <name>Send to Kafka</name>
                  <brokers>192.168.1.2:9092</brokers>
                  <topic>ipfix</topic>
                  <blocking>false</blocking>
                  <partition>unassigned</partition>

                  <!-- Zero or more additional properties -->
                  <property>
                      <key>compression.codec</key>
                      <value>lz4</value>
                  </property>
              </kafka>
          </outputs>
      </params>
    </output>
  </outputPlugins>
</ipfixcol2>

This is a screenshot taken after shutting down Kafka (image not included).

That is fine, because these errors showed up just after shutting down Kafka, but it's quite strange that it reports connection problems to localhost all over the place.

EDIT: Forgot to mention, but it works when Kafka is running on localhost.

Contribution to ipfixcol2

Hi, I'm interested in this project and I'm thinking about developing intermediate plugins. Is there any guideline document to start from?

using json output plugin of devel branch

Hi,
I have installed the ipfixcol2 devel branch for writing the output stream to a Kafka topic, but I encounter a segmentation fault error when using the JSON plugin with every output. My OS is CentOS 7 (also tested on Ubuntu 18.04), and here is my sample conf file:

<ipfixcol2>
  <inputPlugins>
    <input>
      <name>UDP collector</name>
      <plugin>udp</plugin>
      <params>
        <localPort>5555</localPort>
      </params>
    </input>
  </inputPlugins>
  <outputPlugins>
    <output>
      <name>JSON output</name>
      <plugin>json</plugin>
      <params>
        <tcpFlags>formatted</tcpFlags>
        <timestamp>formatted</timestamp>
        <protocol>formatted</protocol>
        <ignoreUnknown>true</ignoreUnknown>
        <ignoreOptions>true</ignoreOptions>
        <nonPrintableChar>true</nonPrintableChar>
        <octetArrayAsUint>true</octetArrayAsUint>
        <numericNames>false</numericNames>
        <splitBiflow>false</splitBiflow>
        <detailedInfo>false</detailedInfo>
        <templateInfo>false</templateInfo>
        <outputs>
          <!-- Choose one or more of the following outputs -->
          <print>
            <name>Printer to standard output</name>
          </print>
        </outputs>
      </params>
    </output>
  </outputPlugins>
</ipfixcol2>

Failed to start by command 'ipfixcol2 -c /root/ipfixudp.xml'

Hi experts,
it always shows: "The instance holds information about 0 active session(s)."

root@logger-vm-3:~# ipfixcol2 -c /root/ipfixudp.xml -vvv
INFO: Configurator: Information Elements have been successfully loaded from '/etc/libfds/'.
INFO: Configurator (plugin manager): 10 plugins found
DEBUG: Configurator (plugin manager): Plugin 'json' has been successfully loaded from '/usr/local/lib/ipfixcol2/libjson-output.so'.
DEBUG: Configurator (plugin manager): Input plugin 'udp' does not support requests to close a Transport Session.
DEBUG: Configurator (plugin manager): Plugin 'udp' has been successfully loaded from '/usr/local/lib/ipfixcol2/libudp-input.so'.
DEBUG: Configurator: All plugins have been successfully loaded.
DEBUG: JSON output: Calling instance constructor of the plugin 'json'
DEBUG: Output manager: Calling instance constructor of the plugin 'Output manager'
DEBUG: UDP collector (parser): Calling instance constructor of the plugin 'IPFIX Parser'
DEBUG: UDP collector: Calling instance constructor of the plugin 'udp'
DEBUG: JSON output: (File output) Thread started...
INFO: UDP collector: The socket receive buffer size of a new socket (local IP 11.100.5.222) enlarged (from 106496 to 16777216 bytes).
INFO: UDP collector: Bind succeed on 11.100.5.222 (port 4739)
DEBUG: Configurator: All instances have been successfully initialized.
DEBUG: JSON output: Instance thread of the output plugin 'json' has started!
DEBUG: Configurator: All threads of instances has been successfully started.
DEBUG: UDP collector (parser): Instance thread of the intermediate plugin 'IPFIX Parser' has started!
DEBUG: Output manager: Instance thread of the intermediate plugin 'Output manager' has started!
DEBUG: UDP collector: Instance thread of the input plugin 'udp' has started!
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).
DEBUG: UDP collector: The instance holds information about 0 active session(s).

Query Regarding Incoming and Outgoing Packet Rates in ipfixcol2 Collector

We are encountering an issue while using the ipfixcol2 collector. Specifically, when sending traffic from a remote machine to the collector, we have noticed that the incoming packet rate received by the ipfixcol2 collector is greater than the outgoing packet rate originating from the remote machine. This seems counterintuitive, and we need your expertise to understand and resolve it.

Here is some information about our configuration:

We are running the ipfixcol2 collector using the command ./ipfixcol2 -vvv -c /opt/ipfixcol2/conf/startup.xml or ./ipfixcol2 -c /opt/ipfixcol2/conf/startup.xml.
Our collector setup involves the utilization of the UDP input plugin.
We have configured the collector to use a JSON-Kafka output plugin.
We observe that when we start the collector, there is a sudden and unexplained increase in the packet rate received by the collector as we send packets from a different machine. The discrepancy between the incoming and outgoing packet rates is unexpected.

We are reaching out to seek your suggestions on potential reasons for this discrepancy. We are eager to understand the root cause of this behavior and identify steps to rectify it.
Could you kindly share your expertise on this matter?

Segmentation fault when I provide invalid unirec-elements.txt

When I provide a unirec-elements.txt with a duplicate element entry, ipfixcol2 ends with a segmentation fault.

To replicate the error, insert one line twice into the unirec-elements.txt config, like:

...
# --- OVPN Information elements ---

OVPN_CONF_LEVEL                 uint8        cesnet:OVPNConfLevel

# --- NTP Information elements  ---
NTP_LEAP                        uint8        cesnet:NTPLeap
NTP_LEAP                        uint8        cesnet:NTPLeap
NTP_VERSION                     uint8        cesnet:NTPVersion 

...

json output : unicode nullcharacter http fields

In the Kafka JSON output we are getting, in all HTTP records:
"httpRequestMethod":"GET\u0000\u0000\u0000\u0000\u0000","httpRequestHost":"xml-ads.com\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000","httpRequestTarget":"/in.html?q=cD0yMTkjcz00Mjc2ODMwI3Q9TUFJTlNUUkVBTSNxPTAjYz1JT\u0000\u0000\u0000\u0000"

Error while compiling

Hello everyone.

I am trying to install this project on CentOS 7.
I have followed all the steps you indicate, first installing the libfds library:

$ git clone https://github.com/CESNET/libfds.git
$ cd libfds
$ mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr
$ make

In addition, I have installed the packages that you indicate:

yum install gcc gcc-c++ cmake make python3-docutils zlib-devel

However, once the library was installed, I tried to build ipfixcol2 and the following compilation error occurred in one of the files:

[ 69%] Building CXX object src/plugins/output/json/CMakeFiles/json-output.dir/src/Storage.cpp.o
/home/dit/ipfixcol2/src/plugins/output/json/src/Storage.cpp: In member function ‘void Storage::convert_tmplt_rec(fds_tset_iter*, uint16_t, const fds_ipfix_msg_hdr*)’:
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:223:59: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, "\"ipfix:templateId\":%" PRIu16, tmplt->id);
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:226:64: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:scopeCount\":%" PRIu16, tmplt->fields_cnt_scope);
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:244:62: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, "\"ipfix:elementId\":%" PRIu16, current.id);
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:246:66: error: expected ‘)’ before ‘PRIu32’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:enterpriseId\":%" PRIu32, current.en);
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:248:65: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:fieldLength\":%" PRIu16, current.length);
^

/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp: In member function ‘void Storage::addDetailedInfo(const fds_ipfix_msg_hdr*)’:
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:396:60: error: expected ‘)’ before ‘PRIu32’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:exportTime\":%" PRIu32, ntohl(hdr->export_time));
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:399:59: error: expected ‘)’ before ‘PRIu32’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:seqNumber\":%" PRIu32, ntohl(hdr->seq_num));
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:402:54: error: expected ‘)’ before ‘PRIu32’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:odid\":%" PRIu32, ntohl(hdr->odid));
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:405:59: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:msgLength\":%" PRIu16, ntohs(hdr->length));
^
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp: In member function ‘void Storage::convert(fds_drec&, const fds_iemgr_t*, fds_ipfix_msg_hdr*, bool)’:
/path/to/ipfixcol2/src/plugins/output/json/src/Storage.cpp:448:64: error: expected ‘)’ before ‘PRIu16’
snprintf(field, LOCAL_BSIZE, ",\"ipfix:templateId\":%" PRIu16, rec.tmplt->id);
^
make[2]: *** [src/plugins/output/json/CMakeFiles/json-output.dir/src/Storage.cpp.o] Error 1
make[1]: *** [src/plugins/output/json/CMakeFiles/json-output.dir/all] Error 2
make: *** [all] Error 2

I've been browsing the file looking for syntactic errors but haven't found anything, so I'm afraid it might have been a mistake on my part at installation time.

Has anyone else received this error?
How do I solve it?
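For what it's worth, one possible cause on older toolchains (e.g. GCC 4.8 with the glibc shipped on CentOS 7) is that the PRIu16/PRIu32 macros are exposed to C++ only when __STDC_FORMAT_MACROS is defined; a hedged sketch of such a fix:

// Older glibc exposes the PRI* format macros to C++ only when this
// macro is defined before <cinttypes>/<inttypes.h> is first included.
#define __STDC_FORMAT_MACROS
#include <cinttypes>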

Thank you, everyone

Deduplication Feature in ipfixcol2 tool

Hi Team,
We are planning to use IPFIXcol2 as a collector for our NetFlow collection, as a replacement for our existing vendor tool. The current tool has a deduplication feature; is this feature available in ipfixcol2?
Currently we are running our POCs with the UDP input plugin and the JSON output plugin, which sends the converted logs to another server using the "send" function.

Thanks,
Sree

Plugin to merge IPFIX data

Is there a plugin that merges the IPFIX statistics of different packets related to the same flow but coming from different observation points?

What is the Maximum Ingestion rate that is acceptable by IPFIXCol2?

Hi Team,
We are planning to use IPFIXcol2 as a collector for our NetFlow collection, as a replacement for our existing vendor tool. The maximum ingestion rate that the vendor tool can accept is 8 million flows/min. I don't see any note about the maximum ingestion rate that IPFIXcol2 can process per minute. Can you please help with standards related to the load it can accept?

Kind Regards,
Sanky.
