EasyNet: 100 Gbps TCP/IP Network Stack for HLS

This repository provides TCP/IP network support at 100 Gbit/s in Vitis HLS, along with several examples that demonstrate its usage.

Architecture Overview

This repository creates designs with three Vitis kernels: cmac kernel, network kernel and user kernel. The cmac kernel and the network kernel serve as a common infrastructure for network functionality while the user kernel can be customized for each application.

CMAC Kernel

The cmac kernel contains an UltraScale+ Integrated 100G Ethernet Subsystem. It is connected to the GT pins exposed by the Vitis shell and runs at the 100G Ethernet Subsystem clock, i.e., 322 MHz. It also exposes two 512-bit AXI4-Stream interfaces to the network kernel for Tx and Rx network packets. Internally, the cmac kernel has CDC (clock domain crossing) logic to convert from the network kernel clock to the 100G Ethernet Subsystem clock.

Network Kernel

The network kernel is a collection of HLS IP cores that provide TCP/IP network functionality. It can saturate 100 Gbps of network bandwidth and is clocked at 250 MHz. The kernel contains two 512-bit AXI4 interfaces to two memory banks, which serve as temporary buffers for Tx packet retransmission and Rx packet buffering, respectively. The network kernel also exposes AXI4-Stream interfaces to the user kernel for opening and closing TCP/IP connections and for sending and receiving network data. For a detailed description of the interface, see below. The TCP/IP stack also has several compile-time parameters that can be tuned for performance benefits or resource savings for a specific application; see how to configure the TCP/IP stack below.

User Kernel

The user kernel contains AXI4-Stream interfaces to the network kernel and other interfaces that can be customized for each application. It is clocked at the same frequency as the network kernel. The user kernel can be developed in any Vitis-supported language (RTL, C/C++, or OpenCL) and should satisfy the requirements of the Vitis application acceleration flow. This repository contains several examples written in RTL or C++ that illustrate how the user kernel can interact with the network kernel and that benchmark network statistics.

User-Network Kernel Interface

The AXI4-Stream interfaces between the network kernel and the user kernel are shown below. The interfaces can be divided into two paths: Rx and Tx. The structure of the interfaces can be found in kernel/common/include/toe.hpp.
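For orientation, the control structures carried on these interfaces look roughly as follows. This is a simplified sketch used by the examples in this section; the authoritative definitions live in kernel/common/include/toe.hpp, and their exact field names and widths may differ.

#include <ap_int.h>

// Hedged sketch of the user-network control structures (illustrative only;
// see kernel/common/include/toe.hpp for the real definitions).
struct ipTuple {
    ap_uint<32> ip_address;       // destination IPv4 address
    ap_uint<16> ip_port;          // destination TCP port
};
struct openStatus {
    ap_uint<16> sessionID;        // session ID of the new connection
    bool        success;          // false if the connection could not be established
};
struct appNotification {
    ap_uint<16> sessionID;
    ap_uint<16> length;           // bytes available in the Rx buffer
    ap_uint<32> ipAddress;
    ap_uint<16> dstPort;
    bool        closed;           // set when the peer terminated the connection
};
struct appReadRequest {
    ap_uint<16> sessionID;
    ap_uint<16> length;           // bytes to fetch from the Rx buffer
};
struct appTxMeta {
    ap_uint<16> sessionID;
    ap_uint<16> length;           // bytes of the upcoming payload
};
struct appTxRsp {
    ap_uint<16> sessionID;
    ap_uint<16> length;
    ap_uint<30> remaining_space;  // free Tx buffer space for this session
    ap_uint<2>  error;            // non-zero if the transfer cannot proceed
};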

On the Rx path, the user kernel can put a TCP port into the listening state through the listenPortReq interface and is notified about the port state change on the listenPortRsp interface. An Rx control handshake is required between the user kernel and the network kernel before receiving the payload. Through the notification interface, the user kernel is informed either about data available in the Rx buffer or about connection termination by the other end. To retrieve data from the Rx buffer, the user kernel issues a request on the rxDataReq interface containing the session ID and the length to be retrieved. The network kernel answers this request by providing the stream of data on the rxDataRsp interface.
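As a minimal illustration of this handshake, an HLS user kernel that listens on a port and drains one notification's worth of data at a time could be structured as follows. This is a hedged sketch against the structures above, not a drop-in implementation; the 512-bit data stream is shown as a plain ap_axiu word here, whereas the repository uses its own net_axis type.

#include <hls_stream.h>
#include <ap_axi_sdata.h>

// Illustrative Rx-side user kernel: listen on port 5001, then for every
// notification request the advertised bytes and drain them from rxDataRsp.
void rx_example(hls::stream<ap_uint<16> >&        listenPortReq,
                hls::stream<bool>&                listenPortRsp,
                hls::stream<appNotification>&     notification,
                hls::stream<appReadRequest>&      rxDataReq,
                hls::stream<ap_axiu<512,0,0,0> >& rxDataRsp)
{
#pragma HLS PIPELINE II=1
    enum fsm { OPEN_PORT, WAIT_PORT_STATUS, WAIT_NOTIFICATION, CONSUME };
    static fsm state = OPEN_PORT;

    switch (state)
    {
    case OPEN_PORT:
        listenPortReq.write(5001);          // put port 5001 into listening state
        state = WAIT_PORT_STATUS;
        break;
    case WAIT_PORT_STATUS:
        if (!listenPortRsp.empty())
            state = listenPortRsp.read() ? WAIT_NOTIFICATION : OPEN_PORT;
        break;
    case WAIT_NOTIFICATION:
        if (!notification.empty())
        {
            appNotification n = notification.read();
            if (n.length != 0)              // data waiting in the Rx buffer
            {
                appReadRequest req;
                req.sessionID = n.sessionID;
                req.length    = n.length;
                rxDataReq.write(req);
                state = CONSUME;
            }
            // n.closed set: the peer terminated the connection
        }
        break;
    case CONSUME:
        if (!rxDataRsp.empty())
        {
            ap_axiu<512,0,0,0> w = rxDataRsp.read();
            if (w.last)                     // end of this payload
                state = WAIT_NOTIFICATION;
        }
        break;
    }
}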

The user kernel can open active connections through the openConReq interface, providing the IP address and TCP port of the destination. Through openConRsp, it will then either receive the session ID of the new connection or be notified that the connection could not be established. The user kernel can close a connection by issuing the session ID to the closeConReq interface. To transfer data over an existing connection, a Tx control handshake is required before each payload transfer. The user kernel first provides the session ID and the length to the txDataReq interface. For each requested transfer, the TCP module returns a response on the txDataRsp interface indicating potential errors and the remaining buffer space for that connection. If txDataRsp does not indicate an error, the user kernel can send the payload on the txData interface. Note that each transfer must not exceed the maximum segment size.
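A corresponding Tx-side sketch, under the same assumptions as above, opens a connection, performs the control handshake, streams one 1408-byte payload and then closes the connection. The destination address, port and payload length are illustrative, and the IP byte order follows a hypothetical convention; check the example kernels for the stack's actual encoding.

// Illustrative Tx-side user kernel: open, handshake, send, close.
void tx_example(hls::stream<ipTuple>&             openConReq,
                hls::stream<openStatus>&          openConRsp,
                hls::stream<appTxMeta>&           txDataReq,
                hls::stream<appTxRsp>&            txDataRsp,
                hls::stream<ap_axiu<512,0,0,0> >& txData,
                hls::stream<ap_uint<16> >&        closeConReq)
{
#pragma HLS PIPELINE II=1
    enum fsm { OPEN, WAIT_CON, REQUEST, WAIT_RSP, SEND, CLOSE, DONE };
    static fsm state = OPEN;
    static ap_uint<16> session   = 0;
    static ap_uint<16> remaining = 0;
    const  ap_uint<16> LEN = 1408;          // one transfer, <= maximum segment size

    switch (state)
    {
    case OPEN: {
        ipTuple dst;
        dst.ip_address = 0x0A01D479;        // 10.1.212.121 (byte order: stack convention)
        dst.ip_port    = 5001;
        openConReq.write(dst);
        state = WAIT_CON;
        break;
    }
    case WAIT_CON:
        if (!openConRsp.empty())
        {
            openStatus s = openConRsp.read();
            if (s.success) { session = s.sessionID; state = REQUEST; }
            else           { state = OPEN; }         // retry
        }
        break;
    case REQUEST: {                          // Tx control handshake
        appTxMeta m;
        m.sessionID = session;
        m.length    = LEN;
        txDataReq.write(m);
        state = WAIT_RSP;
        break;
    }
    case WAIT_RSP:
        if (!txDataRsp.empty())
        {
            appTxRsp r = txDataRsp.read();
            if (r.error == 0) { remaining = LEN; state = SEND; }
            else              { state = REQUEST; }   // e.g. no buffer space yet
        }
        break;
    case SEND: {
        ap_axiu<512,0,0,0> w;
        w.data = 0;                          // payload generation elided
        w.keep = -1;                         // all 64 bytes valid
        w.last = (remaining <= 64);
        txData.write(w);
        remaining -= 64;
        if (w.last) state = CLOSE;
        break;
    }
    case CLOSE:
        closeConReq.write(session);
        state = DONE;
        break;
    case DONE:
        break;
    }
}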

How to Achieve 100 Gbps Tx and Rx Rate

Though the network kernel is capable of processing network packets at line rate, actually achieving 100 Gbps at the user kernel requires a careful design that interacts properly with the network kernel, along with proper tuning of the TCP/IP stack. Here we list three suggestions for achieving 100 Gbps at the application level.

(1) Pipeline Control Handshake and Payload Transfer

For each Rx and Tx packet transfer, a control handshake between the user kernel and the network kernel is required before the actual payload is received or transmitted. One straightforward approach is to perform this "control handshake - payload transfer" sequence for each transaction in turn. However, due to complex control logic and registering in the TCP/IP stack, the control handshake can take 10 to 30 cycles. Considering that a payload transfer of 1408 bytes takes only 22 cycles (64 bytes per cycle), the payload transfer would be stalled for a substantial portion of each transaction. Therefore, in order to saturate the line rate, the control handshake and payload transfer of consecutive transactions should be pipelined. The figure below shows example waveforms of the valid signals of the Tx interfaces with and without pipelining between transactions.
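One way to express this pipelining in HLS, sketched below under the same interface assumptions as above, is to split the handshake and the payload into two independently pipelined processes (running concurrently, e.g. in a DATAFLOW region) connected by an internal grant FIFO, so that the handshake of transaction i+1 overlaps the payload of transaction i. This is an illustration, not the repository's benchmark code.

// Handshake process: keeps several Tx requests in flight so the payload
// process below never waits on a txDataRsp between transactions.
void tx_handshake(hls::stream<appTxMeta>&    txDataReq,
                  hls::stream<appTxRsp>&     txDataRsp,
                  hls::stream<ap_uint<16> >& grants,   // internal FIFO to tx_payload
                  ap_uint<16>                session,
                  ap_uint<32>                numPkgs)
{
#pragma HLS PIPELINE II=1
    static ap_uint<32> issued   = 0;
    static ap_uint<8>  inFlight = 0;

    if (issued < numPkgs && inFlight < 8)    // bound the outstanding requests
    {
        appTxMeta m;
        m.sessionID = session;
        m.length    = 1408;
        txDataReq.write(m);
        issued++;
        inFlight++;
    }
    if (!txDataRsp.empty())
    {
        appTxRsp r = txDataRsp.read();
        inFlight--;
        if (r.error == 0)
            grants.write(r.length);          // unblock one payload transfer
        // a real kernel must also re-issue the request on error (no buffer space)
    }
}

// Payload process: streams 64-byte words back-to-back as grants arrive.
void tx_payload(hls::stream<ap_uint<16> >&        grants,
                hls::stream<ap_axiu<512,0,0,0> >& txData)
{
#pragma HLS PIPELINE II=1
    static ap_uint<16> remaining = 0;

    if (remaining == 0 && !grants.empty())
        remaining = grants.read();
    if (remaining != 0)
    {
        ap_axiu<512,0,0,0> w;
        w.data = 0;                          // payload generation elided
        w.keep = -1;                         // all 64 bytes valid
        w.last = (remaining <= 64);
        txData.write(w);
        remaining = w.last ? ap_uint<16>(0) : ap_uint<16>(remaining - 64);
    }
}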

(2) Send Packets With Sizes That Are Multiples of 64 Bytes

During Tx, payloads are buffered in global memory for retransmission in case of packet loss. This requires 100 Gbps of memory bandwidth if we want to saturate the network bandwidth. However, the memory access pattern affects the achievable memory bandwidth, and, especially with the Vitis shell, memory accesses at addresses not aligned to 64 bytes significantly decrease it. In the case of sequential accesses at unaligned addresses, the memory bandwidth is only about 25 Gbps, limiting the Tx rate. It is therefore recommended to avoid unaligned memory accesses whenever possible, which can be achieved by sending packets whose sizes are multiples of 64 bytes.
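For illustration, a hypothetical helper that rounds a transfer length up to the next 64-byte boundary; any padding bytes then have to be accounted for by the application protocol.

// Round a transfer length up to the next multiple of 64 bytes so that every
// 512-bit beat is full and the retransmission buffer in global memory is
// accessed at 64-byte-aligned addresses, e.g. pad_to_64(1000) == 1024.
ap_uint<16> pad_to_64(ap_uint<16> len)
{
    return (len + 63) & 0xFFC0;
}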

(3) Concurrent Connections and Large Maximum Transfer Unit (MTU)

To achieve 100 Gbps at the application level, both endpoints of the communication must operate at network rate. When communicating between an FPGA and a CPU, this requires proper tuning on the CPU side. First, concurrent connections should be established and pinned to different threads, as in the sketch below. Second, a large MTU (e.g., 4096 bytes) should be set to reduce the per-packet parsing overhead.
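On the CPU side this typically means one socket per thread, with each thread pinned to its own core. The following is a minimal Linux sketch (compile with g++ -pthread); the port numbers, core mapping and connection count are illustrative, and error handling is elided. The MTU itself is configured on the NIC, for example with ip link set <iface> mtu 4096 on Linux.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>
#include <cstdint>
#include <thread>
#include <vector>

// One receiving socket per thread, each pinned to its own core, so that
// several concurrent TCP connections are drained in parallel.
static void rx_worker(int core, uint16_t port)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(port);
    bind(srv, (sockaddr*)&addr, sizeof(addr));
    listen(srv, 1);
    int conn = accept(srv, nullptr, nullptr);

    std::vector<char> buf(1 << 20);
    while (read(conn, buf.data(), buf.size()) > 0) { /* drain */ }
    close(conn);
    close(srv);
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)   // four concurrent connections on cores 0-3
        workers.emplace_back(rx_worker, i, static_cast<uint16_t>(5001 + i));
    for (auto& t : workers) t.join();
}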

Performance Benchmark

For performance benchmarks in terms of throughput and connection-open time, please see here.

Clone the Repository

Git Clone

git clone https://github.com/fpgasystems/Vitis_with_100Gbps_TCP-IP.git
cd Vitis_with_100Gbps_TCP-IP
git submodule update --init --recursive

Configure TCP Stack

Set up the TCP/IP stack HLS IPs:

mkdir build
cd build
cmake .. -DFDEV_NAME=u280 -DTCP_STACK_EN=1
make ip

TCP/IP stack options:

TCP/IP stack options:

  • FNS_TCP_STACK_MAX_SESSIONS (Integer): Maximum number of sessions supported by the stack. Each session requires a 64 KB Tx buffer and a 64 KB Rx buffer in off-chip memory, plus state tables in on-chip memory, so this parameter is a trade-off between the maximum supported session count and resource usage. Default: 1000
  • FNS_TCP_STACK_RX_DDR_BYPASS_EN (<0,1>): Bypass Rx packet buffering. If the user application can consume Rx packets at line rate, setting this parameter lets the network kernel forward packets directly to the user kernel, which reduces global memory usage and latency. Default: 1
  • FNS_TCP_STACK_WINDOW_SCALING_EN (<0,1>): Enable TCP window scaling. Default: 1
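For example, assuming these options are passed to CMake in the same way as FDEV_NAME above (the values here are illustrative):

cmake .. -DFDEV_NAME=u280 -DTCP_STACK_EN=1 \
         -DFNS_TCP_STACK_MAX_SESSIONS=512 \
         -DFNS_TCP_STACK_RX_DDR_BYPASS_EN=1 \
         -DFNS_TCP_STACK_WINDOW_SCALING_EN=1
make ip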

Create Design

The following example command will synthesize and implement the design with the selected user kernel. The generated XCLBIN resides in the folder build_dir.hw.xilinx_u280_xdma_201920_3, and the generated host executable resides in the folder host.

cd ../
make all TARGET=hw DEVICE=/opt/xilinx/platforms/xilinx_u280_xdma_201920_3/xilinx_u280_xdma_201920_3.xpfm USER_KRNL=iperf_krnl USER_KRNL_MODE=rtl NETH=4
  • DEVICE: Alveo development target platform
  • USER_KRNL: Name of the user kernel
  • USER_KRNL_MODE: If the user kernel is an RTL kernel, rtl mode should be specified; if it is a C/C++ kernel, hls mode should be specified.

Kernel options:

  • iperf_krnl (rtl): The iperf kernel contains some HLS IPs and an RTL wrapper. It can be used to benchmark network bandwidth, acting as an iperf2 client. Usage: ./host XCLBIN_FILE [Server IP address in format 10.1.212.121] [#Connection] [Seconds]. The default port number of iperf2 is 5001.
  • scatter_krnl (rtl): The scatter kernel contains some HLS IPs and an RTL wrapper. It scatters packets across several connections. Usage: ./host XCLBIN_FILE [IP address in format 10.1.212.121] [Base Port] [#Connection] [#Tx Pkg]. The kernel tries to open connections on port numbers at incremental offsets from the Base Port.
  • hls_send_krnl (hls): A C kernel working in the Vitis HLS flow. It contains simple examples in C of opening a connection and sending data through it. Usage: ./host XCLBIN_FILE [#Tx Pkt] [IP address in format: 10.1.212.121] [Port]
  • hls_recv_krnl (hls): A C kernel working in the Vitis HLS flow. It contains simple examples in C of listening on a port and receiving data from a connection established on that port. Usage: ./host XCLBIN_FILE [#RxByte] [Port]
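For instance, a hypothetical iperf_krnl run against a server at 10.1.212.121 using 4 connections for 10 seconds would be:

./host build_dir.hw.xilinx_u280_xdma_201920_3/network.xclbin 10.1.212.121 4 10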

Repository structure

├── fpga-network-stack
├── scripts
├── kernel
│   ├── cmac_krnl
│   ├── network_krnl
│   └── user_krnl
│       ├── iperf_krnl
│       ├── scatter_krnl
│       ├── hls_send_krnl
│       └── hls_recv_krnl
├── host
│   ├── iperf_krnl
│   ├── scatter_krnl
│   ├── hls_send_krnl
│   └── hls_recv_krnl
├── common
└── img
  • fpga-network-stack: this folder contains the HLS code for the 100 Gbps TCP/IP stack
  • scripts: this folder contains scripts to package each kernel and to connect the cmac kernel to the GT pins
  • kernel: this folder contains the RTL/HLS code of the cmac kernel, the network kernel and the user kernel; the user kernel can be configured to be one of the example kernels
  • host: this folder contains the host code for each user kernel
  • img: this folder contains images
  • common: this folder contains the necessary libraries for running the Vitis kernels

Support

Tools

  • Vitis 2022.1
  • XRT 2.13.466

Alveo Cards

Alveo Development Target Platform(s):

  • U280: xilinx_u280_xdma_201920_3
  • U250: xilinx_u250_gen3x16_xdma_3_1_202020_1
  • U50: xilinx_u50_gen3x16_xdma_5_202210_1
  • U55C: xilinx_u55c_gen3x16_xdma_3_202210_1

Requirements

In order to generate this design you will need a valid UltraScale+ Integrated 100G Ethernet Subsystem license set up in Vivado.

Acknowledgement

We would like to thank David Sidler for developing the prototype of 100 Gbps TCP/IP stack and Mario Daniel Ruiz Noguera for helpful discussion. We also thank Xilinx for generous donations of software and hardware to build the Xilinx Adaptive Compute Cluster (XACC) at ETH Zurich.

Publication

If you use EasyNet, please cite us:
@INPROCEEDINGS {easynet,
    author = {Z. He and D. Korolija and G. Alonso},
    booktitle = {2021 31st International Conference on Field-Programmable Logic and Applications (FPL)},
    title = {EasyNet: 100 Gbps Network for HLS},
    year = {2021},
    pages = {197-203},
    doi = {10.1109/FPL53798.2021.00040},
    url = {https://doi.ieeecomputersociety.org/10.1109/FPL53798.2021.00040},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    month = {sep}
}

Issues

How to run the demo and configure some parameters?

I have already compiled and implemented your project on an Alveo U250 card and used the command line ./host ./network.xclbin 1 10,

but nothing happens on my server side. How do I fix this?

U250 support?

Hi,

Just wondering how hard it would be to port this to the U250. Will the U250 be supported in the future?

Best,
Yang

How do I use the network stack with an OpenCL user kernel?

Hey,

The README states that this network stack can also be used with OpenCL. I would really like to give it a shot. Unfortunately, the repository does not seem to contain examples of OpenCL user kernels, so I do not know how to get started. Can you please give me a short example or point me to further documentation that can help me in this regard?
Thanks in advance!

Vitis 2022.2 and 2023.1

Can I run this implementation for U55C with Vitis 2022.2 or 2023.1? Just want to ask if someone has tried it. Thanks.

How to run RoCEv2 in this repo

I built this repo for a U200 board and have generated the bitstream and xclbin.
But actually I want to build a RoCEv2 system, not the TCP/IP TOE model.
Can anyone tell me how to make a RoCEv2 demo?

TCP/IP not handling "DUP ACK" from the remote end

I'm evaluating the performance of the hardened TCP/IP stack with the iperf client user module. After transferring for less than one second, the transfer stops. Monitoring the TCP traffic with Wireshark, I can see a lot of "DUP ACK" packets from the iperf server side (a desktop PC with a 100G QSFP interface).

The DUP ACK packets all point to the same sequence number: 872278593.

It looks like the TCP/IP stack does not respond to the "DUP ACK" packets.
Please advise.

Watson

U200 support

May I ask what adjustments I need to make if I want to deploy on a U200 and use Vitis 2020?

Can I build the TCP stack using Vitis 2021.1

I have a requirement to use Vitis 2021.1. But I saw the supported version for the TCP stack is 2019.2. Can I still use 2021.1 with some changes to build the bitstream? Are there any plans to add support for this version in the near future?

Error during creation of HLS IP cores

When I do "make all", I seem to get the same error on the HLS IP cores, as seen below.

WARNING: [Vivado 12-3523] Attempt to change 'Component_Name' from 'axis_256_to_64_converter' to 'axis_256to_64_converter' is not allowed and is ignored.
ERROR: [Coretcl 2-1134] No IP matching VLNV 'ethz.systems:hls:toe:1.6' was found. Please check your repository configuration.

while executing

"source $path_to_pack_tcl/network_stack.tcl"
(file "kernel/network_krnl/package_network_krnl.tcl" line 85)

while executing

"source -notrace ${package_tcl_path}"
(file "scripts/gen_xo.tcl" line 53)
INFO: [Common 17-206] Exiting Vivado at Thu Mar 14 15:29:41 2024...
make: *** [config_rtl.mk:4: _x.hw.xilinx_u50_gen3x16_xdma_5_202210_1/network_krnl.xo] Error 1

Has anyone been able to fix this issue?

HARDWARE build error related to GTY.

Hi,
When I am trying to build hardware for the project I am getting below error:

ERROR: [VPL UTLZ-1] Resource utilization: GTYE4_CHANNEL over-utilized in Pblock pblock_dynamic_region (This design requires more GTYE4_CHANNEL cells than are available in Pblock 'pblock_dynamic_region'. This design requires 10 of such cell types but only 8 compatible sites are available in Pblock 'pblock_dynamic_region'. Please consider increasing the span of Pblock 'pblock_dynamic_region' or removing cells from it.)

I could not find any useful information on the web; any pointers would be helpful.

Thanks and regards,
Ishtiyaque Shaikh

Set Default Gateway for FPGA

Hi,

I tried to ping from a PC to the FPGA through a layer-3 switch, so I need a default gateway for the FPGA. How can I set that?

Thank you so much,
Duc

Error in implementing on U250

I tried to implement this on a U250, but it gives this error:

make all TARGET=hw DEVICE=/opt/xilinx/platforms/xilinx_u250_xdma_201830_2/xilinx_u250_xdma_201830_2.xpfm USER_KRNL=iperf_krnl USER_KRNL_MODE=rtl NETH=4

If anyone knows how to fix this issue kindly let me know.

How to get a free integrated license of CMAC?

I tried to obtain an UltraScale+ Integrated 100G Ethernet Subsystem license, but got this response:

Product Licensing - Name and Address Verification
Please correct the errors and send your information again.

We cannot fulfill your request as your account has failed export compliance verification. If this verification is in error, please review the Export Compliance Information page - https://www.xilinx.com/support/export-compliance.html

U.S. Government Export Approval
U.S. export regulations require that your First Name, Last Name, Company Name and Shipping Address be verified before Xilinx can fulfill your download request. Please provide accurate and complete information.
Addresses with Post Office Boxes and names/addresses with Non-Roman Characters with accents such as grave, tilde or colon are not supported by US export compliance systems.

What's the reason?

When TCP retransmission occurs, the TCP server crashes

I am using Windows 10.

I used Vitis 2021.2 to compile the fpgasystems/Vitis_with_100Gbps_TCP-IP/fpga-network-stack folder, and I can set up a TCP server on the FPGA and a TCP client on the PC. The FPGA sends data to the PC at a rate of about 5 Gbps. But once a TCP retransmission occurs, the TCP server crashes. The retransmission can be triggered by pinging the FPGA from the PC. At that point, I can see the TOE IP read the data back from DDR and try to retransmit.

In a similar setup, I used Vivado 2018.3 to compile the fpgasystems/fpga-network-stack folder; when a TCP retransmission occurs there, the TCP server does not crash.

In addition, when the TCP_STACK_FAST_RETRANSMIT_EN option is not used, an error occurs when HLS compiles the TOE IP. The error is on line 215 of hls/toe/tx_engine/tx_engine.cpp: in "txSar2txEng_upd_rsp.read(txSar)", "txSar" is not defined, and some other variables, such as "currLength", are also undefined.

the loop back user core issue

I want to build a 100G TCP/IP server on the FPGA, so that a PC client can send data to the server and the server loops that data back to the PC client. I have changed the iperf kernel, but that kernel only receives. Could you please take a look at this code and let me know where the issue is?

switch (serverFsmState)
{
case WAIT_PKG:
    if (!rxMetaData.empty() && !rxDataBuffer.empty())
    {
        rxMetaData.read();
        net_axis<WIDTH> receiveWord = rxDataBuffer.read();
        if (!receiveWord.last)
        {
            serverFsmState = CONSUME;
        }
    }
    break;
case CONSUME:
    if (!rxDataBuffer.empty())
    {
        receiveWord = rxDataBuffer.read();
        if (receiveWord.last)
        {
            serverFsmState = WRITE_PKG;
        }
    }
    break;
case WRITE_PKG:
    txDataBuffer.write(receiveWord);
    if (receiveWord.last)
    {
        serverFsmState = WAIT_PKG;
    }
    break;
}

Thank you,
Duc

Cannot find top module in Vivado project

I've built this project on the U280 platform. However, when opening the Vivado project generated by Vitis/v++, I cannot find the top module named "pfm_top_wrapper". Also, I see that the GT reference clock of cmac_krnl is not assigned to a physical clock pin, yet the project can still be built.
Could you please help me understand this?
Thank you.

Do we need different cmac_usplus settings when testing on a 10G network?

Hi,
I have built the project with Vitis 2022.2 targeting an Alveo U280 board. However, my network is 10G only. Do we need to change any settings in cmac_usplus 3.1 (100G Ethernet Subsystem IP) to make it work on a 10G network?
Currently I have put an ILA on the cmac_usplus Rx side and I see the following error status:
stat_rx_internal_local_fault = 1
stat_rx_local_fault = 1
stat_rx_synced_err = FFFF

The current IP setting is Mode=CAUI4, Line Rate=4x25.78G.
Should I change Mode to CAUI10 and Line Rate to 10x10G?

Also, how do we know whether the cmac GTY is constrained to Ethernet port 0 or 1 of the Alveo U280 in this design?

Mismatch between branch 2022.1 and its Development Target Platform

I tried building the project on the new vitis_2022_1 branch targeting the U280, and it resulted in the following error on make all:

$ make all TARGET=hw DEVICE=/opt/xilinx/platforms/xilinx_u280_xdma_201920_3/xilinx_u280_xdma_201920_3.xpfm USER_KRNL=iperf_krnl USER_KRNL_MODE=rtl NETH=4

...

# if {[file exists "${xoname}"]} {
#     file delete -force "${xoname}"
# }
# package_xo -xo_path ${xoname} -kernel_name ${krnl_name} -ip_directory ./packaged_kernel_${suffix} -kernel_xml ${xml_path}
WARNING: [Vivado 12-4404] The CPU emulation flow in v++ is only supported when using a packaged XO file that contains C-model files, none were found.
WARNING: [Vivado 12-12407] VLNV in kernel.xml does not match VLNV in any of the IPs specified with the ip_directory option: ethz.ch:kernel:cmac_krnl:1.0
INFO: [Common 17-206] Exiting Vivado at Fri Aug 26 05:29:19 2022...
mkdir -p ./build_dir.hw.xilinx_u280_xdma_201920_3
/opt/Xilinx/Vitis/2022.1/bin/v++ -t hw --platform /opt/xilinx/platforms/xilinx_u280_xdma_201920_3/xilinx_u280_xdma_201920_3.xpfm --save-temps  --kernel_frequency 200 --advanced.param compiler.userPostSysLinkTcl=/home/ubuntu/sungsoo/Vitis_with_100Gbps_TCP-IP/scripts/post_sys_link.tcl  --dk chipscope:network_krnl_1:m_axis_tcp_open_status --dk chipscope:network_krnl_1:s_axis_tcp_tx_meta --dk chipscope:network_krnl_1:m_axis_tcp_tx_status  --dk chipscope:network_krnl_1:s_axis_tcp_open_connection  --dk chipscope:network_krnl_1:m_axis_tcp_port_status --dk chipscope:network_krnl_1:m_axis_tcp_notification --dk chipscope:network_krnl_1:m_axis_tcp_rx_meta  --dk chipscope:network_krnl_1:s_axis_tcp_read_pkg  --dk chipscope:network_krnl_1:s_axis_tcp_listen_port  --config ./kernel/user_krnl/iperf_krnl/config_sp_iperf_krnl.txt --config ./scripts/network_krnl_mem.txt --config ./scripts/cmac_krnl_slr.txt --report estimate --temp_dir ./build_dir.hw.xilinx_u280_xdma_201920_3 -l  -o'build_dir.hw.xilinx_u280_xdma_201920_3/network.xclbin' _x.hw.xilinx_u280_xdma_201920_3/network_krnl.xo _x.hw.xilinx_u280_xdma_201920_3/iperf_krnl.xo _x.hw.xilinx_u280_xdma_201920_3/cmac_krnl.xo
WARNING: [v++ 60-1604] The supplied option 'dk' is deprecated. To standardize the command line, the preferred alternative is 'debug.chipscope','debug.list_ports', 'debug.protocol. 
Option Map File Used: '/opt/Xilinx/Vitis/2022.1/data/vitis/vpp/optMap.xml'

****** v++ v2022.1 (64-bit)
  **** SW Build 3524075 on 2022-04-13-17:42:45
    ** Copyright 1986-2022 Xilinx, Inc. All Rights Reserved.

WARNING: [v++ 60-1495] Deprecated parameter found: compiler.userPostSysLinkTcl. Please use this replacement parameter instead: compiler.userPostDebugProfileOverlayTcl
INFO: [v++ 60-1306] Additional information associated with this v++ link can be found at:
	Reports: /home/ubuntu/sungsoo/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/reports/link
	Log files: /home/ubuntu/sungsoo/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/logs/link
WARNING: [v++ 60-1216] --report/-r option has been deprecated. Please use --report_level/-R estimate to generate an estimate report file for software emulation
Running Dispatch Server on port: 37311
INFO: [v++ 60-1548] Creating build summary session with primary output /home/ubuntu/sungsoo/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/network.xclbin.link_summary, at Fri Aug 26 05:29:42 2022
INFO: [v++ 60-1316] Initiating connection to rulecheck server, at Fri Aug 26 05:29:42 2022
INFO: [v++ 60-1315] Creating rulecheck session with output '/home/ubuntu/sungsoo/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/reports/link/v++_link_network_guidance.html', at Fri Aug 26 05:29:43 2022
INFO: [v++ 60-895]   Target platform: /opt/xilinx/platforms/xilinx_u280_xdma_201920_3/xilinx_u280_xdma_201920_3.xpfm
INFO: [v++ 60-1578]   This platform contains Xilinx Shell Archive '/opt/xilinx/platforms/xilinx_u280_xdma_201920_3/hw/xilinx_u280_xdma_201920_3.xsa'
INFO: [v++ 74-78] Compiler Version string: 2022.1
ERROR: [v++ 60-1299] The specified platform is not supported. Platform 'xilinx_u280_xdma_201920_3.xpfm' (version 2019.2) is not supported by the current tool version (2022.1). By policy, platforms are supported for the remainder of the calendar year release plus the following calendar year release
ERROR: [v++ 60-703] Failed to finish linking
INFO: [v++ 60-1653] Closing dispatch client.
Makefile:147: recipe for target 'build_dir.hw.xilinx_u280_xdma_201920_3/network.xclbin' failed
make: *** [build_dir.hw.xilinx_u280_xdma_201920_3/network.xclbin] Error 1

There seems to be a mismatch between the Vitis and DTP versions specified in the README (Vitis 2022.1, DTP xilinx_u280_xdma_201920_3).

Maybe the README needs an update, or is there a way that I can make the build work?

Build error - ERROR: [HLS 207-3776] use of undeclared identifier 'FNS_ROCE_STACK_MAX_QPS'

Trying to build the project, but getting errors.

Vitis 2022.1
Ubuntu 22.04
u50 platform - xilinx_u50_gen3x16_xdma_5_202210_1

Steps to reproduce -

  1. git clone https://github.com/fpgasystems/Vitis_with_100Gbps_TCP-IP.git
  2. cd Vitis_with_100Gbps_TCP-IP
  3. git submodule update --init --recursive
  4. git branch
    -> branch is vitis_2022_1
  5. mkdir build && cd build
  6. cmake .. -DFDEV_NAME=u50 -DTCP_STACK_EN=1
  7. make ip

Error:

ERROR: [HLS 207-3776] use of undeclared identifier 'FNS_ROCE_STACK_MAX_QPS' (/home/test/source/learn/xilinx/tcp-ip/Vitis_with_100Gbps_TCP-IP/fpga-network-stack/hls/arp_server_subnet/../fns_config.hpp:5:26)
INFO: [HLS 200-111] Finished Command csynth_design CPU user time: 7.22 seconds. CPU system time: 0.77 seconds. Elapsed time: 6.72 seconds; current allocated memory: -1030.672 MB.
 
    while executing
"source /home/test/source/learn/xilinx/tcp-ip/Vitis_with_100Gbps_TCP-IP/build/fpga-network-stack/hls/arp_server_subnet/arp_server_subnet_synthesis.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel \#0 [list source $arg] "

Design did not meet timing

[10:33:50] Starting bitstream generation..
[10:49:24] Run vpl: Step impl: Failed
[10:49:25] Run vpl: FINISHED. Run Status: impl ERROR

===>The following messages were generated while Compiling (bitstream) accelerator binary: network Log file: /home/ubuntu/Documents/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/link/vivado/vpl/prj/prj.runs/impl_1/runme.log :
ERROR: [VPL-4] design did not meet timing - Design did not meet timing. One or more unscalable system clocks did not meet their required target frequency. Please try specifying a clock frequency lower than 250 MHz using the '--kernel_frequency' switch for the next compilation. For all system clocks, this design is using 0 nanoseconds as the threshold worst negative slack (WNS) value. List of system clocks with timing failure:
system clock: txoutclk_out[0]; slack: -0.030 ns
ERROR: [VPL 60-773] In '/home/ubuntu/Documents/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/link/vivado/vpl/runme.log', caught Tcl error: problem implementing dynamic region, impl_1: route_design ERROR, please look at the run log file '/home/ubuntu/Documents/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
WARNING: [VPL 60-732] Link warning: No monitor points found for BD automation.
ERROR: [VPL 60-704] Integration error, problem implementing dynamic region, impl_1: route_design ERROR, please look at the run log file '/home/ubuntu/Documents/Vitis_with_100Gbps_TCP-IP/build_dir.hw.xilinx_u280_xdma_201920_3/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
ERROR: [VPL 60-1328] Vpl run 'vpl' failed
ERROR: [VPL 60-806] Failed to finish platform linker
INFO: [v++ 60-1442] [10:49:26] Run run_link: Step vpl: Failed
Time (s): cpu = 00:08:02 ; elapsed = 05:39:19 . Memory (MB): peak = 1341.246 ; gain = 0.000 ; free physical = 48585 ; free virtual = 62358
ERROR: [v++ 60-661] v++ link run 'run_link' failed
ERROR: [v++ 60-626] Kernel link failed to complete
ERROR: [v++ 60-703] Failed to finish linking
