RIFFA (Reusable Integration Framework for FPGA Accelerators) is a framework developed at the University of California, San Diego. This project utilises RIFFA to define an interface between a user's IP core on the FPGA and the PC, so that data can be sent and received in both directions. This particular project is being developed at Imperial College London.
For some reason XPS decides that the BRAM_Din signals have no load and therefore does not route them. I am trying to feed the RUNTIME and OUTPUT_CYCLE signals into test_core.vhd. Everything works fine in ModelSim simulation, but XPS does not synthesise these signals. I really don't know how to fix this bug!
If you open the system.xmp file with XPS from either the ml605 or ml505 directory, you will most likely get an error saying that some of the netlist files are not available. The fix is the following:
Before opening the XPS project, open the system.xmp file with a text editor and find the following line:
Installation of the RIFFA software drivers and any issues you had with them.
Installation of the FPGA USB programming driver (which one? any issues?)
Necessary changes to the XPS project settings and the .mhs and .ucf files
Configuration of the XUP board for using Compact Flash (DIP switches)
Toolflow to create the CF .ace and .sys files (possibly with screenshots, include pdf guide I sent you)
Describe the process necessary for the system to recognize the PCIe connection and for RIFFA to work (reboot). Also, make sure that you include all the shell commands used to confirm that everything is OK (e.g. dmesg).
Again, in simulation things work perfectly fine. But when I try to do a DMA transfer after the RAM gets full and then go back to processing, I receive an error on the software side. I have no idea what is going on there.
At this point the application can be used, even though it's a bit buggy.
Write a wrapper which will sit between the hardware core and the RIFFA modules. This should include double buffers for sending and receiving data, together with the necessary control logic (state machine, etc.) to communicate with the basic software functions.
You also need to be able to accommodate multiple consecutive DMA transfers. This will possibly require a software loop and some kind of hardware signal to notify the software that the loop needs to break (e.g. by sending a special data pattern).
Also, maybe try using more than one PCIe lane (this hasn't been tested by Matt).
I try to read some values from the BRAM as input to my core. For simulation I have created a file called bram.vhd which is supposed to model a BRAM, but apparently the core that Xilinx generates addresses in some alien addressing mode. Basically I try to send the following data:
1024
2
1
1
According to the simulation they go to inputs 0, 1, 2 and 3 of a test_core that I implemented. However, for the inputs to be aligned correctly the data has to be sent in the order: