casper-astro / mlib_devel

Home Page: http://casper-toolflow.readthedocs.io/en/latest/jasper_documentation.html

MATLAB 0.31% Objective-C 0.01% Shell 0.06% Verilog 4.65% VHDL 92.19% Tcl 0.27% Makefile 0.01% CartoCSS 0.01% SystemVerilog 0.04% Batchfile 0.01% Python 0.18% HTML 0.01% Filebench WML 0.01% C 0.82% Assembly 0.01% Perl 0.01% Ruby 0.01% V 1.47% Fortran 0.01% Pascal 0.01%

mlib_devel's People

Contributors

adami75, amartens, amishatishpatel, amitbansod, andrewvanderbyl, bjbford, cs150bf, david-macmahon, francoistolmie, gcallanan, gitj, griffinfoster, hkriel, ianmalcolm, jack-h, jkocz, jmanley, lvertats, mchirindo, mitchburnett, moragb96, paulprozesky, respectmyprivacy0, serfass, shlean, telegraphic, tschrager, wanxiangcheng, wjmallard, wnew


mlib_devel's Issues

ValueError: Verification of write to '' at offset 0 failed.

I followed the red_pitaya tutorial one up to the command "fpga.write_int('a',10)", at which point the problem appeared.

In [5]: fpga.write_int('a',10)
2024-05-09 15:07:38.21 ERROR 192.168.31.206 casperfpga.py:547 - Verification of write to a at offset 0 failed. Wrote 0x0000000a... but got back 0x00000001.

ValueError Traceback (most recent call last)
Cell In[5], line 1
----> 1 fpga.write_int('a',10)

File ~/work/cfpga_venv/lib/python3.8/site-packages/casperfpga/casperfpga.py:597, in CasperFpga.write_int(self, device_name, integer, blindwrite, word_offset)
595 self.blindwrite(device_name, data, word_offset * 4)
596 else:
--> 597 self.write(device_name, data, word_offset * 4)
598 self.logger.debug('Write_int %8x to register %s at word offset %d '
599 'okay%s.' % (integer, device_name, word_offset,
600 ' (blind)' if blindwrite else ''))

File ~/work/cfpga_venv/lib/python3.8/site-packages/casperfpga/casperfpga.py:548, in CasperFpga.write(self, device_name, data, offset)
543 err_str = 'Verification of write to %s at offset %d failed. '
544 'Wrote 0x%08x... but got back 0x%08x.' % (
545 device_name, offset,
546 unpacked_wrdata, unpacked_rddata)
547 self.logger.error(err_str)
--> 548 raise ValueError(err_str)

ValueError: Verification of write to a at offset 0 failed. Wrote 0x0000000a... but got back 0x00000001.

I use Red Pitaya OS 2.00-23 / STEMlab 125-14 LN v1.1.
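
As a quick sanity check (a minimal sketch, continuing the session above and assuming only the calls already visible in the traceback plus casperfpga's standard read_int() method), the automatic verification can be bypassed and the register read back by hand to confirm whether the fabric really returns 0x1:

# Sketch only: 'a' is the register from the tutorial design; blindwrite skips
# the read-back check that raises the ValueError above.
fpga.write_int('a', 10, blindwrite=True)
print(hex(fpga.read_int('a')))  # if this prints 0x1, the mismatch is real and not a verification artefact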

Shadow ethernet packets on ROACH2 10Ge CPU TX

When the CPU transmits a packet via a 10Ge interface on ROACH2, a packet of the same size but containing all 0's is transmitted directly after. This does not occur on the fabric interface.

Installation error of Xilinx USB Cable Driver

Original post: https://github.com/casper-astro/mlib_devel/blob/master/docs/src/How-to-install-Xilinx-ISE.md

Hi! I followed your guide to install the USB Cable Driver and installed the prerequisites, which are

sudo apt-get install gitk git-gui libusb-dev build-essential libc6-dev-i386 fxload

on a 64-bit machine.

Then I ran the command:

sudo make lib32

and I get the following error:
(screenshot of the linker error attached)

Since the linker ld says it cannot find the shared object, I downloaded the *.so file into the folder indicated by the error.
The shared object I downloaded was copied to this path:

/lib/x86_64-linux-gnu/libusb-1.0.so.4.4.4

Then I ran the command again

sudo make lib32

but I still get the same error.

RFDC block, DAC "Axis Clockign Invalid" locked error message

I think this is the right repository to place this issue, but it could need posting in the casperfpga repository.

When using the ZCU111 with rfdc GUI block, there are some methods of input that leave the "DAC Tiles >> Required AXI4-Stream Block (MHz)" field with an error that says "Axis Clockign Invalid". This is good because it appears the GUI is trying to make sure there is a valid clock rate for the target sample rate, but there are two issues:

  1. (minor) There is a typo in the error message: it should be "Clock[ing]". This was kind of confusing to figure out.

  2. (moderate) When the DAC tile is disabled so that only the ADC portion is enabled, the error message is greyed out (locked), and it is possible for the GUI field to lock in the "Clockign Invalid" message even when the ADC portion has a valid clock configuration. If this field is locked with the error message, then when the model is built with jasper it fails with a cryptic message about undefined aliases. On the surface this looks like a library-linking issue, but on a deeper dive you can see that the line "t228_axis_stream_clk" in the generated jasper.per file copies the error message from the GUI and confuses the jasper parser. In order to get the locked error message to disappear, I had to enable the DAC tile, twiddle the clock values with dummy values until the error disappeared, then disable the DAC tile again.

To fix this bug, I think the best approach would be for the jasper tools not to copy the fields from the DAC portion of the GUI if the tile is disabled (see the sketch below). This may get complicated depending on how the Matlab GUI is set up. Another option is to run the error checking for both the ADC and DAC whenever the other side is updated.
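
A minimal sketch of the first suggestion, with all names invented for illustration (this is not the actual jasper/Matlab code): a disabled tile should contribute no fields, so a locked "Clockign Invalid" string could never reach jasper.per.

# Illustrative Python sketch only -- the field names here are assumptions.
def fields_for_per_file(tile_fields, tile_enabled):
    """Return the GUI fields to copy into jasper.per; nothing for a disabled tile."""
    return dict(tile_fields) if tile_enabled else {}

print(fields_for_per_file({'axis_stream_clk': 'Axis Clockign Invalid'}, tile_enabled=False))  # -> {}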

Test models out of date

It looks like many of the test models in the "jasper_library/test_models" directory have not been updated in 5 years.

In particular, some of them refer to scripts / black boxes that I am not able to find elsewhere in the code. We either need to add this code (tagging @AdamI75 as the test models seemed to mostly come from SARAO) or remove those that no longer work.

FFT copy/paste problem

When an FFT block is copied and pasted, the parameters (and block) revert to default. This is painful as the block must then be re-configured which may take some time.

exec_flow partial compiles

exec_flow.py has flags which enable you to turn off parts of the compile. They [probably] work if you want to stop later stages of the compile from running, but they don't work at all if you want to (e.g.) regenerate an FPG file without rerunning --backend.
It would be nice if this weren't the case. One use case, which I just encountered, is a user who wants to intervene in a compile to tweak timing issues in Vivado and then merge the new bitstream into the fpg.

RFSoC: No recursive cloning in tutorial instructions

I was getting a failed jasper synthesis on the vanilla zcu111_tut_onehundred_gbe file. I went digging through the generated source code and, after some spelunking in my file system, realized the issue was a missing kutleng_skarab2_bsp_firmware directory. Looking in the github repo, I realized this was a submodule that was not in my local filesystem.

The current tutorial instructions do not mention the recursive cloning of submodules. I recommend adding this to the instructions for new users.

In summary,

Change:
"git clone https://github.com/casper-astro/mlib_devel.git"
To:
"git clone https://github.com/casper-astro/mlib_devel.git --recursive"

in the "Getting Started With RFSoC" page. May also add a note about this in the text.

Version checking?

Version conflicts have been a problem for a long time. It wouldn't be a solution per se, but it would probably help if people could quickly confirm exactly which version of the CASPER library they are using, as they can for the Xilinx or Matlab versions. I can't really think of a neat way to do this, though (putting the version number into the library title so people can see it directly from the library browser doesn't seem like a great idea to me).

I tried to make a function get_mlibVersion() that is supposed to work similarly to get_xlVersion(), but it's just a very rough attempt and it doesn't quite work, for several reasons:

  • I cannot figure out a good way to identify our existing libraries with specific numbers/strings, so I just pulled out the newest git commit log. That's not a good idea because:
    • people may not have git installed, and may have got the library by some other method instead
    • this command doesn't quite work for me in Matlab, as it keeps printing warnings about "this terminal is not fully functional"; it's only a warning, but still very undesirable, and I don't know how to fix it
    • pulling the newest git commit log does tell you pretty much what version the library is (the commit id) and its date (assuming no changes have been made to the library...), but it could be cleaner
    • (I'm running out of time now, but I would appreciate better ideas on how to do this...)

mlib_devel / xps_library / get_mlibVersion.m

function get_mlibVersion()
% Print the location and latest git commit of the loaded casper_library.

% Locate the casper_library model on the Matlab path.
mlib_path = which('casper_library');
if isempty(mlib_path)
    disp('casper_library not found, please load casper_library first.')
    return
end

% Strip the model filename to get the library directory.
substrind = strfind(mlib_path, '/casper_library.mdl');
mlib_path = mlib_path(1:substrind-1);
disp(['casper_library path: ', mlib_path]);

% Show the commit hash that the local master branch points to.
type(fullfile(mlib_path, '../.git/refs/heads/master'));

% Show the most recent commit log entry (requires git on the system path).
currentFolder = pwd;
cd(mlib_path);
!git log --max-count=1
cd(currentFolder);

end

SNAP Microblaze support is currently broken

On the m2021a branch, SNAP designs with the microblaze controller enabled do not get included.

In Vivado, trying to run

exec -ignorestderr updatemem -bit ./myproj.runs/impl_1/top.bit -meminfo ./myproj.runs/impl_1/top.mmi -data ../executable_core_info.mem  -proc snap_bd/microblaze_0 -out ./myproj.runs/impl_1/top.bit -force

returns

ERROR: [Updatemem 57-85] Invalid processor specification of: snap_bd/microblaze_0. The known processors are: snap_bd_inst/microblaze_0

It seems that a rename happened at some point.
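
(Presumably the -proc argument in the generated tcl needs to become snap_bd_inst/microblaze_0, the name updatemem reports as known, for the command above to succeed.)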

Compilation of designs with no AXI bram / software registers fail

Hi,
If I create a Simulink model containing a red_pitaya_dac block with no software_register or snapshot blocks, the compilation process fails with the following error:

INFO: Launching helper process for spawning children vivado processes
INFO: Helper process launched with PID 799

Starting RTL Elaboration : Time (s): cpu = 00:00:02 ; elapsed = 00:00:03 . Memory (MB): peak = 1831.582 ; gain = 153.715 ; free physical = 2945 ; free virtual = 7209

ERROR: [Synth 8-2772] type t_axi4lite_mmap_addr_arr does not match with a string literal [/home/ub/work/mlib_devel/jasper_library/test_models/test_red_pitaya_adc_dac/myproj/myproj.srcs/sources_1/imports/xml2vhdl_hdl_output/axi4lite_axi4lite_top_mmap_pkg.vhd:94]
ERROR: [Synth 8-2772] type t_axi4lite_mmap_addr_arr does not match with a string literal [/home/ub/work/mlib_devel/jasper_library/test_models/test_red_pitaya_adc_dac/myproj/myproj.srcs/sources_1/imports/xml2vhdl_hdl_output/axi4lite_axi4lite_top_mmap_pkg.vhd:95]
INFO: [Synth 8-2810] unit axi4lite_axi4lite_top_mmap_pkg ignored due to previous errors [/home/ub/work/mlib_devel/jasper_library/test_models/test_red_pitaya_adc_dac/myproj/myproj.srcs/sources_1/imports/xml2vhdl_hdl_output/axi4lite_axi4lite_top_mmap_pkg.vhd:67]

Finished RTL Elaboration : Time (s): cpu = 00:00:02 ; elapsed = 00:00:04 . Memory (MB): peak = 1886.301 ; gain = 208.434 ; free physical = 2970 ; free virtual = 7234

RTL Elaboration failed
INFO: [Common 17-83] Releasing license: Synthesis
7 Infos, 0 Warnings, 0 Critical Warnings and 3 Errors encountered.
synth_design failed
ERROR: [Common 17-69] Command failed: Synthesis failed - please see the console or run log file for details
INFO: [Common 17-206] Exiting Vivado at Fri Apr 2 20:58:24 2021...

The bug is easy to reproduce using the test_red_pitaya_adc_dac.slx model, which is part of the tutorial. The original model compiles with no errors, but if you delete the software_register blocks (and the other blocks connected to them, in the bottom-right part of the model) you get the error.

dec_fir block complex output issue

Hi, there is an issue with the output complex data from the dec_fir block.

The block works with complex samples: it takes in the real and imaginary components separately and outputs complex samples (concatenated real and imaginary). The issue is with the bit width of the imaginary part at the output, before concatenation. We normally set the parameters Output bit width (x) and Output binary pt (y), and the results are then converted (by casting) just before output. The real component is correctly cast to x_y, but the imaginary component is cast to x_(x-1) instead.
A screenshot of that part of the block design from Simulink is attached (dec_fir_output).

Even if this is corrected manually in a design, the bit width parameter of the cast block is reset to x_(x-1). This produces images in the negative part of the spectrum, because the imaginary component vanishes relative to the real part.

Thank you for attending to the problem.
Best regards.

Outstanding ROACH2 yellow blocks

Yellow blocks completed:

  • Shared BRAM (tested)
  • Software register (tested)
  • iADC (untested)
  • katADC (untested)
  • 10Ge_v2 SFP+ mezzanine (untested)
  • 10Ge_v2 CX4 mezzanine (tested)
  • 1Ge (tested)
  • QDR SRAM (untested)
  • ADC5g (untested?)

Yellow blocks under development

Outstanding yellow blocks:

  • XAUI

Please update the list as things are completed/assigned/tested and we can sign the issue off when all are done.

Guide to add new supported platform

Hi,

We are developing an RF receiver using a board based on XCZU28DR-2FFVG1517. I am wondering if there's a guide on how to add a platform in Casper and how to create a Casperized image file for a specific board.

Thank you!

make matlab return values more useful

When running jasper_frontend the toolflow helpfully tells you what the command is to finish a compile.

When running jasper, the toolflow still prints this command (which is useless at that point) and also something like ans=0 (or ans=1). It should print a clearer message indicating success or failure.

Default yaml loader issue

I ran into an issue when building a Casper project using the exec_flow.py script. I was able to resolve it, but I thought I'd bring this to your attention. Here is the issue:

/data2/development/mlib_devel/jasper_library/castro.py:32: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  c = yaml.load(fh)
Traceback (most recent call last):
  File "/data2/development/mlib_devel/jasper_library/exec_flow.py", line 222, in <module>
    backend.import_from_castro(backend.compile_dir + '/castro.yml')
  File "/data2/development/mlib_devel/jasper_library/toolflow.py", line 1089, in import_from_castro
    self.castro = castro.Castro.load(filename)
  File "/data2/development/mlib_devel/jasper_library/castro.py", line 32, in load
    c = yaml.load(fh)
  File "/data2/development/.virtualenv/casper_venv_dcain/lib/python3.6/site-packages/yaml/__init__.py", line 114, in load
    return loader.get_single_data()
  File "/data2/development/.virtualenv/casper_venv_dcain/lib/python3.6/site-packages/yaml/constructor.py", line 51, in get_single_data
    return self.construct_document(node)
  File "/data2/development/.virtualenv/casper_venv_dcain/lib/python3.6/site-packages/yaml/constructor.py", line 55, in construct_document
    data = self.construct_object(node)
  File "/data2/development/.virtualenv/casper_venv_dcain/lib/python3.6/site-packages/yaml/constructor.py", line 100, in construct_object
    data = constructor(self, node)
  File "/data2/development/.virtualenv/casper_venv_dcain/lib/python3.6/site-packages/yaml/constructor.py", line 429, in construct_undefined
    node.start_mark)
yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:castro.Castro'

This resolved it for me:

--- a/mlib_devel/jasper_library/castro.py-BAK
+++ b/mlib_devel/jasper_library/castro.py
@@ -29,7 +29,7 @@ class Castro(object):
         loads this class object from a yaml file and assert that it is of type Castro
         '''
         with open(filename, 'r') as fh:
-            c = yaml.load(fh)
+            c = yaml.load(fh, Loader=yaml.Loader)
             assert isinstance(c, Castro)
             return c
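
(For context: the default loader used here, like yaml.safe_load(), refuses to construct arbitrary python objects, which is why the 'tag:yaml.org,2002:python/object:castro.Castro' tag in castro.yml triggers the ConstructorError above; the full yaml.Loader used in the patch is what allows the Castro object to be rebuilt.)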

To Software register bit fields

On the To Processor software register, when using the bit fields functionality, the simulation outputs don't match the inputs.
(screenshot attached, 2018-07-25 16:51:43)

update_casper_blocks doesn't seem to be permanent

So, I have an m2019 model I'm trying to update to the m2021 toolflow.

I update my mlib_devel submodule, run ./startsg, open my model, run update_casper_blocks(bdroot) and let it do its thing (updating 48 blocks), save the Simulink model, and close Matlab.

When I open things up again and run the same update_casper_blocks, it still seems to think there are 48 blocks to update. I'm not sure what to make of this.

JAM indexing errors prevents compilation

As far as I (and the C compiler) can tell, the constant CORE_INFO points at a single byte in some but not all cases:

extern const uint8_t _core_info;
#define CORE_INFO (&_core_info)

So lines such as

state->ptr = (void *)(CORE_INFO - 2);

and

if(!CORE_INFO[0] || !CORE_INFO[1]) {

yield compiler errors that prevent the JAM software from building.
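
(Not proposed in the report, but one conventional fix for this class of error would be to declare the symbol as an incomplete array, e.g. extern const uint8_t _core_info[];, so the compiler no longer assumes CORE_INFO addresses a single byte.)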

pfb_fir_real no longer accepts parameters

Since updating to m2021a, setting any of the parameters of the pfb_fir_real block doesn't seem to actually do anything. I noticed this after a call to update_casper_blocks, when the block was no longer the proper size in my model.

setup.py error

A few things I needed to change before installing the casperfpga package:

  1. In requirements.txt I changed katcp 0.6.2 to katcp 0.6.3.
  2. In setup.py the following corrections are needed:
    2A) data_files = ['src/tengbe_mmap.txt', 'src/tengbe_mmap_legacy.txt']
    2B) include_package_data=True needs to be included inside the setup() call
    2C) package_data={ "": ["*.txt"], 'casperfpga': data_files},

AXI BRAM fixed latency

For a long time now, the shared_bram yellow block has had an option to embed pipelining registers, increasing the bram latency from 1 to either 2 or 3.
Seemingly, the AXI infrastructure always instantiates RAM with a latency of 1. This can cause designs to misbehave if they rely on the BRAM latency being as advertised. Further, these issues aren't picked up in simulation, since the Simulink simulation model does respect the user's latency choices.

RFDC needs to be on top-level of Simulink diagram

The RFDC mask uses the sysgen block to determine hardware-specific capabilities. The original RFDC mask searched for the sysgen block from the same level in the model as the RFDC yellow block. This fails if the RFDC is in a subsystem.

I "fixed" this with realtimeradio/mlib_devel@6c67275

However, it turns out that unless the RFDC is on the top-level of the design, the RFDC .dtsi file never gets created. It's not really clear to me how this file gets generated, and whether the top-level requirement is fundamental or can be relaxed with a simple script change (maybe name to fullname in some generation script somewhere?).
If the latter, it would be good to implement this.

Compatibility Matrix in Documentation

Today is Dec 29, 2022. Yesterday, I tried to install the casper tools into the software environment listed in the documentation:

https://github.com/casper-astro/mlib_devel/blob/m2021a/docs/src/Installing-the-Toolflow.md

That is, Ubuntu 20.04.5 and Vivado 2021.1 and Matlab 2021a.

I followed all of the "Note on Operating System" suggestions about libqt4, the gcc 6 workarounds, etc.

I could not solve the issue Craig Ramsey described by following his methods. In fact, Model Composer 2021.1 is not officially supported on Ubuntu 20.04, so it bothers me that 2021.1 is listed as the "stable" environment.

But, Model Composer 2021.2 is supported, and I was able to get everything working successfully using Ubuntu 20.04.5 and Vivado 2021.2 and Matlab 2021a.

I recommend changing the table to say Vivado 2021.2 instead of 2021.1. Maybe the more experienced folks in the community have figured out how to get Vivado 2021.1 working with the casper tools on Ubuntu 20.04, but as a newcomer this was a frustrating barrier to entry that I believe is a documentation error.

Shared BRAM performance

Shared BRAMs are instantiated in different ways:

  1. If a default 32-bits-in, 32-bits-out block is instantiated, an EDK dual-port RAM pcore is used. This pcore optimises performance by splitting the data word across multiple BRAM primitives (if necessary) so that no output mux is required. This core cannot have different-sized ports, though, so it cannot be used in the general case.
  2. In other cases, a Coregen dual-port RAM is used. This has three implementation options: one that optimises for 'area', one that optimises for 'power', and one that uses specified primitives. The problem is that the 'area' option instantiates an output mux, which significantly reduces the achievable performance if the required storage is larger than one BRAM, instead of splitting the word across multiple BRAMs as the EDK pcore above does.

To optimise performance we need a dual port RAM pcore that operates like the EDK dual port RAM pcore but allows different sized interfaces.

Address/size inconsistencies for microblaze

Since recent microblaze updates, it seems that addresses and offsets in the corresponding tcl files have not been correspondingly updated, rendering current microblaze designs inoperable.

Ethernet multicast support

It will be necessary (and desirable) to support multi-cast transmission of packets.

A patch by Jan Wagner supports this for the iBOB, we should look at adding this to the 10Ge, 1Ge and 10Ge_v2 blocks.

How to build a python virtual environment in Ubuntu 16.04LTS

Okay, so it turns out that creating the python virtual environment using "virtualenv -p python3 <name_of_env>" does not work. This is what you need to do in order to create a working virtual environment and get your designs to build. I tested it on a brand new virtual environment and it works. Thanks to Clifford van Wyk (Peralex), Morag Brown and Jack Hickish for their assistance. The docs will definitely need to be updated - we will add this to the agenda for the CASPER meeting.

The following needs to be done if you want to generate a proper virtual environment that will build in the toolflow - Kaj, I recommend the below way:

  1. Create the virtual environment: "python3 -m venv <name_of_env>". The old way of using virtualenv -p python3 just doesn't work. Thanks to Jack and Morag for pointing this out to me.
  2. Activate the environment: "source <name_of_env>/bin/activate"
  3. Go to the mlib_devel directory and edit the requirements.txt file. It should have "numpy<1.19" (thanks for your sleuth work, Jack!). Now save the file.
  4. Go to the mlib_devel directory and type exactly: "pip install -r requirements.txt". This will install without errors or issues. If you check the site-packages in the virtual environment you will see what I mean - all the python packages will be installed properly in your virtual environment.
  5. You will need to edit line 32 of castro.py (located in mlib_devel/jasper_library) so that it reads "c = yaml.load(fh, Loader=yaml.Loader)". Refer to yaml/pyyaml#266 for more info on this version change issue. I tested the build with my original working virtualenv and the newly generated virtualenv and it works for both. Don't worry, I will be committing this change once I have done a bit more investigation - maybe pinning versions in requirements.txt is not a bad idea for future support.

@jkocz and @moragb96 I have submitted a pull request to casper-astro/mlib_devel (casper-astro-soak-test) that includes the fixes for 3) and 5). You shouldn't have to do anything once merged. Maybe just check to be safe! This will all eventually end up in the ReadtheDocs documentation. It is here as a place holder and to keep track.

FFT mask error -- shift schedule

I am using JASPER commit 6e4b9be

I found an issue when using the FFT block (a complex FFT) not the fft_wideband_real block.
The parameter shift_schedule is left empty in the Simulink mask instead of being initialized to "[]" as in fft_wideband_real. This has an effect even when the "Hardcode Shift Schedule" tick box is not selected, as you can see in the screenshot "FFT_shift_schedule.png". You can see the difference in the field value on the Simulink mask in the screenshot "Real_Imag_FFT.png".

The result is a Matlab error such as:

Error using design_info.sblock_to_info (line 23)
Block test6_test/dbf01/dbf_buttler: parameter has an empty value?

Error in design_info.write_info_table (line 16)
infoblks = design_info.sblock_to_info(blks{b});

Error in gen_xps_add_design_info (line 63)
design_info.write_info_table(info_table_path, sysname, tagged_objects)

Error in jasper_frontend (line 41)
gen_xps_add_design_info(sys, mssge, '/');

(screenshots FFT_shift_schedule.png and Real_Imag_FFT.png attached)

SNAP TX_DISABLE wrong polarity

On SNAP, the FPGA TX_DISABLE pins do not directly drive the SFP TX_DISABLE inputs. Rather, they drive the gate of an N-channel MOSFET.

As a result, the polarity of the TX_DISABLE FPGA pin needs inverting in firmware.

No platform names in gpio blocks

GPIO block pin names are specified as (e.g.):

roach2:led

The JASPER toolflow only uses the identifier after the colon - it throws away the platform (see the sketch below). The GPIO block should be modified so that it doesn't include these platform descriptors at all, and the toolflow should then not perform the split(':').

Bonus points if the toolflow's YellowBlock.requires / Platform.provides infrastructure can be used to do DRC for allowed GPIO blocks.
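
For reference, a minimal sketch of the splitting behaviour described above (assumed from the description, not lifted from the toolflow source):

# The toolflow reportedly keeps only the part after the colon, so the
# platform prefix carries no information.
pin_name = 'roach2:led'
signal = pin_name.split(':')[-1]
print(signal)  # -> 'led'; 'roach2' is discarded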

AXI write interface liable to fail if user_clk < axi_clk

The clock-domain-crossing double-registers used to connect AXI registers to the user's Simulink design are liable to miss write events if the AXI clock is faster than the Simulink clock (user_clk), since writable registers from the AXI interconnect subsystem only hold the IP_BUS_VALID signal high for one cycle.

Should there be some more sophisticated handshaking (as in the WB software registers)?

ROACH2 revision 1 pin mismatch in BEE2_hw_routes.mat

Revision 1 of the ROACH2 has 16 GPIO pins (from 12 on revision 0). It does not have two GPIO banks like ROACH. It has no oe (output enable) pins for the GPIO. The first 8 GPIO pins are connected to the 8 LEDs on the board as well as to a header.

The toolflow at the moment gives the option of ROACH2 having 2 GPIO ports of 12 pins with an oe pin for each. It also provides for 8 LEDs which map to the same pins as the first 8 GPIO pins. This causes various compile problems.

There is only one aux clock and a sync_in and sync_out.

There is no diff_gpio for ROACH2.

onegbe merge messed up

The merge-staging-2019 branch version of onegbe.py is messed up, with multiple definitions of the onegbe_vcu118 class. FIX!

Issue and/or Feature Request: Automatic updating of IP

In the Jasper tool flow, in toolflow.py, when the .tcl script is being generated, there is a boolean field "upgrade_ip" from the incoming yaml that puts a line in the .tcl script to update the Xilinx IP. It appears the yaml always sets "upgrade_ip" to "True" (or is there a way to set this field to "False" that I am missing?).

I stumbled on this while having an issue using the rfdc block in Vivado 2021.2, where the Vivado tools crash because they upgrade the jasper-generated version 2.0 rfdc to the 2.6 version of the 2021.2 tools. Vivado then can't find version 2.6 in the sysgen IP directory and crashes.

It may cause more problems to disable IP upgrades outright, but it would be nice if the platform blocks' pull-down menu where you select a synthesis tool had a little checkbox allowing the user to choose whether to update the IP during the jasper compilation. This could help make the tools more robust to Xilinx software versions.
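
A minimal sketch of the requested behaviour (the helper name and option are assumptions, not the existing toolflow API): only emit the IP-upgrade tcl when the user opts in.

# 'upgrade_ip [get_ips *]' is a standard Vivado tcl command, but whether
# toolflow.py emits exactly this line is an assumption.
def ip_upgrade_tcl_lines(upgrade_ip):
    return ['upgrade_ip [get_ips *]'] if upgrade_ip else []

print(ip_upgrade_tcl_lines(False))  # -> [] when the checkbox is left unticked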

RFSoC 2x2 Backend compile issue - missing tile228?

Hi, I'm trying to set up a toolflow for working with an RFSoC 2x2 board (I'm extremely new to FPGA programming). I understand the general flow of operations. I tried the first tutorial, and I could build and load the FPGA bitstream, and everything worked fine. Now I'm on tutorial two and having issues during backend compilation, specifically while generating the RFDC yellow block.
Posting the error message here.

grabbing data from yaml, this is a gen 1 part
Traceback (most recent call last):
  File "/home/harshad/Tools/rfsoc/mlib_devel/jasper_library/exec_flow.py", line 209, in <module>
    tf.gen_periph_objs()
  File "/home/harshad/Tools/rfsoc/mlib_devel/jasper_library/toolflow.py", line 415, in gen_periph_objs
    self.periph_objs.append(yellow_block.YellowBlock.make_block(
  File "/home/harshad/Tools/rfsoc/mlib_devel/jasper_library/yellow_blocks/yellow_block.py", line 65, in make_block
    return cls(blk,platform,hdl_root=hdl_root)
  File "/home/harshad/Tools/rfsoc/mlib_devel/jasper_library/yellow_blocks/yellow_block.py", line 152, in __init__
    self.initialize()
  File "/home/harshad/Tools/rfsoc/mlib_devel/jasper_library/yellow_blocks/rfdc.py", line 273, in initialize
    t.has_clk_src = self.rfdc_conf['tile{:d}'.format(tidx)]['has_dac_clk']
KeyError: 'tile228'

I checked the .yaml file for the RFSoC 2x2 and noticed that the config for tiles 228-231 is missing, while rfdc.py is trying to build objects for the DAC tiles. I'm not sure how it is supposed to behave in this scenario.
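
A minimal sketch (not the actual rfdc.py code) of a defensive lookup that would tolerate platform yamls with the DAC tile entries absent, assuming rfdc_conf is the dictionary parsed from the yaml:

# Stand-in for the parsed platform yaml: only an ADC tile present; the DAC
# tiles 228-231 are missing, as observed for the RFSoC 2x2.
rfdc_conf = {'tile224': {'has_adc_clk': True}}

for tidx in range(228, 232):
    tile_conf = rfdc_conf.get('tile{:d}'.format(tidx))
    has_clk_src = tile_conf['has_dac_clk'] if tile_conf else False  # treat a missing tile as having no clock source
    print(tidx, has_clk_src)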

I did try the provided tutorial model (rfsoc2x2_tut_rfdc.slx); I had to reset the clk values to get rid of incompatible clock rate errors. Please let me know if you need more information.

Any help would be appreciated!

RFDC block stops working if .slx file is renamed

If I rename the .slx file or copy it to a new file to make changes, the tutorial rfsoc4x2_tut_rfdc_real.slx still builds without errors, but the ADCs only return -1.

I went into the BLOCK toolbar and clicked Expand on the rfdc block. The gateways that are revealed include the name of the original file, not the renamed file.

A repository installation issue with ROACH2

Hi, how is it going? I am trying the ROACH2 tutorial. I first downloaded mlib_devel v0.9 for ROACH2 from Github, and then downloaded the latest version of Tutorial_devel. But, having looked at the README, I don't know how to put them together to complete the ROACH2 tutorial. ISE 14.7 and Matlab 2013b are now installed. I would like to ask which version of Tutorial_devel should be downloaded and how to use it with mlib_devel v0.9.

I would really appreciate a reply!

Regression test framework in casper_library/Tests folder out of date

A regression testing framework was started. It is very simple - a simulation model with simulation inputs can be run and the output compared against reference output stored in a file. There is some infrastructure to allow the selective running of the tests.

Issues

  • the models are out of date and probably contain stale blocks.
  • currently, the models and reference output data need to be maintained and updated manually.
  • the comparison is very simple, and no allowance is made for slightly differing delays etc.
