
snitch's Introduction


DEPRECATED

This repository has been deprecated. However, development on Snitch-related projects continues in the following new dedicated repositories:

Snitch System

This monolithic repository hosts software and hardware for the Snitch generator and generated systems.

Getting Started

To get started, check out the getting started guide.

Content

What can you expect to find in this repository?

  • The Snitch integer core. This can be useful stand-alone if you are just interested in re-using the core for your project, e.g., as a tiny control core or if you want to make a peripheral smart. The sky is the limit.
  • The Snitch cluster. A highly configurable cluster containing one to many integer cores with optional floating-point capabilities as well as our custom ISA extensions Xssr, Xfrep, and Xdma.
  • Any other system that is based on Snitch compute elements. Right now, we do not have any open-sourced yet, but rest assured that this is going to change.

Tool Requirements

  • verilator = v4.100
  • bender >= v0.21.0

License

Snitch is being made available under permissive open source licenses.

The following files are released under the Apache License 2.0 (Apache-2.0), see LICENSE:

  • sw/
  • util/

The following files are released under the Solderpad Hardware License v0.51 (SHL-0.51), see hw/LICENSE:

  • hw/

The sw/vendor directory contains third-party sources that come with their own licenses. See the respective folder for the licenses used.

  • sw/vendor/

Publications

If you use Snitch in your work, you can cite us:

Snitch: A tiny Pseudo Dual-Issue Processor for Area and Energy Efficient Execution of Floating-Point Intensive Workloads

@article{zaruba2020snitch,
  title={Snitch: A tiny Pseudo Dual-Issue Processor for Area and Energy Efficient Execution of Floating-Point Intensive Workloads},
  author={Zaruba, Florian and Schuiki, Fabian and Hoefler, Torsten and Benini, Luca},
  journal={IEEE Transactions on Computers},
  year={2020},
  publisher={IEEE}
}

Stream semantic registers: A lightweight risc-v isa extension achieving full compute utilization in single-issue cores

@article{schuiki2020stream,
  title={Stream semantic registers: A lightweight risc-v isa extension achieving full compute utilization in single-issue cores},
  author={Schuiki, Fabian and Zaruba, Florian and Hoefler, Torsten and Benini, Luca},
  journal={IEEE Transactions on Computers},
  volume={70},
  number={2},
  pages={212--227},
  year={2020},
  publisher={IEEE}
}


Other work that can be found in, or has been contributed to, this repository:

Banshee: A Fast LLVM-Based RISC-V Binary Translator

@INPROCEEDINGS{9643546,
  author={Riedel, Samuel and Schuiki, Fabian and Scheffler, Paul and Zaruba, Florian and Benini, Luca},
  booktitle={2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD)}, 
  title={Banshee: A Fast LLVM-Based RISC-V Binary Translator}, 
  year={2021},
  volume={},
  number={},
  pages={1-9},
  doi={10.1109/ICCAD51958.2021.9643546}
}

Manticore: A 4096-Core RISC-V Chiplet Architecture for Ultraefficient Floating-Point Computing

@ARTICLE{9296802,
  author={Zaruba, Florian and Schuiki, Fabian and Benini, Luca},
  journal={IEEE Micro}, 
  title={Manticore: A 4096-Core RISC-V Chiplet Architecture for Ultraefficient Floating-Point Computing}, 
  year={2021},
  volume={41},
  number={2},
  pages={36-42},
  doi={10.1109/MM.2020.3045564}
}

Indirection Stream Semantic Register Architecture for Efficient Sparse-Dense Linear Algebra

@INPROCEEDINGS{9474230,
  author={Scheffler, Paul and Zaruba, Florian and Schuiki, Fabian and Hoefler, Torsten and Benini, Luca},
  booktitle={2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)}, 
  title={Indirection Stream Semantic Register Architecture for Efficient Sparse-Dense Linear Algebra}, 
  year={2021},
  volume={},
  number={},
  pages={1787-1792},
  doi={10.23919/DATE51398.2021.9474230}
}

snitch's People

Contributors

and-ivanov, colluca, cyrilkoe, fabianschuiki, fischeti, flaviens, giannap, huettern, jossevandelm, lucabertaccini, micprog, niwis, paulsc96, samuelriedel, stmach, suehtamacv, thommythomaso, viv-eth, xeratec, zarubaf


snitch's Issues

Banshee: Atomic support on main memory

As @fischeti pointed out, as opposed to the RTL, Banshee currently does not support atomics on its main memory. It would be very neat to align the two - also in light of the overall synchronization discussion.

Read-modify-write (RMW) module

In order to protect an SRAM using ECC, only full-word accesses can be made, as the parity word is assembled over the entire word. It is still beneficial to allow byte-wise accesses, though.

The idea is to create a module that contains the following interface on the in-side (minus atop):

.mem_req_o(spm_req),
.mem_gnt_i(spm_req), // always granted - it's an SPM.
.mem_addr_o(spm_addr),
.mem_wdata_o(spm_wdata),
.mem_strb_o(spm_strb),
.mem_atop_o(),
.mem_we_o(spm_we),
.mem_rvalid_i(spm_rvalid),
.mem_rdata_i(spm_rdata)

If the module detects a write access without the full write mask set, a state machine should perform a read access, followed by a partial merge (dependent on the write mask) and a subsequent write, as sketched below. The output interface of the module looks similar to the input interface minus the write mask (which is implicitly set to '1).
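
For illustration, here is a minimal behavioral C model of the intended merge step (a sketch: the helper name and the 64-bit word width are assumptions, and the real module would implement this in the state machine in hardware):

    #include <stdint.h>

    // Behavioral sketch of the read-modify-write merge: for every byte lane whose
    // strobe bit is set, take the new write data; otherwise keep the byte read back
    // from the SPM. The merged word is then written back with the full write mask.
    static uint64_t rmw_merge(uint64_t spm_rdata, uint64_t wdata, uint8_t strb) {
        uint64_t merged = 0;
        for (int lane = 0; lane < 8; lane++) {
            uint64_t lane_mask = 0xFFull << (8 * lane);
            merged |= ((strb >> lane) & 1) ? (wdata & lane_mask) : (spm_rdata & lane_mask);
        }
        return merged;
    }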

Banshee SDMA not working with non-zero tcdm start

When memory.tcdm.start is set to an address different from 0x0, the DMA no longer performs the proper loads/stores.

This may be due to a wrong address used in binary_store/binary_load or inconsistencies between the tcdm and memory data structures used to hold memory.

SSR writes '-inf' at the end

Description

Suppose we have an array x of floats (all elements set to 0.0) and want to fill x up to index m with 1.0.

    snrt_ssr_loop_1d(SNRT_SSR_DM0, m, sizeof(float));
    snrt_ssr_repeat(SNRT_SSR_DM0, 1);
    snrt_ssr_write(SNRT_SSR_DM0, SNRT_SSR_1D, x);

    snrt_ssr_enable();
    for (uint32_t i = 0; i < m; i++) {
        asm volatile(
            "addi x5, zero, 0\n"    // x5 <-- 0
            "fcvt.s.w ft1, x5\n"    // ft1 <-- float(x5)
            "fmv.s ft0, ft1\n"      // ft0 <-- ft1 (write to x[i])
            : 
            :
            : "ft0", "ft1", "x5"
        );
    }
    snrt_ssr_disable();

After this x[m] will contain the value -inf.

Here is a working example.

SPI Host: FSM Bug Read Enable

The OpenTitan SPI host FSM contains a bug: it enables the rd_en signal in the same cycle in which a command with the cmd_rd_en field set is acknowledged. However, the rd_en signal tells the shift register to output the last sampled byte (i.e. the last byte of the preceding command).

This patch fixes this misbehaviour (just like OpenTitan's fix):

diff --git a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
index 4fb1360d..53914918 100644
--- a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
+++ b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
@@ -482,7 +482,7 @@ module spi_host_fsm
 
   assign wr_en_internal    = byte_starting & cmd_wr_en;
   assign shift_en_internal = bit_shifting;
-  assign rd_en_internal    = byte_ending & cmd_rd_en;
+  assign rd_en_internal    = byte_ending & cmd_rd_en_q;
   assign speed_o           = cmd_speed;
   assign sample_en_d       = byte_starting | shift_en_o;
   assign full_cyc_o        = full_cyc;

SPI Host: FSM Bug CSAAT

The OpenTitan SPI host FSM contains a bug: it uses the command_i.segment.csaat signal instead of the latched csaat_q from the command currently being executed to decide how the chip select will be handled.

See OpenTitan's fix.

Patch to rectify the problem:

diff --git a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
index 4fb1360d..e50d0e7e 100644
--- a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
+++ b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
@@ -250,7 +250,7 @@ module spi_host_fsm
           // and of CSAAT is asserted, the details of the subsequent command.
           if (!last_bit || !last_byte) begin
             prestall_st_d = InternalClkLow;
-          end else if (!command_i.segment.csaat) begin
+          end else if (!csaat_q) begin
             prestall_st_d = WaitTrail;
           end else if (!command_valid_i) begin
             prestall_st_d = IdleCSBActive;

Multicluster DMA transfer deadlock

Simultaneously copying or sharing data across clusters in a multicluster system (i.e. Occamy) can cause deadlocks. The DMA of each cluster issues a read request to another cluster's TCDM and a corresponding write request to its own TCDM. Hence, each cluster has an incoming read request and a local write request that it needs to arbitrate, which is handled in the axi_to_mem module.

always_comb begin
  meta_sel_d = meta_sel_q;
  sel_lock_d = sel_lock_q;
  if (sel_lock_q) begin
    meta_sel_d = meta_sel_q;
    if (arb_valid && arb_ready) begin
      sel_lock_d = 1'b0;
    end
  end else begin
    if (wr_valid ^ rd_valid) begin
      // If either write or read is valid but not both, select the valid one.
      meta_sel_d = wr_valid;
    end else if (wr_valid && rd_valid) begin
      // If both write and read are valid, decide according to QoS then burst properties.
      // Prioritize higher QoS.
      if (wr_meta.qos > rd_meta.qos) begin
        meta_sel_d = 1'b1;
      end else if (rd_meta.qos > wr_meta.qos) begin
        meta_sel_d = 1'b0;
      // Decide requests with identical QoS.
      end else if (wr_meta.qos == rd_meta.qos) begin
        // 1. Prioritize individual writes over read bursts.
        // Rationale: Read bursts can be interleaved on AXI but write bursts cannot.
        if (wr_meta.last && !rd_meta.last) begin
          meta_sel_d = 1'b1;
        // 2. Prioritize ongoing burst.
        // Rationale: Stalled bursts create back-pressure or require costly buffers.
        end else if (w_cnt_q > '0) begin
          meta_sel_d = 1'b1;
        end else if (r_cnt_q > '0) begin
          meta_sel_d = 1'b0;
        // 3. Otherwise arbitrate round robin to prevent starvation.
        end else begin
          meta_sel_d = ~meta_sel_q;
        end
      end
    end
    // Lock arbitration if valid but not yet ready.
    if (arb_valid && !arb_ready) begin
      sel_lock_d = 1'b1;
    end
  end
end

The module currently arbitrates in a round-robin fashion (among other prioritizations), which might cause multiple clusters to simultaneously service the incoming read request and stall the local write request. This situation causes a deadlock, since the read request depends on the write request being serviced.

A current workaround for this issue is to naively prioritize write over read requests, but this only fixes this particular problem and might cause different deadlocks.

Banshee casting

It would be very useful if the cast instructions were implemented in Banshee. These are the fcvt instructions.
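
For context, these are the conversions that plain C casts between integer and floating-point values lower to on RISC-V:

    // An int-to-float cast lowers to fcvt.s.w, a float-to-int cast to fcvt.w.s.
    float int_to_float(int i) { return (float)i; }
    int float_to_int(float f) { return (int)f; }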

Snitch to Wishbone Bridge

Hello,

I have recently tried to create a Snitch-to-Wishbone bridge. Currently, I have a problem with the data channel. If I understand correctly, the data channel has an enum or struct type (see the attached screenshot of the port declarations) and differs from the instruction channel.

[screenshots attached in the original issue]

But I can't find the definition of the reqrsp_rsp_t and reqrsp_req_t enum/struct, and as a result I can't connect the signals appropriately.

[screenshot attached in the original issue]

I saw the example of a connection to an AXI bus in the Snitch cluster, but it does not help me much.
Could you please help me find this definition? Or perhaps share an example of a connection to other buses?

Best regards,
Illia

Repeated SSR configuration [enhancement]

While developing applications for Snitch, we've come up with a potential improvement:

When the developer configures only one SSR, they do not know whether the other SSR registers still hold a stale configuration and will also behave as SSRs when snrt_ssr_enable() is called.

We therefore propose a function like snrt_ssr_clear() which resets all SSR registers (so that they behave like normal registers again); the developer then knows that only the registers configured afterwards will be enabled when calling snrt_ssr_enable(), and no others.
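
A minimal sketch of what such a function could look like, reusing the existing configuration calls and assuming the three data movers SNRT_SSR_DM0 through SNRT_SSR_DM2 (the function itself and the chosen reset values are hypothetical, not part of the current snRuntime):

    #include "snrt.h"

    // Hypothetical snrt_ssr_clear(): put every SSR data mover into a known,
    // zero-length, zero-stride, no-repeat configuration so that only the SSRs
    // configured afterwards stream data once snrt_ssr_enable() is called.
    static inline void snrt_ssr_clear(void) {
        for (int dm = SNRT_SSR_DM0; dm <= SNRT_SSR_DM2; dm++) {
            snrt_ssr_loop_1d(dm, 0, 0);
            snrt_ssr_repeat(dm, 1);
        }
    }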

AXI Isolate: Add feature to return error instead of stall

axi_isolate stalls when accessed with isolate asserted, potentially violating the AXI protocol and (intentionally) stalling cores that try to access it. This can become a hazard when the manager core's AXI requests are forwarded to an isolated quadrant.

Add an error slave around AXI isolate, which can be configured to return AXI-compliant error codes instead.

occamy: FPGA Linux boot hangs on init process

#207 seems to have introduced a bug that causes the init process to get stuck during Linux boot on Occamy's FPGA build. Since Linux accesses main memory over the wide crossbar network, the issue could also possibly be related to the routing issue addressed by #250.

Originally posted by @niwis in #207 (comment):

The good news: the UART seems to be working! 🎉
The bad news: Linux does not boot anymore. This is the last output:

[   37.982674] Freeing unused kernel memory: 7928K
[   37.993771] This architecture does not have kernel memory protection.
[   38.009054] Run /init as init process
[   48.006306] random: dd: uninitialized urandom read (512 bytes read)
[  261.893203] random: crng init done

After that, Linux eventually loops in kernel space. To me, it looks like busybox is for some reason not able to access the console. This can be a SW issue (the Linux image is unchanged, though), or it could be related to some of the HW changes. The ILA suggests that busybox init is correctly called and executed, but the control path quickly becomes untraceable.

Regarding next debugging steps: follow this busybox FAQ: https://busybox.net/FAQ.html#init
If hello world does not print, we should be able to pinpoint the issue using gdb / the ILA.

Compiler support for ISSR

I have noticed that ISSR instructions have been added to the banshee simulator. However, the tests now need compiler support (the scfgwi instruction is not supported). Is there a link to the new compiler?

How to solve this issue: multiple definition of `_start'

Hi, when I use Docker and follow the Quick Start to run the platform, at the 'Build the software' step with the make command I get errors like the ones below:
/tools/riscv/bin/../lib/gcc/riscv64-unknown-elf/8.3.0/../../../../riscv64-unknown-elf/bin/ld: ../snRuntime/libsnRuntime-cluster.a(start_snitch.S.o): in function `snrt.crt0.init_global_pointer': (.init+0x0): multiple definition of `_start'; /tools/riscv/bin/../lib/gcc/riscv64-unknown-elf/8.3.0/../../../../riscv64-unknown-elf/lib/crt0.o:(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
benchmark/CMakeFiles/benchmark-matmul-ssr.dir/build.make:106: recipe for target 'benchmark/benchmark-matmul-ssr' failed
make[2]: *** [benchmark/benchmark-matmul-ssr] Error 1
CMakeFiles/Makefile2:251: recipe for target 'benchmark/CMakeFiles/benchmark-matmul-ssr.dir/all' failed
make[1]: *** [benchmark/CMakeFiles/benchmark-matmul-ssr.dir/all] Error 2
Makefile:113: recipe for target 'all' failed
make: *** [all] Error 2

I tried adding -nostartfiles to the GCC options, but the issue is still unresolved.

My question is: how do I solve this "multiple definition of `_start'" issue?

Thanks!

Fix banshee version tags

The tags for distinct versions of banshee point to commits outside the master branch history, probably due to some rebasing that has taken place at some point. Update the banshee-v* tags to point to the corresponding commit that is now part of the master branch history.

Submodule vendorized `pulp_platform` dependencies

vendor.py is helpful in maintaining local, modifiable copies of repos from other organizations we have no control over. However, it proves to be extremely counterproductive for our own repositories:

  1. Manually creating and maintaining patch files for contributions is cumbersome, time-consuming, and error-prone.
  2. Since automatic patch application almost never succeeds, updating to a new base commit while maintaining patches requires the contributor to suffer through 1. for each existing patch on each new base commit, a self-aggravating problem.
  3. The time-consuming nature of pulling in and contributing back changes leads to this being deferred for long times (we barely ever had the time to do either) which allows serious bugs to fester in our own repos.
  4. Vendoring stores foreign code in-tree, produces massive diffs, and requires CI validation, which bloats our tree and wastes CI time.

Compare this to simply creating a submodule for each pulp_platform dependency with a tag and mergeable feature branch, and you will see that all of these issues are already solved optimally:

  1. git commit
  2. git rebase
  3. Creating an upstream pull request for your feature branch / updating the local submodule commit.
  4. No foreign code or patches are stored and only the submodule commit hashes change.

IMHO, the only excuse to use vendor.py in the first place is that it provides robustness against breaking upstream changes (branch rebase or removal, repo move or removal) and allows for the seamless transition to local maintenance in such cases. However, none of this should happen in our own pulp_platform repos, especially not if we use clearly-designated tags.

Thus, I propose to migrate all dependencies on our own repos to submodule dependencies. The reasons I propose submodules rather than Bender repo dependencies are that:

  • They are well-established, easily understood, and robust (they do not allow silly choices like tracking named references).
  • They are very similar in practical use to vendorized repos, except with the benefits above.
  • We can (and do) use non-RTL resources (software, scripts, templates) from other repos.
  • We can use all other features of Bender with submodules using path dependencies as we do now.

The only drawback to submodules is that they need to be fetched on checkout and updated. However, this is standard procedure in most Git repositories, and if the alternatives are tracking repos using Bender or vendor.py, both niche tools with the issues discussed above, the choice is clear to me.

Of course, any inputs on this are very welcome.

banshee/RTL wakeup register discrepancy

In RTL, the wakeup register is per-cluster and cannot wake harts from other clusters (introduced in #111). Furthermore, the hartid always begins at 0.

Banshee implements a global wakeup register where all cores in the system can be woken up. It further considers the base-hartid when waking up cores as added in #58.

This hinders a "unified" runtime.

I propose to change the Banshee behaviour to mimic the RTL implementation. @SamuelRiedel @fischeti are there applications that depend on being able to wake cores from other clusters in Banshee? Is the MemPool implementation different from the current Snitch implementation?

For inter-cluster communication we can use the CLINT that is implemented in banshee & RTL.

Tool Version Management

Create a simple way of managing the different tool versions required for developing on this system. For example, bender 0.23.0 is needed. That requirement should probably be written down somewhere in the documentation, and the necessary checks should be in place.

axi: Outdated dw_converter

There is a bug in the current version of the data width converter that leads to requests being lost. This prevented me from accessing the SoC control registers with Snitch. This bug is fixed in the upstream AXI repo, but Snitch currently follows the DMA branch.

The solution would be to pull in the fixed dw_converter, but I am unsure how you want to handle this. Should we create a patch that only updates the dw_converter, or do you want to update the complete AXI repo?

SPI Host: FSM Bug CSID Switch

The OpenTitan SPI host FSM contains a bug: it jumps to the WaitIdle state after receiving a command with a different CSID than the previous command. It therefore basically always skips the first command after a CSID change, which this patch rectifies (OpenTitan fixes it the same way).

diff --git a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
index 4fb1360d..babb6138 100644
--- a/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
+++ b/hw/vendor/lowrisc_opentitan/spi_host/rtl/spi_host_fsm.sv
@@ -290,7 +290,7 @@ module spi_host_fsm
           if (idle_cntr_q == 4'h0) begin
             prestall_st_d = WaitLead;
           end else begin
-            prestall_st_d = WaitIdle;
+            prestall_st_d = CSBSwitch;
           end
         end
         IdleCSBActive: begin

Banshee Regression?

@SamuelRiedel @paulsc96 CI seems to have had trouble compiling Banshee for a couple of hours - maybe a regression with a new rustc release? Could one of you please look into this when you have some time at hand (🙈)?

/usr/bin/env: 'python3\r': No such file or directory

ls: cannot access 'trace_hart_*.dasm': No such file or directory
/repo/hw/system/snitch_cluster/../../../util/clustergen.py -c cfg/cluster.default.hjson -o .
/usr/bin/env: 'python3\r': No such file or directory

Hi, I have downloaded the Docker container and tried to run the first make command in the Getting Started tutorial, and I am getting these errors, which result in the bin folder not being created. Is there something I'm doing wrong?

Thanks in advance

Incorrect decoding of `scfgri` causing deadlock

The following sequence of instructions generates a deadlock in the Snitch core:

scfgwi <whatever register>, <whatever immediate>
scfgri <whatever register>, <whatever immediate>

What happens is:

  • Cycle 0) scfgwi is on Snitch's instruction interface, and is offloaded to the SSRs. That is, we have a handshake on Snitch's accelerator request interface, which in turn generates a handshake on Snitch's instruction interface.
  • Cycle 1) scfgri is on Snitch's instruction interface but can't be offloaded to the SSRs, i.e. the request never receives a ready. Concurrently, the response to the previous instruction is valid on the accelerator response interface. The core never handshakes the response, which is why the new request can't be offloaded. And the reason the response is not accepted is because the scfgri instruction is valid and mis-decoded (omitting the details here).

How to solve this issue: 'mstatus' undeclared (first use in this function)

Hi, when I use Docker and follow the Quick Start to run the platform, at the 'Build the software' step with the make command I get errors like the ones below:
root@59651dd59bcc:/repo/hw/system/snitch_cluster/sw/build# make
[ 3%] Building C object snRuntime/CMakeFiles/snRuntime-cluster.dir/src/platforms/shared/start.c.o
In file included from /repo/sw/snRuntime/src/platforms/shared/../../team.h:5:0,
from /repo/sw/snRuntime/src/platforms/shared/start.c:4:
/repo/sw/snRuntime/include/snrt.h: In function 'snrt_interrupt_global_enable':
/repo/sw/snRuntime/include/snrt.h:208:5: warning: implicit declaration of function 'set_csr' [-Wimplicit-function-declaration]
set_csr(mstatus, MSTATUS_MIE); // set M global interrupt enable
^~~~~~~
/repo/sw/snRuntime/include/snrt.h:208:13: error: 'mstatus' undeclared (first use in this function)
set_csr(mstatus, MSTATUS_MIE); // set M global interrupt enable
^~~~~~~
/repo/sw/snRuntime/include/snrt.h:208:13: note: each undeclared identifier is reported only once for each function it appears in
/repo/sw/snRuntime/include/snrt.h: In function 'snrt_interrupt_global_disable':

I found that the variable mstatus is mentioned in the doc directory; my question is how to solve this 'mstatus' undeclared issue?
Thanks!
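
For reference, set_csr is normally provided as a macro (as in the standard RISC-V encoding.h) that pastes its first argument into the CSR instruction string, so mstatus is never a C variable; the implicit-declaration warning indicates that this macro is not visible in this build. A sketch of the usual definition, for illustration only (which header is supposed to provide it in this tree may differ):

    // Typical set_csr macro from the RISC-V encoding.h: the CSR name (e.g. mstatus)
    // is stringified into the inline assembly, so it is never a C identifier. If the
    // macro is missing, the compiler treats set_csr as a function call and then
    // reports 'mstatus' as undeclared.
    #define set_csr(reg, bit) ({                          \
        unsigned long __tmp;                              \
        asm volatile ("csrrs %0, " #reg ", %1"            \
                      : "=r"(__tmp) : "rK"(bit));         \
        __tmp;                                            \
    })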

Floor Plan Area

Hi, thanks for all your help on the other thread. I am considering using a 32-bit Snitch with the FPU in a university project. The other option I was looking at is a Cortex-M0, but it doesn't have an FPU. The M0 has a 0.11 mm² floor plan area in a 180 nm process. Do you have a rough idea what the floor plan area might be for Snitch in this configuration?

Thank you very much!

banshee non-zero TCDM offset fails initialization and loads

When the tcdm.start parameter in the banshee configuration is set to a non-zero value, the ELF data preloading and the fast loads do not access the same region.

The tcdm is initialized here

tcdm[(addr / 4) as usize] = value;

where the address is absolute.

The fast access to the tcdm

let index = LLVMBuildSub(self.builder, addr, tcdm_start, NONAME);

however, is relative to the tcdm start address.

Either the LLVMBuildSub in emit_tcdm_check has to be removed, or the offset has to be subtracted when the TCDM is initialized.

Add rustfmt lint

Add a CI check to ensure the banshee rust code is formatted according to rustfmt rules.

Is this synthesizable?

Hi all,
I've tried to synthesize snitch_cluster (or occamy_cluster_wrapper) using Design Compiler.
But I get the error below:
"Error: .../git/snitch/hw/vendor/pulp_platform_common_cells/src/cf_math_pkg.sv:58: The construct '$clog2( non-constant expression )' is not supported in this language. (VER-720)"

As far as I know, Snitch has already been implemented on an FPGA.

Please let me know what I should do.

Occamy on FPGA - "axi_flat.svh" header file is missing on the repo

Thank you for providing the Occamy source code.
I tried to build the Occamy FPGA code, based on the Occamy instructions in the repo:
--> make occamy_vcu128

The following error messages are reported:

ERROR: [Synth 8-1766] cannot open include file axi_flat.svh [/home/nam/codeSim/project/snitch/hw/system/occamy/src/occamy_xilinx.sv:9]
ERROR: [Synth 8-2841] use of undefined macro AXI_FLATTEN_MASTER [/home/nam/codeSim/project/snitch/hw/system/occamy/src/occamy_xilinx.sv:525]

I think "occamy_xilinx.sv" must include "axi_flat.svh" header file.
But, we can't find the header file on the Github repo.
I think uploading of the header file was missed.

I would be very grateful if you could upload the "axi_flat.svh" header file to this repo.
Thank you.

Seungseok

Negative stride

Can the stride for the SSR loops be negative? Currently, I see that this is not allowed by the function arguments.

For example, snrt_ssr_loop_1d has the third argument size_t i0, which only allows unsigned values and therefore only a positive stride.
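
As a purely illustrative sketch (not a confirmed workaround: whether the SSR address generator interprets the stride as a signed two's-complement value is exactly the open question here), one could try passing the bit pattern of a negative byte stride through the unsigned parameter, assuming the snrt_ssr_read counterpart of the snrt_ssr_write call used elsewhere in the runtime:

    #include <stddef.h>
    #include "snrt.h"

    // Hypothetical: stream the n floats of x in reverse order, starting at &x[n - 1].
    // (size_t)-(ptrdiff_t)sizeof(float) is the two's-complement bit pattern of -4;
    // this only works if the hardware wraps/sign-interprets the stride accordingly.
    static void configure_reverse_stream(float *x, size_t n) {
        snrt_ssr_loop_1d(SNRT_SSR_DM0, n, (size_t)-(ptrdiff_t)sizeof(float));
        snrt_ssr_read(SNRT_SSR_DM0, SNRT_SSR_1D, &x[n - 1]);
    }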

ci: Forced python version

The new Python version 3.10 used in the CI breaks the yamlfmt package (which requires ruamel.yaml<0.16). The CI can be forced to Python <=3.9, where it works again for the moment (#300). But ultimately, the package conflict needs to be resolved. yamlfmt seems to be unmaintained, so we need to figure out a new solution (e.g. yamllint).
