The OpenPiton Platform

Home Page: http://www.openpiton.org

OpenPiton Research Platform

OpenPiton is the world's first open-source, general-purpose, multithreaded manycore processor. It is a tiled manycore framework scalable from one core to half a billion cores. It is a 64-bit architecture using the SPARC v9 ISA with a distributed, directory-based cache coherence protocol across on-chip networks. It is highly configurable in both core and uncore components. OpenPiton has been verified in an ASIC and in multiple Xilinx FPGA prototypes running full-stack Debian Linux. We have released the Verilog RTL code as well as the synthesis and back-end flow. We believe OpenPiton is a great framework for researchers in computer architecture, OS, compilers, EDA, security, and more.

OpenPiton has been published in ASPLOS 2016: Jonathan Balkind, Michael McKeown, Yaosheng Fu, Tri Nguyen, Yanqi Zhou, Alexey Lavrov, Mohammad Shahrad, Adi Fuchs, Samuel Payne, Xiaohua Liang, Matthew Matl, and David Wentzlaff. "OpenPiton: An Open Source Manycore Research Framework." In Proceedings of the 21st International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '16), April 2016.

Find out more

If you use OpenPiton in your research please reference our ASPLOS 2016 paper mentioned above and send us a citation of your work.

Documentation

There are several detailed pieces of documentation about OpenPiton in the docs folder.

We also host GitHub repositories for other parts of the project.

Environment Setup

  • The PITON_ROOT environment variable should point to the root of the OpenPiton repository

  • The Synopsys environment for simulation should be setup separately by the user. Besides adding correct paths to your PATH and LD_LIBRARY_PATH (usually accomplished by a script provided by Synopsys), the OpenPiton tools specifically reference the VCS_HOME environment variable which should point to the root of the Synopsys VCS installation.

  • Run source $PITON_ROOT/piton/piton_settings.bash to set up the environment

    • A CShell version of this script is provided, but OpenPiton has not been tested with CShell and does not currently support it
  • Top level directory structure:

    • piton/
      • All OpenPiton design and verification files
    • docs/
      • OpenPiton documentation
    • build/
      • Working directory for simulation and simulation models
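
A minimal environment-setup sketch combining the points above; every path here is illustrative (the VCS install root in particular is an assumption) and must be adapted to your installation:

```shell
# Illustrative environment setup -- adjust every path to your system.
export PITON_ROOT="$HOME/openpiton"           # root of the OpenPiton checkout
export VCS_HOME="/opt/synopsys/vcs"           # hypothetical Synopsys VCS install root
export PATH="$VCS_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$VCS_HOME/lib:$LD_LIBRARY_PATH"
source "$PITON_ROOT/piton/piton_settings.bash"
```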
Notes on Environment and Dependencies
  • Depending on your system setup, Synopsys tools may require the -full64 flag. This can easily be accomplished by adding a bash function as shown in the following example for VCS (also required for URG):

    function vcs() { command vcs -full64 "$@"; }; export -f vcs
  • On many systems, an error with goldfinger, or other errors not described below, may indicate that you should run the mktools command once to rebuild a number of the tools before continuing. If you see issues later with building or running simulations, try running mktools if you have not already.

  • In some cases, you may need to recompile the PLI libraries we provide. This is done using mkplilib with the argument for the simulator you want to rebuild for. You may need to run mkplilib clean first, then depending on which simulator, you can build with: mkplilib vcs, mkplilib ncverilog, mkplilib icarus, or mkplilib modelsim.

  • If you see an error with bw_cpp then you may need to install gcc/g++ (to get cpp), or csh (csh on ubuntu, tcsh on centos)

  • If you see an error with goldfinger or g_as then you may need to install 32-bit glibc (libc6-i386 on ubuntu, glibc.i686 on centos)

  • If you see an error with goldfinger or m4 then you may need to install libelf (libelf-dev on ubuntu, elfutils-libelf-devel on centos)

  • You also need the Perl Bit::Vector package installed on your machine (libbit-vector-perl on ubuntu, perl-Bit-Vector.x86_64 on centos, also installable via CPAN)
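
On an Ubuntu system, the packages named in the notes above could be installed in one step (a sketch using the Ubuntu package names given above; CentOS names differ as noted):

```shell
# Ubuntu package names taken from the dependency notes above.
# CentOS equivalents: tcsh, glibc.i686, elfutils-libelf-devel, perl-Bit-Vector.x86_64
sudo apt-get install -y csh libc6-i386 libelf-dev libbit-vector-perl
```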

==========================

Building a simulation model

  1. cd $PITON_ROOT/build
  2. sims -sys=manycore -x_tiles=1 -y_tiles=1 -vcs_build builds a single tile OpenPiton simulation model.
  3. A directory for the simulation model will be created in $PITON_ROOT/build and the simulation model can now be used to run tests. For more details on building simulation models, please refer to the OpenPiton documentation.

Note: if you would like to reduce the testbench monitor output to a minimum, append -config_rtl=MINIMAL_MONITORING to your build command in Step 2 above.
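
Putting the steps and the note together, a minimal-monitoring single-tile build would look like:

```shell
# Single-tile VCS simulation model with reduced testbench monitor output
cd $PITON_ROOT/build
sims -sys=manycore -x_tiles=1 -y_tiles=1 -vcs_build -config_rtl=MINIMAL_MONITORING
```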

==========================

Running a simulation

  1. cd $PITON_ROOT/build
  2. sims -sys=manycore -x_tiles=1 -y_tiles=1 -vcs_run princeton-test-test.s runs a simple array summation test given the simulation model is already built.
  3. The simulation will run and generate many log files and simulation output to stdout. For more details on running a simulation, provided tests/simulations in the test suite, and understanding the simulation log files and output, please refer to the OpenPiton documentation.

==========================

Running a regression

A regression is a set of simulations/tests which run on the same simulation model.

  1. cd $PITON_ROOT/build
  2. sims -sim_type=vcs -group=tile1_mini runs the simulations in the tile1_mini regression group.
  3. The simulation model will be built and all simulations will be run sequentially. In addition to the simulation model directory, a directory will be created in the form <date>_<id> which contains the simulation results.
  4. cd <date>_<id>
  5. regreport $PWD > report.log will process the results from each of the regressions and place the aggregated results in the file report.log. For more details on running a regression, the available regression groups, understanding the regression output, and specifying a new regression group, please refer to the OpenPiton documentation.
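
The steps above can be condensed into one shell session (a sketch; substitute the actual <date>_<id> directory that sims reports):

```shell
cd $PITON_ROOT/build
sims -sim_type=vcs -group=tile1_mini   # builds the model and runs the regression group
cd <date>_<id>                         # the results directory sims created
regreport $PWD > report.log            # aggregate per-test results into report.log
```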

==========================

Running a continuous integration bundle

Continuous integration bundles are sets of simulations, regression groups, and/or unit tests. The simulations within a bundle are not required to have the same simulation model. The continuous integration tool requires a job queue manager (e.g. SLURM, PBS, etc.) to be present on the system in order to parallelize simulations.

  1. cd $PITON_ROOT/build
  2. contint --bundle=git_push runs the git_push continuous integration bundle which we ran on every commit when developing Piton. It contains a regression group, some assembly tests, and some unit tests.
  3. The simulation models will be built and all simulation jobs will be submitted
  4. After all simulation jobs complete, the results will be aggregated and printed to the screen. The individual simulation results will be saved in a new directory in the form contint_<bundle name>_<date>_<id> and can be reprocessed later to view the aggregated results again.
  5. The exit code of the command in Step 2 indicates whether all tests passed (zero exit code) or at least one failed (non-zero exit code).
  6. For more details on running continuous integration bundles, the available bundles, understanding the output, reprocessing completed bundles, and creating new bundles, please refer to the OpenPiton documentation.
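
Because of the exit-code behavior described in Step 5, contint drops cleanly into a CI script; a sketch:

```shell
# Fail the CI job if any test in the bundle fails (non-zero exit code).
cd $PITON_ROOT/build
if contint --bundle=git_push; then
    echo "all tests passed"
else
    echo "at least one test failed" >&2
    exit 1
fi
```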

==========================

Support for the Ariane RV64IMAC Core

This version of OpenPiton supports the 64-bit Ariane RISC-V processor from ETH Zurich. To this end, Ariane has been equipped with a different L1 cache subsystem that follows a write-through protocol and supports cache invalidations and atomics. This L1 cache subsystem is designed to connect directly to the L1.5 cache provided by OpenPiton's P-Mesh.

Check out the sections below to see how to run the RISC-V tests or simple bare-metal C programs in simulation.


Environment Setup

In addition to the OpenPiton setup described above, you have to adapt the paths in the ariane_setup.sh script to match your installation (we support Questasim, VCS, and Verilator at the moment). Source this script from the OpenPiton root folder and build the RISC-V tools with ariane_build_tools.sh if you are running this for the first time:

  1. cd $PITON_ROOT/
  2. source piton/ariane_setup.sh
  3. piton/ariane_build_tools.sh

Step 3 will download and compile the RISC-V toolchain, the assembly tests, and Verilator.

Note that the address map is different from the standard OpenPiton configuration. DRAM is mapped to 0x8000_0000, hence the assembly tests and C programs are linked with this offset. Have a look at piton/design/xilinx/genesys2/devices_ariane.xml for a complete address mapping overview.

Also note that we use a slightly adapted version of syscalls.c. Instead of using the RISC-V FESVR, we use the OpenPiton testbench monitors to observe whether a test has passed or not. Hence we added the corresponding pass/fail traps to the exit function in syscalls.c.

For simulation, Questasim 10.6b, VCS 2017.03, or Verilator 4.014 is needed (older versions may work, but are untested).

You will need Vivado 2018.2 or newer to build an FPGA bitstream with Ariane.

Running RISC-V Tests and Benchmarks

The RISC-V benchmarks are precompiled in the tool setup step mentioned above. You can run individual benchmarks by first building the simulation model with

  1. cd $PITON_ROOT/build
  2. sims -sys=manycore -x_tiles=1 -y_tiles=1 -msm_build -ariane

Then, invoke a specific riscv test with the -precompiled switch as follows

sims -sys=manycore -msm_run -x_tiles=1 -y_tiles=1 rv64ui-p-addi.S -ariane -precompiled

This will look for the precompiled ISA test binary named rv64ui-p-addi in the RISC-V tests folder $ARIANE_ROOT/tmp/riscv-tests/build/isa and run it.

In order to run a RISC-V benchmark, do

sims -sys=manycore -msm_run -x_tiles=1 -y_tiles=1 dhrystone.riscv -ariane -precompiled

The printf output will be directed to fake_uart.log in this case (in the build folder).

Note: if you see Warning: [l15_adapter] return type 004 is not (yet) supported by l15 adapter. in the simulation output, do not worry. It is generated because Ariane does not currently support OpenPiton's packet-based interrupts arriving over the memory interface.

Running Custom Programs

You can also run test programs written in C. The following example program prints "hello_world" 32 times to the fake UART (see the fake_uart.log file).

  1. cd $PITON_ROOT/build
  2. sims -sys=manycore -x_tiles=1 -y_tiles=1 -msm_build -ariane
  3. sims -sys=manycore -msm_run -x_tiles=1 -y_tiles=1 hello_world.c -ariane -rtl_timeout 10000000

And a simple hello world program running on multiple tiles can run as follows:

  1. cd $PITON_ROOT/build
  2. sims -sys=manycore -x_tiles=4 -y_tiles=4 -msm_build -ariane
  3. sims -sys=manycore -msm_run -x_tiles=4 -y_tiles=4 hello_world_many.c -ariane -finish_mask 0x1111111111111111 -rtl_timeout 1000000

In the example above, we have a 4x4 Ariane tile configuration, where each core just prints its own hart ID (hardware thread ID) to the fake UART. Synchronization among the harts is achieved using an atomic ADD operation.

Note that we have to adjust the finish mask in this case, since we expect all 16 cores to hit the pass/fail trap.
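
As a sanity check on the mask value, each of the 16 tiles contributes one '1' nibble, so the mask can be generated mechanically (a sketch, assuming one reporting hart per tile as in the 4x4 example above):

```shell
# Build a finish mask with one '1' nibble per tile.
TILES=16
MASK=$(printf '1%.0s' $(seq 1 $TILES))   # sixteen '1' nibbles
echo "0x$MASK"                           # prints 0x1111111111111111
```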

Regressions

The RISC-V ISA tests, benchmarks and some additonal simple example programs have been added to the regression suite of OpenPiton, and can be invoked as described below.

  • RISC-V ISA tests are grouped into the following four batches, where the last two are the regressions for atomic memory operations (AMOs):

sims -group=ariane_tile1_asm_tests_p -sim_type=msm

sims -group=ariane_tile1_asm_tests_v -sim_type=msm

sims -group=ariane_tile1_amo_tests_p -sim_type=msm

sims -group=ariane_tile1_amo_tests_v -sim_type=msm

  • RISC-V benchmarks can be run with:

sims -group=ariane_tile1_benchmarks -sim_type=msm

  • Simple hello world programs and AMO tests for 1 tile can be invoked with

sims -group=ariane_tile1_simple -sim_type=msm

  • And a multicore "hello world" example running on 16 tiles can be run with

sims -group=ariane_tile16_simple -sim_type=msm

If you would like to get an overview of the exit status of a regression batch, step into the regression subfolder and call regreport . -summary.

FPGA Mapping on Genesys2 Board

The bitfile for a 1x1 tile Ariane configuration for the Genesys2 board can be built using the following command:

protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr

It is recommended to use Vivado 2018.2 or later since earlier versions might not produce a working bitstream.

Once you have loaded the bitstream onto the FPGA using the Vivado Hardware Manager or a USB drive plugged into the Genesys2, you first need to connect the UART/USB port of the Genesys2 board to your computer and flip switch 7 on the board as described in the OpenPiton FPGA Prototype Manual. Then you can use pitonstream to run a list of tests on the FPGA:

pitonstream -b genesys2 -d system -f ./tests.txt --core=ariane

The tests that you would like to run need to be specified in the tests.txt file, one test per line (e.g. hello_world.c).

You can also run the precompiled RISC-V benchmarks using the following command:

pitonstream -b genesys2 -d system -f ./piton/design/chip/tile/ariane/ci/riscv-benchmarks.list --core=ariane --precompiled

Note the -precompiled switch here, which has the same effect as when used with the sims command.

Debugging via JTAG

OpenPiton+Ariane supports the RISC-V External Debug Draft Spec and hence you can debug (and program) the FPGA using OpenOCD. We provide two example scripts for OpenOCD below.

To get started, connect the micro-USB port that is labeled with JTAG to your machine. This port is attached to the FTDI 2232 USB-to-serial chip on the Genesys 2 board, and is usually used to access the native JTAG interface of the Kintex-7 FPGA (e.g. to program the device using Vivado). However, the FTDI chip also exposes a second serial link that is routed to GPIO pins on the FPGA, and we leverage this to wire up the JTAG from the RISC-V debug module.

If you are on an Ubuntu based system you need to add the following udev rule to /etc/udev/rules.d/99-ftdi.rules

SUBSYSTEM=="usb", ACTION=="add", ATTRS{idProduct}=="6010", ATTRS{idVendor}=="0403", MODE="664", GROUP="plugdev"
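
After creating the rule file, udev has to pick up the new rule; on most systems this can be done without rebooting:

```shell
# Reload udev rules and re-trigger events for already-connected devices.
sudo udevadm control --reload-rules
sudo udevadm trigger
```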

Once attached to your system, the FTDI chip should be listed when you type lsusb

Bus 005 Device 019: ID 0403:6010 Future Technology Devices International, Ltd FT2232C/D/H Dual UART/FIFO IC

If this is the case, you can go on and start openocd with the fpga/ariane.cfg configuration file below.

$ openocd -f fpga/ariane.cfg
Open On-Chip Debugger 0.10.0+dev-00195-g933cb87 (2018-09-14-19:32)
Licensed under GNU GPL v2
For bug reports, read
    http://openocd.org/doc/doxygen/bugs.html
adapter speed: 1000 kHz
Info : auto-selecting first available session transport "jtag". To override use 'transport select <transport>'.
Info : clock speed 1000 kHz
Info : TAP riscv.cpu does not have IDCODE
Info : datacount=2 progbufsize=8
Info : Examined RISC-V core; found 1 harts
Info :  hart 0: XLEN=64, misa=0x8000000000141105
Info : Listening on port 3333 for gdb connections
Ready for Remote Connections
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : accepting 'gdb' connection on tcp/3333

Note that this simple OpenOCD script currently only supports one hart to be debugged at a time. Select the hart to debug by changing the core id (look for the -coreid switch in the ariane.cfg file). If you would like to debug multiple harts at once, you can use ariane-multi-hart.cfg.

Then you will be able to either connect through telnet or with gdb:

$ riscv64-unknown-elf-gdb /path/to/elf
(gdb) target remote localhost:3333
(gdb) load
Loading section .text, size 0x6508 lma 0x80000000
Loading section .rodata, size 0x900 lma 0x80006508
(gdb) b putchar
(gdb) c
Continuing.

Program received signal SIGTRAP, Trace/breakpoint trap.
0x0000000080009126 in putchar (s=72) at lib/qprintf.c:69
69    uart_sendchar(s);
(gdb) si
0x000000008000912a  69    uart_sendchar(s);
(gdb) p/x $mepc
$1 = 0xfffffffffffdb5ee

You can read or write device memory by using:

(gdb) x/i 0x1000
    0x1000: lui t0,0x4
(gdb) set {int} 0x1000 = 22
(gdb) set $pc = 0x1000

In order to compile programs that you can load with GDB, use the following command:

sims -sys=manycore -novcs_build -midas_only hello_world.c -ariane -x_tiles=1 -y_tiles=1 -gcc_args="-g"

Note that the tile configuration needs to correspond to your actual platform configuration if your program is a multi-hart program. Otherwise you can omit these switches (the additional cores will not execute the program in that case).

Booting SMP Linux on Genesys2 or VC707

We currently support single-core and SMP Linux on the Genesys2, Nexys Video, and VC707 FPGA development boards. For familiarisation, and to ensure your hardware is set up correctly, first try running with a released bitfile and SD card image.

To prepare the SD card with a Linux image you need to format it with sgdisk then write the image with dd.

  1. Download the Ariane Linux OS image from either the ariane-sdk release or the Princeton archive, extract and save the .bin file as bbl.bin in the current directory. If you want to build your own Linux image please see ariane-sdk.
  2. $ sudo fdisk -l Search carefully for the corresponding disk label of the SD card, e.g. /dev/sdb
  3. $ sudo sgdisk --clear --new=1:2048:67583 --new=2 --typecode=1:3000 --typecode=2:8300 /dev/sdb Create a new GPT partition table and two partitions: 1st partition 32MB (ONIE boot), 2nd partition rest (Linux root).
  4. $ sudo dd if=bbl.bin of=/dev/sdb1 oflag=sync bs=1M Write the bbl.bin file to the first partition. E.g. where your disk label is /dev/sdb use /dev/sdb1 (append a 1).
  5. Insert the SD card into the FPGA development board. You can leave it there until you want to build your own Linux OS image.
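
Steps 2-4 above, consolidated into one destructive sequence (/dev/sdX is a placeholder; double-check the device name reported by fdisk before running):

```shell
DISK=/dev/sdX   # placeholder -- confirm with `sudo fdisk -l` first!
# GPT table with a 32 MB ONIE-boot partition and a Linux-root partition
sudo sgdisk --clear --new=1:2048:67583 --new=2 --typecode=1:3000 --typecode=2:8300 "$DISK"
# Write the bootloader+kernel image to the first partition
sudo dd if=bbl.bin of="${DISK}1" oflag=sync bs=1M
```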

Note that the board specific settings are encoded in the device tree that is automatically generated and compiled into the FPGA bitfile, so no specific configuration of the Linux kernel is needed.

Next up is generating the bitfile which assumes you've setup your PATH by sourcing /opt/xilinx/Vivado/2018.2/settings64.sh and piton/ariane_setup.sh. The default configuration is 1 core for all boards, but you can override this with command line arguments. In order to build an FPGA image for these boards, use one of the following commands representing the maximum configurations:

  • protosyn -b nexysVideo -d system --core=ariane --uart-dmw ddr
  • protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr --x_tiles=2
  • protosyn -b vc707 -d system --core=ariane --uart-dmw ddr --x_tiles=3 --y_tiles=1 (81% LUT utilization)

Vivado 2017.3 and earlier versions are known to fail for various reasons, or may generate an unusable bitfile. Please use Vivado 2018.2.

This command will take a while (1-2 hours is typical for the first run, before the IP has been generated) to generate a bitfile at build/vc707/system/vc707_system/vc707_system.runs/impl_1/system.bit. To get started you can, alternatively, try a released bitfile from the Princeton archive.

Now that you have a prepared SD card inserted into the dev board, and a bitfile, it's time to boot up. The Linux OS provides console access over UART.

  1. Connect a mini-USB cable to the port labelled UART and power on the board which allows the interfaces such as /dev/ttyUSB2 to become available.
  2. Open a console with 115200/8N1, e.g. something like screen /dev/ttyUSB0 115200 or sudo minicom -D /dev/ttyUSB2. If there are multiple ttyUSB devices, just open a console to each of them.
  3. Connect a micro-USB cable to the port labelled JTAG and connect from within the Vivado Hardware Manager. This can be done in the GUI by opening the project: vivado build/vc707/system/vc707_system/vc707_system.xpr &
  4. Program the device with the generated bitfile, which Vivado should find automatically. Once programming is finished (around 10s) reset will be immediately lifted and you should see the Linux boot process being reported on the UART console.

When the device comes out of reset, the zero-stage bootloader copies the Linux image, including the first stage bootloader, from the SD card into DDR, and executes it. Be patient, copying from SD takes a couple of seconds. When the boot process is finished a login prompt is displayed. The username is root without a password. Now you can test things by running standard unix commands (# cat /proc/cpuinfo), or playing tetris (# /tetris).

There is also preliminary support for the VCU118, but not all features work yet on that board. For the VCU118 board you need the PMOD SD adapter from Digilent to be able to use an SD card (the slot on the VCU118 board is not directly connected to the FPGA). As the PMOD0 port has open-drain level-shifters, you also have to replace the R1-R4 and R7-8 resistors with 470 Ohm 0201 SMD resistors on the Digilent PMOD SD adapter to make sure that signal rise times are short enough.

Running OpenPiton simulations on F1 instances in AWS: a step-by-step guide

Here is the generic flow to run OpenPiton on an F1 instance. We created a public image (agfi-0d87a634f93fe7c83), which you can use to try OpenPiton on F1 without synthesizing it yourself.

  1. We assume that you already have an F1 instance up and running. If not, steps 1, 2, 4, and 5 from this guide (https://github.com/vegaluisjose/aws-fpga-notes) will help you.

  2. ssh into your instance and clone the OpenPiton repo (https://github.com/PrincetonUniversity/openpiton).

  3. cd into the repo and run these bash commands:

    export PITON_ROOT=`pwd`
    export AWS_FPGA_REPO_DIR="$PITON_ROOT/piton/design/aws"
    export CL_DIR="$AWS_FPGA_REPO_DIR/hdk/cl/developer_designs/piton_aws"
    source piton/piton_settings.bash
    source piton/ariane_setup.sh
  4. Load the FPGA image onto the board:
    fpga-load-local-image -S 0 -I agfi-0d87a634f93fe7c83 

After this step the FPGA is programmed, but the reset signal is held high, so the system is not yet running.

  5. Compile the software:
    cd $CL_DIR/software/src 
    make

This will compile the programs "uart" and "dma_os". You need the XDMA driver preinstalled (https://github.com/Xilinx/dma_ip_drivers/tree/master/XDMA/linux-kernel).

  6. Run the "uart" program:
    ./uart & 

This will create a pseudo-terminal and tell you the location of the corresponding file (e.g. /dev/pts/3)

  7. Write the OS image into memory:
    ./dma_os $FILE_LOCATION 

This will put the OS image from FILE_LOCATION in the appropriate place in memory. Note: before writing the image, you must byte-reverse each 8-byte word of it (a consequence of quirky XDMA driver behavior). You can do this with

    objcopy -I binary -O binary --reverse-bytes=8 $FILE_LOCATION 
  8. Reset the FPGA:
    ./fpga-reset 

After this, the processor will start running and printing UART data to your pseudo-terminal. You can connect to the terminal using your favourite terminal program (e.g. screen, tio).

  9. Managing the FPGA: in the software directory we provide some useful programs:
  • uart : starts a pseudo-terminal connected to OpenPiton's UART
  • dma_os : copies the file given as its argument into the SD portion of memory
  • fpga-reset : resets the FPGA
  • fpga-poweroff : asserts the FPGA reset, effectively powering off OpenPiton so that you can write data into memory without corruption.

You might also find some of the utilities provided by AWS useful:

  • fpga-clear-local-image : clears the image from FPGA
  • fpga-set-virtual-dip-switch : sets the value of 16 virtual dip switches. Note that the last switch (15th) is connected to OpenPiton's reset
  • fpga-get-virtual-led : reads the value of 16 virtual leds

Synthesizing OpenPiton image for F1

The flow is very similar to synthesizing an image for any other FPGA we support, but it has its own nuances. Good news: you can run the flow on your own machine if you have Vivado 2018.2 or higher.

  1. Create the S3 credentials and configure your S3 bucket. You can find step-by-step guides here (https://github.com/vegaluisjose/aws-fpga-notes).

  2. Clone OpenPiton repo (https://github.com/PrincetonUniversity/openpiton).

  3. cd into the repo and run these bash commands:

    export PITON_ROOT=`pwd`
    export AWS_FPGA_REPO_DIR="$PITON_ROOT/piton/design/aws"
    export CL_DIR="$AWS_FPGA_REPO_DIR/hdk/cl/developer_designs/piton_aws"
    source piton/piton_settings.bash
    source piton/ariane_setup.sh
    source "$AWS_FPGA_REPO_DIR/hdk_setup.sh"

The last command may ask for the root password to apply a patch for Vivado; you don't have to provide it, since the flow still works without the patch.

  4. Run the synthesis:
    protosyn -b f1 -c ariane ... 

This will create the custom logic tar archive, which we will later upload to AWS. The synthesis itself runs under nohup, but protosyn will tell you the location of the log files so that you can follow the progress.

  5. After the synthesis is complete (it takes about 2-3 hours on a fast machine), go to the results folder:
    cd $PITON_ROOT/build/f1/piton_aws/build/checkpoints/to_aws 

This is the folder where all the resulting tar archives are located.

  6. Copy the resulting tar archive to the S3 bucket you created before.

  7. Send the command for the final synthesis:

    aws ec2 create-fpga-image --name NAME_OF_IMAGE --input-storage-location Bucket=YOUR_S3_BUCKET,Key=NAME_OF_YOUR_TAR_ARCHIVE 

The command will print the AFI and AGFI identifiers of your image. You can track the synthesis progress with

    aws ec2 describe-fpga-images --fpga-image-ids AFI_OF_YOUR_IMAGE 
  8. After the synthesis is done, you can load the image on your F1 instance!
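
The describe-fpga-images call can be wrapped in a simple polling loop (a sketch; AFI_OF_YOUR_IMAGE is a placeholder, and the --query path assumes the standard EC2 CLI output shape):

```shell
# Poll until the AFI state leaves "pending" (becomes "available" or "failed").
while true; do
    STATE=$(aws ec2 describe-fpga-images --fpga-image-ids AFI_OF_YOUR_IMAGE \
            --query 'FpgaImages[0].State.Code' --output text)
    echo "AFI state: $STATE"
    case "$STATE" in available|failed) break ;; esac
    sleep 60
done
```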


openpiton's Issues

JTAG debug

"The FTDI chip also exposes a second serial link that is routed to GPIO pins on the FPGA, and we leverage this to wire up the JTAG from the RISC-V debug module."
But how can I make sure which GPIO would be used? It seems the FPGA_TMS, FPGA_TDI, FPGA_TDO, FPGA_TCK are not used as GPIO.

Program is halted when running Coremark.riscv

Hello
I want to run Coremark on the simulator, and I used the code here: riscv-coremark, which generates two versions: one for bare metal and the other for Linux or pk. I compiled them and copied them into the built benchmarks folder.
I used the following command:
sims -sys=manycore -vlt_run -x_tiles=1 -y_tiles=1 coremark.bare.riscv -ariane -precompiled -rtl_timeout=1000000
For the bare-metal version, the execution halts before the RTL timeout, as follows:

`TILE0-------------------------------------
0000000001d3904a
0000000001d3904a P1S3 msg type:     st_req     addr: 0x0080001000, Data_size: 100, cache_type: 0
P1S3 valid: recycle: 0, stall: 0
State wr en: 1
Dir data: 0x0000000000000000
CSM enable: 0
Msg from mshr: 1
P1S3 addr: 0x0080001000
P1S3 valid: l2_hit: 1, l2_evict: 0
Data data: 0x000000000000000000000000000000000000
State:mesi: 00, vd: 10, subline: 0000, cache_type: 0, owner: 000000
sdid:    0, lsid:  0
TILE0-------------------------------------
0000000001d3923e
0000000001d3923e P1S4 msg type:     st_req     addr: 0x0080001000, Data_size: 100, cache_type: 0
P1S4 valid: recycle: 0, stall: 0, msg_stall: 0, dir_data_stall: 0, stall_inv_counter: 0, stall_smc_buf: 0, smc_stall: 0, global_stall: 0, broadcast_stall: 0
Control signals: 0100011101000011110
CSM enable: 0
broadcast coreid: (    0,   0,   0)
broadcast state: 0, broadcast op val: 0
Special addr type: 0
MSHR state wr en: 0
MSHR data wr en: 0
MSHR data wr : 0x00000000000000040a00c0080001000
MSHR inv counter :  0
State wr en: 1
Dir data: 0x0000000000000000
Dir sharer counter:  1
State data in: 0x01000000000004040
State data mask in: 0x0f0000000000067ff
State wr addr: 0x40
Msg data: 0x0000000000000000
SMC miss: 0
SMC data out: 0x00000000
SMC tag out: 0x0000
SMC valid out: 0x0
Msg send valid: 1, send ready: 1, mode: 011, length: 00000010
Msg send type:     data_ack   Msg send data_size: 000, cache_type: 0, mesi: 11, l2_miss: 1, mshrid: 00000011, subline_vector: 0000
Msg from mshr: 1
P1S4 addr: 0x0080001000
P1S4 valid: l2_hit: 1, l2_evict: 0
Data data: 0x000000000000000000000000000000000000
State:mesi: 00, vd: 10, subline: 0000, cache_type: 0, owner: 000000
Msg send: addr: 0x0080001000, dst_x: 00000000, dst_y: 00000000, dst_fbits: 0000
Msg send data: 0x00000000000000000000000000000000
src x: 00000000, src y: 00000000
sdid:    0, lsid:  0
0000000001d39432
TILE0 noc2 flit raw: 0x00000000008740f8
0000000001d39626
TILE0 noc2 flit raw: 0x0000000000000000
0000000001d3981a
TILE0 noc2 flit raw: 0x0000000000000000
30646000 TILE0 L1.5: Received NOC2                                                MSG_TYPE_DATA_ACK   mshrid 3, l2miss 1, f4b 0, ackstate 3, address 0x0000000000
   Data1: 0x0000000000000000
   Data2: 0x0000000000000000
   Data3: 0x0000000000000000
   Data4: 0x0000000000000000

0000000001d39ef0 L15 TILE0:
NoC1 credit:  8
NoC1 reserved credit:  1
TILE0 Pipeline: *  X  X 
Stage 1 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S1 Address: 0x0080001000
L15_MON_END


0000000001d3a0e4 L15 TILE0:
NoC1 credit:  7
NoC1 reserved credit:  0
TILE0 Pipeline: *  *  X 
Stage 1 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S1 Address: 0x0080001000
Stage 2 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S2 Address: 0x0080001000
   TILE0 S2 Cache index:   0
   DTAG way0 state: 0x3
   DTAG way0 data: 0x0000000080024000
   DTAG way1 state: 0x2
   DTAG way1 data: 0x0000000080003800
   DTAG way2 state: 0x3
   DTAG way2 data: 0x0000000080004000
   DTAG way3 state: 0x0
   DTAG way3 data: 0x0000000000000000
L15_MON_END


0000000001d3a2d8 L15 TILE0:
NoC1 credit:  7
NoC1 reserved credit:  0
TILE0 Pipeline: X  *  * 
Stage 2 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S2 Address: 0x0080001000
   TILE0 S2 Cache index:   0
   MESI write way: 0x3
   MESI write data: 0x3
HMT writing: 0
Stage 3 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S3 Address: 0x0080001000
   TILE0 WMT read index: 00
   WMT way          0: 1 0x1
   WMT way          1: 0 0x0
   WMT way          2: 0 0x0
   WMT way          3: 0 0x0
L15_MON_END

30647500 TILE0 L1.5 th0: Sent CPX ST_ACK   l2miss 1, nc 0, atomic 0, threadid 0, pf 0, f4b 0, iia 0, dia 0, dinval 0, iinval 0, invalway 0, blkinit 0
   Data0: 0x0000000000000000
   Data1: 0x0000000000000000
   Data2: 0x0000000000000000
   Data3: 0x0000000000000000

0000000001d3a4cc L15 TILE0:
NoC1 credit:  8
NoC1 reserved credit:  0
TILE0 Pipeline: X  X  * 
Stage 3 status:    Operation: L15_REQTYPE_ACKDT_ST_IM
   TILE0 S3 Address: 0x0080001000
L15_MON_END

Info: spc(0) thread(1) -> timeout happen
Info: spc(0) thread(2) -> timeout happen
Info: spc(0) thread(3) -> timeout happen
Info: spc(0) thread(1) -> timeout happen
...

I also tried the other test for Linux and pk (just to check), and it keeps running forever until it reaches the timeout.
Could you please help?

cache coherency

Hi, I am implementing a directory-based cache coherence protocol in RTL. The OpenPiton GitHub main page says the RTL version of cache coherence in OpenPiton is still under development. I am wondering if there are any updates, and what information or code in OpenPiton I could refer to for my project? Thanks a lot.

Cannot boot Linux with small delay in l2_pipe

I'm currently trying to add a delay of 8 cycles to incoming message data in pipeline 2 of the L2 cache. To be more specific, when the input buffer wants to transfer an incoming data flit to dpath/ctrl, I'm waiting 8 cycles before the actual hand-off takes place. My implementation can be found at therbom@90322d4. Although I'm able to successfully run binaries in the barebones environment with Ariane, both in ModelSim as well as on the Genesys 2 FPGA, I'm not able to boot Linux on the latter. It either hangs at copying from the SD card, or a kernel panic pops up. It's quite unpredictable.
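As a toy model (an illustration of the described change, not the RTL), the modification can be sketched as a fixed-length shift register between the input buffer and the hand-off point, delaying every flit by N cycles while preserving order:

```python
from collections import deque

# Toy model of the described modification (an illustration, not the RTL):
# delay every incoming flit by n cycles via a fixed-length shift register,
# assuming at most one flit arrives per cycle. Ordering is preserved.
def delay_stream(flits, n=8):
    pipe = deque([None] * n)            # n empty pipeline slots
    out = []
    for f in list(flits) + [None] * n:  # extra cycles to drain the pipe
        pipe.append(f)
        out.append(pipe.popleft())
    return [f for f in out if f is not None]
```

Since a delay of this form changes only timing, not ordering, a correctness difference between bare-metal and Linux is indeed surprising, which is what the question below is getting at.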

Things I've tried:

  • Increasing L2_P2_HEADER_BUF_IN_SIZE from 4 to 64 and L2_P2_DATA_BUF_IN_SIZE from 16 to 256. And their log sizes correspondingly.
  • Putting 10 NOPs in the SD card driver after a group of 32 bits has been stored from SD card to DRAM.
  • I searched for a possible timeout in the chipset, but couldn't find anything that would point in this direction.

But I still couldn't get it to work, unfortunately.

I assumed that simply adding this delay wouldn't create such a discrepancy between the barebones environment and Linux. Would this indeed be the expected behavior for OpenPiton+Ariane?

SMP Linux on two cores

Hi,
I want to run SMP Linux on two cores. How should I configure ariane.dts? I don't know how the peripherals' interrupts connect to the PLIC, or how the interrupt numbers are defined.

/dts-v1/;

/ {
#address-cells = <2>;
#size-cells = <2>;
compatible = "eth,ariane-bare-dev";
model = "eth,ariane-bare";
chosen {
stdout-path = "/soc/uart@10000000:115200";
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
timebase-frequency = <25000000>; // 25 MHz
CPU0: cpu@0 {
clock-frequency = <50000000>; // 50 MHz
device_type = "cpu";
reg = <0>;
status = "okay";
compatible = "eth, ariane", "riscv";
riscv,isa = "rv64imafdc";
mmu-type = "riscv,sv39";
tlb-split;
// HLIC - hart local interrupt controller
CPU0_intc: interrupt-controller {
#interrupt-cells = <1>;
interrupt-controller;
compatible = "riscv,cpu-intc";
};
};
};
memory@80000000 {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0x40000000>;
};
leds {
compatible = "gpio-leds";
heartbeat-led {
gpios = <&xlnx_gpio 1 0>;
linux,default-trigger = "heartbeat";
retain-state-suspended;
};
};
soc {
#address-cells = <2>;
#size-cells = <2>;
compatible = "eth,ariane-bare-soc", "simple-bus";
ranges;
clint@2000000 {
compatible = "riscv,clint0";
interrupts-extended = <&CPU0_intc 3 &CPU0_intc 7>;
reg = <0x0 0x2000000 0x0 0xc0000>;
reg-names = "control";
};
PLIC0: interrupt-controller@c000000 {
#address-cells = <0>;
#interrupt-cells = <1>;
compatible = "riscv,plic0";
interrupt-controller;
interrupts-extended = <&CPU0_intc 11 &CPU0_intc 9>;
reg = <0x0 0xc000000 0x0 0x4000000>;
riscv,max-priority = <7>;
riscv,ndev = <30>;
};
debug-controller@0 {
compatible = "riscv,debug-013";
interrupts-extended = <&CPU0_intc 65535>;
reg = <0x0 0x0 0x0 0x1000>;
reg-names = "control";
};
uart@10000000 {
compatible = "ns16750";
reg = <0x0 0x10000000 0x0 0x1000>;
clock-frequency = <50000000>;
current-speed = <115200>;
interrupt-parent = <&PLIC0>;
interrupts = <1>;
reg-shift = <2>; // regs are spaced on 32 bit boundary
reg-io-width = <4>; // only 32-bit access are supported
};
timer@18000000 {
compatible = "pulp,apb_timer";
interrupts = <0x00000004 0x00000005 0x00000006 0x00000007>;
reg = <0x00000000 0x18000000 0x00000000 0x00001000>;
interrupt-parent = <&PLIC0>;
reg-names = "control";
};
xps-spi@20000000 {
compatible = "xlnx,xps-spi-2.00.b", "xlnx,xps-spi-2.00.a";
#address-cells = <1>;
#size-cells = <0>;
interrupt-parent = <&PLIC0>;
interrupts = < 2 2 >;
reg = < 0x0 0x20000000 0x0 0x1000 >;
xlnx,family = "kintex7";
xlnx,fifo-exist = <0x1>;
xlnx,num-ss-bits = <0x1>;
xlnx,num-transfer-bits = <0x8>;
xlnx,sck-ratio = <0x4>;

  mmc@0 {
    compatible = "mmc-spi-slot";
    reg = <0>;
    spi-max-frequency = <12500000>;
    voltage-ranges = <3300 3300>;
    disable-wp;
  };

  // mmc-slot@0 {
  //   compatible = "fsl,mpc8323rdb-mmc-slot", "mmc-spi-slot";
  //   reg = <0>;  //Chip select 0
  //   spi-max-frequency = <12500000>;
  //   voltage-ranges = <3300 3300>;
  //   //interrupts = < 2 2 >;
  //   //interrupt-parent = <&PLIC0>;
  // };
};
eth: lowrisc-eth@30000000 {
  compatible = "lowrisc-eth";
  device_type = "network";
  interrupt-parent = <&PLIC0>;
  interrupts = <3 0>;
  local-mac-address = [00 18 3e 02 e3 7f]; // This needs to change if more than one GenesysII on a VLAN
  reg = <0x0 0x30000000 0x0 0x8000>;
};
xlnx_gpio: gpio@40000000 {
  #gpio-cells = <2>;
  compatible = "xlnx,xps-gpio-1.00.a";
  gpio-controller ;
  reg = <0x0 0x40000000 0x0 0x10000 >;
  xlnx,all-inputs = <0x0>;
  xlnx,all-inputs-2 = <0x0>;
  xlnx,dout-default = <0x0>;
  xlnx,dout-default-2 = <0x0>;
  xlnx,gpio-width = <0x8>;
  xlnx,gpio2-width = <0x8>;
  xlnx,interrupt-present = <0x0>;
  xlnx,is-dual = <0x1>;
  xlnx,tri-default = <0xffffffff>;
  xlnx,tri-default-2 = <0xffffffff>;
};

};
};
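As a hedged sketch (names and values below are assumptions, not a tested configuration), a second hart is typically added as another `cpu@1` node alongside CPU0, with its own hart-local interrupt controller:

```dts
// Sketch only (untested assumption): a second hart for SMP.
CPU1: cpu@1 {
    clock-frequency = <50000000>;
    device_type = "cpu";
    reg = <1>;                      // hart ID 1
    status = "okay";
    compatible = "eth, ariane", "riscv";
    riscv,isa = "rv64imafdc";
    mmu-type = "riscv,sv39";
    tlb-split;
    CPU1_intc: interrupt-controller {
        #interrupt-cells = <1>;
        interrupt-controller;
        compatible = "riscv,cpu-intc";
    };
};
```

The CLINT and PLIC `interrupts-extended` lists would then reference both harts, e.g. `<&CPU0_intc 3 &CPU0_intc 7 &CPU1_intc 3 &CPU1_intc 7>` for the CLINT. Peripheral interrupt numbers are the source indices at the PLIC (1 up to `riscv,ndev`), matched by each peripheral's `interrupts = <n>;` property together with `interrupt-parent = <&PLIC0>;`.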

Move pre-requisites to README

The prerequisites hidden inside ariane_setup.sh should probably be moved to the README. There is currently no callout to these dependencies.

sudo apt install \
  gcc-7 g++-7 gperf autoconf automake autotools-dev \
  libmpc-dev libmpfr-dev libgmp-dev gawk build-essential \
  bison flex texinfo python-pexpect libusb-1.0-0-dev \
  default-jdk zlib1g-dev valgrind csh

fatal error: stdint.h: No such file or directory

Why do I get this error?

In file included from /home/openpiton/piton/design/chip/tile/ariane/tmp/riscv-tests/build/../benchmarks/common/util.h:41,
from /home/openpiton/piton/design/chip/tile/ariane/tmp/riscv-tests/build/../benchmarks/median/median_main.c:12:
/usr/lib/gcc/riscv64-unknown-elf/9.3.0/include/stdint.h:9:16: fatal error: stdint.h: No such file or directory
9 | # include_next <stdint.h>
| ^~~~~~~~~~
compilation terminated.

riscv-gcc toolchain seems to be running fine.

Why is /piton/design/f1 empty?

I'm trying to build an FPGA image for AWS F1 and I followed the exact steps in the README.md.
I got an error message saying that the F1 submodule cannot be downloaded, and the F1 folder is empty. Is this normal?
Should I download it manually?

error while building simulation model

I want to run a benchmark in simulation, so I did the following:

$ source $PITON_ROOT/piton/piton_settings.bash
$ source piton/ariane_setup.sh
$ piton/ariane_build_tools.sh
$ cd $PITON_ROOT/build
$ sims -sys=manycore -x_tiles=1 -y_tiles=1 -vlt_build -ariane

and I got the following error

sims -sys=sys_ariane -x_tiles=1 -y_tiles=1 -vlt_build -ariane 
sims: ====================================================
sims:   Simulation Script for OpenPiton
sims:   Modified by Princeton University on June 9th, 2015
sims: ====================================================
sims: ====================================================
sims:   Simulation Script for OpenSPARC T1
sims:   Copyright (c) 2001-2006 Sun Microsystems, Inc.
sims:   All rights reserved.
sims: ====================================================
sims: start_time Sat Apr  4 18:54:30 CEST 2020
sims: running on nechi
sims: uname is Linux nechi 5.3.0-45-generic #37~18.04.1-Ubuntu SMP Fri Mar 27 15:58:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
sims: version 2.0
sims: dv_root /home/nechi/Desktop/openpiton/piton
sims: model_dir /home/nechi/Desktop/openpiton/build
sims: tre_search /home/nechi/Desktop/openpiton/piton/tools/env/tools.iver
sims: using config file /home/nechi/Desktop/openpiton/piton/tools/src/sims/sims.config ()
Can't exec "bw_cpp": No such file or directory at /home/nechi/Desktop/openpiton/piton/tools/src/sims/sims,2.0 line 2034.
sims: Caught a SIGDIE. Could not open /home/nechi/Desktop/openpiton/piton/tools/src/sims/sims.config at /home/nechi/Desktop/openpiton/piton/tools/src/sims/sims,2.0 line 2034.

What could be the problem?

General Question regarding Openpiton on AWS F1

I want to run an OpenPiton image on AWS F1, so my questions are the following:

  • Does the FPGA image agfi-0d87a634f93fe7c83 work with one Ariane core?
  • How can I run a benchmark using the image on AWS F1?
  • Do you have a precompiled CoreMark?

Thank you

openpiton/piton/tools/Linux/x86_64/m4_gmp does not work out of box

The mktools command recompiles certain utilities such as goldfinger; however, there is no mention of m4_gmp, which is a mystery binary. It fails silently unless libc6-i386 is installed, and no meaningful error is given. libgmp10:i386 is also a dependency that is not mentioned. Once this is sorted out, simulation cannot work unless libbit-vector-perl is installed (these package names refer to Ubuntu 16.04), a necessary prerequisite to eventually use Vivado.

Perhaps a sanitised version of this workaround could be added to the getting started instructions.

L2 multicore request arbitration

Hello,

I am studying contention on the L2 for the OpenPiton+Ariane architecture. I'm interested in knowing what the arbitration or queueing scheme is for the requests sent from multiple cores to the L2. Is there a global queue for all cores? Does each core have its own queue?
Or is there a queue for each slice of the L2? Thanks.

Sergi.
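For context, a hedged sketch of the general idea (the exact mapping in OpenPiton is configurable, e.g. via CSM, so this is an assumption rather than the real hash function): the shared L2 is physically distributed as one slice per tile, and each cache line is statically homed to a slice by its address, so arbitration happens per slice among the requests the NoC routes there, not at a single global queue.

```python
# Illustrative only: line-interleaved homing of addresses onto L2 slices.
# The real OpenPiton mapping is configurable (e.g. via CSM); this sketch
# assumes plain modulo interleaving at 64-byte line granularity.
def home_slice(paddr, num_tiles, line_bytes=64):
    return (paddr // line_bytes) % num_tiles
```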

Vivado Project was not created properly

When I try to synthesize CVA6 on a Genesys2 board with Vivado 2022.2 I get the following

[max@yarra build]$ protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr
[INFO]  protosyn,2.5:702: ----- System Configuration -----
[INFO]  protosyn,2.5:720: x_tiles   = 1
[INFO]  protosyn,2.5:721: y_tiles   = 1
[INFO]  protosyn,2.5:722: num_tiles = 1
[INFO]  protosyn,2.5:729: core      = ariane
[INFO]  protosyn,2.5:732: defining RTL_TILE0
[INFO]  protosyn,2.5:762: setenv RTL_ARIANE0
[INFO]  protosyn,2.5:780: network   = 2dmesh_config
[INFO]  protosyn,2.5:784: l15 size  = 8192
[INFO]  protosyn,2.5:785: l15 assoc = 4
[INFO]  protosyn,2.5:786: l1d size  = 8192
[INFO]  protosyn,2.5:787: l1d assoc = 4
[INFO]  protosyn,2.5:788: l1i size  = 16384
[INFO]  protosyn,2.5:789: l1i assoc = 4
[INFO]  protosyn,2.5:790: l2  size  = 65536
[INFO]  protosyn,2.5:791: l2  assoc = 4
[INFO]  protosyn,2.5:805: ---- Additional RTL Defines ----
[INFO]  protosyn,2.5:808: NO_RTL_CSM
[INFO]  protosyn,2.5:808: PITON_FPGA_MC_DDR3
[INFO]  protosyn,2.5:808: PITONSYS_MEM_ZEROER
[INFO]  protosyn,2.5:808: PITON_FPGA_SD_BOOT
[INFO]  protosyn,2.5:808: PITONSYS_UART_BOOT
[INFO]  protosyn,2.5:808: PITON_NO_CHIP_BRIDGE
[INFO]  protosyn,2.5:808: PITON_UART16550
[INFO]  protosyn,2.5:808: PITON_FPGA_ETHERNETLITE
[INFO]  protosyn,2.5:810: --------------------------------
[INFO]  protosyn,2.5:879: Generating UART init sequence
[INFO]  protosyn,2.5:631: Using core clock frequency: 66.667 MHz
[INFO]  protosyn,2.5:285: Building a project for design 'system' on board 'genesys2'
[INFO]  protosyn,2.5:330: Running FPGA implementation down to bitstream generation
[INFO]  protosyn,2.5:932: Checking Project Build results
[ERROR] fpga_lib.py:344: Vivado Project was not created properly!
[ERROR] fpga_lib.py:345: Check: /home/max/Workarea/openpiton/build/genesys2/system/protosyn_logs/make_project.log

The tail of make_project.log is

INFO: compiling DTS and bootroms for Ariane (MAX_HARTS=1, UART_FREQ=66667000)...
dtc -I dts ariane.dts -O dtb -o ariane.dtb
riscv64-unknown-elf-gcc -Tlinker.ld bootrom.S -nostdlib -static -Wl,--no-gc-sections -o bootrom.elf
riscv64-unknown-elf-objcopy -O binary bootrom.elf bootrom.bin
dd if=bootrom.bin of=bootrom.img bs=128
python ./gen_rom.py bootrom.img
rm bootrom.bin bootrom.elf ariane.dtb
child process exited abnormally
    while executing
"exec make all 2> /dev/null"
    invoked from within
"if  {[info exists ::env(PITON_ARIANE)]} {
  puts "INFO: compiling DTS and bootroms for Ariane (MAX_HARTS=$::env(PITON_NUM_TILES), UART_FREQ=$env(CONFI..."
    (file "/home/max/Workarea/openpiton/piton/tools/src/proto/common/setup.tcl" line 121)

    while executing
"source $DV_ROOT/tools/src/proto/common/setup.tcl"
    (file "/home/max/Workarea/openpiton/piton/tools/src/proto/vivado/setup.tcl" line 30)

    while executing
"source $DV_ROOT/tools/src/proto/vivado/setup.tcl"
    (file "/home/max/Workarea/openpiton/piton/tools/src/proto/vivado/gen_project.tcl" line 33)
INFO: [Common 17-206] Exiting Vivado at Fri Dec 30 16:42:08 2022...

In my understanding, the failing make is the one executed under bootrom/baremetal, but if I run it manually I see no problem.
Any suggestion?
Thanks

Ariane : Translation Shift fix with Piton UART Stream

Hi,

On Genesys2 with the MIG with AXI interface:
I discussed with Jon the problem of the AXI address from the NoC with the UART boot option. Per his suggestion, it seems that on this line, for Ariane, the address might need to be shifted right by 6 bits to eventually achieve a null translation, so that pitonstream can load tests and work correctly.
The test worked properly with the fix.

print("assign bram_addr_0 = (({{(MEM_ADDR_WIDTH-VA_ADDR_WIDTH){1'b0}}, va_byte_addr} - 64'h%s) >> `ADDR_TRANS_PHYS_WIDTH_ALIGN) + 0;" % (memBegin));
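Numerically, the effect of the extra right shift can be illustrated like this (values here are assumptions for illustration; `ADDR_TRANS_PHYS_WIDTH_ALIGN` is taken to be 6, i.e. 64-byte alignment):

```python
# Illustration only: with the fix, the byte address is rebased to memBegin
# and then shifted right by ADDR_TRANS_PHYS_WIDTH_ALIGN (assumed 6 here),
# yielding a 64-byte-granular BRAM index and an effectively null translation.
def bram_index(va_byte_addr, mem_begin, align_shift=6):
    return (va_byte_addr - mem_begin) >> align_shift
```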

Thanks,
Raghav

OpenOCD error: Unknown RTOS type riscv

I've created a system with 2 CVA6 cores.
The implementation on Genesys2 is successful.
I've tried connecting OpenOCD using the ariane-multi-hart.cfg configuration file, but I get

../ariane-multi-hart.cfg:27: Error: Unknown RTOS type riscv, try one of: ThreadX, FreeRTOS, eCos, linux, chibios, Chromium-EC, embKernel, mqx, uCOS-III, nuttx, RIOT, Zephyr, hwthread,  or auto

Can you suggest how to modify the .cfg file, or which version of OpenOCD to use?

L1.5 and L2 Cache Configurations to vary their sizes and associativities

Dear Openpiton Maintainers,

I have successfully cloned the OpenPiton repository and attempted to reconfigure various parameters of both caches, one at a time, using the command-line interface as described in the OpenPiton simulation manual. However, despite compiling the OpenPiton code with VCS without any errors, it failed all tests in the regression test group named "tile1_mini". The specific parameters and the reasons for the test failures are as follows:

  1. L1.5 associativity: I reconfigured the L1.5 cache to be 8-way and 16-way associative, then compiled and ran the tests. All tests in this group failed due to "L15 having X's in the stall_s1 signal," per the report.
  2. L1.5 size: I attempted to reconfigure the L1.5 cache to various sizes; the maximum size that still passed the tests was 1MB.
  3. L2 associativity: I reconfigured the L2 cache to associativities of 16 and 8, and the tests passed.
  4. L2 size: I attempted to reconfigure the L2 cache to sizes of 2MB and 128KB, but despite compiling without any issues, the "tile1_mini" test group failed due to an "attempt to write X to register detected."

I would greatly appreciate your guidance in reconfiguring the caches correctly and resolving the issues to pass the failed tests.

Alignment dependency for AMO and uncached accesses

I ran into the following issue while trying to develop a core<->L1.5 transducer:
For the data returned from the L1.5 (i.e. data0, data1, data2, data3), the transducer seems to need to use the address to pick out the data needed for atomic operations (LR/SC) and uncached accesses, as opposed to the data replication that happens for a regular load or store.
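A minimal sketch of the word selection described above (an illustration, not OpenPiton code; the word width and count are assumptions):

```python
# Hypothetical illustration: for LR/SC and uncached loads, the requested
# doubleword must be picked out of the four returned data words using the
# low address bits, whereas regular cached loads see replicated data.
def select_word(data_words, addr, word_bytes=8):
    return data_words[(addr // word_bytes) % len(data_words)]
```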

@fei-g @Jbalkind, more clarity on why it is the way it is and any nuances involved would be great!

Building Ariane+OpenPiton Bitstream for Genesys2 Fails

I followed the README, that is:

  1. Setting PITON_ROOT
  2. Ran source $PITON_ROOT/piton/piton_settings.bash
  3. cd $PITON_ROOT/
  4. source piton/ariane_setup.sh
  5. piton/ariane_build_tools.sh

When I run protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr, it crashes with the following output:
protosyn -> .local_tool_wrapper: configsrch returned error code 126 Exiting ...
What caused this, and how can I debug it? If anyone has an idea, I would appreciate your help.

Ariane cache operations

Esteemed Colleagues!

I have three unrelated questions... Please forgive (and correct) me if this is not the best place to ask them... :-)

1.) How can an Ariane tile flush a write all the way to DDR?

  • Specifically, suppose an Ariane core wants to write a byte at address 0x8001_4200.

  • To BYPASS the L1 and L1.5 cache, it can ACTUALLY create an IO write by using physical address 0x80_8001_4200  (setting bit-39).

  • With that, the write data leaves the tile and sits dirty in the L2.

  • Next, as described in the microarchitecture specification for the L2, the Ariane core must do a specific ldub instruction to flush the cacheline from the L2 to DDR.

  • To compose that ldub instruction, it must know the index/way/tag/etc. information about where the line is in the L2.

  • My question now becomes ... what is the recipe to map a physical address to this special ldub instruction? How can the code running on the Ariane create the correct instruction?

  • Obviously, if I reconfigure the attributes of the L2, that mapping changes. How?

    1.5) If we turn on page tables, marking the page containing 0x8001_4200 as uncacheable, would the L2 respect that?
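For the index/tag part of the question, the generic decomposition can be sketched as follows (a hedged illustration assuming a physically indexed, set-associative L2 with 64-byte lines; the parameters are assumptions, not the recipe for the special ldub encoding, and the way cannot be computed from the address alone since it depends on runtime replacement state):

```python
# Illustration only (assumed parameters, not the ldub encoding recipe):
# decompose a physical address into the set index and tag of a
# set-associative cache. Changing size/ways/line size changes the split,
# which is why reconfiguring the L2 changes the mapping.
def index_and_tag(paddr, size_bytes=64 * 1024, ways=4, line_bytes=64):
    sets = size_bytes // (ways * line_bytes)
    index = (paddr // line_bytes) % sets
    tag = paddr // (line_bytes * sets)
    return index, tag
```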

2.) How can an Ariane core discard a line from the L1, L1.5, and L2?

  • Suppose an Ariane core is polling a location in DDR, but suppose something in the simulation infrastructure magically reaches into DDR and changes the byte being polled.

  • How can the Ariane see this new value?

  • Typically, we issue an invalidate cacheop instruction that asks the hardware to simply discard a clean line, forcing the next read to fetch from DDR.

    2.5) Again if we turned on page tables, and marked the page containing the polled location as uncacheable, would the L2 respect that?

3.) I want to add another small peripheral to the riscv_peripherals.sv module (and to the memory map). Can you point me to an example/recipe where someone has done something similar?

Thank you!

running a program on AWS f1 instance

Hello, I have read the instructions for working with OpenPiton on an AWS F1 instance and I have some questions:

  • To run a precompiled benchmark, does it need to be executed before uploading the OS, or does it not matter?
  • To run a precompiled benchmark, for example dhrystone, can I use this instruction:
    pitonstream -b f1 -d system dhrystone.riscv --core=ariane --precompiled ?
  • Where do I get the Linux image to upload to the OpenPiton design on the F1 instance? What do you mean by $FILE_LOCATION in ./dma_os $FILE_LOCATION?

Latch inference in dynamic_node

We've been getting some timing issues when pushing high frequencies on the new UltraScale+ FPGAs due to latch inference in a few parts of the design. @msfschaffner has fixed most of these but the one that we're uncertain about and want a good long-term solution to is dynamic_node. @msfschaffner has implemented several alternatives to the latch-inferring block from dynamic_node, one_of_five, and collected some data from synthesising that one block for ASIC.

@wentzlaf we discussed using the "zeroes" version of the patch. @msfschaffner has since implemented another option which is "array" - it's much better with respect to area and slack for ASIC synthesis, but the "error case" for this is left up to the tool. If we trust that DC will know that the design only uses the first 5 values, then the error case shouldn't be hit anyway.

We should decide which is best for the long term and go with that. @wentzlaf which is your preference?

Here are the synthesis results:

| variant | area [squmm] | delay [ps] | ts [ps] |
|---------|--------------|------------|---------|
| latch   | 2824.4       | 338.46     | 4252.8  |
| zeroes  | 2615.5       | 341.40     | 11029.2 |
| bypass  | 2626.0       | 351.62     | 16204.2 |
| array   | 2280.2       | 353.19     | 5091.3  |

The implementations are as follows (default latch version of the file is here):

module one_of_five_latch(in0,in1,in2,in3,in4,sel,out);
    parameter WIDTH = 8;
    parameter BHC = 10;
    input [2:0] sel;
    input [WIDTH-1:0] in0,in1,in2,in3,in4;
    output reg [WIDTH-1:0] out;
    always@(*)
    begin
        case(sel)
            3'd0:out=in0;
            3'd1:out=in1;
            3'd2:out=in2;
            3'd3:out=in3;
            3'd4:out=in4;
            default:; // indicates null
        endcase
    end
endmodule

// bypass
module one_of_five_bypass(in0,in1,in2,in3,in4,sel,out);
    parameter WIDTH = 8;
    parameter BHC = 10;
    input [2:0] sel;
    input [WIDTH-1:0] in0,in1,in2,in3,in4;
    output reg [WIDTH-1:0] out;
    always@(*)
    begin
        out=in0;
        case(sel)
            3'd0:out=in0;
            3'd1:out=in1;
            3'd2:out=in2;
            3'd3:out=in3;
            3'd4:out=in4;
            default:; // indicates null
        endcase
    end
endmodule

// zeroes
module one_of_five_zeroes(in0,in1,in2,in3,in4,sel,out);
    parameter WIDTH = 8;
    parameter BHC = 10;
    input [2:0] sel;
    input [WIDTH-1:0] in0,in1,in2,in3,in4;
    output reg [WIDTH-1:0] out;
    always@(*)
    begin
        out={WIDTH{1'b0}};
        case(sel)
            3'd0:out=in0;
            3'd1:out=in1;
            3'd2:out=in2;
            3'd3:out=in3;
            3'd4:out=in4;
            default:; // indicates null
        endcase
    end
endmodule

// array
module one_of_five_array(in0,in1,in2,in3,in4,sel,out);
    parameter WIDTH = 8;
    parameter BHC = 10;
    input [2:0] sel;
    input [WIDTH-1:0] in0,in1,in2,in3,in4;
    output [WIDTH-1:0] out;
    wire [5*WIDTH-1:0] tmp;
    assign tmp ={in4,in3,in2,in1,in0};
    assign out = tmp[sel*WIDTH +: WIDTH];
endmodule

piton/ariane_build_tools.sh uses obsolete riscv toolchain

The piton/ariane_build_tools.sh script uses the following repositories to build a toolchain;

Submodule 'riscv-binutils-gdb' (https://github.com/riscv/riscv-binutils-gdb.git) registered for path 'riscv-binutils-gdb'
Submodule 'riscv-dejagnu' (https://github.com/riscv/riscv-dejagnu.git) registered for path 'riscv-dejagnu'
Submodule 'riscv-gcc' (https://github.com/riscv/riscv-gcc.git) registered for path 'riscv-gcc'
Submodule 'riscv-glibc' (https://github.com/riscv/riscv-glibc.git) registered for path 'riscv-glibc'
Submodule 'riscv-newlib' (https://github.com/riscv/riscv-newlib.git) registered for path 'riscv-newlib'
Submodule 'riscv-qemu' (git://github.com/riscv/riscv-qemu.git) registered for path 'riscv-qemu'

Support for RISC-V has been merged into the upstream projects, so the upstream binutils and gcc repositories should be used instead. I believe qemu has also been merged upstream.

Random Output (Raw Binary) data when running Openpiton+Ariane on VCU118

I generated a new project for openpiton+ariane with the VCU118 is the target board using the following command:

protosyn -b vcu118 -d system --core=ariane --uart-dmw ddr --x_tiles=2 --y_tiles=2

Then, when trying to run any C source code via UART, I get random binary data printed on the terminal.

Are there any recommendations on what to do or modify inside the RTL to solve this problem?
PS: I'm using the openpiton-dev branch of the openpiton repo.
Thank you in advance.


JTAG Support VCU118

Hello everyone,
I want to know which kind of JTAG debugger is supported on the VCU118 board:
the Xilinx internal JTAG chain via the BSCANE2 primitive, or an external JTAG debugger?
Are any changes needed to fpga/ariane.cfg in the ariane repo?

Thank you all.

Building my own multicore processor

Hello! I have made a single-core implementation of a RISC-V based processor, which has been taped out. Now I want to move on to a multicore implementation. Your manual says in the "port your own RTL" section that it is coming soon and that I should contact you if I want to use my own RTL.

Is replacing Ariane with my own core possible? If so, how do I begin?

Pitonstream gives timeout when running RISCV benchmarks (Openpiton+Ariane)

I'm trying to run OpenPiton+Ariane.
I built it and programmed the FPGA successfully on a Genesys 2.0 board, and the simple hello_world.c test passes.
Running the RISC-V benchmark tests, however, gives a timeout.
I used the following command to run the RISC-V benchmark tests, knowing that the UART is detected at ttyUSB2:
"pitonstream -b genesys2 -d system -f $PITON_ROOT/piton/design/chip/tile/ariane/ci/riscv-benchmarks.list --core=ariane --precompiled -p ttyUSB2"

(PS: I tried changing "ASM_TIMEOUT_CYCLES" inside "chipset_define.vh" to 10^11 cycles rather than 5*10^9, but the timeout now occurs after 1500 seconds rather than after 75 seconds.)
I also need to understand the flow of the assembly tests that exercise traps. Inspecting "good_bad_trap_handler.s", I can see that the written assembly targets the OpenSPARC T1 processor, not the Ariane core.
What could be the problem, and how are the trap-handling tests done for the Ariane core?
Thanks.

Building Ariane+Openpiton Bitstream for Genesys2

I have issues generating the Ariane+Openpiton bitstream for a Genesys2 board.
I followed the README, that is:

  1. Setting PITON_ROOT
  2. Ran source $PITON_ROOT/piton/piton_settings.bash
  3. cd $PITON_ROOT/
  4. source piton/ariane_setup.sh
  5. piton/ariane_build_tools.sh

Okay, so far, so good.

Now I want to run protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr, which crashes with the following output:

[INFO]  protosyn,2.5:702: ----- System Configuration -----
[INFO]  protosyn,2.5:720: x_tiles   = 1
[INFO]  protosyn,2.5:721: y_tiles   = 1
[INFO]  protosyn,2.5:722: num_tiles = 1
[INFO]  protosyn,2.5:729: core      = ariane
[INFO]  protosyn,2.5:758: setenv RTL_ARIANE0
[INFO]  protosyn,2.5:775: network   = 2dmesh_config
[INFO]  protosyn,2.5:779: l15 size  = 8192
[INFO]  protosyn,2.5:780: l15 assoc = 4
[INFO]  protosyn,2.5:781: l1d size  = 8192
[INFO]  protosyn,2.5:782: l1d assoc = 4
[INFO]  protosyn,2.5:783: l1i size  = 16384
[INFO]  protosyn,2.5:784: l1i assoc = 4
[INFO]  protosyn,2.5:785: l2  size  = 65536
[INFO]  protosyn,2.5:786: l2  assoc = 4
[INFO]  protosyn,2.5:800: ---- Additional RTL Defines ----
[INFO]  protosyn,2.5:803: NO_RTL_CSM
[INFO]  protosyn,2.5:803: PITON_FPGA_MC_DDR3
[INFO]  protosyn,2.5:803: PITONSYS_MEM_ZEROER
[INFO]  protosyn,2.5:803: PITON_FPGA_SD_BOOT
[INFO]  protosyn,2.5:803: PITONSYS_UART_BOOT
[INFO]  protosyn,2.5:803: PITON_NO_CHIP_BRIDGE
[INFO]  protosyn,2.5:803: PITON_UART16550
[INFO]  protosyn,2.5:803: PITON_FPGA_ETHERNETLITE
[INFO]  protosyn,2.5:805: --------------------------------
[INFO]  protosyn,2.5:874: Generating UART init sequence
[INFO]  protosyn,2.5:631: Using core clock frequency: 66.667 MHz
[INFO]  protosyn,2.5:285: Building a project for design 'system' on board 'genesys2'
[INFO]  protosyn,2.5:330: Running FPGA implementation down to bitstream generation
[INFO]  protosyn,2.5:927: Checking Project Build results
[ERROR] fpga_lib.py:344: Vivado Project was not created properly!
[ERROR] fpga_lib.py:345: Check: /home/jan/Desktop/openpiton/build/genesys2/system/protosyn_logs/make_project.log

The logfile contains the following hint:

INFO: compiling DTS and bootroms for Ariane (MAX_HARTS=1, UART_FREQ=66667000)...
INFO: done
INFO: generating PLIC for Ariane (2 targets, 2 sources)...
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'

Current thread 0x00007f5cabe03740 (most recent call first):
    while executing
"exec ./gen_plic_addrmap.py -t $NUM_TARGETS -s $NUM_SOURCES > plic_regmap.sv"
    invoked from within
"if  {[info exists ::env(PITON_ARIANE)]} {
  puts "INFO: compiling DTS and bootroms for Ariane (MAX_HARTS=$::env(PTON_NUM_TILES), UART_FREQ=$env(CONFIG..."
    (file "/home/jan/Desktop/openpiton/piton/tools/src/proto/common/setup.tcl" line 114)

    while executing
"source $DV_ROOT/tools/src/proto/common/setup.tcl"
    (file "/home/jan/Desktop/openpiton/piton/tools/src/proto/vivado/setup.tcl" line 30)

    while executing
"source $DV_ROOT/tools/src/proto/vivado/setup.tcl"
    (file "/home/jan/Desktop/openpiton/piton/tools/src/proto/vivado/gen_project.tcl" line 33)
INFO: [Common 17-206] Exiting Vivado at Wed Dec 11 13:48:21 2019...

I have checked my Python installation, removed the environment variables PYTHONHOME and PYTHONPATH, and also tried running the command in a screen environment, as suggested by Google. By the way, the Python script runs just fine when called separately. I do realize this might very well be an issue with my environment setup, but I have no idea what to try next. So if anyone has an idea, I would appreciate your help.

Thank you.

DMA error when running on AWS instance

Hello,

I am trying to run the AWS F1 instance setup.
I am using f1.2xlarge instance type with AWS FPGA operating system.

I am following the instructions from here: https://github.com/PrincetonUniversity/openpiton#running-openpiton-simulations-on-f1-instances-in-aws-step-guide

But when I run ./dma_os $FILE_LOCATION, I get the following error:

[root@ip-10-0-9-104 src]# ./dma_os $FILE_LOCATION
2020-06-10T18:13:24.765435Z, test_dram_dma, INFO, dma_os.c +65: main(): Checking to see if the right AFI is loaded...
2020-06-10T18:13:24.778029Z, test_dram_dma, INFO, dma_os.c +159: check_slot_config(): Operating on slot 0 with id: 0000:00:1d.0
2020-06-10T18:13:24.780280Z, test_dram_dma, ERROR, dma_os.c +212: dma_os(): DMA write failed
2020-06-10T18:13:24.780371Z, test_dram_dma, ERROR, dma_os.c +77: main(): OS DMA failed!
2020-06-10T18:13:24.780377Z, test_dram_dma, INFO, dma_os.c +87: main(): Memory initialization FAILED

here are the other things I did already:

  1. I ran hdk_setup.sh and sdk_setup.sh from https://github.com/aws/aws-fpga.
  2. I ran steps from https://github.com/Xilinx/dma_ip_drivers/tree/master/XDMA/linux-kernel

dtc tool needed but doesn't seem to be in the requirements?

Following the instructions in the README.md:

cd $PITON_ROOT/
source piton/ariane_setup.sh
piton/ariane_build_tools.sh
cd $PITON_ROOT/build
sims -sys=manycore -x_tiles=1 -y_tiles=1 -msm_build -ariane

You get the following output:

install -p -m 644 `find isa -maxdepth 1 -type f` /tmp/riscv-tests/build/share/riscv-tests/isa                                  
install -p -m 644 `find benchmarks -maxdepth 1 -type f` /tmp/riscv-tests/build/share/riscv-tests/benchmarks                   
                                                                                                              
----------------------------------------------------------------------                                                        
build complete                                                                                                            
----------------------------------------------------------------------                                                   
                                                                                                                      
tansell@tansell:~/github/PrincetonUniversity/openpiton$ cd $PITON_ROOT/build
tansell@tansell:~/github/PrincetonUniversity/openpiton/build$ sims -sys=manycore -x_tiles=1 -y_tiles=1 -msm_build -ariane
/usr/local/google/home/tansell/github/PrincetonUniversity/openpiton/piton/design/chip/tile/ariane/Flist.ariane
compiling DTS and bootroms for Ariane...
rm -f bootrom.img bootrom.sv ariane.dtb
dtc -I dts ariane.dts -O dtb -o ariane.dtb
make: dtc: Command not found
Makefile:20: recipe for target 'ariane.dtb' failed
make: *** [ariane.dtb] Error 127
sims: Caught a SIG Error compiling DTS for ariane. at /usr/local/google/home/tansell/github/PrincetonUniversity/openpiton/piton/tools/src/sims/sims,2.0 line 1246.
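
The DTS/bootrom step shells out to dtc, so it helps to check for it before running sims. A minimal check, assuming a Debian/Ubuntu-style distro where the package is device-tree-compiler:

```shell
# Check whether the device-tree compiler is on PATH before building.
if command -v dtc >/dev/null 2>&1; then
    DTC_MSG="dtc found: $(command -v dtc)"
else
    DTC_MSG="dtc missing - try: sudo apt-get install device-tree-compiler"
fi
echo "$DTC_MSG"
```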

General questions

  • Do you guys sell 64-bit processors?
  • Do you have something like Intel ME in your CPUs?
  • Is this good for gaming PCs?

missing file ariane.dts when generating bitstream for vcu118

I followed these instructions, as mentioned in the README:

export PITON_ROOT=$PWD
source $PITON_ROOT/piton/piton_settings.bash
source piton/ariane_setup.sh
piton/ariane_build_tools.sh

and I got no errors (except that the aws submodule could not be cloned).
Then I executed
protosyn -b vcu118 -d system --core=ariane --uart-dmw ddr
and got an error. I checked the log file:
Traceback (most recent call last):
  File "<string>", line 18, in <module>
  File "/home/nechi/openpiton/piton/tools/bin/riscvlib.py", line 330, in gen_riscv_dts
    with open(dtsPath + '/ariane.dts','w') as file:
IOError: [Errno 2] No such file or directory: '/home/nechi/openpiton/piton/design/chip/tile/ariane//openpiton/bootrom//ariane.dts'
while executing
"exec pyhp.py ${PYV_IMPL_FILE} > ${GEN_RTL_IMPL_FILE}" ...
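
One workaround sketch, under the assumption that the only problem is the missing output directory that gen_riscv_dts wants to write ariane.dts into (path taken from the error message):

```shell
# Create the bootrom output directory by hand before rerunning protosyn.
PITON_ROOT=${PITON_ROOT:-$PWD}   # assumes you are in the repo root
BOOTROM_DIR="$PITON_ROOT/piton/design/chip/tile/ariane/openpiton/bootrom"
mkdir -p "$BOOTROM_DIR"
echo "created $BOOTROM_DIR"
```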

Build script doesn't check /scratch exists or is writable

tansell@tansell-glaptop:~/github/PrincetonUniversity/openpiton$ source piton/ariane_setup.sh 

----------------------------------------------------------------------
openpiton/ariane path setup
----------------------------------------------------------------------

make sure that you source this script in a bash shell in the root folder of OpenPiton

----------------------------------------------------------------------
setup complete. do not forget to run the following script             
if you run the setup for the first time: ./piton/ariane_build_tools.sh
----------------------------------------------------------------------

tansell@tansell-glaptop:~/github/PrincetonUniversity/openpiton$ ./piton/ariane_build_tools.sh 

----------------------------------------------------------------------
building RISCV toolchain and tests (if not existing)
----------------------------------------------------------------------
.....
g++-7 -L.  -Wl,-rpath,/scratch/tansell/riscv_install/lib  -o elf2hex elf2hex.o  -lfesvr -lpthread 
../scripts/mk-install-dirs.sh /scratch/tansell/riscv_install/include/fesvr
mkdir /scratch
mkdir: cannot create directory ‘/scratch’: Permission denied
mkdir /scratch/tansell
mkdir: cannot create directory ‘/scratch/tansell’: No such file or directory
mkdir /scratch/tansell/riscv_install
mkdir: cannot create directory ‘/scratch/tansell/riscv_install’: No such file or directory
mkdir /scratch/tansell/riscv_install/include
mkdir: cannot create directory ‘/scratch/tansell/riscv_install/include’: No such file or directory
mkdir /scratch/tansell/riscv_install/include/fesvr
mkdir: cannot create directory ‘/scratch/tansell/riscv_install/include/fesvr’: No such file or directory
Makefile:336: recipe for target 'install-hdrs' failed
make: *** [install-hdrs] Error 1
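
A pre-flight check with a home-directory fallback can avoid this. A sketch, assuming the toolchain install prefix is controlled by a RISCV-style environment variable (the conventional name for the RISC-V toolchain scripts; check what ariane_setup.sh actually exports on your machine):

```shell
# Fall back to $HOME when /scratch does not exist or is not writable.
if [ -d /scratch ] && [ -w /scratch ]; then
    export RISCV="/scratch/$USER/riscv_install"
else
    export RISCV="$HOME/riscv_install"   # /scratch missing or not writable
fi
mkdir -p "$RISCV"
echo "toolchain install dir: $RISCV"
```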

benchmarking one ariane core in openpiton

Hi, I want to run Dhrystone and CoreMark on an Ariane core using the XCVU9P (the FPGA in an AWS F1 instance).
Is it possible to run OpenPiton with only one Ariane core and run the benchmarks?
Can I configure the L1-I and L1-D sizes?
Are the RS, BTB, and BHT configurable for each Ariane core?
Thank you

plic_regmap.sv

Hi,
The auto-generated plic_regmap does not seem to be regenerated when the number of interrupts is updated.
Why are the ie_* signals [2:0] per target, while in plic_top it is "number of interrupt sources + 1" per target?
I updated ariane_device.xml and also riscvpylib for the new device.
Thanks,
raghav

Implementation failed

When I run protosyn -b genesys2 -d system --core=ariane --uart-dmw ddr, it crashes with the following output:
[INFO] protosyn,2.5:702: ----- System Configuration -----
[INFO] protosyn,2.5:720: x_tiles = 1
[INFO] protosyn,2.5:721: y_tiles = 1
[INFO] protosyn,2.5:722: num_tiles = 1
[INFO] protosyn,2.5:729: core = ariane
[INFO] protosyn,2.5:732: defining RTL_TILE0
[INFO] protosyn,2.5:762: setenv RTL_ARIANE0
[INFO] protosyn,2.5:780: network = 2dmesh_config
[INFO] protosyn,2.5:784: l15 size = 8192
[INFO] protosyn,2.5:785: l15 assoc = 4
[INFO] protosyn,2.5:786: l1d size = 8192
[INFO] protosyn,2.5:787: l1d assoc = 4
[INFO] protosyn,2.5:788: l1i size = 16384
[INFO] protosyn,2.5:789: l1i assoc = 4
[INFO] protosyn,2.5:790: l2 size = 65536
[INFO] protosyn,2.5:791: l2 assoc = 4
[INFO] protosyn,2.5:805: ---- Additional RTL Defines ----
[INFO] protosyn,2.5:808: NO_RTL_CSM
[INFO] protosyn,2.5:808: PITON_FPGA_MC_DDR3
[INFO] protosyn,2.5:808: PITONSYS_MEM_ZEROER
[INFO] protosyn,2.5:808: PITON_FPGA_SD_BOOT
[INFO] protosyn,2.5:808: PITONSYS_UART_BOOT
[INFO] protosyn,2.5:808: PITON_NO_CHIP_BRIDGE
[INFO] protosyn,2.5:808: PITON_UART16550
[INFO] protosyn,2.5:808: PITON_FPGA_ETHERNETLITE
[INFO] protosyn,2.5:810: --------------------------------
[INFO] protosyn,2.5:879: Generating UART init sequence
[INFO] protosyn,2.5:631: Using core clock frequency: 66.667 MHz
[INFO] protosyn,2.5:285: Building a project for design 'system' on board 'genesys2'
[INFO] protosyn,2.5:330: Running FPGA implementation down to bitstream generation
[INFO] protosyn,2.5:932: Checking Project Build results
[INFO] fpga_lib.py:348: Project was build successfully!
[INFO] protosyn,2.5:939: Checking Project Implementation results
Traceback (most recent call last):
  File "/home/shancheng/PITON_ROOT/piton/tools/src/proto/protosyn,2.5", line 949, in <module>
    main()
  File "/home/shancheng/PITON_ROOT/piton/tools/src/proto/protosyn,2.5", line 940, in main
    if not implFlowSuccess(rc_dir.log, rc_dir.run):
  File "/home/shancheng/PITON_ROOT/piton/tools/src/proto/fpga_lib.py", line 365, in implFlowSuccess
    if not strInFile(fpath, ["synth_design completed successfully"]):
  File "/home/shancheng/PITON_ROOT/piton/tools/src/proto/fpga_lib.py", line 330, in strInFile
    f = open(fpath, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/home/shancheng/PITON_ROOT/build/genesys2/system/genesys2_system/genesys2_system.runs/synth_1/runme.log'

So I don't know why runme.log is missing or how to find the reason. If anyone has an idea, please help me.

General questions about benchmarking

I'm interested in doing some custom benchmarks on OpenPiton + Ariane.

Is there a way to get the l1.5 and l2 miss rate from the simulations? Running

sims -sys=manycore -x_tiles=1 -y_tiles=1 -vlt_run -ariane dhrystone.riscv -precompiled -rtl_timeout=10000000 -post_process_cmd="perf > perf.log"

gives me an empty perf.log file. I see no performance results besides the cycle count printed from within the dhrystone ELF (which reflects the performance counter results and contains no information about the L1.5 and L2).

Thank you.

Boot linux on xilinx vcu108 board

Hi everyone, I'm trying to boot Linux on a VCU108 dev board. The VCU108 is almost the same as the VCU118, but the system gets stuck at "copying block 0 of 1 blocks (0 %)".
Here is what I have already done to port the vcu118 project to the vcu108:

  1. Generated a vcu118 FPGA proto project (Ariane core).
  2. Copied all the files into the Vivado project.
  3. Changed the board type and upgraded the IPs
    (the DDR4 MIG IP uses a 300 MHz clock).
  4. Modified the I/O port constraints.
  5. Deleted the (IOB = TRUE) setting in "sd_cmd_serial_host.v" and "sd_data_serial_host.v" because of some errors.
  6. Ran synthesis and implementation, and generated the bitstream.
  7. Connected a 3rd-party SD slot (with several 470 ohm resistors changed) and prepared an SD card.
    (The connection between the dev board PMOD and the SD slot's interface is jumper wires; does that affect the system?)
  8. Downloaded the bit file into the FPGA and watched what happened on the UART screen.

Problems:

  1. How can I lower the frequency?
    For example, if I want to use an 80 MHz clock rather than 100 MHz, I need to change the MMCM IP setting and the mmcm_chip IP setting. For a UART baud rate of 115200, the number (in hex) in "uart_data.coe" also needs to be changed.
    But after that, all I can see via UART is garbled output.
  2. Both the vcu118 and vcu108 have a micro SD slot that is not connected to an FPGA package pin, so we need an external SD slot. But my system is stuck at "copying block 0 of 1 blocks (0 %)"; it seems the system cannot read anything from the SD card.

Any suggestions for solving these problems would be appreciated.
Thanks :)

Error:Can not open serial device

Hi,

I got errors by using pitonstream command to run tests on the board.
"Can not open serial device /dev/ttyUSB1"
"Provide correct device name using -p option"
But USB1 is actually the right serial port.
How could I solve this problem?
Thank you all.
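
A couple of quick permission checks often resolve this. A sketch, assuming the device name from the error message; on most Linux distributions the serial device is owned by the dialout group:

```shell
# Check that the device node exists and that you can access it.
DEV=/dev/ttyUSB1
if [ -e "$DEV" ]; then
    ls -l "$DEV"   # owning group is typically 'dialout'
    # If you are not in that group: sudo usermod -a -G dialout "$USER"
    # (then log out and back in). Also check no other process holds the port.
    DEV_MSG="$DEV present"
else
    DEV_MSG="$DEV not present - check dmesg for the actual ttyUSB number"
fi
echo "$DEV_MSG"
```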

Ariane toolchain is missing when compiling the benchmarks

I am trying to do a fresh installation of OpenPiton, and I did the following:

$ git clone <openpiton repo>
$ git checkout -b openpiton-dev
$ export PITON_ROOT=path to openpiton
$ source $PITON_ROOT/piton/piton_settings.bash
$ cd  $PITON_ROOT
$ source piton/ariane_setup.sh
$ piton/ariane_build_tools.sh

At this point I get an error during the benchmarks compilation :

make[1]: riscv64-unknown-elf-gcc: Command not found
make[1]: *** [/home/rk-vcu118/Desktop/openpiton/piton/design/chip/tile/ariane/tmp/riscv-tests/build/../benchmarks/Makefile:54: median.riscv] Error 127
make[1]: Leaving directory '/home/rk-vcu118/Desktop/openpiton/piton/design/chip/tile/ariane/tmp/riscv-tests/build/benchmarks'
make: *** [Makefile:25: benchmarks] Error 2

How can I fix this issue? Thank you.
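
This usually means the toolchain's bin/ directory is not on PATH in the current shell. A sketch; the install prefix below is an assumption, so substitute whatever ariane_build_tools.sh reported as the install location on your machine:

```shell
# Put the RISC-V toolchain's bin/ directory on PATH and verify the compiler.
export RISCV=${RISCV:-$HOME/riscv_install}   # assumed prefix
export PATH="$RISCV/bin:$PATH"
GCC_PATH=$(command -v riscv64-unknown-elf-gcc || echo "not found - rerun piton/ariane_build_tools.sh")
echo "riscv64-unknown-elf-gcc: $GCC_PATH"
```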

Support for VCU118

I can see that there is already a folder for the VCU118 board, but the documentation mentions that only three boards are supported, and the VCU118 is not among them.
So is it really supported or not?

Test compilation failed

Hey,

When I try to execute the pitonstream command for the dhrystone benchmark, I get this error:

command:
pitonstream -b vcu118 -d system -f ./piton/design/chip/tile/ariane/tmp/riscv-tests/build/benchmarks/dhrystone.riscv --core=ariane -precompiled -p ttyUSB1 --no_wait_fpga_config

console output:

dhrystone.c
[INFO]  pitonstream,1.0:372: UART DIV Latch value: 0x36
[INFO]  pitonstream,1.0:375: Configuring port /dev/ttyUSB1
[INFO]  pitonstream,1.0:164: UART will be configured for 115200 baud rate
[INFO]  pitonstream,1.0:402: Running dhrystone.c: 1 out of 1 test
[INFO]  pitonstream,1.0:287: Compiling dhrystone.c
sims -sys=manycore -novcs_build -midas_only               -midas_args='-DUART_DIV_LATCH=0x36 -DFPGA_HW -DCIOP -DNO_SLAN_INIT_SPC' dhrystone.c -ariane -uart_dmw -x_tiles=1 -y_tiles=1
[ERROR] pitonstream,1.0:409: Test compilation failed
[ERROR] pitonstream,1.0:410: Skipping dhrystone.c
[ERROR] pitonstream,1.0:411: See /home/rk-vcu118/Desktop/openpiton/build/uart_piton.log for more information

Any help solving this problem would be appreciated.

SD card issue on VCU118

Hi everyone,
I am trying to get this project working on a VCU118.
My SD card: SanDisk microSD 16 GB.
But I always get the following warning when preparing the SD card with the command sudo sgdisk --clear --new=1:2048:67583 --new=2 --typecode=1:3000 --typecode=2:8300 /dev/sdb:


Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.


Warning! Secondary partition table overlaps the last partition by
33 blocks!
You will need to delete this partition or resize it in another utility.
Non-GPT disk; not saving changes. Use -g to override.

And the system gets stuck at
sd initialized!
initializing SD...
sd initialized!
copying block 0 of 1 blocks (0 %)

Is anything wrong with this SD card? Should I change it, and if so, to which one? Could you please give me some advice? Must I use SD boot if I just want to run some benchmarks on my board? Are there any other choices?

Thanks
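
The "Found invalid GPT and valid MBR" message means a stale MBR is still on the card; wiping both partition tables first lets sgdisk create a clean GPT. The commands are printed as a dry run below because they destroy all data on the target device: confirm the device with lsblk, then drop the echo to actually run them. /dev/sdb is the example device from the question:

```shell
# Dry-run: print the sgdisk commands that wipe and repartition the card.
DEV=/dev/sdb
ZAP_CMD="sudo sgdisk --zap-all $DEV"
NEW_CMD="sudo sgdisk --clear --new=1:2048:67583 --new=2 --typecode=1:3000 --typecode=2:8300 $DEV"
echo "$ZAP_CMD"
echo "$NEW_CMD"
```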

VERILATOR_ROOT does not work with installed Verilator

I ran ./configure --prefix=/opt/verilator and did a make install. Then I set VERILATOR_ROOT to /opt/verilator and got the following error:

i_fmt_slice.gen_num_lanes[0].active_lane.lane_instance.i_fma.output_pipline.i_output_pipe.stage_ready
sims: make -j -C /usr/local/google/home/tansell/github/PrincetonUniversity/openpiton/build/manycore/rel-0.1/obj_dir -f Vcmp_top.mk Vcmp_top
make: Entering directory '/usr/local/google/home/tansell/github/PrincetonUniversity/openpiton/build/manycore/rel-0.1/obj_dir'
Vcmp_top.mk:65: /opt/verilator/include/verilated.mk: No such file or directory
make: *** No rule to make target '/opt/verilator/include/verilated.mk'.  Stop.
make: Leaving directory '/usr/local/google/home/tansell/github/PrincetonUniversity/openpiton/build/manycore/rel-0.1/obj_dir'

The makefile is actually at /opt/verilator/share/verilator/include/verilated.mk.

If I instead set VERILATOR_ROOT to the source directory, everything works.
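
This matches how Verilator documents VERILATOR_ROOT: it is intended for running Verilator out of its source tree, where include/verilated.mk sits at the top level; in an installed tree it lives under share/verilator/include/, as found above. For an installed Verilator, the usual fix is to leave VERILATOR_ROOT unset and rely on PATH:

```shell
# Use an installed Verilator via PATH instead of VERILATOR_ROOT.
unset VERILATOR_ROOT
export PATH=/opt/verilator/bin:$PATH   # prefix from ./configure above
VL_MSG=$(verilator --version 2>/dev/null || echo "verilator not on PATH")
echo "$VL_MSG"
```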

Noc output data path with chip_bridge

In the chipset.v file, there is a set of noc_data/noc_valid I/Os for the case without chip_bridge.

Is there a way to use those with chip_bridge? I want to use those noc_data/noc_valid signals as inputs to my own module. I can see that the path goes through credit_to_valrdy, chipset_impl, and fpga_bridge, which is kind of confusing. Can someone help with the noc_data path?

How can we determine SRAM specifications

Hi, I am trying to synthesize OpenPiton and perform back-end implementation for a 22nm technology node, following the manual. I have a question regarding SRAM integration.

The flow works well up to the synthesis stage, and I get synthesis results without SRAMs. Now I am trying to integrate SRAMs into the system, but how do I determine the specifications of the SRAMs (depth, bit width, etc.)? I looked into the module setup scripts (${PITON_ROOT}/piton/design/chip/tile/sparc/ffu/synopsys/script/module_setup.tcl, etc.), but they do not mention the size of the SRAM blocks.

My understanding is that the SRAM size depends on the cache configuration in design_setup.tcl. But is there a way to determine the required SRAM size? I tried to look into the generated Verilog files, but did not find many clues. Please let me know if you have any suggestions. Thank you!
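
As a rough sketch, the per-way data-array dimensions fall out of the cache configuration as depth = size / (assoc * line_bytes). The size and associativity below mirror the L2 defaults that protosyn prints elsewhere in this thread (65536 B, 4-way); the 64-byte line size is an assumption, and tag/state arrays are extra, so check design_setup.tcl and the generated RTL for the real values:

```shell
# Derive per-way SRAM data-array dimensions from a cache configuration.
SIZE=65536; ASSOC=4; LINE=64
DEPTH=$((SIZE / (ASSOC * LINE)))   # entries (lines) per way
WIDTH=$((LINE * 8))                # data bits per entry, excluding tag/state
echo "L2 data array: $ASSOC ways x $DEPTH entries x $WIDTH bits"
```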
