
haddoc2's Introduction

Haddoc2 : Hardware Automated Dataflow Description of CNNs

Haddoc2 is a tool that automatically designs FPGA-based hardware accelerators for convolutional neural networks (CNNs). From a Caffe model, Haddoc2 generates a hardware description of the network (in VHDL-2008) that is vendor- and device-independent. Haddoc2 is built upon the principles of dataflow stream-based processing and implements CNNs using a Direct Hardware Mapping approach, in which all the actors involved in CNN processing are physically mapped onto the FPGA.

More implementation details can be found in this technical report and in this paper. If you find Haddoc2 useful in your research, please consider citing the following paper:

@article{Abdelouahab17,
author = {Abdelouahab, Kamel and Pelcat, Maxime and Serot, Jocelyn and Bourrasset, Cedric and Berry, Fran{\c{c}}ois},
doi = {10.1109/LES.2017.2743247},
issn = {19430663},
journal = {IEEE Embedded Systems Letters},
keywords = {CNN,Dataflow,FPGA,VHDL},
pages = {1--4},
title = {Tactics to Directly Map CNN graphs on Embedded FPGAs},
url = {http://ieeexplore.ieee.org/document/8015156/},
year = {2017}}

For a short demo of the tool, see here.

Pre-requisite

  • Caffe: a simple CPU-only build is sufficient.
  • Quartus II or Vivado (optional): to compile and synthesize your design.
  • GPStudio FPGA (optional): Haddoc2-generated accelerators are compatible with GPStudio, a tool-chain to deploy image processing applications on FPGA-based smart cameras.

Execution

To run Haddoc2, use the wrapper scripts in the bin/ directory, or call the main script directly:

python ../lib/haddoc2.py \
       --proto=<path to caffe prototxt> \
       --model=<path to caffe model> \
       --out=<output directory> \
       --nbits=<fixed point format. Default nbits=8>
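
For instance, a concrete invocation for the bundled LeNet-5 example (run from the example/ directory; the paths and the 5-bit fixed-point width match the walkthrough below) could look like this:

# Generate the VHDL description of LeNet-5 from its Caffe prototxt and
# trained caffemodel, using a 5-bit fixed-point representation.
python3 ../lib/haddoc2.py \
       --proto=./caffe/lenet.prototxt \
       --model=./caffe/lenet.caffemodel \
       --out=./hdl_generated \
       --nbits=5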

Note that Haddoc2 needs to know where your Caffe and Haddoc2 installation directories are. Please set the following environment variables, for instance in your .bashrc file on Linux:

export CAFFE_ROOT="$HOME/caffe"
export HADDOC2_ROOT="$HOME/dev/haddoc2"

The components required to implement the supported CNN layers can be found in the lib/hdl/ directory. Important: be sure to synthesize your project in VHDL-2008 mode!
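
For Quartus, one way to enforce this (a minimal sketch, assuming the generated project is named cnn_process as in the example below) is to append the VHDL-2008 input-version assignment to the project's .qsf file; Vivado users can set the equivalent VHDL 2008 file type on the generated sources.

# Tell Quartus to compile the design files as VHDL-2008
echo 'set_global_assignment -name VHDL_INPUT_VERSION VHDL_2008' >> cnn_process.qsf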

Generating an example

The example/ directory contains a pre-trained BVLC Caffe model of the LeNet-5 CNN. Please use the provided Makefile to test Haddoc2.

  • make hdl generates the VHDL description of the CNN
  • make quartus_proj creates a simple Quartus II project to implement LeNet on an Intel Cyclone V FPGA
  • make compile launches the Quartus tools to compile and synthesize the design. This command requires the quartus binary to be on your PATH.
cd $HADDOC2_ROOT/example
make hdl
>> Haddoc2 CNN parameter parser:
  prototxt: ./caffe/lenet.prototxt
  caffe model: ./caffe/lenet.caffemodel
  vhdl out: ./hdl_generated
  bit width : 5
>> Generated toplevel file: ./hdl_generated/cnn_process.vhd
make quartus_proj
>> Succefully generated quartus project
make compile
>> quartus_map cnn_process -c cnn_process
...

TODO

  1. Add support of BatchNorm / Sigmoid / ReLU layers
  2. Implement Dynamic Fixed Point Arithmetic
  3. Support conv layers with sparse connections (such as AlexNet's conv2 layer, where each neuron is connected to only half of conv1's outputs, i.e. n_outputs(layer-1) != n_inputs(layer))


haddoc2's Issues

Too many LEs

I'm having trouble compiling the Quartus project from the Makefile, so I tried opening it in the Quartus GUI and running it there. It generates 402239 registers, which seems close to the number reported in the tech report for the implementation using logic elements without tailored multipliers, and is too many to fit onto a Cyclone IV. Does compiling the project from the Makefile rather than from the Quartus GUI fix this somehow, or is this unintended?

Might not be an issue, but thanks for any help.

No module named 'parseNetParams'

Hello,

Whenever I try to build the example with make hdl, I get this error:
python3 ../lib/haddoc2.py \
       --proto=./caffe/lenet.prototxt \
       --model=./caffe/lenet.caffemodel \
       --out=./hdl_generated \
       --nbits=5
Traceback (most recent call last):
  File "../lib/haddoc2.py", line 34, in <module>
    import parseNetParams
ModuleNotFoundError: No module named 'parseNetParams'
Makefile:11: recipe for target 'hdl' failed
make: *** [hdl] Error 1
Do you know what causes it?

TanhLayer.vhdl correct?

Is the logic in TanhLayer.vhdl correct? It just bit-slices the SUM_WIDTH-wide sum down to BITWIDTH bits, taking only the least significant BITWIDTH bits, and does not even consider the sign bit.

Data protocol

Hello!
Could you share the data protocol for cnn_process.vhd? A waveform would be great!
What are in_dv and in_fv?

MOA pipeline

Hi,
The neighbourExtractor does not pass the data valid and frame valid signals correctly when simulated. Also, can you tell me the purpose of the frame valid and data valid signals in the first convolution layer?

KERNEL_VALUE mismatch

Hi, I have successfully generated the VHDL code and tried to synthesize it in Xilinx Vivado, but I got the error messages below:
[Synth 8-421] mismatched array sizes in rhs and lhs of assignment [params.vhd:65]
[Synth 8-78] a value must be associated with generic KERNEL_VALUE [cnn_process.vhd:89]
[Synth 8-318] illegal unconstrained array generic 'KERNEL_VALUE' [cnn_process.vhd:89]
[Synth 8-285] failed synthesizing module 'cnn_process' [cnn_process.vhd:27]

I see that the number of KERNEL_VALUE entries for conv layer 2 is reduced by half, which causes the error, but I could not figure out the reason behind this.
It makes me a little confused. Is there any suggestion?

Neighborhood model has a bug

The out_data, out_dv and out_fv signals do not match for a 30x30 image with a 3x3 kernel in a ModelSim simulation.

missing bitwidths.vhd

Hello,

The cnn_types.vhd package uses work.bitwidths, but I am not able to find bitwidths.vhd:

library ieee;
use ieee.numeric_std.all;
use ieee.std_logic_1164.all;
use ieee.math_real.all;
use work.bitwidths.all;
