awai54st / PYNQ-Classification
Python on Zynq FPGA for Convolutional Neural Networks
License: BSD 2-Clause "Simplified" License
Both of the links provided for downloading the SD card image are not working for us. Also, MANUAL_INSTALL.md has a lot of issues. We can't move forward; can you help?
Hi,
I was looking into your LeNet_wrapper design, and I don't really understand what SMM does or what its parameters mean, since I noticed that SCIG performs the convolution operations.
Thank you.
Hello,
I am currently researching how to port CNN handwritten-digit recognition to the ZedBoard. For the convolution, did you use HLS or Verilog? I hope you can give me some direction. Thank you very much.
Excuse me,
How can I classify one of my own pictures with the CIFAR-10 example?
Hi, I am a new PYNQ developer, and I am considering deploying a CNN on PYNQ. Your work is really impressive to me, but I am still confused about the design process of the whole work. In other words, what should I do if I want to deploy my own CNN on PYNQ? Could you please give me some advice on the process, from training a network to deploying it on PYNQ? It would be very kind of you. I am looking forward to your reply. Thanks!
I followed the instructions from https://github.com/awai54st/PYNQ-Classification, but encountered the following error when I tried "make all" after copying the Makefile.config from PYNQ to the Caffe root.
I've tried methods like adding opencv_imgcodecs to the LIBRARIES += line in the Makefile, but the problem remained.
What am I doing wrong?
CXX src/caffe/solvers/sgd_solver.cpp
CXX src/caffe/solvers/adagrad_solver.cpp
AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so.1.0.0
/usr/bin/ld: cannot find -lopencv_imgcodecs
/usr/bin/ld: cannot find -lopencv_imgcodecs
/usr/bin/ld: cannot find -lboost_python3
/usr/bin/ld: cannot find -lopenblas
collect2: error: ld returned 1 exit status
Makefile:572: recipe for target '.build_release/lib/libcaffe.so.1.0.0' failed
make: *** [.build_release/lib/libcaffe.so.1.0.0] Error 1
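The four "cannot find -l…" errors above usually indicate missing development packages on the board rather than a Makefile bug. A minimal sketch of the usual fix follows; the package names are assumptions for an Ubuntu-based PYNQ root filesystem:

```shell
# Install the -dev packages that provide the missing shared libraries.
# Package names are assumptions for an Ubuntu-based PYNQ root filesystem.
sudo apt-get update
sudo apt-get install -y libopencv-dev libboost-python-dev libopenblas-dev

# Verify the linker can now see the libraries:
ldconfig -p | grep -E 'opencv_imgcodecs|boost_python|openblas'
```

One caveat: libopencv_imgcodecs only exists from OpenCV 3 onward; if the image ships OpenCV 2.x, remove opencv_imgcodecs from the LIBRARIES line instead of trying to install it.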
Excuse me,
Can you tell me the meaning of each parameter in SMM<1,75,32>(connect_1, connect_2, 1, 0, 25)?
Does this work on a Cyclone IV too, or only on Xilinx devices?
I have noticed that there is no FC module in the sketchpad in your design. Isn't it necessary?
Hi, Mr. Wang. I'm now reading the im2col HLS code. I want to know why the size of 'Initial_buffer' was determined as
Initial_buffer = MIN(Initial_lines * (IFMPadDim), IFMPadDim * IFMPadDim - 1);
and why there is an 'additional_lines' term in Initial_buffer:
additional_lines = IFMPadDimSqrt / (OFMDim_curr * KerDim_curr * KerDim_curr);
In my opinion, the size of 'Initial_buffer' should just be K * W.
Can you give me some advice about it? Thank you! ^_^
Where is the VIVADO_SIDE.7z package?
Hello!
I am a high school student from China.
I downloaded the image you provided from Baidu Drive and used Win32DiskImager to write it to the SD card.
But I found an issue: I can't open Jupyter Notebook at the static IP 192.168.2.99, even though I have set up my static IP correctly.
Hi,
I couldn't find the Vivado command prompt; I only saw the Vivado HLS command prompt. So what exactly is the Vivado command tool you were talking about? Thank you very much.
NOTE: I tried both the 2017.3 and 2018.1 versions.
Hi,
I wanted an SD card image with pre-installed Caffe and Theano dependencies, so I downloaded your pynq-cnn image file from the download link (Baidu Drive). My board is a PYNQ-Z2, but the image fails to run on it. I am not sure whether the image file can boot my board, or whether I am making some mistake with it.
I've tried regenerating the .tcl files from the lenet5 directories; however, it does not show the design and reports an IP error. It would be useful if we could get a makefile for the LeNet-5 architecture.
Thanks
Hi,
I was trying to redo the project on my own PYNQ, but when I started to install the dependencies following MANUAL_INSTALL.md, I failed at the first step:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
The messages showed up as follows:
root@dingqiuyi:/home/xilinx# sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
Reading package lists... Done
Building dependency tree
Reading state information... Done
libopencv-dev is already the newest version.
The following extra packages will be installed:
hdf5-helpers libhdf5-10 libhdf5-cpp-10 libhdf5-dev libleveldb1v5 libprotobuf-lite9v5 libprotobuf9v5 libprotoc9v5 libsnappy1v5
Suggested packages:
libhdf5-doc leveldb-doc
The following NEW packages will be installed:
hdf5-helpers libhdf5-10 libhdf5-cpp-10 libhdf5-dev libhdf5-serial-dev libleveldb-dev libleveldb1v5 libprotobuf-dev
libprotobuf-lite9v5 libprotobuf9v5 libprotoc9v5 libsnappy-dev libsnappy1v5 protobuf-compiler
0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 7,533 kB/7,570 kB of archives.
After this operation, 29.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
WARNING: The following packages cannot be authenticated!
hdf5-helpers libhdf5-10 libhdf5-cpp-10 libhdf5-dev libhdf5-serial-dev libsnappy1v5 libleveldb1v5 libleveldb-dev
libprotobuf-lite9v5 libprotobuf9v5 libprotoc9v5 libprotobuf-dev libsnappy-dev protobuf-compiler
Install these packages without verification? [y/N] y
Err http://ports.ubuntu.com/ubuntu-ports/ wily/universe hdf5-helpers armhf 1.8.15-patch1+docs-4
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/universe libhdf5-10 armhf 1.8.15-patch1+docs-4
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/universe libhdf5-cpp-10 armhf 1.8.15-patch1+docs-4
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/universe libhdf5-dev armhf 1.8.15-patch1+docs-4
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/universe libhdf5-serial-dev all 1.8.15-patch1+docs-4
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libleveldb1v5 armhf 1.18-2.1ubuntu2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libleveldb-dev armhf 1.18-2.1ubuntu2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libprotobuf-lite9v5 armhf 2.6.1-1.2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libprotobuf9v5 armhf 2.6.1-1.2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libprotoc9v5 armhf 2.6.1-1.2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main libprotobuf-dev armhf 2.6.1-1.2
404 Not Found [IP: 91.189.88.150 80]
Err http://ports.ubuntu.com/ubuntu-ports/ wily/main protobuf-compiler armhf 2.6.1-1.2
404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/universe/h/hdf5/hdf5-helpers_1.8.15-patch1+docs-4_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/universe/h/hdf5/libhdf5-10_1.8.15-patch1+docs-4_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/universe/h/hdf5/libhdf5-cpp-10_1.8.15-patch1+docs-4_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/universe/h/hdf5/libhdf5-dev_1.8.15-patch1+docs-4_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/universe/h/hdf5/libhdf5-serial-dev_1.8.15-patch1+docs-4_all.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/l/leveldb/libleveldb1v5_1.18-2.1ubuntu2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/l/leveldb/libleveldb-dev_1.18-2.1ubuntu2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/protobuf/libprotobuf-lite9v5_2.6.1-1.2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/protobuf/libprotobuf9v5_2.6.1-1.2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/protobuf/libprotoc9v5_2.6.1-1.2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/protobuf/libprotobuf-dev_2.6.1-1.2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/pool/main/p/protobuf/protobuf-compiler_2.6.1-1.2_armhf.deb 404 Not Found [IP: 91.189.88.150 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Any suggestions how to solve this?
Thanks a lot!
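The 404s in the log above are consistent with Ubuntu 15.10 "wily" (the base of the early PYNQ images) having reached end of life, after which its packages were removed from the active mirrors. A hedged sketch of the usual workarounds; whether old-releases.ubuntu.com mirrors the armhf ubuntu-ports tree is an assumption worth verifying first:

```shell
# 1) Refresh the package index first; stale lists alone can cause 404s.
sudo apt-get update

# 2) If the release is EOL (wily is), retarget sources.list at the
#    old-releases archive.  Verify in a browser that the armhf packages
#    actually exist there before relying on this.
sudo sed -i 's|http://ports.ubuntu.com/ubuntu-ports|http://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list
sudo apt-get update

# 3) Otherwise, reflash with a current PYNQ image whose Ubuntu base is
#    still supported, and install the dependencies there.
```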
Hi, I downloaded the provided SD card image from the Google Drive link and attempted to run both the LeNet and CIFAR_10 Jupyter Notebook examples. It fails on the first step, "Import Caffe", with the following error:
ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
      2 caffe_root = '/home/xilinx/caffe/'  # this file should be run from {caffe_root}/examples (otherwise change this line)
      3 sys.path.insert(0, caffe_root + 'python')
----> 4 import caffe

/home/xilinx/caffe/python/caffe/__init__.py in <module>()
----> 1 from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
      2 from ._caffe import set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver, layer_type_list, set_random_seed
      3 from ._caffe import __version__
      4 from .proto.caffe_pb2 import TRAIN, TEST
      5 from .classifier import Classifier

/home/xilinx/caffe/python/caffe/pycaffe.py in <module>()
     11 import numpy as np
     12
---> 13 from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
     14     RMSPropSolver, AdaDeltaSolver, AdamSolver
     15 import caffe.io

ImportError: libprotobuf.so.10: cannot open shared object file: No such file or directory
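This ImportError comes from the dynamic linker, not from Python itself: _caffe.so was linked against libprotobuf.so.10, but that shared object is not visible at runtime. A small self-contained check (the library name is taken from the error above; everything else is generic):

```python
import ctypes.util

def shared_lib_visible(name):
    """Return the dynamic linker's resolved name for a shared library,
    or None if it cannot be found -- the same lookup that fails in the
    traceback above."""
    return ctypes.util.find_library(name)

# On the board, None for "protobuf" means libprotobuf must be
# (re)installed, or LD_LIBRARY_PATH extended to wherever the .so lives,
# followed by `sudo ldconfig`.
print(shared_lib_visible("protobuf"))
```

If the library is installed under a non-standard prefix such as /usr/local/lib, exporting LD_LIBRARY_PATH before starting Jupyter, or running sudo ldconfig, are the usual remedies.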
where?
Hi,
I have made some changes in the wrapper file and in fixed_point_stream_convolution.cpp in order to test the network with different kernel sizes. I have generated the HLS IP and the Vivado design, which allowed me to generate the .bit and .tcl files. Now I want to check whether my generated files are correct. Could you please give me the steps from the beginning until obtaining the .bit and .tcl files, because I am afraid that I may have missed some?
Also, how do I load these files onto the FPGA?
Thanks in advance.
First of all, it is a very creative project, and I appreciate your effort to open the source.
I have one puzzle. It appears that you convert the Caffe model to a Lasagne network first, then copy the Caffe parameters into the Lasagne network, and finally read the parameters back out of the Lasagne network before forwarding the weights to the FPGA, i.e. FPGALoadW(weight, 1, 32, 32, 2). My question is: why not download the weight parameters of the Caffe model to the FPGA directly? Thanks.
@awai54st Hey, if I want to change this to run YOLO on PYNQ, what steps do I need to follow?
Thanks in advance!
Hi,
Do I have to change the HLS parameters and layers and implement a new IP for this to work with my own CNN model?
Excuse me,
Can the FPGAQuickTest function be used with fully convolutional networks?
Hi,
When I read your source file conv_fpga.py, I found that all the keys you use to index the IPs in the PL.ip_dict dictionary have a prefix 'SEG' and a suffix 'Reg', as follows:
class StreamingSwitch:
    def __init__(self, name):
        base_addr = int(PL.ip_dict["SEG{0}Reg".format(name)][0], 16)
        self.mmio = MMIO(base_addr, 256)
        self.reset()
I have printed PL.ip_dict.keys() and the output is:
dict_keys(['axi_dma_0', 'axi_dma_1', 'axis_switch_0', 'mult_constant_0'])
As you can see, there is no prefix. Could you please explain why the prefix and suffix are added here?
Thanks a lot,
liu
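A possible explanation, offered as an assumption rather than fact: early PYNQ releases named ip_dict entries after the Vivado address segment (e.g. 'SEG_axi_dma_0_Reg'), while later releases switched to the plain instance name, which would make the lookup in conv_fpga.py version-dependent. A compatibility sketch; resolve_ip_key is a hypothetical helper, not part of the repository:

```python
def resolve_ip_key(ip_dict, name):
    """Find an IP core under either PYNQ ip_dict naming convention.

    Tries the older address-segment form ('SEG{name}Reg') first, then
    the plain instance name.  Both conventions are assumptions; print
    PL.ip_dict.keys() on your own image to confirm which one applies.
    """
    for key in ("SEG{0}Reg".format(name), name.strip("_")):
        if key in ip_dict:
            return key
    raise KeyError("no ip_dict entry for {0!r}; available: {1}".format(
        name, sorted(ip_dict)))

# Example against the keys printed above (values elided):
new_style = {"axi_dma_0": None, "axis_switch_0": None}
print(resolve_ip_key(new_style, "_axi_dma_0_"))
```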
Hi,
I want to look at the output feature maps of each layer, so I split the model. I found that the output feature map of the first convolutional layer is different from the first-layer output computed on the ARM core, so I looked at the fixed_point_stream_convolution.h file. In this file, why is the input picture not multiplied?
Hi, I've implemented the Lenet on PYNQ.
I found that the elapsed time for 500 images with the provided bitstream is roughly 1 second.
But if I use a bitstream generated from the provided projects with the HLS and Vivado tools, the elapsed time increases to 5 seconds.
Could you please have a try and give some help on this?
Hi, your work is really impressive to me. However, I just want to use Vivado HLS for simulation; I don't need a real board for acceleration. How do I use your Python code to run, train, and test the Caffe part on Windows? Looking forward to your help; thank you very much.
Excuse me,
What should I do if I want FPGAQuickTest to output 13 * 13 * 125?
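For reference, 13 x 13 x 125 is the YOLOv2 detection tensor for PASCAL VOC: 5 anchor boxes per grid cell, each predicting 4 box coordinates, 1 objectness score, and 20 class scores. The arithmetic:

```python
# YOLOv2 (PASCAL VOC) output-shape arithmetic.
anchors = 5        # anchor boxes per grid cell
coords = 4         # box coordinates per anchor
objectness = 1     # objectness score per anchor
classes = 20       # PASCAL VOC classes

channels = anchors * (coords + objectness + classes)
grid_cells = 13 * 13
print(channels, grid_cells * channels)
```

Producing this shape would mean resizing the final layers accordingly; whether FPGAQuickTest can emit a tensor that large without other changes is a separate question.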
Dear Prof.,
Sorry to disturb you again!
I have a resource-utilization issue that I want to discuss with you. Because of the limitations of the PYNQ board, the resources are not enough if I want to design a deep network, so I want to add a dropout layer after the pooling layer. My idea is to multiply the output of the pooling layer by a constant less than 1, but I don't know how to modify the code in the pooling-layer definition. Could you give me some specific advice about the pooling code?
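On the idea above: at inference time, classic dropout does reduce to multiplying activations by the keep probability, so a single constant multiply after pooling is mathematically sound. A sketch of the arithmetic in Python (the HLS change would be one extra fixed-point multiply per pooled output; scale_pool_output is an illustrative name):

```python
def scale_pool_output(pooled, keep_prob=0.5):
    """Inference-time dropout: multiply each pooled activation by the
    keep probability instead of randomly masking units."""
    return [v * keep_prob for v in pooled]

print(scale_pool_output([2.0, 4.0, 8.0], keep_prob=0.5))
```

Note that dropout regularizes training; applied only at inference it just rescales activations and will not by itself reduce FPGA resource usage, so it may not solve the original resource problem.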
Hi, my name is Vladimir.
Can you help me with a small question connected with the Xilinx PYNQ-Z2?
I want to use it and bought a 16 GB MicroSDHC card, but neither Windows nor Ubuntu sees the drive: the device appears, but no drive letter does. Maybe you know the reason? Thanks.
Hello sir, your project is really commendable. I am working on PYNQ and I want to install Keras and TensorFlow on it. Having read your thesis, I found that you installed TensorFlow on PYNQ. Can you please give me the detailed steps for installing TensorFlow on PYNQ?
I have flashed my SD card with the provided image and followed the scripted design-flow steps for Vivado. Now I want to know: the graphical design flow says to use the "make compile_graphical" command, but where should I run it? Secondly, the PDF report for the project says to open the CNN_BLOCK_DESIGN project in Vivado, but I can't find any file with that name.
Hello,
Thank you very much for this amazing work and for sharing it with others. I only need to ask about the CNN construction steps you mention in your report on page 58. The first step is to open the CNN_Block_Design project; however, I can't find this file. I guess that I am missing something, and I would be glad if you could explain it to me.
Thanks in advance
Hi,
Which PYNQ version did you use to build the image?
Thanks
Udi
Hello Erwei Wang, I came here from Zhihu, where you gave this GitHub link in an issue. Someone said that your Master's thesis is also available here; I want to know more details about it, but I can't find the thesis. Could you give me the link or the title? My English is not very good; thanks!
Hi,
I am studying your framework, but I have a question: what is the functionality of the simple_sum, mult_constant, and stream_mult IPs? From their names I can guess, but I do not understand how these IPs work with the IP of the network (e.g. the cifar_10 IP). Can you clarify this, please?
Thanks a lot,
Sara
Hi,
Could you tell me how to set up a DMA in Vivado and drive it from Python?
Hi Erwei Wang,
First of all, thank you so much for this cool project. I have trained my network using Caffe's cifar10_full with 60,000 iterations. When I try to copy the parameters from Caffe to Lasagne, I run into the following error:
KeyError: 'ip2'
Can you please look at it and suggest why I am running into this problem? I have modified solver.prototxt and changed the ip2 layer's number of outputs from 10 to 2.
net = {}
net['input'] = InputLayer((None, 3, 32, 32))
net['conv1'] = ConvLayer(net['input'], num_filters=32, filter_size=5, pad=2, nonlinearity=None)
net['pool1'] = PoolLayer(net['conv1'], pool_size=2, stride=2, mode='max', ignore_border=False)
net['relu1'] = NonlinearityLayer(net['pool1'], rectify)
net['conv2'] = ConvLayer(net['relu1'], num_filters=32, filter_size=5, pad=2, nonlinearity=rectify)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, mode='average_exc_pad', ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=64, filter_size=5, pad=2, nonlinearity=rectify)
net['pool3'] = PoolLayer(net['conv3'], pool_size=2, stride=2, mode='average_exc_pad', ignore_border=False)
net['ip1'] = DenseLayer(net['pool3'], num_units=64, nonlinearity = None)
net['ip2'] = DenseLayer(net['ip1'], num_units=2, nonlinearity = None)
net['prob'] = NonlinearityLayer(net['ip2'], softmax)
import numpy as np
layers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers))
for name, layer in net.items():
    try:
        if name == 'ip1' or name == 'ip2':
            layer.W.set_value(np.transpose(layers_caffe[name].blobs[0].data))
            layer.b.set_value(layers_caffe[name].blobs[1].data)
        else:
            layer.W.set_value(layers_caffe[name].blobs[0].data[:, :, ::-1, ::-1])
            layer.b.set_value(layers_caffe[name].blobs[1].data)
    except AttributeError:
        continue
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>()
      8     try:
      9         if name == 'ip1' or name == 'ip2':
---> 10             layer.W.set_value(np.transpose(layers_caffe[name].blobs[0].data))
     11             layer.b.set_value(layers_caffe[name].blobs[1].data)
     12         else:
KeyError: 'ip2'
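The KeyError means 'ip2' is not among net_caffe._layer_names, i.e. the deployed prototxt has no layer literally named ip2 (renaming a layer after changing num_output is a common Caffe practice, precisely so stale weights are not loaded). A diagnostic sketch; layers_caffe here is a stand-in dict for the real one built from net_caffe:

```python
# Stand-in for: dict(zip(list(net_caffe._layer_names), net_caffe.layers))
layers_caffe = {"conv1": None, "ip1": None, "ip2_2class": None}

# List what the Caffe net actually contains, and which expected
# Lasagne layer names are missing from it.
expected = ("ip1", "ip2")
missing = [name for name in expected if name not in layers_caffe]
print("caffe layers:", sorted(layers_caffe))
print("missing:", missing)
```

If the prototxt renamed the layer, either rename it back to ip2 or index layers_caffe with the new name.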
Hi there, can you measure the power of the PYNQ FPGA chip? I mean, how much power is consumed by the chip when implementing the CNN algorithm?
Regards
When I try to import caffe on the PYNQ board I get an error. Do we need to make any changes to the caffe makefile before compiling it on the board?
Hi,
I wanted an SD card image with pre-installed Caffe and Theano dependencies, so I downloaded your pynq-cnn image file from the download link in the README.md. But my board is a ZCU104, and I am not sure whether the image can boot my board or whether I am misunderstanding how to use the image file.
Excuse me,
I set SMM<1, 1152, 256>(connect_1, connect_2, 1, 0, 25), and "DMA wait timed out" occurs when I download the weights from the CPU to the FPGA.
What should I do?
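A "DMA wait timed out" after enlarging SMM to <1, 1152, 256> sometimes points at a single transfer exceeding the DMA's maximum length rather than a hang in the IP. A hedged sketch of the arithmetic; the 14-bit default buffer-length register of Xilinx AXI DMA (at most 2**14 - 1 bytes per transfer) and the 4-byte word size are assumptions about the shipped design, so check the DMA configuration in Vivado:

```python
# Maximum bytes per transfer for the default 14-bit AXI DMA
# buffer-length register (an assumption about this design).
MAX_TRANSFER_BYTES = (1 << 14) - 1

def n_transfers(num_words, bytes_per_word=4, max_bytes=MAX_TRANSFER_BYTES):
    """Number of DMA transfers needed to move num_words words."""
    words_per_transfer = max_bytes // bytes_per_word
    return -(-num_words // words_per_transfer)  # ceiling division

# SMM<1, 1152, 256> implies on the order of 1152 * 256 weight words:
print(n_transfers(1152 * 256))
```

If the count is greater than 1, splitting the weight download into chunks no larger than words_per_transfer words, or widening the buffer-length register in the DMA configuration, would be the things to try.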
Hi, I want to build the IP for the convolution layers, but I do not know which top-level files to use when adding files in HLS. I am a novice, so I would like to know more details about this. I would really appreciate any help you can give.