
[High Performance / MAX 30 FPS] RaspberryPi3(RaspberryPi/Raspbian Stretch) or Ubuntu + Multi Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD) + Background Multi-transparent(Simple multi-class segmentation) + FaceDetection + MultiGraph + MultiProcessing + MultiClustering

Home Page: https://qiita.com/PINTO

License: MIT License


MobileNet-SSD-RealSense

RaspberryPi3(Raspbian Stretch) or Ubuntu16.04/UbuntuMate + Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD)

【Notice】December 19, 2018: OpenVINO now supports RaspberryPi + NCS2 !!
https://software.intel.com/en-us/articles/OpenVINO-RelNotes#inpage-nav-2-2

【Dec 31, 2018】 "USB Camera + MultiStick + MultiProcess" mode support for NCS2 is complete.
【Jan 04, 2019】 Performance tuned to run four times faster: MultiStickSSDwithRealSense_OpenVINO_NCS2.py. Core i7 + NCS2 x1, 48 FPS
【Nov 12, 2019】 Compatible with OpenVINO 2019 R3 + RaspberryPi3/4 + Raspbian Buster.


Measure the distance to the detected object with a RealSense D435 while performing MobileNet-SSD object detection on a RaspberryPi 3 boosted with an Intel Movidius Neural Compute Stick.
"USB Camera mode / PiCamera mode" cannot measure distance, but they run at higher speed.
The programs also support MultiGraph, FaceDetection, MultiProcessing, and background transparency.
A simple stick-clustering function is included as well, to prevent thermal runaway.
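The depth-measurement step can be sketched with a small stand-alone helper. This is an illustrative assumption, not the repository's actual code: the function name, the central-region median sampling, and the 0.001 m-per-unit depth scale are all hypothetical (the real scale is queried from the RealSense device):

```python
import numpy as np

def object_distance_m(depth_frame, box, depth_scale=0.001):
    """Estimate the distance (metres) to a detected object.

    depth_frame: 2-D uint16 array of raw depth units (as a D435 delivers)
    box:         (x1, y1, x2, y2) detection box in pixel coordinates
    depth_scale: metres per raw depth unit (device-specific assumption)
    """
    x1, y1, x2, y2 = box
    # Sample only the central third of the box, so background pixels
    # near the box edges do not skew the estimate.
    cx1, cx2 = x1 + (x2 - x1) // 3, x2 - (x2 - x1) // 3
    cy1, cy2 = y1 + (y2 - y1) // 3, y2 - (y2 - y1) // 3
    region = depth_frame[cy1:cy2, cx1:cx2]
    valid = region[region > 0]      # 0 means "no depth data" on RealSense
    if valid.size == 0:
        return None
    # Median is robust against speckle noise and stray background pixels.
    return float(np.median(valid)) * depth_scale
```

In the actual scripts the depth scale comes from the RealSense pipeline and the box comes from the MobileNet-SSD output.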

My blog

【Japanese Article1】
Fast object detection with RaspberryPi3 (Raspbian Stretch) + Intel Movidius Neural Compute Stick (NCS) + RealSense D435 + MobileNet-SSD, while measuring the distance to Goku and the monitor

【Japanese / English Article2】
Intel praised me again ヽ(゚∀゚)ノ Yeah! MobileNet-SSD object detection and RealSense distance measurement (640x480) on RaspberryPi3: at least 25 FPS playback frame rate + 12 FPS prediction rate

【Japanese / English Article3】
Detection rate approx. 30 FPS: a RaspberryPi3 Model B (non-plus), only slightly slower than a TX2, achieves this MobileNetSSD object detection rate and supports the MultiModel VOC + WIDER FACE

【Japanese Article4】
Seamlessly switching clusters of multiple Movidius Neural Compute Sticks on RaspberryPi3 to avoid thermal runaway (internal temperature around 70°C) while maintaining high-speed inference performance

【Japanese Article5】
Generating an ultra-lightweight "Semantic Segmentation" model with Caffe: Sparse-Quantized CNN, 512x1024, 10 MB lightweight model, Part 1

【Japanese / English Article6】
Boost RaspberryPi3 with a Neural Compute Stick 2 (1 x NCS2) and feel the explosive performance of MobileNet-SSD (21 FPS on a Core i7)

【Japanese / English Article7】
[24 FPS] Boost RaspberryPi3 with four Neural Compute Stick 2 (NCS2) MobileNet-SSD / YoloV3 [48 FPS for Core i7]

【Japanese / English Article8】
[24 FPS, 48 FPS] RaspberryPi3 + Neural Compute Stick 2: the day the true power of one NCS2 was drawn out and "Goku" became a true "Super Saiyan"


Table of contents

1. Summary
 1.1 Verification environment NCSDK (1)
 1.2 Result of detection rate NCSDK (1)
 1.3 Verification environment NCSDK (2)
 1.4 Result of detection rate NCSDK (2)
2. Performance comparison as a mobile application (based on subjective comparison)
3. Change history
4. Motion image
 4-1. NCSDK ver
  4-1-1. RealSense Mode about 6.5 FPS (Synchronous screen drawing)
  4-1-2. RealSense Mode about 25.0 FPS (Asynchronous screen drawing)
  4-1-3. USB Camera Mode MultiStick x4 Boosted 16.0 FPS+ (Asynchronous screen drawing)
  4-1-4. RealSense Mode SingleStick about 5.0 FPS (Transparent background / Asynchronous screen drawing)
  4-1-5. USB Camera Mode MultiStick x3 Boosted (Asynchronous screen drawing / MultiGraph)
  4-1-6. Simple clustering function (MultiStick / MultiCluster / Cluster switch cycle / Cluster switch temperature)
 4-2. OpenVINO ver
  4-2-1. USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Synchronous screen drawing)
  4-2-2. USB Camera Mode NCS2 x 1 Stick + Core i7(Synchronous screen drawing)
  4-2-3. USB Camera Mode NCS2 x 1 Stick + Core i7(Asynchronous screen drawing)
  4-2-4. USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing)
  4-2-5. USB Camera Mode NCS2 x 1 Stick + LattePanda Alpha(Asynchronous screen drawing)48 FPS
  4-2-6. PiCamera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing)
  4-2-7. USB Camera Mode NCS2 x 1 Stick + RaspberryPi4(Asynchronous screen drawing)40 FPS
5. Motion diagram of MultiStick
6. Environment
7. Firmware update with Windows 10 PC
8. Work with RaspberryPi3 (or PC + Ubuntu16.04 / RaspberryPi + Ubuntu Mate)
 8-1. NCSDK ver (Not compatible with NCS2)
 8-2. OpenVINO ver (Corresponds to NCS2)
9. Execute the program
10. 【Reference】 MobileNetv2 Model (Caffe) Great Thanks!!
11. Conversion method from Caffe model to NCS model (NCSDK)
12. Conversion method from Caffe model to NCS model (OpenVINO)
13. Construction of learning environment and simple test for model (Ubuntu16.04 x86_64 PC + GPU NVIDIA Geforce)
14. Reference articles, thanks

Summary

Performance measurement results for each number of sticks. (The figures are detection rates, not playback rates.)
The best performance is obtained with QVGA + 5 sticks.
However, a good-quality USB camera is important.

Verification environment (1)

No. Item Contents
1 Video device USB Camera (No RealSense D435) ELP-USB8MP02G-L75 $70
2 Auxiliary equipment (Required) self-powered USB2.0 HUB
3 Input resolution 640x480
4 Output resolution 640x480
5 Execution parameters $ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480

Result of detection rate (1)

No. Stick count FPS Youtube Movie Note
1 1 Stick 6 FPS https://youtu.be/lNbhutT8hkA base line
2 2 Sticks 12 FPS https://youtu.be/zuJOhKWoLwc 6 FPS increase
3 3 Sticks 16.5 FPS https://youtu.be/8UDFIJ1Z4v8 4.5 FPS increase
4 4 Sticks 16.5 FPS https://youtu.be/_2xIZ-IZwZc No improvement

Verification environment (2)

No. Item Contents
1 Video device USB Camera (No RealSense D435) PlayStationEye $5
2 Auxiliary equipment (Required) self-powered USB2.0 HUB
3 Input resolution 320x240
4 Output resolution 320x240
5 Execution parameters $ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 320 -ht 240

Result of detection rate (2)

No. Stick count FPS Youtube Movie Note
1 4 Sticks   25 FPS https://youtu.be/v-Cei1TW88c
2 5 Sticks ⭐ 30 FPS https://youtu.be/CL6PTNgWibI best performance

Performance comparison as a mobile application (based on subjective comparison)

◯=HIGH, △=MEDIUM, ×=LOW

No. Model Speed Accuracy Adaptive distance
1 SSD × ALL
2 MobileNet-SSD Short distance
3 YoloV3 × ALL
4 tiny-YoloV3 × Long distance

Change history

[July 14, 2018] Corresponds to NCSDK v2.05.00.02
[July 17, 2018] Corresponds to OpenCV 3.4.2
[July 21, 2018] Support for multiprocessing [MultiStickSSDwithRealSense.py]
[July 23, 2018] Support for USB Camera Mode [MultiStickSSDwithRealSense.py]
[July 29, 2018] Added steps to build learning environment
[Aug 3, 2018] Background Multi-transparent mode implementation [MultiStickSSDwithRealSense.py]
[Aug 11, 2018] CUDA9.0 + cuDNN7.2 compatible with environment construction procedure
[Aug 14, 2018] Reference of MobileNetv2 Model added to README and added Facedetection Model
[Aug 15, 2018] Bug fixed: depth_scale was undefined in MultiStickSSDwithRealSense.py. Pull request merged. Thank you Drunkar!!
[Aug 19, 2018] 【Experimental】 Update Facedetection model [DeepFace] (graph.facedetectXX)
[Aug 22, 2018] Separate environment construction procedure of "Raspbian Stretch" and "Ubuntu16.04"
[Aug 22, 2018] 【Experimental】 FaceDetection model replaced [resnet] (graph.facedetection)
[Aug 23, 2018] Added steps to build NCSDKv2
[Aug 25, 2018] Added "Detection FPS View" [MultiStickSSDwithRealSense.py]
[Sep 01, 2018] FaceDetection model replaced [Mobilenet] (graph.fullfacedetection / graph.shortfacedetection)
[Sep 01, 2018] Added support for MultiGraph and FaceDetection mode [MultiStickSSDwithRealSense.py]
[Sep 04, 2018] Performance measurement result with 5 sticks is posted
[Sep 08, 2018] To prevent thermal runaway, simple clustering function of stick was implemented.
[Sep 16, 2018] 【Experimental】 Added Semantic Segmentation model [Tensorflow-UNet] (semanticsegmentation_frozen_person.pb)
[Sep 20, 2018] 【Experimental】 Updated Semantic Segmentation model [Tensorflow-UNet]
[Oct 07, 2018] 【Experimental】 Added Semantic Segmentation model [caffe-jacinto] (cityscapes5_jsegnet21v2_iter_60000.caffemodel)
[Oct 10, 2018] Corresponds to NCSDK 2.08.01
[Oct 12, 2018] 【Experimental】 Added Semantic Segmentation model [Tensorflow-ENet] (semanticsegmentation_enet.pb) https://github.com/PINTO0309/TensorFlow-ENet.git
[Dec 22, 2018] "USB Camera + single thread" mode support for NCS2 is complete
[Dec 31, 2018] "USB Camera + MultiStick + MultiProcess" mode support for NCS2 is complete
[Jan 04, 2019] Tune performance four times. MultiStickSSDwithRealSense_OpenVINO_NCS2.py
[Feb 01, 2019] Pull request merged. Fix Typo. Thanks, nguyen-alexa!!
[Feb 09, 2019] Corresponds to PiCamera.
[Feb 10, 2019] Added support for SingleStickSSDwithRealSense_OpenVINO_NCS2.py
[Feb 10, 2019] Firmware v5.9.13 -> v5.10.6, RealSenseSDK v2.13.0 -> v2.16.5
[May 01, 2019] Corresponds to OpenVINO 2019 R1.0.1
[Nov 12, 2019] Corresponds to OpenVINO 2019 R3.0


Motion image

RealSense Mode about 6.5 FPS (Detection + Synchronous screen drawing / SingleStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/77cV9fyqJ1w


RealSense Mode about 25.0 FPS (Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

However, the prediction rate is fairly low (about 6.5 FPS).
【YouTube Movie】 https://youtu.be/tAf1u9DKkh4
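The trade-off here (smooth playback, slower predictions) can be made concrete with a tiny stand-alone calculation. The helper below is hypothetical, not repository code; it only illustrates which detection result each drawn frame would show when screen drawing is decoupled from inference:

```python
def stale_detection_schedule(playback_fps, prediction_fps, num_frames):
    """For each drawn frame, return the index of the most recent completed
    detection result it would display when drawing runs asynchronously."""
    ratio = playback_fps / prediction_fps   # frames drawn per inference result
    return [int(i / ratio) for i in range(num_frames)]

# At ~25 FPS playback and ~6.5 FPS prediction, roughly 4 consecutive
# frames share each detection result.
schedule = stale_detection_schedule(25.0, 6.5, 8)
```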


USB Camera Mode MultiStick x4 Boosted 16.0 FPS+ (Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/GedDpAc0JyQ


RealSense Mode SingleStick about 5.0 FPS(Transparent background in real time / Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/ApyX-mN_dYA


USB Camera Mode MultiStick x3 Boosted (Asynchronous screen drawing / MultiGraph(SSD+FaceDetection) / FaceDetection / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/fQZpuD8mWok


Simple clustering function (MultiStick / MultiCluster / Cluster switch cycle / Cluster switch temperature)

[Execution log]

USB Camera Mode NCS2 SingleStick + RaspberryPi3(Synchronous screen drawing / SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/GJNkX-ZBuC8


USB Camera Mode NCS2 SingleStick + Core i7(Synchronous screen drawing / SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/1ogge90EuqI


USB Camera Mode NCS2 x 1 Stick + Core i7(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/Nx_rVDgT8uY

$ python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py -mod 1 -numncs 1


USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/Xj2rw_5GwlI

$ python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py -mod 1 -numncs 1


USB Camera Mode NCS2 x 1 Stick + LattePanda Alpha(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)[48 FPS]

https://twitter.com/PINTO03091/status/1081575747314057219

PiCamera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing / MultiStickSSDwithPiCamera_OpenVINO_NCS2.py)

$ python3 MultiStickSSDwithPiCamera_OpenVINO_NCS2.py


USB Camera Mode NCS2 x 1 Stick + RaspberryPi4(Asynchronous screen drawing / MultiStickSSDwithUSBCamera_OpenVINO_NCS2.py)

$ python3 MultiStickSSDwithUSBCamera_OpenVINO_NCS2.py



Motion diagram of MultiStick


Environment

1.RaspberryPi3 + Raspbian Stretch (USB2.0 Port) or RaspberryPi3 + Ubuntu Mate or PC + Ubuntu16.04
2.Intel RealSense D435 (Firmware Ver 5.10.6) or USB Camera or PiCamera Official stable version firmware
3.Intel Neural Compute Stick v1/v2, one or more
4-1.OpenCV 3.4.2 (NCSDK)
4-2.OpenCV 4.1.1-openvino (OpenVINO)
5.VFPV3 or TBB (Intel Threading Building Blocks)
6.Numpy
7.Python3.5
8.NCSDK v2.08.01 (It does not work with NCSDK v1. v1 version is here)
9. OpenVINO 2019 R2.0.1
10.RealSenseSDK v2.16.5 (The latest version is unstable) Official stable version SDK
11.HDMI Display

Firmware update with Windows 10 PC

1.Download and extract two ZIP files: (1) the firmware update tool for Windows 10, and (2) the latest firmware .bin file
2.Copy Signed_Image_UVC_5_10_6_0.bin to the same folder as intel-realsense-dfu.exe
3.Connect RealSense D435 to USB port
4.Wait for completion of installation of device driver
5.Execute intel-realsense-dfu.exe
6.Type 「1」, press Enter, and follow the instructions on the screen to update
7.To check the firmware version, type 「2」

Work with RaspberryPi3 (or PC + Ubuntu16.04 / RaspberryPi + Ubuntu Mate)

1.NCSDK ver (Not compatible with NCS2)

Use of VirtualBox is strongly discouraged.
[Note] Japanese Article
https://qiita.com/akitooo/items/6aee8c68cefd46d2a5dc
https://qiita.com/kikuchi_kentaro/items/280ac68ad24759b4c091

[Post of Official Forum]
https://ncsforum.movidius.com/discussion/950/problems-with-python-multiprocessing-using-sdk-2-0-0-4
https://ncsforum.movidius.com/discussion/comment/3921
https://ncsforum.movidius.com/discussion/comment/4316/#Comment_4316

1.Execute the following

$ sudo apt update;sudo apt upgrade
$ sudo reboot

2.Extend the SWAP area (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=2048

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

3.Install NCSDK

$ sudo apt install python-pip python3-pip
$ sudo pip3 install --upgrade pip
$ sudo pip2 install --upgrade pip

$ cd ~/ncsdk
$ make uninstall
$ cd ~;rm -r -f ncsdk
#=====================================================================================================
# [Oct 10, 2018] NCSDK 2.08.01 , Tensorflow 1.9.0
$ git clone -b ncsdk2 https://github.com/Movidius/ncsdk
#=====================================================================================================
$ cd ncsdk
$ nano ncsdk.conf

#MAKE_NJOBS=1
↓
MAKE_NJOBS=1

$ sudo apt install cython
$ sudo -H pip3 install cython
$ sudo -H pip3 install numpy
$ sudo -H pip3 install pillow
$ make install

$ cd ~
$ wget https://github.com/google/protobuf/releases/download/v3.5.1/protobuf-all-3.5.1.tar.gz
$ tar -zxvf protobuf-all-3.5.1.tar.gz
$ cd protobuf-3.5.1
$ ./configure
$ sudo make -j1
$ sudo make install
$ cd python
$ export LD_LIBRARY_PATH=../src/.libs
$ python3 setup.py build --cpp_implementation 
$ python3 setup.py test --cpp_implementation
$ sudo python3 setup.py install --cpp_implementation
$ sudo ldconfig
$ protoc --version

# Before executing "make examples", insert Neural Compute Stick into the USB port of the device.
$ cd ~/ncsdk
$ make examples -j1

【Reference】https://github.com/movidius/ncsdk

4.Update udev rule

$ sudo apt install -y git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev
$ sudo apt install -y libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

$ cd /etc/udev/rules.d/
$ sudo wget https://raw.githubusercontent.com/IntelRealSense/librealsense/master/config/99-realsense-libusb.rules
$ sudo udevadm control --reload-rules && udevadm trigger

5.Upgrade to "cmake 3.11.4"

$ cd ~
$ wget https://cmake.org/files/v3.11/cmake-3.11.4.tar.gz
$ tar -zxvf cmake-3.11.4.tar.gz;rm cmake-3.11.4.tar.gz
$ cd cmake-3.11.4
$ ./configure --prefix=/home/pi/cmake-3.11.4
$ make -j1
$ sudo make install
$ export PATH=/home/pi/cmake-3.11.4/bin:$PATH
$ source ~/.bashrc
$ cmake --version
cmake version 3.11.4

6.Register LD_LIBRARY_PATH

$ nano ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

$ source ~/.bashrc

7.Install TBB (Intel Threading Building Blocks)

$ cd ~
$ wget https://github.com/PINTO0309/TBBonARMv7/raw/master/libtbb-dev_2018U2_armhf.deb
$ sudo dpkg -i ~/libtbb-dev_2018U2_armhf.deb
$ sudo ldconfig

8.Uninstall old OpenCV (RaspberryPi Only)
[Very Important] The highest performance cannot be obtained unless VFPV3 is enabled.

$ cd ~/opencv-3.x.x/build
$ sudo make uninstall
$ cd ~
$ rm -r -f opencv-3.x.x
$ rm -r -f opencv_contrib-3.x.x

9.Build and install "OpenCV 3.4.2", or install from a deb package.
[Very Important] The highest performance cannot be obtained unless VFPV3 is enabled.

9.1 Build Install (RaspberryPi Only)

$ sudo apt update && sudo apt upgrade
$ sudo apt install build-essential cmake pkg-config libjpeg-dev libtiff5-dev \
libjasper-dev libavcodec-dev libavformat-dev libswscale-dev \
libv4l-dev libxvidcore-dev libx264-dev libgtk2.0-dev libgtk-3-dev \
libcanberra-gtk* libatlas-base-dev gfortran python2.7-dev python3-dev

$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.4.2.zip
$ unzip opencv.zip;rm opencv.zip
$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.4.2.zip
$ unzip opencv_contrib.zip;rm opencv_contrib.zip
$ cd ~/opencv-3.4.2/;mkdir build;cd build
$ cmake -D CMAKE_CXX_FLAGS="-DTBB_USE_GCC_BUILTINS=1 -D__TBB_64BIT_ATOMICS=0" \
        -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D INSTALL_PYTHON_EXAMPLES=OFF \
        -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.4.2/modules \
        -D BUILD_EXAMPLES=OFF \
        -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
        -D INSTALL_PYTHON_EXAMPLES=OFF \
        -D BUILD_opencv_python2=ON \
        -D BUILD_opencv_python3=ON \
        -D WITH_OPENCL=OFF \
        -D WITH_OPENGL=ON \
        -D WITH_TBB=ON \
        -D BUILD_TBB=OFF \
        -D WITH_CUDA=OFF \
        -D ENABLE_NEON:BOOL=ON \
        -D ENABLE_VFPV3=ON \
        -D WITH_QT=OFF \
        -D BUILD_TESTS=OFF ..
$ make -j1
$ sudo make install
$ sudo ldconfig

9.2 Install from deb package (RaspberryPi Only) [Built with VFPV3 already enabled]

$ cd ~
$ sudo apt autoremove libopencv3
$ wget https://github.com/PINTO0309/OpenCVonARMv7/raw/master/libopencv3_3.4.2-20180709.1_armhf.deb
$ sudo apt install -y ./libopencv3_3.4.2-20180709.1_armhf.deb
$ sudo ldconfig

10.Install Intel® RealSense™ SDK 2.0

$ cd ~
$ sudo apt update;sudo apt upgrade
$ sudo apt install -y vulkan-utils libvulkan1 libvulkan-dev

# Ubuntu16.04 Only
$ sudo apt install -y mesa-utils* libglu1* libgles2-mesa-dev libopenal-dev gtk+-3.0

# The latest version is unstable
$ cd ~/librealsense/build
$ sudo make uninstall
$ cd ~
$ sudo rm -rf librealsense

$ git clone -b v2.16.5 https://github.com/IntelRealSense/librealsense.git
$ cd ~/librealsense
$ git checkout -b v2.16.5
$ mkdir build;cd build

$ cmake .. -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=Release

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

11.Install Python binding

$ cd ~/librealsense/build

#When using with Python 3.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3)

OR

#When using with Python 2.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python)

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

12.Update PYTHON_PATH

$ nano ~/.bashrc
export PYTHONPATH=$PYTHONPATH:/usr/local/lib

$ source ~/.bashrc

13.RealSense SDK import test

$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense2
>>> exit()

14.Install the OpenGL packages for Python

$ sudo apt-get install -y python-opengl
$ sudo -H pip3 install pyopengl
$ sudo -H pip3 install pyopengl_accelerate

15.Install the imutils package (for PiCamera)

$ sudo apt-get install -y python3-picamera
$ sudo -H pip3 install imutils --upgrade

16.Reduce the SWAP area to the default size (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=100

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

17.Clone a set of resources

$ git clone https://github.com/PINTO0309/MobileNet-SSD-RealSense.git

18.[Optional] Create a RAM disk folder for movie file placement

$ cd /etc
$ sudo cp fstab fstab_org
$ sudo nano fstab

# Mount "/home/pi/movie" on RAM disk.
# Add below.
tmpfs /home/pi/movie tmpfs defaults,size=32m,noatime,mode=0777 0 0

$ sudo reboot


2.OpenVINO ver (Corresponds to NCS2)

1.Execute the following

$ sudo apt update;sudo apt upgrade
$ sudo reboot

2.Extend the SWAP area (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=2048

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

3.Install OpenVINO

$ curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1rBl_3kU4gsx-x2NG2I5uIhvA3fPqm8uE" > /dev/null
$ CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
$ curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1rBl_3kU4gsx-x2NG2I5uIhvA3fPqm8uE" -o l_openvino_toolkit_ie_p_2018.5.445.tgz
$ tar -zxvf l_openvino_toolkit_ie_p_2018.5.445.tgz
$ rm l_openvino_toolkit_ie_p_2018.5.445.tgz
$ sed -i "s|<INSTALLDIR>|$(pwd)/inference_engine_vpu_arm|" inference_engine_vpu_arm/bin/setupvars.sh
$ nano ~/.bashrc
### Add the following line
source /home/pi/inference_engine_vpu_arm/bin/setupvars.sh

$ source ~/.bashrc
### Successful if displayed as below
[setupvars.sh] OpenVINO environment initialized

$ sudo usermod -a -G users "$(whoami)"
$ sudo reboot

$ uname -a
Linux raspberrypi 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l GNU/Linux

$ sh inference_engine_vpu_arm/install_dependencies/install_NCS_udev_rules.sh
### It is displayed as follows
Update udev rules so that the toolkit can communicate with your neural compute stick
[install_NCS_udev_rules.sh] udev rules installed

4.Update udev rule

$ sudo apt install -y git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev
$ sudo apt install -y libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

$ cd /etc/udev/rules.d/
$ sudo wget https://raw.githubusercontent.com/IntelRealSense/librealsense/master/config/99-realsense-libusb.rules
$ sudo udevadm control --reload-rules && udevadm trigger

5.Upgrade to "cmake 3.11.4"

$ cd ~
$ wget https://cmake.org/files/v3.11/cmake-3.11.4.tar.gz
$ tar -zxvf cmake-3.11.4.tar.gz;rm cmake-3.11.4.tar.gz
$ cd cmake-3.11.4
$ ./configure --prefix=/home/pi/cmake-3.11.4
$ make -j1
$ sudo make install
$ export PATH=/home/pi/cmake-3.11.4/bin:$PATH
$ source ~/.bashrc
$ cmake --version
cmake version 3.11.4

6.Register LD_LIBRARY_PATH

$ nano ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

$ source ~/.bashrc

7.Install Intel® RealSense™ SDK 2.0

$ cd ~
$ sudo apt update;sudo apt upgrade
$ sudo apt install -y vulkan-utils libvulkan1 libvulkan-dev

# Ubuntu16.04 Only
$ sudo apt install -y mesa-utils* libglu1* libgles2-mesa-dev libopenal-dev gtk+-3.0

# The latest version is unstable
$ cd ~/librealsense/build
$ sudo make uninstall
$ cd ~
$ sudo rm -rf librealsense

$ git clone -b v2.16.5 https://github.com/IntelRealSense/librealsense.git
$ cd ~/librealsense
$ git checkout -b v2.16.5
$ mkdir build;cd build

$ cmake .. -DBUILD_EXAMPLES=false -DCMAKE_BUILD_TYPE=Release

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

8.Install Python binding

$ cd ~/librealsense/build

#When using with Python 3.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3)

OR

#When using with Python 2.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python)

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

9.Update PYTHON_PATH

$ nano ~/.bashrc
export PYTHONPATH=$PYTHONPATH:/usr/local/lib

$ source ~/.bashrc

10.RealSense SDK import test

$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense2
>>> exit()

11.Install the OpenGL packages for Python

$ sudo apt-get install -y python-opengl
$ sudo -H pip3 install pyopengl
$ sudo -H pip3 install pyopengl_accelerate

12.Install the imutils package (for PiCamera)

$ sudo apt-get install -y python3-picamera
$ sudo -H pip3 install imutils --upgrade

13.Reduce the SWAP area to the default size (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=100

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

14.Clone a set of resources

$ git clone https://github.com/PINTO0309/MobileNet-SSD-RealSense.git

15.[Optional] Create a RAM disk folder for movie file placement

$ cd /etc
$ sudo cp fstab fstab_org
$ sudo nano fstab

# Mount "/home/pi/movie" on RAM disk.
# Add below.
tmpfs /home/pi/movie tmpfs defaults,size=32m,noatime,mode=0777 0 0

$ sudo reboot


Execute the program

$ python3 MultiStickSSDwithRealSense.py <option1> <option2> ...

<options>
 -grp MVNC graphs Path. (Default=./)
 -mod Camera Mode. (0:=RealSense Mode, 1:=USB Camera Mode. Default=0)
 -wd Width of the frames in the video stream. (USB Camera Mode Only. Default=320)
 -ht Height of the frames in the video stream. (USB Camera Mode Only. Default=240)
 -tp TransparentMode. (RealSense Mode Only. 0:=No background transparent, 1:=Background transparent. Default=0)
 -sd SSDDetectionMode. (0:=Disabled, 1:=Enabled. Default=1)
 -fd FaceDetectionMode. (0:=Disabled, 1:=Enabled. Default=0)
 -snc stick_num_of_cluster. Number of sticks to be clustered. (0:=Clustering disabled, n:=Number of sticks. Default=0)
 -csc cluster_switch_cycle. Cycle of switching active cluster. (n:=millisecond Default=10000)
 -cst cluster_switch_temperature. Temperature threshold to switch active cluster. (n.n:=temperature(Celsius) Default=65.0)
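The option table above maps naturally onto argparse. The following stand-alone sketch mirrors the documented flags and defaults; it is illustrative only, and the repository's actual parser may differ in details such as help text and types:

```python
import argparse

def build_parser():
    """Parser mirroring the documented command-line options."""
    p = argparse.ArgumentParser(description="MobileNet-SSD + RealSense demo options")
    p.add_argument("-grp", default="./", help="MVNC graphs path")
    p.add_argument("-mod", type=int, default=0, help="0: RealSense Mode, 1: USB Camera Mode")
    p.add_argument("-wd", type=int, default=320, help="frame width (USB Camera Mode only)")
    p.add_argument("-ht", type=int, default=240, help="frame height (USB Camera Mode only)")
    p.add_argument("-tp", type=int, default=0, help="1: background transparent (RealSense only)")
    p.add_argument("-sd", type=int, default=1, help="1: enable SSD detection")
    p.add_argument("-fd", type=int, default=0, help="1: enable face detection")
    p.add_argument("-snc", type=int, default=0, help="sticks per cluster (0: clustering off)")
    p.add_argument("-csc", type=int, default=10000, help="cluster switch cycle (ms)")
    p.add_argument("-cst", type=float, default=65.0, help="cluster switch temperature (Celsius)")
    return p

# Equivalent of: python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480
args = build_parser().parse_args(["-mod", "1", "-wd", "640", "-ht", "480"])
```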

(Example0) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Synchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 SingleStickSSDwithRealSense.py

(Example1) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py

(Example2) MobileNet-SSD + Neural Compute Stick + USB Camera Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 320 -ht 240

(Example3) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Asynchronous + Transparent background in real time

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -tp 1

(Example4) MobileNet-SSD + FaceDetection + Neural Compute Stick + USB Camera Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480 -fd 1

(Example5) To prevent thermal runaway, simple clustering function (2 Stick = 1 Cluster)

When the switching cycle elapses or the temperature threshold is reached, the active cluster switches over automatically and seamlessly.
You must turn on the clustering enable flag.
The default switching cycle is 10 seconds and the default temperature threshold is 65°C.
The number of sticks per cluster, the switching cycle, and the temperature threshold can all be set via start-up parameters.
Tune them to the optimum values for your environment.

[1] Number of all sticks = 5
[2] stick_num_of_cluster = 2
[3] cluster_switch_cycle = 10sec (10,000millisec)
[4] cluster_switch_temperature = 65.0℃
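The switching rule above (advance to the next cluster when the cycle elapses or the active cluster gets too hot) can be sketched as a small state machine. This is a hypothetical illustration, not the repository's implementation; in the real program each update would be fed the temperature reported by the sticks' thermal statistics, while here it is just a number:

```python
import time

class ClusterSwitcher:
    """Round-robin over stick clusters; switch when the switching cycle
    elapses or the active cluster's temperature crosses the threshold."""

    def __init__(self, num_sticks=5, sticks_per_cluster=2,
                 switch_cycle_ms=10000, switch_temp_c=65.0):
        # e.g. 5 sticks, 2 per cluster -> clusters [0,1], [2,3], [4]
        self.clusters = [list(range(i, min(i + sticks_per_cluster, num_sticks)))
                         for i in range(0, num_sticks, sticks_per_cluster)]
        self.active = 0
        self.cycle_s = switch_cycle_ms / 1000.0
        self.temp_c = switch_temp_c
        self.last_switch = time.monotonic()

    def update(self, active_cluster_temp_c):
        """Advance the active cluster if either trigger fires; return the
        list of stick indices that should run inference now."""
        now = time.monotonic()
        if (now - self.last_switch >= self.cycle_s
                or active_cluster_temp_c >= self.temp_c):
            self.active = (self.active + 1) % len(self.clusters)
            self.last_switch = now
        return self.clusters[self.active]
```

With the defaults above, a cluster that reports 70°C hands off immediately to the next cluster, which then cools the idle sticks until their turn comes around again.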

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -snc 2 -csc 10000 -cst 65.0

[Simplified drawing of cluster switching]
[Execution log]

(Example6)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G2 GL (Fake KMS)"
$ realsense-viewer


(Example7)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/grabcuts
$ rs-grabcuts


(Example8)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/imshow
$ rs-imshow


(Example9) MobileNet-SSD(OpenCV-DNN) + RealSense D435 + Without Neural Compute Stick

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/dnn
$ rs-dnn


【Reference】 MobileNetv2 Model (Caffe) Great Thanks!!

https://github.com/xufeifeiWHU/Mobilenet-v2-on-Movidius-stick.git

Conversion method from Caffe model to NCS model - NCSDK

$ cd ~/MobileNet-SSD-RealSense
$ mvNCCompile ./caffemodel/MobileNetSSD/deploy.prototxt -w ./caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel -s 12
$ mvNCCompile ./caffemodel/Facedetection/fullface_deploy.prototxt -w ./caffemodel/Facedetection/fullfacedetection.caffemodel -s 12
$ mvNCCompile ./caffemodel/Facedetection/shortface_deploy.prototxt -w ./caffemodel/Facedetection/shortfacedetection.caffemodel -s 12

Conversion method from Caffe model to NCS model - OpenVINO

$ cd ~/MobileNet-SSD-RealSense
$ sudo python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
--input_model caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel \
--input_proto caffemodel/MobileNetSSD/MobileNetSSD_deploy.prototxt \
--data_type FP16 \
--batch 1

or

$ cd ~/MobileNet-SSD-RealSense
$ sudo python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
--input_model caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel \
--input_proto caffemodel/MobileNetSSD/MobileNetSSD_deploy.prototxt \
--data_type FP32 \
--batch 1

Construction of learning environment and simple test for model (Ubuntu16.04 x86_64 PC + GPU[NVIDIA Geforce])

1.【Example】 Introduction of NVIDIA-Driver, CUDA and cuDNN to the environment with GPU

$ sudo apt-get remove nvidia-*
$ sudo apt-get remove cuda-*

$ apt search "^nvidia-[0-9]{3}$"
$ sudo apt install cuda-9.0
$ sudo reboot
$ nvidia-smi

### Download cuDNN v7.2.1 NVIDIA Home Page
### libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb
### libcudnn7-dev_7.2.1.38-1+cuda9.0_amd64.deb
### cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-2_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-3_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-176-local-patch-4_1.0-1_amd64.deb

$ sudo dpkg -i libcudnn7*
$ sudo dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
$ sudo apt update
$ sudo dpkg -i cuda-repo-ubuntu1604-9*
$ sudo apt update
$ rm libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb
$ rm libcudnn7-dev_7.2.1.38-1+cuda9.0_amd64.deb
$ rm cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
$ rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update_1.0-1_amd64.deb
$ rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-2_1.0-1_amd64.deb
$ rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-3_1.0-1_amd64.deb
$ rm cuda-repo-ubuntu1604-9-0-176-local-patch-4_1.0-1_amd64.deb

$ echo 'export PATH=/usr/local/cuda-9.0/bin:${PATH}' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:${LD_LIBRARY_PATH}' >> ~/.bashrc
$ source ~/.bashrc
$ sudo ldconfig
$ nvcc -V
$ cd ~;nano cudnn_version.cpp

#include <cudnn.h>
#include <iostream>

int main(int argc, char** argv) {
    std::cout << "CUDNN_VERSION: " << CUDNN_VERSION << std::endl;
    return 0;
}

$ nvcc cudnn_version.cpp -o cudnn_version
$ ./cudnn_version

$ sudo pip2 uninstall tensorflow-gpu
$ sudo pip2 install tensorflow-gpu==1.10.0
$ sudo pip3 uninstall tensorflow-gpu
$ sudo pip3 install tensorflow-gpu==1.10.0

2.【Example】 Introduction of Caffe to environment with GPU

$ cd ~
$ sudo apt install libopenblas-base libopenblas-dev
$ git clone https://github.com/weiliu89/caffe.git
$ cd caffe
$ git checkout ssd
$ cp Makefile.config.example Makefile.config
$ nano Makefile.config
# cuDNN acceleration switch (uncomment to build with cuDNN).
#USE_CUDNN := 1
↓
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
↓
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
↓
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda-9.0

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the lines after *_35 for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
             -gencode arch=compute_20,code=sm_21 \
             -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61
↓
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the lines after *_35 for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
↓
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include \
                /usr/local/lib/python2.7/dist-packages/numpy/core/include


# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1
↓
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
↓
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include \
                /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib \
                /usr/lib/x86_64-linux-gnu/hdf5/serial

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
↓
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
USE_PKG_CONFIG := 1
$ rm -r -f build
$ rm -r -f .build_release
$ make superclean
$ make all -j4
$ make test -j4
$ make distribute -j4
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ make py

3.Download of VGG model [My Example CAFFE_ROOT PATH = "/home/<username>/caffe"]

$ export CAFFE_ROOT=/home/<username>/caffe
$ cd $CAFFE_ROOT/models/VGGNet
$ wget http://cs.unc.edu/~wliu/projects/ParseNet/VGG_ILSVRC_16_layers_fc_reduced.caffemodel

4.Download VOC 2007 and VOC 2012 datasets

# Download the data.
$ cd ~;mkdir data;cd data
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar #<--- 1.86GB
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar #<--- 438MB
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar #<--- 430MB

# Extract the data.
$ tar -xvf VOCtrainval_11-May-2012.tar
$ tar -xvf VOCtrainval_06-Nov-2007.tar
$ tar -xvf VOCtest_06-Nov-2007.tar
$ rm VOCtrainval_11-May-2012.tar VOCtrainval_06-Nov-2007.tar VOCtest_06-Nov-2007.tar

5.Generate lmdb file

$ export CAFFE_ROOT=/home/<username>/caffe
$ cd $CAFFE_ROOT
# Create the trainval.txt, test.txt, and test_name_size.txt in $CAFFE_ROOT/data/VOC0712/
$ ./data/VOC0712/create_list.sh

# You can modify the parameters in create_data.sh if needed.
# It will create lmdb files for trainval and test with encoded original image:
#   - $HOME/data/VOCdevkit/VOC0712/lmdb/VOC0712_trainval_lmdb
#   - $HOME/data/VOCdevkit/VOC0712/lmdb/VOC0712_test_lmdb
# and make soft links at examples/VOC0712/

$ ./data/VOC0712/create_data.sh

6.Execution of learning [My Example environment GPU x1, GeForce GT 650M = RAM:2GB]

Adjust according to the number of GPUs

# It will create model definition files and save snapshot models in:
#   - $CAFFE_ROOT/models/VGGNet/VOC0712/SSD_300x300/
# and job file, log file, and the python script in:
#   - $CAFFE_ROOT/jobs/VGGNet/VOC0712/SSD_300x300/
# and save temporary evaluation results in:
#   - $HOME/data/VOCdevkit/results/VOC2007/SSD_300x300/
# It should reach 77.* mAP at 120k iterations.

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
$ cp examples/ssd/ssd_pascal.py examples/ssd/BK_ssd_pascal.py
$ nano examples/ssd/ssd_pascal.py
# Solver parameters.
# Defining which GPUs to use.
gpus = "0,1,2,3"
↓
# Solver parameters.
# Defining which GPUs to use.
gpus = "0"

Adjust according to GPU performance (Memory Size) [My Example GeForce GT 650M x1 = RAM:2GB]

# Divide the mini-batch to different GPUs.
batch_size = 32
accum_batch_size = 32
↓
# Divide the mini-batch to different GPUs.
batch_size = 1
accum_batch_size = 1
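If GPU memory is the constraint, shrinking both values to 1 also shrinks the effective batch to 1. An alternative, assuming the stock ssd_pascal.py behavior of deriving the solver's iter_size from these two values, is to lower only batch_size and keep accum_batch_size, so gradients are accumulated over several passes and the effective batch stays the same. The arithmetic, as a minimal sketch:

```python
# Sketch of how the SSD training script is commonly understood to split the
# effective batch: iter_size forward/backward passes are accumulated before
# each weight update, so batch_size * iter_size stays constant.

def solver_iter_size(batch_size, accum_batch_size):
    """Passes accumulated per weight update (assumes clean divisibility)."""
    assert accum_batch_size % batch_size == 0
    return accum_batch_size // batch_size

print(solver_iter_size(32, 32))  # -> 1  (default: one pass per update)
print(solver_iter_size(8, 32))   # -> 4  (low memory, same effective batch)
```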

Execution

  • The learned data is generated in "$CAFFE_ROOT/models/VGGNet/VOC0712/SSD_300x300"
  • VGG_VOC0712_SSD_300x300_iter_n.caffemodel
  • VGG_VOC0712_SSD_300x300_iter_n.solverstate
$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
$ python examples/ssd/ssd_pascal.py

7.Evaluation of learning data (still image)

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
# If you would like to test a model you trained, you can do:
$ python examples/ssd/score_ssd_pascal.py

8.Evaluation of learning data (USB camera)

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
# If you would like to attach a webcam to a model you trained, you can do:
$ python examples/ssd/ssd_pascal_webcam.py

Reference articles, thanks

https://github.com/movidius/ncappzoo/tree/master/caffe/SSD_MobileNet
https://github.com/FreeApe/VGG-or-MobileNet-SSD
https://github.com/chuanqi305/MobileNet-SSD
https://github.com/avBuffer/MobilenetSSD_caffe
https://github.com/Coldmooon/SSD-on-Custom-Dataset
https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-Installation-Guide#the-gpu-support-prerequisites
https://stackoverflow.com/questions/33962226/common-causes-of-nans-during-training
https://github.com/CongWeilin/mtcnn-caffe
https://github.com/DuinoDu/mtcnn.git
https://www.hackster.io/mjrobot/real-time-face-recognition-an-end-to-end-project-a10826
https://github.com/Mjrovai/OpenCV-Face-Recognition.git
https://github.com/sgxu/face-detection-based-on-caffe.git
https://github.com/RiweiChen/DeepFace.git
https://github.com/KatsunoriWa/eval_faceDetectors
https://github.com/BeloborodovDS/MobilenetSSDFace
https://www.pyimagesearch.com/2018/09/03/semantic-segmentation-with-opencv-and-deep-learning/
https://github.com/TimoSaemann/ENet/tree/master/Tutorial
https://blog.amedama.jp/entry/2017/04/03/235901
https://github.com/NVIDIA/nvidia-docker
https://hub.docker.com/r/nvidia/cuda/
https://www.dlology.com/blog/how-to-run-keras-model-on-movidius-neural-compute-stick/
https://ncsforum.movidius.com/discussion/1106/ncs-temperature-issue
https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend#raspbian-stretch
https://github.com/skhameneh/OpenVINO-ARM64

mobilenet-ssd-realsense's People

Contributors

drunkar, pinto0309


mobilenet-ssd-realsense's Issues

Got error message "no module named mvnc" when running SingleStickSSDwithRealSense.py

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): Raspberry PI3

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): armv7l

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Raspbian

[Required] Details of the work you did before the problem occurred:


Finish all steps described in the article.


[Required] Error message:


ImportError: No module named 'mvnc'


[Required] Overview of problems and questions:


Does OpenVino include mvnc?

NCSDK setup

Dear Master Pinto,

in your very thorough and kind step-by-step guide I miss the NCSDK v2 setup... when do you consider it best to install it (after which step number)?

After seeing some Stretch issues, do you use Ubuntu 16.04 on the RasPi? Which exactly (at the RasPi site there are two: Ubuntu MATE & Snappy)?

Thanks!!

MobileNet-SSD + NCS1 + my own datasets

Hello,PINTO0309!
When I trained MobileNet-SSD on my own dataset (20 different classes, 400 images per class), I found the FP32 and FP16 results to be completely different (the Caffe model's test detection_eval is 0.91). The same problem also happened when I trained YOLOv3 on my own dataset (following your other GitHub project).
Have you tried training the model with your own data? I wonder whether OpenVINO and the NCSDK can only convert the specific model perfectly. Could you give me some advice? Thank you!

Failed to load multi NCS2

Hi, I tried to load multiple NCS2 sticks on a RaspberryPi3B+ and on Ubuntu16.04LTS with an Intel i7-7700k, just like you do, using the IEPlugin function. I plugged in 2 NCS2 sticks and called the function twice, but it failed with an error message saying NO_DEVICE when loading the second NCS2. Have you encountered this problem before? I thought a plugin is an abstraction for a device; am I right?

ncsdk "make examples" - installs OpenCV (is it really needed?)

Dear PINTO,

I noticed that running "make examples -j1" when installing the NCSDK automatically installs OpenCV...
I understand that on a fresh install there is no need to install it, later uninstall it, and then install it again in your step-by-step guide, so do I really need to make the examples at this step, or can I postpone it until after the whole step-by-step process?

I also see there are repeated steps for installing protobuf 3.5.1; do I really need to install it twice?

Thanks in advance!

btw: you did amazing work! thanks a lot!

more NCS2

I have 2 NCS2 sticks.
I want to use one NCS2 to run a first model and the other NCS2 to run a second model. Is it possible?
How do I set this up?

Possible Memory Leak

I have suffered from a memory leak when running MultiStickSSDwithRealSense_OpenVINO_NCS2.py with a single stick and a USB webcam.

The memory leak occurs on a laptop and an Odroid XU4, at a rate of about 200 MB/minute.

I can run SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py normally without any memory leak; my guess is that the problem is in the multiprocessing part, where already-processed images are not removed from memory.
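Not the author's fix, just a common mitigation sketch: if frames are produced faster than the inference process consumes them, an unbounded queue between the processes grows without limit and looks like a leak. Bounding the queue and dropping stale frames keeps memory flat (illustrated with the thread-safe queue module; multiprocessing.Queue accepts the same maxsize argument):

```python
import queue

# Keep at most 2 frames in flight; when the consumer falls behind,
# drop the oldest frame instead of letting the backlog grow.
frames = queue.Queue(maxsize=2)

def offer_frame(q, frame):
    """Enqueue a frame, discarding the oldest one if the queue is full."""
    try:
        q.put_nowait(frame)
    except queue.Full:
        q.get_nowait()      # drop the stale frame
        q.put_nowait(frame)

for i in range(10):         # simulate a fast producer, slow consumer
    offer_frame(frames, i)

print(frames.qsize())       # -> 2 (memory use stays bounded)
print(frames.get_nowait())  # -> 8 (only the most recent frames remain)
```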

Laptop Specification:
Ubuntu 18.04.1 (x64)
i7-4710HQ (2.5 GHz / 4 Cores)
GTX 850M (2 GB VRAM)
8 GB RAM

ODROID XU4 Specification:
Ubuntu Mate 16.04 (ARM)
Samsung Exynos5422 Cortex™-A15 2Ghz and Cortex™-A7 (2 GHz / 8 Cores)
Mali-T628 MP6
2 GB RAM

Camera:
Laptop Webcam, Logitech C270, PS3 EYE

Could you share the way of convert MobileNet-ssd to IR?

Could you share the way of convert MobileNet-ssd to IR?
I convert Mobilenet-ssd v2 to IR, but got lots of error. Here is my approach:

python3 .\mo_tf.py `
--input_model E:\ssdlite_mobilenet_v2\frozen_inference_graph.pb `
--model_name ssd_mobilenet_v2 `
--output_dir E:\ssdlite_mobilenet_v2\ `
--data_type FP16 `
--batch 1 `
--tensorflow_object_detection_api_pipeline_config E:\ssdlite_mobilenet_v2\pipeline.config `
--tensorflow_use_custom_operations_config .\extensions\front\tf\ssd_v2_support.json

There are many errors when I use this converted IR model.

About performance

Dear "demigod" Pinto,

We recently implemented a multiprocessing version in our own way, pretty similar to yours, but we got some stability issues. In fact we were somewhat surprised by the lack of multiprocessing examples in the zoo collection, with just some multithreading ones for multistick; the latter is a bottleneck due to the GIL, so there is no big advantage from multistick at all.

Our approach was based on API v1 with multiprocessing in fork (default) mode. Why did you choose forkserver + daemon?

What is the actual increase in detection FPS (not screen rendering) that you achieved with each extra stick added? From 1 to 2, then 3, and finally 4?

One final question about performance: the allocate_with_fifos command defaults to 2 elements per input/output FIFO. Does that ensure the stick is busy all the time without waiting, i.e. can you push another item into the graph queue while the previous one is being processed?

Anyway, excellent job on your side!
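Not an answer from the NCSDK docs, just a toy model of why a FIFO depth of 2 is the classic double-buffering depth: if a FIFO slot is only freed when its inference completes, depth 1 forces the host and the device to alternate, while depth 2 lets the host stage frame N+1 while the device works on frame N. A deterministic sketch (abstract time units, not measured NCS numbers):

```python
def finish_time(n_frames, host_time, infer_time, fifo_depth):
    """When does the last inference finish, if the host needs host_time to
    stage each frame, the device needs infer_time per inference, and a FIFO
    slot is only freed when its inference completes?"""
    arrive = [0.0] * n_frames   # frame lands in the FIFO
    start = [0.0] * n_frames    # device starts inference
    done = [0.0] * n_frames     # inference complete, slot freed
    for i in range(n_frames):
        earliest = arrive[i - 1] if i else 0.0           # host is sequential
        if i >= fifo_depth:                              # wait for a free slot
            earliest = max(earliest, done[i - fifo_depth])
        arrive[i] = earliest + host_time
        start[i] = max(arrive[i], done[i - 1] if i else 0.0)
        done[i] = start[i] + infer_time
    return done[-1]

print(finish_time(4, 1, 3, 1))  # -> 16.0 (device idles between frames)
print(finish_time(4, 1, 3, 2))  # -> 13.0 (device stays fully busy)
print(finish_time(4, 1, 3, 3))  # -> 13.0 (deeper FIFOs add nothing here)
```

So as long as host-side staging is faster than inference, depth 2 already keeps the stick saturated.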

Got error message "USB Failure. Code: Error opening device"

[Required] Your device (RaspberryPi3):

[Required] Your device's CPU architecture (armv7l):

[Required] Your OS (Raspbian):

[Required] Details of the work you did before the problem occurred:

I followed your steps to install the NCSDK on the Raspberry Pi. An error occurred while executing make examples.
[Required] Error message:

Network Input tensors ['data#24']

Network Output tensors ['prob#38']

Blob generated
W: [ 0] ncDeviceOpen:528 ncDeviceOpen() XLinkBootRemote returned error 3

[Error 7] Toolkit Error: USB Failure. Code: Error opening device
Makefile:64: recipe for target 'profile' failed
make[3]: *** [profile] Error 255
make[3]: Leaving directory '/home/pi/ncsdk/examples/caffe/AlexNet'
Makefile:12: recipe for target 'AlexNet' failed
make[2]: *** [AlexNet] Error 2
make[2]: Leaving directory '/home/pi/ncsdk/examples/caffe'
Makefile:12: recipe for target 'caffe/.' failed
make[1]: *** [caffe/.] Error 2
make[1]: Leaving directory '/home/pi/ncsdk/examples'
Makefile:57: recipe for target 'examples' failed
make: *** [examples] Error 2
[Required] Overview of problems and questions:
Unable to find device

Each Stick run different model

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
RaspberryPi3 b+ , NCS2 x 4 ,
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Rasbian
[Required] Details of the work you did before the problem occurred:

I have four NCS2 sticks and I'm trying to run a different model on each neural stick independently.

For example, the first neural stick runs face detection, the next one runs emotion recognition, and the third one runs image classification...
Is it possible? I saw your MultiModel (FaceDetection, EmotionRecognition), which is a merged model, but what I want to do is what I said above.

I really appreciate your project.
Thanks



Bounding Box Incorrect Scaling/Positioning after 97e92fa

So first off thanks again for the tuned performance. I verified 12 FPS peak on the Raspberry Pi with the Realsense D435.

The thing I noticed (see videos below), is that the scaling/positioning math behind the bounding boxes after 97e92fa (here) seem to be off.

So prior to this commit, they follow me left/right in the live video feed correctly (my center, and the center of the bounding box match well), and the scale matches well too (i.e. my width/height, and the bounding box width/height match well).

After the commit, I notice the following:

  1. The positioning is off: when object is on the left side, the position of the bounding box is too far to the left. When the object is on the right, the center of the bounding box is too far to the right, etc.
  2. The left/right size is also off. The box is too large at least in the left/right dimension (and probably also up/down, it seems, but less tested).

Here's a before/correct video (from checkout 80a564c):
https://photos.app.goo.gl/H5ta1X7KnqTdsz7j7

Notice that the bounding box tracks my center well and also my size well.

And here's the after/incorrect video (from checkout 055636a):
https://photos.app.goo.gl/6zfMgGKaYjbrZRQZ9

Notice in this case the box is too wide (and also seemingly too tall), and goes too far left/right when the object (me) is located left/right of the video feed.

Thanks again! I'll also try to hunt to see if I can find where this scale/position error was introduced.

GPU does not have a better speed

[Required] Your device: Surface book 2 15 inch

[Required] Your device's CPU architecture: i7 8650U

[Required] Your OS: Windows 10
[Required] Details of the work you did before the problem occurred:

I was doing benchmarking between CPU, GPU, and NCS2. I found the speed ordering CPU > MYRIAD > GPU. This result does not seem reasonable.

        if device == "CPU":
            model_xml = 'model_ir/fp32/frozen_alexnet_model.xml'
            model_bin = os.path.splitext(model_xml)[0] + ".bin"
            plugin = IEPlugin(device='CPU') # Run the model on the CPU; change to MYRIAD to use the NCS
            net = IENetwork(model=model_xml, weights=model_bin)  # Also change this when using the NCS
        elif device == "GPU":
            model_xml = 'model_ir/fp32/frozen_alexnet_model.xml'
            model_bin = os.path.splitext(model_xml)[0] + ".bin"
            plugin = IEPlugin(device='GPU')
            net = IENetwork(model=model_xml, weights=model_bin)
        else:
            model_xml = 'model_ir/fp16/frozen_alexnet_model.xml'
            model_bin = os.path.splitext(model_xml)[0] + ".bin"
            plugin = IEPlugin(device='MYRIAD')
            net = IENetwork(model=model_xml, weights=model_bin)

        net.batch_size = batch
        input_blob = next(iter(net.inputs))
        exec_net = plugin.load(network=net)

        print('tick tok...')
        start_time = time.time()

        for images, images_path in get_batches_fn(batch):
            outputs = exec_net.infer(inputs={input_blob: images})

            for indx in range(batch):
                class_name = class_names[np.argmax(outputs['prob'][indx])]
                probs = outputs['prob'][indx, np.argmax(outputs['prob'][indx])]

[Required] Overview of problems and questions:
Why GPU does not have a better performance? Is it normal?
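One common cause of "GPU slower than CPU" in a loop like the one above is that the first inference on the GPU plugin includes one-time setup (e.g. kernel compilation), and that cost lands inside the measured interval. A device-agnostic timing sketch that discards warm-up runs; the infer callable here is a stand-in, not the actual exec_net.infer:

```python
import time

def benchmark(infer, warmup=3, runs=20):
    """Mean seconds per call, measured only after warm-up iterations."""
    for _ in range(warmup):
        infer()                      # absorb one-time setup costs here
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    return (time.perf_counter() - start) / runs

# Stand-in workload so the sketch runs without OpenVINO installed.
mean_s = benchmark(lambda: sum(range(10_000)))
print(mean_s >= 0.0)  # -> True
```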

Feedback

Hello,

Many thanks for your code.

Got about 15 FPS using a USB webcam on an RPi 3 B+ and 2 NCS2 sticks.
Have you benchmarked using 4 NCS sticks on the RPi?

I use https://github.com/umlaeute/v4l2loopback to feed a traffic cam view.

I get better results starting the cam at 640x480 and cropping like this:

color_image=cv2.resize(color_image,(532,400))
color_image=color_image[100:100+300,116:116+300]
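For reference, those two lines resize the 640x480 capture to 532x400 (almost the same 4:3 aspect ratio) and then slice out a 300x300 window that is horizontally centered but keeps only the bottom 300 rows. The index arithmetic can be checked without OpenCV:

```python
# Check the crop arithmetic from the snippet above, without OpenCV.
# cv2.resize takes (width, height), so the resized image has
# (rows, cols) = (400, 532); the slice is rows 100:400, cols 116:416.
mid_w, mid_h, crop = 532, 400, 300
x0, y0 = 116, 100                 # column / row offsets used above

assert x0 == (mid_w - crop) // 2  # 116 px margin on each side: centered
assert y0 + crop == mid_h         # the crop's bottom edge is the bottom row
print(mid_w - (x0 + crop), mid_h - (y0 + crop))  # -> 116 0
```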

I also tried to use the script with YOLOv3 but got a buffer overflow.

Regards,

Pierre

MobileNet-SSD + RPi + NCS2

I've been trying to use this repository to run a custom MobileNet-SSD model trained with tensorflow and converted to IR. I have validated a FP32 version of this model using OpenVino's object_detection_sample_ssd, but I'm using a FP16 version for this repository.

I'm running SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py, modified to load my .xml/.bin model and my labels, but I never get any detections. Any idea why? Thanks!

Systemdでの起動失敗 Automatic startup failure on Systemd

I attempted to start a program that uses pyrealsense2 via systemd (Python 3), but it fails with the following error:

ImportError: No module named 'pyrealsense2'

When I start the program without systemd, the ImportError does not occur and the program works normally.
Also, all package versions comply with the README.

Thank you.

Cannot run code on RPi 4: Inference Engine Build Error

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): RaspberryPi4

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): aarch64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu 18.04 LTS

[Required] Details of the work you did before the problem occurred:
I have followed all the steps in your guide and I keep arriving at the same error. I have also tried to manually install OpenCV myself, but I still can't figure out why the code doesn't work. I can run sample object-detection code on single images but not videos. This is the error I get when I try to run SingleStickSSDwithRealSense_OpenVINO_NCS2.py:

[Required] Error message:
Traceback (most recent call last):
File "SingleStickSSDwithRealSense_OpenVINO_NCS2.py", line 33, in
net = cv2.dnn.readNet('lrmodel/MobileNetSSD/MobileNetSSD_deploy.xml', 'lrmodel/MobileNetSSD/MobileNetSSD_deploy.bin')
cv2.error: OpenCV(4.3.0) /home/ubuntu/opencv/modules/dnn/src/dnn.cpp:3541: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer'

[Required] Overview of problems and questions:
At first the script would not be able to import cv2 at all but after manually installing OpenCV 4.3.0 I was able to import cv2 and run the dnn sample code.
I am very new to this so I would appreciate all the help I can get.

RealSense D435 connection problems

Hi, PINTO0309
I have the environment: Raspberry Pi + Raspbian Stretch+ RealSense D435.
There are some problems with my D435 connecting to my Pi.
The screenshot as follows:
2018-08-23-152117_1342x747_scrot

  1. I have no RGB module, only stereo module in my $ realsense-viewer
  2. Incomplete frame (38%) and no frames received. (Maybe the transfer speed is not enough with usb2.0? But it is perfect in your demo. )

Delay problem

Why is the position displayed by the detection box not the position in the instantaneous picture? It lags by roughly 3 to 5 frames.

NCS2 Slower Than NCS1

Hi PINTO0309,

Thanks for all your work here and particularly for documenting it all. I've been able to successfully follow everything, which I think speaks to your thoroughness. :-)

So I've setup and run, with RealSense D435, the following:

  1. No NCS with TensorFlow (here)
  2. NCS1 (this repository, with NCSDK)
  3. NCS2 (this repository, with OpenVINO), using python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py - but I only have 1x NCS2 device.

So I'm seeing the following framerates:

  1. 1 FPS - No NCS
  2. 6 FPS - 1x NCS1
  3. 5 FPS - 1x NCS2

So I'm curious to see that the NCS2 with OpenVINO actually seems to be slightly slower than the NCS1 with NCSDK.

Now my question is, it feels like I'm running the wrong command for 3... am I?

I'm running python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py and the only other options I see there are:

  • MultiStickSSDwithRealSense.py
  • SingleStickSSDwithRealSense.py
  • SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py

So it kind of makes me think that SingleStickSSDwithRealSense_OpenVINO_NCS2.py, or something like that, is missing from github (like forgot to be committed, or similar).

Thoughts?

Oh, and to see if SingleStickSSDwithRealSense.py happened to be compatible with both NCS1 (NCSDK) and NCS2 (OpenVINO), I ran it with the NCS2 connected (and no NCS1 connected), and got:

python3 SingleStickSSDwithRealSense.py 
Traceback (most recent call last):
  File "SingleStickSSDwithRealSense.py", line 13, in <module>
    from mvnc import mvncapi as mvnc
ImportError: No module named 'mvnc'

Thanks again!
-Brandon

How to use all 80 classes of coco dataset : SingleStickSSDwithRealSense_OpenVINO_NCS2.py

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): LaptopPC

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): x64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu 1804

[Required] Details of the work you did before the problem occurred:

I have run the SingleStickSSDwithRealSense_OpenVINO_NCS2.py code.

Components used: Intel RealSense camera D435, Intel NCS2 and LaptopPC


[Required] Error message:

[Required] Overview of problems and questions:

I want to include (train) all 80 classes of the COCO dataset and use them in the SingleStickSSDwithRealSense_OpenVINO_NCS2 Python program.



mobilenet SSD on raspberry pi

Hi there !!
I have an SSD MobileNet model (written in TensorFlow) for image classification... and it works well on my Windows PC with a dlib tracker added to it... but I need to use a Raspberry Pi for prototyping.
I installed TensorFlow on my RPi and tried to execute the file (in the Python 3 IDLE), and it shows me nothing, but no error is generated... could you tell me my mistake?

Latency of NCS1 vs. NCS2

So this could be a synchronous/asynchronous observer phenomenon, I'm not sure, and will investigate more. That said, I did want to drop in a comparison here of NCS1 (synchronous) vs. NCS2 (asynchronous):

NCS1 (synchronous): https://photos.app.goo.gl/ejezgpvmMz7U53Qu9
NCS2 (asynchronous): https://photos.app.goo.gl/BExZNpWVaTvLA3wn8

Again, I need to check whether this is just an observational thing of synchronous/asynchronous, but wanted to share these in the meantime in case they give an 'ah-ha' moment. :-)

Thanks again,
Brandon

Bad performance

Great work, thank you very much!

In this gist https://gist.github.com/treinberger/c63cb84979a4b3fb9b13a2d290482f4e I ported your code to take camera input from the PiCam, but I discovered that the framerate is really low (about 1 FPS). Do you have any idea why this could be the case? I ported your code because I currently don't have a USB webcam at hand. My environment is a Raspberry Pi 3+, some wide-angle camera, and the NCS2.

How to transfrom the mymodel ssd_300_vgg.pb to IR and use your code to detect

LaptopPC + x86_64 + Ubuntu 16.04, ssd_300_vgg
I use the program from https://github.com/pierluigiferrari/ssd_keras

How do I convert the .pb model to IR?
I use

python3 mo_tf.py --data_type FP16 --input_shape [1,300,300,3] --input_model /path/to/my/pbmodel

and it fails:

[ ERROR ] Shape [ 1 3 -1 256] is not fully defined for output 0 of "conv8_2/convolution". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "conv8_2/convolution".
[ ERROR ] Not all output shapes were inferred or fully defined for node "conv8_2/convolution". For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ] It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7fe94897d378>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "conv8_2/convolution" node. For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

How do I convert it correctly? Do I need other parameters, and if so, which?

And what should I change in the code if I use your code to detect?

Problem compiling OpenCV 3.4.2 with your TBB 2018U2 package on Ubuntu 16.04 AMD64

[Required] Your device (RaspberryPi3, LaptopPC, or other device name): Intel NUC5i7RYHR

[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name): x86_64

[Required] Your OS (Raspbian, Ubuntu1604, or other os name): Ubuntu1604

[Required] Details of the work you did before the problem occurred:
I'm following your https://github.com/PINTO0309/MobileNet-SSD-RealSense/README.md for an Intel 5th gen NUC running Ubuntu 16.04.6 LTS 64bit.

Specifically, I have an NCS 1 so I'm using the NCSDK2 and following the instruction under heading:
"Work with RaspberryPi3 (or PC + Ubuntu16.04 / RaspberryPi + Ubuntu Mate)"

At step 7, I found your TBBonamd64x86x64 repository and downloaded: https://github.com/PINTO0309/TBBonamd64x86x64/raw/master/libtbb-dev_2018U2_amd64.deb

I'm then trying to build OpenCV 3.4.2 from source (step 9.1) but get stuck at the compilation stage with an error.

[Required] Error message:
/usr/local/include/tbb/atomic.h:222:34: error: ‘tbb::internal::atomic_impl::my_storage’ has incomplete type
aligned_storage<T,sizeof(T)> my_storage;
^
/usr/local/include/tbb/atomic.h:95:8: note: declaration of ‘struct tbb::internal::aligned_storage<long int, 8ul>’
struct aligned_storage;
^
/usr/local/include/tbb/atomic.h: In instantiation of ‘struct tbb::internal::atomic_impl’:
/usr/local/include/tbb/atomic.h:338:8: required from ‘struct tbb::internal::atomic_impl_with_arithmetic<long unsigned int, long unsigned int, char>’
/usr/local/include/tbb/atomic.h:444:1: required from here

[Required] Overview of problems and questions:
Please help!

Raspberry pi3b/3b+ + RealSense D435 + no NCS or NCS1/2

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
Raspberry Pi3b / Raspberry pi3b+
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):

[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian / Ubuntu 16.04
[Required] Details of the work you did before the problem occurred:

I've already tried your project PINTO0309/MobileNet-SSDLite-RealSense-TF. Thank you for your project. But for my project, that was slow and unstable. So now I'm considering using NCS v1 or v2 to improve the performance of the Raspberry Pi 3B/3B+ with RealSense.




[Required] Error message:

[Required] Overview of problems and questions:

I wonder how much FPS or performance difference exists between no NCS, NCS v1, and NCS v2. Thank you.

MultiStickWithPiCamera example using custom mobile net ssd model results error

Device: Raspberry Pi 3 B+

CPU Arch.: armv7l

OS: Raspbian

I have trained a custom 1-class MobileNet SSD v2 network using the TensorFlow Object Detection API and successfully converted it to an IR model using OpenVINO 2019 R1.1. Then I substituted the model files in the MultiStickWithPiCamera.py example with my custom model. The predict_async thread raises an error.

Error message:

line 174, in predict_async
cnt, dev = heapq.heappop(self.heap_request)
IndexError: index out of range
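Not a fix for the underlying model problem, but the crash itself is the classic pattern of calling heapq.heappop on an empty heap: if no stick was ever registered (e.g. the model failed to load on the device), the request heap stays empty and the pop raises IndexError. A defensive sketch of the pattern (names are illustrative, not the script's actual variables):

```python
import heapq

heap_request = []          # (inflight_count, device_id) pairs, possibly empty

def pick_device(heap):
    """Pop the least-loaded device, or report clearly when none is available."""
    if not heap:
        raise RuntimeError("no NCS device registered; did the model load fail?")
    return heapq.heappop(heap)

heapq.heappush(heap_request, (0, "NCS2-0"))
cnt, dev = pick_device(heap_request)
print(cnt, dev)            # -> 0 NCS2-0
```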

X_LINK_ERROR

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
Rpi4
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian
[Required] Details of the work you did before the problem occurred:
Crashes after running inference for 10 minutes or so on a Raspberry Pi 4 with "MultiStickSSDwithPiCamera_OpenVINO_NCS2.py".
[Required] Error message:
E: [xLink] [ 262150] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -4)
E: [xLink] [ 262150] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [global] [ 262151] [python3] XLinkReadDataWithTimeOut:1494 Event data is invalid
E: [ncAPI] [ 262151] [python3] ncFifoReadElem:3510 Packet reading is failed.
E: [watchdog] [ 262340] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 263339] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 264338] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 265337] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 266336] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 267336] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 268335] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 269334] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 270333] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 271332] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 272331] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 273331] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 274331] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 274332] [WatchdogThread] watchdog_routine:315 [0x30a02e0] device, not respond, removing from watchdog

E: [xLink] [ 274676] [WatchdogThread] dispatcherWaitEventComplete:774 waiting is timeout, sending reset remote event

[Required] Overview of problems and questions:
Crashes after running inference for 10 minutes or so on a Raspberry Pi 4 with "MultiStickSSDwithPiCamera_OpenVINO_NCS2.py".
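The log shows the NCS2 watchdog losing contact with the stick (X_LINK_ERROR) and removing the device, after which the process cannot recover. One generic mitigation, sketched here under the assumption that the inference loop can simply be restarted from scratch (this is not code from the repository; `run_inference` is a placeholder for the real PiCamera + NCS2 loop), is to run the loop in a child process and restart it when it dies:

```python
import multiprocessing as mp
import time

def run_inference():
    # Placeholder for the real camera + NCS2 inference loop.
    # A real X_LINK_ERROR would terminate this process abnormally.
    time.sleep(0.1)

def supervise(max_restarts=3):
    """Run the worker, restarting it if it exits abnormally."""
    restarts = 0
    while restarts <= max_restarts:
        p = mp.Process(target=run_inference)
        p.start()
        p.join()
        if p.exitcode == 0:
            return restarts          # clean exit
        restarts += 1                # crashed: spawn a fresh worker
    raise RuntimeError("inference worker keeps crashing")

if __name__ == "__main__":
    print(supervise())  # 0 when the worker exits cleanly
```

This does not fix the underlying device drop (power supply quality and USB hub stability are common suspects with NCS sticks), but it keeps a long-running deployment alive across occasional crashes.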

Env setup: Raspberry Pi3 + Raspbian Stretch + RealSense SDK

An error occurs when I run the following command on my Raspberry Pi:
$ sudo apt install -y git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev libglfw3-dev at-spi2-core libdrm*
###########################################
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'libdrm-radeon1-dbg' for glob 'libdrm*'
Note, selecting 'libdrm-freedreno1' for glob 'libdrm*'
Note, selecting 'libdrm-nouveau1' for glob 'libdrm*'
Note, selecting 'libdrm-nouveau2' for glob 'libdrm*'
Note, selecting 'libdrm2-dbg' for glob 'libdrm*'
Note, selecting 'libdrm-nouveau1a' for glob 'libdrm*'
Note, selecting 'libdrm-amdgpu1' for glob 'libdrm*'
Note, selecting 'libdrm-omap1' for glob 'libdrm*'
Note, selecting 'libdrm-omap1-dbg' for glob 'libdrm*'
Note, selecting 'libdrm2' for glob 'libdrm*'
Note, selecting 'libdrm-nouveau1a-dbg' for glob 'libdrm*'
Note, selecting 'libdrmaa1.0' for glob 'libdrm*'
Note, selecting 'libdrmaa-dev' for glob 'libdrm*'
Note, selecting 'libdrm-radeon1' for glob 'libdrm*'
Note, selecting 'libdrm-dev' for glob 'libdrm*'
Note, selecting 'libdrm-nouveau1-dbg' for glob 'libdrm*'
E: Unable to locate package libglfw3-dev
###########################################

What is the problem?
