TPU-Posenet

Edge TPU Accelerator/Multi-TPU/Multi-Model + Posenet/DeeplabV3/MobileNet-SSD + Python + Sync/Async + LaptopPC/RaspberryPi.
Inspired by google-coral/project-posenet.
This repository tunes Google's sample logic for speed and extends it to multiple TPUs. I also replaced the complex GStreamer implementation with an OpenCV one.

0. Table of contents

1. Environment
2. Inference behavior
 2-1. Async, TPU x3, USB Camera, Single Person
 2-2. Sync, TPU x1, USB Camera, Single Person
 2-3. Sync, TPU x1, MP4 (30 FPS), Multi Person
 2-4. Async, TPU x3, USB Camera (30 FPS), Multi-Model, Posenet + DeeplabV3 + MobileNet-SSDv2
3. Introduction procedure
 3-1. Common procedures for devices
 3-2-1. Only Linux
 3-2-2. Only RaspberryPi (Stretch or Buster)
4. Usage
5. Reference articles

1. Environment

  • Ubuntu or RaspberryPi
    • (Note: Because the RaspberryPi3 only has low-speed USB 2.0 ports, multi-TPU operation becomes extremely unstable.)
  • OpenCV4.1.1-openvino
  • Coral Edge TPU Accelerator (Multi-TPU)
    • Automatically detect the number of multiple TPU accelerators connected to a USB hub to improve performance.
  • USB Camera (PlayStation Eye)
  • Picamera v2
  • Self-powered USB 3.0 Hub
  • Python 3.5.2+
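The automatic multi-TPU detection mentioned above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the repository's actual code: list_tpu_paths is a stand-in for the Edge TPU API's device enumeration, and frames are handed to the resulting engines round-robin.

```python
from itertools import cycle

def list_tpu_paths():
    # Stand-in for Edge TPU device enumeration; here we simply
    # pretend three accelerators are plugged into the USB hub.
    return ["/dev/tpu0", "/dev/tpu1", "/dev/tpu2"]

def make_engines(paths):
    # One inference engine per detected accelerator. A real engine
    # would wrap a compiled .tflite model bound to the device path.
    return ["engine@" + p for p in paths]

paths = list_tpu_paths()
engines = make_engines(paths)
dispatcher = cycle(engines)  # round-robin frame dispatch

# Each incoming camera frame goes to the next engine in turn.
assignments = [next(dispatcher) for _ in range(5)]
print(assignments)
```

With several accelerators on a self-powered USB 3.0 hub, this kind of round-robin dispatch is what lets throughput scale with the number of TPUs detected.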


2. Inference behavior

2-1. Async, TPU x3, USB Camera, Single Person

YouTube: https://youtu.be/LBk71RKca1c

2-2. Sync, TPU x1, USB Camera, Single Person

YouTube: https://youtu.be/GuuXzpLXFJo

2-3. Sync, TPU x1, MP4 (30 FPS), Multi Person

YouTube: https://youtu.be/ibPuI12bj2w

2-4. Async, TPU x3, USB Camera (30 FPS), Multi-Model, Posenet + DeeplabV3 + MobileNet-SSDv2

YouTube: https://youtu.be/d946VOE65tU

3. Introduction procedure

3-1. Common procedures for devices

$ sudo apt-get update;sudo apt-get upgrade -y

$ sudo apt-get install -y python3-pip
$ sudo pip3 install pip --upgrade
$ sudo pip3 install numpy

$ wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names
$ tar xzf edgetpu_api.tar.gz
$ sudo edgetpu_api/install.sh

$ git clone https://github.com/PINTO0309/TPU-Posenet.git
$ cd TPU-Posenet
$ cd models;./download.sh;cd ..
$ cd media;./download.sh;cd ..

3-2-1. Only Linux

$ wget https://github.com/PINTO0309/OpenVINO-bin/raw/master/Linux/download_2019R2.sh
$ chmod +x download_2019R2.sh
$ ./download_2019R2.sh
$ cd l_openvino_toolkit_p_2019.2.242
$ sudo ./install_openvino_dependencies.sh
$ ./install_GUI.sh
OR
$ ./install.sh

3-2-2. Only RaspberryPi (Stretch or Buster)

### Only Raspbian Buster ############################################################
$ cd /usr/local/lib/python3.7/dist-packages/edgetpu/swig/
$ sudo cp \
_edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so \
_edgetpu_cpp_wrapper.cpython-37m-arm-linux-gnueabihf.so
### Only Raspbian Buster ############################################################

$ cd ~/TPU-Posenet
$ sudo pip3 install imutils
$ sudo raspi-config


$ wget https://github.com/PINTO0309/OpenVINO-bin/raw/master/RaspberryPi/download_2019R2.sh
$ sudo chmod +x download_2019R2.sh
$ ./download_2019R2.sh
$ echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
$ source ~/.bashrc

4. Usage

usage: pose_camera_multi_tpu.py [-h] [--model MODEL] [--usbcamno USBCAMNO]
                                [--videofile VIDEOFILE] [--vidfps VIDFPS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Path of the detection model.
  --usbcamno USBCAMNO   USB Camera number.
  --videofile VIDEOFILE
                        Path to input video file. (Default="")
  --vidfps VIDFPS       FPS of Video. (Default=30)

usage: pose_camera_single_tpu.py [-h] [--model MODEL] [--usbcamno USBCAMNO]
                                 [--videofile VIDEOFILE] [--vidfps VIDFPS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Path of the detection model.
  --usbcamno USBCAMNO   USB Camera number.
  --videofile VIDEOFILE
                        Path to input video file. (Default="")
  --vidfps VIDFPS       FPS of Video. (Default=30)

usage: pose_picam_multi_tpu.py [-h] [--model MODEL] [--videofile VIDEOFILE] [--vidfps VIDFPS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Path of the detection model.
  --videofile VIDEOFILE
                        Path to input video file. (Default="")
  --vidfps VIDFPS       FPS of Video. (Default=30)

usage: pose_picam_single_tpu.py [-h] [--model MODEL] [--videofile VIDEOFILE] [--vidfps VIDFPS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Path of the detection model.
  --videofile VIDEOFILE
                        Path to input video file. (Default="")
  --vidfps VIDFPS       FPS of Video. (Default=30)

usage: ssd-deeplab-posenet.py [-h] [--pose_model POSE_MODEL]
                              [--deep_model DEEP_MODEL]
                              [--ssd_model SSD_MODEL] [--usbcamno USBCAMNO]
                              [--videofile VIDEOFILE] [--vidfps VIDFPS]
                              [--camera_width CAMERA_WIDTH]
                              [--camera_height CAMERA_HEIGHT]

optional arguments:
  -h, --help            show this help message and exit
  --pose_model POSE_MODEL
                        Path of the posenet model.
  --deep_model DEEP_MODEL
                        Path of the deeplabv3 model.
  --ssd_model SSD_MODEL
                        Path of the mobilenet-ssd model.
  --usbcamno USBCAMNO   USB Camera number.
  --videofile VIDEOFILE
                        Path to input video file. (Default="")
  --vidfps VIDFPS       FPS of Video. (Default=30)
  --camera_width CAMERA_WIDTH
                        USB Camera resolution (width). (Default=640)
  --camera_height CAMERA_HEIGHT
                        USB Camera resolution (height). (Default=480)
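The Sync/Async distinction in the script names above comes down to whether frame capture and inference share one loop or run in separate threads. The sketch below is a hypothetical illustration of the async pattern only, with infer() standing in for a real PoseEngine call on one TPU:

```python
import queue
import threading

def infer(frame):
    # Placeholder for a pose-estimation call on one Edge TPU.
    return "poses-for-" + frame

frames = queue.Queue(maxsize=10)
results = queue.Queue()
STOP = object()  # sentinel telling a worker to shut down

def worker():
    # One worker per TPU: pull frames, run inference, push results.
    while True:
        frame = frames.get()
        if frame is STOP:
            break
        results.put(infer(frame))

NUM_TPUS = 3
threads = [threading.Thread(target=worker) for _ in range(NUM_TPUS)]
for t in threads:
    t.start()

# The capture loop (simulated here) never blocks on inference,
# which is what lifts throughput past a single TPU's frame rate.
for i in range(6):
    frames.put("frame%d" % i)
for _ in threads:
    frames.put(STOP)
for t in threads:
    t.join()

print(results.qsize())  # all 6 frames processed, order not guaranteed
```

A sync variant would simply call infer(frame) inside the capture loop, capping throughput at one TPU's inference rate.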

5. Reference articles

  1. Edge TPU USB Accelerator analysis - I/O data transfer - Qiita - iwatake2222

  2. [150 FPS ++] Connect three Coral Edge TPU accelerators and infer in parallel processing to get ultra-fast object detection inference performance ーTo the extreme of useless high performanceー - Qiita - PINTO

  3. [150 FPS ++] Connect three Coral Edge TPU accelerators and infer in parallel processing to get ultra-fast Posenet inference performance ーTo the extreme of useless high performanceー - Qiita - PINTO

  4. Raspberry Pi Camera Module


tpu-posenet's Issues

About FPS on Raspberry Pi

@PINTO0309 Thanks for the work. I am curious about the frame rate on the Raspberry Pi. Do you have any data for that? The demo videos are all from a PC, right?

Error trying to run pose_camera_single_tpu.py

I get this error when trying to run python3 pose_camera_single_tpu.py

PINTO/TPU-Posenet-master/pose_engine.py", line 125, in DetectPosesInImage
inference_time, output = self.run_inference(img.flatten())
AttributeError: 'PoseEngine' object has no attribute 'run_inference'

I don't see a run_inference method defined anywhere.

Multi Model?

Am I wrong, or is the multi-model implementation missing? I can only find the multi-TPU implementation for Posenet.

Labels for deeplab

I am wondering what labels were used for the deeplab model?

It outputs some 0s but mostly 9s when no object is in the image, which correspond to "background" and "chair" in the official DeepLab label map.

Were different labels used for this model?

Failed to allocate tensor

I am trying to run the single TPU inference with USB camera.
It throws the following error:

Internal: Unsupported data type in custom op handler: Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensor.

My environment: Python3, edgetpu runtime version 13 and tflite runtime 2.5

Any idea?
