
Skeleton-based-action-recognition

YOLOv3, OpenPose, TensorFlow 2, ROS, multi-threading

It also supports running on a remote GPU server: you can grab a frame from the D435 locally, send the data to a remote server for processing, and get the result back.
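As a rough illustration of that workflow, the sketch below grabs one color frame from the D435 with pyrealsense2 and ships it to a remote machine over a plain TCP socket. The host, port, and length-prefixed JPEG framing are hypothetical choices for this example, not necessarily what this repo uses.

# Hypothetical local-side sketch: grab one frame and send it to a remote GPU server.
import socket
import struct
import cv2
import numpy as np
import pyrealsense2 as rs

HOST, PORT = "192.168.1.100", 9999           # hypothetical address of the remote GPU server

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    color = frames.get_color_frame()
    img = np.asanyarray(color.get_data())    # BGR color frame from the D435
finally:
    pipeline.stop()

ok, buf = cv2.imencode(".jpg", img)          # compress the frame before sending
payload = buf.tobytes()

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(struct.pack(">I", len(payload)) + payload)  # length-prefixed JPEG
    result = sock.recv(4096).decode()        # e.g. the recognized action label
print(result)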

This is my final year project, "3D Action Recognition based on Openpose and YOLO".

Installation

0. install openpose python api

Follow the instructions on the OpenPose homepage to install OpenPose and compile its Python API.

1. create a conda env.

conda create -n tensorflow2 python=3.6
conda activate tensorflow2
pip install -r requirements.txt

2. create a ROS package

First, follow the ROS wiki instructions to install ROS and create a ROS workspace.

Then, create a ROS package named act_recognizer.

cd catkin_ws/src
mkdir -p act_recognizer

Your ROS workspace folder will then have the following structure:

->~/catkin_ws
----->build
----->devel
----->src
--------->act_recognizer

3. clone this repo

Clone the repo and copy all files into the ROS package act_recognizer.

Modify act_talker.py, for example:

sys.path.append('{$your root path}/catkin_ws/src/act_recognizer')

Modify config.py: change the YOLO data paths, such as YOLO.CLASSES and YOLO.ANCHORS.
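For reference, this is the kind of change meant here, assuming the config follows YunYang1994's easydict-based YOLOv3 config style; both file paths are placeholders you must adapt to your own layout.

# config.py (sketch): point the YOLO class-name and anchor files at your copies
__C.YOLO.CLASSES = '{$your root path}/catkin_ws/src/act_recognizer/src/data/classes/coco.names'   # placeholder path
__C.YOLO.ANCHORS = '{$your root path}/catkin_ws/src/act_recognizer/src/data/anchors/anchors.txt'  # placeholder path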

Change the OpenPose Python API path in Module/poser.py so that the code can import pyopenpose correctly.

Additionally, you have to change the OpenPose model folder path:

Module/poser.py
----->class PoseLoader()
--------->self._params["model_folder"] = your openpose model folder
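A minimal sketch of what that part of Module/poser.py typically looks like with the standard OpenPose Python API; both paths below are placeholders that must point at your own OpenPose build and model folder, and the attribute names besides self._params["model_folder"] are illustrative.

# Module/poser.py (sketch)
import sys
sys.path.append('{$your openpose path}/build/python')   # folder that contains the openpose/pyopenpose module
from openpose import pyopenpose as op

class PoseLoader():
    def __init__(self):
        self._params = dict()
        self._params["model_folder"] = '{$your openpose path}/models'   # OpenPose model folder
        self._op_wrapper = op.WrapperPython()
        self._op_wrapper.configure(self._params)
        self._op_wrapper.start()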

4. download yolo and mlp checkpoints

Download the checkpoints from BaiduYun (extraction code: cxj6). Then move yolov3.weights into the checkpoints/YOLO folder and mlp.h5 into checkpoints. You will probably need to create the checkpoints folders first:

cd act_recognizer/src
mkdir -p checkpoints/YOLO

5. run the code

A ROS package written in Python does not need to be compiled. However, the first line of every .py file must point at the conda env tensorflow2's Python interpreter, like this:

#!/home/dongjai/anaconda3/envs/tensorflow2/bin/python
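Since roslaunch starts these node scripts directly, they most likely also need the executable bit; adjust the path to wherever act_talker.py and the other nodes live in your package:

chmod +x {$your root path}/catkin_ws/src/act_recognizer/act_talker.py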

Then run the code as follows:

roscore
cd catkin_ws
conda activate tensorflow2
source ./devel/setup.bash
roslaunch act_recognizer run.launch

Citation and Reference

Openpose from CMU

Yolov3 tensorflow2 from YunYang1994
