
volvo-datax's Introduction

Unified Framework for Pedestrian Detection & Intention Classification

Collaborative research project between Volvo Cars USA & Sweden, UC Berkeley, and Chalmers University.

Team: Rajarathnam Balakrishnan, Francesco Piccoli, Maria Jesus Perez, Moraldeepsingh Sachdeo, Carlos Nuñez, Matthew Tang

Model Components

Our project involved building an integrated end-to-end system for pedestrian intent detection. Each model uses a subset of the components below; a sketch after the list shows how they chain together.

  • YOLOv3 -> Object detector: Responsible for identifying and detecting objects of interest in a given frame or image.
  • SORT -> Object Tracker: Responsible for tracking the identified pedestrians across a sequence of frames and maintaining a unique ID for each pedestrian.
  • DeepSORT -> Object Tracker: Responsible for extracting appearance features from each tracked pedestrian to improve re-identification, even through occlusions.
  • Early Fused Skeleton -> Skeleton mapping: Responsible for mapping skeletons for each tracked pedestrian.
  • Spatio-Temporal DenseNet -> Classifier: Responsible for classifying each identified and tracked pedestrian's intention using that pedestrian's last 16 frames.
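
The sketch below is purely illustrative of how these components chain together per frame. The names detector, tracker, pose_estimator, and classifier are hypothetical stand-ins for the real modules, and the 100x100 crop size is an assumption based on the classifier input shape quoted in the issues further down.

import cv2
import numpy as np

SEQ_LEN = 16      # the classifier looks at the last 16 frames of each pedestrian
history = {}      # track_id -> list of cropped pedestrian images

def crop_and_resize(frame, box, size=(100, 100)):
    x1, y1, x2, y2 = [int(v) for v in box]
    return cv2.resize(frame[y1:y2, x1:x2], size)

def process_frame(frame, detector, tracker, pose_estimator, classifier):
    boxes = detector.detect(frame)          # YOLOv3: pedestrian boxes + confidences
    tracks = tracker.update(boxes)          # SORT / DeepSORT: boxes with persistent IDs
    intents = {}
    for x1, y1, x2, y2, track_id in tracks:
        crop = crop_and_resize(frame, (x1, y1, x2, y2))
        if pose_estimator is not None:      # Models C and D: impose the skeleton on the crop
            crop = pose_estimator.draw_skeleton(crop)
        history.setdefault(track_id, []).append(crop)
        clip = history[track_id][-SEQ_LEN:]
        if len(clip) == SEQ_LEN:            # classify once 16 crops are buffered
            clip_tensor = np.stack(clip, axis=2)[None]   # shape (1, 100, 100, 16, 3)
            intents[track_id] = classifier.predict(clip_tensor)
    return intents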

Visualizations

For more detailed information about each model and the different components, click here to see the website (made with ReactJS and MaterialUI). Click here for the website source code.

Repo contents

  • /checkpoints - Folder for holding weights and checkpoints
  • /data - Contains the class-name file
  • /deep_sort - DeepSORT algorithm
  • /images - Images and GIFs for the README
  • /SORT - Additional file for SORT
  • /tf-pose-estimation - Skeleton-fitting algorithm files
  • /yolov3_tf2 - YOLOv3 algorithm files
  • /yolov3_tf2.egg-info - YOLOv3 additional files
  • .gitignore - Ignores misc files like .DS_Store
  • densenet_1.hdf5 - Weights for the ST-DenseNet variant that uses original images
  • densenet_2.hdf5 - Weights for the ST-DenseNet variant that uses skeleton-imposed images
  • densenet_model.json - Saved ST-DenseNet model architecture in JSON format (a loading sketch follows this list)
  • LICENSE - MIT License for this repo
  • mars-small128.pb - Protocol buffer weight file for DeepSORT
  • Model A.ipynb - Google Colab notebook for the Model A demo
  • Model B.ipynb - Google Colab notebook for the Model B demo
  • Model C.ipynb - Google Colab notebook for the Model C demo
  • Model D.ipynb - Google Colab notebook for the Model D demo
  • README.md - Instructions on how to use this repo
  • sortn.py - SORT algorithm
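
As a minimal sketch of how the saved classifier files above fit together, assuming a TensorFlow/Keras runtime like the Colab notebooks use and that the architecture deserializes with standard Keras layers:

from tensorflow.keras.models import model_from_json

with open('densenet_model.json') as f:
    densenet = model_from_json(f.read())      # rebuild the ST-DenseNet architecture

densenet.load_weights('densenet_1.hdf5')      # or densenet_2.hdf5 for the skeleton-imposed variant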

Running the code

The code was developed and run on Google Colab (hosted Jupyter notebooks). Each model has its own Colab notebook. Follow the steps below to configure and run the notebooks.

  1. Click the Colab button for the model you wish to run. This will open a Colab notebook in your browser.
  2. Ensure that you are in playground mode if you cannot edit the notebook. The following steps are included in each Colab notebook but are repeated here as well.
  3. Connect the runtime to a GPU for better/faster results (Runtime --> Change runtime type --> GPU).
  4. Clone the repository in a notebook cell.
!git clone https://github.com/mjpramirez/Volvo-DataX
  5. Install dependencies in a notebook cell.
%cd Volvo-DataX/tf-pose-estimation
! pip3 install -r requirements.txt
%cd tf_pose/pafprocess
! sudo apt install swig
!swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
  6. Add this Google Drive folder of weight files as a shortcut to My Drive (click the bar at the top showing datax_volvo_additional_files as the folder name and click Add shortcut to Drive). A Drive-mounting sketch follows these steps.
  7. Run the rest of the notebook cells (Shift + Enter), following the further directions specific to each model, and observe the output.
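
If a notebook needs to read the weight files through the shortcut added in step 6, mounting Drive in a cell makes them visible. This uses the standard Colab API; the path below assumes the shortcut keeps the default folder name.

from google.colab import drive
drive.mount('/content/drive')   # authorize access when prompted

# the shortcut added in step 6 should now be visible under My Drive
!ls "/content/drive/My Drive/datax_volvo_additional_files"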

Model A

Model A uses the following components (a brief SORT usage sketch follows the Colab link):

  1. YOLO - ./yolov3_tf2
  2. SORT - sortn.py
  3. DenseNET - densenet_model.json

Click here to test Model A: Open In Colab
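
A brief usage sketch for the tracker in sortn.py, assuming it keeps the interface of the upstream abewley/sort Sort class (an assumption; the notebook wires this up for you):

import numpy as np
from sortn import Sort

tracker = Sort()                                     # Kalman filter + IoU association
# one frame of YOLOv3 detections as rows of [x1, y1, x2, y2, confidence]
detections = np.array([[100., 200., 150., 320., 0.9],
                       [400., 180., 460., 330., 0.8]])
tracks = tracker.update(detections)                  # rows of [x1, y1, x2, y2, track_id]
for x1, y1, x2, y2, track_id in tracks:
    print(int(track_id), (x1, y1, x2, y2))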

Model B

Model B uses the following components (a DeepSORT usage sketch follows the Colab link):

  1. YOLO - ./yolov3_tf2
  2. DeepSORT - ./deep_sort
  3. DenseNET - densenet_model.json

Click here to test Model B: Open In Colab
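
A hedged sketch of the DeepSORT pieces, assuming ./deep_sort keeps the interfaces of the upstream nwojke/deep_sort repo (Detection, Tracker, NearestNeighborDistanceMetric, and the create_box_encoder helper that loads mars-small128.pb); the import paths, boxes, and scores below are illustrative stand-ins for what the notebook sets up from YOLOv3 output.

import numpy as np
from deep_sort import nn_matching
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker
from tools import generate_detections as gdet

encoder = gdet.create_box_encoder('mars-small128.pb', batch_size=32)
metric = nn_matching.NearestNeighborDistanceMetric('cosine', matching_threshold=0.5)
tracker = Tracker(metric)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)        # stand-in for a video frame
boxes_tlwh = np.array([[100., 200., 50., 120.]])        # (top-left x, top-left y, width, height)
scores = np.array([0.9])

features = encoder(frame, boxes_tlwh)                   # appearance features per box
detections = [Detection(b, s, f) for b, s, f in zip(boxes_tlwh, scores, features)]
tracker.predict()
tracker.update(detections)
for track in tracker.tracks:
    if track.is_confirmed() and track.time_since_update == 0:
        print(track.track_id, track.to_tlbr())          # stable ID + (x1, y1, x2, y2)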

Model C

Model C uses the following components (a skeleton-fitting sketch follows the Colab link):

  1. YOLO - ./yolov3_tf2
  2. SORT - sortn.py
  3. Skeleton - ./tf-pose-estimation
  4. DenseNET - densenet_model.json

Click here to test Model C: Open In Colab
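
A hedged sketch of the skeleton-fitting step, assuming ./tf-pose-estimation keeps the upstream ildoonet/tf-pose-estimation API; the model name and target size are illustrative choices, and pedestrian.jpg is a placeholder input.

import cv2
from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path

estimator = TfPoseEstimator(get_graph_path('mobilenet_thin'), target_size=(432, 368))
image = cv2.imread('pedestrian.jpg')                                 # any frame or pedestrian crop
humans = estimator.inference(image, resize_to_default=True, upsample_size=4.0)
skeleton = TfPoseEstimator.draw_humans(image, humans, imgcopy=True)  # copy with the skeleton drawn on it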

Model D

Model D uses the following components:

  1. YOLO - ./yolov3_tf2
  2. DeepSORT - ./deep_sort
  3. Skeleton - ./tf-pose-estimation
  4. DenseNET - densenet_model.json

Click here to test Model D: Open In Colab

GitHub repos adapted for our project

For this project, we adapted code for each component from the following GitHub repos:

  • YOLOv3: https://github.com/zzh8829/yolov3-tf2
  • SORT: https://github.com/abewley/sort
  • DeepSORT: https://github.com/nwojke/deep_sort
  • Skeleton fitting (TF-Pose-Estimation): https://github.com/ildoonet/tf-pose-estimation
  • ST-DenseNet: https://github.com/GalDude33/DenseNetFCN-3D

volvo-datax's People

Contributors

francesco-piccoli, matthew29tang, mjpramirez, rajarathnambalakrishnan


volvo-datax's Issues

Densenet

Hi,
Your work is very helpful for me, and I have some questions about the ST-DenseNet.
I want to train the model on my own video; could you tell me how to prepare the 'inputs'?

How to train the ST-DenseNet?

Hi,
It would be helpful to know how to retrain the model with our own data. Could you provide some information on that?

x_train_images = load('latest_train_x.npy')
y_train = load('latest_train_y.npy')
y_train = to_categorical(y_train)
x_test_images = load('latest_test_x.npy')
y_test = load('latest_test_y.npy')
y_test = to_categorical(y_test)

Like this, how should the datasets be prepared?

from conv3d_net_working import DenseNet3D_121
model = DenseNet3D_121((100, 100, 16, 3))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=1e-4),
              metrics=['accuracy'])

Also, can you provide the file 'conv3d_net_working'?
Thanks~

Reg: Deepsort & densenet

Hi,
I trained a pedestrian detector to detect persons.
Next, how can I make use of DeepSORT and the DenseNet for intention detection?

Thanks in advance.

Evaluation Script

Hi,
I am unable to find the evaluation script for this project. It would be of great help if you could share it.
