rolson24 / bwct-tracker

An app to count and track pedestrians, bike riders, e-scooter riders, and people in wheelchairs.

License: GNU Affero General Public License v3.0

computer-vision object-detection object-tracking roboflow-app yolov8

bwct-tracker's Introduction

project-logo

Using Computer Vision to count and analyze how vulnerable road users use streets so that local governments have easy access to data for justifying investment in pedestrian and bike infrastructure.

Developed with the software and tools below.

tqdm TensorFlow JavaScript scikit-learn HTML5 YAML C SciPy Electron
Plotly Python Docker GitHub Actions pandas NumPy ONNX JSON Flask


Overview

This repository contains the code for an Electron app that processes video from traffic cameras or portable cameras and counts the people passing through the scene. The user draws lines on the video to mark where crossings should be counted, and the app distinguishes between pedestrians, bikes, electric-scooter riders, and wheelchair users (coming soon). An Electron frontend connects to a Flask backend that handles the video processing and counting logic; the app maintains connectivity between the two, monitors backend health, reconnects as needed, and lets the user save output files.
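
The backend-health monitoring described above can be as simple as a polled status route. A minimal sketch, assuming a /health endpoint (the actual route names and port in BWCT_app.py may differ):

    # Minimal sketch of a pollable health endpoint; illustrative, not the
    # app's actual code.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # The Electron main process can poll this route on an interval and
        # reconnect to (or restart) the backend when the request fails.
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(port=5000)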

You can see our report here.

Features

Feature Description
โš™๏ธ Architecture The project features a modular architecture using Flask as backend and Electron for the frontend.
๐Ÿ”ฉ Code Quality Code follows PEP 8 guidelines with consistent formatting and clear variable names for readability.
๐Ÿ“„ Documentation Extensive documentation with inline comments, README files, and detailed guides for setup and usage.
๐Ÿงฉ Modularity Codebase is highly modular, enabling easy extension and reuse of components across different modules.
โšก๏ธ Performance Optimized performance with efficient algorithms and resource management, leveraging GPU accelerations.
๐Ÿ“ฆ Dependencies Key libraries include scikit-learn, TensorRT, Flask, matplotlib, and other essential ML and web development dependencies.
๐Ÿ’ป Platform Support Currently only tested and supported on NVIDIA Jetson hardware.



Repository Structure

└── /
    ├── BWCT_favicon.png
    ├── README.md
    ├── backend
    │   ├── BWCT_app.py
    │   ├── tracking
    │   │   ├── ConfTrack
    │   │   │   ├── basetrack.py
    │   │   │   ├── ConfTrack.py
    │   │   │   ├── kalman_filter.py
    │   │   │   └── matching.py
    │   │   ├── Impr_Assoc_Track
    │   │   │   ├── basetrack.py
    │   │   │   ├── Impr_Assoc_Track.py
    │   │   │   ├── interpolation.py
    │   │   │   ├── kalman_filter.py
    │   │   │   └── matching.py
    │   │   ├── LSTMTrack
    │   │   │   ├── LSTM_predictor.py
    │   │   │   └── LSTMTrack.py
    │   │   ├── YOLOv8_TensorRT
    │   │   ├── color_transfer_cpu.py
    │   │   ├── color_transfer_gpu.py
    │   │   ├── example_count_lines.txt
    │   │   ├── reprocess_tracks.py
    │   │   ├── requirements.txt
    │   │   ├── setup.py
    │   │   └── track.py
    │   └── templates
    ├── index.html
    ├── main.js
    ├── package-lock.json
    ├── package.json
    ├── preload.js
    ├── readme-ai.md
    ├── requirements.txt
    ├── test_main.js
    └── track_logs.txt

Modules

.
File Summary
test_main.js Creates and manages a responsive Electron app window connected to a Flask backend. Monitors backend health, reconnects if needed, and facilitates file saving via dialogs. Initiated on app launch and handles system sleep events for seamless operation.
package-lock.json Records the exact resolved version of every Node.js dependency of the Electron app, so that npm install reproduces the same dependency tree on every machine.
track_logs.txt The track_logs.txt file in the repository serves as a log for the processing of frames in the project. It records the progress of processing each frame out of a total of 1443 frames, including the frame number and frames per second (fps) at that point. This log file provides valuable insights into the processing speed and progress of the project execution, aiding in monitoring and optimizing performance.
requirements.txt NumPy for array manipulation, OpenCV for computer vision tasks, Flask for the web framework, TensorFlow for machine learning, Plotly for interactive visualizations, pandas for data analysis, and other essential libraries.
preload.js Enables secure communication between the front end and back end of the Electron app. Exposes functions for interacting with the file system, such as opening files, saving raw data, and generating visualizations.
package.json Defines metadata for an Electron app named bwct-tracker-electron within the repository. Specifies dependencies, scripts for app execution, author details, licensing, and repository links, crucial for managing the Electron app within the project architecture.
main.js Integrates Node.js with the Python backend for enhanced functionality. Contributes to seamless operation of the hybrid application within the repository's architecture.
index.html Enables real-time file upload, merging, and status tracking for the BWCT Video Merging Tool web application. Supports multi-file uploads, async merge requests, and live status updates via Resumable.js and server-side endpoints.
backend
File Summary
BWCT_app.py The BWCT_app.py file in the backend directory of the repository serves as the core Flask application handling file uploads, real-time data visualization, and interaction using SocketIO. It enables users to upload files, process data, and retrieve visualizations. Additionally, it incorporates features for monitoring file changes and serving downloadable content. The file also integrates various libraries for file handling, event observation, and data manipulation to provide a robust platform for user interaction and data analysis.
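
For orientation, a hedged sketch of the Flask + SocketIO pattern this file uses to push status to the frontend; the "progress" event name and payload fields below are illustrative assumptions, not the app's actual protocol:

    # Hedged sketch of a Flask-SocketIO progress channel; not BWCT_app.py itself.
    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app, cors_allowed_origins="*")

    def report_progress(frame_idx: int, total_frames: int) -> None:
        # Broadcast a progress event that a connected Electron client can
        # listen for to update its progress bar.
        socketio.emit("progress", {"frame": frame_idx, "total": total_frames})

    if __name__ == "__main__":
        socketio.run(app, port=5000)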
backend.templates
File Summary
upload.html The upload.html file within the backend/templates directory of the repository serves as the user interface for the BWCT Video Analysis Tool. It provides a web interface for users to upload videos for analysis. The page includes necessary styling and scripts for functionality, such as handling uploads and displaying analysis results. The primary purpose of this file is to facilitate the seamless uploading of videos and enhance the user experience within the broader architecture of the BWCT application.
backend.tracking
File Summary
requirements.txt Pins the libraries the tracking pipeline needs for image processing, machine learning, visualization, and data handling.
reprocess_tracks.py Recomputes line-crossing counts from saved tracks and count-line definitions: parses both input files, calculates crossings per object class, and saves the updated counts (see the crossing-count sketch after this table).
example_count_lines.txt An example count-line definition, a single vertical line from (640, 0) to (640, 720), in the format the tracker expects. Located at backend/tracking/example_count_lines.txt.
track.py Implements the core video-processing, object-detection, and tracking pipeline using OpenCV, NumPy, TensorFlow, and PyTorch, and integrates YOLOv8 TensorRT for optimized real-time detection. Provides the main counting workflow for video analysis.
color_transfer_cpu.py Implements color transfer between images by matching means and standard deviations in the Lab color space, following the original paper's methodology or an alternative scaling, and outputs the color-transferred image (a sketch follows this table).
color_transfer_gpu.py GPU-accelerated counterpart of the color transfer. Uses OpenCV, NumPy, CuPy, and Torch to manipulate and transfer image pixel data, and defines a function that converts an OpenCV GPU matrix to a CuPy array so GPU-resident data can move between libraries without a round trip through host memory.
Impr_track_count.py Handles image processing and association-counting tasks, using matplotlib, SciPy, NumPy, and TensorFlow to analyze and visualize tracking data, with FastReID integration for appearance features.
setup.py Sets up the compilation environment for a PyTorch extension module: defines the extension sources, retrieves the version and long description, and configures package details and dependencies for PyPI distribution.
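
As a concrete illustration of what reprocess_tracks.py and example_count_lines.txt describe, here is a minimal, hypothetical crossing-count sketch: a track crosses a count line when consecutive positions fall on opposite sides of it. The function names and file handling are assumptions for illustration, not the repository's code:

    # Hypothetical sketch of line-crossing counting.
    def side(line, point):
        """Sign of the cross product: which side of the line the point is on."""
        (x1, y1), (x2, y2) = line
        px, py = point
        return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

    def count_crossings(line, track_points):
        """Count sign changes of consecutive track points relative to the line.
        Treats the count line as infinite; a full implementation would also
        check that the crossing falls within the segment's endpoints."""
        sides = [side(line, p) for p in track_points]
        return sum(1 for a, b in zip(sides, sides[1:]) if a * b < 0)

    # Example: the vertical count line (640, 0) -> (640, 720) from
    # example_count_lines.txt, and a track moving left to right across it.
    line = ((640, 0), (640, 720))
    track = [(600, 360), (630, 362), (655, 365)]
    print(count_crossings(line, track))  # 1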
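
Likewise, the color transfer in color_transfer_cpu.py follows the classic Reinhard-style statistics matching described in its summary. A compact CPU-only sketch under that assumption (a simplification, not a copy of the repository's implementation):

    # Sketch of Reinhard-style color transfer in Lab space.
    import cv2
    import numpy as np

    def color_transfer(source, target):
        """Match the per-channel Lab mean/std of `target` to `source`."""
        src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
        tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
        for c in range(3):
            s_mean, s_std = src[..., c].mean(), src[..., c].std()
            t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
            # Shift and scale each Lab channel to the source statistics.
            tgt[..., c] = (tgt[..., c] - t_mean) * (s_std / (t_std + 1e-6)) + s_mean
        tgt = np.clip(tgt, 0, 255).astype(np.uint8)
        return cv2.cvtColor(tgt, cv2.COLOR_LAB2BGR)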
backend.tracking.Impr_Assoc_Track
File Summary
basetrack.py Track state, ID generation, state transitions, position history, activation, prediction, update methods, and state marking. Enables implementing custom tracking logic.
interpolation.py Creates interpolation results for tracking data in MOTChallenge format, optimizing track continuity by filling in missing frames through linear interpolation. Parses input arguments, generates new tracklets based on specified thresholds, and writes the interpolated track results to new files.
Impr_Assoc_Track.py Implements the tracking system itself, combining IoU-distance association, Kalman filtering, and fast re-identification features into a robust real-time multi-object tracker.
matching.py Implements functions to calculate matching indices, IoU distances, and cost matrices for multi-object tracking association, including fusion of motion, IoU, and detection scores. Enables efficient assignment and merging of object matches (a minimal sketch follows this table).
kalman_filter.py Implements Kalman filtering for tracking bounding boxes with motion model and observation matrix. Facilitates track creation, prediction, correction, and distance computation for state and measurement comparison.
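
To make the association step concrete, a minimal sketch of an IoU cost matrix plus optimal assignment, the pattern matching.py builds on. This uses SciPy's Hungarian solver for illustration; the repository's matching code adds score and motion fusion and may use a different solver:

    # Minimal association sketch: IoU cost matrix + Hungarian assignment.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        """IoU of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def associate(tracks, detections, iou_thresh=0.3):
        """Match track boxes to detection boxes by minimizing 1 - IoU."""
        cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
        rows, cols = linear_sum_assignment(cost)
        # Keep only pairs whose IoU clears the threshold.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_thresh]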
backend.tracking.ConfTrack
File Summary
basetrack.py State transitions, ID generation, activation handling, and state updates. Supports multi-camera tracking with location tracking. Encapsulates key tracking attributes and methods for extensibility and customization within the repository's architecture.
ConfTrack.py The ConfTrack.py file in the backend/tracking directory of the repository implements core functionality for object tracking and association in computer vision tasks. It leverages various matching algorithms, Kalman filtering, and FastReID integration to track objects efficiently. This code plays a crucial role in enhancing the tracking accuracy and robustness of the overall system by utilizing advanced computer vision techniques.
ConfTrack Defines an STrack class that is central to the object-tracking functionality: it pulls in the computer-vision and tracking-algorithm modules needed for detection, tracking, and feature extraction, and maintains object identities across frames.
matching.py Implements functions to compute distance metrics for object tracking. Utilizes IoU calculations to determine similarity between bounding boxes. Facilitates matching and fusion of tracked objects. Enhances object tracking accuracy through cost optimization.
kalman_filter.py Implements Kalman filtering for tracking bounding boxes in image space: initialization, prediction, state update, and distance computation to measurements with a customizable metric. Strengthens tracking robustness in the ConfTrack architecture (a toy sketch follows this table).
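
A toy version of the predict/update cycle both kalman_filter.py files are built around. The real filters track a full bounding-box state; this sketch tracks only (x, y, vx, vy) under a constant-velocity model, with noise values chosen arbitrarily for illustration:

    # Toy constant-velocity Kalman filter; not the repository's implementation.
    import numpy as np

    F = np.array([[1, 0, 1, 0],   # x += vx
                  [0, 1, 0, 1],   # y += vy
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],   # we observe position only
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 1e-2          # process noise (assumed)
    R = np.eye(2) * 1.0           # measurement noise (assumed)

    def predict(x, P):
        """Project the state and covariance one frame ahead."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        """Correct the prediction with a position measurement z."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P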
backend.tracking.YOLOv8_TensorRT
File Summary
cuda_utils.py Transforms images on the GPU using CUDA-accelerated functions, resizing and padding while maintaining aspect ratio to fit a new shape. Uses PyTorch for GPU padding and converts OpenCV GPU matrices to PyTorch tensors via CuPy arrays.
pycuda_api.py Enables loading and running TensorRT models with CUDA. Initializes engine and bindings from provided weights file. Supports dynamic axes and profiler setting. Conducts warm-up with predefined inputs for optimal performance.
cudart_api.py Enables inference acceleration using NVIDIA TensorRT for deep learning models. Initializes the engine, manages input/output bindings, supports dynamic axes, and provides a warm-up mechanism. Offers a callable interface for efficient GPU memory handling and execution.
utils.py Implements image-processing functions for resizing, padding, bounding-box operations, and non-maximum suppression for object detection, segmentation, and pose estimation. Prepares input data for efficient, confidence-thresholded object localization (see the letterbox sketch after this table).
common.py Defines utility functions for anchor point generation and implements non-maximum suppression for object detection models. Custom module classes are provided for post-processing detection and segmentation results, along with an optimization function for model compatibility.
torch_utils.py Implements segmentation, pose estimation, and object detection post-processing for computer vision tasks. Performs bounding box and mask processing using Torch and torchvision ops, including non-maximum suppression.
engine.py Builds a TensorRT engine from ONNX or API for object detection in image data. Handles input optimization, FP16 support, and profiling. Implements a Torch module for executing the model efficiently on GPUs, supporting dynamic shapes and profiling hooks.
api.py Implements multiple layers like Conv2d, Bottleneck, SPPF, and Detect for the neural network in the YOLOv8_TensorRT model, enabling efficient object detection with optimized TRT operations.
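
The resize-and-pad ("letterbox") preprocessing that utils.py and cuda_utils.py perform before inference can be sketched on the CPU like this. The 640×640 shape matches the export command later in this README; the pad value of 114 is the common YOLO convention, assumed here rather than read from the repo:

    # CPU sketch of letterbox preprocessing (aspect-preserving resize + pad);
    # cuda_utils.py does the equivalent on the GPU with CuPy/PyTorch.
    import cv2
    import numpy as np

    def letterbox(img, new_shape=(640, 640), pad_value=114):
        h, w = img.shape[:2]
        r = min(new_shape[0] / h, new_shape[1] / w)      # scale to fit
        nh, nw = round(h * r), round(w * r)
        resized = cv2.resize(img, (nw, nh))
        canvas = np.full((new_shape[0], new_shape[1], 3), pad_value, np.uint8)
        top, left = (new_shape[0] - nh) // 2, (new_shape[1] - nw) // 2
        canvas[top:top + nh, left:left + nw] = resized   # center the image
        return canvas, r, (left, top)  # keep ratio/offset to map boxes back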
backend.tracking.LSTMTrack
File Summary
LSTMTrack.py Tracks objects using a Long Short-Term Memory (LSTM) model built on TensorFlow: feature matching, track-state management, and LSTM-based motion prediction, plus similarity measurement and optimization tools that improve tracking accuracy and counting.
LSTM_predictor.py Predicts the next state in object motion using an LSTM model. Initiates a track from unassociated measurements and runs LSTM prediction steps for sequences. Handles bounding box coordinates and feature vectors to make accurate predictions in a 516-dimensional state space.
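
For orientation, a hedged Keras sketch of the prediction step LSTM_predictor.py describes: feed a window of past states, predict the next one. The 516-dimensional state (4 box coordinates plus a 512-dimensional appearance feature), the window length, and the layer sizes below are assumptions based on the summary above, not the repository's actual architecture:

    # Hypothetical LSTM next-state predictor sketch.
    import numpy as np
    import tensorflow as tf

    STATE_DIM = 516   # 4 bbox coords + 512-d feature vector (per the summary)
    WINDOW = 10       # number of past steps fed to the model (assumed)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, STATE_DIM)),
        tf.keras.layers.LSTM(256),
        tf.keras.layers.Dense(STATE_DIM),  # predicted next state
    ])

    history = np.zeros((1, WINDOW, STATE_DIM), dtype=np.float32)  # dummy track
    next_state = model.predict(history)   # shape (1, 516)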
backend.tracking.fast_reid
File Summary
fast_reid_interfece.py Facilitates real-time person recognition using a pre-trained model. Processes image patches, runs predictions, and handles network input adaptation. Enables feature extraction for subsequent analysis and decision-making.

Getting Started

System Requirements:

  • Python: version 3.10.0

Installation

From source

  1. Clone the repository:
$ git clone https://github.com/rolson24/BWCT-tracker.git
  2. Change to the project directory:
$ cd BWCT-tracker
  3. Install the dependencies:

3.a

$ pip install -r requirements.txt

3.b Install onnx for your system. Follow these instructions.

3.c Install the correct requirements for your GPU:

If you have an NVIDIA GPU, you can leave the requirements file unchanged.

If you have no NVIDIA GPU, comment out cupy, faiss-gpu, and onnxruntime-gpu in requirements.txt, and edit BWCT_app.py to set the device to "cpu".

If you have an AMD GPU, install onnxruntime-directml (first try pip install onnxruntime-directml; if that doesn't work, build it from source).
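
A quick, generic way to confirm which execution provider onnxruntime will actually use on your machine before editing the requirements (not project-specific code):

    # Check available onnxruntime execution providers.
    import onnxruntime as ort

    print(ort.get_available_providers())
    # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] on an NVIDIA
    # machine, or ['DmlExecutionProvider', 'CPUExecutionProvider'] with
    # onnxruntime-directml installed.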

The app also depends on FFmpeg: Instructions

  4. Install Electron:

First, install Node.js: Instructions

Now install fs-extra:

$ npm install fs-extra

Now install electron:

$ npm install -g electron
  5. Start the app:
$ cd path/to/BWCT-Tracker
$ npm start

For NVIDIA Jetson

Follow the software portion of these instructions to do the full setup on an NVIDIA Jetson (includes OS installation).

Usage

From source

Follow the build instructions above to install the project.

Train new YOLOv8 model

  1. Follow this Google Colab notebook to train a new YOLOv8 model: Open In Colab
  2. Download the trained model weights from the Google Drive folder and place them in the backend/tracking/models directory

  3. (Optional) If you want a fast runtime and you have an NVIDIA GPU or an NVIDIA Jetson, convert the model to a TensorRT engine. First cd into the BWCT-Tracker repository and make sure TensorRT is installed, then:

A. Export the model to ONNX:

$ python3 backend/tracking/YOLOv8_TensorRT/export-det.py \
    --weights {path/to/weights_file} \
    --iou-thres 0.65 \
    --conf-thres 0.25 \
    --topk {num_of_classes_in_model} \
    --opset 17 \
    --sim \
    --input-shape 1 3 640 640 \
    --device cuda:0

B. Build the TensorRT engine:

$ /usr/src/tensorrt/bin/trtexec \
    --onnx={path/to/onnx_export_output} \
    --saveEngine=yolov8s.engine \
    --fp16

C. Move the resulting ".engine" model into the models folder.

  4. Update the model_path variable in backend/BWCT_app.py with the path to the new model weights.

Project Roadmap

  • ► Expand compatibility to other machines
  • ► Add support for other models
  • ► Package the app into a Docker container

Contributing

Contributions are welcome! Here are several ways you can contribute:

Contributing Guidelines
  1. Fork the Repository: Start by forking the project repository to your local account.
  2. Clone Locally: Clone the forked repository to your local machine using a git client.
    git clone https://github.com/<your-username>/BWCT-tracker.git
  3. Create a New Branch: Always work on a new branch, giving it a descriptive name.
    git checkout -b new-feature-x
  4. Make Your Changes: Develop and test your changes locally.
  5. Commit Your Changes: Commit with a clear message describing your updates.
    git commit -m 'Implemented new feature x.'
  6. Push to Your Fork: Push the changes to your forked repository.
    git push origin new-feature-x
  7. Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.
  8. Review: Once your PR is reviewed and approved, it will be merged into the main branch. Congratulations on your contribution!


License

This project is protected under the AGPL-3.0 License. For more details, refer to the LICENSE file.


Special thanks to the following people for their existing work:

  • The Ultralytics team for all their amazing work developing YOLOv8.
  • Daniel Stadler and Jürgen Beyerer for their paper on the Improved Association tracker
  • Hyeonchul Jung, Seokjun Kang, Takgen Kim, and HyeongKi Kim for their paper on ConfTrack. (See also their implementation here)
  • He, Lingxiao and Liao, Xingyu and Liu, Wu and Liu, Xinchen and Cheng, Peng and Mei, Tao for developing FastReID. (See also their implementation here)
  • Adrian Rosebrock, Kumar Ujjawal and Adam Spannbauer for their implementation of fast color transfer. (See also their implementation here)
  • The YOLOv8-TensorRT team for their wonderful work making YOLOv8 fast with TensorRT.
  • The Roboflow Supervision team for their amazing work developing the Supervision tool.




bwct-tracker's Issues

[DOCUMENTATION] End of ENGS 90 24W Tasks

Write out the README for notes about the project and links to the final report PDF.

Explain the structure of the repository.

Explain the training script and link to roboflow dataset used. Instructions on how to train.

How to print a case for the Jetson.

Verify the Jetson works before handing it off to Jennie. Give her the box and display cables.

List out in the README the next steps that we suggested in final report.

Transfer ownership of this repo to Emily and rename to BWCT-Tracker. Make sure branch settings on main are correct before doing so. Add link to BWCT-Recorder repository (do this after repo transfer). Make sure all links pointing to the repo are updated with the new link.

[BUG] Web App Started From Command Line Does Not Process Correctly

Wendell says:

I don't remember exactly what the issue was, but something about the web app being autostarted did not properly process videos once they were uploaded (i.e. no terminal window was available to debug it either). It would work fine if the web app was fired from a terminal and the terminal window was left open for debugging purposes. @rolson24 need to investigate more

Video uploads successfully. However, upon drawing lines and clicking "Process Video", the loader would spin for a while and then immediately the progress bar would go to the end with "00:00:00" remaining on processing time left. And no count output would be produced.

[Feature] Add Logging

Add loggers and documentation about how they work so we have a chance at debugging issues in the future.

[FEATURE] Features Requested by Jennie and Team 11

Per Jennie's email on 2024-03-02 and from our Team meeting on 2024-02-29:

I think you got this all from me, but just in case.

In / Out - what is the directionality of the counts - can an arrow or explanation be added to indicate it? And how does this work specifically with the vertical lines?

wheelchairs -> add to pedestrian count for now so it's not confusing

15 min clips - Rob mentioned (and I concur) that it would be good to have counts in 15 min clips as this is a standard increment used in counts

time conversion - will we need a way to compare the raw video time to the processed video time? in the event that the processing takes different amounts of time and buckets the data accordingly?

transparency - love the transparency idea for the paths

real time length of Allen clips - can you use the # of segments to estimate the real time length of the Allen clips that Rob and you took so that the counts are more meaningful

please save Allen St raw data - in case we need to count cars and trucks

In addition to this, Raif wants to save the x,y tracks into their own file and add an upload feature for processed track files, so that you don't have to re-process a whole video if you close a session and want to do more analysis on it later.

Written by Wendell
