
node-moving-things-tracker's Introduction

OpenDataCam – An open source tool to quantify the world

OpenDataCam is an open source tool that helps to quantify the world. Using computer vision, OpenDataCam understands and quantifies moving objects. Its simple setup allows everybody to count moving objects from cameras and videos.

People use OpenDataCam for many different use cases. It is especially popular for traffic studies (modal-split, turn-count, etc.) but OpenDataCam detects 50+ common objects out of the box and can be used for many more things. And in case it does not detect what you are looking for, you can always train your own model.

OpenDataCam uses machine learning to detect objects in videos and camera feeds. It then follows the objects as they move across the scene. Define counters via the easy-to-use UI or API, and every time an object crosses a counter, OpenDataCam increments the count.
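
The counting idea described above — registering an object the moment its trajectory crosses a counter line — can be sketched as a segment-intersection test. This is a simplified illustration, not OpenDataCam's actual implementation:

```javascript
// Simplified sketch of counter-line logic: an object is counted when the
// segment from its previous position to its current position crosses the
// counter line. Illustrative only, not OpenDataCam's actual code.
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

function segmentsIntersect(p1, p2, q1, q2) {
  const d1 = cross(q1, q2, p1);
  const d2 = cross(q1, q2, p2);
  const d3 = cross(p1, p2, q1);
  const d4 = cross(p1, p2, q2);
  // Proper crossing: endpoints of each segment lie on opposite sides of the other
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

// Counter line drawn across the scene
const counterLine = [{ x: 0, y: 100 }, { x: 200, y: 100 }];
// Tracked object moved from above the line to below it between two frames
const prev = { x: 50, y: 90 };
const curr = { x: 55, y: 110 };
console.log(segmentsIntersect(prev, curr, counterLine[0], counterLine[1])); // true
```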

Demo Videos

πŸ‘‰ UI Walkthrough (2 min, OpenDataCam 3.0)
πŸ‘‰ UI Walkthrough (4 min, OpenDataCam 2.0)
πŸ‘‰ IoT Happy Hour #13: OpenDataCam 3.0

Features

OpenDataCam comes feature-packed; the highlights are:

  • Multiple object classes
  • Fine grained counter logic
  • Trajectory analysis
  • Real-time or pre-recorded video sources
  • Run on small devices in the field or data centers in the cloud
  • You own the data
  • Easy to use API

🎬 Get Started, quick setup

The quickest way to get started with OpenDataCam is to use the existing Docker Images.

Prerequisites

Installation

# Download install script
wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.2/docker/install-opendatacam.sh

# Give exec permission
chmod +x install-opendatacam.sh

# Note: You will be asked for sudo password when installing OpenDataCam

# Install command for Jetson Nano
./install-opendatacam.sh --platform nano

# Install command for Jetson Xavier / Xavier NX
./install-opendatacam.sh --platform xavier

# Install command for a Laptop, Desktop or Server with NVIDIA GPU
./install-opendatacam.sh --platform desktop

This command will download and start a Docker container on the machine. After it finishes, the container starts a web server on port 8080 and runs a demo video.

Note: The Docker container is started in auto-restart mode, so if you reboot your machine it will automatically start OpenDataCam on startup. To stop it, run docker-compose down in the same folder as the install script.

Use OpenDataCam

Open your browser at http://[IP_OF_JETSON]:8080. (If you are running with the Jetson connected to a screen, try http://localhost:8080.)

You should see a video of a busy intersection where you can immediately start counting.

Next Steps

Now you can…

  • Drag and drop a video file into the browser window to have OpenDataCam analyze this file
  • Change the video input to run from a USB cam or other cameras
  • Use custom neural network weights

and much more. See Configuration for a full list of configuration options.

πŸ”Œ API Documentation

For use cases that aren't covered by the OpenDataCam base app, you may be able to build on top of our API instead of forking the project.

https://opendatacam.github.io/opendatacam/apidoc/
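
As a minimal sketch of building on the API from Node.js — the endpoint path `/counter/areas` below is an assumption for illustration; consult the API documentation above for the actual routes and response shapes:

```javascript
// Hypothetical sketch of talking to the OpenDataCam HTTP API from Node.js
// (requires Node 18+ for the global fetch). The endpoint path is
// illustrative -- check the API docs for the real routes.
function apiUrl(host, port, path) {
  return `http://${host}:${port}${path}`;
}

async function fetchCounterData(host, port = 8080) {
  // e.g. GET the current counter configuration (path is an assumption)
  const res = await fetch(apiUrl(host, port, '/counter/areas'));
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  return res.json();
}

console.log(apiUrl('192.168.0.10', 8080, '/counter/areas'));
// http://192.168.0.10:8080/counter/areas
```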

πŸ—ƒ Data export documentation

πŸ›  Development notes

See Development notes

πŸ’°οΈ Funded by the community

  • @rantgithub funded work to add Polygon counters and to improve the counting lines

πŸ“«οΈ Contact

Please ask any questions you have around OpenDataCam in the GitHub Discussions. Bugs, features and anything else regarding the development of OpenDataCam are tracked in GitHub Issues.

For business inquiries or professional support requests please contact Valentin Sawadski or visit OpenDataCam for Professionals.

πŸ’Œ Acknowledgments

node-moving-things-tracker's People

Contributors

akretz, b-g, tdurand, vsaw


node-moving-things-tracker's Issues

Final tweaks Readme

@tdurand went over the readme and tweaked things a little ... and also added myself to the contributors (hope that is okay) etc.

Open:

  • extend the Acknowledgments section with your finds
  • resolve the TODO .txt vs .json input as noted in the text
  • check whether npm publish is correct (it seems there is also the repo under the old name)

Otherwise ready from my side.

Refactor: Allow for multiple Tracker instances

I have to confirm with a test suite, but I believe that the tracker merges trajectories without checking for object types.

So if e.g. a person walks in front of a car, they will not be tracked as two separate objects but as one single object, even if the model is capable of detecting the objects separately.
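
One possible shape of this refactor is to group detections by class and feed each group to its own tracker instance. The sketch below is an assumption about the future API: `createTracker` and the `update` method are placeholders, not the library's current interface.

```javascript
// Sketch: one tracker instance per object class, so a person walking in
// front of a car cannot be merged into the car's trajectory.
// `createTracker` is a placeholder for however a Tracker instance would
// be constructed after the refactor.
function groupByClass(detections) {
  const groups = new Map();
  for (const d of detections) {
    if (!groups.has(d.name)) groups.set(d.name, []);
    groups.get(d.name).push(d);
  }
  return groups;
}

function updateAll(trackers, createTracker, detections, frameNb) {
  for (const [name, dets] of groupByClass(detections)) {
    if (!trackers.has(name)) trackers.set(name, createTracker());
    trackers.get(name).update(dets, frameNb);
  }
}

// Usage with a stub tracker that just records what it was fed:
const trackers = new Map();
const stub = () => ({ frames: [], update(d, f) { this.frames.push([f, d.length]); } });
updateAll(trackers, stub, [
  { name: 'person', x: 10, y: 10 },
  { name: 'car', x: 50, y: 20 },
  { name: 'person', x: 12, y: 11 },
], 1);
console.log(trackers.get('person').frames); // [ [ 1, 2 ] ]
```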

Tracker fails to track objects that are close to each other but distinct from one another

Looking at the following Darknet JSON stream, one can see that Darknet detects two persons close to each other, yet clearly distinct from one another. The tracker, however, merges both objects into one, thus erasing one object from the scene.

{
 "frame_id":1137132, 
 "objects": [ 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.447100, "center_y":0.315462, "width":0.033928, "height":0.115056}, "confidence":0.670621}, 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.472682, "center_y":0.296385, "width":0.028619, "height":0.118678}, "confidence":0.289695}
 ] 
}, 
{
 "frame_id":1137133, 
 "objects": [ 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.447339, "center_y":0.313765, "width":0.035581, "height":0.126710}, "confidence":0.582253}, 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.471141, "center_y":0.292986, "width":0.028312, "height":0.126107}, "confidence":0.308533}
 ] 
}, 
{
 "frame_id":1137134, 
 "objects": [ 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.452491, "center_y":0.302588, "width":0.030738, "height":0.136299}, "confidence":0.546889}, 
  {"class_id":0, "name":"person", "relative_coordinates":{"center_x":0.469706, "center_y":0.294295, "width":0.027532, "height":0.124118}, "confidence":0.533913}
 ] 
}
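
A quick sanity check on the first frame above supports this: the IoU of the two "person" boxes is only about 0.08, so the detections barely overlap and a tracker with a reasonable overlap threshold could keep them apart. (IoU is scale-invariant, so we can work directly in Darknet's relative coordinates; this is a standalone check, not tracker code.)

```javascript
// Compute the IoU of the two "person" boxes from frame 1137132 above,
// directly in Darknet relative coordinates (IoU is scale-invariant).
function iou(a, b) {
  const iw = Math.max(0, Math.min(a.center_x + a.width / 2, b.center_x + b.width / 2)
                       - Math.max(a.center_x - a.width / 2, b.center_x - b.width / 2));
  const ih = Math.max(0, Math.min(a.center_y + a.height / 2, b.center_y + b.height / 2)
                       - Math.max(a.center_y - a.height / 2, b.center_y - b.height / 2));
  const inter = iw * ih;
  return inter / (a.width * a.height + b.width * b.height - inter);
}

const p1 = { center_x: 0.447100, center_y: 0.315462, width: 0.033928, height: 0.115056 };
const p2 = { center_x: 0.472682, center_y: 0.296385, width: 0.028619, height: 0.118678 };
console.log(iou(p1, p2).toFixed(2)); // 0.08
```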

Training a Custom Classifier

Hello, πŸ‘‹ an interesting repository you have here. Could you please also include instructions on how to train a custom classifier? πŸ™‚

Update Node Version in automated testing.

We currently still test on Node 10, which has been out of maintenance for quite a while. We will update official support to Node 12.x, 14.x and 16.x, which are the currently supported LTS releases.

Add example running in browser

Can take care of this ... we could take one of the BTT videos and play it back in a p5.js sketch (left side: video + raw yolo detection results, right side: video + yolo detection results filtered with moving-things-tracker).

@tdurand Do you have some good ugly raw yolo detection results? (tbd in our call this afternoon)

Browser Support / Guide

Hello there!

Do you have any demo project or guide on how to use node-moving-things-tracker in the browser?

Create standalone dist/moving-things-tracker.js

Hi @tdurand. Do you mind if we add to package.json something like this?

"bundle": "browserify main.js --standalone tracker -o dist/moving-things-tracker.js",
"dist": "npm run bundle && browserify main.js --standalone gd | uglifyjs > dist/moving-things-tracker.min.js"

That way I believe people could use it out of the box in the browser via tracker.Tracker.updateTrackedItemsWithNewFrame ... which would be great as e.g. the creative coding audience might not always be into Node.js.
