nathanrooy / rpi-urban-mobility-tracker

The easiest way to count pedestrians, cyclists, and vehicles on edge computing devices or live video feeds.

License: GNU General Public License v3.0

Python 3.05% Jupyter Notebook 96.55% Dockerfile 0.39%
pedestrian-counting bike-counting car-counting urban-design deep-learning raspberry-pi coral-tpu deepsort deep-sort deep-sort-tracking tensorflow edge-computing pedestrians edge-tpu tensorflow-lite pedestrian-safety object-tracking

rpi-urban-mobility-tracker's Introduction

Raspberry Pi Urban Mobility Tracker (DeepSORT + MobileNet)

The Raspberry Pi Urban Mobility Tracker is the simplest way to track and count pedestrians, cyclists, scooters, and vehicles. For more information, see the original blog post [here].

Hardware

Primary Components

  1. Raspberry Pi (ideally v4-b)
  2. Raspberry Pi camera (ideally v2)
  3. Google Coral USB Accelerator (not required, but strongly encouraged)

Secondary Components

  1. Ballhead mount: https://www.amazon.com/gp/product/B00DA38C3G
  2. Clear lens: https://www.amazon.com/gp/product/B079JW114G
  3. Weatherproof enclosure: https://www.amazon.com/gp/product/B005UPAN0W
  4. 30000mAh battery: https://www.amazon.com/gp/product/B01M5LKV4T

Notes

  • The mounts located in geometry/ are provided as 3D-printer-ready STL files. I don't currently own a 3D printer, so I used the crowd-sourced printing service https://printathing.com/, which yielded great results (kind of a sales pitch, but not really; I just like the service).
  • The original FreeCAD file is also included just in case you want to modify the geometry.
  • The only cutting necessary is through the plastic case to allow for the lens. This joint should then be sealed using silicone caulk to prevent any moisture from entering.
  • All the secondary components listed are just suggestions that worked well for my build. Feel free to use whatever you want.
[Images: 3D-printed mounts; mounts with attached hardware; final setup (open); front (closed)]

Install (Raspberry Pi)

  1. UMT has been dockerized in order to minimize installation friction. Start by installing Docker on your Raspberry Pi or whatever device you plan on using. The instructions below assume a Raspberry Pi 4 with Raspberry Pi OS 2020-12-02. This is also a good time to add non-root users to the docker group (log out and back in for the change to take effect). As an example, to add the Raspberry Pi default user pi:
sudo usermod -aG docker pi
  2. Open a terminal and create a directory for the UMT output:
UMT_DIR=${HOME}/umt_output && mkdir -p ${UMT_DIR}
  3. Move into the new directory:
cd ${UMT_DIR}
  4. Download the Dockerfile and build it:
wget https://raw.githubusercontent.com/nathanrooy/rpi-urban-mobility-tracker/master/Dockerfile

docker build . -t umt
  5. Start the Docker container:
docker run --rm -it --privileged --mount type=bind,src=${UMT_DIR},dst=/root umt
  6. Test the install by downloading a video and running the tracker:
wget https://github.com/nathanrooy/rpi-urban-mobility-tracker/raw/master/data/videos/highway_01.mp4

umt -video highway_01.mp4

If everything worked correctly, you should see a directory named output/ filled with 10 annotated video frames.

Install (Ubuntu)

First, create and activate a new virtualenv, then install the TensorFlow Lite runtime package for Python:

pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime

Then finish with the following:

pip install git+https://github.com/nathanrooy/rpi-urban-mobility-tracker

Lastly, test the install by running step #6 from the Raspberry Pi install instructions above.
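
If the runtime installed correctly, a quick import check in Python (a minimal sanity check, not part of the official instructions) should succeed:

# Sanity check: the TFLite runtime should import without error.
from tflite_runtime.interpreter import Interpreter

print(Interpreter)  # prints the class if the install worked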

Model Choice

The default deep learning model is MobileNet v1, which has been trained on the COCO dataset and quantized for faster performance on edge deployments. Another good choice is PedNet, also a quantized MobileNet v1, but optimized specifically for pedestrians, cyclists, and vehicles. To use PedNet, simply download it from its repo (https://github.com/nathanrooy/ped-net) or clone it:

git clone https://github.com/nathanrooy/ped-net

Once the model and labels have been downloaded, simply use the -modelpath and -labelmap flags to specify a non-default model setup. As an example:

umt -camera -modelpath pednet_20200326_tflite_graph.tflite -labelmap labels.txt
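
For reference, PedNet's parsed label map (as echoed by the debug output in the issues further below) is a simple id-to-name dictionary along these lines:

# PedNet's three classes as a parsed label map (id -> name).
labels = {0: 'bike', 1: 'pedestrian', 2: 'vehicle'}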

Usage

Since this code is configured as a CLI, everything is accessible via the umt command in your terminal. To run with the Raspberry Pi camera (or a laptop camera) as the data source, run the following:

umt -camera

To run the tracker on an image sequence, append the -imageseq flag followed by a path to the images. Included in this repo are the first 300 frames from the MOT (Multiple Object Tracking) Challenge PETS09-S2L1 video. To use them, simply download/clone this repo and cd into the main directory.

umt -imageseq data/images/PETS09-S2L1/

To view the bounding boxes and tracking ability of the system, append the -display flag for a live feed. Note that this will greatly reduce the frame rate and is recommended only for testing.

umt -imageseq data/images/PETS09-S2L1/ -display

By default, only the first 10 frames will be processed. To increase or decrease this value, append the -nframes flag followed by an integer value.

umt -imageseq data/images/PETS09-S2L1/ -display -nframes 20

To persist the image frames and detections, use the -save flag. Saved images are then available in the output/ directory.

umt -imageseq data/images/PETS09-S2L1/ -save -nframes 20

To run the tracker using a video file input, append the -video flag followed by a path to the video file. Included in this repo are two video clips of vehicle traffic.

umt -video data/videos/highway_01.mp4

In certain instances, you may want to override the default object detection threshold (default=0.5). To accomplish this, append the -threshold flag followed by a float value in the range [0,1]. A value closer to one yields fewer detections with higher certainty, while a value closer to zero yields more detections with lower certainty. It's usually better to err on the side of lower certainty, since detections can always be filtered out during post-processing.

umt -video data/videos/highway_01.mp4 -display -nframes 100 -threshold 0.4
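
Since low-certainty detections are kept, a stricter cutoff can also be applied after the fact. A minimal post-processing sketch, assuming a hypothetical list of detection records with a 'score' field:

# Hypothetical saved detections (label + confidence score).
detections = [
    {'label': 'pedestrian', 'score': 0.91},
    {'label': 'vehicle', 'score': 0.42},
]

# Keep only detections at or above a stricter threshold.
kept = [d for d in detections if d['score'] >= 0.5]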

To get the highest frame rate possible, append the -tpu flag to use the Coral USB Accelerator for inference.

umt -imageseq data/images/PETS09-S2L1/ -tpu

References

@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
  booktitle={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017},
  pages={3645--3649},
  organization={IEEE},
  doi={10.1109/ICIP.2017.8296962}
}

@inproceedings{Wojke2018deep,
  title={Deep Cosine Metric Learning for Person Re-identification},
  author={Wojke, Nicolai and Bewley, Alex},
  booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2018},
  pages={748--756},
  organization={IEEE},
  doi={10.1109/WACV.2018.00087}
}

rpi-urban-mobility-tracker's People

Contributors

kfculligan, luiscosio, nathanrooy


rpi-urban-mobility-tracker's Issues

E: Package 'libhdf5-100' has no installation candidate

The instructions for "First install the required dependencies for cv2" guide the end user to install libhdf5-100 via sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100

Output of the above command/instruction on Raspberry Pi OS:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'libhdf5-dev' instead of 'libhdf5-serial-dev'
Package libhdf5-100 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
libhdf5-103

E: Package 'libhdf5-100' has no installation candidate
Before installation, the latest Raspberry Pi OS was installed, updated, and upgraded. The package libhdf5-103 was installed in lieu of libhdf5-100, but subsequent use of umt with -camera produces errors.

Any guidance, or perhaps an updated instruction set?

line crossing?

The demo images at the top of the readme show counting objects that cross a line. I don't see any options for how to define this behavior. What am I missing?
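
For what it's worth, crossing counts like these are usually computed in post-processing on the saved tracks; a minimal sketch (an illustration, not a built-in umt feature) that flags when consecutive track centroids land on opposite sides of a counting line from a to b:

# Signed-area test: which side of the line a->b does point p fall on?
def side_of_line(a, b, p):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

# A crossing occurs when consecutive centroids switch sides.
def crossed(a, b, prev_pt, curr_pt):
    return side_of_line(a, b, prev_pt) * side_of_line(a, b, curr_pt) < 0

print(crossed((0, 0), (10, 0), (5, -1), (5, 2)))  # True: moved across y=0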

class_name = labels[track.get_class()] KeyError: 3

After running -

umt -camera -display -modelpath /home/pi/ped-net/model/pednet_20200517_edgetpu.tflite -labelmap /home/pi/ped-net/model/labels.txt -tpu

This is the output. Note: I added a print statement to show the label-map enumeration, i.e.
{0: 'bike'}
{0: 'bike', 1: 'pedestrian'}
{0: 'bike', 1: 'pedestrian', 2: 'vehicle'}

Output:

INITIALIZING UMT...
THRESHOLD: 0.5
CUSTOM LABEL MAP = TRUE (/home/pi/ped-net/model/labels.txt)
{0: 'bike'}
{0: 'bike', 1: 'pedestrian'}
{0: 'bike', 1: 'pedestrian', 2: 'vehicle'}
TPU = TRUE
CUSTOM DETECTOR = TRUE
> DETECTOR PATH = /home/pi/ped-net/model/pednet_20200517_edgetpu.tflite

TRACKING...
starting video stream...
FRAME: 0
no detections...
FRAME: 1
no detections...
FRAME: 2
Traceback (most recent call last):
  File "/home/pi/venv_umt/bin/umt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/venv_umt/lib/python3.7/site-packages/umt/umt_main.py", line 107, in main
    class_name = labels[track.get_class()]
KeyError: 3

This is intermittent; I can run this multiple times and sometimes get >100 frames before this error occurs. Could you please help out with any insight?
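
One hedged workaround (an assumption, not the maintainer's fix): the KeyError indicates the detector emitted a class id (3) that is absent from the three-entry PedNet label map, so a defensive lookup avoids the crash:

# Sketch: tolerate class ids missing from the user-supplied label map.
labels = {0: 'bike', 1: 'pedestrian', 2: 'vehicle'}
detected_class_id = 3  # id returned by the detector in this report
class_name = labels.get(detected_class_id, 'unknown')  # no KeyError
print(class_name)  # 'unknown'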

Error when using camera input

I'm new to using the rpi-umt, and have just installed the new umt version. When I run it on the video or stills, it classifies happily, but when I run using the camera input I'm getting the following error. The camera seems to work fine if I capture images using raspistill. Any ideas?

david@r2:~ $ source venv/bin/activate
(venv) david@r2:~ $ umt -camera

INITIALIZING UMT...
THRESHOLD: 0.5
TPU = FALSE
CUSTOM DETECTOR = FALSE
CUSTOM LABEL MAP = FALSE

TRACKING...
starting video stream...
FRAME: 0
Traceback (most recent call last):
  File "/home/david/venv/bin/umt", line 8, in <module>
    sys.exit(main())
  File "/home/david/venv/lib/python3.7/site-packages/umt/umt_main.py", line 77, in main
    new_dets, classes, scores = generate_detections(pil_img, interpreter, args.threshold)
ValueError: not enough values to unpack (expected 3, got 0)

Viewing output in real time

Hi,

Is there a way to view the tracking in real time instead of just seeing the saved images in the output folder when using the -display option?

please consider citing SORT algorithm

Thanks for sharing your hard work on this project!

Since you are using the SORT algorithm in this project, please consider citing the creator:

https://github.com/abewley/sort

Citing SORT
If you find this repo useful in your research, please consider citing:

@inproceedings{Bewley2016_sort,
  author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben},
  booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
  title={Simple online and realtime tracking},
  year={2016},
  pages={3464-3468},
  keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking},
  doi={10.1109/ICIP.2016.7533003}
}

Speed monitoring?

Hello - is it possible to also capture the speed of counted vehicles/cyclists?
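
Speed doesn't appear to be a built-in output, but a rough estimate can be derived in post-processing from track centroid displacement, assuming a known ground-plane scale and frame rate. A hypothetical sketch:

# Back-of-the-envelope speed estimate from two consecutive centroids.
def estimate_speed_mps(p0, p1, meters_per_pixel, fps):
    pixels_moved = ((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2) ** 0.5
    return pixels_moved * meters_per_pixel * fps

# e.g. 12 px of motion at 0.05 m/px and 25 fps -> 15 m/s (54 km/h)
print(estimate_speed_mps((100, 40), (112, 40), 0.05, 25))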

RTSP support

Hi, adding RTSP support shouldn't be too hard and would allow for more flexible physical installations, e.g. zoom cameras placed farther away, or smaller packages transmitting over WiFi to the processor.
In my case, I'm repurposing a Wyze v2 camera, which has excellent performance and optional RTSP firmware.
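
For context, OpenCV can read RTSP URLs directly, so a frame source along these lines could in principle feed the tracker; a sketch, with a hypothetical stream URL:

import cv2

# Sketch: pull frames from an RTSP stream instead of the Pi camera.
cap = cv2.VideoCapture('rtsp://<camera-ip>/live')  # hypothetical URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ...hand `frame` to the detector/tracker here...
cap.release()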

Error when -labelmap labels.txt is added

(venv) root@raspberrypi:/home/pi/ped-net/model# umt -video highway_01.mp4 -modelpath pednet_20200517.tflite -labelmap labels.txt

INITIALIZING UMT...
THRESHOLD: 0.5
CUSTOM LABEL MAP = TRUE (labels.txt)
TPU = FALSE
CUSTOM DETECTOR = TRUE
> DETECTOR PATH = pednet_20200517.tflite

TRACKING...
FRAME: 0
Traceback (most recent call last):
  File "/home/pi/venv/bin/umt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/venv/lib/python3.7/site-packages/umt/umt_main.py", line 106, in main
    class_name = labels[track.class_name]
KeyError: 3

without it:

(venv) root@raspberrypi:/home/pi/ped-net/model# umt -video highway_01.mp4 -modelpath pednet_20200517.tflite
INITIALIZING UMT...
THRESHOLD: 0.5
CUSTOM LABEL MAP = FALSE
TPU = FALSE
CUSTOM DETECTOR = TRUE
> DETECTOR PATH = pednet_20200517.tflite

TRACKING...
FRAME: 0
FRAME: 1
FRAME: 2
FRAME: 3
FRAME: 4
FRAME: 5
FRAME: 6
FRAME: 7
FRAME: 8
FRAME: 9
(venv) root@raspberrypi:/home/pi/ped-net/model#

umt -camera returns error for 'images:0'

UMT installations, using both the verbatim instructions and a "modified" instruction set with updated TensorFlow, result in an inability to process the camera feed.

The results of the command umt -camera are below; there is no difference whether root or the user pi is used, nor between the two installations.

Any guidance on what to do? Is this related to libhdf5-100 being unavailable and libhdf5-103 having to be used instead?

Output from umt -camera:

Traceback (most recent call last):
  File "/home/pi/venv_umt/bin/umt", line 5, in <module>
    from umt.umt_main import main
  File "/home/pi/venv_umt/lib/python3.7/site-packages/umt/umt_main.py", line 16, in <module>
    from umt.umt_utils import parse_label_map
  File "/home/pi/venv_umt/lib/python3.7/site-packages/umt/umt_utils.py", line 25, in <module>
    encoder = gd.create_box_encoder(w_path, batch_size=1)
  File "/home/pi/venv_umt/lib/python3.7/site-packages/umt/deep_sort/generate_detections.py", line 100, in create_box_encoder
    image_encoder = ImageEncoder(model_filename, input_name, output_name)
  File "/home/pi/venv_umt/lib/python3.7/site-packages/umt/deep_sort/generate_detections.py", line 81, in __init__
    "%s:0" % input_name)
  File "/home/pi/venv_umt/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3786, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/home/pi/venv_umt/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3610, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/home/pi/venv_umt/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3652, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'images:0' refers to a Tensor which does not exist. The operation, 'images', does not exist in the graph."

frame rate while using TPU?

When using the edgeTPU (-tpu option) and -display option, I am getting frame rates of only 1 FPS or so. Is this expected? I would have expected better performance.

Thanks!

umt won't work

I followed the instructions at https://github.com/nathanrooy/rpi-urban-mobility-tracker

I'm getting this error:
(The first few lines are apparently just noise.)

root@674b2ef1ec82:~# umt -video highway_01.mp4
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.summary API due to missing TensorBoard installation.
Traceback (most recent call last):
  File "/usr/local/bin/umt", line 5, in <module>
    from umt.umt_main import main
  File "/usr/local/lib/python3.7/dist-packages/umt/umt_main.py", line 15, in <module>
    from umt.umt_utils import parse_label_map
  File "/usr/local/lib/python3.7/dist-packages/umt/umt_utils.py", line 26, in <module>
    encoder = gd.create_box_encoder(w_path, batch_size=1)
  File "/usr/local/lib/python3.7/dist-packages/deep_sort_tools/generate_detections.py", line 123, in create_box_encoder
    image_encoder = ImageEncoder(model_filename, input_name, output_name)
  File "/usr/local/lib/python3.7/dist-packages/deep_sort_tools/generate_detections.py", line 97, in __init__
    f"net/{input_name}:0")
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3902, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3726, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3768, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'net/images:0' refers to a Tensor which does not exist. The operation, 'net/images', does not exist in the graph."

The Coral USB Accelerator is working in the host OS:

[40332.332886] usb 2-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0

pi@pifem:~/coral/tflite/python/examples/classification $ python3 classify_image.py --model
models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
17.4ms
4.4ms
4.4ms
4.4ms
4.4ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.77734

I Can't display camera

Whenever I run the command umt -camera -display it tells me that it cannot open the display.

What could it be?

[Attached screenshot: WhatsApp Image 2021-04-10 at 20 22 47]

Main tracking loop failing with pednet_20200517.tflite

(GitHub newbie here, so please pardon me if I'm using this incorrectly.)

Great project here, thank you. I am running the example on an RPi 3B and can get the code to work with the default model, but when using the latest PedNet (pednet_20200517.tflite), the main tracking loop fails at the try/except section ("TRACKER FAILED"). I removed the try/except to see the actual error, and this is the result:

Traceback (most recent call last):
  File "/home/pi/venv/bin/umt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/venv/lib/python3.7/site-packages/umt/umt_main.py", line 89, in main
    tracker_labels, tracker_scores = match_detections_to_labels_and_scores(new_dets, trackers, scores, classes, labels)
  File "/home/pi/venv/lib/python3.7/site-packages/umt/umt_utils.py", line 162, in match_detections_to_labels_and_scores
    matched_labels = [labels[item] for item in matched_classes]
  File "/home/pi/venv/lib/python3.7/site-packages/umt/umt_utils.py", line 162, in <listcomp>
    matched_labels = [labels[item] for item in matched_classes]
KeyError: 3

Using TPU: AttributeError: 'Delegate' object has no attribute '_library'

Running the Docker container as per the instructions - RPi 4 with a TPU.

Trying to run the command:
umt -video highway_01.mp4 -tpu

I get the following:

> INITIALIZING UMT...
   > THRESHOLD: 0.5
   > CUSTOM LABEL MAP = FALSE
   > TPU = TRUE
   > CUSTOM DETECTOR = FALSE
Traceback (most recent call last):
  File "/usr/local/bin/umt", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/umt/umt_main.py", line 62, in main
    interpreter = initialize_detector(args)
  File "/usr/local/lib/python3.7/dist-packages/umt/umt_utils.py", line 110, in initialize_detector
    {'device': device[0]} if device else {})
  File "/usr/local/lib/python3.7/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.7/dist-packages/tflite_runtime/interpreter.py", line 81, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libedgetpu.so.1: cannot open shared object file: No such file or directory
Exception ignored in: <function Delegate.__del__ at 0xa819b198>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tflite_runtime/interpreter.py", line 116, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'

It works fine when I run without the -tpu argument. I have the TPU installed correctly, and it works for other code outside of Docker.

Any help?

Thanks,

Anthony

No output from UMT

UMT appears to work fine, but no images are generated.
Ubuntu 20.04, x86_64

INITIALIZING UMT...
THRESHOLD: 0.5
CUSTOM LABEL MAP = FALSE
TPU = FALSE
CUSTOM DETECTOR = FALSE
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.

TRACKING...
FRAME: 0
FRAME: 1
FRAME: 2
FRAME: 3
FRAME: 4
FRAME: 5
FRAME: 6
FRAME: 7
FRAME: 8
FRAME: 9

Feature Request: Output Annotated and original frames as video

Feature Request:
Instead of (or in addition to) outputting frames as JPG images, it would be great to have the option to output video, both the annotated and the original frames, with a unique name based on the time it was created.

This would:

  1. simplify the management of the data.
  2. allow you to understand the context a bit better
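
For reference, the frames umt already saves could be stitched into a timestamped video with OpenCV as a stopgap; a minimal sketch, assuming annotated JPG frames in output/:

import cv2, glob, time

# Sketch: stitch saved frames into a uniquely named video file.
frames = sorted(glob.glob('output/*.jpg'))  # assumes JPG frame output
h, w = cv2.imread(frames[0]).shape[:2]
name = time.strftime('umt_%Y%m%d_%H%M%S.mp4')
out = cv2.VideoWriter(name, cv2.VideoWriter_fourcc(*'mp4v'), 10, (w, h))  # 10 fps, arbitrary
for f in frames:
    out.write(cv2.imread(f))
out.release()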

No license

There is no license for this repo. In your blog post you say you'd like people to share training data with you, but I have no idea what the license for your project is, so I can't try to implement something similar.
