
yolov4-deepsort's Introduction

yolov4-deepsort


Object tracking implemented with YOLOv4, DeepSORT, and TensorFlow. YOLOv4 is a state-of-the-art algorithm that uses deep convolutional neural networks to perform object detection. We can feed these YOLOv4 detections into Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) to create a highly accurate object tracker.

Demo of Object Tracker on Persons

Demo of Object Tracker on Cars

Getting Started

To get started, install the proper dependencies via either Anaconda or Pip. I recommend the Anaconda route for people using a GPU, as it configures the CUDA toolkit version for you.

Conda (Recommended)

# TensorFlow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# TensorFlow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu

Pip

(TensorFlow 2 packages require a pip version >19.0.)
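If your pip is older than that, upgrade it first:

# upgrade pip before installing the requirements
pip install --upgrade pip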

# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt

Nvidia Driver (For GPU, if you are not using Conda Environment and haven't set up CUDA yet)

Make sure to use CUDA Toolkit version 10.1 as it is the proper version for the TensorFlow version used in this repository. https://developer.nvidia.com/cuda-10.1-download-archive-update2

Downloading Official YOLOv4 Pre-trained Weights

Our object tracker uses YOLOv4 to make the object detections, which Deep SORT then uses to track. There exists an official pre-trained YOLOv4 object detector model that is able to detect 80 classes. For demo purposes we will use the pre-trained weights for our tracker. Download the pre-trained yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT

Copy and paste yolov4.weights from your downloads folder into the 'data' folder of this repository.

If you want to use yolov4-tiny.weights, a smaller model that is faster at running detections but less accurate, download the file here: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights

Running the Tracker with YOLOv4

To implement object tracking with YOLOv4, we first convert the .weights into the corresponding TensorFlow model, which will be saved to a checkpoints folder. Then all we need to do is run the object_tracker.py script to run our object tracker with YOLOv4, Deep SORT, and TensorFlow.

# Convert darknet weights to tensorflow model
python save_model.py --model yolov4 

# Run yolov4 deep sort object tracker on video
python object_tracker.py --video ./data/video/test.mp4 --output ./outputs/demo.avi --model yolov4

# Run yolov4 deep sort object tracker on webcam (set video flag to 0)
python object_tracker.py --video 0 --output ./outputs/webcam.avi --model yolov4

The output flag allows you to save the resulting video of the object tracker so that you can view it again later. The video will be saved to the path that you set (the outputs folder, if you run the above commands!).

If you want to run yolov3, set the model flag to --model yolov3, upload yolov3.weights to the 'data' folder, and adjust the weights flag in the above commands (see all the available command line flags and their descriptions in a section below).
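For example, a yolov3 run could look like this (a sketch; adjust the paths to wherever you saved the weights):

# convert darknet yolov3 weights to a tensorflow model
python save_model.py --weights ./data/yolov3.weights --output ./checkpoints/yolov3-416 --model yolov3

# run the tracker with the converted yolov3 model
python object_tracker.py --weights ./checkpoints/yolov3-416 --model yolov3 --video ./data/video/test.mp4 --output ./outputs/demo.avi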

Running the Tracker with YOLOv4-Tiny

The following commands will allow you to run the yolov4-tiny model. YOLOv4-tiny gives the tracker a higher speed (FPS) at a slight cost in accuracy. Make sure that you have downloaded the tiny weights file and added it to the 'data' folder for these commands to work!

# save yolov4-tiny model
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --model yolov4 --tiny

# Run yolov4-tiny object tracker
python object_tracker.py --weights ./checkpoints/yolov4-tiny-416 --model yolov4 --video ./data/video/test.mp4 --output ./outputs/tiny.avi --tiny

Resulting Video

As mentioned above, the resulting video will save to wherever you set the --output command line flag path. I always set it to save to the 'outputs' folder. You can also change the type of video saved by adjusting the --output_format flag; by default it is set to XVID, an AVI codec.

Example video showing tracking of all coco dataset classes:

Filter Classes that are Tracked by Object Tracker

By default the code is set up to track all 80 classes from the COCO dataset, which is what the pre-trained YOLOv4 model is trained on. However, you can easily adjust a few lines of code in order to track any one of, or any combination of, the 80 classes. It is super easy to filter only the person class or only the car class, which are the most common.

To filter a custom selection of classes, all you need to do is comment out line 159 and uncomment line 162 of object_tracker.py. Within the list allowed_classes, just add whichever classes you want the tracker to track. The classes can be any of the 80 that the model is trained on; see which classes you can track in the file data/classes/coco.names.
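A sketch of the edit (the exact line numbers may drift between versions):

# line 159 of object_tracker.py: track all classes (comment this out)
# allowed_classes = list(class_names.values())

# line 162: uncomment and edit this list to track a custom selection
allowed_classes = ['person', 'car']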

This example would allow the classes for person and car to be tracked.

Demo of Object Tracker set to only track the class 'person'

Demo of Object Tracker set to only track the class 'car'

Command Line Args Reference

save_model.py:
  --weights: path to weights file
    (default: './data/yolov4.weights')
  --output: path to output
    (default: './checkpoints/yolov4-416')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --input_size: define input size of export model
    (default: 416)
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)
    
object_tracker.py:
  --video: path to input video (use 0 for webcam)
    (default: './data/video/test.mp4')
  --output: path to output video (remember to set the right codec for the given format, e.g. XVID for .avi)
    (default: None)
  --output_format: codec used in VideoWriter when saving video to file
    (default: 'XVID')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --weights: path to weights file
    (default: './checkpoints/yolov4-416')
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)
  --size: resize images to
    (default: 416)
  --iou: iou threshold
    (default: 0.45)
  --score: confidence threshold
    (default: 0.50)
  --dont_show: dont show video output
    (default: False)
  --info: print detailed info about tracked objects
    (default: False)

References

Huge shoutout goes to hunglc007 and nwojke for creating the backbones of this repository: hunglc007's tensorflow-yolov4-tflite and nwojke's deep_sort.

yolov4-deepsort's People

Contributors: nickvaras, theaiguyscode


yolov4-deepsort's Issues

FPS calculation suggested bugfix

The FPS calculation routine in object_tracker.py can run into division by zero if the FPS count exceeds 100 on Windows. I encountered this when running the YOLO tiny model. It happens because the time() function's resolution on Windows is ~16 ms.

I fixed it by using the time.time_ns() function available in Python >= 3.7 (https://docs.python.org/3/library/time.html#time.time_ns) and refactoring the calculation out of main() to reduce clutter:

import time

def print_fps(start_time_ns, end_time_ns):
    elapsed_ns = end_time_ns - start_time_ns
    if elapsed_ns == 0:
        # prevent division by zero
        elapsed_ns = 1
    fps = 1.0 * (10**9) / elapsed_ns
    print("FPS: %.2f" % fps)

def main(_argv):
    ...
    start_time_ns = time.time_ns()
    ...
    print_fps(start_time_ns, time.time_ns())
    ...

Tracking Object with Grayscale Frame

I have a YOLOv4-Tiny model that was trained on grayscale images. I changed the save_model.py input layer to:

if FLAGS.grayscale:
    input_layer = tf.keras.layers.Input([FLAGS.input_size, FLAGS.input_size, 1])
else:
    input_layer = tf.keras.layers.Input([FLAGS.input_size, FLAGS.input_size, 3])

In object_tracker.py I already tried adding some code to make grayscale work:

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = frame[:, :, np.newaxis]

but I got an error like this:

Traceback (most recent call last):
  File "object_tracker.py", line 304, in <module>
    app.run(main)
  File "/home/se790/anaconda3/envs/yolov4-gpu/lib/python3.7/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/se790/anaconda3/envs/yolov4-gpu/lib/python3.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "object_tracker.py", line 218, in main
    features = encoder(frame, bboxes)
  File "/home/se790/Stickearn/yolov4-deepsort/tools/generate_detections.py", line 118, in encoder
    return image_encoder(image_patches, batch_size)
  File "/home/se790/Stickearn/yolov4-deepsort/tools/generate_detections.py", line 99, in __call__
    {self.input_var: data_x}, out, batch_size)
  File "/home/se790/Stickearn/yolov4-deepsort/tools/generate_detections.py", line 23, in _run_in_batches
    out[s:e] = f(batch_data_dict)
  File "/home/se790/Stickearn/yolov4-deepsort/tools/generate_detections.py", line 98, in <lambda>
    lambda x: self.session.run(self.output_var, feed_dict=x),
  File "/home/se790/anaconda3/envs/yolov4-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 958, in run
    run_metadata_ptr)
  File "/home/se790/anaconda3/envs/yolov4-gpu/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1157, in _run
    (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 128, 64) for Tensor 'images:0', which has shape '(None, 128, 64, 3)'

Any ideas on how to do grayscale inference? Thanks.
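One workaround I'm considering, since the traceback shows the encoder's 'images:0' tensor expects 3 channels: feed YOLO the single-channel frame, but hand the mars-small128 encoder a 3-channel copy. Would something like this sketch be on the right track?

# keep 1 channel for the grayscale YOLO model,
# but give the appearance encoder the 3-channel input it expects
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
yolo_frame = gray[:, :, np.newaxis]                     # (H, W, 1) for detection
encoder_frame = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # (H, W, 3) for DeepSORT
features = encoder(encoder_frame, bboxes)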

Run on the Jetson Nano

Hello,
First of all, This is so nice code for detection, tracking. I can run this code on my PC.
I have a question. I want to run this code on the Jetson Nano.
I could run save_model.py, But object_tracker.py showed Aborted (core dumped).
Is this not for arm architecture? Give me some advice.
Thank you

How do I update a tracker when the detection class has changed?

Hello, first thanks for the great job!

I'm currently working on a project tracking people wearing different colors of helmets. Sometimes during the scene a person changes the color of their helmet; say they enter the scene with a white helmet and, during a walk, change it for a blue one. YOLO detects this change, but in the experiments I have done, the tracker continues with the initial class, and that is a problem.

How can I change the algorithm to delete the old tracker and create a new one, or simply update it?

Tracker not working

The tracker isn't working. I used the same tracker a week ago and it was working perfectly. I wanted to know if the issue is on my side.

DeepSORT is consuming a huge amount of CPU

While running detection over a video, I see that my entire CPU memory is being used. I'm not able to run it on multiple threads as it leads to slowness.
Did anyone else face this issue?
Any help would be appreciated.

Resize image

Hello, I have a problem with resizing images. The default size is 416, but when I try to set a different size it gives me an error:
[screenshot of the error]

YOLO accepts sizes that are N * 32, am I correct? So, for example, size 608 should work.

Replacing the Feature Extractor

Hi,
Is it possible to replace the feature extractor? Better person re-identification models could improve the accuracy (in a person-reID-specific scenario), for example.

Scaled YoloV4

Any way to use this with Scaled Yolov4?

I tried converting YOLOv4-CSP to TensorFlow weights and got an error.

Where do you store the data?

I want to ask: at which line of code do you store the previous detection coordinates before they are updated with the new detection?

Calculate the objects' movement speed

Hi!
I want to calculate the speed of the detected objects, but I don't know how to get the past frames of the objects with their IDs. How can I get this?

Thank you! This repo is amazing!
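What I have so far is a sketch that keeps a short history of box centers per track ID and estimates pixel speed from the displacement (the helper names are my own):

# keep the last ~30 box centers per track id and estimate pixels/second
from collections import defaultdict, deque

history = defaultdict(lambda: deque(maxlen=30))

def pixel_speed(track_id, bbox, fps):
    x1, y1, x2, y2 = bbox
    history[track_id].append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    if len(history[track_id]) < 2:
        return 0.0
    (px, py), (cx, cy) = history[track_id][0], history[track_id][-1]
    frames = len(history[track_id]) - 1
    dist = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    return dist * fps / frames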

Run DeepSORT over tensorflow-serving

I'm trying to run DeepSORT tracking over TensorFlow Serving. Can you let me know how I can deploy the mars-small128 model over TensorFlow Serving?

Why do we need to draw 2 rectangles?

I don't understand the code under "draw bbox on screen". Why do we need to draw two rectangles? What is the function of the second draw?
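For context, the usual pattern here (a sketch; coordinates and names are illustrative) draws the box outline first, then a second filled rectangle (thickness -1) as a background strip so the class/ID label stays readable:

cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)                  # object outline
cv2.rectangle(frame, (x1, y1 - 20), (x1 + label_w, y1), color, -1)  # filled label background
cv2.putText(frame, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)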

List of detected objects

I would like to know if there is a way to work with a list of the different elements that are being detected.

Only 1 channel

When my tiny YOLOv4 has been configured for only one input channel, the resulting weights cannot be converted, failing with an error like:

ValueError: cannot reshape array of size 4032 into shape (18,256,1,1)

What needs to be changed in order to process a gray-level video?

Quitting video by hitting a specific key

Hi! So far I have tried pressing some common keys on the keyboard to stop the webcam video, but none of them seem to work. As a result I had to shut down the whole process from the launching terminal with Ctrl+C.

Isn't there a way to implement shutting the video down with the Escape key?
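For reference, a minimal sketch of the usual OpenCV pattern (assuming the display loop already calls cv2.waitKey):

# inside the frame loop: press 'q' (or Esc, keycode 27) to quit
key = cv2.waitKey(1) & 0xFF
if key == ord('q') or key == 27:
    break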

How to get an output record as a txt file?

Thank you very much for your skilled work!
I have two issues that need your suggestions. The first is how to split the types of objects in the tracking process (e.g. counting cars and trucks separately). The second is how to create a record of the counting results with a timeline (see the sketch after the screenshots below).

Besides, I have run your code and found some problems with tracking.
The program changes the class of an object and its tracking number (the car-19 in the first frame was changed to car-20 in the second frame). Also, the program skipped the numbers from 10 to 13 in counting (I made a test with other videos and this issue still appeared).
[Screenshots: Frame 1 and Frame 2]
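Regarding the record: what I'm after is something like the sketch below (names such as get_class() follow this repo's tracker; the file format is illustrative):

# append one line per confirmed track per frame: frame, id, class, box
with open('./outputs/tracks.txt', 'a') as f:
    for track in tracker.tracks:
        if not track.is_confirmed() or track.time_since_update > 1:
            continue
        x, y, w, h = track.to_tlwh()
        f.write("%d,%d,%s,%.1f,%.1f,%.1f,%.1f\n" % (frame_num, track.track_id, track.get_class(), x, y, w, h))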

ID Switching issue

Hi, in the Deep SORT paper it is mentioned that to compute the cost metric C, both metrics are combined using a weighted sum with lambda. In this implementation, how can we increase the weight of the appearance metric to reduce the ID-switching issue?
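For reference, the combined association cost from the paper is

c(i, j) = lambda * d1(i, j) + (1 - lambda) * d2(i, j)

where d1 is the Mahalanobis (motion) distance and d2 is the appearance (cosine) distance, so lowering lambda gives the appearance metric more weight.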

Can this code run on a Jetson Nano 2GB?

Well, it's a straightforward question. I've spent days trying to make this code work on an Nvidia Jetson Nano 2GB and, due to some TensorFlow issue, it's not working.

So it would be helpful if anyone can tell me straight up whether that's the case.

Yolov4 - Deep Sort Project: Loop Fails after a while

Hello, I'm new to ML and I really feel the need to thank you for your tutorial. I tried a really interesting setup: I downloaded Larix Broadcaster on my Android phone and set up a server on PrimCast (free version) to upload my video. With everything set, I ran the following command: python object_tracker.py --weights ./checkpoints/yolov4-tiny-416 --model yolov4 --video rtmp://162.244.80.42:1935/arapellisodisseas/livestream --tiny. After a while, the code fails as shown in the terminal screenshot below. It's not a real issue; your code is perfect for me. Any insights?

Thank you.
[Screenshots: Larix parameters and the terminal output at the point of failure]


Get the Bbox and ID dictionary lists

I am working with YOLO for the first time, and I would like to know if there is a way to get the coordinates of the identified objects with their class and ID as a list or a dictionary, rather than a continuous stream of text.
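A minimal sketch, assuming the deep_sort Track API used in this repo (to_tlbr(), track_id, and the get_class() helper this fork adds):

# build {track_id: (class_name, [x1, y1, x2, y2])} for the current frame
objects = {}
for track in tracker.tracks:
    if not track.is_confirmed() or track.time_since_update > 1:
        continue
    objects[track.track_id] = (track.get_class(), track.to_tlbr().tolist())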

Different Yolov4 .cfg file

When I use different .weights trained from another YOLOv4 .cfg configuration file, it gives me the following error:

File "save_model.py", line 50, in save_tf
utils.load_weights(model, FLAGS.weights, FLAGS.model, FLAGS.tiny)
File "/home/redbird/Escritorio/2020/3. CAR-DETECTION/yolov4-custom-functions-master/core/utils.py", line 143, in load_weights
conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 4604005 into shape (1024,512,3,3)

How can I set a different .cfg via the FLAGS, or something like that?
Your work is amazing, and I need to use another configuration for my trained YOLOv4.
Thank you!

How to output accuracy?

I want to print the confidence score of each recognized object on the screen.
Is there a way to print out the scores of objects recognized by the implemented code?

ID Switch issue

The demo video works fine, but when using a custom video the IDs get switched extensively. Any tips for handling this issue?

ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block

Running the save_model.py code on an Nvidia Jetson TX2 with JetPack 4.4.1 installed, Python 3.6, CUDA 10.2.

Stack trace:

~/yolov4-deepsort$ python3 save_model.py --model yolov4-tiny
2021-01-07 15:46:30.404746: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
OpenCV loader: os.name="posix"  platform.system()="Linux"
OpenCV loader: loading config: /usr/lib/python3.6/dist-packages/cv2/config.py
OpenCV loader: loading config: /usr/lib/python3.6/dist-packages/cv2/config-3.6.py
OpenCV loader: PYTHON_EXTENSIONS_PATHS=['/usr/lib/python3.6/dist-packages/cv2/python-3.6']
OpenCV loader: BINARIES_PATHS=['/usr/lib/python3.6/dist-packages/cv2/../../../../lib/aarch64-linux-gnu']
OpenCV loader: replacing cv2 module
Traceback (most recent call last):
  File "save_model.py", line 4, in <module>
    from core.yolov4 import YOLO, decode, filter_boxes
  File "/home/cafepop/yolov4-deepsort/core/yolov4.py", line 6, in <module>
    import core.utils as utils
  File "/home/cafepop/yolov4-deepsort/core/utils.py", line 1, in <module>
    import cv2
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 89, in <module>
    bootstrap()
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 79, in bootstrap
    import cv2
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block

This seems to be an issue specific to Jetson boards that occurs when TensorFlow is imported before OpenCV. Making sure cv2 is imported first:

import cv2
import tensorflow as tf

at the beginning of the file resolved this issue for me. Just documenting it here in case anyone else hits it. It seems to be related to opencv/opencv#14884.

How to get bounding box coordinates and person's ID?

Hi,
I'm new to Deep SORT. It's interesting. How can I get the bounding box coordinates of all objects and each person's ID? I need to crop the photo to each object. So, for instance, for each person there would be four values for the rectangle, (top-left x, top-left y, width, height), plus the person ID.
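A hedged sketch of cropping per person, using to_tlwh() for the (top-left x, top-left y, width, height) format (assuming the deep_sort Track API used in this repo):

for track in tracker.tracks:
    if not track.is_confirmed() or track.time_since_update > 1:
        continue
    x, y, w, h = track.to_tlwh()
    crop = frame[int(y):int(y + h), int(x):int(x + w)]  # cut the photo to this person
    print(track.track_id, (x, y, w, h))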

Track by detection

Hi,
Thanks for sharing this code.
I want to speed up my slow detector!
I get detection results every 150 ms, so I want Deep SORT to carry the tracks through the 150 ms gaps.
But when I change your code to do this, it can't keep track of the objects in the intervals by tracking alone.
Maybe the Kalman filter parameters are not suitable, or something else.

Could you please suggest a way to do this?

Same feature extractor for all classes?

Hi, first of all, very nice implementation. In the Deep SORT paper they mention that they trained the encoder on the MARS dataset, which is essentially a person discriminator. Are you using the same feature extractor for all class types? If not, did you have to train the encoder network for each class?

Training on a custom dataset

Hello, I'd like to ask how to use your project to train a model to detect and track my own dataset, since I don't see a train.py file in the code. Looking forward to your reply, thank you very much.

Tracked object gets lost after a few seconds

I am using this model on some custom videos, and even though the detection is correct at the start, after some point (20 s) the model is unable to detect a simple object like the dog in the video. I think maybe the intermittent YOLO object detections and DeepSORT lose their connection at that point, and DeepSORT does not get the correct object label from then on. I have attached the video; you can see the issue from around second 20 until the end: https://drive.google.com/file/d/1WTijcKVBoEmFT82Hb0jK-ljMigPEaWAX/view?usp=sharing
This is YOLOv4, not the tiny version.
I have tracked the video with YOLOv5 only (no DeepSORT) and, even though it is slower than the model paired with DeepSORT, it does not get confused.
Link to the YOLO-only tracking result:
https://drive.google.com/file/d/1KBQQyMkiZiOBdl-7JN9r3aJmAE8kG9OM/view?usp=sharing

DLL load failed when converting darknet weights to the TensorFlow model on Windows 10

Can you tell me how to fix the error "DLL load failed"?
Traceback (most recent call last):
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "save_model.py", line 1, in <module>
    import tensorflow as tf
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\__init__.py", line 40, in <module>
    from tensorflow.python.eager import context
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\eager\context.py", line 35, in <module>
    from tensorflow.python import pywrap_tfe
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\pywrap_tfe.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Users\Asus\anaconda3\envs\yolov4-cpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.

Long launch time

Thanks for this tutorial!

I am running on a Jetson Xavier NX.

For some reason, launching the script takes a very long time until the image is visible (~3 minutes from issuing python3 save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --model yolov4 --tiny).

Any thoughts for the cause of this?

Edit to add: tensorflow version 2.2.0
python 3.6
openCV 4.4.0

Convert_trt

Hi, how can I convert the TF model to TensorRT?
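The flag reference above lists trt as a framework option, so (as an untested sketch) the conversion might look like:

# sketch: export using save_model.py's --framework trt option
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-trt-416 --model yolov4 --framework trt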

Why does my model only use the CPU?

Thank you for your contribution.
CPU: i5 6500
GPU: GTX 1080
When I run this model, I only get about 2-3 FPS, and when I use the command "nvidia-smi", the GPU memory usage is only about 500 MB / 8116 MB. I installed tf-gpu following your instructions.
When I run "python object_tracker.py --video ./data/video/test.mp4 --output ./outputs/demo.avi --model yolov4", there is always a "Qt: Session management error: None of the authentication protocols" message, and then the FPS is really low. I would really appreciate it if you could offer some help!
my conda list:
_libgcc_mutex 0.1 conda_forge https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
_openmp_mutex 4.5 1_gnu https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
absl-py 0.11.0
astunparse 1.6.3
bzip2 1.0.8 h516909a_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ca-certificates 2020.6.20 hecda079_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
cachetools 4.1.1
cairo 1.16.0 hcf35c78_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
certifi 2020.6.20 py37he5f6b98_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
chardet 3.0.4
cudatoolkit 10.1.243 h6bb024c_0 defaults
cudnn 6.0 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cycler 0.10.0 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
dbus 1.13.6 h7a60e0d_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
easydict 1.9
expat 2.2.9 he1b5a44_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ffmpeg 4.3.1 h3215721_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
fontconfig 2.13.1 h86ecdb6_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
freetype 2.10.4 h7ca028e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
gast 0.3.3
gettext 0.19.8.1 hf34092f_1004 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
glib 2.66.2 h58526e2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
gmp 6.2.0 h58526e2_4 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
gnutls 3.6.13 h79a8f9a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
graphite2 1.3.13 he1b5a44_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
grpcio 1.33.2
gst-plugins-base 1.14.5 h0935bb2_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
gstreamer 1.14.5 h36ae1b5_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
h5py 2.10.0
harfbuzz 2.4.0 h9f30f68_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
hdf5 1.10.6 nompi_h3c11f04_101 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
icu 64.2 he1b5a44_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
idna 2.10
importlib-metadata 2.0.0
jasper 1.900.1 h07fcdf6_1006 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
jpeg 9d h36c2ea0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
Keras-Preprocessing 1.1.2
kiwisolver 1.3.0 py37hc928c03_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
lame 3.100 h14c3975_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
lcms2 2.11 hbd6801e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libblas 3.9.0 2_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libcblas 3.9.0 2_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libclang 9.0.1 default_hde54327_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libedit 3.1.20191231 he28a2e2_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libffi 3.2.1 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libgcc-ng 9.3.0 h5dbcf3e_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgfortran-ng 7.5.0 hae1eefd_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgfortran4 7.5.0 hae1eefd_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgfortran5 9.3.0 he4bcb1c_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libglib 2.66.2 hbe7bbb4_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgomp 9.3.0 h5dbcf3e_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libiconv 1.16 h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
liblapack 3.9.0 2_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
liblapacke 3.9.0 2_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libllvm9 9.0.1 he513fc3_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libopenblas 0.3.12 pthreads_h4812303_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libopencv 4.5.0 py37_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libpng 1.6.37 h21135ba_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libstdcxx-ng 9.3.0 h2ae2ef3_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libtiff 4.1.0 hc7e4089_6 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libuuid 2.32.1 h14c3975_1000 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libwebp-base 1.1.0 h516909a_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libxcb 1.12 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libxkbcommon 0.10.0 he1b5a44_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libxml2 2.9.10 hee79883_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
lxml 4.6.1
lz4-c 1.9.2 he1b5a44_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
Markdown 3.3.3
matplotlib 3.3.2 py37hc8dfbb8_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
matplotlib-base 3.3.2 py37hc9afd2a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ncurses 6.2 he1b5a44_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
nettle 3.4.1 h1bed415_1002 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
nspr 4.29 he1b5a44_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
nss 3.58 h27285de_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
numpy 1.19.2 py37h7008fea_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
numpy 1.18.5
oauthlib 3.1.0
olefile 0.46 pyh9f0ad1d_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
opencv 4.5.0 py37_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
opencv-python 4.1.1.26
openh264 2.1.1 h8b12597_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
openssl 1.0.2u h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
opt-einsum 3.3.0
pcre 8.44 he1b5a44_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pillow 8.0.1 py37h718be6c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pip 20.2.4 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pixman 0.38.0 h516909a_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
protobuf 3.13.0
py-opencv 4.5.0 py37hc6149b9_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 2.4.7 pyh9f0ad1d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
pyqt 5.12.3 py37h8685d9f_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python 3.7.0 hd21baee_1006 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python_abi 3.7 1_cp37m https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
qt 5.12.5 hd8c4c69_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
readline 7.0 hf8c457e_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.6
scipy 1.4.1
setuptools 49.6.0 py37he5f6b98_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
six 1.15.0 pyh9f0ad1d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
sqlite 3.33.0 h62c20be_0 defaults
tensorboard 2.2.2
tensorboard-plugin-wit 1.7.0
tensorflow-gpu 2.3.0rc0
termcolor 1.1.0
tf-estimator-nightly 2.3.0.dev2020062301
tk 8.6.10 hed695b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
tornado 6.1 py37h4abf009_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
tqdm 4.51.0
urllib3 1.25.11
Werkzeug 1.0.1
wheel 0.35.1 pyh9f0ad1d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
wrapt 1.12.1
x264 1!152.20180806 h14c3975_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-kbproto 1.0.7 h14c3975_1002 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libice 1.0.10 h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libsm 1.2.3 h84519dc_1000 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libx11 1.6.12 h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libxext 1.3.4 h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libxrender 0.9.10 h516909a_1002 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-renderproto 0.11.1 h14c3975_1002 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-xextproto 7.3.0 h14c3975_1002 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-xproto 7.0.31 h14c3975_1007 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xz 5.2.5 h516909a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
zipp 3.4.0
zlib 1.2.11 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free

Can't use GPU

GPU 2060
CUDA 10.1
Cudnn 7.65
tensorflow 2.3.0rc0

But the GPU isn't being used at all when running this code, and I only get about 15 FPS.

Is there any way to solve it?
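A quick way to check whether this TensorFlow build can see the GPU at all:

# prints an empty list if TensorFlow cannot see the GPU
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"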

Multiple cameras

This is really nice.
Can you tell me how to use it for multiple-camera, multiple-person tracking?

Trying to run DeepSORT using custom yolov4.weights

I used the tutorial for training YOLOv4 to create my own weights to detect a certain object. I then tried to use those weights in the DeepSORT Colab: instead of downloading yolov4.weights I put in my own pre-trained weights and changed coco.names to match my classes. Everything works fine and I get no errors, but in the output no objects are detected at all.
I would really appreciate it if you could point me to what I am missing.
