
roboflow / zero-shot-object-tracking

This project is forked from theaiguyscode/yolov4-deepsort.


Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.

Home Page: https://blog.roboflow.com/zero-shot-object-tracking/

License: GNU General Public License v3.0

Python 98.27% Dockerfile 0.23% Shell 1.50%
computer-vision openai-clip object-detection object-tracking deep-sort


Roboflow Object Tracking Example

Object tracking using Roboflow Inference API and Zero-Shot (CLIP) Deep SORT. Read more in our Zero-Shot Object Tracking announcement post.

Example fish tracking

Example object tracking courtesy of the public Aquarium model and dataset from Roboflow Universe. You can adapt this to your own dataset hosted on Roboflow, or to any pre-trained model from Roboflow Universe.

Overview

Object tracking involves following individual objects of interest across frames. It combines the output of an object detection model with a secondary algorithm to determine which detections are identifying "the same" object over time.

Previously, this required training a special classification model to differentiate the instances of each class. In this repository, we use OpenAI's CLIP zero-shot image classifier instead, making the tracker universal: all you need is a trained object detection model, and CLIP handles instance identification for the tracking algorithm.
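
To make this concrete, here is a minimal sketch (not this repo's exact code) of how CLIP image embeddings can stand in for the appearance descriptors that Deep SORT would otherwise get from a purpose-trained re-identification model. It assumes the openai/clip package cloned in the setup steps below:

import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the zero-shot CLIP image encoder; no tracking-specific training needed.
model, preprocess = clip.load("ViT-B/32", device=device)

def appearance_features(frame, boxes):
    """Embed each detection crop. Deep SORT associates tracks by cosine
    distance between these vectors across frames. `frame` is a PIL image,
    `boxes` are (x1, y1, x2, y2) tuples from any object detector."""
    crops = [preprocess(frame.crop(box)) for box in boxes]
    batch = torch.stack(crops).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)
    # Normalize so cosine distance reduces to a dot product.
    return (feats / feats.norm(dim=-1, keepdim=True)).cpu().numpy()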

Getting Started

Colab Tutorial Here:

Open In Colab

Training your model

To use the Roboflow Inference API as your detection engine:

Upload, annotate, and train your model on Roboflow with Roboflow Train. Your model will be hosted on an inference URL.
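
If you want to sanity-check the hosted endpoint outside the tracker first, a minimal request looks like the sketch below. It follows Roboflow's documented base64 POST format; the model name and version are placeholders for your own inference URL.

import base64
import requests

with open("frame.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

# Placeholder model/version; substitute the inference URL Roboflow gives you.
resp = requests.post(
    "https://detect.roboflow.com/your-model/1",
    params={"api_key": "ROBOFLOW_API_KEY"},
    data=img_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.json()["predictions"])  # boxes with class and confidence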

To use YOLOv7 as your detection engine:

Follow Roboflow's Train YOLOv7 on Custom Data Tutorial

The YOLOv7 implementation uses this colab notebook

To use YOLOv5 as your detection engine:

Follow Roboflow's Train YOLOv5 on Custom Data Tutorial

The YOLOv5 implementation uses this colab notebook

The YOLOv5 implementation is currently compatible with this commit of YOLOv5: 886f1c03d839575afecb059accf74296fad395b6
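
If you export weights from your own YOLOv5 clone, you can pin the clone to that commit so the checkpoint format matches the code vendored here, e.g.:

git clone https://github.com/ultralytics/yolov5
cd yolov5
git checkout 886f1c03d839575afecb059accf74296fad395b6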

Performing Object Tracking

Clone repositories

git clone https://github.com/roboflow-ai/zero-shot-object-tracking
cd zero-shot-object-tracking
git clone https://github.com/openai/CLIP.git CLIP-repo
cp -r ./CLIP-repo/clip ./clip             # Unix-based
robocopy CLIP-repo/clip clip\             # Windows

Install requirements (python 3.7+)

pip install --upgrade pip
pip install -r requirements.txt

Install requirements (anaconda python 3.8)

conda install pytorch torchvision torchaudio -c pytorch
conda install ftfy regex tqdm requests pandas seaborn
pip install opencv-python pycocotools tensorflow

Run with Roboflow

python clip_object_tracker.py --source data/video/fish.mp4 --url https://detect.roboflow.com/playing-cards-ow27d/1 --api_key ROBOFLOW_API_KEY --info

**Note:** you must provide a valid API key from Roboflow.

Run with YOLOv7

python clip_object_tracker.py --weights models/yolov7.pt --source data/video/fish.mp4 --detection-engine yolov7 --info

Run with YOLOv5

python clip_object_tracker.py --weights models/yolov5s.pt --source data/video/fish.mp4 --detection-engine yolov5 --info

Run with YOLOv4

To use YOLOv4 for object detection you will need pretrained weights (a .weights file), a model config for those weights (.cfg), and a class names file (.names). Test weights (yolov4.weights, yolov4.cfg) can be found at https://github.com/AlexeyAB/darknet.

python clip_object_tracker.py --weights yolov4.weights --cfg yolov4.cfg --names coco.names --source data/video/cars.mp4 --detection-engine yolov4 --info

(by default, output will be in runs/detect/exp[num])

Help

python clip_object_tracker.py -h
--weights WEIGHTS [WEIGHTS ...]  model.pt path(s)
--source SOURCE                  source (video/image)
--img-size IMG_SIZE              inference size (pixels)
--confidence CONFIDENCE          object confidence threshold                      
--overlap OVERLAP                IOU threshold for NMS
--thickness THICKNESS            Thickness of the bounding box strokes
--device DEVICE                  cuda device, i.e. 0 or 0,1,2,3 or cpu
--view-img                       display results
--save-txt                       save results to *.txt
--save-conf                      save confidences in --save-txt labels
--classes CLASSES [CLASSES ...]  filter by class: --class 0, or --class 0 2 3
--agnostic-nms                   class-agnostic NMS
--augment                        augmented inference
--update                         update all models
--project PROJECT                save results to project/name
--name NAME                      save results to project/name
--exist-ok                       existing project/name ok, do not increment
--nms_max_overlap                Non-maxima suppression threshold: Maximum detection overlap.
--max_cosine_distance            Gating threshold for cosine distance metric (object appearance).
--nn_budget NN_BUDGET            Maximum size of the appearance descriptor gallery. If None, no budget is enforced.
--api_key API_KEY                Roboflow API Key.
--url URL                        Roboflow Model URL.
--info                           Print debugging info.
--detection-engine               Which engine you want to use for object detection (yolov7, yolov5, yolov4, roboflow).

Acknowledgements

Huge thanks to:

Contributors

dependabot[bot], jacobsolawetz, madewithstone, mkhir, mshirshekar, nickvaras, theaiguyscode, yeldarby


Issues

Issues with webcam

Hello all,

I tried to run this code with a webcam, using the following command:

python clip_object_tracker.py --weights ./weights/best.pt --source 0 --detection-engine yolov7 --info

But it threw the following error:

"TypeError: expected str, bytes or os.PathLike object, not list"

Can anyone help? Thanks in advance.

SSL Error

clip_object_tracker.py --source data/video/cards.mp4 --url https://detect.roboflow.com/playing-cards-ow27d/1 --api_key ROBOFLOW_API_KEY

Namespace(agnostic_nms=False, api_key='ROBOFLOW_API_KEY', augment=False, classes=None, confidence=0.4, device='', exist_ok=False, img_size=640, info=False, max_cosine_distance=0.4, name='exp', nms_max_overlap=1.0, nn_budget=None, overlap=0.3, project='runs/detect', save_conf=False, save_txt=False, source='data/video/cards.mp4', thickness=3, update=False, url='https://detect.roboflow.com/playing-cards-ow27d/1', view_img=False, weights='yolov5s.pt')
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1422, in connect
server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 423, in wrap_socket
session=session
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 870, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "clip_object_tracker.py", line 283, in
detect()
File "clip_object_tracker.py", line 88, in detect
model, transform = clip.load(model_filename, device=device)
File "/Users/shayo/tracking/zero-shot-object-tracking/clip/clip.py", line 112, in load
model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip"))
File "/Users/shayo/tracking/zero-shot-object-tracking/clip/clip.py", line 55, in _download
with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1362, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1321, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>
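
This is the stock macOS certificate problem for python.org framework builds rather than anything repo-specific: the CLIP weight download fails because Python cannot locate root certificates. Two commonly suggested fixes (a sketch, assuming the python.org installer; adjust the version number):

# Option 1: install root certificates for the python.org build.
/Applications/Python\ 3.7/Install\ Certificates.command

# Option 2: point Python at certifi's CA bundle for the current shell.
pip install certifi
export SSL_CERT_FILE="$(python -m certifi)"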

When I run inference with a single class (e.g., person), I get this error

Traceback (most recent call last):
File "clip_object_tracker.py", line 360, in
detect()
File "clip_object_tracker.py", line 160, in detect
pred = yolov5_engine.infer(img)
File "/home/zx/zero-shot-object-tracking/utils/yolov5.py", line 16, in infer
pred = self.model(img, augment=self.augment)[0]
File "/home/zx/anaconda3/envs/roboflow/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/zx/zero-shot-object-tracking/models/yolo.py", line 123, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/home/zx/zero-shot-object-tracking/models/yolo.py", line 139, in forward_once
x = m(x) # run
File "/home/zx/anaconda3/envs/roboflow/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/zx/zero-shot-object-tracking/models/common.py", line 120, in forward
return torch.cat(x, self.d)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 30 but got size 29 for tensor number 1 in the list.

Roboflow object tracking on mobile android app with Kotlin

Hello. Your object tracker looks great and works perfectly with the Python scripts! Do you have the same object tracker for mobile, written in Kotlin? Or can you give some advice on where to find an object tracker for Android in Kotlin?
I have the label information, but I could not create a good tracker for each detected object. I tried the following loop; it shows some numbers during detection, but not like in your example, where a detected fish keeps a stable tracking ID along its whole swimming path. If you have some experience with Kotlin, can you please help me figure out how to build the same type of object tracking as you have? Thanks in advance, and I look forward to hearing from you!


No module named 'ftfy'

Looks like it might be missing from the requirements. This is from a fresh venv with pip install -r requirements.txt and Python 3.9:

(env) yeldarb@iUSB-C-You zero-shot-object-tracking % python clip_object_tracker.py --source ~/Downloads/cards-720.mov --url https://detect.roboflow.com/playing-cards-ow27d/1 --api_key {{SNIPPED}} 
Traceback (most recent call last):
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip_object_tracker.py", line 5, in <module>
    import clip
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip/__init__.py", line 1, in <module>
    from .clip import *
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip/clip.py", line 13, in <module>
    from .simple_tokenizer import SimpleTokenizer as _Tokenizer
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip/simple_tokenizer.py", line 6, in <module>
    import ftfy
ModuleNotFoundError: No module named 'ftfy'

Doing pip install ftfy worked, but then I got No module named 'regex'; after pip install regex it ran for a bit, then errored with No module named 'tensorflow', and that's as far as I've gotten.

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

Namespace(agnostic_nms=False, api_key=None, augment=False, cfg='/content/zero-shot-object-tracking/models/yolov5s.yaml', classes=None, confidence=0.4, detection_engine='yolov5', device='', exist_ok=False, img_size=640, info=False, max_cosine_distance=0.4, name='exp', names='coco.names', nms_max_overlap=1.0, nn_budget=None, overlap=0.3, project='runs/detect', save_conf=False, save_txt=False, source='/content/zero-shot-object-tracking/example/video/fish.mp4', thickness=3, update=False, url=None, view_img=False, weights='yolov5s.pt')
Fusing layers...
Using torch 1.11.0+cu113 CUDA:0 (Tesla T4, 15109.75MB)

Traceback (most recent call last):
File "clip_object_tracker.py", line 360, in
detect()
File "clip_object_tracker.py", line 141, in detect
_ = yolov5_engine.infer(img.half() if half else img) if device.type != 'cpu' else None # run once
File "/content/zero-shot-object-tracking/utils/yolov5.py", line 16, in infer
pred = self.model(img, augment=self.augment)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/content/zero-shot-object-tracking/models/yolo.py", line 123, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/content/zero-shot-object-tracking/models/yolo.py", line 139, in forward_once
x = m(x) # run
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/upsampling.py", line 154, in forward
recompute_scale_factor=self.recompute_scale_factor)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1186, in getattr
type(self).name, name))
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
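
This is a known incompatibility between torch >= 1.11 and models pickled under older torch: the newer nn.Upsample.forward references a recompute_scale_factor attribute that old checkpoints lack. A commonly reported workaround (a sketch, to run once after the model is loaded) is to add the missing attribute:

import torch.nn as nn

# After the YOLOv5 model has been loaded:
for m in model.modules():
    if isinstance(m, nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
        m.recompute_scale_factor = None  # restores the default upsampling path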

I cannot run inference with my model!

Thanks for this video. I tried to run this command:

!python clip_object_tracker.py --source /content/zero-shot-object-tracking/data/video/Turkey.mp4 --url https://detect.roboflow.com/video-track/2 --api_key *********** --info

but this error appears:

video 1/1 (23/1351) /content/zero-shot-object-tracking/data/video/Turkey.mp4:
[Detections]
[Tracks] 2
Traceback (most recent call last):
File "clip_object_tracker.py", line 370, in
detect()
File "clip_object_tracker.py", line 269, in detect
update_tracks(tracker, frame_count, save_txt, txt_path, save_img, view_img, im0, gn)
File "clip_object_tracker.py", line 46, in update_tracks
class_name = names[int(class_num)] if opt.detection_engine == "yolov5" or "yolov7" else class_num
ValueError: invalid literal for int() with base 10: 'damage
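
The quoted line contains an always-true condition: opt.detection_engine == "yolov5" or "yolov7" evaluates the bare string "yolov7" as truthy, so the int() branch also runs for Roboflow detections, where class_num is already a class-name string like 'damage'. A likely one-line patch (a sketch against the quoted code):

# Only index into `names` for engines that return numeric class ids.
class_name = names[int(class_num)] if opt.detection_engine in ("yolov5", "yolov7") else class_num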

Modifying YOLOv5 engine to work with custom classes

Is there an easy way to set up the object tracker to track object classes outside of the COCO-128 dataset?

I have a trained YOLOv5 model with weights yolo5_custom.pt, trained on a specific set of classes. However, the YOLOv5 detection engine does not take a data or names file as an additional argument, and if I try modifying the coco128.yaml file contained within the data folder, it throws an error when executing:

Input:
zero-shot-object-tracking/clip_object_tracker.py --source zero-shot-object-tracking/data/video/*******.mp4 --weights zero-shot-object-tracking/models/yolo5_custom.pt --detection-engine yolov5 --info

Output:

/bin/bash: /anaconda/envs/jupyter_env/lib/libtinfo.so.6: no version information available (required by /bin/bash)
Namespace(agnostic_nms=False, api_key=None, augment=False, cfg='yolov4.cfg', classes=None, confidence=0.4, detection_engine='yolov5', device='', exist_ok=False, img_size=640, info=True, max_cosine_distance=0.4, name='exp', names='zero-shot-object-tracking/coco-copy.names', nms_max_overlap=1.0, nn_budget=None, overlap=0.3, project='runs/detect', save_conf=False, save_txt=False, source='zero-shot-object-tracking/data/video/**********.mp4', thickness=3, update=False, url=None, view_img=False, weights=['zero-shot-object-tracking/models/model.pt'])
Traceback (most recent call last):
  File "zero-shot-object-tracking/clip_object_tracker.py", line 370, in <module>
    detect()
  File "zero-shot-object-tracking/clip_object_tracker.py", line 105, in detect
    yolov5_engine = Yolov5Engine(opt.weights, device, opt.classes, opt.confidence, opt.overlap, opt.agnostic_nms, opt.augment, half)
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/compute-optimized-cpu001/code/Users/alex.jamieson/object_tracking_roboflow_method/zero-shot-object-tracking/utils/yolov5.py", line 6, in __init__
    self.model = attempt_load(weights, map_location=device)
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/compute-optimized-cpu001/code/Users/alex.jamieson/object_tracking_roboflow_method/zero-shot-object-tracking/models/experimental.py", line 118, in attempt_load
    model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval())  # load FP32 model
KeyError: 'model'

Modified coco128.yaml file for reference:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes
names:
  0: class1
  1: class2
  2: class3
  3: class4
  4: class5

# number of classes
nc: 5

# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128.zip

edit: The root cause seems to come from my custom YOLOv5 model. Reverting the yaml config to its original state still throws the same error.

Can't get attribute 'DetectionModel' on <module 'models.yolo'

I get this error when running:
!python clip_object_tracker.py --source ./data/video/videolife.mp4 --detection-engine yolov5 --weights ./models/best2.pt --info

Can't get attribute 'DetectionModel' on <module 'models.yolo' from '/content/zero-shot-object-tracking/models/yolo.py'>

  • best2.pt contains YOLOv5-trained weights (Ultralytics YOLOv5 v6)

Could you please help?

Object tracking for YOLOv7 model.

The current object tracking solution is implemented for YOLOv4 and YOLOv5. Can you please update the current solution for the YOLOv7 model as well?

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

When running this code snippet, I get the TypeError below. Any guidance would be appreciated:
!python clip_object_tracker.py --source ./data/video/cars.mp4 --detection-engine yolov5


/usr/local/lib/python3.7/dist-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
video 1/1 (1/266) /content/zero-shot-object-tracking/zero-shot-object-tracking/data/video/cars.mp4: yolov5 inference

[Detections]
1 persons, 8 cars, 2 trucks,
Traceback (most recent call last):
File "clip_object_tracker.py", line 361, in
detect()
File "clip_object_tracker.py", line 249, in detect
class_nums = np.array([d.class_num for d in detections])
File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 678, in array
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
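
The traceback points at its own fix: class_num is still a CUDA tensor when np.array tries to convert it, and tensors must be moved to the CPU first. A likely patch for the quoted line (a sketch, assuming class_num is a tensor whenever a GPU engine is used):

class_nums = np.array([
    d.class_num.cpu() if hasattr(d.class_num, "cpu") else d.class_num
    for d in detections
])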

KeyError: 'predictions' issue within predict_image

When I follow the default steps in the repo, I run into a KeyError: 'predictions' inside predict_image when running the following "Run with Roboflow" command:

python clip_object_tracker.py --source data/video/fish.mp4 --url https://detect.roboflow.com/playing-cards-ow27d/1 --api_key ROBOFLOW_API_KEY --info

Something that might be related: I run python3 so that the f-string syntax compiles without a syntax error, but the docs use plain python.

The following "Run with YOLOv5" command works great:

python clip_object_tracker.py --weights models/yolov5s.pt --source data/video/fish.mp4 --detection-engine yolov5 --info

The Google Colab notebook also works perfectly on the first try 👍
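
A useful first debugging step (a sketch, not the repo's code; `response` is a stand-in for whatever predict_image gets back) is to inspect the raw API response before indexing it, since error responses from the hosted API come back as JSON without a predictions key:

result = response.json()
if "predictions" not in result:
    # Typical causes: invalid API key, wrong model URL, or an API-side error.
    raise RuntimeError(f"Unexpected Roboflow response: {result}")
predictions = result["predictions"]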

Can't get attribute 'SPPF' on <module 'models.common' from '/content/zero-shot-object-tracking/models/common.py

I got this error when I ran your Colab:

 File "clip_object_tracker.py", line 360, in <module>
    detect()
  File "clip_object_tracker.py", line 104, in detect
    yolov5_engine = Yolov5Engine(opt.weights, device, opt.classes, opt.confidence, opt.overlap, opt.agnostic_nms, opt.augment, half)
  File "/content/zero-shot-object-tracking/utils/yolov5.py", line 6, in __init__
    self.model = attempt_load(weights, map_location=device)
  File "/content/zero-shot-object-tracking/models/experimental.py", line 118, in attempt_load
    model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval())  # load FP32 model
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 882, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 875, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from '/content/zero-shot-object-tracking/models/common.py'>
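
This usually means the weights were trained with a newer upstream YOLOv5 (v6.0+) than the code vendored here: unpickling looks for an SPPF class that this repo's models/common.py predates. One widely circulated workaround is to copy upstream's SPPF module into models/common.py (sketch below, matching the upstream v6.0 definition; Conv is the conv-BN-activation block already defined in that file). Alternatively, retrain or re-export with a matching YOLOv5 version.

import torch
import torch.nn as nn

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast, introduced in upstream YOLOv5 v6.0.
    def __init__(self, c1, c2, k=5):
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)    # Conv comes from models/common.py
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.m(x)
        y2 = self.m(y1)
        return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))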

Running with YOLOv5, I got this error

video 1/1 (1/215) /home/zx/zero-shot-object-tracking/data/video/fish.mp4: yolov5 inference
Done. (0.010s)
video 1/1 (2/215) /home/zx/zero-shot-object-tracking/data/video/fish.mp4: yolov5 inference

[Detections]
1 forks,
Traceback (most recent call last):
File "clip_object_tracker.py", line 360, in
detect()
File "clip_object_tracker.py", line 249, in detect
class_nums = np.array([d.class_num for d in detections])
File "/home/zx/anaconda3/envs/roboflow/lib/python3.8/site-packages/torch/_tensor.py", line 678, in array
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
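
This looks like the same CUDA-tensor-to-NumPy conversion failure as the issue above; the same .cpu() patch sketch should apply here.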

cv2.error Assertion failed !image.empty() in function 'imencode'

After it finishes all the detections, I get an error from cv2.imencode:

video 1/1 (42/83) /Users/yeldarb/Downloads/cards-720.mov: Traceback (most recent call last):
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip_object_tracker.py", line 263, in <module>
    detect()
  File "/Users/yeldarb/Code/zero-shot-object-tracking/clip_object_tracker.py", line 110, in detect
    pred, classes = predict_image(vid_cap, opt.api_key, opt.url, frame_count)
  File "/Users/yeldarb/Code/zero-shot-object-tracking/utils/roboflow.py", line 11, in predict_image
    retval, buffer = cv2.imencode('.jpg', image)
cv2.error: OpenCV(4.5.3) /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-req-build-vy_omupv/opencv/modules/imgcodecs/src/loadsave.cpp:978: error: (-215:Assertion failed) !image.empty() in function 'imencode'
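
The assertion fires when cv2.imencode is handed an empty frame, which typically happens when VideoCapture.read() fails near the end of a file. A defensive sketch for utils/roboflow.py (assuming predict_image reads its frame from vid_cap, as the traceback suggests):

ok, image = vid_cap.read()
if not ok or image is None:
    # End of stream or a decode failure; skip instead of encoding an empty frame.
    return [], []
retval, buffer = cv2.imencode('.jpg', image)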
