
zhongdao / UniTrack

336 stars · 10 watchers · 34 forks · 64.19 MB

[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).

License: MIT License

Languages: Python 90.37%, Shell 0.08%, MATLAB 0.05%, C 1.93%, Cython 0.54%, CMake 0.11%, C++ 6.35%, Java 0.56%

Topics: object-tracking, single-object-tracking, multi-object-tracking, multi-object-tracking-segmentation, multi-object-tracker, multi-object-track, pose-tracking, video-object-segmentation, video-object-tracking, video-instance-segmentation

unitrack's Introduction


[NeurIPS 2021] Do different tracking tasks require different appearance models?

[ArXiv] [Project Page]

UniTrack is a simple and unified framework for addressing multiple tracking tasks.

Being a fundamental problem in computer vision, tracking has been fragmented into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and novel approaches proposed by the community are usually specialized to fit only one specific setup. To understand to what extent this specialization is actually necessary, we present UniTrack, a solution that addresses multiple different tracking tasks within the same framework. All tasks share the same appearance model.

Demo

Multi-Object Tracking demo for 80 COCO classes (YOLOX + UniTrack)

In this demo we run the YOLOX detector and perform MOT for the 80 COCO classes. Try the demo by:

python demo/mot_demo.py --classes cls1 cls2 ... clsN

where cls1 to clsN represent the indices of classes you would like to detect and track. See here for the index list. By default all 80 classes are detected and tracked.
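For example, assuming the standard 0-indexed COCO ordering (where 0 is person and 2 is car), the following would detect and track only people and cars:

python demo/mot_demo.py --classes 0 2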

Single-Object Tracking demo for custom videos

python demo/sot_demo.py --config ./config/imagenet_resnet18_s3.yaml --input /path/to/your/video

In this demo, you are asked to annotate the target to be tracked by drawing a rectangle in the first frame of the video. The algorithm then tracks the target in subsequent frames without an object detector.
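For reference, this kind of first-frame annotation is typically implemented with OpenCV's ROI selector. A minimal sketch of the idea, assuming OpenCV (cv2) is installed and using a placeholder video path; this is illustrative, not the repo's actual demo code:

import cv2

# Read the first frame of the video to be annotated.
cap = cv2.VideoCapture("/path/to/your/video")
ok, first_frame = cap.read()
assert ok, "could not read the first frame"

# selectROI opens a window and blocks until a rectangle is drawn
# and confirmed with ENTER/SPACE; it returns (x, y, w, h).
x, y, w, h = cv2.selectROI("annotate target", first_frame, showCrosshair=True)
print("initial box:", (x, y, w, h))

cap.release()
cv2.destroyAllWindows()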

Tasks & Framework

(Figure: tracking task setups and the UniTrack framework)

Tasks

We classify existing tracking tasks along four axes: (1) single or multiple targets; (2) targets specified by the user or by an automatic detector; (3) observation format (bounding box/mask/pose); (4) class-agnostic or class-specific (e.g. humans/vehicles). We mainly experiment on 5 tasks: SOT, VOS, MOT, MOTS, and PoseTrack. Task setups are summarized in the figure above.

Appearance model

An appearance model is the only learnable component in UniTrack. It should provide a universal visual representation, and is usually pre-trained on large-scale datasets in a supervised or unsupervised manner. Typical examples include ImageNet pre-trained ResNets (supervised) and recent self-supervised models such as MoCo and SimCLR (unsupervised).
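For illustration, a minimal sketch of such an appearance model, assuming torchvision is available (this mirrors the remove_layers: ['layer4'] setting seen in the configs, but is not UniTrack's actual model code):

import torch
import torchvision

# ImageNet-pretrained ResNet-18, truncated before layer4 so it yields a
# dense stride-16 feature map instead of a classification vector.
backbone = torchvision.models.resnet18(pretrained=True)
feat = torch.nn.Sequential(*list(backbone.children())[:-3])  # drop layer4, avgpool, fc
feat.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # dummy image batch
    f = feat(x)
print(f.shape)  # torch.Size([1, 256, 14, 14])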

Propagation and Association

Propagation and association are the two core primitives used in UniTrack to address a wide variety of tracking tasks (currently 7, but more can be added). Both use the features extracted by the pre-trained appearance model. For propagation, we adopt existing methods such as cross-correlation, DCF, and mask propagation. For association, we employ a simple algorithm as in JDE and develop a novel reconstruction-based similarity metric that allows objects to be compared across shapes and sizes.
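As a hedged sketch of the association primitive (illustrative only; UniTrack's actual implementation additionally uses motion gating, Kalman filtering, and the reconstruction-based metric): match track embeddings to detection embeddings by cosine similarity, then solve the assignment with the Hungarian algorithm, as in JDE.

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_embs, det_embs, sim_thres=0.5):
    # Cosine similarity between L2-normalized embeddings, then Hungarian
    # assignment; keep only pairs above the similarity threshold.
    sim = track_embs @ det_embs.T
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_thres]

rng = np.random.default_rng(0)
tracks = rng.normal(size=(3, 128))
dets = rng.normal(size=(4, 128))
tracks /= np.linalg.norm(tracks, axis=1, keepdims=True)
dets /= np.linalg.norm(dets, axis=1, keepdims=True)
print(associate(tracks, dets))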

Getting started

  1. Installation: Please check out docs/INSTALL.md
  2. Data preparation: Please check out docs/DATA.md
  3. Appearance model preparation: Please check out docs/MODELZOO.md
  4. Run evaluation on all datasets: Please check out docs/RUN.md

Results

Below we show results of UniTrack with a simple ImageNet pre-trained ResNet-18 as the appearance model. More results can be found in RESULTS.md.

Single Object Tracking (SOT) on OTB-2015

Video Object Segmentation (VOS) on DAVIS-2017 val split

Multiple Object Tracking (MOT) on MOT-16 test set private detector track (Detections from FairMOT)

Multiple Object Tracking and Segmentation (MOTS) on MOTS challenge test set (Detections from COSTA_st)

Pose Tracking on PoseTrack-2018 val split (Detections from LightTrack)

Acknowledgement

Part of the code is borrowed from:

VideoWalk by Allan A. Jabri

SOT code by Zhipeng Zhang

Citation

@article{wang2021different,
  author    = {Wang, Zhongdao and Zhao, Hengshuang and Li, Ya-Li and Wang, Shengjin and Torr, Philip and Bertinetto, Luca},
  title     = {Do different tracking tasks require different appearance models?},
  journal   = {Thirty-Fifth Conference on Neural Information Processing Systems},
  year      = {2021},
}

unitrack's People

Contributors

kentaroy47, zhongdao


unitrack's Issues

SOT on custom videos

Hi Zhongdao,
Thanks a lot for this great work!
I'm wondering how we can apply one of your SOT models to a custom video.
I couldn't find any details in your readme files on how to do that, and if I understood correctly, the test_sot_xx.py scripts are dedicated to the OTB2015 dataset.

I met this problem

2021-11-01 17:08:15.495 | INFO | main:main:149 - Model Summary: Params: 99.07M, Gflops: 211.45
2021-11-01 17:08:16.556 | INFO | main:main:152 - loading checkpoint
Traceback (most recent call last):
File "demo/mot_demo.py", line 193, in
main(exp, args)
File "demo/mot_demo.py", line 153, in main
ckpt = torch.load(args.ckpt, map_location="cpu")
File "D:\tool\anaconda\envs\unitrack\lib\site-packages\torch\serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "D:\tool\anaconda\envs\unitrack\lib\site-packages\torch\serialization.py", line 762, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x00'.

UniTrack on Windows 10?

Hello, can I run the UniTrack demo file on Windows 10 if I set up the environment like on Ubuntu?

What is the UniTrack Pose FPS in real time?

Hello! I'm trying to use Pose tracking in real time, but I have a question:
what's the real-time FPS of Pose (LightTrack) in the README example?
If it does not run in real time, could you tell me the expected FPS?

Tensor size mismatch when running mot_demo.py with custom test image size

I'd like to try running mot_demo.py with a custom test image size, --tsize 800 600.
An exception arises in this case with the message:
Sizes of tensors must match except in dimension 2. Got 75 and 76 (The offending index is 0)

No additional debug information is available, so it is difficult for me to find out which line of code throws this exception.

Here are my exp arguments and logs:

2022-04-04 01:35:38.341 | INFO     | __main__:main:135 - Args: Namespace(
asso_with_motion=True, 
ckpt='detector/YOLOX/weights/yolox_m.pth', 
classes=[0], conf=0.65, conf_thres=0.65, 
config='./config/imagenet_resnet18_s3.yaml', 
confirm_iou_thres=0.7, demo='video', device='cuda', 
down_factor=8, dup_iou_thres=0.15, 
exp_file='detector/YOLOX/exps/default/yolox_m.py', 
exp_name='imagenet_resnet18_s3', 
feat_size=[4, 10], gpu_id=0, 
im_mean=[0.485, 0.456, 0.406], im_std=[0.229, 0.224, 0.225], 
img_size=[800, 600], 
infer2D=True, iou_thres=0.5, min_box_area=200, 
model_type='imagenet18', 
mot_root='/home/wangzd/datasets/MOT/MOT16', 
motion_gated=True, motion_lambda=0.98, 
nms=0.3, nms_thres=0.4, 
nopadding=False, obid='FairMOT', 
output_root='./results/mot_demo', 
path='/workspace/project/samples/videos/G175647144539.mp4', 
prop_flag=False, remove_layers=['layer4'], 
resume='None', save_images=False, save_result=False, 
save_videos=True, test_mot16=False, track_buffer=30, 
tsize=[800, 600], use_kalman=True, workers=4)
[NULL @ 0x55b86843db00] PPS id out of range: 0
[hevc @ 0x55b86843db00] PPS id out of range: 0
Lenth of the video: 1107126 frames
2022-04-04 01:35:40.498 | INFO     | __main__:main:147 - Model Summary: Params: 25.33M, Gflops: 86.43
2022-04-04 01:35:45.542 | INFO     | __main__:main:150 - loading checkpoint
2022-04-04 01:35:45.888 | INFO     | __main__:main:154 - loaded checkpoint done.
[hevc @ 0x55b868566400] Could not find ref with POC 4
2022-04-04 01:35:46.662 | INFO     | __main__:eval_seq:100 - Processing frame 0 (100000.00 fps)
Sizes of tensors must match except in dimension 2. Got 75 and 76 (The offending index is 0)
...
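A plausible cause, though not confirmed in this thread: YOLOX-style FPN detectors fuse feature maps from strides 8/16/32, so input sides that are not divisible by 32 can produce off-by-one grid sizes (600 / 8 = 75 here). A minimal sketch of a workaround under that assumption is to round --tsize down to multiples of 32:

# Hypothetical workaround: round requested test sizes down to the nearest
# multiple of the detector's maximum stride (32 for YOLOX-style FPNs).
def round_to_stride(size, stride=32):
    return [(s // stride) * stride for s in size]

print(round_to_stride([800, 600]))  # [800, 576]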

mot_demo issue

python demo/mot_demo.py --classes 1 2

demo/mot_demo.py:186: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
common_args = yaml.load(f)
2022-04-23 12:44:56.034 | INFO | main:main:135 - Args: Namespace(asso_with_motion=True, ckpt='./detector/YOLOX/weights/yolox_x.pth', classes=[1, 2], conf=0.65, conf_thres=0.65, config='./config/imagenet_resnet18_s3.yaml', confirm_iou_thres=0.7, demo='video', device='cuda', down_factor=8, dup_iou_thres=0.15, exp_file='./detector/YOLOX/exps/default/yolox_x.py', exp_name='imagenet_resnet18_s3', feat_size=[4, 10], gpu_id=0, im_mean=[0.485, 0.456, 0.406], im_std=[0.229, 0.224, 0.225], img_size=[640, 480], infer2D=True, iou_thres=0.5, min_box_area=200, model_type='imagenet18', mot_root='/home/wangzd/datasets/MOT/MOT16', motion_gated=True, motion_lambda=0.98, nms=None, nms_thres=0.4, nopadding=False, obid='FairMOT', output_root='./results/mot_demo', path='../mmtracking-master/demo/demo.mp4', prop_flag=False, remove_layers=['layer4'], resume='None', save_images=False, save_result=False, save_videos=True, test_mot16=False, track_buffer=30, tsize=[640, 480], use_kalman=True, workers=4)
Lenth of the video: 8 frames
1111111111111111111111111
/dfs/data/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1639180594101/work/aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2022-04-23 12:44:57.335 | INFO | main:main:149 - Model Summary: Params: 99.07M, Gflops: 211.45
2022-04-23 12:45:05.732 | INFO | main:main:152 - loading checkpoint
<class 'dict'>
Traceback (most recent call last):
File "demo/mot_demo.py", line 201, in
main(exp, args)
File "demo/mot_demo.py", line 158, in main
det_model.load_state_dict(ckpt["model"])
KeyError: 'model'

I'm using the default COCO-pretrained YOLOX weights (https://github.com/open-mmlab/mmdetection/tree/master/configs/yolox).

Padding issue of SiamFc

Hi! When evaluating the appearance models on SOT using the SiamFC tracker, how does the padding operation, which breaks SiamFC's fully-convolutional property, affect the result?
Thanks in advance!

scikit-learn version error

I use scikit-learn==1.1.2 in my project and I also use UniTrack as a submodule.
However, UniTrack pins scikit-learn==0.22, which is too old.

So, an import error occurs at utils/mask.py line 16:

from sklearn.metrics import jaccard_similarity_score

jaccard_similarity_score was removed from scikit-learn >= 0.23.
(And it isn't used in utils/mask.py, so it can simply be deleted.)

Will you update scikit-learn?
If scikit-learn is updated, some other errors might occur.
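A minimal sketch of the suggested fix, assuming the import really is unused as noted above: delete the dead import, or, if a Jaccard/IoU metric is needed, switch to the maintained sklearn.metrics.jaccard_score:

# utils/mask.py, line 16 (sketch): the old import was removed in scikit-learn >= 0.23
# from sklearn.metrics import jaccard_similarity_score   # dead import, delete

# Maintained replacement if a Jaccard score is actually required:
from sklearn.metrics import jaccard_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
print(jaccard_score(y_true, y_pred))  # 0.5 (intersection 1 / union 2)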

download pretrain model issue

Hi, Zhongdao,

I cannot download the pre-trained models from the model zoo links: https://github.com/Zhongdao/UniTrack/blob/54347ba1bdba0903b241e00de2b5d0dc3c1a3d14/docs/MODELZOO.md

(ImageNet classification Imagenet 50 & SimCLR v2)

Bounding box doesn't scale for SOT

I've tried SOT on custom videos using demo/sot_demo.py. After drawing the initial bounding box, it doesn't scale even when the size of the object is reduced or increased. What might be the possible root cause of this problem?

Thanks in advance.

Problem with the Unitrack + YOLOX Demo

Hello,
I hope to run the UniTrack + YOLOX demo in real time,

so I ran the UniTrack + YOLOX demo with the "webcam" option,
but it seems to only work on a test-video basis.

How can I run the demo with a webcam?

specify the numpy version

Hi Zhongdao, can you specify the NumPy version compatible with all the other dependencies? It is not specified in the requirements.
I am trying to run your code for my project but am getting dependency errors with cython_bbox and pycocotools. They seem to be related to specific NumPy versions.
Thanks!

Using a custom Resnet-18 Classification model

I have trained a resnet-18 model on a custom dataset for classification. I also have trained YOLOX for detection on this custom dataset.
How do I use my ResNet-18 model as an appearance model? Since it's not trained with crw, imagenet, etc., what model_type should I give in the config file? And do I have to edit model/model.py to handle this model_type and load the model from my checkpoint?
Thanks for your help!
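A hedged sketch of one possible path (hypothetical; the checkpoint name, class count, and state-dict layout below are assumptions about your setup, and model/model.py would likely need a new branch for a custom model_type): load the checkpoint into a torchvision ResNet-18 and strip layer4, mirroring the imagenet configs.

import torch
import torchvision

# Load a custom-classification ResNet-18 checkpoint (names are placeholders).
backbone = torchvision.models.resnet18(num_classes=10)  # your class count
state = torch.load("my_resnet18.pth", map_location="cpu")
backbone.load_state_dict(state)

# Keep everything up to layer3, mirroring remove_layers: ['layer4'].
appearance = torch.nn.Sequential(*list(backbone.children())[:-3]).eval()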

License

Hi,

Great work!

What is the license of this repository?

Thanks

how to get MOTS-train.txt

Hi, how do I get MOTS-train.txt?

Eval Config:
USE_PARALLEL : False
NUM_PARALLEL_CORES : 8
BREAK_ON_ERROR : True
RETURN_ON_ERROR : False
LOG_ON_ERROR : results/mots/debug/quantitive/error.log
PRINT_RESULTS : True
PRINT_ONLY_COMBINED : False
PRINT_CONFIG : True
TIME_PROGRESS : True
DISPLAY_LESS_PROGRESS : True
OUTPUT_SUMMARY : True
OUTPUT_EMPTY_CLASSES : True
OUTPUT_DETAILED : True
PLOT_CURVES : False

MOTSChallenge Config:
GT_FOLDER : /data7/fenghao/dataset/MOTS_unitrack//images/train
TRACKERS_FOLDER : results/mots/debug/quantitive/..
OUTPUT_FOLDER : None
TRACKERS_TO_EVAL : ['quantitive']
CLASSES_TO_EVAL : ['pedestrian']
SPLIT_TO_EVAL : train
INPUT_AS_ZIP : False
PRINT_CONFIG : True
TRACKER_SUB_FOLDER :
OUTPUT_SUB_FOLDER :
TRACKER_DISPLAY_NAMES : None
SEQMAP_FOLDER : /data7/fenghao/dataset/MOTS_unitrack//images/train/../../seqmaps
SEQMAP_FILE : None
SEQ_INFO : None
GT_LOC_FORMAT : {gt_folder}/{seq}/gt/gt.txt
SKIP_SPLIT_FOL : True
BENCHMARK : MOTS20
no seqmap found: /data7/fenghao/dataset/MOTS_unitrack//images/train/../../seqmaps/MOTS-train.txt
Traceback (most recent call last):
File "/home/wangwd/.pycharm_helpers/pydev/pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/wangwd/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/data7/fenghao/UniTrack/test/test_mots.py", line 170, in
save_videos=opt.save_videos)
File "/data7/fenghao/UniTrack/test/test_mots.py", line 121, in main
dataset_list = [trackeval.datasets.MOTSChallenge(dataset_config)]
File "/data7/fenghao/UniTrack/eval/trackeval/datasets/mots_challenge.py", line 76, in init
self.seq_list, self.seq_lengths = self._get_seq_info()
File "/data7/fenghao/UniTrack/eval/trackeval/datasets/mots_challenge.py", line 152, in _get_seq_info
raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
eval.trackeval.utils.TrackEvalException: no seqmap found: MOTS-train.txt
python-BaseException

Process finished with exit code 1


This is my config:

common:
  exp_name: debug

  # Model related
  model_type: crw
  remove_layers: ['layer4']
  im_mean: [0.4914, 0.4822, 0.4465]
  im_std: [0.2023, 0.1994, 0.2010]
  nopadding: False
  head_depth: -1
  resume: 'weights/crw.pth'

  # Misc
  down_factor: 8
  infer2D: True
  workers: 4
  gpu_id: 3
  device: cuda

mots:
  obid: 'gt'
  mots_root: '/data7/fenghao/dataset/MOTS_unitrack/'
  save_videos: False
  save_images: False
  test: False
  track_buffer: 30
  nms_thres: 0.4
  conf_thres: 0.5
  iou_thres: 0.5
  prop_flag: False
  max_mask_area: 200
  dup_iou_thres: 0.15
  confirm_iou_thres: 0.7
  first_stage_thres: 0.7
  feat_size: [4,10]
  use_kalman: True
  asso_with_motion: True
  motion_lambda: 0.98
  motion_gated: False

Vehicles are not detected

Hi @Zhongdao, great work!

I ran mot_demo.py on some of my videos and there is a detection problem. I used the default configs and the newest code, and here are the results I got:

Pic 1: No vehicle is detected

Pic 2: Not all vehicles are detected

Pic 3: No vehicle is detected

Is it the detector's problem? Should I replace YOLOX with YOLOv5?

Update: I tried YOLOX separately on the same data and it worked well; all vehicles were detected. I don't know where the problem came from.

Update 2: I cloned the newest version of the YOLOX code into the detector folder to replace the old one, and it worked.

quantitative reid indicator

Hi Zhongdao,
Do you have a quantitative ReID indicator (mAP) for each model on different datasets (such as MOT)?

YOLOR + Unitrack

Hi, Ritesh here from Augmented Startups (89k). Would you be releasing an application on YOLOR and UniTrack? I would cover it on my YouTube channel.

PoseTrack download

Thanks for your excellent work.
I can't access PoseTrack's home page to download the data. Do you have a link available for downloading the PoseTrack data?
Thank you very much!

Tracklet ID

Is there a way to retrieve the tracklet ID from inference?

bad result

hi @Zhongdao, I have a problem.
I got a bad result, like the attached pic.

I mean it should track more cars in the pic.

I only downloaded yolox_x.pth and ran the Python demo.
What should I do?

Inference Statistics

Hello,

I know this project was just released and some things are still being put up, but do you have any information on the inference speeds for UniTrack with ResNet18 and ResNet50 base appearance models?

Appearance model performs very badly in night scenarios?

Hi Author,

I found the appearance model works in many scenarios, but at night the picture quality and lighting have a big impact on its performance. Is this normal? Is it because most ImageNet data comes from daytime scenes?

Old YOLO detector Models

Can I use previous YOLO detector models for MOT (object tracking), like YOLOv3/YOLOv4 networks trained on our custom dataset? What possible changes do I have to make if I modify the existing code for YOLOv3? Thanks

SOT on LaSOT

Hi, Zhongdao,

Thank you for your great work!

I have tested your code on LaSOT using the crw_resnet18_s3 model, by modifying the dataset root in utils.py.
But the AUC is only 23.02 on this dataset, and I'm not sure I got the correct results. Have you tested UniTrack on LaSOT or GOT-10k? Could you provide the results on LaSOT and GOT-10k at your convenience?

How to get the mask for mots task?

Hi!

Thanks for your great work!

When I prepared the segmentation masks for the MOTS task, I followed the recommended instructions in https://github.com/Zhongdao/UniTrack/blob/main/docs/DATA.md and used gen_mots_costa.py. I then get txt files like the following:

1 2001 2 1080 1920 UkU\1`0RQ1>PoN\OVP1X1F=I3oSOTNlg0U2lWOVNng0m1nWOWNlg0n1PXOWNlg0l1SXOUNjg0P2.......
But it seems that the txt files are not segmentation masks. Are these txt files right? Or could you please describe the mask generation process in more detail?

Thank you!
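For context, lines like the one above follow the standard MOTS txt format: frame, object id, class id, image height, image width, and a COCO-style RLE-encoded mask, so they do contain the segmentation. A hedged sketch of decoding one complete line with pycocotools (the file name is a placeholder, and the RLE string quoted above is truncated, so it cannot be decoded as shown):

from pycocotools import mask as maskUtils

# One MOTS txt line: frame, object id, class id, img height, img width, RLE counts.
line = open("MOTS20-02.txt").readline().rstrip("\n")  # placeholder file name
frame, obj_id, cls, h, w, rle_str = line.split(" ", 5)

rle = {"size": [int(h), int(w)], "counts": rle_str.encode("ascii")}
mask = maskUtils.decode(rle)  # (H, W) uint8 binary mask
print(frame, obj_id, cls, mask.shape, mask.sum())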

HOTA vs HOTA(0)

Hi,

For evaluation, the HOTA printed out and the HOTA(0) in the CSV don't match. What does the '(0)' mean?

Thanks,

Building wheel for pycocotools (setup.py) ... error

I installed requirements.txt and there is a problem:
Building wheels for collected packages: pycocotools
Building wheel for pycocotools (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'D:\tool\anaconda\envs\unitrack\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\GOD\AppData\Local\Temp\pip-install-sfqz1dry\pycocotools_37c8d0aeabba4390ad470c3122efd176\setup.py'"'"'; file='"'"'C:\Users\GOD\AppData\Local\Temp\pip-install-sfqz1dry\pycocotools_37c8d0aeabba4390ad470c3122efd176\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\GOD\AppData\Local\Temp\pip-wheel-u04ykz81'
cwd: C:\Users\GOD\AppData\Local\Temp\pip-install-sfqz1dry\pycocotools_37c8d0aeabba4390ad470c3122efd176
Complete output (19 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools_init_.py -> build\lib.win-amd64-3.7\pycocotools
running build_ext
building 'pycocotools._mask' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\pycocotools
creating build\temp.win-amd64-3.7\Release\common
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -ID:\tool\anaconda\envs\unitrack\lib\site-packages\numpy\core\include -Icommon -ID:\tool\anaconda\envs\unitrack\include -ID:\tool\anaconda\envs\unitrack\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcpycocotools/_mask.c /Fobuild\temp.win-amd64-3.7\Release\pycocotools/_mask.obj -Wno-cpp -Wno-unused-function -std=c99
cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe' failed with exit status 2

ERROR: Failed building wheel for pycocotools

mot demo not producing valid tracking results

Thanks for releasing this repo!

I tried to run MOT on a sample video. The script ran fine and was able to produce the video, but it did not contain any tracking results.
I checked the YOLOX demo.py, and the detector was working fine.

Do I need to set the pretrained ResNet-18 to get the demo working?

One Potential Bug

Hi Zhongdao,

online_targets = tracker.update(img, img0, obs)

Thanks for your great work providing such a wonderful evaluation framework!
Recently I used your framework to test the model on the MOT16 test set. I find that when the detection result of a specific frame provided by FairMOT is empty, the 'obs' shown above will also be empty and the program will crash.

Maybe we can re-create a zero matrix obs (1x5) when it is empty (len(obs) == 0). Is that right?

In addition, does your current code provide the VIS evaluation? There are no such configurations in your yaml files.
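A minimal sketch of the suggested guard, assuming obs is an N x 5 array of boxes plus score (using an empty (0, 5) array avoids injecting a fake detection, unlike a 1 x 5 zero row; the surrounding names come from the line quoted above):

import numpy as np

def safe_update(tracker, img, img0, obs):
    # If the detector provides no observations for this frame, pass an
    # empty (0, 5) array instead of letting downstream code crash.
    if obs is None or len(obs) == 0:
        obs = np.zeros((0, 5), dtype=np.float32)
    return tracker.update(img, img0, obs)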

PoseTrack dataset download links

Hello, since the PoseTrack official website is currently inaccessible, I cannot download the data from it, so I am turning to you for help:
could you share a download link for the PoseTrack dataset you used in the paper?

Thank you

problem with the module yolox

Hello, after following the install.md I received the error: no module named yolox.
I ran: pip install yolox
and then I receive: no module named yolox.data

Any suggestions?

Model zoo link doesn't work

Hi, I am trying to use the different pretrained models provided in your model zoo, but some of the links are empty. I am also wondering if you have the im_mean and im_std for each of them.

Thank you,

Bad result on YOLOX + UniTrack demo

Hi, Zhongdao,
Thank you for your wonderful UniTrack framework. But when I tried the YOLOX + UniTrack demo mot_demo.py on a video sequence, the result was not that good. I put the yolox_x (the larger version) pre-trained model in yolox/weights and used the config imagenet_resnet18_s3.yaml. It seems the framework cannot detect person objects, but it works pretty well on other objects. By the way, I used the default classes arg list(range(80)).
What should I do?

how to use the DCF head for SOT?

Hi,
Thanks for your interesting work. Does this code contain the DCF-head tracking part? I can't find this tracker in the tracker folder.
