
bytetrack_reid's Introduction

ByteTrack_ReID

[update 20231122]

Fixed the ReID FP16 bug in my PR to the official ByteTrack. Now the ReID module brings more benefits.

Also supports the HOTA metric from TrackEval.

| Tracker | Before ReID FP16 fix: AP50:95 / AP50 / AP75 \| HOTA / MOTA / IDF1 | After ReID FP16 fix: AP50:95 / AP50 / AP75 \| HOTA / MOTA / IDF1 |
| --- | --- | --- |
| ByteTrack | 0.567 / 0.868 / 0.647 \| 62.038 / 73.297 / 74.935 | 0.567 / 0.868 / 0.647 \| 62.038 / 73.297 / 74.935 |
| FairMOT | 0.567 / 0.868 / 0.647 \| 61.921 / 73.292 / 74.754 | 0.567 / 0.868 / 0.647 \| 61.921 / 73.292 / 74.754 |
| FairMOT+BYTE | 0.567 / 0.868 / 0.647 \| 61.602 / 73.359 / 73.648 | 0.567 / 0.868 / 0.647 \| 61.602 / 73.359 / 73.648 |

[update 20220514]:

OneDrive link of the trained model. The trained model's mAP should be 0.556, with MOTA 72.6 using the ByteTrack tracking strategy and MOTA 70.9 using the FairMOT tracking strategy.

[update 20220511]:

  1. Please open issues in English, not Chinese, so the discussion can benefit the community.

  2. To switch from one tracker to another, replace byte_track.py with the corresponding tracker file.

  3. The training procedure is the same as in the original ByteTrack. If you want to train the model on larger datasets with ID annotations, please follow JDE/FairMOT.

[update 20220428]:

I found a ReID-related bug in the original ByteTrack. I made a PR to ByteTrack and it has been merged into the master branch: ifzhang/ByteTrack#184

So the ReID part of the current code will not train correctly once track_id becomes large. I will update the code when I find time.
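
The root cause is simple to demonstrate: float16 has a 10-bit mantissa, so integers above 2048 are not all exactly representable, and track ids stored in an FP16 targets tensor get rounded onto neighbouring values, producing wrong ReID labels. A minimal standalone illustration (not code from this repo):

import torch

# float16 spacing between 2048 and 4096 is 2, and between 4096 and 8192 is 4,
# so large integer track ids silently collapse onto neighbouring values.
tids = torch.tensor([2047.0, 2049.0, 4099.0])
print(tids.half().float())  # tensor([2047., 2048., 4100.])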

Or you can make a PR to help me out!

[update 20220414]:

  1. Fixed a loss-computation bug in yolo_head.py.
  2. Fixed the feature update in the FairMOT tracker when recovering a tracklet.
  3. Fixed the training set in yolox_s_mot17_half to use train_half.json instead of train.json.
  4. The trained model can be downloaded here (code: jc69). The trained model's mAP should be 0.556, with MOTA 72.6 using the ByteTrack tracking strategy and MOTA 70.9 using the FairMOT tracking strategy.
  5. Note that ReID embeddings trained only on MOT17 half are not reliable due to limited ID annotations.

ByteTrack is the SOTA tracker on MOT benchmarks, combining the strong YOLOX detector with a simple association strategy based only on motion information.

Motion information (IoU distance) is efficient and effective for short-term tracking, but it cannot recover targets after long disappearances or handle conditions such as a moving camera.

So it is important to enhance ByteTrack with a ReID module for long-term tracking and for better performance under other challenging conditions, such as a moving camera.
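
To make the idea concrete, here is a minimal sketch (not this repository's code) of FairMOT/JDE-style association, where a cosine distance between ReID embeddings is fused with the IoU distance before Hungarian matching; the function names and the weight lam are illustrative assumptions:

import numpy as np
from scipy.optimize import linear_sum_assignment

def fused_distance(track_feats, det_feats, iou_dist, lam=0.98):
    """Fuse appearance and motion cues into one cost matrix.

    track_feats: (T, D) L2-normalized tracklet embeddings
    det_feats:   (N, D) L2-normalized detection embeddings
    iou_dist:    (T, N) 1 - IoU between predicted track boxes and detections
    """
    emb_dist = 1.0 - track_feats @ det_feats.T        # cosine distance
    return lam * emb_dist + (1.0 - lam) * iou_dist

def associate(cost, thresh=0.7):
    """Hungarian matching; reject pairs whose cost exceeds thresh."""
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= thresh]

Unlike pure IoU matching, the appearance term stays informative after a long occlusion, because the embedding of a re-appearing target still resembles the tracklet's stored feature.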

Some code is borrowed from FairMOT.

For now, the results are trained on half of MOT17 and tested on the other half of MOT17, and the performance is lower than that of the original ByteTrack.

Any issues and suggestions are welcome!

Tracking results using the tracking strategy of ByteTrack, with the detection head and ReID head trained together.

Tracking results using the tracking strategy of FairMOT, with the detection head and ReID head trained together.

Modifications, TODOs and Performance

Modifications

  • Enhanced ByteTrack with a ReID module (head), following the paradigm of FairMOT.
  • Added a classifier for supervised training of the ReID head.
  • Used FairMOT's uncertainty loss to balance the detection and ReID tasks (see the sketch after this list).
  • The tracking strategy is borrowed from FairMOT.
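
For the uncertainty-loss item above, here is a minimal sketch of FairMOT-style task-uncertainty weighting; the initial values follow FairMOT's released code, but treat the exact constants as assumptions:

import torch
import torch.nn as nn

class UncertaintyLoss(nn.Module):
    """Learnable log-variances s_det and s_id balance the two task losses
    (Kendall-style uncertainty weighting, as used in FairMOT)."""

    def __init__(self):
        super().__init__()
        self.s_det = nn.Parameter(torch.tensor(-1.85))
        self.s_id = nn.Parameter(torch.tensor(-1.05))

    def forward(self, det_loss, id_loss):
        return 0.5 * (torch.exp(-self.s_det) * det_loss
                      + torch.exp(-self.s_id) * id_loss
                      + self.s_det + self.s_id)

Because s_det and s_id are trained jointly with the network, the detection/ReID trade-off is learned rather than hand-tuned.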

TODOs

  • support more datasets
  • single class → multi-class
  • other loss functions for better ReID performance
  • other strategies for balancing multiple tasks
  • ...

The following content is the original README of ByteTrack.

ByteTrack is a simple, fast and strong multi-object tracker.

ByteTrack: Multi-Object Tracking by Associating Every Detection Box

Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Zehuan Yuan, Ping Luo, Wenyu Liu, Xinggang Wang

arXiv 2110.06864

Demo Links

  • Google Colab demo: Open In Colab
  • Huggingface Demo: Hugging Face Spaces
  • Original paper (ByteTrack): arXiv 2110.06864

Abstract

Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. The objects with low detection scores, e.g. occluded objects, are simply thrown away, which brings non-negligible true object missing and fragmented trajectories. To solve this problem, we present a simple, effective and generic association method, tracking by associating every detection box instead of only the high score ones. For the low score detection boxes, we utilize their similarities with tracklets to recover true objects and filter out the background detections. When applied to 9 different state-of-the-art trackers, our method achieves consistent improvement on IDF1 scores ranging from 1 to 10 points. To put forwards the state-of-the-art performance of MOT, we design a simple and strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of MOT17 with 30 FPS running speed on a single V100 GPU.
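
The two-stage association the abstract describes is easy to sketch. The following is a minimal, self-contained illustration (high-score boxes first, then low-score boxes for the leftover tracks); it is not the repository's BYTETracker, which additionally uses Kalman-predicted track boxes and several refinements:

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(a, b):
    """Pairwise IoU between box arrays a:(M,4) and b:(N,4) in (x1, y1, x2, y2)."""
    a, b = np.asarray(a, float)[:, None, :], np.asarray(b, float)[None, :, :]
    lt = np.maximum(a[..., :2], b[..., :2])
    rb = np.minimum(a[..., 2:], b[..., 2:])
    inter = np.prod(np.clip(rb - lt, 0, None), axis=-1)
    area_a = np.prod(a[..., 2:] - a[..., :2], axis=-1)
    area_b = np.prod(b[..., 2:] - b[..., :2], axis=-1)
    return inter / (area_a + area_b - inter + 1e-9)

def byte_associate(track_boxes, det_boxes, det_scores,
                   high_thresh=0.6, low_thresh=0.1, iou_thresh=0.3):
    """Two-stage BYTE-style association; returns (track_idx, det_idx) matches
    and the indices of tracks that stayed unmatched."""
    det_boxes, det_scores = np.asarray(det_boxes), np.asarray(det_scores)
    high = np.where(det_scores >= high_thresh)[0]
    low = np.where((det_scores >= low_thresh) & (det_scores < high_thresh))[0]

    matches, remaining = [], list(range(len(track_boxes)))
    for stage_dets in (high, low):      # stage 1: high scores, stage 2: low scores
        if not remaining or len(stage_dets) == 0:
            continue
        cost = 1.0 - iou_matrix([track_boxes[t] for t in remaining],
                                det_boxes[stage_dets])
        rows, cols = linear_sum_assignment(cost)
        keep = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_thresh]
        matches += [(remaining[r], int(stage_dets[c])) for r, c in keep]
        matched_rows = {r for r, _ in keep}
        remaining = [t for i, t in enumerate(remaining) if i not in matched_rows]
    return matches, remaining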

Tracking performance

Results on MOT challenge test set

| Dataset | MOTA | IDF1 | HOTA | MT | ML | FP | FN | IDs | FPS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MOT17 | 80.3 | 77.3 | 63.1 | 53.2% | 14.5% | 25491 | 83721 | 2196 | 29.6 |
| MOT20 | 77.8 | 75.2 | 61.3 | 69.2% | 9.5% | 26249 | 87594 | 1223 | 13.7 |

Visualization results on MOT challenge test set

Installation

1. Installing on the host machine

Step1. Install ByteTrack.

git clone https://github.com/ifzhang/ByteTrack.git
cd ByteTrack
pip3 install -r requirements.txt
python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step3. Others

pip3 install cython_bbox

2. Docker build

docker build -t bytetrack:latest .

# Startup sample
mkdir -p pretrained && \
mkdir -p YOLOX_outputs && \
xhost +local: && \
docker run --gpus all -it --rm \
-v $PWD/pretrained:/workspace/ByteTrack/pretrained \
-v $PWD/datasets:/workspace/ByteTrack/datasets \
-v $PWD/YOLOX_outputs:/workspace/ByteTrack/YOLOX_outputs \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
bytetrack:latest

Data preparation

Download MOT17, MOT20, CrowdHuman, Cityperson, ETHZ and put them under <ByteTrack_HOME>/datasets in the following structure:

datasets
   |——————mot
   |        └——————train
   |        └——————test
   └——————crowdhuman
   |         └——————Crowdhuman_train
   |         └——————Crowdhuman_val
   |         └——————annotation_train.odgt
   |         └——————annotation_val.odgt
   └——————MOT20
   |        └——————train
   |        └——————test
   └——————Cityscapes
   |        └——————images
   |        └——————labels_with_ids
   └——————ETHZ
            └——————eth01
            └——————...
            └——————eth07

Then, you need to turn the datasets to COCO format and mix different training data:

cd <ByteTrack_HOME>
python3 tools/convert_mot17_to_coco.py
python3 tools/convert_mot20_to_coco.py
python3 tools/convert_crowdhuman_to_coco.py
python3 tools/convert_cityperson_to_coco.py
python3 tools/convert_ethz_to_coco.py

Before mixing different datasets, you need to follow the operations in mix_xxx.py to create a data folder and link. Finally, you can mix the training data:

cd <ByteTrack_HOME>
python3 tools/mix_data_ablation.py
python3 tools/mix_data_test_mot17.py
python3 tools/mix_data_test_mot20.py

Model zoo

Ablation model

Train on CrowdHuman and MOT17 half train, evaluate on MOT17 half val

| Model | MOTA | IDF1 | IDs | FPS |
| --- | --- | --- | --- | --- |
| ByteTrack_ablation [google], [baidu(code:eeo8)] | 76.6 | 79.3 | 159 | 29.6 |

MOT17 test model

Train on CrowdHuman, MOT17, Cityperson and ETHZ, evaluate on MOT17 train.

  • Standard models

| Model | MOTA | IDF1 | IDs | FPS |
| --- | --- | --- | --- | --- |
| bytetrack_x_mot17 [google], [baidu(code:ic0i)] | 90.0 | 83.3 | 422 | 29.6 |
| bytetrack_l_mot17 [google], [baidu(code:1cml)] | 88.7 | 80.7 | 460 | 43.7 |
| bytetrack_m_mot17 [google], [baidu(code:u3m4)] | 87.0 | 80.1 | 477 | 54.1 |
| bytetrack_s_mot17 [google], [baidu(code:qflm)] | 79.2 | 74.3 | 533 | 64.5 |

  • Light models

| Model | MOTA | IDF1 | IDs | Params(M) | FLOPs(G) |
| --- | --- | --- | --- | --- | --- |
| bytetrack_nano_mot17 [google], [baidu(code:1ub8)] | 69.0 | 66.3 | 531 | 0.90 | 3.99 |
| bytetrack_tiny_mot17 [google], [baidu(code:cr8i)] | 77.1 | 71.5 | 519 | 5.03 | 24.45 |

MOT20 test model

Train on CrowdHuman and MOT20, evaluate on MOT20 train.

| Model | MOTA | IDF1 | IDs | FPS |
| --- | --- | --- | --- | --- |
| bytetrack_x_mot20 [google], [baidu(code:3apd)] | 93.4 | 89.3 | 1057 | 17.5 |

Training

The COCO pretrained YOLOX model can be downloaded from their model zoo. After downloading the pretrained models, you can put them under <ByteTrack_HOME>/pretrained.

  • Train ablation model (MOT17 half train and CrowdHuman)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT17 test model (MOT17 train, CrowdHuman, Cityperson and ETHZ)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_det.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT20 test model (MOT20 train, CrowdHuman)

For MOT20, you need to clip the bounding boxes inside the image.

Add the clip operation at lines 134-135 in data_augment.py, lines 122-125 and 217-225 in mosaicdetection.py, and lines 115-118 in boxes.py.
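
The clip itself is a small operation; here is a sketch of what the added code does (not the literal diff at the lines above):

import numpy as np

def clip_boxes(boxes, img_h, img_w):
    """Clip (x1, y1, x2, y2) boxes to the image frame; MOT20 annotations
    can extend outside the image, which breaks training otherwise."""
    boxes = np.asarray(boxes, dtype=np.float32).copy()
    boxes[:, 0::2] = boxes[:, 0::2].clip(0, img_w)  # x1, x2
    boxes[:, 1::2] = boxes[:, 1::2].clip(0, img_h)  # y1, y2
    return boxes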

cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train custom dataset

First, you need to prepare your dataset in COCO format. You can refer to MOT-to-COCO or CrowdHuman-to-COCO. Then, you need to create an Exp file for your dataset; you can refer to the CrowdHuman training Exp file. Don't forget to modify get_data_loader() and get_eval_loader() in your Exp file. Finally, you can train ByteTrack on your dataset by running:

cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/your_exp_file.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth

Tracking

  • Evaluation on MOT17 half val

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse

You can get 76.6 MOTA using our pretrained model.

Run other trackers:

python3 tools/track_sort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_deepsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_motdt.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
  • Test on MOT17

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/interpolation.py

Submit the txt files to the MOTChallenge website and you can get 79+ MOTA (for 80+ MOTA, you need to carefully tune the test image size and the high-score detection threshold for each sequence).

  • Test on MOT20

We use an input size of 1600 x 896 for MOT20-04 and MOT20-07, and 1920 x 736 for MOT20-06 and MOT20-08. You can edit it in yolox_x_mix_mot20_ch.py.

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -c pretrained/bytetrack_x_mot20.pth.tar -b 1 -d 1 --fp16 --fuse --match_thresh 0.7 --mot20
python3 tools/interpolation.py

Submit the txt files to the MOTChallenge website and you can get 77+ MOTA (for higher MOTA, you need to carefully tune the test image size and the high-score detection threshold for each sequence).

Applying BYTE to other trackers

See tutorials.

Combining BYTE with other detectors

Suppose you have already obtained the detection results 'dets' (x1, y1, x2, y2, score) from another detector. You can simply pass them to BYTETracker (you first need to modify some post-processing code in byte_tracker.py according to the format of your detection results):

from yolox.tracker.byte_tracker import BYTETracker

tracker = BYTETracker(args)
for image in images:
    dets = detector(image)
    online_targets = tracker.update(dets, info_imgs, img_size)

You can get the tracking results of each frame from 'online_targets'. You can refer to mot_evaluators.py to see how detection results are passed to BYTETracker.
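
Continuing the snippet above (same variable names), each element of online_targets is an STrack whose tlwh, track_id and score attributes can be written out, e.g. in MOTChallenge txt format:

results = []
for frame_id, image in enumerate(images, 1):
    dets = detector(image)
    online_targets = tracker.update(dets, info_imgs, img_size)
    for t in online_targets:
        x, y, w, h = t.tlwh
        # MOTChallenge txt layout: frame, id, x, y, w, h, score, -1, -1, -1
        results.append(f"{frame_id},{t.track_id},{x:.1f},{y:.1f},{w:.1f},{h:.1f},{t.score:.2f},-1,-1,-1")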

Demo

cd <ByteTrack_HOME>
python3 tools/demo_track.py video -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --fp16 --fuse --save_result

Deploy

  1. ONNX export and ONNXRuntime
  2. TensorRT in Python
  3. TensorRT in C++
  4. ncnn in C++

Citation

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}

@article{zhang2021fairmot,
  title={Fairmot: On the fairness of detection and re-identification in multiple object tracking},
  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
  journal={International Journal of Computer Vision},
  volume={129},
  pages={3069--3087},
  year={2021},
  publisher={Springer}
}

Acknowledgement

A large part of the code is borrowed from YOLOX, FairMOT, TransTrack and JDE-Cpp. Many thanks for their wonderful works.


bytetrack_reid's Issues

Training Error

@HanGuangXin hi, thanks for sharing your codebase. I am having issues while training on MOT17 and MOT20; below is the error:

Traceback (most recent call last):
  File "tools/train.py", line 122, in <module>
    args=(exp, args),
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/launch.py", line 90, in launch
    main_func(*args)
  File "tools/train.py", line 100, in main
    trainer.train()
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/trainer.py", line 77, in train
    self.train_in_epoch()
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/trainer.py", line 86, in train_in_epoch
    self.train_in_iter()
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/trainer.py", line 92, in train_in_iter
    self.train_one_iter()
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/trainer.py", line 112, in train_one_iter
    self.scaler.scale(loss).backward()
  File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/anaconda3/envs/p37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Thanks in advance.

MOTA is lower

In the readme I saw that MOTA can reach 84%, but I only get 72 MOTA after training on the train half?

Data Association

@HanGuangXin
Hello, sorry to bother you again!
During data association, why are the high-score detection boxes that fail the first (motion + ReID) matching, and the tracks that fail to match any high-score detection, matched again using IoU (motion) only? What is the benefit of doing this?
Looking forward to your reply.

about reproducing

Can you provide a pre-trained model?
For example, yolox_s_mot17_half/latest_ckpt.pth.tar.
Thank you very much!

Where is reid association?

Hi!
Where is the ReID association in BYTE? It seems that the association strategy in yolox.tracker.byte_tracker.py is the same as in ByteTrack.

How to track with FairMOT

I tried to use FairMOT in fairmot_tracker.py for tracking, but I got the error message ValueError: too many values to unpack (expected 2) when calling the update function, even though I carefully checked that the input met the requirements.

    online_targets, lost_t = tracker.update(outputs[0], info_imgs_, self.img_size, id_feature)   # TODO: ReID. add 'id_feature'

ValueError: too many values to unpack (expected 2)

About the evaluation results

The evaluation only compares ByteTrack with ReID against FairMOT. How much does it differ from ByteTrack without ReID?

loss_id is NaN

loss_id and total_loss sometimes become NaN. I'm not sure if this is normal.
[image]

track_id bug with fp16

Hi. When targets is converted to FP16, the track_id loses precision, resulting in wrong labels for ReID.
How can the track_id annotations be separated from the targets variable, keeping targets as torch.float16 as in the current code, but track_id as torch.float32? I tried to modify it, but it didn't work.
Looking forward to your update on this bug. Thank you.
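
A possible workaround along the lines this issue asks for (a sketch only; the column layout below is an assumption for illustration, not the repo's actual format): slice the id column out of targets before the FP16 cast and keep it in an integer tensor.

import torch

def split_ids(targets):
    """Keep track ids out of the FP16 cast.

    Assumes (hypothetically) targets[..., :5] holds cls, cx, cy, w, h and
    targets[..., 5] holds track_id -- check the real layout first.
    """
    track_ids = targets[..., 5].long()   # exact integer labels for the ReID classifier
    boxes = targets[..., :5].half()      # only the geometry goes to float16
    return boxes, track_ids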

The tracked metric is too low

First of all, thank you for adding ReID to ByteTrack and making it open source. However, I have a few points of confusion when reproducing your experimental results.
For now, the results are trained on half of MOT17 and tested on the other half of MOT17.
I run the following command:
python3 tools/train.py -f exps/example/mot/yolox_x_mot17_half.py -d 1 -b 2 --fp16 -o -c pretrained/yolox_x.pth.tar
This is the result after 80 epochs of training:
ByteTrack-ReID: [image]
Here I also trained the original ByteTrack on mot17_train_half and found that the result of ByteTrack_ReID is not as high as the original. May I ask why that is?
ByteTrack: [image]
Then I used these two weight files to evaluate on MOT17_val_half and got the following results:
ByteTrack-ReID: [image]
ByteTrack: [image]
I did not get the same experimental results as you; may I ask where I went wrong?
Looking forward to your reply.

Mixed data training

How can one train on mixed data when some of the datasets do not have track_id?
When I train on a mixed dataset, I get this error: [image]
Then I modified it as follows; can I modify it like this? After that I encountered this problem: [image]
I would like to ask how to train on a mixed dataset. Looking forward to your reply.

Training Issues

@HanGuangXin I was able to train the model on one system. When I moved to another system and tried to set it up from the beginning, I faced issues with data loading: it is not able to read the files from the dataset folder, and if I add them manually, self.nID is 0. Any idea how to solve this? It would also help to make the training steps more elaborate.

Thanks in advance.

MOT metrics

How can I see MOTA, IDF1, and other metric results when tracking on my own dataset? Any guidance would be appreciated.

Training MOT17 and MOT20 together

@HanGuangXin thanks for sharing the codebase. I have a couple of queries:
When I train on a mixed dataset of MOT17 and MOT20, I get an error. Should I use the same script you provided, or do changes need to be made in the code?

Thanks in advance.

About training the ByteTrack_ReID model

Thank you for the awesome contributions. Here are some questions about the repository:

  1. Is it reasonable to add a ReID branch to the YOLOX model? YOLOX has 3 hierarchies with downsample ratios of 8, 16 and 32. From my understanding, the larger the downsample ratio, the more uncertain the ID features we get.
  2. About nID when training on the CrowdHuman dataset: the original FairMOT makes the number of output classes equal to the total number of IDs in the dataset, which is a large number, so the ReID training is hard to control. Furthermore, the performance of matching with ID features is worse than using detection results alone. Can you share some evaluation results?
  3. I am training the model on the MOT20 dataset generated by convert_mot20_to_coco.py. When I started training, I met an error in the loss backward. I finally resolved it by increasing the total id count as below:
     total_ids = max(max_id_each_img) + 1 + 1 # TODO Need Check: ids start with 0
     Though it runs successfully, it is curious why the id count should be increased by 2 instead of 1. The original FairMOT did the same operation.
  4. ID features are not used in demo_track.py. After training the model on the CrowdHuman dataset, I tested a video with demo_track.py and found that the tracker is BYTETracker, which is not related to the ID features. What is the right way to test a custom video with the trained model using ID features? I replaced ByteTrack with Bytetrack_fairmot and modified some code, but the results I got were not as good as I expected.

I will appreciate it if anyone can help me. Thank you in advance.

Training Issues

run python3 tools/train.py -f exps/example/mot/yolox_x_mot17_half.py -d 1 -b 1 --fp16 -o -c pretrained/yolox_x.pth

ERROR | yolox.core.launch:90 - An error has been caught in function 'launch', process 'MainProcess' (16529), thread 'MainThread' (140255758401728):
Traceback (most recent call last):

  File "tools/train.py", line 122, in <module>
    args=(exp, args),
  File "/home/pc116/Documents/gxy/ByteTrack-main/yolox/core/launch.py", line 90, in launch
    main_func(*args)
  File "tools/train.py", line 100, in main
    trainer.train()
  File "/home/pc116/Documents/gxy/ByteTrack-main/yolox/core/trainer.py", line 70, in train
    self.before_train()
  File "/home/pc116/Documents/gxy/ByteTrack-main/yolox/core/trainer.py", line 146, in before_train
    no_aug=self.no_aug,
  File "exps/example/mot/yolox_x_mot17_half.py", line 54, in get_data_loader
    total_ids = dataset.nID  # TODO: total ids for reid classifier
  File "/home/pc116/anaconda3/envs/gxy/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 83, in __getattr__
    raise AttributeError

AttributeError

reproducing with yolox_s_mot17_half.py

@HanGuangXin
Sorry to bother you again!
Hello! I have finished training with yolox_s_mot17_half (self.train_ann = train_half.json; self.val_ann = val_half.json). However, I didn't get the desired result (YOLOX_S with mAP 55.5, MOTA 71.8 and IDF1 73.8).
Below are the results of my training:
ByteTrack-ReID with train.py: [image]
ByteTrack-ReID with track.py: COCO metrics [image], MOT metrics [image]
ByteTrack with train.py: [image]
ByteTrack with track.py: COCO metrics [image], MOT metrics [image]
Why is the tracking performance worse after adding ReID? Where am I going wrong?
Looking forward to your reply.

Training model

I admire your work. Can you provide the complete mixed-training model with ReID?
That is, not the mot17_half training model, but the model trained for the MOT17 test set.
Thank you very much!

Pretrained Model

@HanGuangXin thanks for your work and for open-sourcing it.

  1. Can you please share the pretrained model for person ReID?
  2. Is there an inference pipeline to check the shared pretrained model?
  3. Can we extend this work to vehicle ReID as well? If so, what changes have to be made to the current source code?

Thanks in advance

Got an error when evaluating and testing the model trained with this code

Thanks for this work,
The training was successful; however, I got an error when I tried to test (demo on a video) and evaluate (for performance metrics like MOTA, IDs, etc.).

  1. The following is the error when I run the demo (on a video):
[warning] No nID got!!!
2022-01-24 10:28:34.520 | INFO     | __main__:main:326 - Model Summary: Params: 104.65M, Gflops: 880.83
2022-01-24 10:28:34.524 | INFO     | __main__:main:334 - loading checkpoint
Traceback (most recent call last):
  File "tools/demo_track.py", line 372, in <module>
    main(exp, args)
  File "tools/demo_track.py", line 337, in main
    model.load_state_dict(ckpt["model"])
  File "C:\Users\admin\anaconda3\envs\bytetrack_reid\lib\site-packages\torch\nn\modules\module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for YOLOX:
        size mismatch for head.reid_classifier.weight: copying a param with shape torch.Size([40, 128]) from checkpoint, the shape in current model is torch.Size([2, 128]).
        size mismatch for head.reid_classifier.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([2]).

Is any modification needed in the code? In [40, 128], 40 is the number of IDs trained in my model.

  2. The following is the error when I run the evaluation:

  File "c:\users\admin\desktop\bytetrack_Reid\yolox\core\launch.py", line 90, in launch
    main_func(*args)
  File "tools\track.py", line 220, in main
    *_, summary = evaluator.evaluate(
  File "c:\users\admin\desktop\bytetrack_Reid\yolox\evaluators\mot_evaluator.py", line 137, in evaluate
    frame_id = info_imgs[2].item()

AttributeError: 'list' object has no attribute 'item'

(Here info_imgs is [tensor([1080]), tensor([1545]), ['MOT20-04/img1/000001.jpg']].)

Thank you.

What metrics are the ones in the second list

After training the yolox-s network on half of MOT17, I run the local evaluation with:
python3 tools/track.py -f exps/example/mot/yolox_s_mot17_half.py -c YOLOX_outputs/id_loss_1.0/best_ckpt.pth.tar -b 1 -d 2 --fp16 --fuse

finally obtaining the following results: [image]

However, I do not understand the second set of values, especially the columns IDt, IDa and IDm. I also tried to check the motmetrics library, but did not find anything helpful.

Code understanding

@HanGuangXin thanks for sharing the code base. I have the following queries:

  1. I was able to train only on MOT17 data; can we scale the training to other datasets, similar to ByteTrack?
  2. From the code structure, I see that you have added an additional ReID head to the YOLOX architecture, and the embedding from this head is passed to the tracker. Am I right in this understanding? If so, by training with larger datasets, can we use this model for multi-camera person re-identification?
  3. In your comments you have mentioned "#TODO Reid"; does that mean you have already implemented it and are using these heads in inference?

Thanks in advance.

Runtime error

Hello, a question: I ran python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 3 -b 4 --fp16 -o -c pretrained/yolox_x.pth, the same command as for ByteTrack. It runs fine in ByteTrack but reports an error in ByteTrack_ReID, even after re-running setup. Can this version of the code be run correctly?
[image]
