This is the official implementation of *Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation*.
- AlphaRefine is accepted by CVPR2021.
- The code for CVPR2021 is updated. The old version is still available via `git checkout vot2020`.
- 🏆 Alpha-Refine wins the VOT2020 Real-Time Challenge with EAO_Multistart 0.499!
- The VOT2020 winner presentation slides have been uploaded.
```
git clone https://github.com/MasterBin-IIAU/AlphaRefine.git
cd AlphaRefine
```
Run the installation script to install all the dependencies. You need to provide the `${conda_install_path}` (e.g. `~/anaconda3`) and the name `${env_name}` for the created conda environment (e.g. `alpha`).
```
# install dependencies
bash install.sh ${conda_install_path} ${env_name}
conda activate alpha
python setup.py develop
```
We provide the models of AlphaRefine here. The AUC and latency are measured with SiamRPN++ as the base tracker on the LaSOT dataset, using an RTX 2080Ti GPU. We recommend downloading the models into `ltr/checkpoints/ltr/SEx_beta`.
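A minimal sketch for preparing that directory (the path comes from the recommendation above; the model weights themselves must be fetched via the links in the table below):

```shell
# Create the recommended checkpoint directory; place the downloaded
# AR model files (e.g. AR34c+m, AR18c+m) inside it.
mkdir -p ltr/checkpoints/ltr/SEx_beta
```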
| Tracker | Backbone | Latency | AUC (%) | Model |
|---|---|---|---|---|
| AR34c+m | ResNet34 | 5.1 ms | 55.9 | model |
| AR18c+m | ResNet18 | 4.2 ms | 55.0 | model |
When combined with more powerful base trackers, AlphaRefine leads to very competitive tracking systems (e.g. ARDiMP). The following are some of the best-performing trackers on LaSOT. More results are presented in Performance.
| Tracker | AUC (%) | Speed (fps) | Paper/Code |
|---|---|---|---|
| ARDiMP (ours) | 65.4 | 32 (RTX 2080Ti) | Paper/Result |
| Siam R-CNN (CVPR20) | 64.8 | 5 (Tesla V100) | Paper/Code |
| DimpSuper | 63.1 | 39 (RTX 2080Ti) | Paper/Code |
| ARDiMP50 (ours) | 60.2 | 46 (RTX 2080Ti) | Paper/Result |
| PrDiMP50 (CVPR20) | 59.8 | 30 (unknown GPU) | Paper/Code |
| LTMU (CVPR20) | 57.2 | 13 (RTX 2080Ti) | Paper/Code |
In this project, we adopt DiMP50, DiMPsuper, ATOM, ECO, RTMDNet, and SiamRPN++ as our base trackers. DiMP50, DiMPsuper, ATOM, and ECO are trackers from PyTracking. The base tracker models trained with PyTracking can be downloaded from the model zoo; download them into `pytracking/networks`. Alternatively, you can run the following script to download the models.
```
echo "****************** Downloading networks ******************"
mkdir -p pytracking/networks

echo "****************** DiMP Network ******************"
gdown https://drive.google.com/uc\?id\=1qgachgqks2UGjKx-GdO1qylBDdB1f9KN -O pytracking/networks/dimp50.pth
gdown https://drive.google.com/uc\?id\=1MAjrRJDCbL0DSjUKFyDkUuYS1-cYBNjk -O pytracking/networks/dimp18.pth
gdown "https://drive.google.com/open?id=1qDptswis2FxihLRYLVRGDvx6aUoAVVLv" -O pytracking/networks/super_dimp.pth

echo "****************** ATOM Network ******************"
gdown https://drive.google.com/uc\?id\=1VNyr-Ds0khjM0zaq6lU-xfY74-iWxBvU -O pytracking/networks/atom_default.pth

echo "****************** ECO Network ******************"
gdown https://drive.google.com/uc\?id\=1aWC4waLv_te-BULoy0k-n_zS-ONms21S -O pytracking/networks/resnet18_vggmconv1.pth
```
Please refer to `external/pysot/README.md` for setting up SiamRPN++ and `external/RT_MDNet/README.md` for setting up RTMDNet.
- We provide evaluation recipes for LaSOT | GOT-10K | TrackingNet | VOT2020. You can follow these recipes to run the evaluation scripts.
- For some of the testing scripts, the path to the test sets should be specified in `pytracking/evaluation/local.py`. If `pytracking/evaluation/local.py` does not exist, please run
  ```
  python -c "from pytracking.evaluation.environment import create_default_local_file; create_default_local_file()"
  ```
  An example, `pytracking/evaluation/local.py.example`, is provided.
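To make this concrete, here is a hypothetical sketch of what `pytracking/evaluation/local.py` might look like; the authoritative attribute names are in `pytracking/evaluation/local.py.example`, and the paths below are placeholders:

```python
# Hypothetical sketch of pytracking/evaluation/local.py; attribute names and
# paths are illustrative -- consult local.py.example for the real template.
class EnvironmentSettings:
    def __init__(self):
        self.network_path = "pytracking/networks/"  # base-tracker weights
        self.results_path = "pytracking/tracking_results/"
        self.lasot_path = "/data/LaSOT"             # assumed dataset locations
        self.got10k_path = "/data/GOT-10K"
        self.trackingnet_path = "/data/TrackingNet"


def local_env_settings():
    return EnvironmentSettings()
```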
The training code is based on PyTracking, so the training procedure is similar.
- Download the datasets: GOT-10K | LaSOT | MS-COCO | ILSVRC-VID | ImageNet-DET | YouTube-VOS | TrackingNet, Saliency. For more details, you can refer to `ltr/README.md`.
- The path to the training sets should be specified in `ltr/admin/local.py`. If `ltr/admin/local.py` does not exist, please run
  ```
  python -c "from ltr.admin.environment import create_default_local_file; create_default_local_file()"
  ```
  An example, `ltr/admin/local.py.example`, is also provided.
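Similarly, a hypothetical sketch of the training-side `ltr/admin/local.py`; the real attribute names are defined in `ltr/admin/local.py.example`, and the paths are placeholders:

```python
# Hypothetical sketch of ltr/admin/local.py; attribute names and paths are
# illustrative -- consult local.py.example for the real template.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = "./checkpoints"   # checkpoints are written here
        self.lasot_dir = "/data/LaSOT"         # assumed dataset locations
        self.got10k_dir = "/data/GOT-10K"
        self.coco_dir = "/data/COCO"
        self.trackingnet_dir = "/data/TrackingNet"
```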
The training recipes are placed in `ltr/train_settings` (e.g. `ltr/train_settings/SEx_beta/SEcm_r34.py`), where you can configure the training parameters and dataloaders. For a recipe named `ltr/train_settings/$sub1/$sub2.py`, run the following command to launch the training procedure.
```
python -m torch.distributed.launch --nproc_per_node=8 \
    run_training_multigpu.py $sub1 $sub2
```
The checkpoints will be saved in `AlphaRefine/checkpoints/ltr/$sub1/$sub2/SEcmnet_ep00*.pth.tar`.
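Since several epoch checkpoints accumulate under that pattern, a small helper can resolve the newest one, e.g. for resuming or evaluation. The `latest_checkpoint` function below is our own illustration, not part of the repo; it only relies on the save pattern stated above:

```python
import glob
import os


def latest_checkpoint(root, sub1="SEx_beta", sub2="SEcm_r34"):
    """Return the checkpoint with the highest epoch number, or None.

    Follows the save pattern from the README:
    checkpoints/ltr/$sub1/$sub2/SEcmnet_ep00*.pth.tar
    Zero-padded epoch numbers make a lexicographic sort match a numeric sort.
    """
    pattern = os.path.join(root, "checkpoints", "ltr", sub1, sub2,
                           "SEcmnet_ep*.pth.tar")
    paths = sorted(glob.glob(pattern))
    return paths[-1] if paths else None
```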
When combined with more powerful base trackers, AlphaRefine leads to very competitive tracking systems (e.g. ARDiMP). For more performance reports, please refer to our paper.
- LaSOT

  | Tracker | Success Score | Speed (fps) | Paper/Code |
  |---|---|---|---|
  | ARDiMP (ours) | 0.654 | 32 (RTX 2080Ti) | Paper/Result |
  | Siam R-CNN (CVPR20) | 0.648 | 5 (Tesla V100) | Paper/Code |
  | DimpSuper | 0.631 | 39 (RTX 2080Ti) | Paper/Code |
  | ARDiMP50 (ours) | 0.602 | 46 (RTX 2080Ti) | Paper/Result |
  | PrDiMP50 (CVPR20) | 0.598 | 30 (unknown GPU) | Paper/Code |
  | LTMU (CVPR20) | 0.572 | 13 (RTX 2080Ti) | Paper/Code |
  | DiMP50 (ICCV19) | 0.568 | 59 (RTX 2080Ti) | Paper/Code |
  | Ocean (ECCV20) | 0.560 | 25 (Tesla V100) | Paper/Code |
  | ARSiamRPN (ours) | 0.560 | 50 (RTX 2080Ti) | Paper/Result |
  | SiamAttn (CVPR20) | 0.560 | 45 (RTX 2080Ti) | Paper/Code |
  | SiamFC++GoogLeNet (AAAI20) | 0.544 | 90 (RTX 2080Ti) | Paper/Code |
  | MAML-FCOS (CVPR20) | 0.523 | 42 (NVIDIA P100) | Paper/Code |
  | GlobalTrack (AAAI20) | 0.521 | 6 (GTX TitanX) | Paper/Code |
  | ATOM (CVPR19) | 0.515 | 30 (GTX 1080) | Paper/Code |
- This repo is based on PyTracking, which is an excellent work.
- Thanks to pysot and RTMDNet, from which we borrow the code for our base trackers.