Xiaohang Zhan, Xingang Pan, Ziwei Liu, Dahua Lin, Chen Change Loy, "Self-Supervised Learning via Conditional Motion Propagation", in CVPR 2019 [Project Page]
For further information, please contact Xiaohang Zhan.
## Demos (watch the full demos on YouTube)

- Conditional motion propagation (motion prediction from guidance)
- Guided video generation (draw arrows to animate a static image)
- Semi-automatic annotation (first row: interface, auto zoom-in, mask; second row: optical flows)
## Data

- YFCC frames (45G)
- YFCC optical flows (LiteFlowNet) (29G)
- YFCC lists (251M)
## Model Collection

- Pre-trained models by CMP for semantic segmentation, instance segmentation, and human parsing can be downloaded here.
- Models for the demos (conditional motion propagation, guided video generation, and semi-automatic annotation) can be downloaded here.
## Requirements

- python>=3.6
- pytorch>=0.4.0
- others: `pip install -r requirements.txt`
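Before installing the remaining dependencies, you can sanity-check the interpreter and PyTorch build against the stated requirements. This is a small optional sketch, not part of the repo:

```python
import sys

def version_tuple(v):
    # Parse "1.13.1" or "0.4.0+cu92" into a comparable (major, minor) tuple.
    return tuple(int(p) for p in v.split("+")[0].split(".")[:2])

# python>=3.6
assert sys.version_info >= (3, 6), "python>=3.6 is required"

# pytorch>=0.4.0 (skip the check gracefully if torch is absent)
try:
    import torch
    assert version_tuple(torch.__version__) >= (0, 4), "pytorch>=0.4.0 is required"
except ImportError:
    print("PyTorch not found; install it via requirements.txt")
```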
## Usage

- Clone the repo.

  ```sh
  git clone [email protected]:XiaohangZhan/conditional-motion-propagation.git
  cd conditional-motion-propagation
  ```
- Prepare data (YFCC as an example).

  ```sh
  mkdir data
  mkdir data/yfcc
  cd data/yfcc
  # download YFCC frames, optical flows and lists to data/yfcc
  tar -xf UnsupVideo_Frames_v1.tar.gz
  tar -xf flow_origin.tar.gz
  tar -xf lists.tar.gz
  ```

  The `data` folder then looks like:

  ```
  data
  └── yfcc
      ├── UnsupVideo_Frames
      ├── flow_origin
      └── lists
          ├── train.txt
          └── val.txt
  ```
- Train CMP for representation learning.

  - If your server supports multi-node training:

    ```sh
    sh experiments/rep_learning/alexnet_yfcc_16gpu_70k/train.sh # 16-GPU distributed training
    # extract the image-encoder weights to
    # experiments/rep_learning/alexnet_yfcc_16gpu_70k/checkpoints/convert_iter_70000.pth.tar
    python tools/weight_process.py --config experiments/rep_learning/alexnet_yfcc_16gpu_70k/config.yaml --iter 70000
    ```

  - If your server does not support multi-node training:

    ```sh
    sh experiments/rep_learning/alexnet_yfcc_8gpu_140k/train.sh # 8-GPU distributed training
    python tools/weight_process.py --config experiments/rep_learning/alexnet_yfcc_8gpu_140k/config.yaml --iter 140000 # extract weights of the image encoder
    ```
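Conceptually, extracting the image encoder amounts to filtering the full CMP checkpoint's state dict down to the encoder's parameters. The sketch below illustrates that idea only; the key prefix `module.image_encoder.` is an assumption for illustration, not verified against `tools/weight_process.py` — inspect your checkpoint's actual keys before relying on it:

```python
def filter_encoder_weights(state_dict, prefix="module.image_encoder."):
    """Keep only entries under `prefix`, stripping the prefix from the keys.

    NOTE: the default `prefix` is a hypothetical key layout used for
    illustration; check the real checkpoint keys in this repo.
    """
    return {
        k[len(prefix):]: v
        for k, v in state_dict.items()
        if k.startswith(prefix)
    }
```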
- Run the demos.

  - Download the model and move it to `experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/`.
  - Launch jupyter notebook and run `demos/cmp.ipynb` for conditional motion propagation, or `demos/demo_annot.ipynb` for semi-automatic annotation.
  - Train the model yourself (optional):

    ```sh
    # data not ready
    sh experiments/semiauto_annot/resnet50_vip+mpii_liteflow/train.sh # 8-GPU distributed training
    ```
## Bibtex

```
@inproceedings{zhan2019self,
  author = {Zhan, Xiaohang and Pan, Xingang and Liu, Ziwei and Lin, Dahua and Loy, Chen Change},
  title = {Self-Supervised Learning via Conditional Motion Propagation},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}
```