This is the official repository of the VINet module from "A Multi-user Oriented Live Free-viewpoint Video Streaming System Based On View Interpolation" (ICME 2022)
git clone https://github.com/Eric-chuan/VINet
cd VINet
conda env create -f environments.yml
conda activate VINet
- organize your multi-view image folder <img_dir> as follows
IMG_DIR/
├──── 00.png
├──── 01.png
├──── ...
└──── 11.png
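If your captures come from cameras with arbitrary filenames, a small helper can copy them into this zero-padded layout. This is a sketch, not part of the repository; `prepare_img_dir` is a hypothetical name, and it assumes that sorting the source filenames reproduces the camera order:

```python
import shutil
from pathlib import Path

def prepare_img_dir(src_dir: str, img_dir: str) -> list[str]:
    """Copy source .png captures into IMG_DIR as 00.png, 01.png, ...

    Assumes the sorted filename order matches the camera order.
    """
    out = Path(img_dir)
    out.mkdir(parents=True, exist_ok=True)
    names = []
    for idx, src in enumerate(sorted(Path(src_dir).glob("*.png"))):
        dst = out / f"{idx:02d}.png"
        shutil.copy(src, dst)
        names.append(dst.name)
    return names
```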
- run the script
python inference.py --img_dir image_folder --out_dir output_folder --exp=1 --gpu_idx=0
- output
OUT_DIR/
├──── 00.png
├──── inter_view1.png
├──── 01.png
├──── ...
├──── 10.png
├──── inter_view10.png
└──── 11.png
- you can increase `--exp` to obtain denser intermediate viewpoints
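In frame-interpolation networks of this family, `--exp` typically controls recursive bisection, inserting 2^exp − 1 views between each adjacent camera pair. That is an assumption about VINet's flag (check `inference.py` for the exact semantics), but it gives a quick way to estimate the output size:

```python
def total_views(num_cameras: int, exp: int) -> int:
    """Estimate total output views under the recursive-bisection
    assumption: each adjacent pair gains 2**exp - 1 interpolated views."""
    per_gap = 2 ** exp - 1
    return num_cameras + (num_cameras - 1) * per_gap

# e.g. 12 cameras: exp=1 -> 23 views, exp=2 -> 45 views
```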
- prepare your own multi-view synchronized video
- organize your own multi-view synchronized video as follows
YOUR_DIR/
└──── raw_videos/
      ├──── 00.mp4
      ├──── ...
      └──── 11.mp4
- convert your videos to frames
python extract_videos.py
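`extract_videos.py` presumably walks `raw_videos/` and writes a frame folder per camera. A minimal sketch of that step, assuming `ffmpeg` is on the PATH (the repository script may use different naming and flags):

```python
import subprocess
from pathlib import Path

def extract_cmd(video: str, frame_dir: str) -> list[str]:
    """Build an ffmpeg command writing frames as 00001.png, 00002.png, ..."""
    Path(frame_dir).mkdir(parents=True, exist_ok=True)
    return ["ffmpeg", "-i", video, f"{frame_dir}/%05d.png"]

def extract_all(raw_dir: str, out_root: str) -> None:
    """Extract every camera's video into its own frame folder."""
    for video in sorted(Path(raw_dir).glob("*.mp4")):
        subprocess.run(extract_cmd(str(video), f"{out_root}/{video.stem}"),
                       check=True)
```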
- organize your frames into triplets and compress them into npz format
- this may look cumbersome, but you can perform the above steps with the provided script
python process-vimeo90k.py
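Each triplet stores two source views plus the ground-truth middle view. A sketch of the packing step with NumPy; the array key names (`im1`, `gt`, `im2`) are assumptions, so match them to whatever the training dataloader actually reads:

```python
import numpy as np

def save_triplet(path: str, left: np.ndarray, middle: np.ndarray,
                 right: np.ndarray) -> None:
    """Pack one (left view, ground-truth middle, right view) triplet
    into a compressed .npz file. Key names are assumed, not from the repo."""
    np.savez_compressed(path, im1=left, gt=middle, im2=right)

def load_triplet(path: str):
    """Read a triplet back in the same order it was saved."""
    data = np.load(path)
    return data["im1"], data["gt"], data["im2"]
```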
- Run train.py with the following options defined in parse_args:
python -m torch.distributed.launch --nproc_per_node=2 train.py --world_size=2 --epoch=100 --batch_size=32
@article{hu2021multi,
title={A Multi-user Oriented Live Free-viewpoint Video Streaming System Based On View Interpolation},
author={Hu, Jingchuan and Guo, Shuai and Dong, Yu and Zhou, Kai and Xu, Jun and Song, Li},
journal={arXiv preprint arXiv:2112.10603},
year={2021}
}