💪 3DCrowdNet achieves state-of-the-art accuracy on 3DPW (3D Poses in the Wild)!
💪 We improved PA-MPJPE to 51.1mm and MPVPE to 97.6mm using a ResNet-50 backbone!
This repo is the official PyTorch implementation of Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes (CVPR 2022).
We recommend using an Anaconda virtual environment. Install PyTorch >= 1.6.0 and Python >= 3.7.3. Then run `sh requirements.sh`. You will need to slightly modify the torchgeometry kernel code, following here.
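Before running anything, it can save time to sanity-check the version requirements above. A minimal stdlib sketch (not part of the repo; `example_torch_version` is a placeholder, in practice read `torch.__version__`):

```python
import sys

def meets_minimum(version: str, minimum: tuple) -> bool:
    """Check a dotted version string like '1.6.0' against a minimum tuple,
    ignoring local build suffixes such as '+cu113'."""
    parts = tuple(int(p) for p in version.split("+")[0].split(".")[:len(minimum)])
    return parts >= minimum

# Python >= 3.7.3 as recommended above
print("Python OK:", sys.version_info[:3] >= (3, 7, 3))

# PyTorch >= 1.6.0; in a real environment use torch.__version__
example_torch_version = "1.6.0"  # placeholder, not queried from torch
print("PyTorch OK:", meets_minimum(example_torch_version, (1, 6, 0)))
```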
- Download the pre-trained 3DCrowdNet checkpoint from here and place it under `${ROOT}/demo/`.
- Download demo inputs from here and place them under `${ROOT}/demo/input` (just unzip demo_input.zip).
- Make a `${ROOT}/demo/output` directory.
- Get the SMPL layers and VPoser according to this.
- Download `J_regressor_extra.npy` from here and place it under `${ROOT}/data/`.
- Run `python demo.py --gpu 0`. You can change the input image with `--img_idx {img number}`.
- A mesh .obj file, a rendered mesh image, and the input 2D pose are saved under `${ROOT}/demo/`.
- The demo images and 2D poses are from CrowdPose and HigherHRNet, respectively.
- The depth order is not estimated; you can change it manually.
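The directory preparation above can be sketched as follows (the checkpoint, demo inputs, and `J_regressor_extra.npy` must still be downloaded manually via the links above; the `--img_idx` value is illustrative):

```shell
# Sketch of the demo setup described above; ${ROOT} is your checkout root.
ROOT=.  # adjust to your clone location
mkdir -p "$ROOT/demo/input" "$ROOT/demo/output" "$ROOT/data"

# After placing the checkpoint, demo inputs, and J_regressor_extra.npy:
# python demo.py --gpu 0               # default input image
# python demo.py --gpu 0 --img_idx 3   # pick a different demo image
```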
✔️ Refer to the paper's main manuscript and supplementary material for diverse qualitative results!
Refer to here.
First, finish the directory setup. Then refer to here to train and test 3DCrowdNet.
@InProceedings{choi2022learning,
author = {Choi, Hongsuk and Moon, Gyeongsik and Park, JoonKyu and Lee, Kyoung Mu},
title = {Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}