
MonoHuman: Animatable Human Neural Field from Monocular Video (CVPR 2023)

Zhengming Yu, Wei Cheng, Xian Liu, Wayne Wu, and Kwan-Yee Lin
Demo Video | Project Page | Paper

This is the official PyTorch implementation of MonoHuman.

Animating virtual avatars with free-view control is crucial for various applications like virtual reality and digital entertainment. Previous studies attempt to utilize the representation power of neural radiance field (NeRF) to reconstruct the human body from monocular videos. Recent works propose to graft a deformation network into the NeRF to further model the dynamics of the human neural field for animating vivid human motions. However, such pipelines either rely on pose-dependent representations or fall short of motion coherency due to frame-independent optimization, making it difficult to generalize to unseen pose sequences realistically. In this paper, we propose a novel framework MonoHuman, which robustly renders view-consistent and high-fidelity avatars under arbitrary novel poses. Our key insight is to model the deformation field with bi-directional constraints and explicitly leverage the off-the-peg keyframe information to reason the feature correlations for coherent results. In particular, we first propose a Shared Bidirectional Deformation module, which creates a pose-independent generalizable deformation field by disentangling backward and forward deformation correspondences into shared skeletal motion weight and separate non-rigid motions. Then, we devise a Forward Correspondence Search module, which queries the correspondence feature of keyframes to guide the rendering network. The rendered results are thus multi-view consistent with high fidelity, even under challenging novel pose settings. Extensive experiments demonstrate the superiority of proposed MonoHuman over state-of-the-art methods.

Installation

We recommend using Anaconda.

Create and activate a virtual environment.

conda env create -f environment.yaml
conda activate Monohuman

Download SMPL model

Download the gender-neutral SMPL model from here and unpack mpips_smplify_public_v2.zip.

Copy the SMPL model:

cp /path/to/smpl/smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl third_parties/smpl/models

Follow this page to remove Chumpy objects from the SMPL model.
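
If you prefer a quick script over that page, the clean-up amounts to replacing every Chumpy array in the SMPL pickle with a plain NumPy array. A minimal sketch (the helper below is not part of this repo, and it assumes chumpy is importable so the original pickle can be loaded):

    # remove_chumpy.py -- hypothetical helper, not part of this repo.
    # Loads the SMPL pickle, converts Chumpy arrays to plain NumPy arrays,
    # and writes the cleaned model back in place.
    import pickle

    import numpy as np

    MODEL = 'third_parties/smpl/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl'

    with open(MODEL, 'rb') as f:
        model = pickle.load(f, encoding='latin1')  # SMPL pickles date from Python 2

    for key, val in model.items():
        if 'chumpy' in str(type(val)):
            model[key] = np.array(val)  # drop the Chumpy wrapper, keep the values

    with open(MODEL, 'wb') as f:
        pickle.dump(model, f, protocol=2)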

Run on ZJU-Mocap Dataset

Prepare a dataset

  1. Download the ZJU-Mocap dataset from here.

  2. Modify the subject's yaml file at tools/prepare_zju_mocap/xxx.yaml as below (replace 'xxx' with the subject ID):

    dataset:
        zju_mocap_path: /path/to/zju_mocap
        subject: 'xxx'
        sex: 'neutral'

        ...

  3. Run the data preprocessing script.

    cd tools/prepare_zju_mocap
    python prepare_dataset.py --cfg xxx.yaml
    cd ../../

  4. Modify the 'dataset_path' in core/data/dataset_args.py to your /path/to/dataset (a rough sketch follows this list).
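
For orientation, here is a rough sketch of the kind of entry to edit, assuming dataset_args.py follows the HumanNeRF-style per-subject attribute dictionary that this codebase builds on (the actual key names may differ; check the file itself):

    # core/data/dataset_args.py -- illustrative excerpt only; key names are an
    # assumption based on the HumanNeRF layout, not copied from this repo.
    dataset_attrs = {
        'zju_377_train': {
            'dataset_path': '/path/to/dataset/zju_mocap/377',  # preprocessed subject dir
        },
        'zju_377_test': {
            'dataset_path': '/path/to/dataset/zju_mocap/377',
        },
    }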

Training

Replace 'xxx' with the subject ID:

python train.py --cfg configs/monohuman/zju_mocap/xxx/xxx.yaml resume False

Rendering and Evaluation

Render the motion sequence (e.g., subject 377):

python run.py \
    --type movement \
    --cfg configs/monohuman/zju_mocap/377/377.yaml 


Render free-viewpoint images for a particular frame (e.g., subject 386 and frame 100):

python run.py \
    --type freeview \
    --cfg configs/monohuman/zju_mocap/386/386.yaml \
    freeview.frame_idx 100


Render a text-driven motion sequence. Generate a pose sequence with MDM and place it at path/to/pose_sequence/sequence.npy (e.g., subject 394 and a backflip motion):

python run.py \
    --type text \
    --cfg configs/monohuman/zju_mocap/394/394.yaml \
    text.pose_path path/to/pose_sequence/backflip.npy


Acknowledgement

Our code builds on HumanNeRF, IBRNet, and Neural Body. We thank the authors for their great work and open-source contributions.

TODO

  • Code Release.
  • Demo Video Release.
  • Paper Release.
  • DDP Training.
  • Pretrained Model Release.

Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{yu2023monohuman,
  title={{MonoHuman}: Animatable Human Neural Field from Monocular Video},
  author={Yu, Zhengming and Cheng, Wei and Liu, Xian and Wu, Wayne and Lin, Kwan-Yee},
  booktitle={CVPR},
  year={2023}
}
