
star's Introduction

🌟 STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting

Zenghao Chai | Chen Tang | Yongkang Wong | Mohan Kankanhalli


Abstract

The creation of 4D avatars (i.e., animated 3D avatars) from text descriptions typically uses text-to-image (T2I) diffusion models to synthesize 3D avatars in the canonical space and subsequently applies animation with target motions. However, such an optimization-by-animation paradigm has several drawbacks. (1) For pose-agnostic optimization, the images rendered in the canonical pose for naive Score Distillation Sampling (SDS) exhibit a domain gap and cannot preserve view consistency using only T2I priors; and (2) for post hoc animation, simply applying the source motions to target 3D avatars yields translation artifacts and misalignment. To address these issues, we propose Skeleton-aware Text-based 4D Avatar generation with in-network motion Retargeting (STAR). STAR considers the geometry and skeleton differences between the template mesh and the target avatar, and corrects the mismatched source motion by resorting to pretrained motion retargeting techniques. With the informatively retargeted and occlusion-aware skeleton, we embrace skeleton-conditioned T2I and text-to-video (T2V) priors and propose a hybrid SDS module to coherently provide multi-view and frame-consistent supervision signals. Hence, STAR can progressively optimize the geometry, texture, and motion in an end-to-end manner. Quantitative and qualitative experiments demonstrate that our proposed STAR can synthesize high-quality 4D avatars with vivid animations that align well with the text description. Additional ablation studies show the contribution of each component in STAR.


Installation

Prerequisite

  • System requirement: Ubuntu
  • Tested GPU: A100 (40GB)

Install Packages

git clone https://github.com/czh-98/STAR.git
cd STAR 

# create env
conda create -n star python=3.10

conda activate star 

# install pytorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge

# install pytorch3d
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d -c pytorch3d

# install other dependencies
pip install -r requirements.txt

# install mmcv
pip install -U openmim
mim install mmcv==1.7.0

# install xformers
conda install xformers -c xformers

# install smplx
cd tada_smplx
python setup.py install 

cd ..
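
As a quick sanity check that the environment works (an optional one-liner; it only confirms that the core packages import and that a GPU is visible):

python -c "import torch, pytorch3d, mmcv; print(torch.__version__, torch.cuda.is_available())"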

Data

data
├── FLAME_masks
│   ├── FLAME_masks.gif
│   ├── FLAME_masks.pkl
│   ├── FLAME.obj
│   └── readme
├── init_body
│   ├── data.npz
│   ├── data-remesh2.npz
│   ├── fit_smplx_dense_lbs_weights.npy
│   ├── fit_smplx_dense_unique.npy
│   ├── fit_smplx_dense_uv.npz
│   ├── fit_smplx_params.npz
│   └── mesh_uv.npz
├── mediapipe
│   └── face_landmarker.task
├── mediapipe_landmark_embedding
│   ├── mediapipe_landmark_embedding.npz
│   └── readme
├── smplx
│   ├── FLAME_masks.pkl
│   ├── FLAME_SMPLX_vertex_ids.npy
│   ├── smplx_faces.npy
│   ├── smplx_lbs_weights.npy
│   ├── SMPLX_MALE.npz
│   ├── SMPLX_NEUTRAL_2020.npz
│   ├── smplx_param.pkl
│   ├── smplx_to_smpl.pkl
│   ├── smplx_uv_map
│   ├── smplx_uv.npz
│   ├── smplx_uv.obj
│   └── smplx_vert_segementation.json
├── t2m
│   └── t2m_motiondiffuse
│       ├── meta
│       │   ├── mean.npy
│       │   └── std.npy
│       ├── model
│       │   └── latest.tar
│       └── opt.txt
└── talkshow
    ├── rich.npy
    └── rich.wav
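
Once the data is downloaded, a quick way to confirm the assets landed in the expected layout (a minimal sketch that checks a representative subset of the files listed above; check_data.py is a hypothetical helper, not part of the repo):

# check_data.py -- optional helper to verify the asset layout (illustrative, not part of the repo)
from pathlib import Path

DATA_ROOT = Path("data")  # adjust if you keep the assets elsewhere
required = [
    "FLAME_masks/FLAME_masks.pkl",
    "init_body/data.npz",
    "mediapipe/face_landmarker.task",
    "smplx/SMPLX_NEUTRAL_2020.npz",
    "t2m/t2m_motiondiffuse/model/latest.tar",
    "talkshow/rich.npy",
]
missing = [p for p in required if not (DATA_ROOT / p).exists()]
print("all assets found" if not missing else f"missing: {missing}")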

Usage

Training

  • The results will be saved in the workspace directory (./exp by default). You may edit the default path in the config/*.yaml files.
python -m apps.run --config configs/train.yaml --text "Tyrion Lannister in Game of Thrones wearing black leather jacket, he/she is performing extreme acrobat while raising hands and kicking quickly" --description demo --t2m_model mdiffuse

Testing

  • Once the avatar is optimized, it can be animated with arbitrary motions to produce 4D content.
python -m apps.demo --config configs/test.yaml --text "Tyrion Lannister in Game of Thrones wearing black leather jacket, he/she is performing extreme acrobat while raising hands and kicking quickly" --description demo --t2m_model mdiffuse

Export FBX

  • To export the 4D avatars, you first need to install the FBX SDK as follows:
wget -P ./externals/fbx-python-sdk https://damassets.autodesk.net/content/dam/autodesk/www/files/fbx202037_fbxpythonsdk_linux.tar.gz
tar -xvzf ./externals/fbx-python-sdk/fbx202037_fbxpythonsdk_linux.tar.gz -C ./externals/fbx-python-sdk

# Follow the Install_FBX_Python_SDK.txt
# Install FBX Python SDK
chmod ugo+x ./externals/fbx-python-sdk/fbx202037_fbxpythonsdk_linux
./externals/fbx-python-sdk/fbx202037_fbxpythonsdk_linux ./externals/fbx-python-sdk
pip install ./externals/fbx-python-sdk/fbx-2020.3.7-cp310-cp310-manylinux1_x86_64.whl
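
To confirm the wheel installed correctly, an optional quick check (fbx is the module name provided by the wheel above):

python -c "import fbx; print('FBX SDK OK')"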
  • Then, you can export the FBX files as follows:
python -m apps.export_fbx --config configs/test.yaml --text "Tyrion Lannister in Game of Thrones wearing black leather jacket, he/she is performing extreme acrobat while raising hands and kicking quickly" --description demo --t2m_model mdiffuse

Citation

If you find our work helpful in your research, please cite:

@misc{chai2024star,
  author={Chai, Zenghao and Tang, Chen and Wong, Yongkang and Kankanhalli, Mohan},
  title={STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting},
  eprint={2406.04629},
  archivePrefix={arXiv},
  year={2024},
}

Acknowledgement

This repository is heavily based on TADA, DreamWaltz, ConditionVideo, R2ET, MotionDiffuse, MMHuman3D, Smplx2FBX. We would like to thank the authors of these works for publicly releasing their code.


star's Issues

Version of mkl needs to be pinned to `2024.0` in the installation instructions

Hi,

Thank you for your fantastic work! During the installation process, I faced an issue when running mim install mmcv==1.7.0:

envs/star/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: iJIT_NotifyEvent

which is caused by the removal of the symbol iJIT_NotifyEvent in the latest mkl package (according to this post); the solution is to pin mkl's version to 2024.0.
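
One way to apply the pin inside the conda env created above (an illustrative command; the exact channel may differ on your setup):

conda install mkl=2024.0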

Hope it helps.

Best

Exporting skeleton motion only

Hi,
great work! I'm really impressed.

I wonder what is the fastest way to export the retargeted skeleton motion only, without the mesh?

CUDA out of memory

Thank you for sharing. When I run the training code, I get a CUDA out-of-memory error. What should I do to solve this problem? Thanks.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 39.59 GiB total capacity; 37.29 GiB already allocated; 206.19 MiB free; 37.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
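
As the error message itself suggests, one mitigation worth trying before launching training is to cap the allocator's split size to reduce fragmentation (the value 128 is an illustrative starting point, not a project recommendation):

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128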

Several Issues Encountered During Model Training

Thank you for publicly releasing your code! However, I encountered several problems while training the model:

  1. At https://github.com/czh-98/STAR/blob/master/lib/dlmesh.py#L909, it appears that mask and dense face are not on the same device. I resolved this by moving the mask to the GPU (a one-line sketch follows this list).
  2. At https://github.com/czh-98/STAR/blob/master/lib/trainer.py#L691, modifying the retarget_pose attribute in the trainer class does not seem to alter its value. This causes a bug at https://github.com/czh-98/STAR/blob/master/lib/dlmesh.py#L874 because retarget_pose remains None. I'm unsure of the underlying reason, but I fixed this by encapsulating the function that sets retarget_pose within the dlmesh class.
  3. I couldn't locate data/FLAME_masks/FLAME.obj after downloading the FLAME Vertex Masks and FLAME Mediapipe Landmark files as described in the README. Could you provide specific instructions on how to obtain this file?
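
For item 1, a minimal sketch of the fix (the variable names are illustrative, assuming the mask is a CPU tensor and the dense face tensor already lives on the target device; they are not the exact identifiers in lib/dlmesh.py):

# move the mask onto the same device as the dense face tensor
mask = mask.to(dense_faces.device)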

Thank you!
