
[ECCV'22] Fast Mesh Transformer

  • This is the official PyTorch implementation of Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers (ECCV 2022).
  • FastMETRO (Fast MEsh TRansfOrmer) has a novel transformer encoder-decoder architecture for 3D human pose and mesh reconstruction from a single RGB image. FastMETRO can also reconstruct other 3D objects such as 3D hand mesh.
  • Compared with encoder-based transformers (METRO and Mesh Graphormer), FastMETRO-S is about 10× smaller and 2.5× faster, and FastMETRO-L is about 4× smaller and 1.2× faster, when comparing only the transformer architectures (see the parameter-counting sketch after this list).
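
The size and speed comparison above refers to the transformer modules only, excluding the CNN backbone. Below is a minimal sketch of how such numbers can be reproduced for any PyTorch module; the generic nn.TransformerEncoder and the token count of 445 (14 joints + 431 coarse vertices) are illustrative assumptions, not the actual FastMETRO architecture.

```python
import time
import torch
import torch.nn as nn

def count_parameters(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

@torch.no_grad()
def average_forward_time(module: nn.Module, dummy_input: torch.Tensor, n_runs: int = 50) -> float:
    """Average forward-pass time in seconds over n_runs (simple CPU timing)."""
    module.eval()
    module(dummy_input)  # warm-up so one-time setup does not skew the result
    start = time.perf_counter()
    for _ in range(n_runs):
        module(dummy_input)
    return (time.perf_counter() - start) / n_runs

# Stand-in transformer; replace with the transformer module you want to measure.
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=3
)
tokens = torch.randn(1, 445, 512)  # e.g., 14 joint tokens + 431 coarse-vertex tokens
print(count_parameters(transformer), average_forward_time(transformer, tokens))
```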



Overview

Transformer encoder architectures have recently achieved state-of-the-art results on monocular 3D human mesh reconstruction, but they require a substantial number of parameters and expensive computations. Due to the large memory overhead and slow inference speed, it is difficult to deploy such models for practical use. In this paper, we propose a novel transformer encoder-decoder architecture for 3D human mesh reconstruction from a single image, called FastMETRO. We identify that the performance bottleneck of encoder-based transformers is caused by their token design, which introduces high-complexity interactions among input tokens. We disentangle these interactions via an encoder-decoder architecture, which allows our model to require far fewer parameters and shorter inference time. In addition, we impose prior knowledge of the human body's morphological relationships via attention masking and mesh upsampling operations, which leads to faster convergence with higher accuracy. Our FastMETRO improves the Pareto front of accuracy and efficiency, and clearly outperforms image-based methods on Human3.6M and 3DPW. Furthermore, we validate its generalizability on FreiHAND.

[Figure: overall architecture of FastMETRO]
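
As a rough illustration of the idea (not the repository's actual code), the sketch below shows the disentangled design: image tokens from a CNN backbone go through a transformer encoder, while learnable joint and coarse-vertex tokens attend to them in a transformer decoder; a mask over the decoder's self-attention, built from the coarse mesh topology, injects the body's morphological prior. Token counts, feature dimensions, and layer counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EncoderDecoderMeshSketch(nn.Module):
    """Illustrative encoder-decoder mesh regressor; not the official FastMETRO model."""

    def __init__(self, d_model=512, nhead=8, num_joints=14, num_coarse_verts=431):
        super().__init__()
        num_queries = num_joints + num_coarse_verts
        # Learnable joint/vertex tokens (decoder queries), kept separate from image tokens.
        self.queries = nn.Embedding(num_queries, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers=3)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers=3)
        self.xyz_head = nn.Linear(d_model, 3)  # regress a 3D coordinate per token

    def forward(self, img_tokens, nonadjacent_mask=None):
        # img_tokens: (B, N, d_model) flattened CNN feature map.
        memory = self.encoder(img_tokens)
        queries = self.queries.weight.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        # Attention masking: a boolean (num_queries, num_queries) mask that is True for
        # non-adjacent joint/vertex pairs blocks their self-attention, encoding the
        # body's morphological prior.
        out = self.decoder(queries, memory, tgt_mask=nonadjacent_mask)
        coarse_xyz = self.xyz_head(out)  # (B, num_joints + num_coarse_verts, 3)
        # The coarse vertices would then be upsampled to the full SMPL mesh with
        # precomputed upsampling matrices; that step is omitted here.
        return coarse_xyz
```

Here img_tokens would typically be a flattened and projected CNN feature map (e.g., a 7×7 grid giving 49 tokens), and the 3D joint coordinates can be read off from the first num_joints output tokens.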


Installation

We provide two ways to set up the conda environment, depending on your CUDA version.

Please check Installation.md for more information.


Download

We provide guidelines to download pre-trained models and datasets.

Please check Download.md for more information.

(Non-Parametric) FastMETRO

| Model | Dataset | PA-MPJPE (mm) | Link |
| --- | --- | --- | --- |
| FastMETRO-S-R50 | Human3.6M | 38.8 | Download |
| FastMETRO-S-R50 | 3DPW | 49.1 | Download |
| FastMETRO-L-H64 | Human3.6M | 33.6 | Download |
| FastMETRO-L-H64 | 3DPW | 44.6 | Download |
| FastMETRO-L-H64 | FreiHAND | 6.5 | Download |

(Parametric) FastMETRO with an optional SMPL parameter regressor

| Model | Dataset | PA-MPJPE (mm) | Link |
| --- | --- | --- | --- |
| FastMETRO-L-H64 | Human3.6M | 36.1 | Download |
| FastMETRO-L-H64 | 3DPW | 51.0 | Download |
  • Model checkpoints were obtained in the conda environment with CUDA 11.1.
  • To use the SMPL parameter regressor, set --use_smpl_param_regressor to True (see the sketch below).
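
The flag name --use_smpl_param_regressor comes from this repository; everything else in the sketch below (the argument parsing and the regressor head) is an illustrative assumption showing how such a flag can toggle an optional SMPL parameter head on top of the non-parametric mesh output. Check the repository's scripts for the real implementation.

```python
import argparse
import torch.nn as nn

parser = argparse.ArgumentParser()
# The README asks for the flag to be set "to True", so it is parsed as a string here
# (an assumption; the repository's scripts may parse it differently).
parser.add_argument("--use_smpl_param_regressor", default="False",
                    help="Set to True to attach an SMPL parameter regressor head.")
args = parser.parse_args()
use_regressor = str(args.use_smpl_param_regressor).lower() in ("true", "1")

class SMPLParamRegressorSketch(nn.Module):
    """Maps coarse mesh vertices to SMPL parameters (illustrative shapes only)."""

    def __init__(self, num_coarse_verts=431, pose_dim=24 * 6, shape_dim=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_coarse_verts * 3, 1024), nn.ReLU(),
            nn.Linear(1024, pose_dim + shape_dim),
        )
        self.pose_dim = pose_dim

    def forward(self, coarse_vertices):             # (B, num_coarse_verts, 3)
        out = self.mlp(coarse_vertices.flatten(1))  # (B, pose_dim + shape_dim)
        return out[:, :self.pose_dim], out[:, self.pose_dim:]  # pose, shape

smpl_head = SMPLParamRegressorSketch() if use_regressor else None
```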

Demo

We provide guidelines to run end-to-end inference on test images.

Please check Demo.md for more information.


Experiments

We provide guidelines to train and evaluate our model on Human3.6M, 3DPW and FreiHAND.

Please check Experiments.md for more information.


Results

This repository provides several experimental results:

[Figures: Table 2, Figure 1, Figure 4, and SMPL parameter regressor results]


Acknowledgments

This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00290, Visual Intelligence for Space-Time Understanding and Generation based on Multi-layered Visual Common Sense; and No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)).

Our repository is modified and adapted from these amazing repositories. If you find their work useful for your research, please also consider citing them:


License

This research code is released under the MIT license. Please see LICENSE for more information.

SMPL and MANO models are subject to the Software Copyright License for non-commercial scientific research purposes. Please see the SMPL-Model License and MANO License for more information.

We use submodules from a third party (hassony2/manopth). Please see NOTICE for more information.


Contact

Junhyeong Cho ([email protected])

FastMETRO ([email protected])


Citation

If you find our work useful for your research, please consider citing our paper:

@InProceedings{cho2022FastMETRO,
    title={Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers},
    author={Junhyeong Cho and Kim Youwang and Tae-Hyun Oh},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}

This work was done @ POSTECH Algorithmic Machine Intelligence Lab

