
Learnable-Sampling

This is the codebase for the ACM Multimedia 2022 paper "3D Human Mesh Reconstruction by Learning to Sample Joint Adaptive Tokens for Transformers".

Installation

  1. Set up the environment with the following dependencies:
  • python: 3.7.12
  • pytorch: 1.7.1
  • torchvision: 0.8.2
  • cuda: 11.0
  • pyrender: 0.1.45
  • pyopengl: 3.1.6
  • joblib: 1.1.0
  • pytorch-lightning: 1.1.8
  • opencv-python: 4.5.4.60
  • pillow: 9.1.1
  • loguru: 0.5.3
  • yacs: 0.1.8
  • scikit-image: 0.19.0
  • azureml: 0.2.7
  • azureml-core: 1.36.0.post2
  • bottleneck
  2. Run the following command in the ./Learnable-Sampling directory to install:
python setup.py build develop
  3. Install opendr via pip+git:
pip install git+https://gitlab.eecs.umich.edu/ngv-python-modules/opendr.git
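After installation, the pinned versions above can be sanity-checked from Python. The following is a minimal sketch (not part of the original codebase): the pins dictionary mirrors a subset of the list above, and `importlib.metadata` is assumed available (standard library on Python 3.8+; on the 3.7.12 environment listed above, the `importlib-metadata` backport provides the same API).

```python
# Sketch: verify installed package versions against the pins listed above.
# Not part of the original codebase; extend PINS with any entries you need.
from importlib.metadata import version, PackageNotFoundError

PINS = {
    "torch": "1.7.1",
    "torchvision": "0.8.2",
    "pyrender": "0.1.45",
    "pytorch-lightning": "1.1.8",
    "opencv-python": "4.5.4.60",
    "loguru": "0.5.3",
    "yacs": "0.1.8",
}

def check_pins(pins):
    """Return a list of (package, expected, found) mismatches."""
    problems = []
    for pkg, expected in pins.items():
        try:
            found = version(pkg)
        except PackageNotFoundError:
            found = None  # package not installed at all
        if found != expected:
            problems.append((pkg, expected, found))
    return problems

if __name__ == "__main__":
    for pkg, expected, found in check_pins(PINS):
        print(f"{pkg}: expected {expected}, found {found}")
```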

Download SMPL data

Please prepare the SMPL data following MeshGraphormer and put all the data into ./src/modeling/data/. After doing so, the structure of the directory should be as follows:

$ src
|-- modeling
|   |-- data
|   |   |-- basicModel_f_lbs_10_207_0_v1.0.0.pkl
|   |   |-- basicModel_m_lbs_10_207_0_v1.0.0.pkl
|   |   |-- basicModel_neutral_lbs_10_207_0_v1.0.0.pkl
|   |   |-- J_regressor_extra.npy
|   |   |-- J_regressor_h36m_correct.npy

Download data and checkpoints

  1. Download the tsv datasets following METRO and put the downloaded files into ./data/datasets/.
  2. Download the db file and the evaluation config file for the 3DPW dataset from https://cloud.tsinghua.edu.cn/d/2da7d016fedb42de8039/ and put them into ./data/datasets/db/. Note that you should download the original images of the 3DPW dataset from the official website and prepare them following the key 'image_name' in the db file.
  3. Download the pretrained models from https://cloud.tsinghua.edu.cn/d/bc66b120bd5847649622/ and put them into ./data/pretrained_models/.
  4. Download the trained checkpoints from https://cloud.tsinghua.edu.cn/d/61ba08aec1664798b7b6/ and put them into ./data/checkpoints/.
  5. After doing the above four steps, the structure of the ./data directory should be as follows:
$ data  
|-- datasets  
|   |-- Tax-H36m-coco40k-Muco-UP-Mpii
|   |-- human3.6m
|   |-- coco_smpl
|   |-- muco
|   |-- up3d
|   |-- mpii
|   |-- 3dpw
|   |-- db
|   |   |-- 3dpw_test_db.pt
|   |   |-- test_3dpw.yaml
|-- pretrained_models
|   |-- pose_coco
|   |   |-- pose_hrnet_w32_256x192.pth
|   |   |-- pose_hrnet_w64_256x192.pth
|   |   |-- pose_resnet_50_256x192.pth
|   |-- pare_checkpoint.ckpt
|-- checkpoints
|   |-- 3dpw_checkpoint.bin
|   |-- h36m_checkpoint.bin
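After completing the four steps, the layout above can be verified with a short script. This is a minimal sketch, not part of the original codebase; the path list is copied from the tree above (it covers the individual files shown, not every dataset subfolder):

```python
# Sketch: verify the ./data layout described above.
# The path list mirrors the tree in this README; adjust if your layout differs.
from pathlib import Path

EXPECTED = [
    "datasets/db/3dpw_test_db.pt",
    "datasets/db/test_3dpw.yaml",
    "pretrained_models/pose_coco/pose_hrnet_w32_256x192.pth",
    "pretrained_models/pose_coco/pose_hrnet_w64_256x192.pth",
    "pretrained_models/pose_coco/pose_resnet_50_256x192.pth",
    "pretrained_models/pare_checkpoint.ckpt",
    "checkpoints/3dpw_checkpoint.bin",
    "checkpoints/h36m_checkpoint.bin",
]

def missing_files(root):
    """Return the expected files that are not present under root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).is_file()]

if __name__ == "__main__":
    gaps = missing_files("data")
    if gaps:
        print("Missing:", *gaps, sep="\n  ")
    else:
        print("All expected files found.")
```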

Evaluation

To evaluate on 3DPW dataset, please run the following command:

python src/tools/main.py \
       --num_workers 1 \
       --config_yaml experiments/3dpw_config.yaml \
       --val_yaml data/datasets/db/test_3dpw.yaml \
       --per_gpu_eval_batch_size 10 \
       --output_dir ./output/ \
       --run_eval_only \
       --resume_checkpoint data/checkpoints/3dpw_checkpoint.bin

To evaluate on H36M dataset, please run the following command:

python src/tools/main.py \
       --num_workers 1 \
       --config_yaml experiments/h36m_config.yaml \
       --val_yaml data/datasets/human3.6m/valid.protocol2.yaml \
       --per_gpu_eval_batch_size 10 \
       --output_dir ./output/ \
       --run_eval_only \
       --resume_checkpoint data/checkpoints/h36m_checkpoint.bin
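The two evaluation commands differ only in the config yaml, the validation yaml, and the checkpoint. If you prefer to drive both runs from Python, here is a hedged sketch that assembles the same argument lists as the commands above (the `subprocess` call is commented out so the sketch stays runnable before the data is in place):

```python
# Sketch: build the two evaluation commands shown above programmatically.
# The flags mirror the README; uncomment the subprocess call to actually run.
import subprocess

def build_eval_cmd(config_yaml, val_yaml, checkpoint, batch_size=10):
    """Assemble the argument list for src/tools/main.py in eval-only mode."""
    return [
        "python", "src/tools/main.py",
        "--num_workers", "1",
        "--config_yaml", config_yaml,
        "--val_yaml", val_yaml,
        "--per_gpu_eval_batch_size", str(batch_size),
        "--output_dir", "./output/",
        "--run_eval_only",
        "--resume_checkpoint", checkpoint,
    ]

EVAL_RUNS = {
    "3dpw": build_eval_cmd(
        "experiments/3dpw_config.yaml",
        "data/datasets/db/test_3dpw.yaml",
        "data/checkpoints/3dpw_checkpoint.bin",
    ),
    "h36m": build_eval_cmd(
        "experiments/h36m_config.yaml",
        "data/datasets/human3.6m/valid.protocol2.yaml",
        "data/checkpoints/h36m_checkpoint.bin",
    ),
}

if __name__ == "__main__":
    for name, cmd in EVAL_RUNS.items():
        print(name, " ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment once data is prepared
```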

Training

Training instructions will come soon.

Acknowledgement

Our codebase is built upon open-source GitHub repositories. We thank all the authors for making their code available to facilitate the progress of our project.

microsoft / MeshGraphormer

microsoft / MeshTransformer

mkocabas / PARE

mkocabas / VIBE


