

3D-CODED [Project Page] [Paper] [Talk] + Learning Elementary Structure [Project Page] [Paper] [Code]

3D-CODED : 3D Correspondences by Deep Deformation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In ECCV, 2018.

Learning elementary structures for 3D shape generation and matching
Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In NeurIPS, 2019. Official Code

Learned templates


Faust results

| Method | L2 Train SURREAL | L2 Val SURREAL | Faust Intra | Faust Inter |
| --- | --- | --- | --- | --- |
| 3D-CODED | 1.098 | 1.315 | 1.747 | 2.641 |
| Points Translation 3D | 9.980 | 1.263 | 1.626 | 2.714 |
| Patch Deformation 3D | 1.028 | 1.436 | 1.742 | 2.578 |
| Points Translation + Patch Deformation 3D | 0.969 | 1.173 | 1.676 | 2.779 |
| Points Translation 2D | 1.09 | 1.54 | 2.054 | 3.005 |
| Patch Deformation 2D | 6.354 | 6.767 | 4.46 | 5.420 |
| Points Translation 10D | 0.906 | 1.064 | 1.799 | 2.707 |
| Patch Deformation 10D | 0.952 | 1.183 | 1.683 | 2.83 |
Sample results

Input: 2 meshes
Task: put them in point-wise correspondence (indicated by color)
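Concretely, a dense correspondence is just one index per vertex: vertex i of mesh B is matched to vertex `correspondences[i]` of mesh A. A minimal NumPy sketch of how such a mapping transfers a per-vertex signal such as color (the arrays here are illustrative, not the repo's API):

```python
import numpy as np

# Toy per-vertex colors on mesh A (N x 3, RGB in [0, 1]).
colors_a = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

# correspondences[i] = index of the vertex in A matched to vertex i of B.
correspondences = np.array([2, 0, 1])

# Transfer the signal: vertex i of B inherits the color of its match in A.
colors_b = colors_a[correspondences]
```

This is why matched regions share the same color in the visualizations above: both meshes are painted through the same index map.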

Install 👷 [PyTorch, Conda]

This implementation uses PyTorch.

git clone https://github.com/ThibaultGROUEIX/3D-CODED.git ## Download the repo
cd 3D-CODED; git submodule update --init
conda env create -f 3D-CODED-ENV.yml ## Create python env
source activate pytorch-3D-CODED
pip install http://imagine.enpc.fr/~langloip/data/pymesh2-0.2.1-cp37-cp37m-linux_x86_64.whl
cd extension; python setup.py install; cd ..

Demo 🚆 and Inference with Trained Models

python inference/correspondences.py --dir_name learning_elementary_structure_trained_models/1patch_deformation

This script takes as input 2 meshes from data and computes correspondences, saved in results. Reconstructions are saved in dir_name.

Options (Usually default parameters are good)
# Key parameters
'--dir_name', type=str, default="",  help='dirname')
'--inputA', type=str, default =  "data/example_0.ply",  help='your path to mesh 0'
'--inputB', type=str, default =  "data/example_1.ply",  help='your path to mesh 1'

# Secondary parameters
'--HR', type=int, default=1, help='use the high-resolution template for better precision in the nearest-neighbor step'
'--reg_num_steps', type=int, default=3000, help='number of epochs to train for during the regression step'
'--num_points', type=int, default=6890, help='number of points fed to PointNet'
'--num_angles', type=int, default=100, help='number of angles in the search for the optimal reconstruction. Set to 1 if your meshes already face the canonical direction, as in data/example_1.ply'
'--env', type=str, default="CODED", help='visdom environment'
'--clean', type=int, default=0, help='if 1, remove points that do not belong to any edge'
'--scale', type=int, default=0, help='if 1, scale the input mesh to have the same volume as the template'
'--project_on_target', type=int, default=0, help='if 1, project predicted correspondence points onto the target mesh'
'--randomize', type=int, default=0, help='randomize'
'--LR_input', type=int, default=1, help='use the low-resolution input'
Results
  • Initial guesses for example0 and example1:

  • Final reconstruction for example0 and example1:

On your own meshes

You need to make sure your meshes are preprocessed correctly:

  • The meshes are loaded with Trimesh, which should support many formats, but I have only tested .ply files. Good converters include Assimp and Pymesh.

  • The trunk axis is the Y axis (visualize your mesh against the mesh in data to make sure they are normalized in the same way).

  • The scale should be about 1.7 for a standing human (meaning the unit of the point cloud is the meter). You can scale meshes automatically with the flag --scale 1.
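As a sketch of that normalization, assuming your mesh vertices are available as an (N, 3) NumPy array. Note that the actual --scale flag matches the template's volume rather than a fixed height; this height-based version is a simplification:

```python
import numpy as np

def normalize_height(vertices, target_height=1.7):
    """Center a point cloud and rescale it so its extent along the
    trunk (Y) axis is target_height, i.e. roughly a standing human
    with the meter as unit."""
    vertices = vertices - vertices.mean(axis=0)           # center at the origin
    height = vertices[:, 1].max() - vertices[:, 1].min()  # current Y extent
    return vertices * (target_height / height)

# Toy example: a "human" that is 170 units tall (e.g. modeled in centimeters).
cloud = np.random.rand(1000, 3) * np.array([50.0, 170.0, 30.0])
normed = normalize_height(cloud)
```

After this step the cloud is centered and its Y extent is exactly 1.7, which matches the convention of the meshes in data.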

Failure modes ⚠️

  • Sometimes the reconstruction is flipped, which breaks the correspondences. In the easiest case, where your meshes are registered in the same orientation, you can simply fix this angle in correspondences.py line 240 to avoid the flipping problem. Also note from this line that the angle search only covers [-90°, +90°].

  • Check for lonely outliers that break the PointNet encoder. You can try to remove them with the --clean flag.
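If your mesh has no connectivity information for --clean to exploit, a distance-based filter is one alternative. A hedged sketch, not the repo's implementation: drop points whose nearest neighbor is unusually far away.

```python
import numpy as np

def drop_lonely_points(points, factor=3.0):
    """Remove isolated outliers: points whose nearest-neighbor distance
    exceeds `factor` times the median nearest-neighbor distance."""
    # Full pairwise distance matrix (fine for a few thousand points).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)        # ignore self-distance
    nn = dist.min(axis=1)                 # nearest-neighbor distance per point
    keep = nn <= factor * np.median(nn)
    return points[keep]

# A regular 10x10 grid plus one far-away outlier.
grid = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
pts = np.vstack([grid, [[100.0, 100.0, 100.0]]])
cleaned = drop_lonely_points(pts)
```

The threshold is relative to the cloud's own density, so it adapts to different mesh scales; tune `factor` if legitimate extremities (fingertips, toes) get clipped.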

FAUST
  • If you want to use inference/correspondences.py to process a whole dataset, like the FAUST test set, you can use ./inference/script.py for the FAUST inter challenge. Good luck :-)

Training

python ./training/train.py
Trainer's Options
'--point_translation', type=int, default=0, help='point_translation'
'--dim_template', type=int, default=3, help='dim_template'
'--patch_deformation', type=int, default=0, help='patch_deformation'
'--dim_out_patch', type=int, default=3, help='dim_out_patch'
'--start_from', type=str, default="TEMPLATE", choices=["TEMPLATE", "SOUP", "TRAINDATA"], help='start_from'
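These flags follow standard argparse conventions; a minimal self-contained sketch of how such an option block behaves (flag names and defaults taken from the list above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--point_translation', type=int, default=0)
parser.add_argument('--dim_template', type=int, default=3)
parser.add_argument('--patch_deformation', type=int, default=0)
parser.add_argument('--dim_out_patch', type=int, default=3)
parser.add_argument('--start_from', type=str, default="TEMPLATE",
                    choices=["TEMPLATE", "SOUP", "TRAINDATA"])

# e.g. train point translation in 10D, as in the "Points Translation 10D"
# row of the results table.
opt = parser.parse_args(['--point_translation', '1', '--dim_template', '10'])
```

Note that `choices` must be a list of separate strings, so argparse can reject typos like `--start_from SOUPE` with a clear error.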
Monitor your training on http://localhost:8888/


Note on data preprocessing

The dataset generation process is quite heavy, so we provide our processed data. Should you want to reproduce the preprocessing, go to data/README.md. Brace yourself :-)

Reproduce the paper 🚆

python script/launch.py --mode training  # Launch 4 trainings with different parameters.
python script/launch.py --mode inference # Evaluate the 4 pre-trained models.

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{deprelle2019learning,
  title={Learning elementary structures for 3D shape generation and matching},
  author={Deprelle, Theo and Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan C. and Aubry, Mathieu},
  booktitle={NeurIPS},
  year={2019}
}

@inproceedings{groueix2018b,
  title={3D-CODED: 3D Correspondences by Deep Deformation},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={ECCV},
  year={2018}
}

License

MIT

Cool Contributions

  • Zhongshi Jiang applied the trained model to a monster model 👹 (left: original, right: reconstruction)



Acknowledgement
