
DA Wand [Project Page]


Public code release for "DA Wand: Distortion-Aware Selection using Neural Mesh Parameterization".

Getting Started

Installation

# With GPU 
conda env create --file dawand.yml
conda activate dawand

# Without GPU 
conda env create --file dawand_cpu.yml
conda activate dawand

To install pygco, go into the gco_python folder, run make, and then run python setup.py install. If you run into any issues, please refer to the gco_python repo.
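
That is, assuming the gco_python folder sits at the repository root:

cd gco_python
make
python setup.py install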

System Requirements

  • Python 3.7
  • CUDA 11
  • GPU with 16 GB of memory (for training)
  • Blender 3.3.1 (for synthetic dataset generation)

Training

Download the original training data from the paper:

  • Initial Primitives
  • Synthetic Dataset
  • Distortion Self-Supervision Dataset

To train, pass the path to the dataset folder to the --dataroot argument and the path to the test subfolder to the --test_dir argument. Set --gpu_ids to -1 to train without a GPU. See the scripts folder for example commands that train the network with the same parameters as in the paper.
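
For example, a training run on the synthetic dataset might look like the following (the dataset paths are illustrative; train.py and the flags are the ones described above):

python train.py --dataroot ./datasets/synthetic_data --test_dir ./datasets/synthetic_data/test --gpu_ids 0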

Generate Synthetic Dataset

Run ./scripts/generate_synthetic_data.sh to generate the synthetic dataset using the procedure outlined in the paper. The data will be stored in ./datasets/synthetic_data.

To generate the deformed primitives with custom parameters, you will need to have Blender 3.3.1 installed and available in PATH. From there, you can generate random deformed primitives using the primitives provided in datasets/primitives by calling

blender --background --python blender_deform.py --datadir ./datasets/primitives --outdir ./datasets/deformed_primitives ... 

This will generate a set of deformed primitives with the same ground truth near-developable decompositions. Refer to the blender_deform.py file for the adjustable parameters for generating the synthetic data.

After generating the deformed primitives, you can call generate_synthetic_data.py with custom parameters to construct the synthetic dataset with randomly sampled selection points and ground truth labels.

Build your own natural shape dataset

The differentiable parameterization layer enables training DA Wand on an arbitrary set of meshes. Creating a dataset for distortion self-supervised training is simple:

  1. Create a folder with the name of the dataset, with subfolders train and test. Split your meshes between the train and test folders.
  2. For each mesh in the subfolders, sample selection points as desired into a Python list and use dill to save the list as a pickled file in a folder titled anchors. The pickled files should be named {name of obj}.pkl. There should be separate train/anchors and test/anchors folders (see the sketch after this list).
  3. To incorporate mixed training with ground truth labels, copy the respective labelled meshes into the train and test folders, copy their anchor .pkl files into the anchors subfolder, and create a new folder titled labels to store the labels. Each label file should be a binary numpy array with the same length as the number of mesh triangles, named {name of obj}{anchor index}.npy, where the anchor index is the index of the respective selection point in the mesh's selection point list saved in anchors (see the sketch below). To train with mixed data, simply pass the flags --supervised --mixedtraining to the train.py call.
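
Below is a minimal sketch of steps 2 and 3 in Python. The dataset name, mesh name, face count, and selection points are hypothetical placeholders; only the folder layout and file naming follow the steps above, and the sketch assumes the labels folder lives alongside anchors inside train.

import os
import dill
import numpy as np

dataset_dir = "./datasets/my_dataset"  # hypothetical dataset name
mesh_name = "chair"                    # corresponds to train/chair.obj (hypothetical)
num_faces = 8000                       # number of triangles in chair.obj (hypothetical)

# Step 2: selection points for this mesh, saved with dill as train/anchors/{name of obj}.pkl
anchors = [12, 345, 6789]              # placeholder selection point indices
anchor_dir = os.path.join(dataset_dir, "train", "anchors")
os.makedirs(anchor_dir, exist_ok=True)
with open(os.path.join(anchor_dir, f"{mesh_name}.pkl"), "wb") as f:
    dill.dump(anchors, f)

# Step 3 (mixed training only): one binary per-triangle label array per anchor,
# saved as train/labels/{name of obj}{anchor index}.npy
label_dir = os.path.join(dataset_dir, "train", "labels")
os.makedirs(label_dir, exist_ok=True)
for anchor_index, anchor in enumerate(anchors):
    labels = np.zeros(num_faces, dtype=np.int64)  # 1 = triangle belongs to the patch
    # ... use anchor to mark the ground-truth patch faces here ...
    np.save(os.path.join(label_dir, f"{mesh_name}{anchor_index}.npy"), labels)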

Interactive Demo

Demo video: demo.mp4

The interactive demo was created using Polyscope. To run the demo, use the following command:

python interactive.py --modeldir ./checkpoints --modelname dawand --meshdir {path to obj file} --meshfile {name of obj file}

To use a different DA Wand model, simply change the inputs to --modeldir and --modelname. The application expects the model weights to be saved under the naming convention {modelname}_net.pth.
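
For example, with weights saved as ./checkpoints/mymodel_net.pth (mymodel, the mesh directory, and the mesh file here are hypothetical), the demo would be launched as

python interactive.py --modeldir ./checkpoints --modelname mymodel --meshdir ./meshes --meshfile chair.obj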

Note: DA Wand is trained on meshes within a restricted edge range (5,000 to 12,000 edges), and will be most effective on meshes within that range. For meshes of lower or higher resolution, we recommend either remeshing or retraining the model at the desired resolution.

Acknowledgements

The implementation of DA Wand relies on a number of excellent open-source projects for geometry processing, analysis, and visualization, including Polyscope for the interactive demo.

Citation

@article{liu2022dawand,
  author  = {Liu, Richard and Aigerman, Noam and Kim, Vladimir G. and Hanocka, Rana},
  title   = {DA Wand: Distortion-Aware Selection using Neural Mesh Parameterization},
  journal = {arXiv},
  year    = {2022}
}
