SNAKE: Shape-aware Neural 3D Keypoint Field

This repository contains the implementation of the following paper:

SNAKE: Shape-aware Neural 3D Keypoint Field
Chengliang Zhong, Peixing You, Xiaoxue Chen, Hao Zhao, Fuchun Sun, Guyue Zhou, Xiaodong Mu, Chuan Gan, Wenbing Huang

If you find our code or paper useful, please consider citing:

@inproceedings{zhong2022snake,
  title     = {SNAKE: Shape-aware Neural 3D Keypoint Field},
  author    = {Zhong, Chengliang and You, Peixing and Chen, Xiaoxue and Zhao, Hao and Sun, Fuchun and Zhou, Guyue and Mu, Xiaodong and Gan, Chuan and Huang, Wenbing},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2022}
}

This repository is a PyTorch implementation.

Datasets

The KeypointNet dataset is provided by UKPGAN and can be downloaded from here.

SMPL is a skinned, vertex-based deformable body model; the mesh can be downloaded here.

The ModelNet40, 3DMatch, and Redwood datasets are provided by USIP; we provide a download link here.

Note: Please move the datasets into the 'data' folder, or modify the data paths in the code (for example, change 'data_path' (line 117) in 'exp/KeypointNet/train0526/config.yaml').
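
For example, a quick way to check which data path a run will pick up, without opening the file by hand (a minimal sketch; it assumes 'data_path' is a top-level key in the YAML, so adjust if it is nested):

# print the data path configured for a KeypointNet run (key name from the note above)
import yaml

with open("exp/KeypointNet/train0526/config.yaml") as f:
    cfg = yaml.safe_load(f)

# if 'data_path' is nested under another section, search the loaded dict instead
print(cfg.get("data_path"))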

Our 'data' folder structure is as follows:

data
  ├── keypointnet_pcds
  │    ├── 02691156
  │    ...
  │    ├── annotations
  ├── modelnet40
  │    ├── modelnet40-normal_numpy
  │    ├── modelnet40-test_rotated_numpy
  ├── smpl_model
  │    ├── model.pkl
  │    ├── pose_test_100.npy  (See details later.)
  │    ├── beta_test_100.npy  (See details later.)
  ├── 3DMatch_npy
  ├── 3DMatch_eval_downsample (See details later.)
  └── redwood
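
To double-check that everything landed in the right place, a small sanity script along these lines can help (a sketch only; it assumes the default 'data' root shown above):

# verify that the expected dataset folders/files from the tree above exist
import os

DATA_ROOT = "data"  # adjust if you moved the datasets elsewhere
EXPECTED = [
    "keypointnet_pcds/annotations",
    "modelnet40/modelnet40-normal_numpy",
    "modelnet40/modelnet40-test_rotated_numpy",
    "smpl_model/model.pkl",
    "3DMatch_npy",
    "3DMatch_eval_downsample",
    "redwood",
]
for rel in EXPECTED:
    path = os.path.join(DATA_ROOT, rel)
    print(("ok     " if os.path.exists(path) else "MISSING"), path)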

Installation

Make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called SNAKE using:

conda create --name SNAKE python=3.7
conda activate SNAKE

Note: Install the Python packages according to the CUDA version on your machine:

# CUDA >= 11.0
pip install -r requirements_cu11.txt 
pip install torch-scatter==2.0.9
# CUDA < 11.0
pip install -r requirements_cu10.txt 
pip install torch-scatter==2.0.4
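
A quick way to confirm that PyTorch and torch-scatter imported cleanly against your CUDA setup (just a sanity check, not part of the project's tooling):

# sanity check: the torch / torch-scatter pair should import cleanly and see your GPU
import torch
import torch_scatter

print("torch:", torch.__version__, "built for CUDA", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("torch-scatter:", torch_scatter.__version__)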

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Training

Choose one of the dataset names [KeypointNet, SMPL, ModelNet40, Redwood] to replace '[dataset_name]'.

To train on a single GPU, run:

sh exp/[dataset_name]/train0526/train_single.sh

To train on multiple GPUs, modify the values of 'CUDA_VISIBLE_DEVICES' and 'nproc_per_node' in 'train_multi.sh' according to the number of GPUs available on your machine, then run:

sh exp/[dataset_name]/train0526/train_multi.sh

Extract and save keypoints

sh exp/[dataset_name]/train0526/test.sh save_kpts

Evaluate

1. Semantic Consistency

Note: Test on KeypointNet and SMPL.

Note that computing geodesic distances for the input point clouds of KeypointNet takes some time. Download the test parameters (pose_test_100.npy, beta_test_100.npy) used to generate paired human shapes, and move them to data/smpl_model/. Then, run:

python tools/eval_iou.py --dataset [dataset_name] --test_root exp/[dataset_name]/[result_name]

For example:

python tools/eval_iou.py --dataset KeypointNet --test_root exp/KeypointNet/train0526/test_result/noise-0-down-1-grid64-nms0.1-sal0.7-occ0.8-update10-lr0.001-pad0.125
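
For context, the geodesic distances are shortest-path distances over a graph built on the point cloud, which is what makes this step slow. A rough illustration of that kind of computation (not the project's implementation; the file name, neighbor count, and graph construction are assumptions):

# approximate geodesic distances as shortest paths on a k-NN graph of the point cloud
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

points = np.load("example_pointcloud.npy")                         # hypothetical (N, 3) array
graph = kneighbors_graph(points, n_neighbors=8, mode="distance")   # Euclidean edge weights
geodesic = shortest_path(graph, method="D", directed=False)        # (N, N), Dijkstra
print(geodesic.shape)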

2. Repeatability

Note: Test on ModelNet40 and Redwood.

Download the transformation matrices for the two-view point clouds (modelnet40, redwood), and move them to data/modelnet40/modelnet40-test_rotated_numpy/ and data/redwood/numpy_gt_normal/, respectively. Then, run:

python tools/eval_repeat.py --test_dataset [dataset_name] --test_root exp/[dataset_name]/[result_name]

For example:

python tools/eval_repeat.py --dataset Redwood --test_root exp/Redwood/train0526/test_result/noise-0-down-1-grid100-nms0.04-sal0.7-occ0.8-update10-lr0.001-pad0.125
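
For intuition, repeatability measures the fraction of keypoints from one view that have a counterpart from the other view within a distance threshold after applying the ground-truth transformation. The sketch below only illustrates that idea (file names, array shapes, and the threshold are assumptions; this is not the project's eval_repeat.py):

# schematic repeatability check for one two-view pair
import numpy as np
from scipy.spatial import cKDTree

kpts_a = np.load("kpts_view_a.npy")   # hypothetical (N, 3) keypoints from view A
kpts_b = np.load("kpts_view_b.npy")   # hypothetical (M, 3) keypoints from view B
T = np.load("gt_transform.npy")       # hypothetical (4, 4) ground-truth pose A -> B

# bring view A keypoints into view B's frame using the ground-truth transform
kpts_a_in_b = (T[:3, :3] @ kpts_a.T).T + T[:3, 3]

# a keypoint is "repeatable" if a keypoint from the other view lies within the threshold
threshold = 0.1
dists, _ = cKDTree(kpts_b).query(kpts_a_in_b)
print("relative repeatability:", float(np.mean(dists < threshold)))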

3. Registration

Note: Test on 3DMatch.

Down-sample the test data (see the D3Feat repository). Rename the down-sampled test data to '3DMatch_eval_downsample' and move it to 'data/'.
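
If you prefer to down-sample the fragments yourself rather than reuse the prepared data, voxel-grid down-sampling with Open3D is the usual approach; a minimal sketch (the file names and voxel size are assumptions):

# voxel-grid down-sampling of one 3DMatch fragment with Open3D
import numpy as np
import open3d as o3d

pts = np.load("data/3DMatch_npy/fragment_000.npy")   # hypothetical fragment, at least (N, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts[:, :3])

down = pcd.voxel_down_sample(voxel_size=0.03)        # 3 cm is only an illustrative value
np.save("data/3DMatch_eval_downsample/fragment_000.npy", np.asarray(down.points))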

Then, extract keypoints using the model trained on KeypointNet. Run:

sh exp/KeypointNet/train0526_for_registration/test.sh save_kpts

Evaluate the registration performance using the D3Feat repository.

Visualization

To show the predicted object/scene shape, run:

sh exp/[dataset_name]/train0526/test.sh show_occ

To show a slice of the predicted saliency field, run:

sh exp/[dataset_name]/train0526/test.sh show_sal_slice

To show the extracted keypoints on the KeypointNet dataset, run:

python tools/keypoint_show.py

Pretrained models

We provide pretrained models on Google Drive and Baidu Netdisk (extraction code: an3m). Move the models to exp/[dataset_name]/train0526/checkpoints/.

License

SNAKE is released under the MIT License.

Acknowledgment

We would like to thank the authors of R2D2, ConvONet, USIP, UKPGAN, and D3Feat for their open-source code.
