
splatnet's Introduction

SPLATNet: Sparse Lattice Networks for Point Cloud Processing (CVPR2018)

License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).

Paper

arXiv

@inproceedings{su18splatnet,
  author    = {Su, Hang and Jampani, Varun and Sun, Deqing and Maji, Subhransu and Kalogerakis, Evangelos and Yang, Ming-Hsuan and Kautz, Jan},
  title     = {{SPLATNet}: Sparse Lattice Networks for Point Cloud Processing},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages     = {2530--2539},
  year      = {2018}
}

Usage

  1. Install Caffe and bilateralNN

    Note that our code uses Python3.

    • Please follow the instructions on the bilateralNN repo.
    • A step-by-step installation guide for Ubuntu 16.04 is provided in INSTALL.md.
    • Alternatively, you can install nvidia-docker and use this docker image:
      docker pull suhangpro/caffe:bpcn
      You can also build this image with the Dockerfile.
    • The docker image provided above uses CUDA 8, which does not support Volta GPUs (e.g. Titan V), Turing GPUs (e.g. RTX 2080), or newer architectures. Adapting the Dockerfile to more recent GPUs should be straightforward; check out the example supporting up to Turing, courtesy of @zyzwhdx.
  2. Add the project root to your Python path so imports can be found, e.g.

    export PYTHONPATH=<PATH_TO_PROJECT_ROOT>:$PYTHONPATH
  3. Download and prepare data files under folder data/

    See instructions in data/README.md.

  4. Usage examples (see also the end-to-end sketch after this list)

    • 3D facade segmentation
      • test pre-trained model
        cd exp/facade3d
        ./dl_model_facade3d.sh  # download pre-trained model
        SKIP_TRAIN=1 ./train_test.sh
        The prediction is written to pred_test.ply, with evaluation results in test.log.
      • or, train and evaluate
        cd exp/facade3d
        ./train_test.sh
    • ShapeNet Part segmentation
      • test pre-trained model
        cd exp/shapenet3d
        ./dl_model_shapenet3d.sh  # download pre-trained model
        ./test_only.sh
        Predictions are under pred/, with evaluation results in test.log.
      • or, train and evaluate
        cd exp/shapenet3d
        ./train_test.sh
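
Putting steps 1-4 together, a minimal end-to-end session for the 3D facade experiment might look like the following sketch. It assumes the CUDA 8 docker image from step 1, that the repository is cloned to /workspace/splatnet inside the container (a hypothetical path), and that the data files from step 3 have already been prepared under data/:

    # start the container with GPU access (nvidia-docker2 syntax; newer Docker versions use `--gpus all`)
    sudo docker run --runtime=nvidia -it suhangpro/caffe:bpcn /bin/bash

    # inside the container: clone the project and add its root to the Python path (step 2)
    git clone https://github.com/NVlabs/splatnet /workspace/splatnet
    export PYTHONPATH=/workspace/splatnet:$PYTHONPATH

    # prepare data under /workspace/splatnet/data/ as described in data/README.md (step 3)

    # evaluate the pre-trained 3D facade model (step 4)
    cd /workspace/splatnet/exp/facade3d
    ./dl_model_facade3d.sh          # download pre-trained model
    SKIP_TRAIN=1 ./train_test.sh    # writes pred_test.ply and test.log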

References

We make extensive use of bilateralNN, which was proposed in the following publications:

  • V. Jampani, M. Kiefel and P. V. Gehler. Learning Sparse High-Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks. CVPR, 2016.
  • M. Kiefel, V. Jampani and P. V. Gehler. Permutohedral Lattice CNNs. ICLR Workshops, 2015.

splatnet's People

Contributors

suhangpro, varunjampani


splatnet's Issues

Docker Env

I am new to the Docker environment. Basically, I am able to enter the Docker container and use Caffe. However, when I run the test script, it complains that the CUDA driver version is insufficient for the CUDA runtime. What is the required CUDA driver, and how can I update it in Docker? Thanks.
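
For context on the error above: the CUDA driver is installed on the host, not inside the container, so the container always uses the host's driver, and updating it means upgrading the host driver rather than anything in the image. The quickest check is to run the following on the host and compare the reported driver version against what the image's CUDA runtime (CUDA 8 for suhangpro/caffe:bpcn) requires:

    nvidia-smi    # the "Driver Version" field shows the host driver that containers will use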

ImportError: No module named 'splatnet'

Hi, when I cd to splatnet/exp/facade3d and run SKIP_TRAIN=1 ./train_test.sh, it reports the following error:
Traceback (most recent call last):
File "/home/ryxu/splatnet/exp/facade3d/../../splatnet/semseg3d/test.py", line 11, in
import splatnet.configs
ImportError: No module named 'splatnet'

I think I may have done something wrong in step 2 of the Usage section (adding the project to the Python path). I set my ~/.bashrc as follows:

# caffe paths

export PYCAFFE_ROOT=/home/ryxu/caffe/python
export PYTHONPATH=$PYCAFFE_ROOT:$PYTHONPATH
export PATH=$CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH
##Include the project to your python path
export PYTHONPATH=</home/ryxu/splatnet/splatnet>:$PYTHONPATH

Could anyone help me? Thanks very much.
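
For reference, the export in step 2 of the Usage section expects the repository root (the directory that contains the splatnet/ package), and the angle brackets there are only a placeholder. Going by the paths in the traceback above, the line would look something like this (the clone location is taken from that traceback and may differ on other machines):

    export PYTHONPATH=/home/ryxu/splatnet:$PYTHONPATH    # repo root, not the inner splatnet/ package, and no angle brackets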

Inconsistent cuda driver and runtime

I have reinstalled nvidia-docker with the following steps.

sudo apt-get install -y nvidia-docker2
sudo docker pull suhangpro/caffe:bpcn
sudo docker run -it suhangpro/caffe:bpcn

pip install protobuf scikit-image numpy
git clone https://github.com/NVlabs/splatnet
export PYTHONPATH=...
cd /workspace/splatnet/exp/shapenet3d
sh test_only.sh

Then I got

test_only.sh: 6: test_only.sh: Bad substitution
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0916 21:30:26.347004 133 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
E0916 21:30:26.347118 133 common.cpp:121] Cannot create Curand generator. Curand won't be available.
F0916 21:30:26.347223 133 common.cpp:152] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
Aborted (core dumped)

Did I do anything wrong?
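
Two details in the transcript above are worth checking, offered as hedged suggestions rather than confirmed fixes: the container is started with a plain docker run, which under nvidia-docker2 does not expose the GPU (one common cause of the driver/runtime error seen from inside a container), and the script is invoked with sh, which can produce the "Bad substitution" message because the experiment scripts use bash syntax:

    # give the container GPU access (nvidia-docker2 syntax; newer Docker versions use `--gpus all`)
    sudo docker run --runtime=nvidia -it suhangpro/caffe:bpcn
    # inside the container, verify that the host driver is visible
    nvidia-smi
    # run the script with bash (or ./test_only.sh) instead of sh
    bash test_only.sh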

dataset problem

For semantic segmentation, the ruemonge428 dataset is post-processed and a height value is added to each point. I would like to know how the height of each point is defined. Thanks very much!

train with another dataset

I'm interested in your great work, and I want to train on another dataset with 12 classes. Apart from the number of classes and the cmap, what else do I need to change for training? Can you please help me? Thanks very much; waiting for your reply.

Joint 2D-3D experiments release plan

Would there still be a plan to release the joint 2D-3D experiments? If so, may I ask what the estimated release time is? Sorry for the duplicated issue.

don't have attribute height

What if I don't have the height attribute and don't want to use the height value for training? What should I do? Is there any way to deal with this?

Joint 2D-3D experiments

Hi,

May I know when the joint 2D-3D experiments will become available to try?
Thanks!

William

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbc in position 334: invalid start byte

Hi, when I try to run the command SKIP_TRAIN=1 ./exp/facade3d/train_test.sh, it fails with the error in the title:
Traceback (most recent call last):
File "/home/ryxu/splatnet/exp/facade3d/../../splatnet/semseg3d/test.py", line 211, in
args.dataset_params, save_dir, save_prefix, args.cpu)
File "/home/ryxu/splatnet/exp/facade3d/../../splatnet/semseg3d/test.py", line 96, in semseg_test
data, xyz, norms = dataset_facade.points(dims=input_dims+',x_y_z,nx_ny_nz', **dataset_params)
File "/home/ryxu/splatnet/splatnet/dataset/dataset_facade.py", line 90, in points
pcl_data = np.loadtxt(pcl_test_path, skiprows=15)
File "/home/ryxu/miniconda3/envs/caffe/lib/python3.5/site-packages/numpy/lib/npyio.py", line 880, in loadtxt
next(fh)
File "/home/ryxu/miniconda3/envs/caffe/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbc in position 334: invalid start byte

It may be a decoding problem; how can I solve it in this splatnet project? Thanks.
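
One possibility worth ruling out for the traceback above: np.loadtxt only handles text input, so a binary or partially downloaded .ply file would trigger exactly this kind of decode error. Checking the file header is a quick test (the path below is a placeholder; substitute the file that np.loadtxt is reading):

    head -c 64 <PATH_TO_TEST_PLY>    # an ASCII PLY starts with a readable "ply" / "format ascii 1.0" header
    file <PATH_TO_TEST_PLY>          # reports text for ASCII files and "data" for binary files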

nccl install

Hello, I followed the steps in INSTALL.md to install the dependencies. When I install NCCL using the command 'sudo make -j install', it doesn't work and shows 'make: *** No rule to make target 'install'. Stop.' Can you please tell me how to solve this? Thanks very much.

Docker compatibility with NVIDIA 2080Ti

Hi, suhangpro

Thanks for your great contribution and efforts.
When I attempted to run your splatnet code with the docker image you pushed, on my PC with an NVIDIA 2080Ti, it turned out that the CUDA version is incompatible with my hardware.
So I managed to create a docker image that supports the NVIDIA 2080Ti (Turing architecture), with CUDA 10.0 + cuDNN 7 + Caffe + bilateralNN + conda + Python 3.5 + Ubuntu 16.04. The only difference from your image is the updated CUDA support.
Please consider adding this image:
https://hub.docker.com/repository/docker/zyzwhdx/splatnet/general

Thanks!

alternate dataset

@foertter @dumerrill @tmbdev Thanks for open-sourcing the wonderful work. I have a few queries:
Q1: Have you trained the architecture on other available datasets such as SemanticKITTI or other 3D datasets?
Q2: If not, can we follow the same training pipeline? If you have, could you please share the pre-trained model?
Q3: Can we use the current pre-trained model to test on a custom dataset with a lower point cloud density?

Thanks in advance

Cannot train categories with more than 500 items in the ShapeNet Part segmentation dataset

Thank you for the wonderful work, but I have run into an error I could not find a solution for.
My environment is Ubuntu 16.04, CUDA 9.2, cuDNN 7.6.4, PyTorch 1.1.0.
I can train ShapeNet categories such as 'bag', 'cap', and 'earphone', whose item counts are under 500.
But when I tried to train 'airplane', 'chair', 'table', etc., I get the error below:
[error screenshot]
I cannot find the code that generates this file.
I checked the memory and the video memory, and neither reached its upper limit.
Thank you.

index error

When I run the facade "test pre-trained model" code, it raises an IndexError as follows:
File "/home/yt/splatnet/exp/facade3d/../../splatnet/semseg3d/test.py", line 211, in
args.dataset_params, save_dir, save_prefix, args.cpu)
File "/home/yt/splatnet/exp/facade3d/../../splatnet/semseg3d/test.py", line 96, in semseg_test
data, xyz, norms = dataset_facade.points(dims=input_dims+',x_y_z,nx_ny_nz', **dataset_params)
File "/home/yt/splatnet/splatnet/dataset/dataset_facade.py", line 98, in points
return tuple([pcl_data[:, idx] * sc for (idx, sc) in zip(feat_idxs, feat_scales)])
File "/home/yt/splatnet/splatnet/dataset/dataset_facade.py", line 98, in
return tuple([pcl_data[:, idx] * sc for (idx, sc) in zip(feat_idxs, feat_scales)])
IndexError: index 6 is out of bounds for axis 1 with size 6
I don't know whether my installation is correct.
