PS2Net: A Locally and Globally Aware Network for Point-Based Semantic Segmentation

Created by Na Zhao from National University of Singapore

teaser

Introduction

This repository contains the PyTorch implementation of our ICPR 2020 paper "PS2Net: A Locally and Globally Aware Network for Point-Based Semantic Segmentation" by Na Zhao, Tat-Seng Chua, and Gim Hee Lee [arXiv].

In this paper, we present PS2-Net, a locally and globally aware deep learning framework for semantic segmentation on 3D scene-level point clouds. To deeply incorporate local structures and global context for 3D scene segmentation, our network is built on four repeatedly stacked encoders, where each encoder has two basic components: EdgeConv, which captures local structures, and NetVLAD, which models global context. Unlike existing state-of-the-art methods for point-based scene semantic segmentation, which either violate or do not achieve permutation invariance, our PS2-Net is designed to be permutation invariant, an essential property of any deep network used to process unordered point clouds. We further provide a theoretical proof of the permutation invariance of our network. We perform extensive experiments on two large-scale 3D indoor scene datasets and demonstrate that our PS2-Net achieves state-of-the-art performance compared to existing approaches.
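The sketch below is a minimal, hedged illustration of one such encoder in PyTorch: an EdgeConv layer for local structure followed by a NetVLAD layer whose global descriptor is concatenated back to every point. The layer widths, the k-NN size, the cluster count, and the fusion step are illustrative assumptions, not the exact configuration used in this repository.

    # Hedged sketch of a PS2-Net-style encoder block: EdgeConv (local) + NetVLAD (global).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def knn(x, k):
        # x: (B, C, N) -> indices of the k nearest neighbours per point, (B, N, k)
        inner = -2 * torch.matmul(x.transpose(2, 1), x)
        xx = torch.sum(x ** 2, dim=1, keepdim=True)
        dist = -xx - inner - xx.transpose(2, 1)          # negative squared distances
        return dist.topk(k=k, dim=-1)[1]

    class EdgeConv(nn.Module):
        def __init__(self, in_dim, out_dim, k=20):
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(
                nn.Conv2d(2 * in_dim, out_dim, 1, bias=False),
                nn.BatchNorm2d(out_dim), nn.ReLU(inplace=True))

        def forward(self, x):                             # x: (B, C, N)
            B, C, N = x.shape
            idx = knn(x, self.k)                          # (B, N, k)
            idx = (idx + torch.arange(B, device=x.device).view(-1, 1, 1) * N).view(-1)
            feat = x.transpose(2, 1).contiguous().view(B * N, C)[idx].view(B, N, self.k, C)
            center = x.transpose(2, 1).unsqueeze(2).expand(-1, -1, self.k, -1)
            edge = torch.cat([center, feat - center], dim=3).permute(0, 3, 1, 2)
            return self.mlp(edge).max(dim=-1)[0]          # (B, out_dim, N)

    class NetVLAD(nn.Module):
        def __init__(self, dim, num_clusters=16):
            super().__init__()
            self.assign = nn.Conv1d(dim, num_clusters, 1)           # soft assignment
            self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.01)

        def forward(self, x):                             # x: (B, C, N)
            a = F.softmax(self.assign(x), dim=1)          # (B, K, N)
            residual = x.unsqueeze(1) - self.centers.view(1, -1, x.size(1), 1)
            vlad = (a.unsqueeze(2) * residual).sum(dim=-1)           # (B, K, C)
            vlad = F.normalize(vlad, dim=2).flatten(1)               # global descriptor
            return F.normalize(vlad, dim=1)

    class EncoderBlock(nn.Module):
        """EdgeConv for local structure, NetVLAD descriptor concatenated back per point."""
        def __init__(self, in_dim, out_dim, k=20, num_clusters=16):
            super().__init__()
            self.edge_conv = EdgeConv(in_dim, out_dim, k)
            self.netvlad = NetVLAD(out_dim, num_clusters)
            self.fuse = nn.Conv1d(out_dim + out_dim * num_clusters, out_dim, 1)

        def forward(self, x):                             # x: (B, C, N)
            local_feat = self.edge_conv(x)                # (B, out_dim, N)
            global_feat = self.netvlad(local_feat)        # (B, out_dim * K)
            global_feat = global_feat.unsqueeze(-1).expand(-1, -1, x.size(2))
            return self.fuse(torch.cat([local_feat, global_feat], dim=1))

    # Example: a batch of 2 scenes, 4096 points, 9 input channels (xyz + rgb + normalized xyz)
    # block = EncoderBlock(in_dim=9, out_dim=64)
    # out = block(torch.randn(2, 9, 4096))                # -> (2, 64, 4096)

NetVLAD aggregates over the point dimension and is therefore permutation invariant, while EdgeConv and the per-point fusion are permutation equivariant, which is the property the paper's proof concerns.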

Setup

  • Install Python -- This repo is tested with Python 3.6.5.
  • Install PyTorch with CUDA -- This repo is tested with torch 0.4.0 and CUDA 9.0. It may work with newer versions, but that is not guaranteed.
  • Install faiss (CPU version) via conda install faiss-cpu -c pytorch -- This repo is tested with faiss 1.4.0.
  • Install dependencies
    pip install -r requirements.txt
    

Usage

Data preparation

For S3DIS, follow the README under ./preprocess/s3dis folder.

For ScanNet, follow the README under ./preprocess/scannet folder.

Visualization

We use visdom for visualization. Loss values and performance are plotted in real time. Please start the visdom server before training: python -m visdom.server

The visualization results can be viewed in a browser at http://localhost:8097.
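As a rough illustration (the window name, titles, and update cadence are assumptions, not the repository's exact code), per-epoch losses can be pushed to the running visdom server like this:

    import numpy as np
    import visdom

    vis = visdom.Visdom(port=8097)  # assumes `python -m visdom.server` is already running

    def plot_loss(epoch, loss_value):
        # Append one point to a 'train_loss' line plot shown in the browser dashboard.
        vis.line(X=np.array([epoch]), Y=np.array([loss_value]),
                 win='train_loss', update='append',
                 opts=dict(title='Training loss', xlabel='epoch', ylabel='loss'))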

Running experiments on S3DIS

Under data preparation setup (P1):

  • train on each area:
    python main_P1/train.py --dataset_name S3DIS --data_dir ./datasets/S3DIS/P1/ --classes 13 --input_feat 9 --log_dir $LOG_DIR  --test_area $Area_Index
    
  • test on the corresponding area:
    python main_P1/test.py --dataset_name S3DIS --data_dir ./datasets/S3DIS/P1/ --classes 13 --input_feat 9 --log_dir $LOG_DIR  --checkpoint $CHECKPOINT_FILENAME --test_area $Area_Index
    

Under data preparation setup (P2):

  • train on each area:
    python main_P2/train.py --dataset_name S3DIS --dataset_size 114004 --data_dir ./datasets/S3DIS/P2/ --classes 13 --input_feat 6 --log_dir $LOG_DIR  --test_area $Area_Index
    
  • test on the corresponding area:
    python main_P2/inference.py --dataset_name S3DIS --data_dir ./datasets/S3DIS/P2/ --classes 13 --input_feat 6 --log_dir $LOG_DIR  --checkpoint $CHECKPOINT_FILENAME --test_area $Area_Index
    python main_P2/eval_s3dis.py --datafolder ./datasets/S3DIS/P2/ --test_area $Area_Index
    

Note that these commands train and evaluate on only one validation area (specified by the --test_area $Area_Index option). Please iterate over the --test_area option to obtain results on the other areas; the final result is computed by 6-fold cross-validation, as sketched below.
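One hedged way to script this iteration for the P1 setup is sketched below; the per-area log directory naming is an assumption, and the same pattern applies to the P2 scripts.

    import subprocess

    # Train one model per held-out area (6-fold cross-validation over S3DIS areas 1-6).
    for area in range(1, 7):
        subprocess.run([
            "python", "main_P1/train.py",
            "--dataset_name", "S3DIS",
            "--data_dir", "./datasets/S3DIS/P1/",
            "--classes", "13",
            "--input_feat", "9",
            "--log_dir", f"log_area{area}",   # per-area log directory (assumed naming)
            "--test_area", str(area),
        ], check=True)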

Running experiments on ScanNet

Under data preparation setup (P3):

  • train:
    python main_P1/train.py --dataset_name ScanNet --data_dir ./datasets/ScanNet/P3/ --classes 21 --input_feat 3 --log_dir $LOG_DIR 
    
  • test:
    python main_P1/test.py --dataset_name ScanNet --data_dir ./datasets/ScanNet/P3/ --classes 21 --input_feat 3 --log_dir $LOG_DIR  --checkpoint $CHECKPOINT_FILENAME
    

Under data preparation setup (P2):

  • train:
    python main_P2/train.py --dataset_name ScanNet --dataset_size 93402 --data_dir ./datasets/ScanNet/P2/ --classes 21 --input_feat 3 --log_dir $LOG_DIR  
    
  • test:
    python main_P2/inference.py --dataset_name ScanNet --data_dir ./datasets/ScanNet/P2/ --classes 21 --input_feat 3 --log_dir $LOG_DIR  --checkpoint $CHECKPOINT_FILENAME 
    python main_P2/eval_scannet.py --datafolder ./datasets/ScanNet/P2/test --picklefile ./datasets/ScanNet/P3/scannet_test.pickle
    

Citation

Please cite our paper if it is helpful to your research:

@inproceedings{zhao2020ps,
title={PS\^{}2-Net: A Locally and Globally Aware Network for Point-Based Semantic Segmentation},
author={Zhao, Na and Chua, Tat-Seng and Lee, Gim Hee},
booktitle={Proceedings of the 25th International Conference on Pattern Recognition (ICPR)},
pages={723--730},
year={2020}
}

Acknowledgements

Our implementation builds on the source code or data from the following repositories:


Issues

Seeking guidance on training this model for custom dataset

I really appreciate your efforts on this work, and I am very eager to try it out on my custom dataset, so I am seeking your valuable guidance and help.

  1. I have a dataset - a lot of text files each containing X, Y, Z and C (Classification) per line
  2. This dataset is obtained from https://environment.data.gov.uk/defradatadownload/?mode=survey

I seek your help on how to prepare/preprocess this dataset into train and test folders in order to apply your model to the semantic segmentation task.

Your guidance would be very helpful to me, and I would be grateful for it.

Thanks in advance

bug

When running "python main_P2/train.py --dataset_name ScanNet --dataset_size 93402 --data_dir ./datasets/ScanNet/P2/ --classes 21 --input_feat 3 --log_dir log", the following error occurs:

    File "PS-2Net/models/model.py", line 58, in __init__
        self.input_data = torch.FloatTensor(self.opt.batch_size, 9, self.opt.num_point).uniform_()
    AttributeError: 'Namespace' object has no attribute 'num_point'
