
LayoutNet v2

PyTorch implementation for LayoutNet v2 in the paper:

3D Manhattan Room Layout Reconstruction from a Single 360 Image

https://arxiv.org/pdf/1910.04099.pdf

Original Torch implementation for LayoutNet is here.

You may also be interested in the source code of the methods for comparison in the paper: DuLa-Net and HorizonNet

Improvements upon LayoutNet

  • Extend to general Manhattan layout (on our newly labeled MatterportLayout dataset)
  • Use ResNet encoder instead of SegNet encoder
  • Training details and implementation details
  • Gradient ascent based post optimization, revised from sunset1995's PyTorch implementation
  • Add random stretching data augmentation

Requirements

  • Python 3
  • PyTorch >= 0.4.0
  • numpy, scipy, skimage, sklearn, cv2 (OpenCV), shapely
  • torchvision
  • Matlab (for depth rendering)

Download Data and Pre-trained Model

Preprocess

  • We've provided sample code to transform the original LayoutNet's .t7 files into .pkl files for PyTorch:
    python t72pkl.py
    
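The conversion itself boils down to re-serializing the parsed tensors with Python's pickle; a minimal sketch of that step (the .t7 parsing, e.g. via the third-party torchfile package, is omitted, and the function names are illustrative):

```python
import pickle

def save_as_pkl(tensors, out_path):
    # `tensors` is a dict of name -> array, e.g. what a .t7 parser
    # such as torchfile.load() would return for a Torch checkpoint.
    with open(out_path, "wb") as f:
        pickle.dump(tensors, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_pkl(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```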

Training

  • On PanoContext (note that we use Stanford 2D-3D as additional data in this script):
    python train_PC.py
    
  • On Stanford 2D-3D (note that we use PanoContext as additional data in this script):
    python train_stanford.py
    
  • On MatterportLayout:
    python train_matterport.py
    
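Mixing in the additional dataset amounts to sampling from the concatenation of both datasets; a stand-in sketch of that idea (in PyTorch, torch.utils.data.ConcatDataset provides the same behavior, and the training scripts handle it in their own loaders):

```python
class ConcatDataset:
    """Index into several datasets as if they were one (sketch)."""

    def __init__(self, *datasets):
        self.datasets = datasets

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, i):
        # Walk the datasets in order until the index falls inside one.
        for d in self.datasets:
            if i < len(d):
                return d[i]
            i -= len(d)
        raise IndexError(i)
```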

Evaluation

  • On PanoContext (Corner error, pixel error and 3D IoU)

    python test_PC.py
    
  • On Stanford 2D-3D (Corner error, pixel error and 3D IoU)

    python test_stanford.py
    
  • On Matterport3D (3D IoU, 2D IoU under the top-down view, and RMSE and delta_1 for depth)

    python test_matterport.py
    

    For the depth-related evaluation, we need to render a depth map from the predicted corner positions on the equirectangular view (you can skip this step, as we've provided pre-computed depth maps from our approach).

    First, uncomment L313-L314 in test_matterport.py and comment out the lines related to depth evaluation. Run test_matterport.py to save intermediate corner predictions to the folder ./result_gen. Then open Matlab:

    cd matlab
    cor2depth
    cd ..
    

    Rendered depth maps will be saved to the folder ./result_gen_depth/. Then comment out L313-L314 in test_matterport.py, uncomment the lines related to depth evaluation, and run test_matterport.py again.
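The two depth metrics reported are standard: RMSE, and delta_1, the fraction of pixels whose predicted-to-ground-truth depth ratio (in either direction) is below 1.25. A minimal sketch of both (function name is illustrative, not from the repository):

```python
import math

def depth_metrics(pred, gt):
    # pred, gt: flat sequences of positive depth values.
    n = len(pred)
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    # delta_1: fraction of pixels with max(pred/gt, gt/pred) < 1.25.
    delta1 = sum(max(p / g, g / p) < 1.25 for p, g in zip(pred, gt)) / n
    return rmse, delta1
```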

Citation

Please cite our paper if you use this code or data for any purpose.

@article{zou20193d,
  title={3D Manhattan Room Layout Reconstruction from a Single 360 Image},
  author={Zou, Chuhang and Su, Jheng-Wei and Peng, Chi-Han and Colburn, Alex and Shan, Qi and Wonka, Peter and Chu, Hung-Kuo and Hoiem, Derek},
  journal={arXiv preprint arXiv:1910.04099},
  year={2019}
}
