
Structure-Aware Residual Pyramid Network for Monocular Depth Estimation

This is the implementation of the paper Structure-Aware Residual Pyramid Network for Monocular Depth Estimation, IJCAI 2019, by Xiaotian Chen, Xuejin Chen, and Zheng-Jun Zha.

Citation

@inproceedings{Chen2019structure-aware,
  title     = {Structure-Aware Residual Pyramid Network for Monocular Depth Estimation},
  author    = {Chen, Xiaotian and Chen, Xuejin and Zha, Zheng-Jun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019}
}

@article{Chen2020LapNet,
  title   = {Laplacian Pyramid Neural Network for Dense Continuous-Value Regression for Complex Scenes},
  author  = {Chen, Xuejin and Chen, Xiaotian and Zhang, Yiteng and Fu, Xueyang and Zha, Zheng-Jun},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year    = {2020}
}

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Acknowledgements

Introduction

Monocular depth estimation is an essential task for scene understanding. The underlying structure of objects and stuff in a complex scene is critical to recovering accurate and visually pleasing depth maps. Global structure conveys scene layouts, while local structure reflects shape details. Recently developed approaches based on convolutional neural networks (CNNs) significantly improve the performance of depth estimation. However, few of them account for multi-scale structures in complex scenes. In this paper, we propose a Structure-Aware Residual Pyramid Network (SARPN) to exploit multi-scale structures for accurate depth prediction. We propose a Residual Pyramid Decoder (RPD) which expresses global scene structure in upper levels to represent layouts, and local structure in lower levels to present shape details. At each level, we propose Residual Refinement Modules (RRM) that predict residual maps to progressively add finer structures onto the coarser structure predicted at the upper level. To fully exploit multi-scale image features, we introduce an Adaptive Dense Feature Fusion (ADFF) module, which adaptively fuses effective features from all scales to infer the structure at each scale.
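The residual-pyramid idea described above can be sketched as follows. This is a hypothetical minimal module, not the authors' actual RRM implementation; the layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of one pyramid level (NOT the repo's RRM code):
# upsample the coarser depth prediction and add a learned residual
# computed from fused multi-scale features.
class ResidualRefinement(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        # small conv head that predicts a one-channel residual depth map
        self.residual_head = nn.Sequential(
            nn.Conv2d(in_channels + 1, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, fused_features, coarser_depth):
        # bring the coarser prediction to the current resolution
        up = F.interpolate(coarser_depth, size=fused_features.shape[2:],
                           mode="bilinear", align_corners=False)
        # predict a residual from features conditioned on the coarse depth
        residual = self.residual_head(torch.cat([fused_features, up], dim=1))
        return up + residual  # finer structure added on top of the coarse layout

refine = ResidualRefinement(in_channels=32)
coarse = torch.rand(1, 1, 30, 40)   # depth predicted at the upper level
feats = torch.rand(1, 32, 60, 80)   # fused features at the current level
finer = refine(feats, coarse)
print(finer.shape)  # torch.Size([1, 1, 60, 80])
```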

Usage

Dependencies

Train

As an example, use the following command to train SARPN on NYUDV2.

CUDA_VISIBLE_DEVICES="0,1,2,3" python train.py \
    --trainlist_path <path to the trainlist (nyu2_train.csv)> \
    --checkpoint_dir <directory to save checkpoints> \
    --root_path <root path of the dataset> \
    --logdir <directory to save logs and checkpoints> \
    --pretrained_dir <path to the pretrained models> \
    --do_summary

Evaluation

Use the following command to evaluate the trained SARPN on NYUDV2 test data.

CUDA_VISIBLE_DEVICES="0" python evaluate.py \
    --testlist_path <path to the testlist (nyu2_test.csv)> \
    --root_path <root path of the dataset> \
    --loadckpt <path to the model to load> \
    --pretrained_dir <path to the pretrained models> \
    --threshold <threshold for pixels on edges>

Pretrained Model

You can download the pretrained models for NYUDV2, KITTI, and height estimation.
NOTE: You don't need to decompress the pre-trained model. Please load it directly with torch.load().
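The note about torch.load() can be illustrated with a dummy checkpoint; the dict keys used here are assumptions for illustration, not the repository's actual checkpoint layout.

```python
import os
import tempfile
import torch

# A ".pth.tar" file written by torch.save() is not a real tar archive,
# so it should not be decompressed; torch.load() reads it directly.
# A dummy checkpoint stands in here for the released file.
path = os.path.join(tempfile.mkdtemp(), "dummy_checkpoint.pth.tar")
torch.save({"model": {"weight": torch.zeros(3, 3)}, "epoch": 20}, path)

checkpoint = torch.load(path, map_location="cpu")
print(checkpoint["epoch"])  # 20
```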

Pre-processed Data

You can download the pre-processed data from this link, which is shared by Junjie Hu.

Results

Indoor scene

Outdoor scene

Some examples

Acknowledgements

Thanks to Junjie Hu for open-sourcing his excellent work. Our work is inspired by his, and part of our code is based on his implementation.


Issues

Dimension out of range

Thank you for your great work! But when trying to train the model to reproduce your results, I ran into a problem.

line 201, in colormap
color_map = colors[indices].transpose(2, 3).transpose(1, 2)
RuntimeError: Dimension out of range (expected to be in range of [-3, 2], but got 3)

I do not know how to fix it. Looking forward to hearing from you! Thank you so much!
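One plausible cause (an assumption; the repository's full colormap code is not shown here) is that the quoted transposes expect a batched index map. Indexing an [N, 3] color table with a [B, H, W] map yields a 4-D tensor, which the quoted code handles; a 2-D [H, W] map yields a 3-D tensor and raises exactly this error.

```python
import torch

# Hypothetical reconstruction of the failure mode, not the repo's code:
# a [256, 3] color table indexed by a batched [B, H, W] index map gives
# a [B, H, W, 3] tensor, and the two transposes rearrange it to
# [B, 3, H, W]. With an unbatched [H, W] map the result is only 3-D,
# so transpose(2, 3) is out of range.
colors = torch.rand(256, 3)
indices = torch.randint(0, 256, (2, 4, 5))  # batched index map works
color_map = colors[indices].transpose(2, 3).transpose(1, 2)
print(color_map.shape)  # torch.Size([2, 3, 4, 5])
```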

parameter definition

I want to know the definition of adff_num_features, and how the value 1280 is calculated.

get_models

I did not find any file named get_models, but net.py needs it. Where can I find it?

The pretrained model file may be broken.

Hello, Xt-Chen.
I really appreciate you releasing the SARPN code for monocular depth estimation.

Unfortunately, the pretrained model uploaded to MS OneDrive may be broken: when trying to decompress the tar file, it cannot be decompressed.

Please check it one more time.
Thanks.

Something wrong in train.py

In line 48 of train.py, the code reads loadckpt = os.path.join(args.logdir, all_saved_ckpts[-1]).
Maybe loadckpt = os.path.join(args.checkpoint_dir, all_saved_ckpts[-1]) is right; I am not sure.
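The suggested fix can be sketched as a small helper. The filename pattern and the digit-based sort are assumptions based on the checkpoint name seen in other issues (SARPN_checkpoints_20.pth.tar), not code from the repository.

```python
import os

# Hypothetical resume helper: pick the latest checkpoint from the
# directory the checkpoints were actually saved to (checkpoint_dir),
# not the log directory.
def latest_checkpoint(checkpoint_dir):
    ckpts = [f for f in os.listdir(checkpoint_dir) if f.endswith(".pth.tar")]
    # sort by the epoch number embedded in names like SARPN_checkpoints_20.pth.tar
    ckpts.sort(key=lambda f: int("".join(ch for ch in f if ch.isdigit())))
    return os.path.join(checkpoint_dir, ckpts[-1]) if ckpts else None
```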

Problem reproducing the paper

Hi, when reproducing this paper the loss never decreases. I am using PyTorch 1.1.0.

About the depth ground truth pre-processing?

Thanks for your great work!
I am a little confused about the dataloader code. Why divide the depth ground truth by 255 and multiply by 10 during training, but divide it by 1000 during evaluation? Will this affect the evaluation results? Do I need to adjust it?
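One plausible reading of the two conventions (an assumption, not confirmed by the authors): the pre-processed training depth is an 8-bit image covering a 0-10 m range, while the test depth is stored as 16-bit millimeters. Both conversions then target the same unit, meters, so they should not skew the evaluation by themselves.

```python
import numpy as np

# Hypothetical illustration of the two storage conventions: both
# conversions map the raw pixel values to meters.
train_png = np.array([[0, 127, 255]], dtype=np.uint8)
train_depth_m = train_png.astype(np.float32) / 255.0 * 10.0  # 8-bit -> meters

test_png = np.array([[0, 5000, 10000]], dtype=np.uint16)
test_depth_m = test_png.astype(np.float32) / 1000.0          # mm -> meters

print(train_depth_m.max(), test_depth_m.max())  # 10.0 10.0
```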

About Reproducible results

Hi Xt-Chen,

Thanks for your nice work. I found it interesting, so I cloned the code and re-trained on my PC without any changes except the dataset path. However, over several runs the results varied a lot. Taking the ABS metric as an example, the results were 0.2%-0.4% worse than those reported in your paper. I checked the code and could not find any operation that fixes the random seeds; I suspect this is the main reason I could not reproduce the result.
Given the above, would you please help me resolve this? I also wonder whether you trained many times and chose the best run, or trained only once?

Thanks again.

Cheers,
cvgogogo
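For reference, pinning the random seeds the issue mentions usually looks like this in PyTorch. This is a generic sketch, not code from this repository.

```python
import random
import numpy as np
import torch

# Generic seed-fixing routine; the repository does not do this, which is
# one plausible source of run-to-run variance.
def set_seed(seed: int = 0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning picks kernels nondeterministically; disable it
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(0)
a = torch.rand(3)
set_seed(0)
b = torch.rand(3)
print(torch.equal(a, b))  # True
```

Note that even with fixed seeds, some CUDA ops remain nondeterministic, so small run-to-run differences can persist.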

What does this mean?

Processing the 501th image!
1: tensor([[[[0.1345, 0.1318, 0.1291, ..., 0.3593, 0.3570, 0.3546],
[0.1415, 0.1398, 0.1381, ..., 0.3608, 0.3596, 0.3584],
[0.1452, 0.1434, 0.1415, ..., 0.3629, 0.3608, 0.3616],
...,
[0.8342, 0.5683, 0.4067, ..., 0.2935, 0.2923, 0.2885],
[0.4220, 0.4454, 0.4706, ..., 0.3205, 0.3082, 0.2965],
[0.2269, 0.3603, 0.5547, ..., 0.3263, 0.3047, 0.2843]]]],
device='cuda:0', grad_fn=)
2: tensor(nan, device='cuda:0', grad_fn=)
3: tensor(16384., device='cuda:0')
4

train loss less than zero

Hi @Xt-Chen ,
Thanks for your excellent work, but when I train your model on my own dataset, the losses are less than zero. Could you give me some advice?
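One common cause on custom datasets (an assumption; this is not the authors' loss function): depth losses with log terms go negative or NaN on invalid pixels where the ground truth is zero or negative. Masking invalid ground truth before the loss keeps the terms well defined.

```python
import torch

# Hypothetical masked log-depth L1 loss for illustration: pixels with
# depth <= 0 are treated as missing and excluded from the loss.
def masked_l1_log_loss(pred, gt, eps=1e-6):
    mask = gt > 0                      # valid depth pixels only
    pred = pred[mask].clamp(min=eps)   # guard against log of non-positives
    gt = gt[mask]
    return (torch.log(pred) - torch.log(gt)).abs().mean()

gt = torch.tensor([[0.0, 2.0], [4.0, 0.0]])   # zeros mark missing depth
pred = torch.tensor([[1.0, 2.0], [4.0, 1.0]])
print(masked_l1_log_loss(pred, gt))  # tensor(0.)
```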

RuntimeError: CUDA out of memory.

Dear @Xt-Chen, I got this problem; how can I solve it? I already decreased the batch size to 1, but that does not fix it. The error is raised at this line:
loading model ./checkpoints/SARPN_checkpoints_20.pth.tar
Processing the 0th image!
Traceback (most recent call last):
File "evaluate.py", line 95, in
test()
File "evaluate.py", line 52, in test
pred_depth = model(image)

RuntimeError: CUDA out of memory. Tried to allocate 22.00 MiB (GPU 0; 8.00 GiB total capacity; 1.27 GiB already allocated; 0 bytes free; 1.30 GiB reserved in total by PyTorch)
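One common fix worth checking (an assumption about the cause, not a confirmed bug in the repo): the traceback shows model(image) called during evaluation; if that call is not wrapped in torch.no_grad(), autograd keeps the full activation graph for every test image and memory grows until it runs out.

```python
import torch

# Generic sketch of memory-safe inference; the model here is a stand-in
# for illustration, not SARPN.
model = torch.nn.Linear(8, 1)
image = torch.rand(4, 8)

model.eval()
with torch.no_grad():
    pred_depth = model(image)  # no autograd graph is stored

print(pred_depth.requires_grad)  # False
```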
