anynet's Introduction

Anytime Stereo Image Depth Estimation on Mobile Devices

This repository contains the code (in PyTorch) for AnyNet, introduced in the following paper:

Anytime Stereo Image Depth Estimation on Mobile Devices

by Yan Wang∗, Zihang Lai∗, Gao Huang, Brian Wang, Laurens van der Maaten, Mark Campbell and Kilian Q. Weinberger.

It has been accepted by the International Conference on Robotics and Automation (ICRA) 2019.

Figure

Citation

@article{wang2018anytime,
  title={Anytime Stereo Image Depth Estimation on Mobile Devices},
  author={Wang, Yan and Lai, Zihang and Huang, Gao and Wang, Brian H. and Van Der Maaten, Laurens and Campbell, Mark and Weinberger, Kilian Q},
  journal={arXiv preprint arXiv:1810.11408},
  year={2018}
}

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Contacts

Introduction

Many real-world applications of stereo depth estimation in robotics require the generation of disparity maps in real time on low power devices. Depth estimation should be accurate, e.g. for mapping the environment, and real-time, e.g. for obstacle avoidance. Current state-of-the-art algorithms can either generate accurate but slow, or fast but high-error mappings, and typically have far too many parameters for low-power/memory devices. Motivated by this shortcoming we propose a novel approach for disparity prediction in the anytime setting. In contrast to prior work, our end-to-end learned approach can trade off computation and accuracy at inference time. The depth estimation is performed in stages, during which the model can be queried at any time to output its current best estimate. In the first stage it processes a scaled down version of the input images to obtain an initial low resolution sketch of the disparity map. This sketch is then successively refined with higher resolution details until a full resolution, high quality disparity map emerges. Here, we leverage the fact that disparity refinements can be performed extremely fast as the residual error is bounded by only a few pixels. Our final model can process 1242×375 resolution images within a range of 10-35 FPS on an NVIDIA Jetson TX2 module with only marginal increases in error – using two orders of magnitude fewer parameters than the most competitive baseline.
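
To make the staged computation concrete, here is a minimal sketch (hypothetical stage callables, not the actual code in models/anynet.py) of an anytime coarse-to-fine loop: the first stage predicts a low-resolution disparity map, each later stage predicts a small residual correction at twice the resolution, and the caller may stop after any stage.

import torch.nn.functional as F

def anytime_disparity(stages, left, right, stop_after):
    # stages: list of callables (hypothetical interface); stage 0 returns a
    # coarse disparity map, later stages return residual corrections at 2x
    # the previous resolution. Inference may stop after any stage.
    disp = stages[0](left, right)
    for i, stage in enumerate(stages[1:], start=1):
        if i > stop_after:                 # anytime: stop when the budget runs out
            break
        up = F.interpolate(disp, scale_factor=2, mode='bilinear', align_corners=False) * 2
        disp = up + stage(left, right, up)  # residual is bounded by a few pixels
    return disp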

Usage

  1. Install dependencies
  2. Generate the soft-links for the SceneFlow dataset. You need to set scenflow_data_path to the actual SceneFlow path in the create_dataset.sh file.
     sh ./create_dataset.sh
    
  3. Compile SPNet if SPN refinement is needed. (Change the NVCC path in make.sh when necessary.)
    cd model/spn
    sh make.sh
    

Dependencies

Update:

Now our code supports PyTorch 1.0! You have to recompile the SPN module:

cd models/spn_t1
bash make.sh

Train

First, we use the following command to pretrain AnyNet on Scene Flow:

python main.py --maxdisp 192 --with_spn

Second, we use the following command to fine-tune AnyNet on KITTI 2015:

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2015/training/

Pretrained Models

You can download the pretrained models from https://drive.google.com/file/d/18Vi68rQO-vcBn3882vkumIWtGggZQDoU/view?usp=sharing. It includes the SceneFlow, KITTI2012, and KITTI2015 pretrained models. We also put the split files in the folder.

To evaluate the model on KITTI2012

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2012/training/ \
   --save_path results/kitti2012 --datatype 2012 --pretrained checkpoint/kitti2012_ck/checkpoint.tar \
   --split_file checkpoint/kitti2012_ck/split.txt --evaluate

To evaluate the model on KITTI2015

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2015/training/ \
    --save_path results/kitti2015 --datatype 2015 --pretrained checkpoint/kitti2015_ck/checkpoint.tar \
    --split_file checkpoint/kitti2015_ck/split.txt --evaluate

To fine-tune the SceneFlow pretrained model on KITTI2015

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2015/training/ \
    --pretrained checkpoint/sceneflow/sceneflow.tar

To fine-tune the SceneFlow pretrained model on KITTI2012

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2012/training/ \
    --pretrained checkpoint/sceneflow/sceneflow.tar --datatype 2012

Note: All results reported in the paper are averaged over five randomized 80/20 train/validation splits.

Fine-tune on your own dataset

You have to organize your own dataset in the following format:

path-to-your-dataset/
    | training
        | image_2/           #left images
        | image_3/           #right images
        | disp_occ_0/        #left disparities
    | validation
        | image_2/           #left images
        | image_3/           #right images
        | disp_occ_0/        #left disparities

The disparity ground truth has to be stored in PNG format and multiplied by 256. The fine-tune command is

python finetune.py --maxdisp 192 --with_spn --datapath path-to-your-dataset/ \
    --pretrained checkpoint/sceneflow/sceneflow.tar --datatype other
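
As a reference for the ×256 convention above, here is a minimal sketch (not part of this repository; paths are placeholders) of writing and reading such a disparity PNG with NumPy and Pillow:

import numpy as np
from PIL import Image

def save_disparity_png(disp, path):
    # disp: float disparity in pixels; invalid pixels should already be 0
    disp_u16 = np.clip(disp * 256.0, 0, 65535).astype(np.uint16)
    Image.fromarray(disp_u16).save(path)               # 16-bit PNG

def load_disparity_png(path):
    # inverse operation, e.g. inside a dataloader
    return np.array(Image.open(path), dtype=np.float32) / 256.0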

Results

[Figure: KITTI2012 results]

anynet's People

Contributors

mileyan, zlai0

anynet's Issues

Inference time evaluation input size and TX2 backend

Hi @mileyan, I want to know whether the input image size is 1242x375 or 1232x368, since 1242x375 cannot run forward through the model. Another question is the same as #18: could you please tell me whether you evaluated your model's inference time with the TensorRT engine or not? Thank you for your reply and excellent code!

Training with resized input image?

Hi, I was wondering: if I train the model with resized input instead of cropped input, will it affect the model that much? If I resize the input images, do I need to scale the ground truth disparity maps, or can I just resize them directly without further operation?

About _build_volume_2d3 function

Hi @mileyan,
Thanks for sharing the amazing work. I am curious about the _build_volume_2d3 function in line 118 of anynet.py.

batch_disp = batch_disp - batch_shift.float()

Why batch_disp = batch_disp - batch_shift.float() and not batch_disp = batch_disp + batch_shift.float() ?
Because the residual is from 2 to -2.
But in line 152,
pred_low_res = disparityregression2(-self.maxdisplist[scale]+1, self.maxdisplist[scale], stride=1)(F.softmax(-cost, dim=1))

The disparity regression is from -2 to 2.
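
For context, disparity regression of this kind is a soft argmin over a small window of residual offsets. The sketch below is a generic illustration of the idea, not the repository's disparityregression2 class:

import torch
import torch.nn.functional as F

def soft_argmin_residual(cost, max_offset=2):
    # cost: (B, 2*max_offset+1, H, W); lower cost means a better match.
    offsets = torch.arange(-max_offset, max_offset + 1,
                           dtype=cost.dtype, device=cost.device)
    prob = F.softmax(-cost, dim=1)                      # low cost -> high probability
    # expected offset in [-max_offset, max_offset], shape (B, 1, H, W)
    return (prob * offsets.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)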

Where is the split.txt

Hi, it's nice to see your code released. Excellent work!
In the instructions, there is an argument used when evaluating KITTI2015, which is --split_file checkpoint/kitti2015_ck/split.txt.
I cannot see the file after fine-tuning. I want to know where to find it or how it is generated.
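
For reference, the split file is just a randomized 80/20 train/validation split (the Train section notes that reported results are averaged over five such splits). A hedged sketch of regenerating one, assuming the file simply lists validation frame indices one per line (check the repository's dataloader for the exact format it expects):

import random

def write_split(path='split.txt', num_samples=200, val_fraction=0.2, seed=0):
    # KITTI 2015 training has 200 stereo pairs; 20% go to validation.
    rng = random.Random(seed)
    indices = list(range(num_samples))
    rng.shuffle(indices)
    val = sorted(indices[:int(num_samples * val_fraction)])
    with open(path, 'w') as f:
        f.writelines(f'{i:06d}\n' for i in val)

write_split()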

Some error about spn

Hi, thanks for your code. I met an error when compiling SPN with torch 1.0.
When I run
cd models/spn_t1
bash make.sh
the terminal shows:

`running clean
running build
running build_ext
building 'gaterecurrent2dnoind_cuda' extension
creating /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build
creating /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6
creating /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src
Emitting ninja build file /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/bin/nvcc -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/TH -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/include -I/home/lab2/anaconda3/envs/torch1.6/include/python3.6m -c -c /home/lab2/work/lhx/code/AnyNet/models/spn_t1/src/gaterecurrent2dnoind_kernel.cu -o /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src/gaterecurrent2dnoind_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=gaterecurrent2dnoind_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
FAILED: /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src/gaterecurrent2dnoind_kernel.o
/usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/bin/nvcc -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/TH -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/include -I/home/lab2/anaconda3/envs/torch1.6/include/python3.6m -c -c /home/lab2/work/lhx/code/AnyNet/models/spn_t1/src/gaterecurrent2dnoind_kernel.cu -o /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src/gaterecurrent2dnoind_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=gaterecurrent2dnoind_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/bin/sh: 1: /usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/bin/nvcc: not found
[2/2] c++ -MMD -MF /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src/gaterecurrent2dnoind_cuda.o.d -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/TH -I/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/bin:/home/lab2/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/include -I/home/lab2/anaconda3/envs/torch1.6/include/python3.6m -c -c /home/lab2/work/lhx/code/AnyNet/models/spn_t1/src/gaterecurrent2dnoind_cuda.cpp -o /home/lab2/work/lhx/code/AnyNet/models/spn_t1/build/temp.linux-x86_64-3.6/src/gaterecurrent2dnoind_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=gaterecurrent2dnoind_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
/home/lab2/work/lhx/code/AnyNet/models/spn_t1/src/gaterecurrent2dnoind_cuda.cpp:10:33: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
float * X_data = X.data();
[the same Tensor.data() deprecation warning is repeated for every tensor accessed in gaterecurrent2dnoind_forward_cuda and gaterecurrent2dnoind_backward_cuda (G1, G2, G3, output, top and the corresponding gradients); the accompanying "In file included from ..." header stacks are omitted here]
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
env=env)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 13, in
'build_ext': BuildExtension
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(*attrs)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
build_ext.build_extensions(self)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/lab2/anaconda3/envs/torch1.6/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
cp: cannot stat 'build/lib': No such file or directory`

a mistake when I compile SPN

Hello, Mileyan

I met an error when compiling SPN. Could you help me solve this problem? Thank you!

`yazhou@yazhou-wate-ubuntu:/media/yazhou/data_drive2/zcx/stereo/AnyNet-master/models/spn$ bash make.sh
Compiling gaterecurrent2dnoind layer kernels by nvcc...
Including CUDA code.
generating /tmp/tmph1wickvg/_gaterecurrent2dnoind.c
setting the current directory to '/tmp/tmph1wickvg'
running build_ext
building '_gaterecurrent2dnoind' extension
creating media
creating media/yazhou
creating media/yazhou/data_drive2
creating media/yazhou/data_drive2/zcx
creating media/yazhou/data_drive2/zcx/stereo
creating media/yazhou/data_drive2/zcx/stereo/AnyNet-master
creating media/yazhou/data_drive2/zcx/stereo/AnyNet-master/models
creating media/yazhou/data_drive2/zcx/stereo/AnyNet-master/models/spn
creating media/yazhou/data_drive2/zcx/stereo/AnyNet-master/models/spn/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c _gaterecurrent2dnoind.c -o ./gaterecurrent2dnoind.o
In file included from /usr/local/cuda/include/cuda_runtime.h:91:0,
from /usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:13,
from /usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4,
from gaterecurrent2dnoind.c:493:
/usr/local/cuda/include/cuda_runtime_api.h:2933:97: error: unknown type name ‘cudaFuncAttribute’
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaFuncSetAttribute(const void *func, cudaFuncAttribute attr, int value);
^
Traceback (most recent call last):
File "/usr/lib/python3.5/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/usr/lib/python3.5/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/usr/lib/python3.5/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/usr/lib/python3.5/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'x86_64-linux-gnu-gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/cffi/ffiplatform.py", line 51, in _build
dist.run_command('build_ext')
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 338, in run
self.build_extensions()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 447, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 472, in _build_extensions_serial
self.build_extension(ext)
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 532, in build_extension
depends=ext.depends)
File "/usr/lib/python3.5/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/usr/lib/python3.5/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'x86_64-linux-gnu-gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "build.py", line 34, in
ffi.build()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/init.py", line 184, in build
_build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
File "/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/init.py", line 108, in _build_extension
outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
File "/usr/local/lib/python3.5/dist-packages/cffi/api.py", line 697, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/usr/local/lib/python3.5/dist-packages/cffi/recompiler.py", line 1520, in recompile
compiler_verbose, debug)
File "/usr/local/lib/python3.5/dist-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/usr/local/lib/python3.5/dist-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.class.name, e))
cffi.error.VerificationError: CompileError: command 'x86_64-linux-gnu-gcc' failed with exit status 1
`

AttributeError: 'torch.cuda.ByteTensor' object has no attribute 'detach_'

Hello, the PyTorch version is 0.4.0.
When I run the command "python main.py --maxdisp 192 --with_spn",
the error is:
[2018-11-09 10:13:52 main.py:65] INFO channels_3d: 4
[2018-11-09 10:13:52 main.py:65] INFO datapath: /home/lvhao/data/flythings3d/
[2018-11-09 10:13:52 main.py:65] INFO epochs: 10
[2018-11-09 10:13:52 main.py:65] INFO growth_rate: [4, 1, 1]
[2018-11-09 10:13:52 main.py:65] INFO init_channels: 1
[2018-11-09 10:13:52 main.py:65] INFO layers_3d: 4
[2018-11-09 10:13:52 main.py:65] INFO loss_weights: [0.25, 0.5, 1.0, 1.0]
[2018-11-09 10:13:52 main.py:65] INFO lr: 0.0005
[2018-11-09 10:13:52 main.py:65] INFO maxdisp: 192
[2018-11-09 10:13:52 main.py:65] INFO maxdisplist: [12, 3, 3]
[2018-11-09 10:13:52 main.py:65] INFO nblocks: 2
[2018-11-09 10:13:52 main.py:65] INFO print_freq: 5
[2018-11-09 10:13:52 main.py:65] INFO resume: None
[2018-11-09 10:13:52 main.py:65] INFO save_path: results/pretrained_anynet
[2018-11-09 10:13:52 main.py:65] INFO spn_init_channels: 8
[2018-11-09 10:13:52 main.py:65] INFO test_bsize: 4
[2018-11-09 10:13:52 main.py:65] INFO train_bsize: 6
[2018-11-09 10:13:52 main.py:65] INFO with_spn: True
[2018-11-09 10:13:54 main.py:70] INFO Number of model parameters: 43269
[2018-11-09 10:13:54 main.py:86] INFO Not Resume
[2018-11-09 10:13:54 main.py:90] INFO This is 0-th epoch
Traceback (most recent call last):
File "main.py", line 192, in
main()
File "main.py", line 92, in main
train(TrainImgLoader, model, optimizer, log, epoch)
File "main.py", line 120, in train
mask.detach_()
AttributeError: 'torch.cuda.ByteTensor' object has no attribute 'detach_'
I changed 'detach_()' into 'detach()' or 'data()', but the error still exists. Please tell me how to solve it. Thanks.

Disparity to depth

Hello,

The network outputs the disparity between the left and right images. I would like to estimate the depth. The formula for depth is:

depth = baseline * focal / disparity

I'm not able to get good results. Is there a script for calculating the depth?
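
A minimal sketch of the conversion (not part of the repository): the focal length and baseline in the example are approximate KITTI values and must be replaced by your own calibration, and if the disparity was loaded from a ×256 PNG it has to be divided by 256 first.

import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, min_disp=1e-3):
    # depth = baseline * focal / disparity; clamp tiny disparities so
    # invalid or very distant pixels do not blow up to infinity.
    disp = np.asarray(disp, dtype=np.float32)
    return baseline_m * focal_px / np.maximum(disp, min_disp)

# e.g. depth = disparity_to_depth(disp, focal_px=721.5, baseline_m=0.54)  # approx. KITTI calibration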

An Error in Pytorch1.4.0

Code ( main.py line 124 ):
outputs = [torch.squeeze(output, 1) for output in outputs]

The shapes of outputs and disp_L are [6, 1, *, *], so it seems unnecessary to squeeze.

Questions about the evaluation of Jetson TX2

Hi, first of all, thanks for your great efforts on this research.

I have a question about the evaluation of AnyNet on the Jetson TX2.
After training AnyNet on a GTX 1080Ti, was it evaluated on the Jetson TX2 with those weights? (Fig. 6 in the paper)
Also, can I check the results of the quantitative evaluation on the Jetson TX2?

Thanks

Converting generated disparity npy to png format

I used the generate_disp script found in the pseudo_lidar repo to convert the point clouds of the 7481 KITTI object detection examples, in order to have a large dataset to train the model on. To train, however, I need to convert the images from npy format to png. I tried many ways;

this is one of them:

disp_map = np.load(args.main_dir + '/' + fn)
disp_map = (disp_map - np.min(disp_map)) / (np.max(disp_map) - np.min(disp_map))
disp_map = (disp_map*255).astype(np.uint8)

It actually outputs a good-looking image, but when I start training, the results are very bad; almost nothing is correct. So, any idea of the right way to convert these npy files to images I can train on?
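
The min-max normalisation to uint8 above discards the metric disparity values, which would explain the poor training results. A hedged fix, reusing the variable names from the snippet, is to keep the raw values and store them ×256 as a 16-bit PNG, matching the format described in the fine-tuning section of the README:

import numpy as np
from PIL import Image

disp_map = np.load(args.main_dir + '/' + fn)                        # disparity in pixels
disp_map = np.clip(disp_map * 256.0, 0, 65535).astype(np.uint16)    # keep metric values
Image.fromarray(disp_map).save(fn.replace('.npy', '.png'))          # 16-bit PNG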

Error compiling SPN module: affects the output disparity images from test images

Hi @mileyan ,

I am implementing AnyNet for my project. However, I have the following questions:

1) I have not been able to compile the SPN module. It gives me the error that the **setup.py** file is not found.
Because of this, I am not able to refine the disparities and get clear results, even though I am using the pretrained checkpoints.
2) Also, if I am using the pretrained checkpoints provided in your README, am I supposed to have the SPN module compiled and running? I think that since I don't have SPN compiled, I am not able to get clear results as in your paper (colored disparity images).
3) Also, I was able to get a black-and-white (grayscale) disparity image, but I would like to have colored images as output. Do you have some guidelines on producing colored images from the disparity images? (See the colormap sketch at the end of this issue.)
4) Any leads or workarounds would be greatly appreciated. I would be glad to hear from you soon.

Regards,
Nakul
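
Regarding point 3, colored disparity images are usually produced by normalising the disparity and applying a colormap. A minimal sketch with matplotlib (not part of this repository; max_disp and the colormap name are arbitrary choices):

import numpy as np
import matplotlib.cm as cm
from PIL import Image

def colorize_disparity(disp, max_disp=192, cmap='magma'):
    # Normalise to [0, 1] and map through a perceptual colormap.
    norm = np.clip(disp / float(max_disp), 0.0, 1.0)
    rgba = cm.get_cmap(cmap)(norm)                      # (H, W, 4) floats in [0, 1]
    return Image.fromarray((rgba[..., :3] * 255).astype(np.uint8))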

RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

My Python version is 3.7.0,
PyTorch version is 0.4.1.
When I follow README.md and train the network, the error is:

[2019-01-23 20:42:14 main.py:65] INFO     channels_3d: 4
[2019-01-23 20:42:14 main.py:65] INFO     datapath: /disk1/hyj/hyj/
[2019-01-23 20:42:14 main.py:65] INFO     epochs: 10
[2019-01-23 20:42:14 main.py:65] INFO     growth_rate: [4, 1, 1]
[2019-01-23 20:42:14 main.py:65] INFO     init_channels: 1
[2019-01-23 20:42:14 main.py:65] INFO     layers_3d: 4
[2019-01-23 20:42:14 main.py:65] INFO     loss_weights: [0.25, 0.5, 1.0, 1.0]
[2019-01-23 20:42:14 main.py:65] INFO     lr: 0.0005
[2019-01-23 20:42:14 main.py:65] INFO     maxdisp: 192
[2019-01-23 20:42:14 main.py:65] INFO     maxdisplist: [12, 3, 3]
[2019-01-23 20:42:14 main.py:65] INFO     nblocks: 2
[2019-01-23 20:42:14 main.py:65] INFO     print_freq: 5
[2019-01-23 20:42:14 main.py:65] INFO     resume: None
[2019-01-23 20:42:14 main.py:65] INFO     save_path: /disk1/hyj/Anytime
[2019-01-23 20:42:14 main.py:65] INFO     spn_init_channels: 8
[2019-01-23 20:42:14 main.py:65] INFO     test_bsize: 4
[2019-01-23 20:42:14 main.py:65] INFO     train_bsize: 4
[2019-01-23 20:42:14 main.py:65] INFO     with_spn: True
[2019-01-23 20:42:16 main.py:74] INFO     Number of model parameters: 43269
[2019-01-23 20:42:16 main.py:90] INFO     Not Resume
[2019-01-23 20:42:16 main.py:94] INFO     This is 0-th epoch
/home/hyj/anaconda3/lib/python3.7/site-packages/torch/nn/modules/upsampling.py:225: UserWarning: nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.")
/home/hyj/anaconda3/lib/python3.7/site-packages/torch/nn/modules/upsampling.py:122: UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.Upsampling is deprecated. Use nn.functional.interpolate instead.")
Traceback (most recent call last):
  File "/home/hyj/code/AnyNet/main.py", line 196, in <module>
    main()
  File "/home/hyj/code/AnyNet/main.py", line 96, in main
    train(TrainImgLoader, model, optimizer, log, epoch)
  File "/home/hyj/code/AnyNet/main.py", line 125, in train
    outputs = model(imgL, imgR)
  File "/home/hyj/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hyj/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/hyj/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hyj/code/AnyNet/models/anynet.py", line 142, in forward
    self.maxdisplist[scale], stride=1)
  File "/home/hyj/code/AnyNet/models/anynet.py", line 107, in _build_volume_2d
    cost[:, i//stride, :, :i] = feat_l[:, :, :, :i].abs().sum(1)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Process finished with exit code 1

thanks

torch.onnx.export RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant.

Environment:
GTX 3090
torch 1.7.0+cu110
torchvision 0.8.1
Python 3.8

File (`proc_nodes_module` and `args` are defined elsewhere in my export script):

    import torch
    import models.anynet

    if __name__ == '__main__':
        # main()
        img_l = torch.randn(1, 3, 512, 256).cuda()
        img_r = torch.randn(1, 3, 512, 256).cuda()
        f = r"../results/pretrained_anynet/checkpoint.tar"
        checkpoint = torch.load(f)
        checkpoint['state_dict'] = proc_nodes_module(checkpoint, 'state_dict')
        model = models.anynet.AnyNet(args).cuda()
        model.load_state_dict(checkpoint['state_dict'])
        model.eval()
        torch.onnx.export(model, (img_l, img_r), "anynet.onnx", verbose=False,
                          input_names=["img_l", "img_r"],
                          output_names=["stage1", "stage2", "stage3"],
                          opset_version=11)

Problem:
RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11.
I have set opset_version to 11, but the export still fails with the same error. Does anybody know how to solve this problem? I really need help, please.

error when compiling spn

hi @mileyan,

when compiling the spn-model in models/spn I get this error:

Compiling gaterecurrent2dnoind layer kernels by nvcc...
python: can't open file 'setup.py': [Errno 2] No such file or directory

cat make.sh reveals that python setup.py is called in the same directory, but there is no setup.py file. What am I doing wrong here?

My system:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0
NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0

Thank you in advance

Train using depth ground truth instead of disparity

We want to train AnyNet to compute its loss on depth rather than disparity, since, as discussed in Pseudo-LiDAR, this would be more effective when using AnyNet (with its fast runtime) as the front end of a 3D object detection approach.

Can we train the AnyNet model with a depth-based loss?
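Whether the authors support this directly is not stated here, but converting predicted disparity to depth for a loss is straightforward when the calibration is known. A hedged sketch, where `focal_px` and `baseline_m` are hypothetical calibration values taken from your own rig (for KITTI they come from the calibration files):

    import torch
    import torch.nn.functional as F

    def depth_l1_loss(pred_disp, gt_depth, focal_px, baseline_m, eps=1e-6):
        # depth = focal_length_in_pixels * baseline_in_meters / disparity_in_pixels
        pred_depth = focal_px * baseline_m / pred_disp.clamp(min=eps)
        mask = gt_depth > 0                      # ignore pixels without ground truth
        return F.smooth_l1_loss(pred_depth[mask], gt_depth[mask])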

Error Tracing the model using `torch.jit.trace`

While trying to export the model as a traced PyTorch model, I am getting "Could not export Python function call 'Scatter'". How do I resolve this?

Notebook for reference: https://colab.research.google.com/drive/1vf8osp0aeQA7dgBTw5Ejy9OibGOvgpaj?usp=sharing

model = AnyNet(args)
model = nn.DataParallel(model).cuda()
checkpoint = torch.load('/content/checkpoint/kitti2015_ck/checkpoint.tar')
model.load_state_dict(checkpoint['state_dict'], strict=False)
example = [torch.rand(1, 3, 3840, 2160).cuda(), torch.rand(1, 3, 3840, 2160).cuda()]
traced_model = torch.jit.trace(model, example_inputs=example)
traced_model.save("anynet_3840x2160_b1.traced.pt")
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-15-462347c4b4a7> in <module>()
----> 1 traced_model.save("anynet_3840x2160_b1.traced.pt")

/usr/local/lib/python3.7/dist-packages/torch/jit/_script.py in save(self, *args, **kwargs)
    485             See :func:`torch.jit.save <torch.jit.save>` for details.
    486             """
--> 487             return self._c.save(*args, **kwargs)
    488 
    489         def _save_for_lite_interpreter(self, *args, **kwargs):

RuntimeError: 
Could not export Python function call 'Scatter'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/scatter_gather.py(13): scatter_map
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/scatter_gather.py(15): scatter_map
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/scatter_gather.py(28): scatter
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/scatter_gather.py(36): scatter_kwargs
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py(168): scatter
/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py(157): forward
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py(709): _slow_forward
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py(725): _call_impl
/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py(940): trace_module
/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py(742): trace
<ipython-input-14-e92379b43790>(2): <module>
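One hedged workaround, since `torch.jit.trace` cannot export the Python-level scatter call inside `nn.DataParallel`: load the checkpoint into the wrapped model as above, then trace the underlying single-GPU module instead of the wrapper. The input size below is only an example; other trace-time issues (e.g. dynamic padding, as in the ONNX report above) may still appear:

    import torch

    # Unwrap nn.DataParallel and trace the plain module on a single GPU.
    single_gpu_model = model.module.cuda().eval()

    example = (torch.rand(1, 3, 384, 1280).cuda(),   # example left image
               torch.rand(1, 3, 384, 1280).cuda())   # example right image
    with torch.no_grad():
        traced_model = torch.jit.trace(single_gpu_model, example)
    traced_model.save("anynet_traced.pt")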

Problem with KITTI 2015 evaluation

I downloaded your pretrained model and used the following command to test:

python finetune.py --maxdisp 192 --datapath path-to-kitti2015/training/ \
   --save_path results/kitti2015 --datatype 2015 --pretrained checkpoint/kitti2015_ck/checkpoint.tar \
   --split_file checkpoint/kitti2015_ck/split.txt --evaluate

without SPN. The result is: Average test 3-Pixel Error = Stage 0 = 0.1401, Stage 1 = 0.1580, Stage 2 = 0.1008.
This is very different from the best reported result, which confuses me.
My setup: CUDA 10.1, Python 3.7.3, PyTorch 1.5.1.

Performance of StereoNet

Hi,
There is no released code for StereoNet.
Could you please tell me how you ran your experiments on StereoNet?
Thanks!

Finetuning on KITTI 2015

Hi, I wonder how you finetuned on KITTI 2015 without overfitting. When I tried to finetune on KITTI 2015 with random crops, the validation loss started to increase at epoch 20, and further training only made the metrics on the validation set worse.
For finetuning, I split the 200 images into 160 for the training set and 40 for the validation set.

Support for pytorch >1.0 and CUDA10.

Hello,

Thanks for sharing your code and paper on the impressive results achieved on the Jetson TX2. I wanted to replicate your results, but I am facing difficulties installing all the dependencies on a Jetson Nano. There are two issues:

a) PyTorch's deprecation of the old extension mechanism, as stated in #4
b) I have been unable to install (or compile from source and install) PyTorch 0.4 on the Jetson Nano (Ubuntu 18.04, CUDA 10)

So I was wondering if you could please provide some pointers on how to add support for PyTorch v1.0, more specifically for the spn folder of scripts (I understand one has to use cpp_extensions and rewrite some parts of the code), and/or if you could provide any directions on installing PyTorch 0.4 on Ubuntu 18.04 with CUDA 10.

Thank you very much!

A question about the cost volume in residual prediction

Hi, as you mentioned in the paper, the cost volume in the residual prediction stage is computed as shown in the paper's equation [equation image not reproduced here].

I wonder why the index is (j - k + 2). In my (possibly incorrect) view, this cost volume is the same as the original one in the Disparity Network.

Would you give a detailed explanation?
Thank you.
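For intuition only, a minimal sketch (not the repository's exact code) of a residual cost volume: the right features are first warped with the upsampled disparity from the previous stage, and only a small band of residual offsets is searched. Storing offset k at channel index k + 2 when the residual range is [-2, 2] is where an index of the form (j - k + 2) comes from:

    import torch

    def residual_cost_volume(feat_l, warped_feat_r, max_residual=2):
        # feat_l, warped_feat_r: (B, C, H, W); warped_feat_r is already warped
        # with the upsampled disparity from the previous stage.
        B, C, H, W = feat_l.shape
        D = 2 * max_residual + 1
        cost = feat_l.new_zeros(B, D, H, W)
        for k in range(-max_residual, max_residual + 1):
            # shift along the width axis; border columns are handled naively here
            shifted = torch.roll(warped_feat_r, shifts=k, dims=3)
            cost[:, k + max_residual] = (feat_l - shifted).abs().sum(1)
        return cost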

Noisy ZED camera inference!

Hi, thank you for sharing your code!
I am using a ZED 2 camera from Stereolabs, from which I grab HD720 left and right rectified images and feed them directly into the network. With your pretrained network, unfortunately, the output is very noisy, as can be seen here:
[attached image: noisy disparity output]
Can you give me a hint on what might be wrong? Does it mean that the network cannot generalize well and should be retrained with images that better represent the parameters of my camera (e.g. baseline, FOV)? Or maybe I can change some parameters of the network? Should I resize the images?
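Two things worth checking, stated as assumptions rather than confirmed requirements of this repository: the released checkpoints were finetuned on KITTI-style inputs (different resolution, baseline, and FOV than the ZED), and PSMNet-derived pipelines typically feed ImageNet-normalized RGB with height and width divisible by 16. A sketch of that preprocessing, to be verified against the repo's own dataloader:

    import torch
    import torch.nn.functional as F
    import torchvision.transforms as T

    normalize = T.Compose([T.ToTensor(),
                           T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                                       std=[0.229, 0.224, 0.225])])

    def prepare(img_pil):
        x = normalize(img_pil).unsqueeze(0)              # (1, 3, H, W)
        h, w = x.shape[-2:]
        pad_h, pad_w = (16 - h % 16) % 16, (16 - w % 16) % 16
        return F.pad(x, (0, pad_w, 0, pad_h))            # pad right/bottom to /16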

Thank you for your kind reply!

About training

Hi,
Good job!
I used your code to train the network, but found that the network converges too slowly. I got "Average train loss = Stage 0 = 7.32, Stage 1 = 6.86, Stage 2 = 6.38, Stage 3 = 5.62" after 8 epochs. Is that normal? And could you please release the trained model?
Besides, you compared your network with StereoNet. I tried to re-implement StereoNet in PyTorch, but the runtime is much longer than reported in the paper: I got about 0.17 s when testing Scene Flow images on a 1080 Ti GPU. I wonder whether your re-implementation reaches the runtime reported in the StereoNet paper. I would be very grateful if you could share your StereoNet implementation as well.
Thank you very much.

Inference on Jetson TX2

Hi,
thanks for your great work!
What backend did you use to run inference on the device?
Did you use the PyTorch model as-is, or did you convert it to TensorRT?

Pretrained model of AnyNet

Thanks for releasing the code!

Could you please provide the AnyNet model pretrained on the SceneFlow dataset? Thank you a lot!

Test image size problem

I would like to ask the authors: the images used for network training are 960×540, but the photos taken by my camera are much larger, e.g. 4608×3456. If I feed such a picture in directly, the disparity map is very poor. If I resize the original image to 960×540, the disparity map looks good, but then I don't know how to recover the disparity values at the original resolution. If I instead crop the high-resolution image (4608×3456) into 960×540 tiles and predict on each tile, the results are also poor. How can I use your network on higher-resolution images when I don't have enough similar data to retrain? I hope the authors can offer some guidance, thank you very much!
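If the goal is just to recover pixel-unit disparities at the original resolution, note that disparity scales with image width: when the 4608×3456 image is resized to 960×540 for inference, the predicted map can be upsampled back and multiplied by the width ratio. A minimal sketch under that assumption:

    import torch.nn.functional as F

    def restore_full_res(disp_lowres, orig_hw=(3456, 4608), net_w=960):
        # disp_lowres: (B, 1, h, w) disparity predicted on the resized input
        disp_full = F.interpolate(disp_lowres, size=orig_hw,
                                  mode='bilinear', align_corners=False)
        return disp_full * (orig_hw[1] / net_w)   # disparity is measured in width pixels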

Tensorflow lite

First of all, congratulations on the results of this project.
I'm interested in testing a TensorFlow Lite version of this model. Do you have one?

A question about spn

Dear @mileyan, when I tried to compile SPN (sh make.sh), I got an error (ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead) and I don't know how to solve it.
My computer environment: Python 3.5, CUDA 9.0.176, PyTorch 0.4.
Looking forward to your reply!

Evaluation problem with the model on KITTI 2015

Hi, I tried to use torchvision.utils.save_image() to save the output tensor from the test, but I got blank images; when I tried converting the tensor to PIL format, the image was still blurry. So how can I get a proper disparity image? I hope to get your answer. Thanks!
[attached image: nearly blank saved disparity]
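One likely cause, stated as an assumption from the symptoms: `save_image` expects values roughly in [0, 1], while the network outputs disparities in pixel units (up to `--maxdisp`), so the raw tensor clamps to white. Dividing by the maximum disparity first (or applying a colormap, as in the earlier sketch) gives a readable image:

    from torchvision.utils import save_image

    def save_disparity(disp, path, maxdisp=192):
        # disp: (H, W) or (1, H, W) disparity tensor in pixels
        disp = disp.float().squeeze().unsqueeze(0)        # -> (1, H, W) grayscale
        save_image(disp / maxdisp, path)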

About the dataset structure

I get an error about the dataset, shown as follows:
Traceback (most recent call last):
File "main.py", line 192, in
main()
File "main.py", line 51, in main
args.datapath)
File "/home/****/Cworkspace/AnyNet/dataloader/listflowfile.py", line 23, in dataloader
monkaa_path = filepath + [x for x in image if 'monkaa' in x][0]
IndexError: list index out of range

The reason may be that I provided the wrong directory structure on the data side. Could you show me what the structure and path of the Scene Flow dataset should be?

Question about depth regression

I was looking at the code of AnyNet and have a question.
When you do disparity regression in AnyNet, you apply it to "-cost". Is there a reason for doing it this way?

"pred_low_res = disparityregression2(-self.maxdisplist[scale]+1, self.maxdisplist[scale], stride=1)(F.softmax(-cost, dim=1))"

Is it estimating depth from two real-time cameras?

Hello,
Thanks for the great work.
I want to ask: does this estimate depth from a single pair of images, or does it estimate depth in real time from two cameras?
Please tell me as soon as possible; I will be very thankful to you.

Training with own data

I'd like to train AnyNet on my own data and implement a ROS Node for it. I couldn't find any details about training the network.

  • Could you please let me know if I need any data apart from rectified stereo images?
  • Do I need to change any code relating to image sizes, and if so, what code do I change?
  • What command do I need to run to train the network?
