sivo's People

Contributors

navganti

sivo's Issues

How to evaluate the error

Hi @navganti,
I use evo to evaluate the translation error (%) and rotation error (deg/m), but my results differ from those reported in the SIVO paper. Can you tell me which evaluation tool you used?
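(Not the authors' answer, just a note for anyone comparing numbers.) The translation (%) and rotation (deg/m) figures in KITTI-style tables are normally the segment-based KITTI odometry metrics, averaged over sub-sequences of 100-800 m; evo's default APE/RPE settings compute something different, which by itself can explain diverging results. A minimal sketch of that segment metric, assuming KITTI-format pose files (one flattened 3x4 camera-to-world matrix per line) and hypothetical file names:

import numpy as np

def load_kitti_poses(path):
    # One pose per line: 12 values of a 3x4 camera-to-world matrix.
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            T = np.eye(4)
            T[:3, :] = np.array(line.split(), dtype=float).reshape(3, 4)
            poses.append(T)
    return poses

def cumulative_distances(poses):
    dists = [0.0]
    for a, b in zip(poses[:-1], poses[1:]):
        dists.append(dists[-1] + np.linalg.norm(b[:3, 3] - a[:3, 3]))
    return dists

def kitti_segment_errors(gt, est, lengths=(100, 200, 300, 400, 500, 600, 700, 800), step=10):
    dists = cumulative_distances(gt)
    t_errs, r_errs = [], []
    for first in range(0, len(gt), step):
        for length in lengths:
            # Find the frame that closes a segment of the requested path length.
            last = next((i for i in range(first, len(gt)) if dists[i] > dists[first] + length), None)
            if last is None:
                continue
            gt_rel = np.linalg.inv(gt[first]) @ gt[last]
            est_rel = np.linalg.inv(est[first]) @ est[last]
            err = np.linalg.inv(est_rel) @ gt_rel
            t_errs.append(np.linalg.norm(err[:3, 3]) / length)
            angle = np.arccos(np.clip((np.trace(err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
            r_errs.append(angle / length)
    # Translation drift in % and rotation drift in deg/m, averaged over all segments.
    return 100.0 * np.mean(t_errs), np.degrees(np.mean(r_errs))

# Hypothetical usage: ground truth and estimated trajectories for sequence 00.
# t_pct, r_deg_per_m = kitti_segment_errors(load_kitti_poses("00_gt.txt"),
#                                            load_kitti_poses("00_sivo.txt"))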

Maybe a Pangolin version error. Can you please give me your Pangolin version?

[100%] Linking CXX executable ../bin/SIVO
../lib/liborbslam.so: undefined reference to 'pangolin::CreateWindowAndBind(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, int, pangolin::Params const&)'
../lib/liborbslam.so: undefined reference to 'pangolin::CreatePanel(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&)'
../lib/liborbslam.so: undefined reference to 'pangolin::BindToContext(std::__cxx11::basic_string<char, std::char_traits, std::allocator >)'
../lib/liborbslam.so: undefined reference to 'pangolin::Split(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, char)'
collect2: error: ld returned 1 exit status
CMakeFiles/SIVO.dir/build.make:218: recipe for target '../bin/SIVO' failed
make[2]: *** [../bin/SIVO] Error 1
CMakeFiles/Makefile2:169: recipe for target 'CMakeFiles/SIVO.dir/all' failed
make[1]: *** [CMakeFiles/SIVO.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error

I have already run ORB-SLAM2, so my Pangolin version should be OK, but when I run build.sh this error occurs. I think changing the Pangolin version might solve it. Can you please tell me your Pangolin version? My email address: [email protected]. Thank you!

batch response: This repository is over its data quota.

Hi, @navganti
Thank you for sharing your work, but I have trouble downloading the SegNet standard model.
I am unable to clone or pull from GitHub because of a git-lfs quota.
When I run "git lfs pull",
I get: batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access. error: failed to fetch some objects from 'https://github.com/navganti/SIVO.git/info/lfs'.
Both the bandwidth and storage of my own Git LFS data are at 0/1.0 GB.

Thanks.

CUDA out of memory?

Hi,

Thanks for sharing your code. I have just built it following your instructions, but I encounter the following error when running:

F0330 14:27:00.461199  5738 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0)  out of memory. 

The system has a 1080 with 8 GB of memory. The batch size in the .prototxt has been set to 2, which is the minimum requirement. May I know the possible causes of this error? Thanks!

Can I run SIVO with indoor datasets?

Hello @navganti,
Thank you for sharing your work. I want to use SIVO on indoor datasets, running it with SegNet-trained weights and models. I tested it using the KITTI dataset with the SUN weights and caffemodel, and I ran into this problem: CUDA out of memory. The batch size in the .prototxt has been set to 2, and I don't know whether the problem is caused by the dataset not matching the weights and caffemodel, or by the image resolution not suiting the model.

I am wondering whether it is feasible for SIVO to run on indoor datasets, and which indoor datasets you would recommend for SIVO.

Looking forward to your reply! Thank you!

Double free or corruption (out)

Hi, thanks for the access to your repository. I tried running it on my hardware; however, after building SIVO successfully, when I try to run it using the commands you mention in the README file, it throws a "double free or corruption" runtime error. Did you ever get this error? Any idea how I should move ahead?

SegNet Standard Model Config Error. # SET SAMPLE SIZE HERE

Hi @navganti ,

Thanks a lot for sharing your work! When I run your program, the basic model performs well for me, but the standard Bayesian SegNet does not work. It turns out to be a model layer size mismatch error. Did you meet this error in your experiments? The following is the running information:

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.


Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!

WARNING: Logging before InitGoogleLogging() is written to STDERR
I1029 20:53:35.092469  6127 upgrade_proto.cpp:67] Attempting to upgrade input file specified using deprecated input fields: config/bayesian_segnet/standard/kitti/bayesian_segnet_kitti.prototxt
I1029 20:53:35.092579  6127 upgrade_proto.cpp:70] Successfully upgraded file specified using deprecated input fields.
W1029 20:53:35.092587  6127 upgrade_proto.cpp:72] Note that future Caffe releases will only support input layers and not input fields.
I1029 20:53:35.093302  6127 net.cpp:58] Initializing net from parameters: 
name: "bayesian_segnet"
state {
  phase: TEST
  level: 0
}
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 2
      dim: 3
      dim: 352
      dim: 1024
    }
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv1_1_bn"
  type: "BN"
  bottom: "conv1_1"
  top: "conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv1_2_bn"
  type: "BN"
  bottom: "conv1_2"
  top: "conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_2"
  top: "pool1"
  top: "pool1_mask"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2_1"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv2_1_bn"
  type: "BN"
  bottom: "conv2_1"
  top: "conv2_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu2_1"
  type: "ReLU"
  bottom: "conv2_1"
  top: "conv2_1"
}
layer {
  name: "conv2_2"
  type: "Convolution"
  bottom: "conv2_1"
  top: "conv2_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv2_2_bn"
  type: "BN"
  bottom: "conv2_2"
  top: "conv2_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu2_2"
  type: "ReLU"
  bottom: "conv2_2"
  top: "conv2_2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_2"
  top: "pool2"
  top: "pool2_mask"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv3_1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_1_bn"
  type: "BN"
  bottom: "conv3_1"
  top: "conv3_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_1"
  type: "ReLU"
  bottom: "conv3_1"
  top: "conv3_1"
}
layer {
  name: "conv3_2"
  type: "Convolution"
  bottom: "conv3_1"
  top: "conv3_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_2_bn"
  type: "BN"
  bottom: "conv3_2"
  top: "conv3_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_2"
  type: "ReLU"
  bottom: "conv3_2"
  top: "conv3_2"
}
layer {
  name: "conv3_3"
  type: "Convolution"
  bottom: "conv3_2"
  top: "conv3_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_3_bn"
  type: "BN"
  bottom: "conv3_3"
  top: "conv3_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_3"
  type: "ReLU"
  bottom: "conv3_3"
  top: "conv3_3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3_3"
  top: "pool3"
  top: "pool3_mask"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "pool3_drop"
  type: "Dropout"
  bottom: "pool3"
  top: "pool3"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "conv4_1"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_1_bn"
  type: "BN"
  bottom: "conv4_1"
  top: "conv4_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_1"
  type: "ReLU"
  bottom: "conv4_1"
  top: "conv4_1"
}
layer {
  name: "conv4_2"
  type: "Convolution"
  bottom: "conv4_1"
  top: "conv4_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_2_bn"
  type: "BN"
  bottom: "conv4_2"
  top: "conv4_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_2"
  type: "ReLU"
  bottom: "conv4_2"
  top: "conv4_2"
}
layer {
  name: "conv4_3"
  type: "Convolution"
  bottom: "conv4_2"
  top: "conv4_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_3_bn"
  type: "BN"
  bottom: "conv4_3"
  top: "conv4_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_3"
  type: "ReLU"
  bottom: "conv4_3"
  top: "conv4_3"
}
layer {
  name: "pool4"
  type: "Pooling"
  bottom: "conv4_3"
  top: "pool4"
  top: "pool4_mask"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "pool4_drop"
  type: "Dropout"
  bottom: "pool4"
  top: "pool4"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "conv5_1"
  type: "Convolution"
  bottom: "pool4"
  top: "conv5_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_1_bn"
  type: "BN"
  bottom: "conv5_1"
  top: "conv5_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_1"
  type: "ReLU"
  bottom: "conv5_1"
  top: "conv5_1"
}
layer {
  name: "conv5_2"
  type: "Convolution"
  bottom: "conv5_1"
  top: "conv5_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_2_bn"
  type: "BN"
  bottom: "conv5_2"
  top: "conv5_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_2"
  type: "ReLU"
  bottom: "conv5_2"
  top: "conv5_2"
}
layer {
  name: "conv5_3"
  type: "Convolution"
  bottom: "conv5_2"
  top: "conv5_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_3_bn"
  type: "BN"
  bottom: "conv5_3"
  top: "conv5_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_3"
  type: "ReLU"
  bottom: "conv5_3"
  top: "conv5_3"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5_3"
  top: "pool5"
  top: "pool5_mask"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "pool5_drop"
  type: "Dropout"
  bottom: "pool5"
  top: "pool5"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "upsample5"
  type: "Upsample"
  bottom: "pool5"
  bottom: "pool5_mask"
  top: "pool5_D"
  upsample_param {
    scale: 2
    upsample_h: 23
    upsample_w: 30
  }
}
layer {
  name: "conv5_3_D"
  type: "Convolution"
  bottom: "pool5_D"
  top: "conv5_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_3_D_bn"
  type: "BN"
  bottom: "conv5_3_D"
  top: "conv5_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_3_D"
  type: "ReLU"
  bottom: "conv5_3_D"
  top: "conv5_3_D"
}
layer {
  name: "conv5_2_D"
  type: "Convolution"
  bottom: "conv5_3_D"
  top: "conv5_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_2_D_bn"
  type: "BN"
  bottom: "conv5_2_D"
  top: "conv5_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_2_D"
  type: "ReLU"
  bottom: "conv5_2_D"
  top: "conv5_2_D"
}
layer {
  name: "conv5_1_D"
  type: "Convolution"
  bottom: "conv5_2_D"
  top: "conv5_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv5_1_D_bn"
  type: "BN"
  bottom: "conv5_1_D"
  top: "conv5_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu5_1_D"
  type: "ReLU"
  bottom: "conv5_1_D"
  top: "conv5_1_D"
}
layer {
  name: "upsample4_drop"
  type: "Dropout"
  bottom: "conv5_1_D"
  top: "conv5_1_D"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "upsample4"
  type: "Upsample"
  bottom: "conv5_1_D"
  bottom: "pool4_mask"
  top: "pool4_D"
  upsample_param {
    scale: 2
    upsample_h: 45
    upsample_w: 60
  }
}
layer {
  name: "conv4_3_D"
  type: "Convolution"
  bottom: "pool4_D"
  top: "conv4_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_3_D_bn"
  type: "BN"
  bottom: "conv4_3_D"
  top: "conv4_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_3_D"
  type: "ReLU"
  bottom: "conv4_3_D"
  top: "conv4_3_D"
}
layer {
  name: "conv4_2_D"
  type: "Convolution"
  bottom: "conv4_3_D"
  top: "conv4_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_2_D_bn"
  type: "BN"
  bottom: "conv4_2_D"
  top: "conv4_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_2_D"
  type: "ReLU"
  bottom: "conv4_2_D"
  top: "conv4_2_D"
}
layer {
  name: "conv4_1_D"
  type: "Convolution"
  bottom: "conv4_2_D"
  top: "conv4_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv4_1_D_bn"
  type: "BN"
  bottom: "conv4_1_D"
  top: "conv4_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu4_1_D"
  type: "ReLU"
  bottom: "conv4_1_D"
  top: "conv4_1_D"
}
layer {
  name: "upsample3_drop"
  type: "Dropout"
  bottom: "conv4_1_D"
  top: "conv4_1_D"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "upsample3"
  type: "Upsample"
  bottom: "conv4_1_D"
  bottom: "pool3_mask"
  top: "pool3_D"
  upsample_param {
    scale: 2
  }
}
layer {
  name: "conv3_3_D"
  type: "Convolution"
  bottom: "pool3_D"
  top: "conv3_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_3_D_bn"
  type: "BN"
  bottom: "conv3_3_D"
  top: "conv3_3_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_3_D"
  type: "ReLU"
  bottom: "conv3_3_D"
  top: "conv3_3_D"
}
layer {
  name: "conv3_2_D"
  type: "Convolution"
  bottom: "conv3_3_D"
  top: "conv3_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_2_D_bn"
  type: "BN"
  bottom: "conv3_2_D"
  top: "conv3_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_2_D"
  type: "ReLU"
  bottom: "conv3_2_D"
  top: "conv3_2_D"
}
layer {
  name: "conv3_1_D"
  type: "Convolution"
  bottom: "conv3_2_D"
  top: "conv3_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv3_1_D_bn"
  type: "BN"
  bottom: "conv3_1_D"
  top: "conv3_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu3_1_D"
  type: "ReLU"
  bottom: "conv3_1_D"
  top: "conv3_1_D"
}
layer {
  name: "upsample2_drop"
  type: "Dropout"
  bottom: "conv3_1_D"
  top: "conv3_1_D"
  dropout_param {
    dropout_ratio: 0.5
    sample_weights_test: true
  }
}
layer {
  name: "upsample2"
  type: "Upsample"
  bottom: "conv3_1_D"
  bottom: "pool2_mask"
  top: "pool2_D"
  upsample_param {
    scale: 2
  }
}
layer {
  name: "conv2_2_D"
  type: "Convolution"
  bottom: "pool2_D"
  top: "conv2_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv2_2_D_bn"
  type: "BN"
  bottom: "conv2_2_D"
  top: "conv2_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu2_2_D"
  type: "ReLU"
  bottom: "conv2_2_D"
  top: "conv2_2_D"
}
layer {
  name: "conv2_1_D"
  type: "Convolution"
  bottom: "conv2_2_D"
  top: "conv2_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv2_1_D_bn"
  type: "BN"
  bottom: "conv2_1_D"
  top: "conv2_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu2_1_D"
  type: "ReLU"
  bottom: "conv2_1_D"
  top: "conv2_1_D"
}
layer {
  name: "upsample1"
  type: "Upsample"
  bottom: "conv2_1_D"
  bottom: "pool1_mask"
  top: "pool1_D"
  upsample_param {
    scale: 2
  }
}
layer {
  name: "conv1_2_D"
  type: "Convolution"
  bottom: "pool1_D"
  top: "conv1_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv1_2_D_bn"
  type: "BN"
  bottom: "conv1_2_D"
  top: "conv1_2_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
    bn_mode: INFERENCE
  }
}
layer {
  name: "relu1_2_D"
  type: "ReLU"
  bottom: "conv1_2_D"
  top: "conv1_2_D"
}
layer {
  name: "conv1_1_D"
  type: "Convolution"
  bottom: "conv1_2_D"
  top: "conv1_1_D"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 15
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "conv1_1_D"
  top: "prob"
  softmax_param {
    engine: CAFFE
  }
}
I1029 20:53:35.094120  6127 layer_factory.hpp:77] Creating layer input
I1029 20:53:35.094143  6127 net.cpp:100] Creating Layer input
I1029 20:53:35.094153  6127 net.cpp:408] input -> data
I1029 20:53:35.101982  6127 net.cpp:150] Setting up input
I1029 20:53:35.102032  6127 net.cpp:157] Top shape: 2 3 352 1024 (2162688)
I1029 20:53:35.102036  6127 net.cpp:165] Memory required for data: 8650752
I1029 20:53:35.102053  6127 layer_factory.hpp:77] Creating layer conv1_1
I1029 20:53:35.102109  6127 net.cpp:100] Creating Layer conv1_1
I1029 20:53:35.102130  6127 net.cpp:434] conv1_1 <- data
I1029 20:53:35.102143  6127 net.cpp:408] conv1_1 -> conv1_1
I1029 20:53:35.290997  6127 net.cpp:150] Setting up conv1_1
I1029 20:53:35.291038  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.291045  6127 net.cpp:165] Memory required for data: 193200128
I1029 20:53:35.291100  6127 layer_factory.hpp:77] Creating layer conv1_1_bn
I1029 20:53:35.291123  6127 net.cpp:100] Creating Layer conv1_1_bn
I1029 20:53:35.291128  6127 net.cpp:434] conv1_1_bn <- conv1_1
I1029 20:53:35.291136  6127 net.cpp:395] conv1_1_bn -> conv1_1 (in-place)
I1029 20:53:35.292418  6127 net.cpp:150] Setting up conv1_1_bn
I1029 20:53:35.292439  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.292443  6127 net.cpp:165] Memory required for data: 377749504
I1029 20:53:35.292455  6127 layer_factory.hpp:77] Creating layer relu1_1
I1029 20:53:35.292464  6127 net.cpp:100] Creating Layer relu1_1
I1029 20:53:35.292469  6127 net.cpp:434] relu1_1 <- conv1_1
I1029 20:53:35.292474  6127 net.cpp:395] relu1_1 -> conv1_1 (in-place)
I1029 20:53:35.292676  6127 net.cpp:150] Setting up relu1_1
I1029 20:53:35.292686  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.292690  6127 net.cpp:165] Memory required for data: 562298880
I1029 20:53:35.292693  6127 layer_factory.hpp:77] Creating layer conv1_2
I1029 20:53:35.292706  6127 net.cpp:100] Creating Layer conv1_2
I1029 20:53:35.292709  6127 net.cpp:434] conv1_2 <- conv1_1
I1029 20:53:35.292716  6127 net.cpp:408] conv1_2 -> conv1_2
I1029 20:53:35.294865  6127 net.cpp:150] Setting up conv1_2
I1029 20:53:35.294893  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.294898  6127 net.cpp:165] Memory required for data: 746848256
I1029 20:53:35.294909  6127 layer_factory.hpp:77] Creating layer conv1_2_bn
I1029 20:53:35.294920  6127 net.cpp:100] Creating Layer conv1_2_bn
I1029 20:53:35.294924  6127 net.cpp:434] conv1_2_bn <- conv1_2
I1029 20:53:35.294930  6127 net.cpp:395] conv1_2_bn -> conv1_2 (in-place)
I1029 20:53:35.296283  6127 net.cpp:150] Setting up conv1_2_bn
I1029 20:53:35.296311  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.296315  6127 net.cpp:165] Memory required for data: 931397632
I1029 20:53:35.296324  6127 layer_factory.hpp:77] Creating layer relu1_2
I1029 20:53:35.296334  6127 net.cpp:100] Creating Layer relu1_2
I1029 20:53:35.296339  6127 net.cpp:434] relu1_2 <- conv1_2
I1029 20:53:35.296345  6127 net.cpp:395] relu1_2 -> conv1_2 (in-place)
I1029 20:53:35.296674  6127 net.cpp:150] Setting up relu1_2
I1029 20:53:35.296684  6127 net.cpp:157] Top shape: 2 64 352 1024 (46137344)
I1029 20:53:35.296699  6127 net.cpp:165] Memory required for data: 1115947008
I1029 20:53:35.296702  6127 layer_factory.hpp:77] Creating layer pool1
I1029 20:53:35.296706  6127 layer_factory.cpp:91] cuDNN does not support multiple tops. Using Caffe's own pooling layer.
I1029 20:53:35.296712  6127 net.cpp:100] Creating Layer pool1
I1029 20:53:35.296717  6127 net.cpp:434] pool1 <- conv1_2
I1029 20:53:35.296723  6127 net.cpp:408] pool1 -> pool1
I1029 20:53:35.296741  6127 net.cpp:408] pool1 -> pool1_mask
I1029 20:53:35.296797  6127 net.cpp:150] Setting up pool1
I1029 20:53:35.296814  6127 net.cpp:157] Top shape: 2 64 176 512 (11534336)
I1029 20:53:35.296819  6127 net.cpp:157] Top shape: 2 64 176 512 (11534336)
I1029 20:53:35.296823  6127 net.cpp:165] Memory required for data: 1208221696
I1029 20:53:35.296826  6127 layer_factory.hpp:77] Creating layer conv2_1
I1029 20:53:35.296839  6127 net.cpp:100] Creating Layer conv2_1
I1029 20:53:35.296844  6127 net.cpp:434] conv2_1 <- pool1
I1029 20:53:35.296855  6127 net.cpp:408] conv2_1 -> conv2_1
I1029 20:53:35.299191  6127 net.cpp:150] Setting up conv2_1
I1029 20:53:35.299223  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.299226  6127 net.cpp:165] Memory required for data: 1300496384
I1029 20:53:35.299237  6127 layer_factory.hpp:77] Creating layer conv2_1_bn
I1029 20:53:35.299245  6127 net.cpp:100] Creating Layer conv2_1_bn
I1029 20:53:35.299249  6127 net.cpp:434] conv2_1_bn <- conv2_1
I1029 20:53:35.299255  6127 net.cpp:395] conv2_1_bn -> conv2_1 (in-place)
I1029 20:53:35.299832  6127 net.cpp:150] Setting up conv2_1_bn
I1029 20:53:35.299844  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.299847  6127 net.cpp:165] Memory required for data: 1392771072
I1029 20:53:35.299854  6127 layer_factory.hpp:77] Creating layer relu2_1
I1029 20:53:35.299861  6127 net.cpp:100] Creating Layer relu2_1
I1029 20:53:35.299865  6127 net.cpp:434] relu2_1 <- conv2_1
I1029 20:53:35.299871  6127 net.cpp:395] relu2_1 -> conv2_1 (in-place)
I1029 20:53:35.300143  6127 net.cpp:150] Setting up relu2_1
I1029 20:53:35.300154  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.300158  6127 net.cpp:165] Memory required for data: 1485045760
I1029 20:53:35.300161  6127 layer_factory.hpp:77] Creating layer conv2_2
I1029 20:53:35.300173  6127 net.cpp:100] Creating Layer conv2_2
I1029 20:53:35.300176  6127 net.cpp:434] conv2_2 <- conv2_1
I1029 20:53:35.300184  6127 net.cpp:408] conv2_2 -> conv2_2
I1029 20:53:35.302278  6127 net.cpp:150] Setting up conv2_2
I1029 20:53:35.302292  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.302296  6127 net.cpp:165] Memory required for data: 1577320448
I1029 20:53:35.302304  6127 layer_factory.hpp:77] Creating layer conv2_2_bn
I1029 20:53:35.302314  6127 net.cpp:100] Creating Layer conv2_2_bn
I1029 20:53:35.302318  6127 net.cpp:434] conv2_2_bn <- conv2_2
I1029 20:53:35.302326  6127 net.cpp:395] conv2_2_bn -> conv2_2 (in-place)
I1029 20:53:35.302934  6127 net.cpp:150] Setting up conv2_2_bn
I1029 20:53:35.302947  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.302959  6127 net.cpp:165] Memory required for data: 1669595136
I1029 20:53:35.302965  6127 layer_factory.hpp:77] Creating layer relu2_2
I1029 20:53:35.302983  6127 net.cpp:100] Creating Layer relu2_2
I1029 20:53:35.302986  6127 net.cpp:434] relu2_2 <- conv2_2
I1029 20:53:35.302991  6127 net.cpp:395] relu2_2 -> conv2_2 (in-place)
I1029 20:53:35.303185  6127 net.cpp:150] Setting up relu2_2
I1029 20:53:35.303194  6127 net.cpp:157] Top shape: 2 128 176 512 (23068672)
I1029 20:53:35.303206  6127 net.cpp:165] Memory required for data: 1761869824
I1029 20:53:35.303210  6127 layer_factory.hpp:77] Creating layer pool2
I1029 20:53:35.303215  6127 layer_factory.cpp:91] cuDNN does not support multiple tops. Using Caffe's own pooling layer.
I1029 20:53:35.303233  6127 net.cpp:100] Creating Layer pool2
I1029 20:53:35.303236  6127 net.cpp:434] pool2 <- conv2_2
I1029 20:53:35.303241  6127 net.cpp:408] pool2 -> pool2
I1029 20:53:35.303247  6127 net.cpp:408] pool2 -> pool2_mask
I1029 20:53:35.303294  6127 net.cpp:150] Setting up pool2
I1029 20:53:35.303303  6127 net.cpp:157] Top shape: 2 128 88 256 (5767168)
I1029 20:53:35.303318  6127 net.cpp:157] Top shape: 2 128 88 256 (5767168)
I1029 20:53:35.303320  6127 net.cpp:165] Memory required for data: 1808007168
I1029 20:53:35.303324  6127 layer_factory.hpp:77] Creating layer conv3_1
I1029 20:53:35.303333  6127 net.cpp:100] Creating Layer conv3_1
I1029 20:53:35.303337  6127 net.cpp:434] conv3_1 <- pool2
I1029 20:53:35.303344  6127 net.cpp:408] conv3_1 -> conv3_1
I1029 20:53:35.306005  6127 net.cpp:150] Setting up conv3_1
I1029 20:53:35.306030  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.306032  6127 net.cpp:165] Memory required for data: 1854144512
I1029 20:53:35.306044  6127 layer_factory.hpp:77] Creating layer conv3_1_bn
I1029 20:53:35.306056  6127 net.cpp:100] Creating Layer conv3_1_bn
I1029 20:53:35.306059  6127 net.cpp:434] conv3_1_bn <- conv3_1
I1029 20:53:35.306066  6127 net.cpp:395] conv3_1_bn -> conv3_1 (in-place)
I1029 20:53:35.306267  6127 net.cpp:150] Setting up conv3_1_bn
I1029 20:53:35.306275  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.306288  6127 net.cpp:165] Memory required for data: 1900281856
I1029 20:53:35.306293  6127 layer_factory.hpp:77] Creating layer relu3_1
I1029 20:53:35.306300  6127 net.cpp:100] Creating Layer relu3_1
I1029 20:53:35.306303  6127 net.cpp:434] relu3_1 <- conv3_1
I1029 20:53:35.306308  6127 net.cpp:395] relu3_1 -> conv3_1 (in-place)
I1029 20:53:35.306589  6127 net.cpp:150] Setting up relu3_1
I1029 20:53:35.306599  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.306612  6127 net.cpp:165] Memory required for data: 1946419200
I1029 20:53:35.306615  6127 layer_factory.hpp:77] Creating layer conv3_2
I1029 20:53:35.306627  6127 net.cpp:100] Creating Layer conv3_2
I1029 20:53:35.306630  6127 net.cpp:434] conv3_2 <- conv3_1
I1029 20:53:35.306638  6127 net.cpp:408] conv3_2 -> conv3_2
I1029 20:53:35.310780  6127 net.cpp:150] Setting up conv3_2
I1029 20:53:35.310804  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.310809  6127 net.cpp:165] Memory required for data: 1992556544
I1029 20:53:35.310832  6127 layer_factory.hpp:77] Creating layer conv3_2_bn
I1029 20:53:35.310842  6127 net.cpp:100] Creating Layer conv3_2_bn
I1029 20:53:35.310847  6127 net.cpp:434] conv3_2_bn <- conv3_2
I1029 20:53:35.310854  6127 net.cpp:395] conv3_2_bn -> conv3_2 (in-place)
I1029 20:53:35.311046  6127 net.cpp:150] Setting up conv3_2_bn
I1029 20:53:35.311053  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.311056  6127 net.cpp:165] Memory required for data: 2038693888
I1029 20:53:35.311072  6127 layer_factory.hpp:77] Creating layer relu3_2
I1029 20:53:35.311079  6127 net.cpp:100] Creating Layer relu3_2
I1029 20:53:35.311082  6127 net.cpp:434] relu3_2 <- conv3_2
I1029 20:53:35.311089  6127 net.cpp:395] relu3_2 -> conv3_2 (in-place)
I1029 20:53:35.311360  6127 net.cpp:150] Setting up relu3_2
I1029 20:53:35.311372  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.311385  6127 net.cpp:165] Memory required for data: 2084831232
I1029 20:53:35.311388  6127 layer_factory.hpp:77] Creating layer conv3_3
I1029 20:53:35.311399  6127 net.cpp:100] Creating Layer conv3_3
I1029 20:53:35.311403  6127 net.cpp:434] conv3_3 <- conv3_2
I1029 20:53:35.311410  6127 net.cpp:408] conv3_3 -> conv3_3
I1029 20:53:35.316313  6127 net.cpp:150] Setting up conv3_3
I1029 20:53:35.316332  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.316352  6127 net.cpp:165] Memory required for data: 2130968576
I1029 20:53:35.316360  6127 layer_factory.hpp:77] Creating layer conv3_3_bn
I1029 20:53:35.316371  6127 net.cpp:100] Creating Layer conv3_3_bn
I1029 20:53:35.316375  6127 net.cpp:434] conv3_3_bn <- conv3_3
I1029 20:53:35.316382  6127 net.cpp:395] conv3_3_bn -> conv3_3 (in-place)
I1029 20:53:35.316571  6127 net.cpp:150] Setting up conv3_3_bn
I1029 20:53:35.316578  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.316582  6127 net.cpp:165] Memory required for data: 2177105920
I1029 20:53:35.316586  6127 layer_factory.hpp:77] Creating layer relu3_3
I1029 20:53:35.316593  6127 net.cpp:100] Creating Layer relu3_3
I1029 20:53:35.316596  6127 net.cpp:434] relu3_3 <- conv3_3
I1029 20:53:35.316603  6127 net.cpp:395] relu3_3 -> conv3_3 (in-place)
I1029 20:53:35.316762  6127 net.cpp:150] Setting up relu3_3
I1029 20:53:35.316771  6127 net.cpp:157] Top shape: 2 256 88 256 (11534336)
I1029 20:53:35.316774  6127 net.cpp:165] Memory required for data: 2223243264
I1029 20:53:35.316778  6127 layer_factory.hpp:77] Creating layer pool3
I1029 20:53:35.316782  6127 layer_factory.cpp:91] cuDNN does not support multiple tops. Using Caffe's own pooling layer.
I1029 20:53:35.316788  6127 net.cpp:100] Creating Layer pool3
I1029 20:53:35.316792  6127 net.cpp:434] pool3 <- conv3_3
I1029 20:53:35.316797  6127 net.cpp:408] pool3 -> pool3
I1029 20:53:35.316804  6127 net.cpp:408] pool3 -> pool3_mask
I1029 20:53:35.316841  6127 net.cpp:150] Setting up pool3
I1029 20:53:35.316848  6127 net.cpp:157] Top shape: 2 256 44 128 (2883584)
I1029 20:53:35.316851  6127 net.cpp:157] Top shape: 2 256 44 128 (2883584)
I1029 20:53:35.316855  6127 net.cpp:165] Memory required for data: 2246311936
I1029 20:53:35.316859  6127 layer_factory.hpp:77] Creating layer pool3_drop
I1029 20:53:35.316865  6127 net.cpp:100] Creating Layer pool3_drop
I1029 20:53:35.316869  6127 net.cpp:434] pool3_drop <- pool3
I1029 20:53:35.316874  6127 net.cpp:395] pool3_drop -> pool3 (in-place)
I1029 20:53:35.316908  6127 net.cpp:150] Setting up pool3_drop
I1029 20:53:35.316915  6127 net.cpp:157] Top shape: 2 256 44 128 (2883584)
I1029 20:53:35.316918  6127 net.cpp:165] Memory required for data: 2257846272
I1029 20:53:35.316921  6127 layer_factory.hpp:77] Creating layer conv4_1
I1029 20:53:35.316932  6127 net.cpp:100] Creating Layer conv4_1
I1029 20:53:35.316936  6127 net.cpp:434] conv4_1 <- pool3
I1029 20:53:35.316941  6127 net.cpp:408] conv4_1 -> conv4_1
I1029 20:53:35.325304  6127 net.cpp:150] Setting up conv4_1
I1029 20:53:35.325330  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.325333  6127 net.cpp:165] Memory required for data: 2280914944
I1029 20:53:35.325343  6127 layer_factory.hpp:77] Creating layer conv4_1_bn
I1029 20:53:35.325367  6127 net.cpp:100] Creating Layer conv4_1_bn
I1029 20:53:35.325374  6127 net.cpp:434] conv4_1_bn <- conv4_1
I1029 20:53:35.325382  6127 net.cpp:395] conv4_1_bn -> conv4_1 (in-place)
I1029 20:53:35.325567  6127 net.cpp:150] Setting up conv4_1_bn
I1029 20:53:35.325574  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.325587  6127 net.cpp:165] Memory required for data: 2303983616
I1029 20:53:35.325592  6127 layer_factory.hpp:77] Creating layer relu4_1
I1029 20:53:35.325600  6127 net.cpp:100] Creating Layer relu4_1
I1029 20:53:35.325604  6127 net.cpp:434] relu4_1 <- conv4_1
I1029 20:53:35.325608  6127 net.cpp:395] relu4_1 -> conv4_1 (in-place)
I1029 20:53:35.326001  6127 net.cpp:150] Setting up relu4_1
I1029 20:53:35.326012  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.326025  6127 net.cpp:165] Memory required for data: 2327052288
I1029 20:53:35.326028  6127 layer_factory.hpp:77] Creating layer conv4_2
I1029 20:53:35.326040  6127 net.cpp:100] Creating Layer conv4_2
I1029 20:53:35.326045  6127 net.cpp:434] conv4_2 <- conv4_1
I1029 20:53:35.326051  6127 net.cpp:408] conv4_2 -> conv4_2
I1029 20:53:35.340030  6127 net.cpp:150] Setting up conv4_2
I1029 20:53:35.340067  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.340071  6127 net.cpp:165] Memory required for data: 2350120960
I1029 20:53:35.340090  6127 layer_factory.hpp:77] Creating layer conv4_2_bn
I1029 20:53:35.340102  6127 net.cpp:100] Creating Layer conv4_2_bn
I1029 20:53:35.340107  6127 net.cpp:434] conv4_2_bn <- conv4_2
I1029 20:53:35.340114  6127 net.cpp:395] conv4_2_bn -> conv4_2 (in-place)
I1029 20:53:35.340299  6127 net.cpp:150] Setting up conv4_2_bn
I1029 20:53:35.340306  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.340309  6127 net.cpp:165] Memory required for data: 2373189632
I1029 20:53:35.340313  6127 layer_factory.hpp:77] Creating layer relu4_2
I1029 20:53:35.340319  6127 net.cpp:100] Creating Layer relu4_2
I1029 20:53:35.340323  6127 net.cpp:434] relu4_2 <- conv4_2
I1029 20:53:35.340339  6127 net.cpp:395] relu4_2 -> conv4_2 (in-place)
I1029 20:53:35.340616  6127 net.cpp:150] Setting up relu4_2
I1029 20:53:35.340626  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.340629  6127 net.cpp:165] Memory required for data: 2396258304
I1029 20:53:35.340632  6127 layer_factory.hpp:77] Creating layer conv4_3
I1029 20:53:35.340663  6127 net.cpp:100] Creating Layer conv4_3
I1029 20:53:35.340667  6127 net.cpp:434] conv4_3 <- conv4_2
I1029 20:53:35.340683  6127 net.cpp:408] conv4_3 -> conv4_3
I1029 20:53:35.357230  6127 net.cpp:150] Setting up conv4_3
I1029 20:53:35.357260  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.357265  6127 net.cpp:165] Memory required for data: 2419326976
I1029 20:53:35.357290  6127 layer_factory.hpp:77] Creating layer conv4_3_bn
I1029 20:53:35.357306  6127 net.cpp:100] Creating Layer conv4_3_bn
I1029 20:53:35.357312  6127 net.cpp:434] conv4_3_bn <- conv4_3
I1029 20:53:35.357322  6127 net.cpp:395] conv4_3_bn -> conv4_3 (in-place)
I1029 20:53:35.357561  6127 net.cpp:150] Setting up conv4_3_bn
I1029 20:53:35.357569  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.357587  6127 net.cpp:165] Memory required for data: 2442395648
I1029 20:53:35.357594  6127 layer_factory.hpp:77] Creating layer relu4_3
I1029 20:53:35.357610  6127 net.cpp:100] Creating Layer relu4_3
I1029 20:53:35.357625  6127 net.cpp:434] relu4_3 <- conv4_3
I1029 20:53:35.357630  6127 net.cpp:395] relu4_3 -> conv4_3 (in-place)
I1029 20:53:35.357844  6127 net.cpp:150] Setting up relu4_3
I1029 20:53:35.357854  6127 net.cpp:157] Top shape: 2 512 44 128 (5767168)
I1029 20:53:35.357858  6127 net.cpp:165] Memory required for data: 2465464320
I1029 20:53:35.357862  6127 layer_factory.hpp:77] Creating layer pool4
I1029 20:53:35.357867  6127 layer_factory.cpp:91] cuDNN does not support multiple tops. Using Caffe's own pooling layer.
I1029 20:53:35.357887  6127 net.cpp:100] Creating Layer pool4
I1029 20:53:35.357892  6127 net.cpp:434] pool4 <- conv4_3
I1029 20:53:35.357898  6127 net.cpp:408] pool4 -> pool4
I1029 20:53:35.357908  6127 net.cpp:408] pool4 -> pool4_mask
I1029 20:53:35.357961  6127 net.cpp:150] Setting up pool4
I1029 20:53:35.357969  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.357985  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.357990  6127 net.cpp:165] Memory required for data: 2476998656
I1029 20:53:35.357995  6127 layer_factory.hpp:77] Creating layer pool4_drop
I1029 20:53:35.358003  6127 net.cpp:100] Creating Layer pool4_drop
I1029 20:53:35.358009  6127 net.cpp:434] pool4_drop <- pool4
I1029 20:53:35.358016  6127 net.cpp:395] pool4_drop -> pool4 (in-place)
I1029 20:53:35.358044  6127 net.cpp:150] Setting up pool4_drop
I1029 20:53:35.358060  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.358074  6127 net.cpp:165] Memory required for data: 2482765824
I1029 20:53:35.358079  6127 layer_factory.hpp:77] Creating layer conv5_1
I1029 20:53:35.358098  6127 net.cpp:100] Creating Layer conv5_1
I1029 20:53:35.358104  6127 net.cpp:434] conv5_1 <- pool4
I1029 20:53:35.358115  6127 net.cpp:408] conv5_1 -> conv5_1
I1029 20:53:35.380146  6127 net.cpp:150] Setting up conv5_1
I1029 20:53:35.380197  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.380204  6127 net.cpp:165] Memory required for data: 2488532992
I1029 20:53:35.380216  6127 layer_factory.hpp:77] Creating layer conv5_1_bn
I1029 20:53:35.380231  6127 net.cpp:100] Creating Layer conv5_1_bn
I1029 20:53:35.380249  6127 net.cpp:434] conv5_1_bn <- conv5_1
I1029 20:53:35.380268  6127 net.cpp:395] conv5_1_bn -> conv5_1 (in-place)
I1029 20:53:35.380529  6127 net.cpp:150] Setting up conv5_1_bn
I1029 20:53:35.380537  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.380551  6127 net.cpp:165] Memory required for data: 2494300160
I1029 20:53:35.380569  6127 layer_factory.hpp:77] Creating layer relu5_1
I1029 20:53:35.380578  6127 net.cpp:100] Creating Layer relu5_1
I1029 20:53:35.380584  6127 net.cpp:434] relu5_1 <- conv5_1
I1029 20:53:35.380590  6127 net.cpp:395] relu5_1 -> conv5_1 (in-place)
I1029 20:53:35.380939  6127 net.cpp:150] Setting up relu5_1
I1029 20:53:35.380959  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.380964  6127 net.cpp:165] Memory required for data: 2500067328
I1029 20:53:35.380969  6127 layer_factory.hpp:77] Creating layer conv5_2
I1029 20:53:35.380993  6127 net.cpp:100] Creating Layer conv5_2
I1029 20:53:35.381000  6127 net.cpp:434] conv5_2 <- conv5_1
I1029 20:53:35.381009  6127 net.cpp:408] conv5_2 -> conv5_2
I1029 20:53:35.394979  6127 net.cpp:150] Setting up conv5_2
I1029 20:53:35.395015  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.395020  6127 net.cpp:165] Memory required for data: 2505834496
I1029 20:53:35.395038  6127 layer_factory.hpp:77] Creating layer conv5_2_bn
I1029 20:53:35.395051  6127 net.cpp:100] Creating Layer conv5_2_bn
I1029 20:53:35.395056  6127 net.cpp:434] conv5_2_bn <- conv5_2
I1029 20:53:35.395063  6127 net.cpp:395] conv5_2_bn -> conv5_2 (in-place)
I1029 20:53:35.395256  6127 net.cpp:150] Setting up conv5_2_bn
I1029 20:53:35.395263  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.395277  6127 net.cpp:165] Memory required for data: 2511601664
I1029 20:53:35.395282  6127 layer_factory.hpp:77] Creating layer relu5_2
I1029 20:53:35.395288  6127 net.cpp:100] Creating Layer relu5_2
I1029 20:53:35.395292  6127 net.cpp:434] relu5_2 <- conv5_2
I1029 20:53:35.395298  6127 net.cpp:395] relu5_2 -> conv5_2 (in-place)
I1029 20:53:35.395965  6127 net.cpp:150] Setting up relu5_2
I1029 20:53:35.395975  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.395988  6127 net.cpp:165] Memory required for data: 2517368832
I1029 20:53:35.395992  6127 layer_factory.hpp:77] Creating layer conv5_3
I1029 20:53:35.396013  6127 net.cpp:100] Creating Layer conv5_3
I1029 20:53:35.396018  6127 net.cpp:434] conv5_3 <- conv5_2
I1029 20:53:35.396025  6127 net.cpp:408] conv5_3 -> conv5_3
I1029 20:53:35.409796  6127 net.cpp:150] Setting up conv5_3
I1029 20:53:35.409823  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.409827  6127 net.cpp:165] Memory required for data: 2523136000
I1029 20:53:35.409837  6127 layer_factory.hpp:77] Creating layer conv5_3_bn
I1029 20:53:35.409859  6127 net.cpp:100] Creating Layer conv5_3_bn
I1029 20:53:35.409864  6127 net.cpp:434] conv5_3_bn <- conv5_3
I1029 20:53:35.409880  6127 net.cpp:395] conv5_3_bn -> conv5_3 (in-place)
I1029 20:53:35.410104  6127 net.cpp:150] Setting up conv5_3_bn
I1029 20:53:35.410112  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.410125  6127 net.cpp:165] Memory required for data: 2528903168
I1029 20:53:35.410131  6127 layer_factory.hpp:77] Creating layer relu5_3
I1029 20:53:35.410138  6127 net.cpp:100] Creating Layer relu5_3
I1029 20:53:35.410142  6127 net.cpp:434] relu5_3 <- conv5_3
I1029 20:53:35.410147  6127 net.cpp:395] relu5_3 -> conv5_3 (in-place)
I1029 20:53:35.410320  6127 net.cpp:150] Setting up relu5_3
I1029 20:53:35.410328  6127 net.cpp:157] Top shape: 2 512 22 64 (1441792)
I1029 20:53:35.410331  6127 net.cpp:165] Memory required for data: 2534670336
I1029 20:53:35.410334  6127 layer_factory.hpp:77] Creating layer pool5
I1029 20:53:35.410337  6127 layer_factory.cpp:91] cuDNN does not support multiple tops. Using Caffe's own pooling layer.
I1029 20:53:35.410343  6127 net.cpp:100] Creating Layer pool5
I1029 20:53:35.410359  6127 net.cpp:434] pool5 <- conv5_3
I1029 20:53:35.410365  6127 net.cpp:408] pool5 -> pool5
I1029 20:53:35.410372  6127 net.cpp:408] pool5 -> pool5_mask
I1029 20:53:35.410421  6127 net.cpp:150] Setting up pool5
I1029 20:53:35.410429  6127 net.cpp:157] Top shape: 2 512 11 32 (360448)
I1029 20:53:35.410432  6127 net.cpp:157] Top shape: 2 512 11 32 (360448)
I1029 20:53:35.410434  6127 net.cpp:165] Memory required for data: 2537553920
I1029 20:53:35.410437  6127 layer_factory.hpp:77] Creating layer pool5_drop
I1029 20:53:35.410444  6127 net.cpp:100] Creating Layer pool5_drop
I1029 20:53:35.410457  6127 net.cpp:434] pool5_drop <- pool5
I1029 20:53:35.410464  6127 net.cpp:395] pool5_drop -> pool5 (in-place)
I1029 20:53:35.410495  6127 net.cpp:150] Setting up pool5_drop
I1029 20:53:35.410501  6127 net.cpp:157] Top shape: 2 512 11 32 (360448)
I1029 20:53:35.410513  6127 net.cpp:165] Memory required for data: 2538995712
I1029 20:53:35.410516  6127 layer_factory.hpp:77] Creating layer upsample5
I1029 20:53:35.410537  6127 net.cpp:100] Creating Layer upsample5
I1029 20:53:35.410552  6127 net.cpp:434] upsample5 <- pool5
I1029 20:53:35.410557  6127 net.cpp:434] upsample5 <- pool5_mask
I1029 20:53:35.410562  6127 net.cpp:408] upsample5 -> pool5_D
I1029 20:53:35.410598  6127 net.cpp:150] Setting up upsample5
I1029 20:53:35.410604  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.410609  6127 net.cpp:165] Memory required for data: 2541821952
I1029 20:53:35.410611  6127 layer_factory.hpp:77] Creating layer conv5_3_D
I1029 20:53:35.410622  6127 net.cpp:100] Creating Layer conv5_3_D
I1029 20:53:35.410626  6127 net.cpp:434] conv5_3_D <- pool5_D
I1029 20:53:35.410632  6127 net.cpp:408] conv5_3_D -> conv5_3_D
I1029 20:53:35.424585  6127 net.cpp:150] Setting up conv5_3_D
I1029 20:53:35.424613  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.424629  6127 net.cpp:165] Memory required for data: 2544648192
I1029 20:53:35.424639  6127 layer_factory.hpp:77] Creating layer conv5_3_D_bn
I1029 20:53:35.424652  6127 net.cpp:100] Creating Layer conv5_3_D_bn
I1029 20:53:35.424659  6127 net.cpp:434] conv5_3_D_bn <- conv5_3_D
I1029 20:53:35.424665  6127 net.cpp:395] conv5_3_D_bn -> conv5_3_D (in-place)
I1029 20:53:35.424873  6127 net.cpp:150] Setting up conv5_3_D_bn
I1029 20:53:35.424880  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.424883  6127 net.cpp:165] Memory required for data: 2547474432
I1029 20:53:35.424888  6127 layer_factory.hpp:77] Creating layer relu5_3_D
I1029 20:53:35.424896  6127 net.cpp:100] Creating Layer relu5_3_D
I1029 20:53:35.424909  6127 net.cpp:434] relu5_3_D <- conv5_3_D
I1029 20:53:35.424914  6127 net.cpp:395] relu5_3_D -> conv5_3_D (in-place)
I1029 20:53:35.425227  6127 net.cpp:150] Setting up relu5_3_D
I1029 20:53:35.425238  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.425252  6127 net.cpp:165] Memory required for data: 2550300672
I1029 20:53:35.425257  6127 layer_factory.hpp:77] Creating layer conv5_2_D
I1029 20:53:35.425268  6127 net.cpp:100] Creating Layer conv5_2_D
I1029 20:53:35.425273  6127 net.cpp:434] conv5_2_D <- conv5_3_D
I1029 20:53:35.425279  6127 net.cpp:408] conv5_2_D -> conv5_2_D
I1029 20:53:35.439083  6127 net.cpp:150] Setting up conv5_2_D
I1029 20:53:35.439112  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.439117  6127 net.cpp:165] Memory required for data: 2553126912
I1029 20:53:35.439128  6127 layer_factory.hpp:77] Creating layer conv5_2_D_bn
I1029 20:53:35.439141  6127 net.cpp:100] Creating Layer conv5_2_D_bn
I1029 20:53:35.439146  6127 net.cpp:434] conv5_2_D_bn <- conv5_2_D
I1029 20:53:35.439157  6127 net.cpp:395] conv5_2_D_bn -> conv5_2_D (in-place)
I1029 20:53:35.439373  6127 net.cpp:150] Setting up conv5_2_D_bn
I1029 20:53:35.439380  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.439394  6127 net.cpp:165] Memory required for data: 2555953152
I1029 20:53:35.439400  6127 layer_factory.hpp:77] Creating layer relu5_2_D
I1029 20:53:35.439407  6127 net.cpp:100] Creating Layer relu5_2_D
I1029 20:53:35.439411  6127 net.cpp:434] relu5_2_D <- conv5_2_D
I1029 20:53:35.439417  6127 net.cpp:395] relu5_2_D -> conv5_2_D (in-place)
I1029 20:53:35.439730  6127 net.cpp:150] Setting up relu5_2_D
I1029 20:53:35.439743  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.439756  6127 net.cpp:165] Memory required for data: 2558779392
I1029 20:53:35.439760  6127 layer_factory.hpp:77] Creating layer conv5_1_D
I1029 20:53:35.439774  6127 net.cpp:100] Creating Layer conv5_1_D
I1029 20:53:35.439779  6127 net.cpp:434] conv5_1_D <- conv5_2_D
I1029 20:53:35.439786  6127 net.cpp:408] conv5_1_D -> conv5_1_D
I1029 20:53:35.453780  6127 net.cpp:150] Setting up conv5_1_D
I1029 20:53:35.453816  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.453820  6127 net.cpp:165] Memory required for data: 2561605632
I1029 20:53:35.453831  6127 layer_factory.hpp:77] Creating layer conv5_1_D_bn
I1029 20:53:35.453843  6127 net.cpp:100] Creating Layer conv5_1_D_bn
I1029 20:53:35.453850  6127 net.cpp:434] conv5_1_D_bn <- conv5_1_D
I1029 20:53:35.453858  6127 net.cpp:395] conv5_1_D_bn -> conv5_1_D (in-place)
I1029 20:53:35.454074  6127 net.cpp:150] Setting up conv5_1_D_bn
I1029 20:53:35.454082  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.454085  6127 net.cpp:165] Memory required for data: 2564431872
I1029 20:53:35.454116  6127 layer_factory.hpp:77] Creating layer relu5_1_D
I1029 20:53:35.454123  6127 net.cpp:100] Creating Layer relu5_1_D
I1029 20:53:35.454128  6127 net.cpp:434] relu5_1_D <- conv5_1_D
I1029 20:53:35.454133  6127 net.cpp:395] relu5_1_D -> conv5_1_D (in-place)
I1029 20:53:35.454320  6127 net.cpp:150] Setting up relu5_1_D
I1029 20:53:35.454329  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.454344  6127 net.cpp:165] Memory required for data: 2567258112
I1029 20:53:35.454346  6127 layer_factory.hpp:77] Creating layer upsample4_drop
I1029 20:53:35.454355  6127 net.cpp:100] Creating Layer upsample4_drop
I1029 20:53:35.454360  6127 net.cpp:434] upsample4_drop <- conv5_1_D
I1029 20:53:35.454363  6127 net.cpp:395] upsample4_drop -> conv5_1_D (in-place)
I1029 20:53:35.454392  6127 net.cpp:150] Setting up upsample4_drop
I1029 20:53:35.454399  6127 net.cpp:157] Top shape: 2 512 23 30 (706560)
I1029 20:53:35.454401  6127 net.cpp:165] Memory required for data: 2570084352
I1029 20:53:35.454404  6127 layer_factory.hpp:77] Creating layer upsample4
I1029 20:53:35.454411  6127 net.cpp:100] Creating Layer upsample4
I1029 20:53:35.454413  6127 net.cpp:434] upsample4 <- conv5_1_D
I1029 20:53:35.454418  6127 net.cpp:434] upsample4 <- pool4_mask
I1029 20:53:35.454424  6127 net.cpp:408] upsample4 -> pool4_D
F1029 20:53:35.454461  6127 upsample_layer.cpp:59] Check failed: bottom[0]->height() == bottom[1]->height() (23 vs. 22) 
*** Check failure stack trace: ***
Aborted (core dumped)

I set the input data size like this:

input_shape {
  dim: 2 # SET SAMPLE SIZE HERE
  dim: 3
  dim: 352
  dim: 1024
}
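For reference (my own reading of the log above, not a confirmed fix): the failing check in upsample4 compares its first bottom, which is still 23 x 30 as inherited from upsample5's hard-coded upsample_h: 23 / upsample_w: 30, against pool4_mask (22 x 64). Those hard-coded sizes appear to correspond to a different input resolution than 352 x 1024. A minimal Python sketch to compute the pooled sizes the decoder's upsample layers would have to match for a given input:

# Sketch: print the encoder output sizes for a given input resolution, assuming
# the 2x2, stride-2 max pooling used in the prototxt above (Caffe rounds up).
def pooled_sizes(height, width, levels=5):
    sizes = []
    for _ in range(levels):
        height = (height + 1) // 2
        width = (width + 1) // 2
        sizes.append((height, width))
    return sizes

for level, (h, w) in enumerate(pooled_sizes(352, 1024), start=1):
    print(f"pool{level}: {h} x {w}")

# For 352 x 1024 this prints pool4: 22 x 64 and pool5: 11 x 32, matching the
# "Top shape" lines in the log. upsample5 would therefore need upsample_h: 22,
# upsample_w: 64 (rather than 23 x 30), and upsample4 would need upsample_h: 44,
# upsample_w: 128 (rather than 45 x 60), for the decoder sizes to line up.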
