flownet2's Introduction

Caffe for FlowNet2

This is the release of:

  • the CVPR 2017 version of FlowNet2.0

It comes as a fork of the Caffe master branch, with trained networks as well as examples for using and training them.

License and Citation

All code is provided for research purposes only and without any warranty. Any commercial use requires our consent. When using the code in your research work, please cite the following paper:

@InProceedings{IMKDB17,
  author       = "E. Ilg and N. Mayer and T. Saikia and M. Keuper and A. Dosovitskiy and T. Brox",
  title        = "FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks",
  booktitle    = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
  month        = "Jul",
  year         = "2017",
  url          = "http://lmb.informatik.uni-freiburg.de//Publications/2017/IMKDB17"
}

Compiling

First compile Caffe by creating a "Makefile.config" (an example is given in Makefile.config.example), then build with:

$ make -j 5 all tools pycaffe 

Running

(this assumes you compiled the code successfully)

IMPORTANT: make sure there is no other Caffe version in your Python and system paths, and set up your environment with:

$ source set-env.sh 

This will configure all paths for you. Then go to the models folder and download the models:

$ cd models 
$ ./download-models.sh 

Running a FlowNet on a single image pair ($net is a folder in models):

$ run-flownet.py /path/to/$net/$net_weights.caffemodel[.h5] \
                 /path/to/$net/$net_deploy.prototxt.template \ 
                 x.png y.png z.flo 

(where x.png and y.png are images and z.flo is the output file)

Running a FlowNet on lots of image pairs:

$ run-flownet-many.py /path/to/$net/$net_weights.caffemodel[.h5] \ 
                      /path/to/$net/$net_deploy.prototxt.template \
                       list.txt 

(where list.txt contains lines of the form "x.png y.png z.flo")

NOTE: If you want to compute many flows, this option is much faster since caffe and the net are loaded only once.
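For video-style input, the list file can be generated with a few lines of Python. This helper is not part of the repository, just a sketch assuming consecutively numbered frames where each flow is named after its first frame:

```python
# Sketch (not from this repo): build list.txt lines for run-flownet-many.py.
# Each consecutive pair (frame_i, frame_i+1) gets one output .flo path.
import os

def make_flow_list(frames):
    """Pair consecutive frames into 'x.png y.png z.flo' lines."""
    lines = []
    for a, b in zip(frames, frames[1:]):
        flo = os.path.splitext(a)[0] + ".flo"  # name flow after first frame
        lines.append("%s %s %s" % (a, b, flo))
    return lines

lines = make_flow_list(["0000.png", "0001.png", "0002.png"])
# e.g. first line: "0000.png 0001.png 0000.flo"
```

Writing `"\n".join(lines)` to list.txt then gives the format expected above.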

Training

(this assumes you compiled the code successfully)

First you need to download and prepare the training data. For that go to the data folder:

$ cd data 

Then run:

$ ./download.sh 
$ ./make-lmdbs.sh 

(this will take some time and quite some disk space)

Then set up your network for training ($net is a folder in models):

$ cd /path/to/$net 
$ cp ../solver_S_<type>.prototxt solver.prototxt 
$ cp $net_train.prototxt.template train.prototxt 
# Edit train.prototxt and make sure all settings are correct 
$ caffe train --solver solver.prototxt 

IMPORTANT: Edit train.prototxt to use your selected dataset and make sure the correct parts of the network are enabled by setting/adding loss weights and blob learning rates.

NOTE: The training templates include augmentation, during which an affine transformation is applied to a crop of the input images. For training we use a different crop size and batch size for each dataset:

  • FlyingChairs: 448 x 320 (batch size 8)
  • ChairsSDHom: 448 x 320 (batch size 8)
  • FlyingThings3D: 768 x 384 (batch size 4)

flownet2's People

Contributors

mbuckler, nikolausmayer


flownet2's Issues

Large transformations in data augmentation

Hi there,

Thank you for your time with my previous questions. I attempted to rewrite the data augmentation code in TensorFlow.

Here are my original two images along with the corresponding flow:

(figure: Image 0, Image 1, and the corresponding Flow)

I then augment only with an x,y translation. I'm using the same uniform_bernoulli / gaussian_bernoulli distribution options as in the models for FlowNetS.

(figure: augmented Image 0, Image 1, and the corresponding Flow)

This seems like a large amount of information to lose from such a huge translation: half of the flow field is gone. Is an augmentation of this magnitude normal? As an extreme case, I sometimes end up with an empty, noisy image after all of the transformations.

(screenshot of an almost-empty augmented image omitted)

Thanks for your help!

Sam

Why is flow_loss2 so large?

When I train the C network and set the l1_loss_param:

l1_loss_param {
  l2_per_location: false
  normalize_by_num_entries: true
}

the flow_loss2 is larger than the other losses, even though the settings of these layers are similar. I have checked the prototxt many times and cannot find any problem, so I am asking here for help.

Reproducing Results FlowNet2

If I wanted to reproduce your results (starting from random) do I need to manually train each network independently and load their respective weights in sequence, or is there a simpler method?

Why multiply flow groundtruth data by 32 when creating LMDB?

Sorry, this question is about the code itself, not a build error report, but I couldn't find any forum where researchers discuss FlowNet, so I have to post it here.

In the file tools/convert_imageset_and_flow.cpp, I found that the ground-truth optical flow values are multiplied by 32 when creating the LMDB. See https://github.com/lmb-freiburg/flownet2/blob/master/tools/convert_imageset_and_flow.cpp#L176 . And I found that in layers/custom_data_layer.cpp the values are divided by 32 again. OK, that is just for storage in the LMDB, no problem.

I understand that you convert the flow data to the int16 data type to store it in the LMDB file. It is the number 32 that confuses me: why did you use 32 as the scale factor? I printed some ground-truth optical flow values:

E0619 16:04:31.712563 18609 convert_imageset_and_flow.cpp:148] transformed value data: 28
E0619 16:04:31.712574 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.905912
E0619 16:04:31.712585 18609 convert_imageset_and_flow.cpp:148] transformed value data: 28
E0619 16:04:31.712596 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.906365
E0619 16:04:31.712608 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29
E0619 16:04:31.712642 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.906817
E0619 16:04:31.712658 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29
E0619 16:04:31.712671 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.90727
E0619 16:04:31.712682 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29
E0619 16:04:31.712694 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.907722
E0619 16:04:31.712707 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29
E0619 16:04:31.712725 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.908175
E0619 16:04:31.712738 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29
E0619 16:04:31.712750 18609 convert_imageset_and_flow.cpp:147] previous flow data: 0.908627
E0619 16:04:31.712764 18609 convert_imageset_and_flow.cpp:148] transformed value data: 29

Considering that the range of int16 is −2^15 to 2^15 − 1, why not multiply by a larger number to reduce the rounding error in the transformation? Is there some special reason for that?
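For context, a small sketch of the trade-off being asked about: with an int16 container and scale 32, the quantization step is 1/32 px (max round-trip error 1/64 px with round-to-nearest; whether the converter rounds or truncates is an assumption here), and the representable flow range is about ±1024 px. A larger scale would shrink the step but also shrink the representable range:

```python
# Sketch: round-trip error of storing float flow as int16 with scale 32.
import numpy as np

SCALE = 32.0  # factor used in convert_imageset_and_flow.cpp

def quantize(flow):
    """float flow -> int16 (assuming round-to-nearest; the tool may truncate)."""
    return np.round(np.asarray(flow) * SCALE).astype(np.int16)

def dequantize(q):
    return q.astype(np.float32) / SCALE

flow = np.array([0.905912, 0.906365, -7.25, 100.003])
err = np.abs(dequantize(quantize(flow)) - flow)
# Maximum quantization error is half a step: 1/(2*SCALE) ≈ 0.0156 px.
# Maximum representable magnitude: 32767/32 ≈ 1023.97 px.
```

With scale 512, for example, the step would drop to ~0.001 px but flows beyond ±64 px could no longer be stored.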

run-flownet.py missing

The performance of FlowNet2.0 is very impressive. Currently, I attempt to reproduce your work. But I find that run-flownet.py is missing.

In run-flownet.py : input_dict[net.inputs[blob_idx]] = input_data[blob_idx] IndexError: list index out of range

I am not able to figure out how to run run-flownet.py and run-flownet-many.py. Could you please provide an example? The net path I am using is /home/Downloads/flownet2-master/models/FlowNet2/.
Also, after running the following command in an Ubuntu terminal,

$ python run-flownet.py /home/ruh/Downloads/flownet2-master/models/FlowNet2/FlowNet2_weights.caffemodel.h5 \
                        /home/ruh/Downloads/flownet2-master/models/FlowNet2/FlowNet2_deploy.prototxt.template \
                        /home/ruh/Downloads/flownet2-master/data/FlyingChairs_examples/0000000-img0.ppm \
                        /home/ruh/Downloads/flownet2-master/data/FlyingChairs_examples/0000000-img1.ppm z.flo

It is giving me the following error

<caffe._caffe.Net object at 0x7f0abe9cfe10>
Traceback (most recent call last):
  File "run-flownet.py", line 69, in <module>
    input_dict[net.inputs[blob_idx]] = input_data[blob_idx]
IndexError: list index out of range

How to visualize the .flo result?

I have run run-flownet.py and got the .flo file, but I don't know how to read the .flo file and display it like an RGB image. Could anyone give me some advice?
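For reference, .flo is the Middlebury flow format: a float32 magic number 202021.25, then int32 width and height (little-endian), then row-major interleaved (u, v) float32 pairs. A minimal reader sketch; the color-coding helper here is my own naive mapping, not the repo's visualization:

```python
# Sketch: read a Middlebury .flo file and map it to an RGB image.
import struct
import numpy as np

TAG_FLOAT = 202021.25  # magic number identifying .flo files

def read_flo(path):
    """Return flow as an (H, W, 2) float32 array with channels (u, v)."""
    with open(path, "rb") as f:
        tag, = struct.unpack("<f", f.read(4))
        assert abs(tag - TAG_FLOAT) < 1e-3, "not a .flo file"
        w, h = struct.unpack("<ii", f.read(8))
        data = np.frombuffer(f.read(4 * 2 * w * h), dtype="<f4")
    return data.reshape(h, w, 2)

def flow_to_rgb(flow, max_mag=None):
    """Naive color coding: hue from direction, brightness from magnitude."""
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u * u + v * v)
    if max_mag is None:
        max_mag = max(mag.max(), 1e-8)
    ang = (np.arctan2(v, u) + np.pi) / (2 * np.pi)  # direction in 0..1
    val = np.clip(mag / max_mag, 0, 1)
    # crude hue->RGB ramp with full saturation
    rgb = np.stack([np.abs(ang * 6 - 3) - 1,
                    2 - np.abs(ang * 6 - 2),
                    2 - np.abs(ang * 6 - 4)], axis=-1)
    return (np.clip(rgb, 0, 1) * val[..., None] * 255).astype(np.uint8)
```

The resulting uint8 array can be saved or shown with any image library (e.g. matplotlib's imshow).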

About the usefulness of warping

First of all, great work!
I was wondering why warping is useful in this case. More specifically, what troubles me are the artifacts of the warping operation in the case of occlusions. Is there any guarantee that the warped image 2 we would get by using the ground truth flow is "closer" to image 1 than the warped image 2 we get by using the estimated flow in this case?
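For intuition, the warping used in the stacked networks is backward warping: for each pixel on image 1's grid, the flow says where to sample in image 2, so with perfect flow the non-occluded pixels of warped image 2 reproduce image 1 exactly; in occluded regions the flow points at content that is simply not visible in image 2, so there is no such guarantee there. A nearest-neighbor sketch of my own (the actual FlowWarp layer uses bilinear interpolation and is differentiable):

```python
# Sketch: backward-warp img2 into the frame of img1 using flow (u, v).
# warped[y, x] = img2[y + v(y,x), x + u(y,x)], nearest neighbor, clamped.
import numpy as np

def backward_warp(img2, flow):
    h, w = img2.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img2[sy, sx]

# With ground-truth flow for a pure 1-pixel rightward translation,
# the warp recovers img1 everywhere except at the clamped border:
img1 = np.arange(20.0).reshape(4, 5)
img2 = np.roll(img1, 1, axis=1)            # scene moved +1 px in x
flow = np.zeros((4, 5, 2)); flow[..., 0] = 1.0
warped = backward_warp(img2, flow)
```

In this toy case warped matches img1 on all but the last column, which is exactly the "disocclusion at the border" situation the question is about.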

Two questions

Hi, sorry for interrupting:

  1. What is the usefulness of brightness error in FlowNet2? Is it used to indicate the location of false predictions?

  2. After converting .pfm to .flo in the FlyingThings3D dataset, it seems that the flow files do not match the corresponding left images. For example:
    https://cloud.githubusercontent.com/assets/24893551/26722513/a273554a-47d3-11e7-9c6e-6f01671f89d3.png
    Maybe the flow data should be further processed (rotated, transposed, scaled, etc.) in the "pfm to flo" script?

Could you please provide some advice? Thanks~

About the GenerateAugmentationParameters layer

I have some questions about the GenerateAugmentationParameters layer. In the FlowNet2.prototxt below, it seems that this layer can generate more augmentation parameters than the DataAugmentation layer if aug_.mode() == "add", such as brightness, gamma, contrast and color. However, I think these could also be generated in the DataAugmentation layer, so why not use the DataAugmentation layer to generate all of the parameters?
Besides, in the FlowNetC.prototxt I found that the spatial transformation parameters (rotate, translate, zoom) are generated by both layers. If aug_.mode() == "add", it seems there are two different groups of spatial transformation parameters in the output blob (blob8 in the FlowNetC.prototxt).

FlowNet2.prototxt
layer {
name: "img0s_aug"
type: "DataAugmentation"
bottom: "blob13"
top: "img0_aug"
top: "blob16"
augmentation_param {
max_multiplier: 1
augment_during_test: false
recompute_mean: 1000
mean_per_pixel: false
translate {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.2
prob: 1.0
}
lmult_pow {
rand_type: "uniform_bernoulli"
exp: true
mean: -0.2
spread: 0.4
prob: 1.0
}
lmult_mult {
rand_type: "uniform_bernoulli"
exp: true
mean: 0.0
spread: 0.4
prob: 1.0
}
lmult_add {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
sat_pow {
rand_type: "uniform_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
sat_mult {
rand_type: "uniform_bernoulli"
exp: true
mean: -0.3
spread: 0.5
prob: 1.0
}
sat_add {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
col_pow {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
col_mult {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.2
prob: 1.0
}
col_add {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.02
prob: 1.0
}
ladd_pow {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
ladd_mult {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0.0
spread: 0.4
prob: 1.0
}
ladd_add {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.04
prob: 1.0
}
col_rotate {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 1
prob: 1.0
}
crop_width: 448
crop_height: 320
chromatic_eigvec: 0.51
chromatic_eigvec: 0.56
chromatic_eigvec: 0.65
chromatic_eigvec: 0.79
chromatic_eigvec: 0.01
chromatic_eigvec: -0.62
chromatic_eigvec: 0.35
chromatic_eigvec: -0.83
chromatic_eigvec: 0.44
noise {
rand_type: "uniform_bernoulli"
exp: false
mean: 0.03
spread: 0.03
prob: 1.0
}
}
}
layer {
name: "aug_params1"
type: "GenerateAugmentationParameters"
bottom: "blob16"
bottom: "blob13"
bottom: "img0_aug"
top: "blob17"
augmentation_param {
augment_during_test: false
gamma {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
brightness {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.02
prob: 1.0
}
contrast {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
color {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
}
coeff_schedule_param {
half_life: 50000
initial_coeff: 0.5
final_coeff: 1
}
}

FlowNetC.prototxt
layer {
name: "img0s_aug"
type: "DataAugmentation"
bottom: "blob4"
top: "img0_aug"
top: "blob7"
propagate_down: false
augmentation_param {
max_multiplier: 1
augment_during_test: false
recompute_mean: 1000
mean_per_pixel: false
translate {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.4
prob: 1.0
}
rotate {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.4
prob: 1.0
}
zoom {
rand_type: "uniform_bernoulli"
exp: true
mean: 0.2
spread: 0.4
prob: 1.0
}
squeeze {
rand_type: "uniform_bernoulli"
exp: true
mean: 0
spread: 0.3
prob: 1.0
}
lmult_pow {
rand_type: "uniform_bernoulli"
exp: true
mean: -0.2
spread: 0.4
prob: 1.0
}
lmult_mult {
rand_type: "uniform_bernoulli"
exp: true
mean: 0.0
spread: 0.4
prob: 1.0
}
lmult_add {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
sat_pow {
rand_type: "uniform_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
sat_mult {
rand_type: "uniform_bernoulli"
exp: true
mean: -0.3
spread: 0.5
prob: 1.0
}
sat_add {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
col_pow {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
col_mult {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.2
prob: 1.0
}
col_add {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.02
prob: 1.0
}
ladd_pow {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.4
prob: 1.0
}
ladd_mult {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0.0
spread: 0.4
prob: 1.0
}
ladd_add {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.04
prob: 1.0
}
col_rotate {
rand_type: "uniform_bernoulli"
exp: false
mean: 0
spread: 1
prob: 1.0
}
crop_width: 448
crop_height: 320
chromatic_eigvec: 0.51
chromatic_eigvec: 0.56
chromatic_eigvec: 0.65
chromatic_eigvec: 0.79
chromatic_eigvec: 0.01
chromatic_eigvec: -0.62
chromatic_eigvec: 0.35
chromatic_eigvec: -0.83
chromatic_eigvec: 0.44
noise {
rand_type: "uniform_bernoulli"
exp: false
mean: 0.03
spread: 0.03
prob: 1.0
}
}
}
layer {
name: "aug_params1"
type: "GenerateAugmentationParameters"
bottom: "blob7"
bottom: "blob4"
bottom: "img0_aug"
top: "blob8"
augmentation_param {
augment_during_test: false
translate {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
rotate {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.03
prob: 1.0
}
zoom {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.03
prob: 1.0
}
gamma {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
brightness {
rand_type: "gaussian_bernoulli"
exp: false
mean: 0
spread: 0.02
prob: 1.0
}
contrast {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
color {
rand_type: "gaussian_bernoulli"
exp: true
mean: 0
spread: 0.02
prob: 1.0
}
}
coeff_schedule_param {
half_life: 50000
initial_coeff: 0.5
final_coeff: 1
}
}

Message type "caffe.LayerParameter" has no field named "black_augmentation_param"

Hi,
Sorry to trouble you. Two errors occurred during training:
1. "Message type "caffe.LayerParameter" has no field named "black_augmentation_param""
I cannot find the definition of black_augmentation in src/caffe/layers or caffe.proto. I am wondering if I missed something?
2. Unknown bottom blob 'img0_a_org' (layer 'crop_params')
Thanks~

About Caffe version

Thanks for your nice work !!

I would like to know what version of Caffe flownet/flownet2 is based on.

Thanks !!

make fail on tools/convert_imageset_and_disparity

Hi, I followed the steps on this page, but when I run the command make -j 5 all tools pycaffe, I get the following errors:
NVCC src/caffe/layers/correlation_layer1d.cu
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
CXX tools/convert_imageset_and_disparity.cpp
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
CXX tools/test_net.cpp
tools/convert_imageset_and_disparity.cpp:5:1: warning: multi-line comment [-Wcomment]
// convert_imageset [-g] ROOTFOLDER/ LISTFILE DB_NAME RANDOM_SHUFFLE[0 or 1]
^
In file included from ./include/thirdparty/CImg/CImg.h:208:0,
from tools/convert_imageset_and_disparity.cpp:39:
tools/convert_imageset_and_disparity.cpp: In function ‘int main(int, char**)’:
tools/convert_imageset_and_disparity.cpp:329:14: error: expected unqualified-id before ‘int’
leveldb::Status status = leveldb::DB::Open(
^
In file included from tools/convert_imageset_and_disparity.cpp:17:0:
tools/convert_imageset_and_disparity.cpp:331:11: error: ‘status’ was not declared in this scope
CHECK(status.ok()) << "Failed to open leveldb " << argv[arg_offset+2];
^
tools/convert_imageset_and_disparity.cpp:331:11: note: suggested alternatives:
In file included from /usr/include/boost/filesystem.hpp:17:0,
from ./include/caffe/util/io.hpp:4,
from tools/convert_imageset_and_disparity.cpp:37:
/usr/include/boost/filesystem/operations.hpp:396:15: note: ‘boost::filesystem::status’
file_status status(const path& p, system::error_code& ec)
^
/usr/include/boost/filesystem/operations.hpp:320:17: note: ‘boost::filesystem::detail::status’
file_status status(const path&p, system::error_code* ec=0);
^
Makefile:582: recipe for target '.build_release/tools/convert_imageset_and_disparity.o' failed
make: *** [.build_release/tools/convert_imageset_and_disparity.o] Error 1
make: *** Waiting for unfinished jobs....
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
./include/caffe/parallel.hpp(99): warning: type qualifier on return type is meaningless
(the same warning is repeated several times)

Could you help me to solve it, please? Thank you

run-flownet-many.py gives CUDNN error after some time

I am trying to extract flow from many image pairs by using run-flownet-many.py

I have a list.txt file with ~300K lines. The execution stops after running 290 image pairs successfully on GTX1080 (with 8GB GPU memory) with an error message:

WARNING: Logging before InitGoogleLogging() is written to STDERR F0601 17:53:43.127583 35329 cudnn_conv_layer.cpp:52] Check failed: error == cudaSuccess (2 vs. 0) out of memory *** Check failure stack trace: *** Aborted (core dumped)

and stops after running 490 image pairs successfully on Titan X (with 12GB GPU memory) with an error message:

WARNING: Logging before InitGoogleLogging() is written to STDERR F0601 18:01:21.865638 76360 cudnn_conv_layer.cpp:53] Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR *** Check failure stack trace: *** Aborted (core dumped)

I have tried this only a few times, but thought maybe you have a guess why this could be happening. I am using CUDNNv6 and compiled after doing changes suggested in #17 .

Thanks!

problem of running speed

I run the code using a Tesla K40 GPU on CentOS. When I use the FlowNet2 model, it runs at about 7-8 seconds per image pair, but it is said it should be 100-200 ms per image pair. My test images are all 640x360, so why does it run that slowly?

make convert_imageset_and_flow.cpp failed

In file included from ./include/thirdparty/CImg/CImg.h:361:0,
from tools/convert_imageset_and_flow.cpp:38:
tools/convert_imageset_and_flow.cpp: In function ‘int main(int, char**)’:
tools/convert_imageset_and_flow.cpp:373:14: error: expected unqualified-id before ‘int’
leveldb::Status status = leveldb::DB::Open(options, argv[arg_offset+2], &db);
^
In file included from tools/convert_imageset_and_flow.cpp:18:0:
tools/convert_imageset_and_flow.cpp:374:11: error: ‘status’ was not declared in this scope
CHECK(status.ok()) << "Failed to open leveldb " << argv[arg_offset+2];
^
tools/convert_imageset_and_flow.cpp:374:11: note: suggested alternatives:
In file included from /usr/include/boost/filesystem.hpp:17:0,
from ./include/caffe/util/io.hpp:4,
from tools/convert_imageset_and_flow.cpp:36:
/usr/include/boost/filesystem/operations.hpp:396:15: note: ‘boost::filesystem::status’
file_status status(const path& p, system::error_code& ec)
^
/usr/include/boost/filesystem/operations.hpp:320:17: note: ‘boost::filesystem::detail::status’
file_status status(const path&p, system::error_code* ec=0);
^
Makefile:581: recipe for target '.build_release/tools/convert_imageset_and_flow.o' failed
make: *** [.build_release/tools/convert_imageset_and_flow.o] Error 1
make: *** Waiting for unfinished jobs....


I tried to compile this code on Ubuntu 16.04 with CUDA 8.0 and GCC 5, and this error happened. I have no idea how to solve it; please help me, thank you!

Correlation Layer Fast Implementation

Hi,

I'm trying to rewrite FlownetC in TensorFlow (@el3ment I think you're working on this?). Here's my current code for the correlation layer: (https://gist.github.com/sampepose/1244694a546ed173b2f38d1bb3e6a433)

It's unfortunately slow because of the many nested loops. Also, the output feature maps don't match your Caffe model, so either my logic is incorrect or the ordering of my output differs from yours. I'm having some trouble understanding the CUDA code for the correlation layer. Is there a way to calculate this using batch matrix operations?

Thank you for your help!
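Not from the authors, but one way to remove the innermost loops: for each displacement (dy, dx), the entire correlation map is just an elementwise product of f1 with a shifted f2, summed over channels, so only the loop over displacement bins remains. A numpy sketch under my own conventions (division by c corresponds to a 1x1 comparison kernel; check the CUDA code for the exact normalization and channel ordering):

```python
# Sketch: correlation layer with vectorized per-displacement products.
# f1, f2: (C, H, W). Output: (D, H, W) with D = (2*md+1)**2 displacement bins.
import numpy as np

def correlation(f1, f2, md):
    c, h, w = f1.shape
    # zero-pad f2 so shifted windows stay in bounds
    pad = np.pad(f2, ((0, 0), (md, md), (md, md)), mode="constant")
    out = np.empty(((2 * md + 1) ** 2, h, w), dtype=f1.dtype)
    k = 0
    for dy in range(-md, md + 1):
        for dx in range(-md, md + 1):
            shifted = pad[:, md + dy: md + dy + h, md + dx: md + dx + w]
            out[k] = (f1 * shifted).sum(axis=0) / c  # mean over channels
            k += 1
    return out
```

In TensorFlow the same idea works with tf.pad, slicing, and reduce_mean, which keeps everything on the GPU as a handful of large elementwise ops instead of per-pixel loops.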

About the compile error

Thank you for your useful code! I get an error when compiling Caffe with $ make -j 5 all tools pycaffe:

In file included from src/caffe/common.cpp:8:0:
./include/caffe/util/rng.hpp:20:79: error: ‘RandomGeneratorParameter’ does not name a type
template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
^
Makefile:576: recipe for target '.build_release/src/caffe/common.o' failed
make: *** [.build_release/src/caffe/common.o] Error 1

make error

I am getting the following error while building Caffe:
value=std::numeric_limits::max();
.tools/convert_imageset_and_disparity.cpp:156:51: error: expected ‘;’ before ‘short’

Check failed when make runtest

When I run make runtest after building FlowNet2, a check failure occurs in custom_data_layer:

[----------] 1 test from LayerFactoryTest/1, where TypeParam = caffe::CPUDevice<double>
[ RUN      ] LayerFactoryTest/1.TestCreateLayer
src/caffe/test/test_layer_factory.cpp:47: Failure
Value of: layer->type()
  Actual: "NormLayer"
Expected: iter->first
Which is: "ChannelNorm"
F0523 19:43:38.419502 14652 custom_data_layer.cpp:661] Check failed: !pthread_join(thread_, NULL) Pthread joining failed.
*** Check failure stack trace: ***
    @     0x7f212bcd95cd  google::LogMessage::Fail()
    @     0x7f212bcdb433  google::LogMessage::SendToLog()
    @     0x7f212bcd915b  google::LogMessage::Flush()
    @     0x7f212bcdbe1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f2126a957d2  caffe::CustomDataLayer<>::JoinPrefetchThread()
    @     0x7f2126a99400  caffe::CustomDataLayer<>::~CustomDataLayer()
    @     0x7f2126a996c9  caffe::CustomDataLayer<>::~CustomDataLayer()
    @           0x82a58e  caffe::LayerFactoryTest_TestCreateLayer_Test<>::TestBody()
    @           0x90e0d3  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @           0x9076ea  testing::Test::Run()
    @           0x907838  testing::TestInfo::Run()
    @           0x907915  testing::TestCase::Run()
    @           0x908bef  testing::internal::UnitTestImpl::RunAllTests()
    @           0x908f13  testing::UnitTest::Run()
    @           0x46d47d  main
    @     0x7f2125db7830  __libc_start_main
    @           0x474e69  _start
    @              (nil)  (unknown)
Makefile:527: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)

I am using Ubuntu 16.04.2, GCC 5.4.0, CUDA 8.0 with cuDNN 6.0, and Intel MKL.

Is it possible to run inference on GPU with 1GB DRAM?

I am still able to run FlowNet2-CS but FlowNet2-CSS and FlowNet2 fail with "Check failed: error == cudaSuccess (2 vs. 0) out of memory". When I query free memory with cudaMemGetInfo() I can see 950MB free before I run run-flownet.py and FlowNet2 weights occupy only 650MB.

Can it still be possible to fit the model into memory with some tricks?

DataAugmentationLayer error: shape mismatch

When I run the Python code with two 640x480 images, it works fine:
./scripts/run-flownet-many.py ./models/FlowNet2-css/FlowNet2-css_weights.caffemodel ./models/FlowNet2-css/FlowNet2-css_deploy.prototxt
However, when I run the C++ code, Caffe reports the following error:

Cannot copy param 1 weights from layer 'img0s_aug'; shape mismatch. Source param shape is 1 3 384 768 (884736); target param shape is 1 3 480 640 (921600). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.

I found this comment, which may point to the reason:

template <typename Dtype>
void DataAugmentationLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
                                              const vector<Blob<Dtype>*>& top)
// TODO This won't work when applying a net to images of size different from what the net was trained on

It's very odd that the Python code works but the C++ code does not.
Could you tell me the underlying reason? Thanks
@eddy-ilg @nikolausmayer @mbuckler

FlowNet-CSS

Hello,

I am confused about the training steps for your CSS model.

A) You first train C and S separately. Then you use these results to construct CSS with the final S being randomly initialized. You fix the C and first S, and then train only the final S portion of the model.

B) You train C. Then take this result to make CS (with S random init), fix C, and train. You then take this result, add random init S (to make CSS), fix the first C and S, and train.

Which of the above is correct? Both are 3 trainings, but will yield different results.

Thank you,

Sam P

make fails with an error

$ make -j 5 all tools pycaffe 

collect2: error: ld returned 1 exit status
Makefile:620: recipe for target '.build_release/tools/compute_image_mean.bin' failed
make: *** [.build_release/tools/compute_image_mean.bin] Error 1

Anyone encountered this?

an interesting test!

I want to use FlowNet to deal with a disparity problem, and I want to see whether the warped image is similar to img0 (the left image) if we use the ground-truth disparity and img1 (the right image). Here is the test.prototxt I use:
layer {
name: "CustomData1"
type: "CustomData"
top: "blob0"
top: "blob1"
top: "blob2"
include {
phase: TRAIN
}
data_param {
source: "test_lmdb"
batch_size: 1
backend: LMDB
rand_permute: true
rand_permute_seed: 77
slice_point: 1
slice_point: 2
encoding: UINT8
encoding: UINT8
encoding: UINT16FLOW
verbose: true
}
}
layer {
name: "CustomData2"
type: "CustomData"
top: "blob0"
top: "blob1"
top: "blob2"
include {
phase: TEST
}
data_param {
source: "test_lmdb"
batch_size: 1
backend: LMDB
rand_permute: true
rand_permute_seed: 77
slice_point: 1
slice_point: 2
encoding: UINT8
encoding: UINT8
encoding: UINT16FLOW
verbose: true
}
}
layer {
name: "Eltwise3"
type: "Eltwise"
bottom: "blob2"
top: "disp_large"
eltwise_param {
operation: SUM
coeff: 31.25
}
}
layer {
name: "DummyData1"
type: "DummyData"
top: "blob3"
include {
phase: TRAIN
}
dummy_data_param {
data_filler {
type: "constant"
value: 0
}
num: 1
channels: 1
height: 540
width: 960
}
}
layer {
name: "DummyData2"
type: "DummyData"
top: "blob3"
include {
phase: TEST
}
dummy_data_param {
data_filler {
type: "constant"
value: 0
}
num: 1
channels: 1
height: 540
width: 960
}
}
layer {
name: "Concat1"
type: "Concat"
bottom: "disp_large"
bottom: "blob3"
top: "blob4"
concat_param {
concat_dim: 1
}
}

layer {
name: "FlowWarp1"
type: "FlowWarp"
bottom: "blob1"
bottom: "blob4"
top: "blob5"
}
layer {
name: "PFMWriter1"
type: "PFMWriter"
bottom: "blob0"
writer_param {
folder: "."
prefix: "img0-"
suffix: ""
scale: 1.0
}
}
layer {
name: "PFMWriter2"
type: "PFMWriter"
bottom: "blob1"
writer_param {
folder: "."
prefix: "img1-"
suffix: ""
scale: 1.0
}
}
layer {
name: "PFMWriter3"
type: "PFMWriter"
bottom: "blob5"
writer_param {
folder: "."
prefix: "warp-"
suffix: ""
scale: 1.0
}
}
layer {
name: "PFMWriter4"
type: "PFMWriter"
bottom: "disp_large"
writer_param {
folder: "."
prefix: "disp-"
suffix: ""
scale: 1.0
}
}
And I get these PFM files. When I convert them to JPG, I find that the warped image is not like img0; it looks like a combination of img0 and img1, with the foreground objects repeated twice like a double image.
I am sure the disparity image is normal and I did not change flowwarp.cpp. Even though this is a minor thing, I want to know why. Thank you for your answer!

About the preprocessing before warping

It was mentioned in a previous issue that the occluded areas are filled with 0s before warping. Is this filling with 0s performed somewhere in this code? And is the location of the occluded areas available in your datasets?

question of FlownetS

Hello, I am a bit puzzled by the step that predicts the flow after the image features are extracted:

layer {
  name: "Convolution1"
  type: "Convolution"
  bottom: "conv6_1"
  top: "predict_flow6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
    }
    engine: CUDNN
  }
}

The input is the feature map of the two input images, which is then convolved, and the output is predict_flow. How does this layer convert the feature map into optical flow?
And does num_output: 2 correspond to the U and V components of predict_flow?
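There is no special conversion step hidden here: this is an ordinary convolution whose two output channels are trained (through the endpoint-error loss against the ground-truth flow) to regress the U and V displacement per pixel. A toy numpy sketch of what such a 3x3, 2-channel convolution computes (illustrative only, with random weights):

```python
import numpy as np

def conv3x3(feat, weights, bias):
    """A plain 3x3 convolution (stride 1, pad 1), written out explicitly.
    feat: (C, H, W) feature map; weights: (2, C, 3, 3); bias: (2,).
    Each output channel is a per-pixel linear regression over a 3x3
    feature neighborhood -- interpreted as the U or V flow component."""
    c, h, w = feat.shape
    padded = np.pad(feat, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((2, h, w))
    for o in range(2):
        for y in range(h):
            for x in range(w):
                out[o, y, x] = np.sum(weights[o] * padded[:, y:y+3, x:x+3]) + bias[o]
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 8, 8))        # stand-in for conv6_1
w = rng.standard_normal((2, 64, 3, 3)) * 0.01
b = np.zeros(2)
flow6 = conv3x3(feat, w, b)
print(flow6.shape)  # (2, 8, 8): channel 0 = U, channel 1 = V
```

The "flow-ness" comes entirely from training: the loss compares these two channels to the ground-truth flow, so the weights learn to produce displacements.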

num_iter/recompute_mean in DataAugmentationLayer

I am running the FlowNetC deploy model.

What is the purpose of num_iter (aka blobs_[0]) in the DataAugmentationLayer?

From what I can tell, num_iter is initialized to 0 and is incremented with each forward pass of the layer. If num_iter <= recompute_mean, then the mean of the current batch is computed. Otherwise, we don't recalculate the mean, and just use an old mean? Why?

Also, I've found that the code to recompute the mean is never actually run. I placed a LOG(INFO) after if (num_iter <= aug.recompute_mean()) { in data_augmentation_layer.cu and it is never logged. Why is this happening?

"recovered iteration count 1.9491e+06" is logged from data_augmentation_layer.cpp. I don't understand how that number can be so high; we only do one forward GPU pass per image through the DataAugmentationLayer.

Thanks!

Error parsing text-format caffe.NetParameter

I'm trying to run your test script (run-flownet.py) and I get the following error. I was wondering if you had any idea what might be causing this?

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0503 18:41:55.555775  8258 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W0503 18:41:55.555805  8258 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W0503 18:41:55.555807  8258 _caffe.cpp:142] Net('/tmp/tmpPx01R9', 1, weights='/home/shreyas/repositories/flownet2/models/FlowNet2-CSS-ft-sd/FlowNet2-CSS-ft-sd_weights.caffemodel.h5')
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 40:22: Message type "caffe.LayerParameter" has no field named "augmentation_param".
F0503 18:41:55.556738  8258 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /tmp/tmpPx01R9
*** Check failure stack trace: ***
Aborted (core dumped)

Custom Data Layer Error

Hi,

I am trying to use different data than you used in your experiments, so I am using CustomData with slicing points. The network reads all the data and the sizes of each record, for example:

I0705 15:18:15.431681 162296 custom_data_layer.cpp:541] output 0 data size: 8,3,384,512
I0705 15:18:15.431759 162296 custom_data_layer.cpp:541] output 1 data size: 8,3,384,512

After that I have got the following error:

I0705 15:18:15.516582 162296 net.cpp:145] Setting up CustomData-M
F0705 15:18:15.516605 162307 custom_data_layer.cpp:184] Internal data fetch error: Tried to fetch element 0 of 6999 which is in DB: 5521

5521 is the first record before slicing in the LMDB.

The question is: since the network reads all the input data and can determine the size and number of channels, why does this error appear?

Thanks

problem about "make-lmdbs.sh"

Hello everyone,
I ran into some problems while making the LMDBs. I have downloaded the ChairsSDHom and FlyingChairs datasets,
but after running make-lmdbs.sh, the resulting file "data.mdb" does not look right: it is only 8 KB.
Also, FlyingChairs.list cannot be opened and its links are broken; I don't know why.
When I run the training program, I get these errors:

"../../dispnet-release/data/FlyingChairs_release.list" doesn't exist

../build/tools/convert_imageset_and_flow.bin /media/xugang/data/data/FlyingChairs_release.list FlyingChairs_release_lmdb 0 lmdb

DownsamplingLayer cannot do backward

My log file gives me a bunch of lines like

[net.cpp:218] Downsample5 needs backward computation.

before halting and throwing the error

[downsample_layer.cu:138] DownsamplingLayer cannot do backward.

A similar issue was raised before on the flownet: liruoteng/FlowNet#9, but it seems strange that I would need to modify net.cpp since you were training successfully. Have you seen this issue before?

Where are "img0_b", "img1_b", and "flow_gt_b" defined in FlowNet2 model?

I have encountered an "Unknown bottom blob 'img0_b' (layer 'Concat1', bottom index 1)" error
when training with the FlowNet2 model.

The following is the relevant part of my "train.prototxt", which was originally "FlowNet2_train.prototxt.template".

layer {
  name: "Concat1"
  type: "Concat"
  bottom: "img0_a"
  bottom: "img0_b"
  top: "img0"
  concat_param {
    axis: 0
  }
}
layer {
  name: "Concat2"
  type: "Concat"
  bottom: "img1_a"
  bottom: "img1_b"
  top: "img1"
  concat_param {
    axis: 0
  }
}
layer {
  name: "Concat3"
  type: "Concat"
  bottom: "flow_gt_a"
  bottom: "flow_gt_b"
  top: "flow_gt"
  concat_param {
    axis: 0
  }
}

Thank you in advance.

Flow augmentation steps

Hi!

Steps from the flow augmentation layer comments:
// Step 1: Apply inverse tranformation of Image 1
// Step 2: Apply flow field
// Step 3: Apply tranformation of Image 2
// Step 4: Difference between the new and old positions gives the flow

However, looking at the code it seems that transMat1 holds the spatial transformation matrix applied to image 1 and transMat2 holds the inverse of the spatial transformation matrix applied to image 2. So the code would then be applying the transformation of image 1, applying the flow field, then applying the inverse transformation of image 2. Am I misreading the code or are the comments wrong?
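The composition itself can be written out directly; note that whether a stored matrix represents the "forward" or the "inverse" mapping is purely a naming convention, which may explain the apparent contradiction between the comments and the code. Here is a numpy sketch of the four commented steps (my own formulation, not the layer's code):

```python
import numpy as np

def augment_flow(flow_at, T1_inv, T2, p):
    """Compose spatial augmentations into the flow, following the four
    commented steps (a sketch; matrix naming is a convention):
      1. map augmented-image-1 pixel p back to original image-1 coords
      2. follow the original flow to the corresponding image-2 point
      3. map that point into augmented-image-2 coords
      4. the augmented flow is the difference of positions
    T1_inv, T2 are 3x3 homogeneous 2D affines; p is (x, y)."""
    q1 = (T1_inv @ np.array([p[0], p[1], 1.0]))[:2]   # step 1
    q2 = q1 + flow_at(q1)                             # step 2
    p2 = (T2 @ np.array([q2[0], q2[1], 1.0]))[:2]     # step 3
    return p2 - np.asarray(p, float)                  # step 4

# Example: both images shifted left by 3 px, original flow constant (5, 0).
shift = np.array([[1, 0, -3], [0, 1, 0], [0, 0, 1]], float)
T1_inv = np.linalg.inv(shift)       # inverse of image-1's augmentation
flow = lambda q: np.array([5.0, 0.0])
aug = augment_flow(flow, T1_inv, shift, (10, 7))
print(aug)  # [5. 0.]: a common translation leaves the flow unchanged
```

If transMat1 in the code is the mapping from augmented to original coordinates, then "applying transMat1" and "applying the inverse transformation of image 1" describe the same operation.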

Thanks!

Performance of FlowNet2-CSS-ft-sd

After testing FlowNet2-CSS-ft-sd on the Sintel training dataset, I found that:

  1. On Sintel clean, FlowNet2-CSS-ft-sd produces exactly the same AEE (2.08) as reported in <FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks>;

  2. However, the AEE on Sintel final is 3.57, which is higher than the result reported in the paper.

So could you please check the performance of FlowNet2-CSS-ft-sd on Sintel final? Thanks!
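For anyone reproducing these numbers, the AEE (average endpoint error) is the mean Euclidean distance between predicted and ground-truth flow vectors over all pixels; discrepancies of this size often trace back to evaluation details such as how the network output is resized back to the original resolution. A minimal numpy sketch of the metric (my own helper, not the repo's evaluation code):

```python
import numpy as np

def aee(flow_pred, flow_gt):
    """Average endpoint error: mean per-pixel Euclidean distance between
    predicted and ground-truth (H, W, 2) flow fields."""
    d = flow_pred - flow_gt
    return np.sqrt(d[..., 0] ** 2 + d[..., 1] ** 2).mean()

gt = np.zeros((4, 4, 2))
pred = np.zeros((4, 4, 2))
pred[..., 0] = 3.0          # every prediction is off by (3, 4)
pred[..., 1] = 4.0
err = aee(pred, gt)
print(err)                  # 5.0
```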

Two questions

Thanks for your nice work!
I have two questions:

  1. I want to reconstruct the FlyingChairs dataset; could you please give me the code that generates it?
  2. Could you please tell me how I can download ChairsSDHom?

‘RandomGeneratorParameter’ does not name a type, maybe need to add `#include "caffe/proto/caffe.pb.h"` in the file `$CAFFE_ROOT$/include/caffe/util/rng.hpp`

When I try to compile your code, I get the following errors.

examples/mnist examples/siamese examples/cpp_classification examples/cifar10 matlab/+caffe/private python/caffe src/caffe src/caffe/layers src/caffe/solvers src/caffe/proto src/caffe/util src/caffe/test src/gtest tools
PROTOC src/caffe/proto/caffe.proto
CXX src/caffe/layers/inner_product_layer.cpp
CXX src/caffe/layers/window_data_layer.cpp
CXX src/caffe/layers/input_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/silence_layer.cpp
CXX src/caffe/layers/absval_layer.cpp
CXX src/caffe/layers/neg_relu_layer.cpp
CXX src/caffe/layers/recurrent_layer.cpp
CXX src/caffe/layers/bnll_layer.cpp
CXX src/caffe/layers/prelu_layer.cpp
CXX src/caffe/layers/base_data_layer.cpp
CXX src/caffe/layers/imgreader_layer.cpp
CXX src/caffe/layers/slice_layer.cpp
CXX src/caffe/layers/sigmoid_layer.cpp
CXX src/caffe/layers/lrn_layer.cpp
CXX src/caffe/layers/tanh_layer.cpp
CXX src/caffe/layers/mean_layer.cpp
CXX src/caffe/layers/batch_norm_layer.cpp
CXX src/caffe/layers/pooling_layer.cpp
CXX src/caffe/layers/cudnn_sigmoid_layer.cpp
CXX src/caffe/layers/flatten_layer.cpp
CXX src/caffe/layers/hdf5_output_layer.cpp
CXX src/caffe/layers/softmax_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
CXX src/caffe/layers/downsample_layer.cpp
CXX src/caffe/layers/cudnn_pooling_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/data_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/channel_norm_layer.cpp
CXX src/caffe/layers/custom_data_layer.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/tile_layer.cpp
CXX src/caffe/layers/l1loss_layer.cpp
CXX src/caffe/layers/batch_reindex_layer.cpp
CXX src/caffe/layers/dummy_data_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/euclidean_loss_layer.cpp
CXX src/caffe/layers/conv_layer.cpp
CXX src/caffe/layers/flow_warp_layer.cpp
CXX src/caffe/layers/correlation_layer1d.cpp
CXX src/caffe/layers/accum_layer.cpp
CXX src/caffe/layers/mvn_layer.cpp
CXX src/caffe/layers/multinomial_logistic_loss_layer.cpp
CXX src/caffe/layers/spp_layer.cpp
CXX src/caffe/layers/log_layer.cpp
CXX src/caffe/layers/exp_layer.cpp
CXX src/caffe/layers/image_data_layer.cpp
CXX src/caffe/layers/parameter_layer.cpp
CXX src/caffe/layers/cudnn_lcn_layer.cpp
src/caffe/layers/channel_norm_layer.cpp: In instantiation of ‘void caffe::ChannelNormLayer<Dtype>::Reshape(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
src/caffe/layers/channel_norm_layer.cpp:193:1:   required from here
src/caffe/layers/channel_norm_layer.cpp:34:13: warning: unused variable ‘channels’ [-Wunused-variable]
   const int channels = bottom[0]->channels();
             ^
src/caffe/layers/channel_norm_layer.cpp: In instantiation of ‘void caffe::ChannelNormLayer<Dtype>::Reshape(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]’:
src/caffe/layers/channel_norm_layer.cpp:193:1:   required from here
src/caffe/layers/channel_norm_layer.cpp:34:13: warning: unused variable ‘channels’ [-Wunused-variable]
CXX src/caffe/layers/base_conv_layer.cpp
CXX src/caffe/layers/split_layer.cpp
CXX src/caffe/layers/augmentation_layer_base.cpp
CXX src/caffe/layers/generate_augmentation_parameters_layer.cpp
CXX src/caffe/layers/resample_layer.cpp
CXX src/caffe/layers/memory_data_layer.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/layers/data_augmentation_layer.cpp
CXX src/caffe/layers/reshape_layer.cpp
CXX src/caffe/layers/flowriter_layer.cpp
CXX src/caffe/layers/pfmwriter_layer.cpp
CXX src/caffe/layers/lpq_loss_layer.cpp
CXX src/caffe/layers/floatreader_layer.cpp
CXX src/caffe/layers/lstm_unit_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/power_layer.cpp
CXX src/caffe/layers/lstm_layer.cpp
CXX src/caffe/layers/accuracy_layer.cpp
CXX src/caffe/layers/scale_layer.cpp
CXX src/caffe/layers/eltwise_layer.cpp
CXX src/caffe/layers/crop_layer.cpp
CXX src/caffe/layers/elu_layer.cpp
CXX src/caffe/layers/bias_layer.cpp
CXX src/caffe/layers/flow_augmentation_layer.cpp
CXX src/caffe/layers/threshold_layer.cpp
CXX src/caffe/layers/relu_layer.cpp
CXX src/caffe/layers/filter_layer.cpp
CXX src/caffe/layers/disparity_data_layer.cpp
CXX src/caffe/layers/softmax_loss_layer.cpp
src/caffe/layers/flow_warp_layer.cpp: In instantiation of ‘void caffe::FlowWarpLayer<Dtype>::Backward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
src/caffe/layers/flow_warp_layer.cpp:259:1:   required from here
src/caffe/layers/flow_warp_layer.cpp:130:18: warning: unused variable ‘warped_data’ [-Wunused-variable]
     const Dtype* warped_data = top[0]->cpu_data(); // dest
                  ^
src/caffe/layers/flow_warp_layer.cpp: In instantiation of ‘void caffe::FlowWarpLayer<Dtype>::Backward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]’:
src/caffe/layers/flow_warp_layer.cpp:259:1:   required from here
src/caffe/layers/flow_warp_layer.cpp:130:18: warning: unused variable ‘warped_data’ [-Wunused-variable]
CXX src/caffe/layers/hinge_loss_layer.cpp
CXX src/caffe/layers/cudnn_tanh_layer.cpp
CXX src/caffe/layers/rnn_layer.cpp
CXX src/caffe/layers/floatwriter_layer.cpp
CXX src/caffe/layers/infogain_loss_layer.cpp
CXX src/caffe/layers/reduction_layer.cpp
CXX src/caffe/layers/concat_layer.cpp
CXX src/caffe/layers/cudnn_lrn_layer.cpp
CXX src/caffe/layers/correlation_layer.cpp
CXX src/caffe/layers/dropout_layer.cpp
CXX src/caffe/layers/loss_layer.cpp
CXX src/caffe/layers/imgwriter_layer.cpp
CXX src/caffe/layers/embed_layer.cpp
CXX src/caffe/layers/cudnn_softmax_layer.cpp
CXX src/caffe/common.cpp
CXX src/caffe/layer.cpp
CXX src/caffe/syncedmem.cpp
CXX src/caffe/parallel.cpp
CXX src/caffe/layer_factory.cpp
CXX src/caffe/solvers/adagrad_solver.cpp
CXX src/caffe/solvers/adam_solver.cpp
CXX src/caffe/solvers/adadelta_solver.cpp
CXX src/caffe/solvers/rmsprop_solver.cpp
CXX src/caffe/solvers/sgd_solver.cpp
CXX src/caffe/solvers/nesterov_solver.cpp
CXX src/caffe/data_reader.cpp
CXX src/caffe/blob.cpp
CXX src/caffe/internal_thread.cpp
CXX src/caffe/net.cpp
CXX src/caffe/data_transformer.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/rng.cpp
CXX src/caffe/util/upgrade_proto.cpp
CXX src/caffe/util/output.cpp
CXX src/caffe/util/math_functions.cpp
CXX src/caffe/util/benchmark.cpp
CXX src/caffe/util/db_leveldb.cpp
CXX src/caffe/util/hdf5.cpp
CXX src/caffe/util/insert_splits.cpp
CXX src/caffe/util/db.cpp
In file included from src/caffe/util/rng.cpp:1:0:
./include/caffe/util/rng.hpp:20:79: error: ‘RandomGeneratorParameter’ does not name a type
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                               ^
./include/caffe/util/rng.hpp:20:105: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                                                         ^
src/caffe/util/rng.cpp:8:35: error: ‘RandomGeneratorParameter’ does not name a type
 Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff, Dtype prob0_value) {
                                   ^
src/caffe/util/rng.cpp:8:61: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff, Dtype prob0_value) {
                                                             ^
src/caffe/util/rng.cpp: In function ‘Randtype caffe::caffe_rng_generate(const int&, Dtype, Dtype)’:
src/caffe/util/rng.cpp:10:13: error: request for member ‘apply_schedule’ in ‘param’, which is of non-class type ‘const int’
   if (param.apply_schedule())
             ^
src/caffe/util/rng.cpp:11:20: error: request for member ‘spread’ in ‘param’, which is of non-class type ‘const int’
     spread = param.spread() * discount_coeff;
                    ^
src/caffe/util/rng.cpp:13:20: error: request for member ‘spread’ in ‘param’, which is of non-class type ‘const int’
     spread = param.spread();
                    ^
src/caffe/util/rng.cpp:14:40: error: request for member ‘rand_type’ in ‘param’, which is of non-class type ‘const int’
   const std::string rand_type =  param.rand_type();
                                        ^
src/caffe/util/rng.cpp:20:34: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_uniform(1, param.mean() - spread, param.mean() + spread, &tmp);
                                  ^
src/caffe/util/rng.cpp:20:57: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_uniform(1, param.mean() - spread, param.mean() + spread, &tmp);
                                                         ^
src/caffe/util/rng.cpp:22:19: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
       tmp = param.mean();
                   ^
src/caffe/util/rng.cpp:23:15: error: request for member ‘exp’ in ‘param’, which is of non-class type ‘const int’
     if (param.exp())
               ^
src/caffe/util/rng.cpp:30:35: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_gaussian(1, param.mean(), spread, &tmp);
                                   ^
src/caffe/util/rng.cpp:32:19: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
       tmp = param.mean();
                   ^
src/caffe/util/rng.cpp:33:15: error: request for member ‘exp’ in ‘param’, which is of non-class type ‘const int’
     if (param.exp())
               ^
src/caffe/util/rng.cpp:39:15: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
     if (param.prob() > 0.)
               ^
src/caffe/util/rng.cpp:40:36: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_bernoulli(1, param.prob(), &tmp);
                                    ^
src/caffe/util/rng.cpp:53:15: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
     if (param.prob() > 0.)
               ^
src/caffe/util/rng.cpp:54:36: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_bernoulli(1, param.prob(), &tmp2);
                                    ^
src/caffe/util/rng.cpp:65:36: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
         caffe_rng_uniform(1, param.mean() - spread, param.mean() + spread, &tmp1);
                                    ^
src/caffe/util/rng.cpp:65:59: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
         caffe_rng_uniform(1, param.mean() - spread, param.mean() + spread, &tmp1);
                                                           ^
src/caffe/util/rng.cpp:67:22: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
         tmp1 = param.mean();
                      ^
src/caffe/util/rng.cpp:70:15: error: request for member ‘exp’ in ‘param’, which is of non-class type ‘const int’
     if (param.exp())
               ^
src/caffe/util/rng.cpp:83:15: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
     if (param.prob() > 0.)
               ^
src/caffe/util/rng.cpp:84:36: error: request for member ‘prob’ in ‘param’, which is of non-class type ‘const int’
       caffe_rng_bernoulli(1, param.prob(), &tmp2);
                                    ^
src/caffe/util/rng.cpp:95:37: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
         caffe_rng_gaussian(1, param.mean(), spread, &tmp1);
                                     ^
src/caffe/util/rng.cpp:97:22: error: request for member ‘mean’ in ‘param’, which is of non-class type ‘const int’
         tmp1 = param.mean();
                      ^
src/caffe/util/rng.cpp:100:15: error: request for member ‘exp’ in ‘param’, which is of non-class type ‘const int’
     if (param.exp())
               ^
src/caffe/util/rng.cpp:110:12: error: request for member ‘discretize’ in ‘param’, which is of non-class type ‘const int’
   if(param.discretize()) rand = round(rand);
            ^
src/caffe/util/rng.cpp:111:16: error: request for member ‘multiplier’ in ‘param’, which is of non-class type ‘const int’
   rand = param.multiplier() * rand;
                ^
src/caffe/util/rng.cpp: At global scope:
src/caffe/util/rng.cpp:116:54: error: ‘RandomGeneratorParameter’ does not name a type
 template float caffe_rng_generate<float,float>(const RandomGeneratorParameter& param, float discount_coeff, float prob0_value);
                                                      ^
src/caffe/util/rng.cpp:116:80: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template float caffe_rng_generate<float,float>(const RandomGeneratorParameter& param, float discount_coeff, float prob0_value);
                                                                                ^
src/caffe/util/rng.cpp:117:52: error: ‘RandomGeneratorParameter’ does not name a type
 template bool caffe_rng_generate<float,bool>(const RandomGeneratorParameter& param, float discount_coeff, float prob0_value);
                                                    ^
src/caffe/util/rng.cpp:117:78: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template bool caffe_rng_generate<float,bool>(const RandomGeneratorParameter& param, float discount_coeff, float prob0_value);
                                                                              ^
src/caffe/util/rng.cpp:118:55: error: ‘RandomGeneratorParameter’ does not name a type
 template float caffe_rng_generate<double,float>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                       ^
src/caffe/util/rng.cpp:118:81: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template float caffe_rng_generate<double,float>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                                                 ^
src/caffe/util/rng.cpp:119:57: error: ‘RandomGeneratorParameter’ does not name a type
 template double caffe_rng_generate<double,double>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                         ^
src/caffe/util/rng.cpp:119:83: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template double caffe_rng_generate<double,double>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                                                   ^
src/caffe/util/rng.cpp:120:53: error: ‘RandomGeneratorParameter’ does not name a type
 template bool caffe_rng_generate<double,bool>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                     ^
src/caffe/util/rng.cpp:120:79: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template bool caffe_rng_generate<double,bool>(const RandomGeneratorParameter& param, double discount_coeff, double prob0_value);
                                                                               ^
make: *** [.build_release/src/caffe/util/rng.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from src/caffe/common.cpp:8:0:
./include/caffe/util/rng.hpp:20:79: error: ‘RandomGeneratorParameter’ does not name a type
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                               ^
./include/caffe/util/rng.hpp:20:105: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                                                         ^
make: *** [.build_release/src/caffe/common.o] Error 1
In file included from src/caffe/util/math_functions.cpp:8:0:
./include/caffe/util/rng.hpp:20:79: error: ‘RandomGeneratorParameter’ does not name a type
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                               ^
./include/caffe/util/rng.hpp:20:105: error: ISO C++ forbids declaration of ‘param’ with no type [-fpermissive]
 template <typename Dtype,typename Randtype> Randtype caffe_rng_generate(const RandomGeneratorParameter& param, Dtype discount_coeff = 1,Dtype prob0_value = NAN);
                                                                                                         ^
make: *** [.build_release/src/caffe/util/math_functions.o] Error 1

So I added `#include "caffe/proto/caffe.pb.h"` to the file $CAFFE_ROOT$/include/caffe/util/rng.hpp, and the error is fixed.

My environment is Ubuntu 14.04.1 LTS with CUDA 8.0; GCC version is 4.8.4.

compile error with CPU_ONLY option

I am getting a compile error with CPU_ONLY=1 in Makefile.config.

make -j 5 all tools pycaffe
examples/mnist examples/cpp_classification examples/siamese examples/cifar10 matlab/+caffe/private python/caffe src/caffe src/caffe/layers src/caffe/util src/caffe/test src/caffe/proto src/caffe/solvers src/gtest tools
PROTOC (python) src/caffe/proto/caffe.proto
CXX src/caffe/layers/dropout_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
PROTOC src/caffe/proto/caffe.proto
CXX src/caffe/layers/channel_norm_layer.cpp
CXX src/caffe/layers/image_data_layer.cpp
CXX src/caffe/layers/neg_relu_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/loss_layer.cpp
src/caffe/layers/channel_norm_layer.cpp: In instantiation of ‘void caffe::ChannelNormLayer::Reshape(const std::vector<caffe::Blob>&, const std::vector<caffe::Blob>&) [with Dtype = float]’:
src/caffe/layers/channel_norm_layer.cpp:193:1: required from here
src/caffe/layers/channel_norm_layer.cpp:34:13: warning: unused variable ‘channels’ [-Wunused-variable]
const int channels = bottom[0]->channels();
^
src/caffe/layers/channel_norm_layer.cpp: In instantiation of ‘void caffe::ChannelNormLayer::Reshape(const std::vector<caffe::Blob>&, const std::vector<caffe::Blob>&) [with Dtype = double]’:
src/caffe/layers/channel_norm_layer.cpp:193:1: required from here
src/caffe/layers/channel_norm_layer.cpp:34:13: warning: unused variable ‘channels’ [-Wunused-variable]
CXX src/caffe/layers/pooling_layer.cpp
CXX src/caffe/layers/hinge_loss_layer.cpp
CXX src/caffe/layers/generate_augmentation_parameters_layer.cpp
CXX src/caffe/layers/relu_layer.cpp
CXX src/caffe/layers/imgreader_layer.cpp
CXX src/caffe/layers/custom_data_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/imgwriter_layer.cpp
CXX src/caffe/layers/sigmoid_layer.cpp
CXX src/caffe/layers/reshape_layer.cpp
In file included from ./include/caffe/common.hpp:19:0,
from ./include/caffe/blob.hpp:8,
from ./include/caffe/layer.hpp:8,
from src/caffe/layers/generate_augmentation_parameters_layer.cpp:9:
src/caffe/layers/generate_augmentation_parameters_layer.cpp:116:10: error: redefinition of ‘void caffe::GenerateAugmentationParametersLayer::Backward_gpu(const std::vector<caffe::Blob>&, const std::vector&, const std::vector<caffe::Blob>&)’
STUB_GPU(GenerateAugmentationParametersLayer);
^
./include/caffe/util/device_alternate.hpp:17:6: note: in definition of macro ‘STUB_GPU’
void classname::Backward_gpu(const vector<Blob>& top,
^
In file included from src/caffe/layers/generate_augmentation_parameters_layer.cpp:10:0:
./include/caffe/layers/generate_augmentation_parameters_layer.hpp:38:16: error: ‘virtual void caffe::GenerateAugmentationParametersLayer::Backward_gpu(const std::vector<caffe::Blob
>&, const std::vector&, const std::vector<caffe::Blob>&)’ previously declared here
virtual void Backward_gpu(const vector<Blob
>& top,
^
make: *** [.build_release/src/caffe/layers/generate_augmentation_parameters_layer.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from ./include/caffe/common.hpp:19:0,
from ./include/caffe/blob.hpp:8,
from ./include/caffe/layer.hpp:8,
from src/caffe/layers/imgwriter_layer.cpp:9:
./include/caffe/util/device_alternate.hpp:24:36: error: no ‘void caffe::ImgWriterLayer::Forward_gpu(const std::vector<caffe::Blob>&, const std::vector<caffe::Blob>&)’ member function declared in class ‘caffe::ImgWriterLayer’
const vector<Blob*>& top) { NO_GPU; }
^
src/caffe/layers/imgwriter_layer.cpp:147:1: note: in expansion of macro ‘STUB_GPU_FORWARD’
STUB_GPU_FORWARD(ImgWriterLayer, Forward);
^
make: *** [.build_release/src/caffe/layers/imgwriter_layer.o] Error 1

How to use the Driving dataset to train this network?

I have the dataset named Driving, which includes disparity and frames_cleanpass. How can I use it to train the network? How can I convert the PFM files to FLO? How can I create the list, and what does it look like? Does training work once I have the list? Thank you!
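On the PFM-to-FLO part: the .flo container is simple enough to write by hand. Below is a hedged Python sketch (the helper names and the u = -disparity sign are my assumptions, not code from this repo) that treats a disparity map as a purely horizontal flow field and saves it in Middlebury .flo format; reading the PFM itself is omitted for brevity:

```python
import struct
import numpy as np

def write_flo(filename, flow):
    """Write an (H, W, 2) flow field in Middlebury .flo format:
    the magic float 202021.25, then int32 width and height, then
    interleaved (u, v) float32 values in row-major order."""
    h, w = flow.shape[:2]
    with open(filename, "wb") as f:
        f.write(struct.pack("<f", 202021.25))
        f.write(struct.pack("<ii", w, h))
        flow.astype("<f4").tofile(f)

def disparity_to_flow(disp, sign=-1.0):
    """Treat a disparity map as a horizontal flow field (v = 0).
    The sign depends on the dataset's disparity convention
    (assumption: left-to-right correspondence with u = -disparity);
    flip `sign` if the warped result moves the wrong way."""
    h, w = disp.shape
    flow = np.zeros((h, w, 2), dtype=np.float32)
    flow[..., 0] = sign * disp
    return flow

disp = np.full((4, 6), 2.5, dtype=np.float32)   # stand-in for a PFM disparity
write_flo("example.flo", disparity_to_flow(disp))
```

The list file, as shown in the README, is plain text with one "x.png y.png z.flo" triple per line, so once the .flo files exist it can be generated with a few lines of scripting.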
