
monodepth's People

Contributors

dantkz, gosip, hirico, mrharicot


monodepth's Issues

Stereo model

I am interested to know if (and when) you plan to publish the model trained on concatenated stereo image inputs, which you briefly mention in the paper (section 4.3).

Btw, I really like this unsupervised approach to depth estimation - great work guys.

Training models with own data (stereo camera) not generating good disparity map results

Input is the following image.

left_14650

The output does not generate a good disparity map.

left_14650_disp

Here are the parameters used, along with the loss:

parameters(encoder='vgg', height=256, width=512, batch_size=8, num_threads=8, num_epochs=50, do_stereo=False, wrap_mode='border', use_deconv=False, alpha_image_loss=0.85, disp_gradient_loss_weight=0.1, lr_loss_weight=1.0, full_summary=False)
total number of samples: 15875
total number of steps: 99250

batch 98900 | examples/s: 14.65 | loss: 0.49273 | time elapsed: 15.01h | time left: 0.05h
batch 99000 | examples/s: 14.48 | loss: 0.43627 | time elapsed: 15.03h | time left: 0.04h
batch 99100 | examples/s: 14.79 | loss: 0.37097 | time elapsed: 15.04h | time left: 0.02h
batch 99200 | examples/s: 14.49 | loss: 0.20799 | time elapsed: 15.06h | time left: 0.01h

I am not able to get good results on my own data; however, I am able to recreate the results with the KITTI dataset. If I input the same image into a model trained on KITTI data, the result is the following disparity map.

left_14650_disp

Network Output

Hi Clement,
I want to use your model for a project I am currently working on, and I have two questions.
Firstly, up to what distance does it predict accurately?
Secondly, can we extract the real distance in meters from the depth map?
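
On the second question, here is a hedged sketch (not taken from the repo) of recovering metric depth from a disparity map via depth = focal_length * baseline / disparity. The focal length (~721 px for the 1242-px-wide KITTI images) and baseline (~0.54 m) are approximate example values, and if the network's disparity output is normalized by image width (as I believe it is), it must be multiplied by the width first:

    import numpy as np

    def disparity_to_depth(disp, width, focal_px=721.0, baseline_m=0.54):
        """Convert a disparity map to metric depth (depth = f * B / d).

        disp is assumed to be normalized by image width (multiplied back below);
        focal_px and baseline_m are approximate KITTI values, not repo constants.
        """
        disp_px = disp * width                      # disparity in pixels
        depth = np.zeros_like(disp_px)
        valid = disp_px > 0                         # depth undefined where d == 0
        depth[valid] = focal_px * baseline_m / disp_px[valid]
        return depth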

Error for testing stereo trained model

Assign requires shapes of both tensors to match. lhs shape= [7,7,3,32] rhs shape= [7,7,6,32]

Does this mean the input to a stereo model is the left and right images concatenated?
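
A hedged reading of that shape mismatch: the stereo variant's first 7x7 convolution expects 6 input channels, which is consistent with the left and right images being concatenated along the channel axis. A minimal sketch of such an input (TF 1.x; the tensor shapes are illustrative, not the repo's exact code):

    import tensorflow as tf

    # left/right: [batch, height, width, 3] image tensors (placeholders here)
    left  = tf.placeholder(tf.float32, [None, 256, 512, 3])
    right = tf.placeholder(tf.float32, [None, 256, 512, 3])

    # Concatenating along the channel axis yields a 6-channel input,
    # matching a first-layer weight shape of [7, 7, 6, 32].
    stereo_input = tf.concat([left, right], 3)   # [batch, 256, 512, 6]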

Cityscapes pretraining dataset

Hi,
just a small note: in the readme on GitHub you state that you used leftImg8bit_trainvaltest.zip and leftImg8bit_trainextra.zip, which contain only the left images; however, your filenames file (and algorithm) requires both left and right images.
Cheers

Error running monodepth_simple, kitti

Caused by op u'save/RestoreV2_7', defined at:
File "monodepth_simple.py", line 126, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "monodepth_simple.py", line 123, in main
test_simple(params)
File "monodepth_simple.py", line 80, in test_simple
train_saver = tf.train.Saver()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1051, in init
self.build()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1081, in build
restore_sequentially=self._restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 675, in build
restore_sequentially, reshape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 242, in restore_op
[spec.tensor.dtype])[0])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
dtypes=dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1264, in init
self._traceback = _extract_stack()

DataLossError (see above for traceback): Unable to open table file /home/caffelounge/depth/monodepth/models/model_kitti: Failed precondition: /home/caffelounge/depth/monodepth/models/model_kitti: perhaps your file is in a different file format and you need to use a different restore operator?
[[Node: save/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_7/tensor_names, save/RestoreV2_7/shape_and_slices)]]

Image comparison of the ground truth depth and predicted depth

I generated images from the ground truth depth data and the predicted depth data based on the stereo_files_2015.txt files that contain the exact images that were used by you. However, the predicted depth images differ significantly from the ground truth images, and this is making it hard for me to understand whether the predicted depths are really correct and can be used later. Could you please advise?

All I did was save the images using scipy in evaluate_utils.py with the following code:

  gt_path = "monodepth/gt/" + str(i) + ".png"
   
    path = "monodepth/pred/"+ str(i) + ".png"
    scipy.misc.imsave(path, pred_depth)
    scipy.misc.imsave(gt_path,gt_depth)

0_groundtruth
0_predicted

Please let me know if you need more information. These are the values I got on the 200 images:

abs_rel, sq_rel, rms, log_rms, d1_all, a1, a2, a3
0.0897, 1.2037, 5.157, 0.166, 16.366, 0.915, 0.969, 0.986
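
A hedged note, not from the thread: scipy.misc.imsave rescales each array to its own min/max before writing, so a sparse ground-truth map and a dense prediction can look very different even when the values agree. Saving both with a shared, fixed depth range (the 1-80 m range below is an assumed KITTI-style cap) makes them visually comparable:

    import matplotlib
    matplotlib.use("Agg")          # headless backend for writing files
    import matplotlib.pyplot as plt

    def save_depth(path, depth, vmin=1.0, vmax=80.0):
        """Write a depth map with a fixed value range and colormap."""
        plt.imsave(path, depth, vmin=vmin, vmax=vmax, cmap="plasma")

    # inside the same evaluation loop as the snippet above:
    save_depth("monodepth/pred/" + str(i) + ".png", pred_depth)
    save_depth("monodepth/gt/" + str(i) + ".png", gt_depth)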

obtain HD disparities

Is there a way of feeding, say, HD (1280x720) resolution images to the network and getting the disparities back in HD format?
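
Not an official answer, just a hedged sketch of the usual workaround: run the network at its training resolution and resize the predicted disparity back to the 1280x720 frame; whether the values also need rescaling by the width ratio depends on whether the disparity is in pixels or normalized by image width:

    import cv2   # assumption: OpenCV is available; any resize routine would work

    def disparity_to_hd(disp_small, out_w=1280, out_h=720, disp_in_pixels=True):
        """Upsample a low-resolution disparity map to HD."""
        disp_hd = cv2.resize(disp_small, (out_w, out_h),
                             interpolation=cv2.INTER_LINEAR)
        if disp_in_pixels:
            # pixel disparities scale with the image width
            disp_hd *= out_w / float(disp_small.shape[1])
        return disp_hd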

Not able to recreate the results

Hello,
I tried running the code to generate the disparity maps for the images you reported in your paper. The code runs without any error, but when I plot the generated disparity maps, they are nothing like the ones you reported. Am I missing something?

Not able to run monodepth_simple: Unsuccessful TensorSliceReader constructor

This is the original issue I had, but I will leave it here as it includes my setup - however the second post below contains the "Unsuccessful TensorSliceReader constructor" error.

I am having issues running the sample line of code - it looks like perhaps a clash of versions to me, but I am also not too experienced with TensorFlow and the like. Could anyone give me a hand with where to look? Thanks in advance!

I am using this command:
python monodepth_simple.py --image_path IMG_20170709_200049.jpg --checkpoint_path ./models/model_kitti

I have TensorFlow 1.0, CUDA 8.0, using Python 3.5 (should it be 2.7? .... YES!! see the next post).
Here is the list of packages in my environment:

certifi 2016.2.28 py35_0
cycler 0.10.0 py35_0
dbus 1.10.20 0
expat 2.1.0 0
fontconfig 2.12.1 3
freetype 2.5.5 2
glib 2.50.2 1
gst-plugins-base 1.8.0 0
gstreamer 1.8.0 0
icu 54.1 0
jpeg 9b 0
libffi 3.2.1 1
libgcc 5.2.0 0
libgfortran 3.0.0 1
libiconv 1.14 0
libpng 1.6.30 1
libprotobuf 3.4.0 0
libxcb 1.12 1
libxml2 2.9.4 0
matplotlib 2.0.2 np112py35_0
mkl 2017.0.3 0
numpy 1.12.1 py35_0
openssl 1.0.2l 0
pcre 8.39 1
pip 9.0.1 py35_1
protobuf 3.4.0 py35_0
pyparsing 2.2.0 py35_0
pyqt 5.6.0 py35_2
python 3.5.4 0
python-dateutil 2.6.1 py35_0
pytz 2017.2 py35_0
qt 5.6.2 5
readline 6.2 2
scipy 0.19.1 np112py35_0
setuptools 36.4.0 py35_1
sip 4.18 py35_0
six 1.10.0 py35_0
sqlite 3.13.0 0
tensorflow 1.0.1 np112py35_0
tk 8.5.18 0
wheel 0.29.0 py35_0
xz 5.2.3 0
zlib 1.2.11 0

Here is the folder structure (I get the same error running either the KITTI model or the Cityscapes one):

├── average_gradients.py
├── bilinear_sampler.py
├── IMG_20170709_200049.jpg
├── LICENSE
├── models
│   ├── model_cityscapes.data-00000-of-00001
│   ├── model_cityscapes.index
│   ├── model_cityscapes.meta
│   ├── model_kitti.data-00000-of-00001
│   ├── model_kitti.index
│   └── model_kitti.meta
├── monodepth_dataloader.py
├── monodepth_main.py
├── monodepth_model.py
├── monodepth_simple.py
├── __pycache__
│   ├── average_gradients.cpython-35.pyc
│   ├── bilinear_sampler.cpython-35.pyc
│   ├── monodepth_dataloader.cpython-35.pyc
│   └── monodepth_model.cpython-35.pyc
├── readme.md
└── utils
    ├── evaluate_kitti.py
    ├── evaluation_utils.py
    ├── filenames
    │   ├── cityscapes_test_files.txt
    │   ├── cityscapes_train_files.txt
    │   ├── cityscapes_val_files.txt
    │   ├── eigen_test_files.txt
    │   ├── eigen_train_files.txt
    │   ├── eigen_val_files.txt
    │   ├── kitti_stereo_2015_test_files.txt
    │   ├── kitti_test_files.txt
    │   ├── kitti_train_files.txt
    │   └── kitti_val_files.txt
    ├── get_model.sh
    └── kitti_archives_to_download.txt

Here is the full stack trace. It begins with a lot of the same error message (I have omitted a lot), then we see the usual stack trace:

2017-09-29 02:29:17.740915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-09-29 02:29:17.740919: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-09-29 02:29:17.740924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0)
2017-09-29 02:29:18.681968: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.700250: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.716061: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.731301: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.746996: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.759425: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.772928: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.785207: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.797946: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.809876: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
2017-09-29 02:29:18.821502: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for
....
....
Traceback (most recent call last):
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 491, in apply_op
    preferred_dtype=default_dtype)
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 702, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 110, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 99, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "monodepth_simple.py", line 110, in <module>
    tf.app.run()
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_simple.py", line 107, in main
    test_simple(params)
  File "monodepth_simple.py", line 53, in test_simple
    model = MonodepthModel(params, "test", left, None)
  File "/home/n1k31t4/sandbox/sigra-prep/depth/monodepth/monodepth_model.py", line 49, in __init__
    self.build_model()
  File "/home/n1k31t4/sandbox/sigra-prep/depth/monodepth/monodepth_model.py", line 291, in build_model
    self.left_pyramid = self.scale_pyramid(self.left, 4)
  File "/home/n1k31t4/sandbox/sigra-prep/depth/monodepth/monodepth_model.py", line 81, in scale_pyramid
    scaled_imgs.append(tf.image.resize_area(img, [nh, nw]))
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/ops/gen_image_ops.py", line 738, in resize_area
    align_corners=align_corners, name=name)
  File "/home/n1k31t4/anaconda3/envs/depth/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 500, in apply_op
    repr(values), type(values).__name__))
TypeError: Expected int32 passed to parameter 'size' of op 'ResizeArea', got [<tf.Tensor 'model/truediv:0' shape=() dtype=float64>, <tf.Tensor 'model/truediv_1:0' shape=() dtype=float64>] of type 'list' instead.

Some questions

Hi

Thanks for your wonderful work!
After reading your papers, I have a few questions. It would be very helpful if you could share some information.

  1. Scale issue. Your paper did not mention anything about the well-known scale ambiguity of monocular depth methods. Do you think your algorithm/network overcomes the scale issue?

  2. Gray vs. color images. Did you ever try grayscale images under the same conditions? If you did, could you share any comparison or opinion?

  3. KITTI full resolution. It seems that your stereo result is good. Did you evaluate on the official KITTI evaluation server with full-resolution images, or do you plan to?

Regards,
CJ

Error when trying it on my image

Hello,
I just want to try it on my own image.
I downloaded the cityscapes model.

I use the command:
python monodepth_simple.py --image_path ~/data/image074.png --checkpoint_path ~/models/model_cityscapes

I got this error:
Traceback (most recent call last):
  File "monodepth_simple.py", line 110, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_simple.py", line 107, in main
    test_simple(params)
  File "monodepth_simple.py", line 57, in test_simple
    input_image = scipy.misc.imresize(input_image, [args.input_height, args.input_width], interp='lanczos')
  File "/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 487, in imresize
    imnew = im.resize(size, resample=func[interp])
KeyError: 'lanczos'

My image is a PNG and is 640*360.

Could you help me with this problem?

Thank you.
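
A hedged workaround, not from the thread: some scipy/PIL combinations do not include 'lanczos' in imresize's interpolation table, which produces exactly this KeyError. Upgrading Pillow may help; otherwise, swapping the interpolation mode in the call shown in the traceback is a quick fallback (a sketch of that one line, not the repo's official fix):

    # in monodepth_simple.py, test_simple(): use an interpolation mode
    # that older scipy/PIL versions support
    input_image = scipy.misc.imresize(
        input_image, [args.input_height, args.input_width], interp='bilinear')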

Trying to run test, getting an error

python monodepth_main.py --mode test --data_path data/kitti --filenames_file utils/filenames/kitti_test_files.txt --log_directory tmp/ --checkpoint_path tmp/my_model/model-181250.
All file references resolve correctly; I also tried this with the stereo filenames, same error.

Traceback (most recent call last):
  File "monodepth_main.py", line 254, in <module>
    tf.app.run()
  File "/home/richard/.virtualenvs/cvp3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 251, in main
    test(params)
  File "monodepth_main.py", line 187, in test
    model = MonodepthModel(params, args.mode, left, right)
  File "/home/richard/opencv3-p3-code/monodepth/monodepth_model.py", line 49, in __init__
    self.build_model()
  File "/home/richard/opencv3-p3-code/monodepth/monodepth_model.py", line 291, in build_model
    self.left_pyramid  = self.scale_pyramid(self.left,  4)
  File "/home/richard/opencv3-p3-code/monodepth/monodepth_model.py", line 81, in scale_pyramid
    scaled_imgs.append(tf.image.resize_area(img, [nh, nw]))
  File "/home/richard/.virtualenvs/cvp3/lib/python3.5/site-packages/tensorflow/python/ops/gen_image_ops.py", line 757, in resize_area
    align_corners=align_corners, name=name)
  File "/home/richard/.virtualenvs/cvp3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 499, in apply_op
    repr(values), type(values).__name__))
TypeError: Expected int32 passed to parameter 'size' of op 'ResizeArea', got [<tf.Tensor 'model/truediv:0' shape=() dtype=float64>, <tf.Tensor 'model/truediv_1:0' shape=() dtype=float64>] of type 'list' instead
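
A hedged workaround rather than an official fix: the 'model/truediv' tensors in the error come from dividing the image height/width (taken from tf.shape) with Python 3's true division, which yields float64 sizes that ResizeArea rejects. A sketch of scale_pyramid using integer division (structure inferred from the traceback, not copied from the repo):

    import tensorflow as tf

    def scale_pyramid(img, num_scales):
        """Build a pyramid of area-resized images at 1/2, 1/4, ... resolution."""
        scaled_imgs = [img]
        s = tf.shape(img)
        h, w = s[1], s[2]
        for i in range(num_scales - 1):
            ratio = 2 ** (i + 1)
            nh = h // ratio    # '//' keeps the target sizes int32 under Python 3
            nw = w // ratio
            scaled_imgs.append(tf.image.resize_area(img, [nh, nw]))
        return scaled_imgs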

Shape mismatch error with "--do_stereo"

Hi

I tried to run an inference test with your pretrained model with "--do_stereo", but I saw the following error. Without "--do_stereo", all tests completed without any error. Please let me know how to resolve this.

Regards,
-CJ

tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [7,7,6,32] rhs shape= [7,7,3,32]
[[Node: save/Assign_37 = Assign[T=DT_FLOAT, _class=["loc:@model/encoder/Conv/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](model/encoder/Conv/weights, save/RestoreV2_37/_129)]]

RandomShuffleQueue running out of elements.

When attempting to train, the queue behind the shuffle batch keeps running out before even one epoch. The exact iteration isn't deterministic, but is usually around iteration 150.

The exact command I'm using is as follows:

python monodepth_main.py --mode train --model_name my_model --data_path <path to my data> --filenames_file utils/filenames/kitti_train_files.txt --log_directory logs --num_gpus=4

I've attempted to greatly increase min_after_dequeue in monodepth_dataloader by about a factor of 10 (which has helped me with this problem in the past), but it had no effect. In fact, greatly reducing it also had no effect on when the queue emptied. I haven't made any changes to the code, so I'm not sure what could be causing issues.

I'm using TF 1.1.0, CUDA 8.0.61, and Python 2.7.12 on Ubuntu.

And thanks for sharing your code!
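
One hedged thing to check (not suggested in the thread): the input queue can drain early if entries in the filenames file do not resolve under --data_path and are skipped silently. A small sanity check that every listed image exists:

    import os

    data_path = "<path to my data>"    # same value passed to --data_path
    filenames_file = "utils/filenames/kitti_train_files.txt"

    missing = []
    with open(filenames_file) as f:
        for line in f:
            for rel_path in line.split():
                if not os.path.isfile(os.path.join(data_path, rel_path)):
                    missing.append(rel_path)

    print("missing files: %d" % len(missing))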

Problem starting training

Hi, I'm a beginner in deep learning.
I want to start training with your code, but I have a problem and I don't understand why it occurs.
My command line is: "python monodepth_main.py --mode train --model_name my_model --data_path ~/monodepth-master/my_data/ --filenames_file ./utils/filenames/file_list.txt --log_directory ./tmp/my_model/ --num_threads 16 --batch_size 100 --input_height 721 --input_width 1491 --num_epochs 20 --num_gpus 1"

My data sample size is 18750.
The data path is /home/jiil/monodepth-master/my_data.
The content of file_list.txt is:
"left_rectify_images/1.png
right_rectify_images/1.png
...
left_rectify_images/9375.png
right_rectify_images/9375.png"

When I run the command, I get this error (a possible cause is sketched after the log below).

jiil@jiil-ubuntu:~/monodepth-master$ python monodepth_main.py --mode train --model_name my_model --data_path ~/monodepth-master/my_data/ --filenames_file ./utils/filenames/file_list.txt --log_directory ./tmp/my_model/ --num_threads 16 --batch_size 100 --input_height 721 --input_width 1491 --num_epochs 20 --num_gpus 1
total number of samples: 18750
total number of steps: 3760
/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2017-09-06 11:50:32.089024: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-06 11:50:32.089049: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-06 11:50:32.089054: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-06 11:50:32.089059: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-06 11:50:32.089063: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
number of trainable parameters: 31600072
2017-09-06 11:50:35.470142: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: slice index 1 of dimension 0 out of bounds.
2017-09-06 11:50:35.499116: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: slice index 1 of dimension 0 out of bounds.
....
2017-09-06 11:50:56.195857: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
....
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.198948: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.198982: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199015: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199048: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199083: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199118: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199152: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199184: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199217: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199251: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199284: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199318: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199351: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199384: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199418: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199452: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199485: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199517: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199549: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199581: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199614: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199647: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199680: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199712: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199746: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.199780: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.213825: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.213895: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
2017-09-06 11:50:56.213934: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
Traceback (most recent call last):
  File "monodepth_main.py", line 252, in <module>
    tf.app.run()
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 247, in main
    train(params)
  File "monodepth_main.py", line 163, in train
    _, loss_value = sess.run([apply_gradient_op, total_loss])
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

Caused by op u'shuffle_batch', defined at:
  File "monodepth_main.py", line 252, in <module>
    tf.app.run()
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 247, in main
    train(params)
  File "monodepth_main.py", line 95, in train
    dataloader = MonodepthDataloader(args.data_path, args.filenames_file, params, args.dataset, args.mode)
  File "/home/jiil/monodepth-master/monodepth_dataloader.py", line 63, in __init__
    params.batch_size, capacity, min_after_dequeue, params.num_threads)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 1220, in shuffle_batch
    name=name)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 791, in _shuffle_batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 457, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1342, in _queue_dequeue_many_v2
    timeout_ms=timeout_ms, name=name)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/jiil/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

I would appreciate it if you could provide a clue to the solution.

Depth of 2D images

Is it possible to use the pre-trained weights from the KITTI model to run my own test set of 2D images and get depth estimates? How can I achieve this on my data, where I don't have both left and right images?
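
A hedged sketch of what this could look like, assuming the pre-trained KITTI checkpoint has already been downloaded and unpacked (the image and checkpoint paths below are placeholders); monodepth_simple.py only needs a single input image plus a checkpoint prefix, no right view:

python monodepth_simple.py --image_path ~/my_images/photo.jpg --checkpoint_path ~/models/model_kitti/model_kitti

Without calibration, the output is a relative disparity map rather than metric depth.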

Train with --do_stereo flag on

When I try to train with the --do_stereo flag set, it gives me the following error:

  File "/home/monodepth/monodepth_main.py", line 266, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/monodepth/monodepth_main.py", line 261, in main
    train(params)
  File "/home/monodepth/monodepth_main.py", line 124, in train
    model = MonodepthModel(params, args.mode, left_splits[i], right_splits[i], reuse_variables, i)
  File "/home/monodepth/monodepth_model.py", line 50, in __init__
    self.build_outputs()
  File "/home/monodepth/monodepth_model.py", line 320, in build_outputs
    self.left_est  = [self.generate_image_left(self.right_pyramid[i], self.disp_left_est[i])  for i in range(4)]
AttributeError: 'MonodepthModel' object has no attribute 'right_pyramid'

Seems like in build_model(), right_pyramid is only defined if --do_stereo flag is not set:

self.left_pyramid  = self.scale_pyramid(self.left,  4)
if self.mode == 'train' and not self.params.do_stereo:
        self.right_pyramid = self.scale_pyramid(self.right, 4)

Whereas, build_outputs() tries to access right_pyramid irrespective of the --do_stereo flag:

 # GENERATE IMAGES
 with tf.variable_scope('images'):
        self.left_est  = [self.generate_image_left(self.right_pyramid[i], self.disp_left_est[i])  for i in range(4)]
        self.right_est = [self.generate_image_right(self.left_pyramid[i], self.disp_right_est[i]) for i in range(4)]

Any idea how to solve this?
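
One possible workaround, as a minimal untested sketch rather than an official fix: build the right pyramid in train mode regardless of the stereo flag, so build_outputs() can always index it:

self.left_pyramid  = self.scale_pyramid(self.left,  4)
if self.mode == 'train':
    self.right_pyramid = self.scale_pyramid(self.right, 4)

The alternative would be to guard the image-generation block in build_outputs() with the same do_stereo condition.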

Model about stereo version

In your paper you said that you also implemented a stereo version of your model. Your test results showed that the stereo version improved the D1-all disparity measure.

But you also said that "This model was only trained for 12 epochs as it becomes unstable if trained for longer". Why did this happen?

If I want to train my own model on my own training data (indoor scenes) with the stereo version of the model, do I need to change your source code?

If we only consider the D1 error, the stereo version of the model is similar to the traditional stereo matching algorithm named "elas", which means this depth estimation method could be used in some practical applications.

Evaluation result on eigen test split

Hi Clement,

I have downloaded your pre-trained model on the eigen split. After calling

python monodepth_main.py --mode test --data_path KITTI_raw_data/ \
    --filenames_file utils/filenames/eigen_test_files.txt \
    --output_directory UnsuperviseMono/eigen_pretrain \
    --checkpoint_path UnsuperviseMono/pretrain_model/model_eigen

I got disparities.npy. Then I called

python utils/evaluate_kitti.py --split eigen --predicted_disp_path UnsuperviseMono/eigen_pretrain/disparities.npy \
    --gt_path KITTI_raw_data/

to get the result

0 files missing
   abs_rel,     sq_rel,        rms,    log_rms,     d1_all,         a1,         a2,         a3
    0.9966,    15.9183,     19.672,      6.374,      0.000,      0.000,      0.000,      0.000

which is not consistent with that of your paper.

But the result for KITTI 200 training images is consistent with that in your paper.

I checked the evaluation code in evaluate_kitti.py, and found that you have commented out some of the code:

    elif args.split == 'eigen':
        num_samples = 697
        test_files = read_text_lines(args.gt_path + 'eigen_test_files.txt')
        gt_files, gt_calib, im_sizes, im_files, cams = read_file_data(test_files, args.gt_path)

        num_test = len(im_files)
        gt_depths = []
        pred_depths = []
        for t_id in range(num_samples):
            camera_id = cams[t_id]  # 2 is left, 3 is right
            depth = generate_depth_map(gt_calib[t_id], gt_files[t_id], im_sizes[t_id], camera_id, False, True)
            gt_depths.append(depth.astype(np.float32))

            disp_pred = cv2.resize(pred_disparities[t_id], (im_sizes[t_id][1], im_sizes[t_id][0]), interpolation=cv2.INTER_LINEAR)
            # disp_pred = disp_pred * disp_pred.shape[1]

            # # need to convert from disparity to depth
            # focal_length, baseline = get_focal_length_baseline(gt_calib[t_id], camera_id)
            # depth_pred = (baseline * focal_length) / disp_pred
            # depth_pred[np.isinf(depth_pred)] = 0

            pred_depths.append(disp_pred)

I tried uncommenting it, but the result is still weird. Could you have a look at it?

Thank you for your help and time.

Did you predict the left and right disparity from only the left image?

In the "monodepth_model.py", if "do_stereo" is false, the input for the network will be only the left image:

if self.params.do_stereo:
    self.model_input = tf.concat([self.left, self.right], 3)
else:
    self.model_input = self.left

But the output of the network contains both left disparity and right disparity:

def get_disp(self, x):
    disp = 0.3 * self.conv(x, 2, 3, 1, tf.nn.sigmoid)
    return disp

Is this the case in your code? If so, why not use the right image to predict the right disparity?

Thank you very much.

parameters setting for fine-tuning the model on kitti dataset using pre-trained model on cityscapes?

Hi,
I want to ask: what are your parameter settings for fine-tuning the model on the KITTI dataset using the model pre-trained on Cityscapes? Do you just use the default parameters in your code for the results in your paper?
Is the following command the right way to do so?
python monodepth_main.py --mode train --model_name xxx --data_path dataset/kitti/ --filenames_file utils/filenames/kitti_train_files.txt --log_directory tmp --checkpoint_path pre-trainedModel/model_cityscapes/model_cityscapes --retrain

Thanks for your help!

Evaluation on KITTI split is not good as your result

Hi
First of all, I would like to thank you for sharing your wonderful work, but there are some things I'd like to clarify.

The file "utils/filenames/kitti_test_files.txt" contains 12223 test samples, and when I ran inference
using your pretrained weight, which I named as "vgg_pretrained" as follows:

python monodepth_main.py --mode test --data_path ~/tf/data/kitti/ --output_directory ~/tf/data/monodepth/my_run/vgg_pretrained/result --filenames_file utils/filenames/kitti_test_files.txt --log_directory tmp/ --checkpoint_path ~/tf/data/monodepth/my_run/vgg_pretrained/model/kitti/model_kitti

This inference run generated an almost 6 GiB disparities.npy. After that, I ran your evaluation script as:

python utils/evaluate_kitti.py --split kitti --predicted_disp_path ~/tf/data/monodepth/my_run/vgg_pretrained/result/disparities.npy --gt_path ~/tf/data/kitti/

But I got a quite different result, and the evaluation script only checks the first 200 samples:

abs_rel, sq_rel, rms, log_rms, d1_all, a1, a2, a3
0.3232, 5.2624, 10.887, 0.498, 61.258, 0.547, 0.710, 0.822

By the way, when I ran the evaluation on your KITTI disparities.npy(~100MiB), which I downloaded:

python utils/evaluate_kitti.py --split kitti --predicted_disp_path ~/tf/data/monodepth/ref/vgg/result/disparities.npy --gt_path ~/tf/data/kitti/

I got the same result as Table 1 in your paper.
abs_rel, sq_rel, rms, log_rms, d1_all, a1, a2, a3
0.1235, 1.3882, 6.125, 0.217, 30.272, 0.841, 0.936, 0.975

But I don't know why you evaluated on only 200 samples; your disparities.npy file also contains only 200 samples, whereas "utils/filenames/kitti_test_files.txt" contains 12223 test samples.
I think the evaluation result might differ because of this. Could you let me know whether the evaluation should be run on all 12223 samples, and could I get your full file (disparities.npy) for that?
Thanks in advance.

CJ

How to feed images to the model

Hi
I am trying to use your models in my project, so maybe you can share an easy way to feed images to the model and get an estimate. I loaded the graph and model, and it seems that I need to get a tensor by name and feed images in some format.
Sorry, I cannot debug the project myself because downloading KITTI takes too much time.
Thanks
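
A minimal sketch of one way to do this, loosely following the pattern of monodepth_simple.py; the image path, checkpoint prefix and parameter values below are placeholders/assumptions rather than repository defaults:

import numpy as np
import scipy.misc
import tensorflow as tf
from monodepth_model import MonodepthModel, monodepth_parameters

params = monodepth_parameters(
    encoder='vgg', height=256, width=512, batch_size=2, num_threads=1,
    num_epochs=1, do_stereo=False, wrap_mode='border', use_deconv=False,
    alpha_image_loss=0.85, disp_gradient_loss_weight=0.1,
    lr_loss_weight=1.0, full_summary=False)

left = tf.placeholder(tf.float32, [2, 256, 512, 3])
model = MonodepthModel(params, 'test', left, None)

# one RGB image, resized to the network input size and scaled to [0, 1]
img = scipy.misc.imresize(scipy.misc.imread('my_image.jpg', mode='RGB'), [256, 512]).astype(np.float32) / 255.0
batch = np.stack([img, np.fliplr(img)], 0)  # original plus flipped copy, as monodepth_simple.py does

with tf.Session() as sess:
    tf.train.Saver().restore(sess, 'models/model_kitti/model_kitti')  # checkpoint prefix (placeholder path)
    disp = sess.run(model.disp_left_est[0], feed_dict={left: batch})  # first-scale left disparity
print(disp.shape)  # (2, 256, 512, 1)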

InvalidArgumentError (see above for traceback): Incompatible shapes: [8,94,310,1] vs. [8,94,309,1]

Hi, mrharicot
Thanks for your work!
I got the following error while training the model using the original image size of KITTI:
InvalidArgumentError (see above for traceback): Incompatible shapes: [8,94,310,1] vs. [8,94,309,1]

Training code:
python monodepth_main.py --mode train --model_name kitti_model --data_path /mnt/nfs/ybwang/data/monodepth/ \
    --filenames_file ./utils/filenames/kitti_train_files.txt --input_height 376 --input_width 1241 --log_directory ./logs/
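
A hedged guess at the cause (not a confirmed answer): 1241 does not divide evenly through the pyramid scales, so at one scale the predicted disparity and the downsampled image end up one pixel apart (310 vs 309). Training with input dimensions that divide cleanly, e.g. the defaults, appears to avoid this; the dataloader resizes the images to --input_height/--input_width, so the files on disk can stay at their original size:

python monodepth_main.py --mode train --model_name kitti_model --data_path /mnt/nfs/ybwang/data/monodepth/ \
    --filenames_file ./utils/filenames/kitti_train_files.txt --input_height 256 --input_width 512 --log_directory ./logs/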

Error when trying to run on an image

hi,

When doing the simple call to run the algorithm on my image with the command below, found in the readme:
python monodepth_simple.py --image_path ~/monodepth/img/20170828_121009.jpg --checkpoint_path ~/monodepth/models/model_eigen

I got a set of errors starting with:
W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /home/tets/monodepth/models/model_eigen: Failed precondition: /home/tets/monodepth/models/model_eigen: perhaps your file is in a different file format and you need to use a different restore operator?

I suspect it relates to the way I named the models, but I'm not sure.
Under ~/monodepth/model/model_eigen/ I just have the two 365 MB files, disparities.npy and disparities_pp.npy, which I extracted from your model page.

I have TensorFlow 1.2.1 installed and CUDA 8.0, running a GTX 980 Ti.

Not sure what I am doing wrong. Also, I do not see any checkpoint files in the folders; should I have some?

Can you help please?
thank you

Tets

How to train on new dataset

Hello @mrharicot:

I am trying to train your model on our own dataset using

python monodepth_main.py --mode train --model_name my_model --data_path ./train/ \
--filenames_file ./train/test.txt --log_directory . --num_gpus=1 

All the image files have been put in the ./train/ folder and the filenames of the images are collected in ./train/test.txt.

However, I got the following errors:

total number of samples: 1023
total number of steps: 6400
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
....
number of trainable parameters: 31600072
2017-06-08 06:28:10.137207: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: slice index 1 of dimension 0 out of bounds.
2017-06-08 06:28:10.222068: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: slice index 1 of dimension 0 out of bounds.
.....
2017-06-08 06:28:11.555313: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
....
Caused by op u'shuffle_batch', defined at:
  File "monodepth_main.py", line 252, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 247, in main
    train(params)
  File "monodepth_main.py", line 95, in train
    dataloader = MonodepthDataloader(args.data_path, args.filenames_file, params, args.dataset, args.mode)
  File "/home/theiadl/3D_Train/monodepth_dataloader.py", line 63, in __init__
    params.batch_size, capacity, min_after_dequeue, params.num_threads)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 1214, in shuffle_batch
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 784, in _shuffle_batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 458, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1328, in _queue_dequeue_many_v2
    timeout_ms=timeout_ms, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0)
	 [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

Is there any setting that I am missing?

Any suggestions would be appreciated!

Thanks!!

loss_value

Thanks for your work. I have run the program successfully! I have been reading the code for five days,
but I don't understand this line:
_, loss_value = sess.run([apply_gradient_op, total_loss])
What is the meaning of loss_value?
Thanks

Depth estimation on custom images

I trained the network on the KITTI dataset and wanted to get the depth of some custom images as my test set. I have only one set of images (not stereo), but the depth I am getting doesn't look very good. Any suggestions on what I can do? Please see the images that I saved from the disparity maps generated on the custom images.

(attached example disparity images: 1, 4, 2)

model

I'm sorry if I troubled you. I want to know whether your pre-trained model is mono or stereo. Thank you!!! I see in your code that '--do_stereo' is default true.

disparity format

I just want to use the function in bilinear_sampler.py. In what format does it require the images and the disparity map to be?

To elaborate: should the disparity be in a range of 0-255 where 255 is the max disparity, in pixel disparity values, or scaled between 0 and 1?

Thank you
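
For what it is worth, a hedged reading of the code rather than an authoritative answer: get_disp outputs 0.3 * sigmoid, and evaluate_kitti.py multiplies predictions by the image width before treating them as pixel disparities, so bilinear_sampler_1d_h appears to expect float images together with a disparity expressed as a fraction of the image width, not 0-255 values. A sketch under those assumptions (the negative sign mirrors generate_image_left):

import tensorflow as tf
from bilinear_sampler import bilinear_sampler_1d_h

right = tf.placeholder(tf.float32, [1, 256, 512, 3])        # image values in [0, 1]
disp_pixels = tf.placeholder(tf.float32, [1, 256, 512, 1])  # disparity in pixels
disp = disp_pixels / 512.0                                  # normalize by image width
left_reconstructed = bilinear_sampler_1d_h(right, -disp)    # warp the right image toward the left view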

Getting depth out of a depth map

Hi,
I ran monodepth on an image and got a depth map. Could someone tell me how, using this depth map, to get the depth (in meters) to any particular object or point in my image? Thanks.
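
The commented-out lines in utils/evaluate_kitti.py (quoted in the eigen evaluation issue above) show the conversion the authors use: depth = baseline * focal_length / disparity, after scaling the network's width-normalized output to pixel disparities. A rough sketch under those assumptions; the numbers in the usage comment are placeholders, and metric depth only makes sense if the camera geometry matches the training setup:

import numpy as np

def disparity_to_depth(disp_normalized, image_width, focal_length_px, baseline_m):
    # scale from width-normalized disparity to pixel disparity
    disp_px = disp_normalized * image_width
    depth_m = (baseline_m * focal_length_px) / disp_px
    depth_m[np.isinf(depth_m)] = 0
    return depth_m

# usage (placeholder calibration values):
# depth = disparity_to_depth(disp, image_width=1242, focal_length_px=720.0, baseline_m=0.54)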

disagreement between the model in the code and the model described in the paper?

Hi,
According to the 'encoder' part of the model described in your paper,
I think conv_block in the code should be

def conv_block(self, x, num_out_layers, kernel_size):
    conv1 = self.conv(x, num_out_layers, kernel_size, 2)
    conv2 = self.conv(conv1, num_out_layers, kernel_size, 1)
    return conv2

instead of

def conv_block(self, x, num_out_layers, kernel_size):
    conv1 = self.conv(x, num_out_layers, kernel_size, 1)
    conv2 = self.conv(conv1, num_out_layers, kernel_size, 2)
    return conv2

Am I understanding your paper or code correctly?
Looking forward to your response. Thanks!

Unable to open table file /home/ubuntu/monodepth/model_kitti.data-00000-of-00001

Hi,

I am trying to run the pre-trained model for the KITTI dataset and have downloaded the necessary model from the link specified in the documentation. However, I am getting an error:

Traceback (most recent call last):
  File "monodepth_main.py", line 253, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 250, in main
    test(params)
  File "monodepth_main.py", line 206, in test
    train_saver.restore(sess, restore_path)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1548, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file /home/ubuntu/monodepth/model_kitti.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
	 [[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
	 [[Node: save/RestoreV2_47/_77 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_206_save/RestoreV2_47", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op u'save/RestoreV2_1', defined at:
  File "monodepth_main.py", line 253, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_main.py", line 250, in main
    test(params)
  File "monodepth_main.py", line 193, in test
    train_saver = tf.train.Saver()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1139, in __init__
    self.build()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1170, in build
    restore_sequentially=self._restore_sequentially)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 691, in build
    restore_sequentially, reshape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 247, in restore_op
    [spec.tensor.dtype])[0])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 640, in restore_v2
    dtypes=dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

DataLossError (see above for traceback): Unable to open table file /home/ubuntu/monodepth/model_kitti.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
	 [[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
	 [[Node: save/RestoreV2_47/_77 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_206_save/RestoreV2_47", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

I used the following command to run the script:
python monodepth_main.py --mode test --data_path ~/data/KITTI/ --filenames_file ~/monodepth/t.txt --checkpoint_path ~/monodepth/model_kitti.data-00000-of-00001

Do I need to convert the model file to some other format to run it?
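
For what it is worth, this error usually means the .data-00000-of-00001 shard was passed where TensorFlow expects the checkpoint prefix shared by the .data, .index and .meta files. A hedged sketch of the same command with the prefix only (assuming the three checkpoint files sit directly in ~/monodepth/):

python monodepth_main.py --mode test --data_path ~/data/KITTI/ --filenames_file ~/monodepth/t.txt --checkpoint_path ~/monodepth/model_kitti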

Potential bug in get_disparity_smoothness

Hi,

I think I've found a bug in the function get_disparity_smoothness in monodepth_model.py. The last line concatenates two lists instead of adding them elementwise, so the actual return value is a list of 8 tensors of shape [B,H,W,1]. Thus the way the function is used may be inappropriate.
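
A minimal sketch of the fix being suggested, assuming the function builds per-scale lists named smoothness_x and smoothness_y (this is the reporter's reading, not a confirmed patch): sum the two terms per scale instead of concatenating the lists, so the function returns 4 tensors rather than 8.

# inside get_disparity_smoothness, replace the final line
# return smoothness_x + smoothness_y                             # list concatenation: 8 tensors
return [smoothness_x[i] + smoothness_y[i] for i in range(4)]     # elementwise sum: 4 tensors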

Video as input

Hi,
I was trying to understand the way you structured your code.
After having a careful look at it, it seems that the code accepts all the test images (their names in the filenames file) at once during the call to
dataloader = MonodepthDataloader(args.data_path, args.filenames_file, params, args.dataset, args.mode)
Then, based on the loaded data files, you build the model using
model = MonodepthModel(params, args.mode, left, right)
Finally, when you run the session, I assume it gets the image using the filename and generates the output.
I don't really get what exactly model.disp_left_est[0] denotes in the call sess.run(model.disp_left_est[0]).
Can you please help me understand the structure of the code for mode=test?

I want to modify the code to receive a video as input and generate a disparity video as output using MoviePy. Understanding the code structure will really help me in this regard. Thank you.

Error while trying to test it for just one image

Hi,
I came across your paper recently and I appreciate your work. I am getting an error when I download your pre-existing model and try it on one of my own test images. I would appreciate it if anybody could help me with this. Thanks.

My command:
python3 monodepth_simple.py --image_path ~/monodepth/right_image.pgm --checkpoint_path ~/monodepth/model_kitti.data-00000-of-00001

My image specs:
type: .pgm file
dimensions: width = 640,height = 480

Error I get:

Traceback (most recent call last):
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op
    preferred_dtype=default_dtype)
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 676, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 121, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 376, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "monodepth_simple.py", line 117, in <module>
    tf.app.run()
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_simple.py", line 114, in main
    test_simple(params)
  File "monodepth_simple.py", line 60, in test_simple
    model = MonodepthModel(params, "test", left, None)
  File "/home/hannavaj/monodepth/monodepth_model.py", line 49, in __init__
    self.build_model()
  File "/home/hannavaj/monodepth/monodepth_model.py", line 291, in build_model
    self.left_pyramid = self.scale_pyramid(self.left, 4)
  File "/home/hannavaj/monodepth/monodepth_model.py", line 81, in scale_pyramid
    scaled_imgs.append(tf.image.resize_area(img, [nh, nw]))
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_image_ops.py", line 821, in resize_area
    align_corners=align_corners, name=name)
  File "/home/hannavaj/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 499, in apply_op
    repr(values), type(values).__name__))
TypeError: Expected int32 passed to parameter 'size' of op 'ResizeArea', got [<tf.Tensor 'model/truediv:0' shape=() dtype=float64>, <tf.Tensor 'model/truediv_1:0' shape=() dtype=float64>] of type 'list' instead.
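
This looks like the same TensorFlow-version issue discussed in the "some question about the result d1_all" thread below: h / ratio produces a float tensor that tf.image.resize_area will not accept as a size. A hedged sketch of one possible patch to scale_pyramid in monodepth_model.py, assuming the input placeholder has a fixed, known size (as it does in monodepth_simple.py):

def scale_pyramid(self, img, num_scales):
    scaled_imgs = [img]
    h = int(img.get_shape().as_list()[1])  # static height, e.g. 256
    w = int(img.get_shape().as_list()[2])  # static width, e.g. 512
    for i in range(num_scales - 1):
        ratio = 2 ** (i + 1)
        scaled_imgs.append(tf.image.resize_area(img, [h // ratio, w // ratio]))
    return scaled_imgs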

Question about performance on small scene

Hi
Your work is wonderful!
I have a few questions and wonder if you could share some information.
Your method produces state-of-the-art results on the KITTI dataset, which is an outdoor, large-scale scene. I am curious about the performance of your model on small-scale scenes (such as small toys, where multi-view stereo performs well). Do you think your method can achieve good accuracy on a small-scale scene dataset after being trained on it?

Regards,
JH

Cannot load params on Ipython

Hi,
I am trying to run the model on a single image for visualization in IPython but cannot load it; my code is more or less:

param_file = './model_city2kitti_resnet'
params = monodepth_parameters(
    encoder='resnet50',
    height=256,
    width=512,
    batch_size=1,
    num_threads=1,
    num_epochs=1,
    do_stereo=False,
    wrap_mode='border',
    use_deconv=False,
    alpha_image_loss=0.,
    disp_gradient_loss_weight=0.,
    lr_loss_weight=0.,
    full_summary=False)
right = tf.placeholder(tf.float32, shape=(None, 512, 256, 3))
left = tf.placeholder(tf.float32, shape=(None, 512, 256, 3))
model = MonodepthModel(params, 'test', left, right)
saver = tf.train.Saver()
saver.restore(sess, param_file)

and I get this error
959 'Cannot feed value of shape %r for Tensor %r, '
960 'which has shape %r'
--> 961 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
962 if not self.graph.is_feedable(subfeed_t):
963 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (13,) for Tensor u'save/Const:0', which has shape '()'

I am a bit new to TF so might be a noob mistake...

Thanks!

some question about the result d1_all

I used the model_kitti model provided by the author and a model trained by myself. Both of them got a result like this:
abs_rel, sq_rel, rms, log_rms, d1_all, a1, a2, a3
0.1172, 1.1762, 5.954, 0.209, 30.927, 0.844, 0.941, 0.977
d1_all is about 30, far from the result in the paper (9.194). How can I fix it?
P.S. The only thing I modified is the function scale_pyramid in monodepth_model.py (it had a problem, maybe because I use TF 1.3):

def scale_pyramid(self, img, num_scales):
    scaled_imgs = [img]
    s = tf.shape(img)
    # h = s[1]
    # w = s[2]
    h = img.shape[1]
    w = img.shape[2]
    for i in range(num_scales - 1):
        ratio = 2 ** (i + 1)
        nh = int(int(h) / ratio)
        nw = int(int(w) / ratio)
        scaled_imgs.append(tf.image.resize_area(img, tf.constant([nh, nw])))
    return scaled_imgs

Thanks

TEST KeyError: 'lanczos'

When I test model_kitti, a problem occurred:

Traceback (most recent call last):
  File "monodepth_simple.py", line 120, in <module>
    tf.app.run()
  File "/home/akasha/tensorflow-gpu/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "monodepth_simple.py", line 117, in main
    test_simple(params)
  File "monodepth_simple.py", line 67, in test_simple
    input_image = scipy.misc.imresize(input_image, [args.input_height, args.input_width], interp='lanczos')
  File "/usr/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 487, in imresize
    imnew = im.resize(size, resample=func[interp])
KeyError: 'lanczos'

I need help
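
A hedged workaround: the installed PIL/scipy build apparently has no 'lanczos' entry in its interpolation table, so either upgrade Pillow or switch the resize call in monodepth_simple.py to a filter that is available, e.g. bilinear. A one-line sketch of the latter:

input_image = scipy.misc.imresize(input_image, [args.input_height, args.input_width], interp='bilinear')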
