
r2cnn_fpn_tensorflow's Introduction

R2CNN: Rotational Region CNN for Orientation Robust Scene Text Detection

Recommend improved code: https://github.com/DetectionTeamUCAS

A TensorFlow implementation of FPN, and of the R2CNN detection framework built on FPN.
You can refer to the papers R2CNN: Rotational Region CNN for Orientation Robust Scene Text Detection and Feature Pyramid Networks for Object Detection.
For other rotation detection methods, see R-DFPN, RRPN and R2CNN_HEAD.
If this project is useful to you, please star it to support my work. Thanks.

Citation

Some related work based on this code:

@article{yang2018position,
	title={Position Detection and Direction Prediction for Arbitrary-Oriented Ships via Multitask Rotation Region Convolutional Neural Network},
	author={Yang, Xue and Sun, Hao and Sun, Xian and  Yan, Menglong and Guo, Zhi and Fu, Kun},
	journal={IEEE Access},
	volume={6},
	pages={50839-50849},
	year={2018},
	publisher={IEEE},
	url={https://ieeexplore.ieee.org/document/8464244}
}

@article{yang2018r-dfpn,
	title={Automatic ship detection in remote sensing images from Google Earth of complex scenes based on multiscale rotation dense feature pyramid networks},
	author={Yang, Xue and Sun, Hao and Fu, Kun and Yang, Jirui and Sun, Xian and Yan, Menglong and Guo, Zhi},
	journal={Remote Sensing},
	volume={10},
	number={1},
	pages={132},
	year={2018},
	publisher={Multidisciplinary Digital Publishing Institute},
	url={http://www.mdpi.com/2072-4292/10/1/132}
} 

Configuration Environment

Ubuntu (encoding problems may occur on Windows) + Python 2 + TensorFlow 1.2 + cv2 + CUDA 8.0 + GeForce GTX 1080
If you want to run on CPU, set use_gpu = False for the NMS and IoU functions in cfgs.py.
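For example (a sketch only; check the real cfgs.py for the exact flag names, which may differ from the assumed ones below):

# libs/configs/cfgs.py (sketch)
# Setting the GPU flags to False makes the rotated NMS and IoU wrappers
# take their CPU code paths instead of the compiled CUDA extensions.
NMS_USE_GPU = False
IOU_USE_GPU = False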
You can also use a Docker environment: docker pull yangxue2docker/tensorflow3_gpu_cv2_sshd:v1.0

Installation

Clone the repository

git clone https://github.com/yangxue0827/R2CNN_FPN_Tensorflow.git    

Make tfrecord

The data is in VOC format; see the reference here.
Data path layout (see $R2CNN_ROOT/data/io/divide_data.py):

├── VOCdevkit
│   ├── VOCdevkit_train
│   │   ├── Annotation
│   │   ├── JPEGImages
│   ├── VOCdevkit_test
│   │   ├── Annotation
│   │   ├── JPEGImages

Convert the data to tfrecord format

cd $R2CNN_ROOT/data/io/  
python convert_data_to_tfrecord.py --VOC_dir='***/VOCdevkit/VOCdevkit_train/' --save_name='train' --img_format='.jpg' --dataset='ship'
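As a sanity check (a sketch; the output path and filename below are assumptions following the repo's naming, so adjust them to wherever the converter wrote its output):

import tensorflow as tf

# Count the records in the generated tfrecord file.
path = '../tfrecords/ship_train.tfrecord'  # assumed location/name
count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
print('%d records in %s' % (count, path))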
     

Compile

cd $R2CNN_ROOT/libs/box_utils/
python setup.py build_ext --inplace
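After compiling, a quick import check (a sketch; run from $R2CNN_ROOT with the repo root on sys.path) confirms the extensions load without "undefined symbol" errors:

# Both modules are compiled CUDA extensions; a clean import means the
# build above succeeded and the shared objects link correctly.
import libs.box_utils.rotate_polygon_nms  # rotated NMS
import libs.box_utils.rbbox_overlaps      # rotated box overlaps (IoU)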

Demo

1. Unzip the weights: $R2CNN_ROOT/output/res101_trained_weights/*.rar
2. Put images in $R2CNN_ROOT/tools/inference_image
3. Configure the parameters in $R2CNN_ROOT/libs/configs/cfgs.py and set the project's root directory
4.

cd $R2CNN_ROOT/tools      

5. For image slices:

python inference1.py   

6. For a large image (see the slicing sketch after this list):

cd $R2CNN_ROOT/tools
python demo1.py --src_folder=./demo_src --des_folder=./demo_des
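Large-image handling of this kind slides a window over the input and runs detection per crop. A minimal sketch of such slicing (the window and stride values are assumptions, not the repo's, and edge clamping is omitted):

def slice_image(img, win=800, stride=600):
    """Yield (x, y, crop) sliding windows covering a large image array."""
    h, w = img.shape[:2]
    for y in range(0, max(h - win, 0) + 1, stride):
        for x in range(0, max(w - win, 0) + 1, stride):
            yield x, y, img[y:y + win, x:x + win]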

Train

1. Modify $R2CNN_ROOT/libs/label_name_dict/***_dict.py so that it matches the number of categories in the configuration file (see the sketch at the end of this section)
2. Download a pretrained weight (resnet_v1_101_2016_08_28.tar.gz or resnet_v1_50_2016_08_28.tar.gz) from here, then extract it to $R2CNN_ROOT/data/pretrained_weights
3.

cd $R2CNN_ROOT/tools      

4. Choose a model (FPN or R2CNN).
If you want to train FPN:

python train.py   

If you want to train R2CNN instead:

python train1.py   
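As referenced in step 1, a minimal two-class (background + target) label dictionary might look like this (a sketch; the variable names are assumptions, so mirror whatever the existing ***_dict.py files actually define):

# Label 0 is conventionally reserved for background.
NAME_LABEL_MAP = {
    'back_ground': 0,
    'ship': 1,
}
LABEL_NAME_MAP = {v: k for k, v in NAME_LABEL_MAP.items()}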

Test tfrecord

cd $R2CNN_ROOT/tools   
python test.py   (or test1.py for R2CNN)

Eval (not recommended; please refer here)

cd $R2CNN_ROOT/tools   
python eval.py   (or eval1.py for R2CNN)

Summary

tensorboard --logdir=$R2CNN_ROOT/output/res101_summary/ 

(training summary screenshots)

Graph

(graph visualization screenshot)

icdar2015 test results

(detection result images)

Test results

(detection result images)

r2cnn_fpn_tensorflow's People

Contributors

yangxue0827


r2cnn_fpn_tensorflow's Issues

undefined symbol _rotate_nms

I tried to compile with CUDA 9.0 and got an error while compiling the *.pyx files.
ImportError: /libs/box_utils/rotate_polygon_nms.so: undefined symbol: _rotate_nms

I cannot find C++ implementations of _rotate_nms and _overlaps. Did I or you miss something?

Training gets stuck

A question: when training my own model, it hangs right after the restore step. What could be causing this?

Rough time to converge

@yangxue0827 Hi! I started the training process, and after about 1000 steps the total loss is still around 2~3; is that normal? I also found that the loss changes sharply, ranging from about 1 to 5+! I trained your model on ICDAR2015.

Example log:
2018-02-13 09:21:42: step1191 image_name:img_785.jpg |>-
rpn_loc_loss:0.127042293549 |>-- rpn_cla_loss:0.167033866048 |>-
rpn_total_loss:0.294076144695 |
fast_rcnn_loc_loss:0.03043981269 |>- fast_rcnn_cla_loss:0.0624937154353 |>--
fast_rcnn_loc_rotate_loss:0.0329047143459 |> fast_rcnn_cla_rotate_loss:0.0626320391893 |>---
fast_rcnn_total_loss:0.188470289111 |>--
total_loss:1.3173609972 |>-- pre_cost_time:0.481674909592s
2018-02-13 09:21:47: step1201 image_name:img_760.jpg |>-
rpn_loc_loss:0.131411507726 |>-- rpn_cla_loss:0.156840890646 |>-
rpn_total_loss:0.288252413273 |
fast_rcnn_loc_loss:0.0636825785041 |>--- fast_rcnn_cla_loss:0.0698676481843 |>--
fast_rcnn_loc_rotate_loss:0.132074356079 |>- fast_rcnn_cla_rotate_loss:0.0675699263811 |>---
fast_rcnn_total_loss:0.333194494247 |>--
total_loss:1.45624518394 |>- pre_cost_time:0.488508939743s
2018-02-13 09:21:52: step1211 image_name:img_302.jpg |>-
rpn_loc_loss:1.17913043499 |>--- rpn_cla_loss:0.438374519348 |>-
rpn_total_loss:1.61750495434 |
fast_rcnn_loc_loss:0.299244552851 |> fast_rcnn_cla_loss:0.410798072815 |>---
fast_rcnn_loc_rotate_loss:0.620053052902 |>- fast_rcnn_cla_rotate_loss:0.429437577724 |>
fast_rcnn_total_loss:1.75953316689 |>---
total_loss:4.21183824539 |>- pre_cost_time:0.464334011078s
2018-02-13 09:21:57: step1221 image_name:img_584.jpg |>-
rpn_loc_loss:0.264385610819 |>-- rpn_cla_loss:0.17485126853 |>--
rpn_total_loss:0.439236879349 |
fast_rcnn_loc_loss:0.0316842421889 |>--- fast_rcnn_cla_loss:0.202808991075 |>---
fast_rcnn_loc_rotate_loss:0.252296447754 |>- fast_rcnn_cla_rotate_loss:0.213347256184 |>
fast_rcnn_total_loss:0.700136899948 |>--
total_loss:1.97416782379 |>- pre_cost_time:0.472580909729s
2018-02-13 09:22:02: step1231 image_name:img_589.jpg |>-
rpn_loc_loss:1.59727919102 |>--- rpn_cla_loss:0.580799102783 |>-
rpn_total_loss:2.17807817459 |
fast_rcnn_loc_loss:0.503440141678 |> fast_rcnn_cla_loss:0.603018581867 |>---
fast_rcnn_loc_rotate_loss:2.01638698578 |>-- fast_rcnn_cla_rotate_loss:0.620741009712 |>
fast_rcnn_total_loss:3.74358654022 |>---
total_loss:6.75644779205 |>- pre_cost_time:0.479368925095s

some problem about rotate_polygon_nms.pyx

Thanks for your code. My environment is Windows with Python 3, so I must rebuild the .pyx file, but it uses functions like '_rotate_nms' and '_overlaps': they are declared in the .hpp file, while the .cpp file is generated by Cython. So where are their function bodies? It's my first time getting in touch with Cython.

OutOfRangeError in /data/io/read_tfrecord.py at line number 80

Hi,

I get the following error when trying to train the model using train1.py on my custom dataset. I am using ResNet-101 as the backbone. Can you please help me out here?

Traceback (most recent call last):
File "train1.py", line 262, in
train()
File "train1.py", line 224, in train
fast_rcnn_total_loss, total_loss, train_op])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]

Caused by op u'get_batch/batch', defined at:
File "train1.py", line 262, in
train()
File "train1.py", line 36, in train
is_training=True)
File "../data/io/read_tfrecord.py", line 86, in next_batch
dynamic_pad=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 922, in batch
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 716, in _batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 457, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1342, in _queue_dequeue_many_v2
timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1204, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]

No module named networks.network_factory

I am a beginner. When I run python inference.py or python inference1.py, I get the error: No module named networks.network_factory, but the module files are all there. I tried sys.path.append(), but it still doesn't work.
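The usual fix (a sketch, assuming the scripts live in $R2CNN_ROOT/tools and the packages sit at the repo root) is to put the repository root on sys.path before the networks/libs imports:

import os
import sys

# Make the repository root importable so that `import networks...` and
# `import libs...` resolve when running scripts from the tools/ directory.
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))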

Strange problem, has anyone come across it?

Traceback (most recent call last):
File "C:\Program Files\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1139, in _do_call
return fn(*args)
File "C:\Program Files\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run_fn
status, run_metadata)
File "C:\Program Files\Anaconda2\envs\tensorflow\lib\contextlib.py", line 66, in exit
next(self.gen)
File "C:\Program Files\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.UnknownError: OverflowError: Python int too large to convert to C long
[[Node: fast_rcnn_loss/PyFunc_1 = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_INT32], Tout=[DT_UINT8], token="pyfunc_7", _device="/job:localhost/replica:0/task:0/cpu:0"](fast_rcnn_loss/Squeeze_1, fast_rcnn_loss/mul_1, fast_rcnn_loss/strided_slice_1)]]

Question about algorithm details

I read the R2CNN paper; it represents a box by the coordinates of two points plus the width, while your code uses a center point and an angle. Have you tried the original representation, and how does it perform?
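For context, the two parameterizations describe the same rotated box. Below is a sketch (not the repo's code) of mapping the paper's form, two endpoints of the first edge plus the remaining side length h, to the (center, size, angle) form; the normal's sign and the angle convention are assumptions:

import math

def two_points_h_to_center_angle(x1, y1, x2, y2, h):
    """Convert (first-edge endpoints + height) to (cx, cy, w, h, theta)."""
    w = math.hypot(x2 - x1, y2 - y1)            # first-edge length (assumed > 0)
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1))
    nx, ny = -(y2 - y1) / w, (x2 - x1) / w      # unit normal to the first edge
    cx = (x1 + x2) / 2.0 + nx * h / 2.0         # midpoint shifted half the height
    cy = (y1 + y2) / 2.0 + ny * h / 2.0
    return cx, cy, w, h, theta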

ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

if d.decorator_argspec is not None), _inspect.getargspec(target))
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
Traceback (most recent call last):
File "/home/louj/pywork/R2CNN_FPN/tools/inference1.py", line 17, in
from libs.fast_rcnn import build_fast_rcnn1
File "/home/louj/pywork/R2CNN_FPN/libs/fast_rcnn/build_fast_rcnn1.py", line 14, in
from libs.box_utils import nms_rotate
File "/home/louj/pywork/R2CNN_FPN/libs/box_utils/nms_rotate.py", line 12, in
from libs.box_utils.rotate_polygon_nms import rotate_gpu_nms
ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

something wrong with a module

Thank you for your work.
I ran the demo, but there is an error with a module:
File "./tools/inference.py", line 18, in <module>
from libs.fast_rcnn import build_fast_rcnn
File "/yangxue/FPN_V18/libs/fast_rcnn/build_fast_rcnn.py", line 15, in <module>
ImportError: no module named tf_utils
I find no .py files in ./libs/fast_rcnn, only .pyc files.
I think you have not open-sourced the files in ./libs.
Will you open them later?

trouble testing R2CNN code

Hi @yangxue0827 ,
Starred your repo and thanks in advance.

Unfortunately, I had trouble running your inference1.py script to test your model. Are there any missing installation steps before running the script?

Here is a snapshot in the terminal from running inference1.py:
File "inference1.py", line 17, in
from libs.fast_rcnn import build_fast_rcnn1
File "../libs/fast_rcnn/build_fast_rcnn1.py", line 14, in
from libs.box_utils import nms_rotate
File "../libs/box_utils/nms_rotate.py", line 12, in
from libs.box_utils.rotate_polygon_nms import rotate_gpu_nms
ImportError: dynamic module does not define module export function (PyInit_rotate_polygon_nms)

Then I changed into ./libs/box_utils/ and ran the setup.py script, but it failed:

running install
running bdist_egg
running egg_info
writing dependency_links to fast_rcnn.egg-info/dependency_links.txt
writing top-level names to fast_rcnn.egg-info/top_level.txt
writing fast_rcnn.egg-info/PKG-INFO
reading manifest file 'fast_rcnn.egg-info/SOURCES.txt'
writing manifest file 'fast_rcnn.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
skipping 'rbbox_overlaps.cpp' Cython extension (up-to-date)
building 'rbbox_overlaps' extension
/usr/local/cuda-8.0/bin/nvcc -I/home/administrator/.virtualenvs/cv/lib/python3.5/site-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/usr/include/python3.5m -I/home/administrator/.virtualenvs/cv/include/python3.5m -c rbbox_overlaps_kernel.cu -o build/temp.linux-x86_64-3.5/rbbox_overlaps_kernel.o -arch=sm_35 --ptxas-options=-v -c --compiler-options '-fPIC'
gcc: error: rbbox_overlaps_kernel.cu: No such file or directory
gcc: warning: ‘-x c++’ after last input file has no effect
gcc: fatal error: no input files
compilation terminated.
error: command '/usr/local/cuda-8.0/bin/nvcc' failed with exit status 1

It seems that I am missing the rbbox_overlaps_kernel.cu file for some reason.

Please kindly suggest, thank you once again.

Thanks and regards,
Hanyi Hu

About R2CNN on the ICDAR dataset

Thank you for sharing your code! I want to train your model on the ICDAR dataset, but convert_data_to_tfrecords.py does not work for it because the Pascal VOC and ICDAR data formats are different. I would be very thankful if you could provide your conversion code for the ICDAR dataset. Looking forward to your reply.

No foreground objects in some training images...

When applying the RPN, if some training images contain no detection targets, how should the bounding boxes for those images be set? Does the current code support training on pure-background images?
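A common workaround (a sketch, not part of this repo) is to skip pure-background images when building the tfrecord, since the RPN target sampling assumes at least one foreground ground-truth box per image:

def keep_image(gt_boxes):
    """Hypothetical filter: keep only images that have annotated objects."""
    return gt_boxes is not None and len(gt_boxes) > 0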

train1.py

tfrecord path is --> /home/zhangqi/home/R2CNN_FPN_Tensorflow/data/tfrecords/ship_train*
WARNING:tensorflow:From ../libs/rpn/build_rpn.py:420: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From ../libs/rpn/build_rpn.py:424: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:398: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.compute_weighted_loss instead.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:151: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From ../libs/fast_rcnn/build_fast_rcnn1.py:314: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:398: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.compute_weighted_loss instead.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:151: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From ../libs/fast_rcnn/build_fast_rcnn1.py:320: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From ../libs/fast_rcnn/build_fast_rcnn1.py:324: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:398: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.compute_weighted_loss instead.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:151: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From ../libs/fast_rcnn/build_fast_rcnn1.py:331: add_loss (from tensorflow.contrib.framework.python.ops.arg_scope) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
WARNING:tensorflow:From train1.py:150: get_total_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.get_total_loss instead.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:261: get_losses (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.get_losses instead.
WARNING:tensorflow:From /home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:263: get_regularization_losses (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.get_regularization_losses instead.
/home/zhangqi/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
model restore from : ../output/res101_trained_weights/v5/voc_40000model.ckpt
2018-04-07 23:09:04.226284: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-07 23:09:04.226311: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-07 23:09:04.226316: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-04-07 23:09:04.226321: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-07 23:09:04.226325: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2018-04-07 23:09:04.401419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.3415
pciBusID 0000:02:00.0
Total memory: 11.92GiB
Free memory: 11.46GiB
2018-04-07 23:09:04.401452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2018-04-07 23:09:04.401458: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2018-04-07 23:09:04.401466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0)
restore model
2018-04-07 23:09:15.079529: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
(the same Out of range warning and node dump repeat many more times until the process exits)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.085552: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.085569: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.085585: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.085602: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089447: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089464: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089480: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089497: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089513: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089529: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089545: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089561: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089578: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089595: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089611: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089627: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089644: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089661: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089677: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089694: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089710: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089727: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089744: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089760: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089777: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089793: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089810: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089826: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089843: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089859: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089876: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089893: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-04-07 23:09:15.089909: W tensorflow/core/framework/op_kernel.cc:1158] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]

Why does this happen, and how do I fix it?
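(Editor's note, not from the author: this OutOfRangeError usually just means the input queue ran dry, e.g. the tfrecord was not found, is empty, or every record is rejected by the reader. A minimal sanity check, assuming a hypothetical tfrecord path that you should adjust to your own --save_name and dataset, is to count the records directly with the TF 1.x iterator:)

# Hypothetical sanity check, not part of the repo: verify the tfrecord
# actually contains records before blaming the training loop.
import tensorflow as tf

tfrecord_path = '../data/tfrecord/ship_train.tfrecord'  # hypothetical path

count = 0
for _ in tf.python_io.tf_record_iterator(tfrecord_path):
    count += 1
print('found %d records in %s' % (count, tfrecord_path))
# 0 records here would explain the "insufficient elements" warnings above.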

Annotation format

Hello, thank you for providing this code. Could you tell me more details about the annotation format? As far as I know, there is no specific place to put the 'angle' (or 'orientation') value of a bounding box in the Pascal VOC format. Sorry for any possible misunderstanding.
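(Editor's note, not an authoritative answer: a rotated box is commonly stored as its four corner points, and the (center, size, angle) parameterization that rotation methods regress can be recovered from those corners with OpenCV. A minimal sketch with made-up corner values:)

import numpy as np
import cv2

# Four corner points of one rotated box (illustrative values only).
corners = np.array([[120, 40], [240, 70], [225, 130], [105, 100]],
                   dtype=np.float32)

# cv2.minAreaRect returns ((center_x, center_y), (width, height), angle),
# i.e. the (x, y, w, h, theta) form used by rotated-box regression.
(cx, cy), (w, h), theta = cv2.minAreaRect(corners)
print(cx, cy, w, h, theta)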

Training the model for text detection

Hi, thanks for sharing this repo. When I run train1.py, the output prints the parameters of the pretrained model, and 'fast_rcnn_loc_loss' and 'fast_rcnn_loc_rotate_loss' are zero all the time. Are there any problems? The output is shown below:

resnet_v1_101/block4/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0
resnet_v1_101/block4/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0
resnet_v1_101/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0
resnet_v1_101/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv1/weights:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv2/weights:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv3/weights:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
resnet_v1_101/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restore model
2018-06-08 15:06:45: step1 image_name:OCR_img_140048.jpg |
rpn_loc_loss:264.442718506 | rpn_cla_loss:0.828770041466 |
rpn_total_loss:265.271484375 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:2.29521894455 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:1.15461230278 |
fast_rcnn_total_loss:3.44983124733 |
total_loss:269.557434082 | pre_cost_time:3.70543599129s
2018-06-08 15:06:53: step11 image_name:OCR_img_143147.jpg |
rpn_loc_loss:167.275375366 | rpn_cla_loss:0.0647400766611 |
rpn_total_loss:167.340118408 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.655150413513 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.657649993896 |
fast_rcnn_total_loss:1.31280040741 |
total_loss:169.489440918 | pre_cost_time:0.217247009277s
2018-06-08 15:06:56: step21 image_name:OCR_img_10706.jpg |
rpn_loc_loss:203.170288086 | rpn_cla_loss:0.166537106037 |
rpn_total_loss:203.33682251 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.572579860687 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.579669415951 |
fast_rcnn_total_loss:1.15224933624 |
total_loss:205.327468872 | pre_cost_time:0.273305892944s
2018-06-08 15:06:59: step31 image_name:OCR_img_143361.jpg |
rpn_loc_loss:252.477355957 | rpn_cla_loss:0.028674390167 |
rpn_total_loss:252.506027222 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.0728850066662 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.0746394395828 |
fast_rcnn_total_loss:0.147524446249 |
total_loss:253.493240356 | pre_cost_time:0.2667491436s
2018-06-08 15:07:02: step41 image_name:OCR_img_141126.jpg |
rpn_loc_loss:212.294830322 | rpn_cla_loss:0.0308034606278 |
rpn_total_loss:212.325637817 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.41481667757 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.430022597313 |
fast_rcnn_total_loss:0.844839274883 |
total_loss:214.012130737 | pre_cost_time:0.226809024811s
2018-06-08 15:07:04: step51 image_name:OCR_img_1434319.jpg |
rpn_loc_loss:170.955886841 | rpn_cla_loss:0.0378239601851 |
rpn_total_loss:170.993713379 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.33990368247 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.357002526522 |
fast_rcnn_total_loss:0.696906208992 |
total_loss:172.533584595 | pre_cost_time:0.259843826294s
2018-06-08 15:07:07: step61 image_name:OCR_img_142783.jpg |
rpn_loc_loss:227.995605469 | rpn_cla_loss:0.0456900112331 |
rpn_total_loss:228.041290283 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.0201326087117 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.0213839225471 |
fast_rcnn_total_loss:0.0415165312588 |
total_loss:228.926528931 | pre_cost_time:0.291739940643s
2018-06-08 15:07:10: step71 image_name:OCR_img_phone_1029.jpg |
rpn_loc_loss:168.393371582 | rpn_cla_loss:0.0498738214374 |
rpn_total_loss:168.443252563 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.247651547194 |
fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.265152215958 |
fast_rcnn_total_loss:0.512803792953 |
total_loss:169.801071167 | pre_cost_time:0.253770112991s

getting OOM with custom dataset

Hi,

I have trained this model (ResNet-101) on my custom dataset with a few hundred records, but as soon as I increase that to ~80k records I get the following memory error. I have tried decreasing the RPN/Fast R-CNN batch sizes but still get the same error while allocating memory for one tensor or another. I have 2 GTX 1080 GPUs, but the error invariably appears no matter what config I use (single GPU, or multi-GPU with different batch sizes). Any help on how to avoid it would be greatly appreciated. Thanks.

2018-02-14 18:03:35.915512: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ****************************************************************************************************
2018-02-14 18:03:35.915558: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[1,75,570,256]
2018-02-14 18:03:35.915634: W tensorflow/core/framework/op_kernel.cc:1192] Internal: Dst tensor is not initialized.
[[Node: make_anchors/make_anchors_all_level/make_anchors_P6/range/_3411 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_16518_make_anchors/make_anchors_all_level/make_anchors_P6/range", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](^make_anchors/make_anchors_all_level/make_anchors_P6/range/_3410)]]
(the "Dst tensor is not initialized" warning and node dump repeat a few more times)
2018-02-14 18:03:35.919749: W tensorflow/core/kernels/queue_base.cc:295] _0_get_batch/input_producer: Skipping cancelled enqueue attempt with queue not closed
2018-02-14 18:03:35.920027: W tensorflow/core/kernels/queue_base.cc:295] _2_get_batch/batch/padding_fifo_queue: Skipping cancelled enqueue attempt with queue not closed
(the padding_fifo_queue warning repeats many more times)
Traceback (most recent call last):
  File "/home/neo/ML/R2CNN_FPN_Tensorflow/tools/train1.py", line 263, in <module>
    train()
  File "/home/neo/ML/R2CNN_FPN_Tensorflow/tools/train1.py", line 225, in train
    fast_rcnn_total_loss, total_loss, train_op])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.
[[Node: make_anchors/make_anchors_all_level/make_anchors_P6/range/_3411 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_16518_make_anchors/make_anchors_all_level/make_anchors_P6/range", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](^make_anchors/make_anchors_all_level/make_anchors_P6/range/_3410)]]
[[Node: gradients/rpn_net/concat_1_grad/Shape_2/_3305 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_15992_gradients/rpn_net/concat_1_grad/Shape_2", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
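(Editor's note, not a confirmed fix: a common mitigation for this kind of allocation failure in TF 1.x is to let the session grow GPU memory on demand, and optionally cap it, instead of reserving everything up front. A minimal sketch:)

import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory incrementally instead of grabbing it all at startup.
config.gpu_options.allow_growth = True
# Optionally cap the fraction of total GPU memory this process may use.
config.gpu_options.per_process_gpu_memory_fraction = 0.9

with tf.Session(config=config) as sess:
    pass  # build the graph and run training here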

what if there is more than one box in the image?

Thanks very much for your code. I am going to use this code for an OCR problem. I am a bit puzzled about the XML files: when there are multiple bounding boxes in one image, how do I make the XML file? Hope you can give a hint ~
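(Editor's note: the Pascal VOC XML format handles this natively; each box is simply one more <object> element under the same <annotation> root. A minimal sketch, with a made-up file name and box values, that writes such a file:)

# Hypothetical example of a multi-object VOC-style annotation; every box
# in the image becomes one more <object> node under <annotation>.
import xml.etree.ElementTree as ET

root = ET.Element('annotation')
ET.SubElement(root, 'filename').text = 'img_0001.jpg'  # hypothetical name

boxes = [('text', 48, 32, 200, 80), ('text', 60, 120, 220, 170)]  # made up
for name, xmin, ymin, xmax, ymax in boxes:
    obj = ET.SubElement(root, 'object')
    ET.SubElement(obj, 'name').text = name
    bnd = ET.SubElement(obj, 'bndbox')
    for tag, val in zip(('xmin', 'ymin', 'xmax', 'ymax'),
                        (xmin, ymin, xmax, ymax)):
        ET.SubElement(bnd, tag).text = str(val)

ET.ElementTree(root).write('img_0001.xml')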

A label tool with R2CNN?

I know there is a labeling tool for Pascal VOC, labelImg.
For the kind of dataset R2CNN uses, do you have an annotation tool?

Skipping cancelled enqueue attempt with queue not closed

During training, the following error is reported:
2018-04-09 15:41:27: step56003 image_name:TB10M4WLXXXXXXaXXXXunYpLFXX.jpg |
rpn_loc_loss:0.322259783745 | rpn_cla_loss:0.395608723164 |
rpn_total_loss:0.717868506908 |
fast_rcnn_loc_loss:0.0871686562896 | fast_rcnn_cla_loss:0.0336526706815 |
fast_rcnn_loc_rotate_loss:0.438882261515 | fast_rcnn_cla_rotate_loss:0.039190351963 |
fast_rcnn_total_loss:0.598893940449 |
total_loss:2.12904310226 | pre_cost_time:0.330368041992s
2018-04-09 15:41:29.061342: W tensorflow/core/framework/op_kernel.cc:1158] Resource exhausted: ../output/res101_trained_weights/v5/voc_56003model.ckpt.data-00000-of-00001.tempstate18369706306524038910
2018-04-09 15:41:29.064765: W tensorflow/core/kernels/queue_base.cc:294] _0_get_batch/input_producer: Skipping cancelled enqueue attempt with queue not closed
2018-04-09 15:41:29.065104: W tensorflow/core/kernels/queue_base.cc:294] _1_get_batch/batch/padding_fifo_queue: Skipping cancelled enqueue attempt with queue not closed
(the padding_fifo_queue warning repeats many more times)
Traceback (most recent call last):
  File "train1.py", line 260, in <module>
    train()
  File "train1.py", line 251, in train
    saver.save(sess, save_ckpt)
  File "/home/admin/py2_tf_gpu/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1472, in save
    {self.saver_def.filename_tensor_name: checkpoint_file})
  File "/home/admin/py2_tf_gpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/home/admin/py2_tf_gpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/home/admin/py2_tf_gpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/home/admin/py2_tf_gpu/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: ../output/res101_trained_weights/v5/voc_56003model.ckpt.data-00000-of-00001.tempstate18369706306524038910
[[Node: save_1/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT’

Data format

Thank you for sharing your code, but I also have a question to ask:
must the data be in the Pascal VOC format?

Model for R2CNN Text Detection

Hello! Thank you for sharing your code!
I have successfully run the script inference1.py and got the result figure. But the v5 model seems to be trained for ship detection (object detection?). Could you offer a model for text detection?
Thanks.

failed call to cuInit: CUDA_ERROR_NO_DEVICE

M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE
M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: DESKTOP-KSM830A
M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: DESKTOP-KSM830A
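(Editor's note: one quick way to confirm what TensorFlow can see, using the standard TF 1.x API rather than repo code, is to list the local devices; with CUDA_ERROR_NO_DEVICE only the CPU will appear.)

from tensorflow.python.client import device_lib

# Prints something like ['/cpu:0'] when no CUDA device is visible,
# and includes '/gpu:0' entries when the GPU is usable.
print([d.name for d in device_lib.list_local_devices()])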

Error: Signed integer is less than minimum

Hi,
When I train the FPN model (python train.py), the program sometimes raises the error shown below:

2018-04-23 20:51:08: step17206    image_name:260008_234_1371_310_1394_0.jpg |	
                                rpn_loc_loss:0.393452852964 |	 rpn_cla_loss:0.0846831575036 |	 rpn_total_loss:0.478136003017 |
                                fast_rcnn_loc_loss:0.0 |	 fast_rcnn_cla_loss:0.042750172317 |	 fast_rcnn_total_loss:0.042750172317 |
                                total_loss:1.14391613007 |	 pre_cost_time:0.244937896729s
start summary ...
[ INFO:0] Initialize OpenCL runtime...
2018-04-23 20:51:09.819007: W tensorflow/core/framework/op_kernel.cc:1192] Unknown: exceptions.OverflowError: signed integer is less than minimum
2018-04-23 20:51:09.821829: W tensorflow/core/framework/op_kernel.cc:1192] Unknown: exceptions.OverflowError: signed integer is less than minimum
	 [[Node: draw_proposals/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_INT32], Tout=[DT_UINT8], token="pyfunc_6", _device="/job:localhost/replica:0/task:0/cpu:0"](draw_proposals/Squeeze/_3051, draw_proposals/Gather/_3053, draw_proposals/strided_slice/_3055)]]
2018-04-23 20:51:09.831181: W tensorflow/core/framework/op_kernel.cc:1192] Unknown: exceptions.OverflowError: signed integer is less than minimum
2018-04-23 20:51:09.832983: W tensorflow/core/framework/op_kernel.cc:1192] Unknown: exceptions.OverflowError: signed integer is less than minimum
	 [[Node: draw_proposals/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_INT32], Tout=[DT_UINT8], token="pyfunc_6", _device="/job:localhost/replica:0/task:0/cpu:0"](draw_proposals/Squeeze/_3051, draw_proposals/Gather/_3053, draw_proposals/strided_slice/_3055)]]
2018-04-23 20:51:09.835287: W tensorflow/core/framework/op_kernel.cc:1192] Unknown: exceptions.OverflowError: signed integer is less than minimum
	 [[Node: draw_proposals/PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_INT32], Tout=[DT_UINT8], token="pyfunc_6", _device="/job:localhost/replica:0/task:0/cpu:0"](draw_proposals/Squeeze/_3051, draw_proposals/Gather/_3053, draw_proposals/strided_slice/_3055)]]
... ...

After preliminary debugging, I find the error always seems to be triggered by this line: summary_str = sess.run(summary_op)

Why?

about the image labeling

Hi there, I have a small question: when labeling my own data, should I label the images with rotated boxes or with axis-aligned rectangles? Hoping for your answer.

use_angle_condition in libs/fast_rcnn/build_fast_rcnn1.py

Hi there,

Thanks for sharing this repo. I am wondering if you could provide some insight into what the use_angle_condition parameter in libs/fast_rcnn/build_fast_rcnn1.py controls, and why it is set to False?

Thanks again!
Lilly

loss optimization

In the build_rpn.py code, cutting the Fast R-CNN part out of the loss makes training faster and the results look a bit better, roughly like this:
(screenshot attachment omitted)

how to converge?

Hello, I use my own data to train the network, but it fails to converge. How can I adjust the parameters? Thanks a lot!

test error

Hi, thank you for your work. After training a model on my own data, I hit an error at the test stage, described below:
Not found: Key fast_rcnn_net_rotate/classifier/biases not found in checkpoint

About the NMS settings

In the Fast R-CNN stage, why is the NMS IoU threshold set to 0.15 for both the rotated and the non-rotated boxes? That seems very small compared with generic detection tasks. What consideration was this based on? Thanks.

how can I successfully run train1.py?

Thanks very much for your code. When I run train1.py, I always hit this bug:
UnknownError (see above for traceback): error: /home/travis/miniconda/conda-bld/conda_1485299288502/work/opencv-3.2.0/modules/imgproc/src/rotcalipers.cpp:166: error: (-215) orientation != 0 in function rotatingCalipers
So I wonder if it's because I didn't configure the INFERENCE_IMAGE_PATH folder?
Hope you can shed some light!
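(Editor's note, a hedged guess not confirmed by the author: cv2.minAreaRect, which is backed by rotatingCalipers, raises this assertion when fed a degenerate point set, e.g. a ground-truth box whose four corners are nearly collinear. A minimal sketch for screening annotations before training:)

import numpy as np
import cv2

def is_degenerate(corners):
    """corners: four (x, y) points from one annotation."""
    corners = np.asarray(corners, dtype=np.float32)
    # A (nearly) zero polygon area means the points are collinear or
    # duplicated, the kind of input rotatingCalipers chokes on.
    return cv2.contourArea(corners) < 1.0

print(is_degenerate([(0, 0), (10, 0), (20, 0), (30, 0)]))  # True: collinear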

No module named rotate_polygon_nms

Hi, I ran the code but some errors happened.
When I run inference.py there is no mistake, and I get the correct result,
but when I run inference1.py I get the following error message:

Traceback (most recent call last):
  File "/home/give/Game/OCR/Papers-code/R2CNN_FPN_Tensorflow-master/tools/inference1.py", line 17, in <module>
    from libs.fast_rcnn import build_fast_rcnn1
  File "/home/give/Game/OCR/Papers-code/R2CNN_FPN_Tensorflow-master/libs/fast_rcnn/build_fast_rcnn1.py", line 14, in <module>
    from libs.box_utils import nms_rotate
  File "/home/give/Game/OCR/Papers-code/R2CNN_FPN_Tensorflow-master/libs/box_utils/nms_rotate.py", line 12, in <module>
    from libs.box_utils.rotate_polygon_nms import rotate_gpu_nms
ImportError: No module named rotate_polygon_nms

It says the module is not found, but the file does exist. I guess there is a build step somewhere, but I can't find any makefile.
Do you know the reason? Thanks.

python train.py fails

When I use "python train.py" ,fail.
My train data is voc_pascal.I success to create pascal_train.tfrecord(2.7G) . The classes is 20(car,boat,person... etc)
I also change cfgs.py . DATASET_NAME = 'pascal' .
Also success load pre-train file resnet_v1_101.ckpt.
But finally, report error,just like this:

restore model
Traceback (most recent call last):
  File "tools/train.py", line 238, in <module>
    train()
  File "tools/train.py", line 206, in train
    fast_rcnn_total_loss, total_loss, train_op])
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, gradients/range/delta)]]

Caused by op u'get_batch/batch', defined at:
  File "tools/train.py", line 238, in <module>
    train()
  File "tools/train.py", line 36, in train
    is_training=True)
  File "/media/stockerc/f/wz/project/R2CNN_FPN_Tensorflow/data/io/read_tfrecord.py", line 86, in next_batch
    dynamic_pad=True)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 927, in batch
    name=name)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 722, in _batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 464, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 2418, in _queue_dequeue_many_v2
    component_types=component_types, timeout_ms=timeout_ms, name=name)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/stockerc/Tensor1.4/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, gradients/range/delta)]]

What's the matter? @yangxue0827

image annotation

Hello, author, what tools do you use to annotate the images? Thank you very much!

Does the constant [103.939, 116.779, 123.68] have any effect on training?

I read in your paper that this constant is the mean value of the training set, and inference1.py runs pretty well with the sample pictures. But is it a general setting for any dataset? I mean, how should I set this value for my own dataset, or can I just ignore it?

I also have some trouble with training: it just won't converge. With a limited number of pictures (actually 144 for training), how should I adjust the parameters and hyperparameters?

Thanks.
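(Editor's note: [103.939, 116.779, 123.68] is the widely used ImageNet per-channel mean in BGR order, so it is a sensible default with ImageNet-pretrained weights. If you would rather compute your own dataset's mean, a minimal sketch, with a hypothetical image folder path:)

import glob
import cv2
import numpy as np

means = []
for path in glob.glob('VOCdevkit/VOCdevkit_train/JPEGImages/*.jpg'):
    img = cv2.imread(path)                    # loaded as BGR, uint8
    means.append(img.reshape(-1, 3).mean(axis=0))

# Average of per-image means; a fine approximation of the dataset mean.
print(np.mean(means, axis=0))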

rotate_polygon_nms.so

Hi. When trying to run python inference1.py I receive the following error:

Traceback (most recent call last):
  File "demo1.py", line 16, in <module>
    from libs.fast_rcnn import build_fast_rcnn1
  File "../libs/fast_rcnn/build_fast_rcnn1.py", line 14, in <module>
    from libs.box_utils import nms_rotate
  File "../libs/box_utils/nms_rotate.py", line 12, in <module>
    from libs.box_utils.rotate_polygon_nms import rotate_gpu_nms
ImportError: dlopen(../libs/box_utils/rotate_polygon_nms.so, 2): no suitable image found. Did find:
  ../libs/box_utils/rotate_polygon_nms.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
  /Users/tomtomtom/R2CNN_FPN_Tensorflow/libs/box_utils/rotate_polygon_nms.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00

Any suggestions on what is going wrong? Also, I am running this on a CPU.

Thanks

Multi-scale pooling

Hello, is the multi-scale pooling mentioned in your paper similar to SPP-Net?

AttributeError: 'module' object has no attribute 'open'

Hi, @yangxue0827, I faced a problem when I ran your training script python train1.py in your docker environment.

python train1.py

The error information looks like this:

Traceback (most recent call last):
  File "train1.py", line 12, in <module>
    import tensorflow.contrib.slim as slim
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 47, in <module>
    import numpy as np
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/__init__.py", line 142, in <module>
    from . import add_newdocs
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/lib/__init__.py", line 23, in <module>
    from .npyio import *
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 14, in <module>
    from ._datasource import DataSource
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 220, in <module>
    _file_openers = _FileOpeners()
  File "/root/R2CNN_FPN/local/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 162, in __init__
    self._file_openers = {None: io.open}
AttributeError: 'module' object has no attribute 'open'

About batch size

I have found a problem: if the batch size is not 1, something goes wrong. Is that right?

Training

There is no response during training:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.3415
pciBusID 0000:02:00.0
Total memory: 11.92GiB
Free memory: 11.58GiB
2018-04-10 16:46:53.757745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2018-04-10 16:46:53.757751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2018-04-10 16:46:53.757758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0)
restore model
It always stops here. What is going on?
