FYI, I am using Windows 10, and the (virtually) full output is as follows:
(tensorflow) E:\repos\OpenNMT-tf>python -m bin.main train --model config/models/nmt_small.py --config config/opennmt-defaults.yml config/data/toy-ende.yml
INFO:tensorflow:Using config: {'_model_dir': 'toy-ende', '_tf_random_seed': None, '_save_summary_steps': 50, '_save_checkpoints_steps': 5000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 50, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000001DE786BA0F0>, '_task_type': 'worker', '_task_id': 0, '_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 18000 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Number of trainable parameters: 59222508
2018-01-29 01:27:22.032908: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-01-29 01:27:22.689337: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1105] Found device 0 with properties:
name: Quadro M600M major: 5 minor: 0 memoryClockRate(GHz): 0.8755
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.66GiB
2018-01-29 01:27:22.689509: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro M600M, pci bus id: 0000:01:00.0, compute capability: 5.0)
INFO:tensorflow:Saving checkpoints for 1 into toy-ende\model.ckpt.
INFO:tensorflow:loss = 10.4862585, step = 1
2018-01-29 01:27:49.416435: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 323.57MiB. Current allocation summary follows.
2018-01-29 01:27:49.416958: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (256): Total Chunks: 226, Chunks in use: 223. 56.5KiB allocated for chunks. 55.8KiB in use in bin. 9.4KiB client-requested in use in bin.
2018-01-29 01:27:49.418437: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (512): Total Chunks: 1, Chunks in use: 0. 768B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-01-29 01:27:49.418702: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (1024): Total Chunks: 1, Chunks in use: 1. 1.3KiB allocated for chunks. 1.3KiB in use in bin. 1.0KiB client-requested in use in bin.
2018-01-29 01:27:49.418952: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (2048): Total Chunks: 33, Chunks in use: 32. 79.8KiB allocated for chunks. 76.5KiB in use in bin. 58.0KiB client-requested in use in bin.
2018-01-29 01:27:49.419195: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (4096): Total Chunks: 111, Chunks in use: 111. 804.8KiB allocated for chunks. 804.8KiB in use in bin. 804.8KiB client-requested in use in bin.
2018-01-29 01:27:49.419397: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (8192): Total Chunks: 13, Chunks in use: 13. 115.5KiB allocated for chunks. 115.5KiB in use in bin. 111.0KiB client-requested in use in bin.
2018-01-29 01:27:49.419628: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (16384): Total Chunks: 2, Chunks in use: 2. 37.0KiB allocated for chunks. 37.0KiB in use in bin. 37.0KiB client-requested in use in bin.
2018-01-29 01:27:49.419848: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (32768): Total Chunks: 1, Chunks in use: 0. 48.3KiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-01-29 01:27:49.420079: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-01-29 01:27:49.421086: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (131072): Total Chunks: 1289, Chunks in use: 1231. 161.41MiB allocated for chunks. 154.16MiB in use in bin. 153.92MiB client-requested in use in bin.
2018-01-29 01:27:49.421360: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (262144): Total Chunks: 268, Chunks in use: 137. 71.63MiB allocated for chunks. 38.88MiB in use in bin. 38.88MiB client-requested in use in bin.
2018-01-29 01:27:49.421600: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-01-29 01:27:49.423608: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (1048576): Total Chunks: 1, Chunks in use: 1. 1.00MiB allocated for chunks. 1.00MiB in use in bin. 1.00MiB client-requested in use in bin.
2018-01-29 01:27:49.424369: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (2097152): Total Chunks: 4, Chunks in use: 4. 14.63MiB allocated for chunks. 14.63MiB in use in bin. 14.50MiB client-requested in use in bin.
2018-01-29 01:27:49.424834: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (4194304): Total Chunks: 1, Chunks in use: 1. 6.64MiB allocated for chunks. 6.64MiB in use in bin. 4.63MiB client-requested in use in bin.
2018-01-29 01:27:49.425584: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (8388608): Total Chunks: 41, Chunks in use: 7. 361.33MiB allocated for chunks. 64.00MiB in use in bin. 64.00MiB client-requested in use in bin.
2018-01-29 01:27:49.426064: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (16777216): Total Chunks: 2, Chunks in use: 1. 47.06MiB allocated for chunks. 20.83MiB in use in bin. 12.00MiB client-requested in use in bin.
2018-01-29 01:27:49.426584: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (33554432): Total Chunks: 1, Chunks in use: 1. 48.83MiB allocated for chunks. 48.83MiB in use in bin. 48.83MiB client-requested in use in bin.
2018-01-29 01:27:49.427393: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (67108864): Total Chunks: 4, Chunks in use: 4. 292.84MiB allocated for chunks. 292.84MiB in use in bin. 279.84MiB client-requested in use in bin.
2018-01-29 01:27:49.437010: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-01-29 01:27:49.446938: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:628] Bin (268435456): Total Chunks: 1, Chunks in use: 1. 424.90MiB allocated for chunks. 424.90MiB in use in bin. 323.57MiB client-requested in use in bin.
2018-01-29 01:27:49.449867: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:644] Bin for 323.57MiB was 256.00MiB, Chunk State:
2018-01-29 01:27:49.454072: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:662] Chunk at 0000000C010E0000 of size 1280
2018-01-29 01:27:49.466380: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:662] Chunk at 0000000C010E0500 of size 256
2018-01-29 01:27:49.474598: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:662] Chunk at 0000000C010E0600 of size 256
2018-01-29 01:27:49.486914: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:662] Chunk at 0000000C010E0700 of size 256
2018-01-29 01:27:49.489387: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:662] Chunk at 0000000C010E0800 of size 256
...
2018-01-29 01:28:05.238525: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:680] 1 Chunks of size 21837824 totalling 20.83MiB
2018-01-29 01:28:05.253985: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:680] 1 Chunks of size 51197952 totalling 48.83MiB
2018-01-29 01:28:05.257047: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:680] 3 Chunks of size 73359360 totalling 209.88MiB
2018-01-29 01:28:05.272085: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:680] 1 Chunks of size 86990848 totalling 82.96MiB
2018-01-29 01:28:05.275387: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:680] 1 Chunks of size 445540864 totalling 424.90MiB
2018-01-29 01:28:05.288695: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:684] Sum Total of in-use chunks: 1.04GiB
2018-01-29 01:28:05.306023: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:686] Stats:
Limit: 1500921856
InUse: 1119637504
MaxInUse: 1500920832
NumAllocs: 7767
MaxAllocSize: 445540864
2018-01-29 01:28:05.323922: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:277] ***************************************_*****************************************************xxxxxxx
2018-01-29 01:28:05.327611: W C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\framework\op_kernel.cc:1198] Resource exhausted: OOM when allocating tensor with shape[64,37,35820] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_call
return fn(*args)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1329, in _run_fn
status, run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[64,37,35820] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: seq2seq/decoder/decoder_1/transpose = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder/decoder_1/TensorArrayStack/TensorArrayGatherV3, seq2seq/OptimizeLoss/gradients/seq2seq/encoder/rnn/transpose_grad/InvertPermutation)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency/_645 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_916_seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\repos\OpenNMT-tf\bin\main.py", line 275, in <module>
main()
File "E:\repos\OpenNMT-tf\bin\main.py", line 263, in main
train(estimator, model, config)
File "E:\repos\OpenNMT-tf\bin\main.py", line 125, in train
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 432, in train_and_evaluate
executor.run_local()
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 611, in run_local
hooks=train_hooks)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 314, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 815, in _train_model
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 539, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1013, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1104, in run
raise six.reraise(*original_exc_info)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\six.py", line 693, in reraise
raise value
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1089, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1161, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 941, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1128, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1344, in _do_run
options, run_metadata)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1363, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[64,37,35820] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: seq2seq/decoder/decoder_1/transpose = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder/decoder_1/TensorArrayStack/TensorArrayGatherV3, seq2seq/OptimizeLoss/gradients/seq2seq/encoder/rnn/transpose_grad/InvertPermutation)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency/_645 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_916_seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'seq2seq/decoder/decoder_1/transpose', defined at:
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\repos\OpenNMT-tf\bin\main.py", line 275, in <module>
main()
File "E:\repos\OpenNMT-tf\bin\main.py", line 263, in main
train(estimator, model, config)
File "E:\repos\OpenNMT-tf\bin\main.py", line 125, in train
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 432, in train_and_evaluate
executor.run_local()
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 611, in run_local
hooks=train_hooks)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 314, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 743, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 725, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "E:\repos\OpenNMT-tf\opennmt\models\model.py", line 104, in __call__
outputs, predictions = self._build(features, labels, params, mode, config)
File "E:\repos\OpenNMT-tf\opennmt\models\sequence_to_sequence.py", line 164, in _build
memory_sequence_length=encoder_sequence_length)
File "E:\repos\OpenNMT-tf\opennmt\decoders\rnn_decoder.py", line 119, in decode
outputs, state, length = tf.contrib.seq2seq.dynamic_decode(decoder)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 324, in dynamic_decode
final_outputs = nest.map_structure(_transpose_batch_time, final_outputs)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\nest.py", line 387, in map_structure
structure[0], [func(*x) for x in entries])
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\nest.py", line 387, in <listcomp>
structure[0], [func(*x) for x in entries])
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\rnn.py", line 73, in _transpose_batch_time
([1, 0], math_ops.range(2, x_rank)), axis=0))
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1392, in transpose
ret = transpose_fn(a, perm, name=name)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 7687, in transpose
"Transpose", x=x, perm=perm, name=name)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 3160, in create_op
op_def=op_def)
File "C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1625, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[64,37,35820] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: seq2seq/decoder/decoder_1/transpose = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder/decoder_1/TensorArrayStack/TensorArrayGatherV3, seq2seq/OptimizeLoss/gradients/seq2seq/encoder/rnn/transpose_grad/InvertPermutation)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency/_645 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_916_seq2seq/OptimizeLoss/gradients/seq2seq/decoder/decoder_1/TrainingHelperInitialize/cond/Merge_grad/tuple/control_dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
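Since the Quadro M600M only has 2 GiB of memory and the allocator fails on a ~324 MiB tensor, I suspect the default training batch size is simply too large for this card. A minimal override I could try (a sketch, assuming `batch_size` lives under the `train` block of the YAML files passed to `--config`, and that later files take precedence over earlier ones) would be:

```yaml
# reduce-batch.yml -- hypothetical override file, appended last on the
# --config line so it takes precedence over opennmt-defaults.yml.
train:
  # Smaller batches mean smaller intermediate tensors on the GPU;
  # 16 is a guess, it may need further tuning downward for 2 GiB.
  batch_size: 16
```

which would then be passed as an extra argument: `--config config/opennmt-defaults.yml config/data/toy-ende.yml reduce-batch.yml`.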