FLming / CRNN.tf2
Convolutional Recurrent Neural Network (CRNN) for End-to-End Text Recognition - TensorFlow 2
License: MIT License
Thank you so much for this incredibly useful repo. I see in dataset_factory.py that you are waiting for RaggedTensor support in Keras. But isn't this already available?
Is there any plan to implement this at some point?
Traceback (most recent call last):
File "train.py", line 71, in <module>
validation_data=val_ds)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1102, in fit
tmp_logs = self.train_function(iterator)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 796, in __call__
result = self._call(*args, **kwds)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 839, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 712, in _initialize
*args, **kwds))
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2948, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3319, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3181, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 614, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:809 train_function *
return step_function(self, iterator)
/mnt/d/github/CRNN.tf2/metrics.py:26 update_state *
values = tf.math.reduce_any(tf.math.not_equal(y_true, y_pred), axis=1)
/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:1674 not_equal
return gen_math_ops.not_equal(x, y, name=name)
/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py:6517 not_equal
name=name)
/mnt/e/ubuntu/anaconda3/envs/tflite/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:537 _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'NotEqual' Op has type float32 that does not match type int64 of argument 'x'.
Could you please take a look at this error? After debugging I can see that y_true is None. Thanks!
Hi, nice work!!
The demo model provided here (SavedModel) has ctc_greedy_decoder as its last layer, while the model built in train.py does not have this layer. How do we get it? Is this layer added after training? If so, could you please explain how to do it?
logits (Dense) (None, None, 38) 19494
ctc_greedy_decoder (CTCGreed ((None,), (None,)) 0
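For reference, the exported SavedModel appends the decode step after training; the greedy CTC decoding it performs can be sketched in plain NumPy (the helper name below is hypothetical, and the real layer presumably wraps TensorFlow's CTC ops):

```python
import numpy as np

def ctc_greedy_decode(logits, blank_index):
    """Greedy CTC decode for a single sequence.

    logits: array of shape (time_steps, num_classes).
    Returns the list of class indices after collapsing repeats
    and removing the blank label.
    """
    best_path = np.argmax(logits, axis=-1)          # best class per time step
    decoded = []
    prev = None
    for idx in best_path:
        if idx != prev and idx != blank_index:      # collapse repeats, drop blank
            decoded.append(int(idx))
        prev = idx
    return decoded
```

The decoded indices are then looked up in the character table to produce the output string.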
I switched the data to Chinese, loaded your pretrained weights, and trained for 10 epochs, but sequence_accuracy is still 0. The loss initially dropped nicely, but after two epochs it settled around 30 and stopped changing. I also used ReduceLROnPlateau. The dataset is definitely fine: with the same dataset, a PyTorch implementation reaches 70% sequence_accuracy after just a few epochs. When you trained, after how many epochs did sequence_accuracy start to become non-zero?
I ran the demo of a trained model on a number of images containing text separated by spaces, but only the first word was predicted. I'm a little confused by this: there are multiple words in the images, and I have verified that the space character is in the character table.
What are the validation and test accuracies on the MJSynth dataset?
Thanks for sharing! The MJSynth dataset download link cannot be opened. Could you share the dataset another way, e.g. via Baidu Netdisk or by email to [email protected]?
Hi, I noticed you use tf.ragged.map_flat_values in tokenize, which uses 0 as the padding index, but index 0 also corresponds to a real character in the table. Won't that cause problems during training? Also, what is the final BLK entry in the table used for?
OS: Windows 10
Python version: 3.7.7
TensorFlow version: 2.2.0-gpu
Symptom: a few batches after training starts, I get Error polling for event status: failed to query event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
For example, if I know an image contains only digits, I could mask the letter portion of the RNN output to a very small value, which should improve accuracy.
But when I tried this, it seemed to badly break the subsequent CTC decoding (CTCGreedyDecoder).
Is there any way to implement a whitelist feature?
I recall that Google's Tesseract-OCR disables the whitelist feature in LSTM recognition mode.
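A whitelist can usually be applied by masking logits before decoding, as long as the blank class is left unmasked; masking the blank too would explain the broken CTCGreedyDecoder output. A minimal NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def mask_to_whitelist(logits, allowed, blank_index):
    """Suppress all character classes outside `allowed`.

    The blank class must stay unmasked, otherwise CTC decoding
    (greedy or beam search) can no longer separate repeated
    characters and the output degenerates.
    """
    masked = np.full_like(logits, -1e9)             # effectively -inf
    keep = list(allowed) + [blank_index]
    masked[..., keep] = logits[..., keep]           # restore whitelisted classes
    return masked
```

The masked logits are then fed to the decoder unchanged.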
Line 27 in f41e335
In this project, your <BLK> token corresponds to index 37, which can be understood as the last index (num_classes - 1, or index -1). During training we can explicitly pass any value as blank_index to the CTC loss, but the decoder function has no blank_index parameter. So suppose I arbitrarily set blank_index to, say, 666 during training; at decode time the BLK character at that index would show up in the output string. How should I think about this, and is there a good solution?
I trained a model and tried to load it in OpenCV (Python).
When I convert it to a frozen graph and load it:
Traceback (most recent call last):
File "frozen_test.py", line 4, in <module>
net = cv.dnn.readNet('frozen_graph.pb')
cv2.error: OpenCV(4.5.1) /tmp/pip-req-build-ms668fyv/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1061: error: (-2:Unspecified error) Input layer not found: StatefulPartitionedCall/StatefulPartitionedCall/model/logits/Tensordot in function 'populateNet'
When I convert it to ONNX(tensorflow-onnx) and load:
File "text_recognition.py", line 32, in loadRecognitionModel
self.recognizer = cv.dnn.readNet(modelRecognition)
cv2.error: OpenCV(4.5.1) /tmp/pip-req-build-ms668fyv/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1887: error: (-2:Unspecified error) in function 'handleNode'
> Node [Gather]:(StatefulPartitionedCall/model/reshape7/Shape:0) parse error: OpenCV(4.5.1) /tmp/pip-req-build-ms668fyv/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1648: error: (-215:Assertion failed) indexMat.total() == 1 in function 'handleNode'
Python 3.8.5
Tensorflow 2.5.0
OpenCV 4.5.1
Ubuntu 20.04.2 LTS
What is the best way to use CRNN.tf2 in OpenCV?
python tools/demo.py --images demo/ --config configs/mjsynth.yml --model save/
2022-03-07 13:51:11.325152: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-07 13:51:11.335757: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-07 13:51:11.674440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9596 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-03-07 13:51:17.374815: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2022-03-07 13:51:18.719922: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100
2022-03-07 13:51:20.915022: I tensorflow/stream_executor/cuda/cuda_blas.cc:1760] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
Path: demo/2.jpg, y_pred: [b''], probability: [1.]
Path: demo/3.jpg, y_pred: [b'\xe9\x80\xa2\xe5\x9c\xba\xe7\xab\xbf\xe6\x9c\xa8'], probability: [0.9996128]
Path: demo/5.jpg, y_pred: [b'\xe5\x8f\xa4\xe4\xbb\x8a\xe5\xbf\x83\xe4\xba\xba\xe6\xb4\x81\xe6\x96\xb9\xe5\x8d\xab'], probability: [0.11679293]
Path: demo/1.jpg, y_pred: [b''], probability: [1.]
Path: demo/0.jpg, y_pred: [b'\xe8\x81\x94\xe4\xba\xa7\xe5\x93\x81'], probability: [0.99998957]
Path: demo/6.jpg, y_pred: [b'\xe7\x91\x9e\xe6\x99\xaf\xe8\x8b\x91\xe9\xa4\x90'], probability: [0.98446]
Path: demo/4.jpg, y_pred: [b'\xe5\xa4\xa9\xe7\xa7\x8b\xe6\x9c\x89\xe9\x9b\x81\xe7\xbe\xa4'], probability: [0.99987626]
Could you describe the dataset? I'd like to use this model as a pretrained model.
Hi, it seems that you use chars = tf.strings.unicode_split(labels, 'UTF-8') to split label strings in this project. I want to use a grapheme-splitting function instead to split Tibetan characters (Tibetan graphemes are variable-length in Unicode), but I get the error 'OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.' What should I do? Or do you have any good suggestions?
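One workaround is to split labels into grapheme-like clusters eagerly in Python, before the data enters the tf.data pipeline, so no tensor iteration happens inside a graph. A rough stdlib approximation (not a full Unicode grapheme segmenter) groups each base character with its trailing combining marks:

```python
import unicodedata

def split_clusters(text):
    """Group each base character with following combining marks (Mn/Mc).

    Rough approximation of grapheme clusters, done eagerly in Python
    so it never runs inside a tf.function graph. Tibetan vowel signs
    and subjoined consonants fall in these categories.
    """
    clusters = []
    for ch in text:
        if clusters and unicodedata.category(ch) in ("Mn", "Mc"):
            clusters[-1] += ch                      # attach mark to previous base
        else:
            clusters.append(ch)
    return clusters
```

The resulting clusters can then be mapped to integer IDs with an ordinary lookup table before building the dataset.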
Thanks for open-sourcing such a nice project. I built a Chinese text recognition project based on yours and serve it with tensorflow-serving to improve server-side throughput; sharing it with everyone. Link: https://www.jianshu.com/p/e0d9efaadb0f
I use the demo file to predict, the config file is like below.
dataset_builder: &ds_builder
  table_path: 'example/table.txt'
  img_shape: [32, null, 3]
  max_img_width: 400
  ignore_case: true
train:
  dataset_builder:
    <<: *ds_builder
  train_ann_paths:
    - '/content/gdrive/MyDrive/CRNN/CRNN.tf2/example/annotation_train.txt'
    - '/content/gdrive/MyDrive/CRNN/CRNN.tf2/example/annotation_val.txt'
  val_ann_paths:
    - '/content/gdrive/MyDrive/CRNN/CRNN.tf2/example/annotation_test.txt'
  batch_size_per_replica: 32
  # Number of epochs to train.
  epochs: 2000
  lr_schedule:
    initial_learning_rate: 0.0001
    decay_steps: 600000
    alpha: 0.01
  # TensorBoard Arguments
  # https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard#arguments_1
  tensorboard:
    histogram_freq: 1
    profile_batch: 0
eval:
  dataset_builder:
    <<: *ds_builder
  ann_paths:
    - '/content/gdrive/MyDrive/CRNN/CRNN.tf2/example/annotation_eval.txt'
  batch_size: 1
The probability is [0.9999411], but the result shown is empty. It only works for numbers, not characters.
The result it returns is:
test_images/IMG_20211212_091803.jpg, y_pred: [b''], probability: [0.9999411]
while what I expected is "HSD".
Hope you can answer.
If I'm understanding correctly, the 'sequence accuracy' metric, e.g. val_sequence_accuracy: 0.52, means that 52% of the images in the validation set are read perfectly. Is there a way to ignore differences in spaces? I'm trying to read entire lines of images at once, and would like a prediction of 'hello  there' to be considered equivalent to 'hello there' (even if the CTCLoss differs, sequence_accuracy should treat them as the same).
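One way to get this behavior without touching the CTC loss is to compare predictions and labels after normalizing whitespace, e.g. in a post-hoc evaluation script (function names hypothetical):

```python
def normalize_spaces(text):
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return " ".join(text.split())

def space_insensitive_accuracy(y_true, y_pred):
    """Fraction of predictions that match their label after
    whitespace normalization."""
    matches = sum(
        normalize_spaces(t) == normalize_spaces(p)
        for t, p in zip(y_true, y_pred)
    )
    return matches / len(y_true)
```

This keeps the built-in metric untouched during training while reporting the space-insensitive number at evaluation time.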
I downloaded the MJSynth dataset and followed the instructions, but during the first epoch the loss suddenly becomes NaN. I tried adding regularization and norm clipping, but neither fixed the problem.
Also, the sequence accuracy is 0.0 most of the time.
@FLming Thanks for sharing. While running, I found that when the input labels are long, I get the following error. How should I adjust for this? Thanks!
tensorflow/core/util/ctc/ctc_loss_calculator.cc:144] No valid path found.
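This CTC warning typically means a label cannot be aligned to the model's output: CTC needs at least one time step per character, plus an extra step between identical adjacent characters. Assuming the usual CRNN horizontal downsampling factor of 4 (an assumption about this architecture), a pre-flight filter could look like:

```python
def fits_ctc(label, img_width, downsample=4):
    """Check whether a label can be aligned by CTC.

    CTC needs at least one time step per label character, plus one
    extra step between each pair of identical adjacent characters
    (a blank must separate them). `downsample` is the horizontal
    reduction of the conv stack (assumed to be 4 here).
    """
    time_steps = img_width // downsample
    repeats = sum(1 for a, b in zip(label, label[1:]) if a == b)
    return len(label) + repeats <= time_steps
```

Samples failing this check could be dropped (or their images widened) before training to silence the warning.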
I converted a model trained in CRNN.tf2 to ONNX format (via tensorflow-onnx). Thanks to @FLming I know that it is impossible to run it in OpenCV, so I tried ONNX Runtime. It works, but I don't know how to get the prediction confidence.
import cv2
import numpy as np
import onnxruntime as rt

img = cv2.imread("file.jpg")
img = img.astype(np.float32)
# A batch dimension is likely required; resizing/scaling to match the
# training preprocessing may also be needed.
img = np.expand_dims(img, axis=0)

sess = rt.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name
result = sess.run([output_name], {input_name: img})
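If the exported graph ends at the logits (before the decoder), one common approximation of a prediction confidence is the product of the per-step softmax maxima along the greedy path. A NumPy sketch (an approximation, not the repo's exact computation):

```python
import numpy as np

def greedy_confidence(logits):
    """Approximate sequence confidence from raw logits.

    logits: (time_steps, num_classes). Softmax each step, then take
    the product of the winning class probabilities along the
    greedy path.
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    return float(np.prod(probs.max(axis=-1)))
```

Applied to the ONNX output tensor (with the batch dimension stripped), this yields a number comparable to the probability printed by the demo script.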
raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
ValueError: Unknown layer: Functional
I want to train a model that can recognize Chinese. Can I train it the same way as for English? Which steps should be modified?
I trained with this code on Korean data and got this error:
(most recent call last):
File "crnn/train.py", line 58, in <module>
model.fit(train_ds, epochs=config['epochs'], callbacks=callbacks,
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1183, in fit
tmp_logs = self.train_function(iterator)
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 917, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3023, in __call__
return graph_function._call_flat(
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 591, in call
outputs = execute.execute(
File "/home/***/miniconda3/envs/gpu38/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: 3 root error(s) found.
(0) Invalid argument: data/im\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00pg; Invalid argument
[[{{node ReadFile}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[ReadVariableOp/_290]]
(1) Invalid argument: data/im\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00pg; Invalid argument
[[{{node ReadFile}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
(2) Invalid argument: data/im\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00pg; Invalid argument
[[{{node ReadFile}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[replica_1/assert_equal_1/Assert/Assert/data_3/_218]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_23254]
Function call stack:
train_function -> train_function -> train_function
I think if this code can recognize Chinese without problems, it should handle other UTF-8 text too, but it can't.
What is the cause of this error?
The difference between my data and the example is the file names (they are Korean, e.g. 가나다_453.jpg), so could that be the cause?
Isn't the label a sparse matrix? And the model's last dense layer has the number of classes as its size. How does this work here?
Hello, when I run eval_full_tflite.py from B106Roger's CRNN.tf2 (URL: https://github.com/B106Roger/CRNN.tf2) with the Beam Search Decoder, I get the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Expected concatenating dimensions in the range [-1, 1), but got 1 [Op:ConcatV2] name: concat
Since that author's code is derived from yours, and the decoder.py part was not modified, I wanted to ask whether you have seen this error before and how it might be resolved. Thanks!
I changed table.txt to:
0
1
2
3
4
5
6
7
8
9
After running it, I get IndexError: list index out of range.
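A likely direction to check is whether the table size matches the class count the model expects; the dense layer appears to output len(table) + 1 classes, with the CTC blank appended after the listed characters (an assumption about this setup). A small validation sketch (helper name hypothetical):

```python
def load_table(path):
    """Read one character per line from a table file.

    The class count the dense layer must output would be
    len(table) + 1 if the CTC blank is appended after the
    listed characters (an assumption about this setup).
    """
    with open(path, encoding="utf-8") as f:
        table = [line.rstrip("\n") for line in f if line.rstrip("\n")]
    if len(table) != len(set(table)):
        raise ValueError("duplicate entries in table")
    return table
```

Comparing len(load_table(...)) + 1 against the logits dimension reported by model.summary() should reveal the mismatch behind the IndexError.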
Hello! I am trying to train on my own data starting from exported_model.h5. After 100 epochs I get empty predictions.
My config:
train:
  dataset_builder: &ds_builder
    table_path: 'data/table.txt'
    # 1: Grayscale image, 3: RGB image
    img_channels: 3
    # Images whose width is greater than max_img_width will be dropped.
    # Only takes effect when img_width is null.
    max_img_width: 400
    ignore_case: false
    # If this is not null, the image will be distorted.
    img_width: null
    # If you change the height, change the net.
    img_height: 32
  train_ann_paths:
    - 'data/dataset/annotation_train.txt'
    - 'data/dataset/annotation_val.txt'
  val_ann_paths:
    - 'data/dataset/annotation_test.txt'
  batch_size_per_replica: 256
  # The model to restore, even if the number of characters is different.
  restore: 'exported_model.h5'
  learning_rate: 0.001
  # Number of epochs to train.
  epochs: 100
  # Reduce learning rate when a metric has stopped improving.
  reduce_lr:
    factor: 0.5
    patience: 5
    min_lr: 0.0001
  # Tensorboard
  tensorboard:
    histogram_freq: 1
    profile_batch: 0
eval:
  dataset_builder:
    <<: *ds_builder
  ann_paths:
    - '/datasets/ICDAR/2013/Challenge2_Test_Task3_gt.txt'
  batch_size: 1
If I run the demo with exported_model.h5, I do get predictions. What am I doing wrong?
The problem:
In the current implementation, if a path in the annotation file provided to DatasetBuilder does not exist, or one of the files is somehow corrupted, the entire training comes to a halt and you lose all your progress.
Considering it can take hours to iterate through all the images, this is very frustrating.
I ran into this problem because a few images (around 50) somehow got corrupted while downloading the MJSynth dataset. I did try to clean them up as this solution suggested, but I'm still encountering weird, nonsensical errors:
try:
    model.fit(train_ds,
              epochs=EPOCHS,
              callbacks=callbacks,
              validation_data=val_ds,
              use_multiprocessing=True)
except KeyboardInterrupt:
    pass
File c:\Users\somso\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File c:\Users\somso\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
2 root error(s) found.
(0) INVALID_ARGUMENT: jpeg::Uncompress failed. Invalid JPEG data or crop window.
[[{{node DecodeJpeg}}]]
[[IteratorGetNext]]
[[assert_equal_3/Assert/Assert/data_0/_4]]
(1) INVALID_ARGUMENT: jpeg::Uncompress failed. Invalid JPEG data or crop window.
[[{{node DecodeJpeg}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_11983]
What could be done:
Catch the exceptions thrown while trying to load images, log the caught exception in the terminal, and skip the offending file.
I honestly tried to come up with a solution myself, but I still cannot understand how DatasetBuilder works. lol
I'd be happy to make a PR myself if you have an idea of how to fix this.
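Short of patching DatasetBuilder, a pragmatic interim option is a pre-flight scan that drops annotation entries whose images are missing or truncated (a complete JPEG starts with the SOI marker FF D8 and ends with the EOI marker FF D9). This is a sketch, not the repo's API:

```python
import os

def is_intact_jpeg(path):
    """Heuristic integrity check: the file exists, starts with the
    JPEG SOI marker (FF D8), and ends with the EOI marker (FF D9)."""
    try:
        with open(path, "rb") as f:
            if f.read(2) != b"\xff\xd8":
                return False
            f.seek(-2, os.SEEK_END)
            return f.read(2) == b"\xff\xd9"
    except OSError:
        return False

def filter_annotations(entries):
    """Keep only (image_path, label) pairs whose image passes the check."""
    return [(p, label) for p, label in entries if is_intact_jpeg(p)]
```

Running this once over the annotation list before training avoids the mid-epoch DecodeJpeg crash at the cost of a single pass over the files.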
I'm learning a lot from this project. Thank you.
I'm testing with the h5 file you put on Google Drive.
However, the following error occurs.
python : 3.7.0
tensorflow version : 2.4.0
1. export.py ==> Success
/opt/miniconda3/envs/crnn/bin/python /Users/nosun10005/PycharmProjects/CRNN.tf2-master/tools/export.py --model ../example/model/exported_model.h5 --output ../example/model/saved --config ../configs/mjsynth.yml --post greedy
2. tflite_converter.py ==> Fail
/opt/miniconda3/envs/crnn/bin/python /Users/nosun10005/PycharmProjects/CRNN.tf2-master/tools/tflite_converter.py -m ../example/model/saved -o ../example/model/exported_model.tflite
.......
function_optimizer: Graph size after: 752 nodes (709), 959 edges (915), time = 44.655ms.
function_optimizer: Graph size after: 752 nodes (0), 959 edges (0), time = 18.138ms.
Optimization results for grappler item: __inference_while_cond_5605_592
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_model_ctc_greedy_decoder_RaggedFromSparse_Assert_AssertGuard_true_6516_30714
function_optimizer: function_optimizer did nothing. time = 0ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_while_cond_4719_5412
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_while_body_5606_37121
function_optimizer: function_optimizer did nothing. time = 0ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_while_body_6046_31456
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_while_body_5160_7831
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_while_cond_6045_40507
function_optimizer: function_optimizer did nothing. time = 0ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_while_cond_5159_3281
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_model_ctc_greedy_decoder_RaggedFromSparse_Assert_AssertGuard_false_6517_4821
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_while_body_4720_763
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
File "/Users/nosun10005/PycharmProjects/CRNN.tf2-master/tools/tflite_converter.py", line 22, in
tflite_model = converter.convert()
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 1117, in convert
return super(TFLiteConverterV2, self).convert()
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 920, in convert
_convert_to_constants.convert_variables_to_constants_v2_as_graph(
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1102, in convert_variables_to_constants_v2_as_graph
converter_data = _FunctionConverterData(
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 806, in init
self._build_tensor_data()
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 825, in _build_tensor_data
data = val_tensor.numpy()
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1071, in numpy
maybe_arr = self._numpy() # pylint: disable=protected-access
File "/opt/miniconda3/envs/crnn/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1039, in _numpy
six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.
Epoch 1/20
2022-02-16 08:42:18.888733: W tensorflow/core/framework/op_kernel.cc:1692] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index 1 of dimension 0 out of bounds.
2022-02-16 08:42:18.888759: W tensorflow/core/framework/op_kernel.cc:1692] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index 1 of dimension 0 out of bounds.
2022-02-16 08:42:18.888775: W tensorflow/core/framework/op_kernel.cc:1692] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index 1 of dimension 0 out of bounds.
Traceback (most recent call last):
File "crnn/train.py", line 59, in
validation_data=val_ds)
File "/root/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1184, in fit
tmp_logs = self.train_function(iterator)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 885, in call
result = self._call(*args, **kwds)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 950, in _call
return self._stateless_fn(*args, **kwds)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3040, in call
filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1964, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 596, in call
ctx=ctx)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: slice index 1 of dimension 0 out of bounds.
[[node strided_slice_1 (defined at crnn/train.py:59) ]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[OptionalHasValue_1/_12]]
(1) Invalid argument: slice index 1 of dimension 0 out of bounds.
[[node strided_slice_1 (defined at crnn/train.py:59) ]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_12630]
Function call stack:
train_function -> train_function
How can this be solved, or how can the model be trained for special characters?