
Comments (5)

TimYao18 avatar TimYao18 commented on June 7, 2024 1

Hi @TimYao18, I also encountered this problem on a mobile device. I hope you can share how you solved it, thanks~

I used the tool onnx2tf to convert the model, and by default it converts into the right format.

from midas.

TimYao18 avatar TimYao18 commented on June 7, 2024

The output of step 3 during conversion to TFLite is as below:

2024-01-10 10:59:33.953038: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-01-10 10:59:33.953063: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------
 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 17122, Total Ops 34728, % non-converted = 49.30 %
 * 17122 ARITH ops

- arith.constant: 17122 occurrences  (f32: 17109, i32: 13)

  (per-op dtype counts followed here; the op names were lost when the log was copied)
2024-01-10 10:59:39.877035: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2989] Estimated count of arithmetic ops: 9.246 G  ops, equivalently 4.623 G  MACs
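As a sanity check on the figures in this log, here is a small pure-Python sketch (the numbers are taken from the summary above; the factor of 2 reflects that one multiply-accumulate counts as two arithmetic ops):

```python
# Sanity-check the figures reported in the conversion log above.
non_converted_ops = 17122
total_ops = 34728

# Percentage of ops left in the arith dialect (not converted to tfl).
pct = non_converted_ops / total_ops * 100
print(f"% non-converted = {pct:.2f} %")  # 49.30 %

# One multiply-accumulate (MAC) is a multiply plus an add,
# so MACs = ops / 2.
estimated_ops_g = 9.246
estimated_macs_g = estimated_ops_g / 2
print(f"{estimated_macs_g:.3f} G MACs")  # 4.623 G MACs
```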


TimYao18 avatar TimYao18 commented on June 7, 2024

Now I can convert the model-small.onnx to tflite, but the result images are blurry. Where did I go wrong? Please help me~

  1. Fix the ONNX model's input-name problem (the model's input tensor is named '0', which causes KeyError: '0'):
import onnx
from onnx import helper

onnx_model_path = "model-small.onnx"
onnx_model = onnx.load(onnx_model_path)

# Define a mapping from old names to new names
name_map = {"0": "arg_0"}

# Initialize a list to hold the new inputs
new_inputs = []

# Iterate over the inputs and change their names if needed
for inp in onnx_model.graph.input:
    if inp.name in name_map:
        # Create a new ValueInfoProto with the new name
        new_inp = helper.make_tensor_value_info(name_map[inp.name],
                                                inp.type.tensor_type.elem_type,
                                                [dim.dim_value for dim in inp.type.tensor_type.shape.dim])
        new_inputs.append(new_inp)
    else:
        new_inputs.append(inp)

# Clear the old inputs and add the new ones
onnx_model.graph.ClearField("input")
onnx_model.graph.input.extend(new_inputs)

# Go through all nodes in the model and replace the old input name with the new one
for node in onnx_model.graph.node:
    for i, input_name in enumerate(node.input):
        if input_name in name_map:
            node.input[i] = name_map[input_name]

# Save the renamed ONNX model
onnx_model_path = "model-small-fix.onnx"
onnx.save(onnx_model, onnx_model_path)
  2. Convert it into TensorFlow SavedModel format (the output of the TF model is confirmed OK):
import onnx
from onnx_tf.backend import prepare

model_path = "model-small-fix.onnx"
output_path = "modified_model_2"

onnx_model = onnx.load(model_path) # load onnx model
tf_rep = prepare(onnx_model)  # prepare tf representation
tf_rep.export_graph(output_path)  # export the model
  3. Convert the TensorFlow SavedModel into TFLite:
import tensorflow as tf

path = 'modified_model_2'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir=path)
tf_lite_model = converter.convert()
with open('model_1.tflite', 'wb') as f:
    f.write(tf_lite_model)
  4. When the last script runs, the log output is as below:
python convert_tf2tflite_savedmodel.py 
2024-01-10 14:26:37.071501: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-01-10 14:26:37.071524: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
2024-01-10 14:26:37.072913: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /modified_model_2
2024-01-10 14:26:37.089615: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-01-10 14:26:37.089642: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: /modified_model_2
2024-01-10 14:26:37.110969: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-01-10 14:26:37.118290: I tensorflow/cc/saved_model/loader.cc:233] Restoring SavedModel bundle.
2024-01-10 14:26:37.263435: I tensorflow/cc/saved_model/loader.cc:217] Running initialization op on SavedModel bundle at path: /modified_model_2
2024-01-10 14:26:37.389643: I tensorflow/cc/saved_model/loader.cc:316] SavedModel load for tags { serve }; Status: success: OK. Took 316731 microseconds.
2024-01-10 14:26:37.545668: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
Summary on the non-converted ops:
---------------------------------
 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 293, Total Ops 804, % non-converted = 36.44 %
 * 293 ARITH ops

- arith.constant:  293 occurrences  (f32: 281, i32: 12)
  (per-op dtype counts followed here; the op names were lost when the log was copied)
2024-01-10 14:26:38.047603: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2989] Estimated count of arithmetic ops: 9.246 G  ops, equivalently 4.623 G  MACs


TimYao18 avatar TimYao18 commented on June 7, 2024

I found that the input shape (1, 3, 256, 256) of my TensorFlow Lite model is different from that of the MiDaS-provided model (1, 256, 256, 3).

I used onnx2tf, which automatically transposes (N, C, H, W) to (N, H, W, C), so I will close this issue.
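The layout mismatch above can be illustrated with a small NumPy sketch (the shapes match this thread; `np.transpose` here is just a stand-in for the transposes that onnx2tf bakes into the converted graph):

```python
import numpy as np

# A MiDaS-style input in the ONNX/PyTorch channel-first layout: (N, C, H, W).
nchw = np.zeros((1, 3, 256, 256), dtype=np.float32)

# TFLite models typically expect the channel-last layout: (N, H, W, C).
nhwc = np.transpose(nchw, (0, 2, 3, 1))

print(nchw.shape)  # (1, 3, 256, 256)
print(nhwc.shape)  # (1, 256, 256, 3)
```

Feeding a (1, 3, 256, 256) buffer into a model that expects (1, 256, 256, 3) scrambles the pixel layout, which is consistent with the blurry outputs described earlier in this thread.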


visonpon avatar visonpon commented on June 7, 2024

Hi @TimYao18, I followed your steps and the test results are normal, but I also encountered this problem on a mobile device. I hope you can share how you solved it, thanks~

