
Comments (8)

catcor01 commented on May 26, 2024

Hello,

Your issue is similar to the following one: #761.

Transpose and ConvTranspose support has not been added to the ONNX parser. The work to add this support is on our radar but has not been prioritized for the near future. I can make two suggestions to get things running on your side:

  • Convert the ONNX model to tflite and run it through our tflite parser or delegate (a rough sketch follows below).
  • If you want to try to add support for the operators to the ONNX parser, we do welcome contributions. For more information on contributing to Arm NN, please see the Contributing page on the MLPlatform.org website, or see the Contributor Guide.
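
For the first option, a rough sketch of loading an already-converted TFLite file through the Arm NN TfLiteParser looks like this (the model path and backend choice are placeholders):

#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

int main()
{
    // Parse the converted TFLite file into an Arm NN network.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Optimize for the accelerated CPU backend and load it into a runtime.
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec());

    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    // Input/output binding and EnqueueWorkload would follow from here.
    return 0;
}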

Kind Regards,
Cathal.

catcor01 commented on May 26, 2024

Hello again,

I just wanted to add that you can also use ONNX Runtime with ACL to accelerate your model. See here: https://onnxruntime.ai/docs/execution-providers/community-maintained/ACL-ExecutionProvider.html.
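
A rough sketch of what that looks like from C++ (assuming an ONNX Runtime build with the ACL execution provider enabled; the model path is a placeholder, and the exact factory signature depends on the ONNX Runtime version):

#include <onnxruntime_cxx_api.h>

// Normally declared by the ACL provider factory header when ONNX Runtime
// is built with ACL support; this is the older (use_arena) variant.
extern "C" OrtStatus* OrtSessionOptionsAppendExecutionProvider_ACL(
    OrtSessionOptions* options, int use_arena);

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "acl-example");
    Ort::SessionOptions sessionOptions;

    // Route supported operators to the Arm Compute Library; anything
    // unsupported falls back to the default CPU execution provider.
    Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ACL(sessionOptions, 0));

    Ort::Session session(env, "model.onnx", sessionOptions);
    return 0;
}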

Kind Regards,
Cathal.

pandianAK commented on May 26, 2024

Thank you for confirming the issue. I have tried converting ONNX to tflite, but I am facing issues with the onnx_tf library, as it depends on older versions of TensorFlow and TensorFlow Addons.
https://stackoverflow.com/questions/53182177/how-do-you-convert-a-onnx-to-tflite
Would you recommend any other way?

I believe the contribution of adding "Transpose" to the OnnxParser would take some time.

pandianAK commented on May 26, 2024

Hi,
I have tried ONNX Runtime with Arm NN using the "ExecutionProviderArmNN" API. I believe it is built through bazel and has to be compiled from source. Would there be a precompiled binary/library available as part of an ONNX Runtime release with Arm NN support, so that the API could be linked from the header ("OrtSessionOptionsAppendExecutionProvider_ACL" or "OrtSessionOptionsAppendExecutionProvider_ArmNN")?

Thanks
Pandian AK

Colm-in-Arm commented on May 26, 2024

Hello Pandian AK,

There is no publicly available prebuilt ONNX Runtime package that includes the ACL EP.

Just to be clear, it is the ACL execution provider you should be trying. There is an older Arm NN execution provider, but it's probably too old for your purposes.

Colm.

pandianAK commented on May 26, 2024

Hi,
Thanks for the info so far, and for suggesting the execution provider using ACL. I believe the options are not very configurable in that case.

I would also need some information on one of the configurations in Arm NN standalone itself: for the backend options, if I use "CpuAcc", what are all the available optimize options to improve inference time?
An example:

// runtime is an armnn::IRuntimePtr created earlier.
armnn::OptimizerOptions optimizerOptions;
optimizerOptions.m_OptimizeForFastMath = true; // enable FastMath
optimizerOptions.m_ReduceFp32ToFp16 = true;    // run in FP16 where possible
armnn::IOptimizedNetworkPtr optimizedNetwork = armnn::Optimize(
    *network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec(), optimizerOptions);

// Then the code for LoadNetwork and EnqueueWorkload.

Would you advise creating dynamic backends for the inner layers? (Is there any example?)

Thanks

pandianAK commented on May 26, 2024

Hi,
As per the previous comment, I was able to get "Transpose" working by comparing against the TfLiteParser. I will definitely contribute a PR here, if it is allowed, once it passes all the unit tests.

However, I don't see any improvement in time and would still need help utilizing Arm NN to run my model with very low inference time. I was able to get the backend options for CpuAcc from this URL:
https://arm-software.github.io/armnn/latest/runtimeoptions.html
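
For example, this is roughly how I am passing the CpuAcc model options based on that page (the thread count is just an example):

armnn::OptimizerOptions optimizerOptions;
optimizerOptions.m_ModelOptions.push_back(
    armnn::BackendOptions("CpuAcc",
    {
        {"FastMathEnabled", true}, // allows e.g. Winograd convolutions
        {"NumberOfThreads", 4u}    // worker threads for the NEON kernels
    }));
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
    *network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec(), optimizerOptions);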

In the case of GpuAcc, could you please provide all the options available and how to configure them to get the fastest time? Here are some of the options I could find; please add more if there are any:

  1. OpenCL tuning (with a tuning file).
  2. FP16, and running two FP16 instructions in one FP32 cycle (not sure how to enable this).
  3. Thread options.
  4. Any other cache-based options.

I am not sure how to specify these options in Arm NN for the GpuAcc backend. Please let me know if there is more to these options.

Thanks
Pandian AK

Colm-in-Arm commented on May 26, 2024

Hello,

The GpuAcc tuning parameters are described here. These are also available through various command line options in ExecuteNetwork.
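
For example, something along these lines (a sketch based on the runtime options page; the tuning file path is a placeholder):

// Runtime-level GpuAcc options: OpenCL tuning and kernel profiling.
armnn::IRuntime::CreationOptions creationOptions;
creationOptions.m_BackendOptions.emplace_back(
    armnn::BackendOptions("GpuAcc",
    {
        {"TuningLevel", 2},                // 0=none, 1=rapid, 2=normal, 3=exhaustive
        {"TuningFile", "gpu_tuning.data"}, // tuned parameters are read from/written to this file
        {"KernelProfilingEnabled", false}
    }));
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(creationOptions);

// FP16 and FastMath are optimize-time options rather than runtime options.
armnn::OptimizerOptions optimizerOptions;
optimizerOptions.m_ReduceFp32ToFp16 = true;
optimizerOptions.m_ModelOptions.push_back(
    armnn::BackendOptions("GpuAcc", {{"FastMathEnabled", true}}));
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
    *network, {armnn::Compute::GpuAcc}, runtime->GetDeviceSpec(), optimizerOptions);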

Colm.
