
iwatake2222 / inferencehelper

268 stars · 17 watchers · 55 forks · 4.34 MB

C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, TensorFlow

License: Apache License 2.0

CMake 3.79% C++ 95.03% Python 0.59% Shell 0.30% PowerShell 0.29%
cpp deep-learning deeplearning tensorflow tensorrt opencv ncnn mnn inference nnabla


inferencehelper's Issues

iMX8 Plus NPU delegate

Environment (Hardware)

  • Hardware: iMX8 Plus SoC with NPU.
  • Software: Yocto, Qt, CMake

Information

I have a Qt6 application (CMake based) that includes InferenceHelper with TensorFlow Lite support.
I include this Qt project in a Yocto project, which generates a Linux image for the iMX8 Plus platform.

The project compiles fine, and InferenceHelper runs with the INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK setting.

external_delegate_path

In Yocto, tensorflow-lite and tensorflow-lite-vx-delegate for the iMX8P are integrated.
If I run the following command from the installed examples:

USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so

TensorFlow then uses the NPU hardware acceleration. The important parts for this are:

  • USE_GPU_INFERENCE=0
  • --external_delegate_path=/usr/lib/libvx_delegate.so

Question

Is it possible to include USE_GPU_INFERENCE and --external_delegate_path=/usr/lib/libvx_delegate.so in the XNNPACK settings, or do I need to create a completely custom delegate pack?
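
For reference, the following is a minimal stand-alone TensorFlow Lite sketch (plain TFLite calls, not the InferenceHelper API) of what the label_image command above does: set USE_GPU_INFERENCE in the environment and load libvx_delegate.so through TFLite's external-delegate interface. The model filename and delegate path are copied from the command above; that setting the environment variable programmatically behaves the same as exporting it in the shell is an assumption.

#include <cstdlib>
#include <memory>

#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // USE_GPU_INFERENCE=0 is read by the i.MX8 VX/OpenVX stack to select the
  // NPU instead of the GPU (assumption: setenv here is equivalent to
  // exporting the variable before launching the process).
  setenv("USE_GPU_INFERENCE", "0", /*overwrite=*/1);

  auto model = tflite::FlatBufferModel::BuildFromFile(
      "mobilenet_v1_1.0_224_quant.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Load the VX delegate as a TFLite "external delegate" and attach it,
  // mirroring --external_delegate_path=/usr/lib/libvx_delegate.so.
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  interpreter->ModifyGraphWithDelegate(delegate);

  interpreter->AllocateTensors();
  // ... fill input tensors, interpreter->Invoke(), read output tensors ...

  interpreter.reset();  // release the interpreter before destroying the delegate
  TfLiteExternalDelegateDelete(delegate);
  return 0;
}

Doing the same from inside InferenceHelper would presumably need a new delegate option next to INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK rather than reusing the XNNPACK setting itself.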

GPU Device Selection

How can I choose between the gpu1 and gpu2 devices when using an embedded platform with two GPUs deployed via TFLite's NNAPI?
e.g. a dual-GPU embedded platform such as the NXP i.MX8 QuadMax
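
For illustration, a minimal sketch (plain TFLite, not the InferenceHelper API) of how a specific device can be requested through the NNAPI delegate's accelerator_name option. "gpu-device-1" is a placeholder, not a real i.MX8 QuadMax device string; actual names are vendor specific and have to be taken from what the platform's NNAPI driver reports (e.g. via ANeuralNetworksDevice_getName()).

#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Pin execution to one NNAPI accelerator; the delegate is declared first
  // so that it outlives the interpreter that uses it.
  tflite::StatefulNnApiDelegate::Options options;
  options.accelerator_name = "gpu-device-1";  // placeholder device name
  tflite::StatefulNnApiDelegate delegate(options);

  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  interpreter->ModifyGraphWithDelegate(&delegate);
  interpreter->AllocateTensors();
  // ... fill input tensors, interpreter->Invoke(), read output tensors ...
  return 0;
}

Selecting the second GPU would then just be a matter of passing the other device name reported by the driver.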

PreProcess is slow

In the loop in the PreProcess() function, GetWidth()/GetHeight()/GetChannel() are used as the end conditions. This slows down processing because these getter functions contain some logic.
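
A simplified sketch of the pattern being described (TensorInfoLike is a stand-in for the project's tensor-info class, not the actual InferenceHelper code): reading the bounds once before the loop avoids re-running the getter logic on every iteration.

#include <cstdint>

// Stand-in for InferenceHelper's tensor-info class (placeholder, not real code).
struct TensorInfoLike {
  int32_t width, height, channel;
  int32_t GetWidth() const { return width; }
  int32_t GetHeight() const { return height; }
  int32_t GetChannel() const { return channel; }
};

// Slow pattern: the getters are evaluated as the end condition on every iteration.
void PreProcessSlow(const TensorInfoLike& info) {
  for (int32_t c = 0; c < info.GetChannel(); c++)
    for (int32_t y = 0; y < info.GetHeight(); y++)
      for (int32_t x = 0; x < info.GetWidth(); x++) {
        // ... per-element pre-processing ...
      }
}

// Faster pattern: hoist the loop bounds out of the loop once.
void PreProcessFast(const TensorInfoLike& info) {
  const int32_t channel = info.GetChannel();
  const int32_t height = info.GetHeight();
  const int32_t width = info.GetWidth();
  for (int32_t c = 0; c < channel; c++)
    for (int32_t y = 0; y < height; y++)
      for (int32_t x = 0; x < width; x++) {
        // ... per-element pre-processing ...
      }
}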

Fix build for NNAPI

  • Issue
    • After updating TensorFlow Lite to v2.6.0, the build with the NNAPI delegate on Android fails
