Comments (5)
Hard to say without a stack trace with symbol names.
ORT does most allocations during model initialization and the first inference. After that it uses a memory cache, so a segfault would typically mean an out-of-memory scenario or bad input (e.g. an input tensor that is freed while ORT is still using it).
If you're building from source, can you build a debug version? You may need to ensure the Android build doesn't strip symbols from the binary, as it typically does.
Does the issue happen if you run on the Android emulator? Would be easier to debug if it did.
Another option would be to copy onnxruntime_perf_test to the phone with adb (use /data/local/tmp), along with the model, and run it there. You can specify the number of iterations or the amount of time to run for, and it can generate dummy input data.
from onnxruntime.
Hi @skottmckay thanks for your response.
I have created an MRE in the form of a demo app that exhibits the bug. Please check out this repo. The bug is reproducible on the Android emulator; it will crash anywhere in the range of 100-1000 inference runs, which should only take a few minutes to reach. Does this help in debugging?
I would also like to provide a stack trace of the crash, but I don't know how to get that at the native layer. Any pointers you can give me for that? In any case, I appreciate the help :)
This issue is the same as #21097, which I solved by including the generated header files. In my case, it was caused by function mapping. Maybe you can try that. Hope it helps.
@laurenspriem is it reproducible by running onnxruntime_perf_test in a shell on the emulator? If so that would rule out the issue being in the flutter plugin you're using (which we don't own).
Use `adb push <file> /data/local/tmp` to copy onnxruntime_perf_test and your model to /data/local/tmp. Then, in `adb shell`, run `chmod +x /data/local/tmp/onnxruntime_perf_test` to make it executable and `cd /data/local/tmp`. Running `./onnxruntime_perf_test -I -r 2000 <model.onnx>` will run the model 2000 times, generating random input that matches the model inputs. If that does not crash, the issue is most likely in the flutter plugin.
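Put together, the steps above can be run from the host machine roughly as follows (the binary and model filenames are placeholders; adjust them to your build output):

```shell
# Copy the perf test binary and the model to a writable dir on the device.
adb push onnxruntime_perf_test /data/local/tmp
adb push model.onnx /data/local/tmp

# Make the binary executable, then run it with generated random inputs (-I)
# for 2000 iterations (-r 2000).
adb shell chmod +x /data/local/tmp/onnxruntime_perf_test
adb shell "cd /data/local/tmp && ./onnxruntime_perf_test -I -r 2000 model.onnx"
```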
May be possible to get symbols using ndk-stack: https://developer.android.com/ndk/guides/ndk-stack.html
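As a sketch of that ndk-stack workflow: capture the crash in logcat, then symbolize it by pointing ndk-stack at a directory containing your unstripped .so files (the directory below is an assumption; use wherever your symbolized binaries actually live):

```shell
# Record logcat while reproducing the crash, then stop with Ctrl-C.
adb logcat > crash.txt

# Translate the raw crash frames into symbol/file/line information
# using the unstripped native libraries.
ndk-stack -sym ./app/build/intermediates/cmake/debug/obj/arm64-v8a -dump crash.txt
```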
I am trying to run onnxruntime_perf_test in the emulator as you suggested. However, it stops and gives me the following text back:
/onnxruntime/onnxruntime/test/onnx/TestCase.cc:705 OnnxTestCase::OnnxTestCase(const std::string &, std::unique_ptr<TestModelInfo>, double, double) test case dir doesn't exist
Any clue what is going wrong?