Comments (3)
Here's the DEBUG (0) log from onnxruntime - I couldn't find any helpful info in it:
2024-05-24 23:25:21.261904 [I:onnxruntime:, inference_session.cc:533 TraceSessionOptions] Session Options { execution_mode:0 execution_order:DEFAULT enable_profiling:0 optimized_model_filepath: enable_mem_pattern:1 enable_mem_reuse:1 enable_cpu_mem_arena:1 profile_file_prefix:onnxruntime_profile_ session_logid: session_log_severity_level:0 session_log_verbosity_level:0 max_num_graph_transformation_steps:10 graph_optimization_level:3 intra_op_param:OrtThreadPoolParams { thread_pool_size: 0 auto_set_affinity: 0 allow_spinning: 1 dynamic_block_base_: 0 stack_size: 0 affinity_str: set_denormal_as_zero: 0 } inter_op_param:OrtThreadPoolParams { thread_pool_size: 0 auto_set_affinity: 0 allow_spinning: 1 dynamic_block_base_: 0 stack_size: 0 affinity_str: set_denormal_as_zero: 0 } use_per_session_threads:1 thread_pool_allow_spinning:1 use_deterministic_compute:0 config_options: { } }
2024-05-24 23:25:21.262142 [I:onnxruntime:, inference_session.cc:433 operator()] Flush-to-zero and denormal-as-zero are off
2024-05-24 23:25:21.262152 [I:onnxruntime:, inference_session.cc:441 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2024-05-24 23:25:21.262158 [I:onnxruntime:, inference_session.cc:459 ConstructorCommon] Dynamic block base set to 0
2024-05-24 23:25:21.289917 [I:onnxruntime:, inference_session.cc:1602 Initialize] Initializing session.
2024-05-24 23:25:21.304691 [I:onnxruntime:, graph_partitioner.cc:900 InlineFunctionsAOT] This model does not have any local functions defined. AOT Inlining is not performed
2024-05-24 23:25:21.305127 [I:onnxruntime:, graph_transformer.cc:15 Apply] GraphTransformer EnsureUniqueDQForNodeUnit modified: 0 with status: OK
2024-05-24 23:25:21.322771 [I:onnxruntime:, graph_transformer.cc:15 Apply] GraphTransformer Level1_RuleBasedTransformer modified: 1 with status: OK
multiprocessing.pool.RemoteTraceback:
from onnxruntime.
Solved the issue!
Cause:
The model's ONNX opset version was not compatible with the installed onnxruntime version.
Note: this is not an issue with ONNX Runtime itself.
Fix
- Check which onnx version and opset your onnxruntime release requires, e.g. onnxruntime==1.18 requires onnx==1.16 and opset 21.
- Upgrade the model's ONNX opset:
import onnx
from onnx import version_converter

oldModel = onnx.load(modelPath)  # modelPath: path to the existing .onnx model
upgradedModel = version_converter.convert_version(oldModel, 21)  # convert to target opset 21
onnx.save(upgradedModel, modelPath)  # overwrite with the upgraded model
An update on this is expected within a few months of 31 May 2024; see pytorch/pytorch#127167.
As a general best practice, I recommend explicitly stating the ONNX opset version when exporting a model.