Comments (3)
Thanks for replying. Let me explain this in a bit more detail. Polygraphy creates its own CPU ONNXRuntime session and compares its output against TensorRT's. The input shape we provide is also used to generate the same random input for the CPU ONNXRuntime session, and that comparison passes, whatever input shape we provide.
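For reference, this is roughly what that comparison looks like with Polygraphy's Python API (a minimal sketch; "model.onnx" is a placeholder for my actual model):

```python
# Minimal sketch of the Polygraphy comparison described above;
# "model.onnx" is a placeholder for my actual model.
from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner
from polygraphy.comparator import Comparator

# One TensorRT runner and one CPU ONNXRuntime runner over the same model.
build_engine = EngineFromNetwork(NetworkFromOnnxPath("model.onnx"))
runners = [TrtRunner(build_engine), OnnxrtRunner(SessionFromOnnx("model.onnx"))]

# Comparator feeds the same random input to both runners and compares outputs;
# this is the comparison that passes for any input shape I provide.
results = Comparator.run(runners)
assert bool(Comparator.compare_accuracy(results))
```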
When I instead use ONNXRuntime to create the TensorRT session, it does not pass the test when I compare its output with the ONNXRuntime CPU session. The same goes for the ONNXRuntime CUDA session compared against the CPU session: that test does not pass either. I suspect some optimization happens while the ONNX model runs on the GPU that reduces the accuracy.
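This is roughly my ONNXRuntime-side check (a sketch; the model path, the input name "input", and the shape are placeholders):

```python
# Sketch of the ONNXRuntime-side check: run identical input through the
# CPU EP and the TensorRT EP and look at the difference. "model.onnx",
# the input name "input", and the shape are placeholders.
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 800, 10).astype(np.float32)

cpu_sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
trt_sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)

ref = cpu_sess.run(None, {"input": x})[0]
out = trt_sess.run(None, {"input": x})[0]
print(np.max(np.abs(ref - out)))  # the deviation that fails my comparison
```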
I also tested this with Polygraphy itself: I changed its ONNXRuntime session from the CPU provider to the GPU provider and compared it against Polygraphy's TensorRT backend. That comparison failed, while it passed when I used the CPU provider.
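The only change on the Polygraphy side was the session's provider, along these lines (a sketch, assuming SessionFromOnnx's providers argument; "model.onnx" is a placeholder):

```python
# The only Polygraphy-side change: build the ONNXRuntime session with the
# CUDA EP instead of the CPU EP. "model.onnx" is a placeholder.
from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx

cpu_runner = OnnxrtRunner(SessionFromOnnx("model.onnx", providers=["cpu"]))
cuda_runner = OnnxrtRunner(SessionFromOnnx("model.onnx", providers=["cuda"]))
```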
Additionally, I provided a dynamic-shape optimization profile when building the engine through ONNXRuntime's TensorRT EP, but it did not help.
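For reference, I passed the profile through the TensorRT EP's trt_profile_* provider options, roughly like this sketch (the input name and the min/opt/max ranges are placeholders for my model):

```python
# Sketch of how I passed the dynamic-shape profile to the TensorRT EP via
# its trt_profile_* provider options; the input name "input" and the
# min/opt/max ranges are placeholders for my model.
import onnxruntime as ort

trt_options = {
    "trt_profile_min_shapes": "input:1x800x10",
    "trt_profile_opt_shapes": "input:1x860x10",
    "trt_profile_max_shapes": "input:1x900x10",
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_options),
               "CUDAExecutionProvider"],
)
```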
I also compared the outputs with Torch. The closest match was between the ONNXRuntime CPU session and Polygraphy's TensorRT backend. Attached are the benchmarks I ran.
Thanks for the explanation. I'm going to reopen this.
It's probably worth taking a closer look to see where the difference is coming from, e.g. whether some optimization pass in ORT changes the graph. +@chilo-ms, can you take a look when you have time?
I haven't used Polygraphy before, but it looks to me like the comparisons aren't exactly apples to apples here:
- For ONNXRuntime, you're comparing the TensorRT EP vs. the CUDA EP with a range of shapes ([1, 860, 10], [1, 870, 10], ...) and random data.
- For Polygraphy, you're comparing the ONNXRuntime CPU EP vs. TensorRT with the explicit shape [1, 800, 10] and random data.
In theory, if you feed the exact same input data and shape (not random) to the ONNXRuntime TensorRT EP and to Polygraphy with the TensorRT backend, they should both return the same output (assuming they are using the same TensorRT version). Can you confirm that is the case?
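Something along these lines would make the runs directly comparable (just a sketch; "model.onnx", the input name "input", and the shape are placeholders):

```python
# Sketch: one fixed input fed to both ONNXRuntime's TensorRT EP and a
# standalone TensorRT engine built via Polygraphy. "model.onnx" and the
# input name "input" are placeholders.
import numpy as np
import onnxruntime as ort
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner

x = np.random.rand(1, 800, 10).astype(np.float32)  # fixed data, reused below
feed = {"input": x}

ort_out = ort.InferenceSession(
    "model.onnx", providers=["TensorrtExecutionProvider"]
).run(None, feed)[0]

with TrtRunner(EngineFromNetwork(NetworkFromOnnxPath("model.onnx"))) as runner:
    trt_out = list(runner.infer(feed).values())[0]

# With the same TensorRT version under both, this should be ~0.
print(np.max(np.abs(ort_out - trt_out)))
```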
+@kevinch-nv for any advice he can provide on using Polygraphy.