
Comments (3)

jywu-msft commented on July 19, 2024

> @jywu-msft
>
> Thanks for replying. Let me explain this in a bit more detail. Polygraphy uses its own CPU ONNXRuntime session to compare the output with TensorRT. The shape we provide for the input is also used to generate the same random input for the CPU ONNXRuntime session, and this comparison passes. We can provide any input shape we want.
>
> When I use ONNXRuntime to create the TensorRT session, the test does not pass when comparing the output against the ONNXRuntime CPU session. The same goes for the ONNXRuntime CUDA session compared against the CPU session. I think some optimization happens while the ONNX model runs on the GPU, which reduces the accuracy.
>
> I also tested this with Polygraphy: I changed the ONNXRuntime session from the CPU provider to the GPU provider and compared it against Polygraphy's TensorRT backend. That comparison failed, whereas it passed with the CPU provider.
>
> Additionally, I provided a dynamic shape profile while converting the model with ONNXRuntime, but it did not help.
>
> I also compared the outputs with Torch. The closest match was between the ONNXRuntime CPU session and Polygraphy's TensorRT backend. Attached are the benchmarks I ran.
>
> [Image attachment: benchmark results]

Thanks for the explanation. I'm going to reopen this.
It's probably worth taking a closer look to see where the difference is coming from, and whether there's some optimization pass in ORT that changes the graph. +@chilo-ms, can you take a look when you have time?

from onnxruntime.

jywu-msft commented on July 19, 2024

I haven't used Polygraphy before, but it looks to me like the comparisons aren't exactly apples to apples here.
For OnnxRuntime, you're comparing the TensorRT EP vs. the CUDA EP with a range of shapes ([1, 860, 10], [1, 870, 10]) and random data.
For Polygraphy, you're comparing the OnnxRuntime CPU EP vs. TensorRT with the explicit shape [1, 800, 10] and random data.
In theory, if you feed the exact same input data/shape (not random) to the OnnxRuntime TensorRT EP and to Polygraphy with the TensorRT backend, they should both return the same output (assuming they use the same TensorRT version). Can you confirm that is the case?
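The check being suggested could be sketched as follows. The tolerance rule below mirrors the absolute-plus-relative check used by `numpy.allclose` (and by Polygraphy's default comparator); the commented-out runner helpers and the input name are hypothetical, since the actual model isn't shown in this thread:

```python
import random

def outputs_match(ref, test, rtol=1e-5, atol=1e-5):
    """True if every element satisfies |r - t| <= atol + rtol * |r|."""
    return all(abs(r - t) <= atol + rtol * abs(r) for r, t in zip(ref, test))

# Fix the seed so both backends receive identical input data
# (shape [1, 800, 10], flattened here for simplicity).
random.seed(0)
fixed_input = [random.uniform(-1.0, 1.0) for _ in range(1 * 800 * 10)]

# ref_out = run_with_ort_trt_ep(fixed_input)       # hypothetical helper
# test_out = run_with_polygraphy_trt(fixed_input)  # hypothetical helper
# print(outputs_match(ref_out, test_out))
```

If the two TensorRT paths still disagree on identical inputs, the difference is in how each frontend builds or configures the engine, not in the random data.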
+@kevinch-nv for any advice he can provide on using Polygraphy.


akmalmasud96 commented on July 19, 2024

@jywu-msft

Thanks for replying. Let me explain this in a bit more detail. Polygraphy uses its own CPU ONNXRuntime session to compare the output with TensorRT. The shape we provide for the input is also used to generate the same random input for the CPU ONNXRuntime session, and this comparison passes. We can provide any input shape we want.

When I use ONNXRuntime to create the TensorRT session, the test does not pass when comparing the output against the ONNXRuntime CPU session. The same goes for the ONNXRuntime CUDA session compared against the CPU session. I think some optimization happens while the ONNX model runs on the GPU, which reduces the accuracy.
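Small CPU/GPU output differences don't necessarily indicate a bug: floating-point addition is not associative, and GPU kernels typically reduce in a different order (and may fuse operations) than the CPU path. A tiny stdlib illustration of how reordering alone changes a result, unrelated to any specific ORT optimization:

```python
# Floating-point addition is not associative: summing the same three
# numbers in a different order gives a different answer.
a, b, c = 1e16, -1e16, 1.0

left_to_right = (a + b) + c  # the large terms cancel first, then 1.0 is added
reassociated = a + (b + c)   # 1.0 is absorbed by -1e16 before the cancellation

print(left_to_right, reassociated)  # 1.0 0.0
```

The same effect, spread across millions of accumulations in matmuls and reductions, is why GPU and CPU outputs rarely match bit-for-bit and why comparisons need an explicit tolerance.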

I also tested this with Polygraphy: I changed the ONNXRuntime session from the CPU provider to the GPU provider and compared it against Polygraphy's TensorRT backend. That comparison failed, whereas it passed with the CPU provider.

Additionally, I provided a dynamic shape profile while converting the model with ONNXRuntime, but it did not help.
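For reference, recent OnnxRuntime releases let you pin an explicit TensorRT optimization profile through the TensorRT EP's `trt_profile_*_shapes` provider options (check your version's documentation for availability). A minimal sketch, where the input name `input` and the min/opt/max shapes (taken from the ranges mentioned in this thread) are assumptions for this model:

```python
# Provider options pinning the TensorRT optimization profile to the
# shape range discussed above ([1, 800, 10] .. [1, 870, 10]).
trt_options = {
    "trt_profile_min_shapes": "input:1x800x10",
    "trt_profile_opt_shapes": "input:1x860x10",
    "trt_profile_max_shapes": "input:1x870x10",
}
providers = [
    ("TensorrtExecutionProvider", trt_options),
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
```

Pinning the profile rules out shape-dependent engine rebuilds as a variable, but it would not by itself remove precision differences between backends.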

I also compared the outputs with Torch. The closest match was between the ONNXRuntime CPU session and Polygraphy's TensorRT backend. Attached are the benchmarks I ran.

[Image attachment: benchmark results]

