
pytorch2tensorrt's Issues

Memory allocation for multiple inputs and outputs

Hi, I see that `inference` only supports memory allocation for a single input and a single output. How should memory be allocated for multiple inputs or multiple outputs?
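The single-input/single-output allocation generalizes by looping over every binding of the engine instead of hard-coding two buffers. Below is a minimal sketch of that loop; the binding list, names, and shapes are illustrative stand-ins for what `engine.get_binding_shape`/`get_binding_dtype` would report, and in real TensorRT/pycuda code `np.empty` would be `cuda.pagelocked_empty` with a matching `cuda.mem_alloc(host.nbytes)` device buffer per binding:

```python
import numpy as np

def allocate_buffers(bindings):
    """Allocate one host buffer per binding instead of assuming
    exactly one input and one output.

    `bindings` is a list of (name, shape, dtype, is_input) tuples,
    standing in for the engine's binding metadata. In real code each
    host buffer is page-locked and paired with a device allocation of
    the same byte size.
    """
    inputs, outputs = [], []
    for name, shape, dtype, is_input in bindings:
        host = np.empty(shape, dtype=dtype)  # cuda.pagelocked_empty in real code
        (inputs if is_input else outputs).append((name, host))
    return inputs, outputs

# Hypothetical two-input, two-output engine layout.
bindings = [
    ("images", (1, 3, 640, 640), np.float32, True),
    ("mask",   (1, 1, 640, 640), np.float32, True),
    ("boxes",  (1, 8400, 4),     np.float32, False),
    ("scores", (1, 8400),        np.float32, False),
]
inputs, outputs = allocate_buffers(bindings)
```

The execution call then passes the full list of device pointers (one per binding, in binding order) rather than a fixed `[d_input, d_output]` pair.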

No .cache file generated when converting an INT8 model

Hi, I converted an INT8 TRT engine with main.py, but when I test it with inference the output is all NaN. I suspect the calibrator never ran: no calibration cache file was generated, and the functions next_batch, get_batch_size, and get_batch in myCalibrator.py were never executed (I added a print statement to each of them to check). Any ideas how to debug this?
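When none of the calibrator's methods fire, the usual cause is that the calibrator object was never attached to the builder config (or the INT8 flag was never set), so TensorRT builds without calibration scales. This is a guess at the failure mode, not a confirmed diagnosis of this repo. The sketch below is a plain-Python stand-in for the batch-feeding logic inside an `IInt8EntropyCalibrator2` subclass, with the two commonly-missed config lines shown as comments:

```python
import numpy as np

class CalibrationFeeder:
    """Plain-Python stand-in for the batching logic of an
    IInt8EntropyCalibrator2 subclass. TensorRT only calls
    get_batch_size/get_batch if the calibrator is attached:

        config.set_flag(trt.BuilderFlag.INT8)
        config.int8_calibrator = calibrator  # often the missing step

    Without both lines the cache file is never written and the engine
    is built without calibration scales, which can produce NaN output.
    """
    def __init__(self, num_images, batch_size):
        self.batch_size = batch_size
        self.num_images = num_images
        self.index = 0

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self):
        # Return None once the data is exhausted, as TensorRT expects;
        # real code copies the batch to a device buffer and returns
        # [int(device_ptr)] instead of the array itself.
        if self.index + self.batch_size > self.num_images:
            return None
        batch = np.zeros((self.batch_size, 3, 640, 640), dtype=np.float32)
        self.index += self.batch_size
        return batch

feeder = CalibrationFeeder(num_images=10, batch_size=4)
```

If the attachment is already in place, the next thing to check is that the image directory actually yields at least one full batch, since an empty feeder also writes no cache.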

Engine conversion fails

Namespace(batch_size=1, channel=3, height=640, width=640, cache_file='', mode='int8', onnx_file_path='/home/hitcrt/code/LabelVisulization/models/seg_0.947.onnx', engine_file_path='seg_0.947.trt', dynamic=False, imgs_dir='/home/home_expand/dataset/carBox/NanHang')
Loading ONNX file from path /home/hitcrt/code/LabelVisulization/models/seg_0.947.onnx...
Beginning ONNX file parsing
[12/25/2023-12:36:34] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Parsing ONNX file complete!
Building an engine from file /home/hitcrt/code/LabelVisulization/models/seg_0.947.onnx; this may take a while...
[12/25/2023-12:36:34] [TRT] [W] BuilderFlag::kENABLE_TACTIC_HEURISTIC has been ignored in this builder run. This feature is only supported on Ampere and beyond.
batch:[2599/2599]
[12/25/2023-12:43:50] [TRT] [W] Missing scale and zero-point for tensor /model.30/dfl/Softmax_output_0, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[12/25/2023-12:43:50] [TRT] [W] Missing scale and zero-point for tensor (Unnamed Layer* 352) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
activation(42): error: identifier "inff" is undefined
dst0 = tmp * inff;
^

1 error detected in the compilation of "activation".

[12/25/2023-12:48:28] [TRT] [E] 1: Unexpected exception NVRTC error: NVRTC_ERROR_COMPILATION
ERROR: Create engine failed!

TensorRT 9.X.0

Hi,

After reading through your code, it seems to be mainly meant for the TRT 8.x releases. Do you think your approach should also work with the TRT 9.x.0 releases?

do_inference error

Hi, when I run do_inference.py I get the following error:
(screenshot of the error omitted)
Could this be related to the ONNX model I converted myself?

Also, is your centernet based on the original implementation, or did you build it yourself?

pth and trt results differ

For my own model, the pth and onnx results are identical, number for number, but the trt results do not match. How can I fix this, and why does it happen?
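A bit-exact match between ONNX and a TensorRT engine is generally not expected: FP16 or INT8 engines reorder and quantize arithmetic, so outputs should be compared with a tolerance rather than for equality. As a hedged first debugging step (the tolerances below are illustrative defaults, not values from this repo), compare against an FP32 engine with something like:

```python
import numpy as np

def outputs_match(ref, trt_out, rtol=1e-3, atol=1e-4):
    """Compare a PyTorch/ONNX reference output with a TensorRT output.

    Small numeric drift is normal for FP16/INT8 engines. If an FP32
    engine still fails this tolerance check, suspect a real bug such
    as mismatched preprocessing, binding order, or input layout
    rather than quantization noise.
    """
    return np.allclose(ref, trt_out, rtol=rtol, atol=atol)

ref = np.array([0.1234, 0.5678], dtype=np.float32)
drifted = ref + np.float32(5e-5)   # small drift, e.g. from FP16
```

If the FP32 engine matches but INT8 does not, the calibration data or calibrator setup is the likely culprit; NVIDIA's Polygraphy tool can also compare ONNX Runtime and TensorRT outputs layer by layer.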
