
Comments (8)

wang-xinyu commented on July 22, 2024

The network definition looks fine. Carefully compare whether the timing procedure is the same on both sides. If it is, the cause is probably TensorRT's internal implementation: this module is simply not as fast as it is in PyTorch.


xiaoche-24 commented on July 22, 2024

Hi, could I ask how to rewrite the modules here? How do I carry my own changes to the model's network layers over into the tensorrtx code: do I build the modules myself, or are there existing ones?

Build your own modules by calling the TensorRT API the way tensorrtx does; it is just a stack of convolutions and activation functions.
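For illustration, here is a minimal sketch of such a hand-built module in the tensorrtx style: one 3x3 convolution followed by a SiLU activation. The function name and weight-map keys below are placeholders, not code from this thread.

static ILayer* convSilu(INetworkDefinition* network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, std::string lname) {
    // 3x3 convolution, stride 1, "same" padding.
    IConvolutionLayer* conv = network->addConvolutionNd(input, outch, DimsHW{3, 3}, weightMap[lname + ".weight"], weightMap[lname + ".bias"]);
    assert(conv);
    conv->setStrideNd(DimsHW{1, 1});
    conv->setPaddingNd(DimsHW{1, 1});
    // SiLU(x) = x * sigmoid(x), composed from two primitive layers.
    IActivationLayer* sig = network->addActivation(*conv->getOutput(0), ActivationType::kSIGMOID);
    assert(sig);
    IElementWiseLayer* ew = network->addElementWise(*conv->getOutput(0), *sig->getOutput(0), ElementWiseOperation::kPROD);
    assert(ew);
    return ew;
}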


wang-xinyu commented on July 22, 2024

The output tensor shape and contents are different: tensorrtx optimizes the output layer. See the yololayer implementation for the details.
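For reference, that output layer in tensorrtx is a custom plugin rather than a stack of stock layers. A rough sketch of how a registered plugin is attached to a network follows; the plugin name/version and the three head tensors det0..det2 are assumptions here, so check yololayer.cu in tensorrtx for the real registration and its fields.

// Rough sketch: attach a registered custom plugin to the network.
// "YoloLayer_TRT"/"1" and det0..det2 are assumptions; see yololayer.cu for the real code.
auto creator = getPluginRegistry()->getPluginCreator("YoloLayer_TRT", "1");
PluginFieldCollection fc{0, nullptr};  // the real plugin is configured with fields here
IPluginV2* plugin = creator->createPlugin("yololayer", &fc);
ITensor* detTensors[] = {det0->getOutput(0), det1->getOutput(0), det2->getOutput(0)};
auto yolo = network->addPluginV2(detTensors, 3, *plugin);
yolo->getOutput(0)->setName("prob");
network->markOutput(*yolo->getOutput(0));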


xiaoche-24 commented on July 22, 2024

The output tensor shape and contents are different: tensorrtx optimizes the output layer. See the yololayer implementation for the details.

Hi, I modified model.cpp and completed the pt -> wts -> engine conversion for my improved yolov5 model, and the inference results are correct. But one problem came up: the inference time does not line up with PyTorch. Under PyTorch the improved model is 30% faster than the original, yet after converting to TRT it is 30% slower. What could cause this?
My improved model also contains a RepVGG structure; following yolov5-lite, I merged the RepVGG weights and saved them before converting.
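For context, the weight merging mentioned here rests on standard conv+BN fusion: W' = W * gamma / sqrt(var + eps) and b' = beta + (b - mean) * gamma / sqrt(var + eps); RepVGG additionally pads the 1x1 branch to 3x3 and expresses the identity branch as a 3x3 kernel before summing the fused branches. A hedged sketch of the per-channel arithmetic (names illustrative, not taken from yolov5-lite):

#include <cmath>
#include <vector>

// Sketch: fold a BatchNorm into the preceding convolution, per output channel.
// w holds outCh * perCh kernel elements; b holds outCh biases (zeros if the conv had none).
void fuseConvBn(std::vector<float>& w, std::vector<float>& b,
                const std::vector<float>& gamma, const std::vector<float>& beta,
                const std::vector<float>& mean, const std::vector<float>& var,
                int outCh, float eps = 1e-5f) {
    const int perCh = static_cast<int>(w.size()) / outCh;
    for (int c = 0; c < outCh; ++c) {
        const float scale = gamma[c] / std::sqrt(var[c] + eps);
        for (int k = 0; k < perCh; ++k) w[c * perCh + k] *= scale;
        b[c] = beta[c] + (b[c] - mean[c]) * scale;
    }
}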


wang-xinyu commented on July 22, 2024

First check whether the two sides include the same stages in the measured time, e.g. preprocessing, CPU-to-GPU memcpy, GPU-to-CPU memcpy, and so on.
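One way to make the two measurements comparable is to time only the engine execution with CUDA events, so that preprocessing and both memcpys are excluded on the TensorRT side (a sketch; context, buffers, and stream are assumed to be set up as in the tensorrtx samples). On the PyTorch side, remember to synchronize the GPU before reading the clock.

// Sketch: time only the TensorRT execution, excluding preprocessing and H2D/D2H copies.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

// ... preprocessing and cudaMemcpyAsync host -> device happen before this point ...
cudaEventRecord(start, stream);
context->enqueueV2(buffers, stream, nullptr);
cudaEventRecord(stop, stream);
cudaEventSynchronize(stop);

float ms = 0.f;
cudaEventElapsedTime(&ms, start, stop);
// ... cudaMemcpyAsync device -> host and postprocessing happen after this point ...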


xiaoche-24 commented on July 22, 2024

First check whether the two sides include the same stages in the measured time, e.g. preprocessing, CPU-to-GPU memcpy, GPU-to-CPU memcpy, and so on.

I only compared the infer time. In the improved yolov5 I replaced the C3 module with a combined partialconv + repvgg block taken from FasterNet. Below is my tensorrtx implementation of the partialconv and repvgg modules:
static ILayer* Partial_conv3(INetworkDefinition* network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int n_div, std::string lname) {
    Weights emptywts{ DataType::kFLOAT, nullptr, 0 };
    Dims spliteDims = input.getDimensions();
    int c_out = outch / n_div;
    // Slice 1: the first 1/n_div of the channels, which go through the convolution.
    ISliceLayer* split1 = network->addSlice(input,
        Dims3{0, 0, 0},
        Dims3{spliteDims.d[0] / n_div, spliteDims.d[1], spliteDims.d[2]},
        Dims3{1, 1, 1});
    // Slice 2: the remaining channels, passed through untouched.
    ISliceLayer* split2 = network->addSlice(input,
        Dims3{spliteDims.d[0] / n_div, 0, 0},
        Dims3{spliteDims.d[0] - spliteDims.d[0] / n_div, spliteDims.d[1], spliteDims.d[2]},
        Dims3{1, 1, 1});
    // 3x3 convolution on the first slice only (the "partial" convolution), no bias.
    IConvolutionLayer* partial_conv3 = network->addConvolutionNd(*split1->getOutput(0), c_out, DimsHW{ 3, 3 }, weightMap[lname + ".partial_conv3.0.weight"], emptywts);
    assert(partial_conv3);
    partial_conv3->setStrideNd(DimsHW{1, 1});
    partial_conv3->setPaddingNd(DimsHW{1, 1});
    partial_conv3->setNbGroups(1);
    // Concatenate the convolved slice with the untouched slice along the channel axis.
    ITensor* inputTensors[] = { partial_conv3->getOutput(0), split2->getOutput(0) };
    auto cat1 = network->addConcatenation(inputTensors, 2);
    assert(cat1);
    return cat1;
}

static ILayer* RepVGG(INetworkDefinition *network, std::map<std::string, Weights> &weightMap, ITensor &input, int outch, int stride, int groups, std::string lname)
{
    // Single 3x3 convolution: the 3x3 / 1x1 / identity branches were merged offline,
    // so only the fused ".rbr_dense" weights and bias are loaded here.
    IConvolutionLayer *conv = network->addConvolutionNd(input, outch, DimsHW{3, 3}, weightMap[lname + ".rbr_dense.weight"], weightMap[lname + ".rbr_dense.bias"]);
    assert(conv);
    conv->setStrideNd(DimsHW{stride, stride});
    conv->setPaddingNd(DimsHW{1, 1});
    conv->setNbGroups(groups);
    // Earlier ReLU variant, left disabled:
    // IActivationLayer *relu = network->addActivation(*conv->getOutput(0), ActivationType::kRELU);
    // assert(relu);
    // return relu;
    // SiLU activation, built as x * sigmoid(x) from two primitive layers.
    auto sig = network->addActivation(*conv->getOutput(0), ActivationType::kSIGMOID);
    assert(sig);
    auto ew = network->addElementWise(*conv->getOutput(0), *sig->getOutput(0), ElementWiseOperation::kPROD);
    assert(ew);
    return ew;
}
Could you take a look and tell me which step is written incorrectly and could be increasing the inference time? Under PyTorch the improved model is faster than the original. Thanks for your advice 🙏
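One way to see which layer actually became slower is TensorRT's per-layer profiler; below is a minimal sketch (TensorRT 8 style signature, with context and buffers assumed to be the execution context and bindings from the surrounding code).

#include <iostream>
#include "NvInfer.h"

// Sketch: per-layer timing via IProfiler, to check whether the slice/concat in
// Partial_conv3 or the extra sigmoid/product in RepVGG dominates the runtime.
struct LayerProfiler : public nvinfer1::IProfiler {
    void reportLayerTime(const char* layerName, float ms) noexcept override {
        std::cout << layerName << ": " << ms << " ms\n";
    }
};

// Usage, given an IExecutionContext* context and the binding pointers in buffers:
//   LayerProfiler profiler;
//   context->setProfiler(&profiler);
//   context->executeV2(buffers);   // profiling requires a synchronous execute

If the slice and concat layers dominate, that would be consistent with the comment above that this module may simply be slower inside TensorRT: slice and concat are memory-bound and do not fuse as well as the plain convolutions in the C3 block they replace.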


lwh1229 commented on July 22, 2024


Hi, could I ask how to rewrite the modules here? How do I carry my own changes to the model's network layers over into the tensorrtx code: do I build the modules myself, or are there existing ones?


stale commented on July 22, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

