Comments (9)
Did you clone the repo with 'git clone --recursive'?
If yes, you can find cmake/config.cmake in this directory: 'TPAT/3rdparty/blazerml-tvm'.
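For context, the Dockerfile's Step 4 (quoted later in this thread) prepares the TVM build directory as below; the cmake/make invocation at the end is a sketch of the usual TVM build, not something taken from this thread:

```shell
# Clone TPAT together with its BlazerML-tvm submodule
git clone --recursive https://github.com/Tencent/TPAT.git
cd TPAT/3rdparty/blazerml-tvm

# Copy the sample build config into a fresh build directory
mkdir build && cp cmake/config.cmake build && cd build

# Typical TVM build (enable LLVM/CUDA in config.cmake first as needed)
cmake .. && make -j8
```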
from tpat.
It seems the document clones the TensorRT code and renames the directory to TPAT, rather than cloning the actual TPAT repository:
git clone -b master https://github.com/nvidia/TensorRT TPAT && cd TPAT && git submodule update --init --recursive
from tpat.
Sorry, our mistake...
It should be the TPAT repository:
git clone --recursive https://github.com/Tencent/TPAT.git
We have corrected the repo address. Thanks.
from tpat.
Did you get it running? I hit some problems during installation. Would you mind leaving a contact so I can ask you a few questions?
from tpat.
Did you get it running? I hit some problems during installation. Would you mind leaving a contact so I can ask you a few questions?
[email protected]
from tpat.
Can't build the Docker image:
Sending build context to Docker daemon 401.9kB
Step 1/9 : FROM nvcr.io/nvidia/tensorflow:20.06-tf1-py3
---> 61568efc3e0e
Step 2/9 : RUN wget -O "llvm-9.0.1.src.tar.xz" https://github.com/llvm/llvm-project/releases/download/llvmorg-9.0.1/llvm-9.0.1.src.tar.xz && tar -xvf llvm-9.0.1.src.tar.xz && mkdir llvm-9.0.1.src/build && cd llvm-9.0.1.src/build && cmake -G "Unix Makefiles" -DLLVM_TARGETS_TO_BUILD=X86 -DCMAKE_BUILD_TYPE="Release" -DCMAKE_INSTALL_PREFIX="/usr/local/llvm" .. && make -j8 && make install PREFIX="/usr/local/llvm"
---> Using cache
---> 81790a52d1ca
Step 3/9 : RUN pip install pycuda onnx nvidia-pyindex && pip install onnx-graphsurgeon onnxruntime tf2onnx xgboost
---> Using cache
---> 7b262fc70ae1
Step 4/9 : RUN git clone --recursive https://github.com/Tencent/TPAT.git /workspace/TPAT && cd /workspace/TPAT/3rdparty/blazerml-tvm && mkdir build && cp cmake/config.cmake build && cd build
---> Running in 6cbc94a98aad
Cloning into '/workspace/TPAT'...
Submodule '3rdparty/blazerml-tvm' (https://github.com/Tencent/BlazerML-tvm.git) registered for path '3rdparty/blazerml-tvm'
Cloning into '/workspace/TPAT/3rdparty/blazerml-tvm'...
fatal: unable to access 'https://github.com/Tencent/BlazerML-tvm.git/': Failed to connect to github.com port 443: Connection timed out
fatal: clone of 'https://github.com/Tencent/BlazerML-tvm.git' into submodule path '/workspace/TPAT/3rdparty/blazerml-tvm' failed
Failed to clone '3rdparty/blazerml-tvm'. Retry scheduled
Cloning into '/workspace/TPAT/3rdparty/blazerml-tvm'...
fatal: unable to access 'https://github.com/Tencent/BlazerML-tvm.git/': Failed to connect to github.com port 443: Connection timed out
fatal: clone of 'https://github.com/Tencent/BlazerML-tvm.git' into submodule path '/workspace/TPAT/3rdparty/blazerml-tvm' failed
Failed to clone '3rdparty/blazerml-tvm' a second time, aborting
The command '/bin/sh -c git clone --recursive https://github.com/Tencent/TPAT.git /workspace/TPAT && cd /workspace/TPAT/3rdparty/blazerml-tvm && mkdir build && cp cmake/config.cmake build && cd build' returned a non-zero code: 1
from tpat.
Can you check your Git environment? The Docker build has to clone TPAT's submodule, 'https://github.com/Tencent/BlazerML-tvm.git', and the timeout suggests the container cannot reach github.com.
Tip: try unsetting git's proxy.
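A quick way to check for, and clear, a stale git proxy, which is a common cause of "Failed to connect to github.com port 443" inside Docker builds; the docker build flags at the end are a sketch in case the proxy leaks in through build args:

```shell
# Show any proxy configured for git (prints nothing if unset)
git config --global --get http.proxy
git config --global --get https.proxy

# Unset them if they point at a host unreachable from the build network
# (--unset exits non-zero when the key is absent, hence the || true)
git config --global --unset http.proxy  || true
git config --global --unset https.proxy || true

# Proxies can also come in via Docker build args; override them explicitly
docker build --build-arg http_proxy= --build-arg https_proxy= -t tpat .
```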
from tpat.
It may be a configuration problem inside the Docker image. I cloned the repository outside the container, then ran the remaining commands from the Dockerfile inside the running container, and that worked. About the "Plugin Compiler Env" section of the docs:
"And export TensorRT/include to Environment Variables: CPLUS_INCLUDE_PATH and C_INCLUDE_PATH"
How is this supposed to be set? Running dpkg -l | grep TensorRT inside the container shows the following output:
ii graphsurgeon-tf 7.1.2-1+cuda11.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 7.1.2-1+cuda11.0 amd64 TensorRT binaries
ii libnvinfer-dev 7.1.2-1+cuda11.0 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 7.1.2-1+cuda11.0 amd64 TensorRT plugin libraries
ii libnvinfer-plugin7 7.1.2-1+cuda11.0 amd64 TensorRT plugin libraries
ii libnvinfer7 7.1.2-1+cuda11.0 amd64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.1.2-1+cuda11.0 amd64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.1.2-1+cuda11.0 amd64 TensorRT ONNX libraries
ii libnvparsers-dev 7.1.2-1+cuda11.0 amd64 TensorRT parsers libraries
ii libnvparsers7 7.1.2-1+cuda11.0 amd64 TensorRT parsers libraries
ii python3-libnvinfer 7.1.2-1+cuda11.0 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 7.1.2-1+cuda11.0 amd64 Python 3 development package for TensorRT
ii uff-converter-tf 7.1.2-1+cuda11.0 amd64 UFF converter for TensorRT package
So TensorRT is already installed inside the container, right? Then how should these two variables be set?
from tpat.
If you use the Dockerfile, the FROM nvidia image already has it. If you set up your own environment, these variables are needed to compile the plugin .cu and .h files. Of course, if you modify trt_path in python/trt_plugin/Makefile, you don't need to add the headers to the C and C++ include paths either.
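Concretely, since libnvinfer-dev from the dpkg listing above installs the TensorRT headers under the system include path, exporting the two variables can look like the sketch below. The header location is an assumption for a .deb install; point TRT_DIR at your TensorRT/include directory instead for a tarball install:

```shell
# Where libnvinfer-dev places NvInfer.h on a .deb install (assumption;
# use your TensorRT/include directory for a tarball install)
TRT_DIR=/usr/include/x86_64-linux-gnu

# Prepend to the compilers' include search paths
export CPLUS_INCLUDE_PATH="$TRT_DIR${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}"
export C_INCLUDE_PATH="$TRT_DIR${C_INCLUDE_PATH:+:$C_INCLUDE_PATH}"

# Sanity check: the compiler should now find the TensorRT headers
echo '#include <NvInfer.h>' | g++ -x c++ -fsyntax-only - 2>/dev/null \
  && echo "NvInfer.h found" || echo "not found; check TRT_DIR"
```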
from tpat.