
neo-ai-dlr's People

Contributors

apivovarov, ashishgupta023, aws-vrnatham, cloudmanx, cnv1989, dependabot[bot], eddiecho, happyamazonian, harishneit, hcho3, jianzhong-xu, jpeddicord, kkoppolu1, liangfu, melty-chocolate, minlu1021, samgig, samskalicky, trevor-m, tusharkanekidey, tyleradavis, wuchih-amazon, wweic, ymwangg, ziyu-guo


neo-ai-dlr's Issues

No valid Treelite model files found under folder (XGBOOST)

I've created a SageMaker XGBoost model and compiled it for Neo using:

compiled_model = xgb.compile_model(target_instance_family='rasp3b', 
                                   role=role,
                                   framework='xgboost',
                                   framework_version='0.7',
                                   output_path=output_path)

I am deploying this model to a Raspberry Pi 3 via Greengrass for inference using DLR. When I use the following Python code on the device:

model = DLRModel('model', input_shape, output_shape, device)

I get an error "No valid Treelite model files found under folder". My "model" directory has a single file called "compiled_model.so".

What steps am I missing?
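One way to narrow this down is to check whether the compiled shared library is itself a valid Treelite model, independently of DLR. A minimal diagnostic sketch, assuming the standalone Treelite runtime package is installed (the Predictor interface shown is the older treelite_runtime API and may differ between versions; the library path comes from the folder contents above):

import treelite_runtime

# Load the Neo-compiled tree library directly, bypassing DLR.
# If this also fails, the problem is the artifact itself rather than the folder layout DLR expects.
predictor = treelite_runtime.Predictor(libpath='model/compiled_model.so', verbose=True)
print(predictor.num_feature)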

Building on Mac OS X

Hello,
I am following the steps at https://neo-ai-dlr.readthedocs.io/en/latest/install.html#building-on-mac-os-x to build DLR on my Mac. I am trying to install version 1.3.0 of DLR; building on the latest branch also fails with the same error.

The make -j4 command fails with the following output:

[ 25%] Linking CXX shared library libgtest.dylib
ld: unknown option: --exclude-libs
collect2: error: ld returned 1 exit status
make[2]: *** [googletest/googletest-build/googlemock/gtest/libgtest.dylib] Error 1
make[1]: *** [googletest/googletest-build/googlemock/gtest/CMakeFiles/gtest.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
......
[ 75%] Linking CXX static library libtvm_runtime_static.a
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: libtvm_runtime_static.a(rpc_pipe_impl.cc.o) has no symbols
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: libtvm_runtime_static.a(rpc_pipe_impl.cc.o) has no symbols
[ 75%] Built target tvm_runtime_static

The behavior is the same with any GCC version above 8. The CMake version I am using is 3.18.4.

After explicitly setting export CMAKE_SYSTEM_NAME=darwin, the build continues to 81% and then fails with the same error:

ld: unknown option: --exclude-libs
collect2: error: ld returned 1 exit status
make[2]: *** [googletest/googletest-build/googlemock/gtest/libgtest.dylib] Error 1
make[1]: *** [googletest/googletest-build/googlemock/gtest/CMakeFiles/gtest.dir/all] Error 2 

Support converting Neo output to ONNX format

When optimizing ONNX models with Neo, the compilation generates .params and .json files. When I try to convert them back into ONNX, I get the error below:

MXNetError: Failed loading Op batch_norm0_div of type tvm_op: [10:30:54] src/core/op.cc:55: Check failed: op != nullptr Operator tvm_op is not registered

Stack trace returned 10 entries:
[bt] (0) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(dmlc::StackTrace()+0x44) [0x7ff170587394]
[bt] (1) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(nnvm::Op::Get(std::string const&)+0x345) [0x7ff17387ce75]
[bt] (2) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3945a25) [0x7ff1738b9a25]
[bt] (3) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3945e14) [0x7ff1738b9e14]
[bt] (4) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(dmlc::JSONObjectReadHelper::ReadAllFields(dmlc::JSONReader*)+0x110) [0x7ff1738bdcd0]
[bt] (5) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3942e6c) [0x7ff1738b6e6c]
[bt] (6) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x394396f) [0x7ff1738b796f]
[bt] (7) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(std::_Function_handler<nnvm::Graph (nnvm::Graph), nnvm::Graph (*)(nnvm::Graph)>::_M_invoke(std::_Any_data const&, nnvm::Graph)+0x131) [0x7ff1731caa41]
[bt] (8) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(nnvm::ApplyPasses(nnvm::Graph, std::vector<std::string, std::allocator<std::string> > const&)+0x52d) [0x7ff1738800fd]
[bt] (9) /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::LoadLegacyJSONPass(nnvm::Graph)+0x19f) [0x7ff1731c566f]

DLR: DLRGetLastError: /neo-ai-dlr/3rdparty/tvm/src/runtime/module_util.cc:35: Check failed: f != nullptr Loader of _lib(module.loadbinary__lib) is not presented.

Hi,
I get this error when I am trying to load a model in an Android app; the model was compiled with:

  1. AWS SageMaker Neo for qc603/qc605
  2. TVM, compiled for OpenCL and Android

The only models that I was able to run with your runtime on an Android device are those that were compiled with TVM for ARM.

"DLR: DLRGetLastError: [05:18:45] /workplace/pivovaa/neo-ai-dlr/3rdparty/tvm/src/runtime/module_util.cc:35: Check failed: f != nullptr Loader of _lib(module.loadbinary__lib) is not presented."

Build failure on aarch64 platform

I got the following build error on my aarch64 platform. May I know whether you have tried it on an aarch64 platform?

root@aad2441cbcdb5598851a5f655d0a70f7:~/neo-ai-dlr/build# make -j16
[ 0%] Building CXX object CMakeFiles/objdlr.dir/src/dlr.cc.o
[ 8%] Built target objtreelite_runtime
[ 33%] Built target dmlc
[ 91%] Built target tvm_runtime_static
Scanning dependencies of target treelite_runtime_static
[ 91%] Linking CXX static library ../../../../../3rdparty/neo-ai-treelite/runtime/native/lib/libtreelite_runtime_static.a
[ 91%] Built target treelite_runtime_static
/root/neo-ai-dlr/src/dlr.cc: In member function 'void dlr::DLRModel::SetInput(const char*, const int64_t*, float*, int)':
/root/neo-ai-dlr/src/dlr.cc:248:14: error: 'accumulate' is not a member of 'std'
std::accumulate(shape, shape + dim, 1, std::multiplies<int64_t>());
^~~~~~~~~~
/root/neo-ai-dlr/src/dlr.cc:249:34: error: 'accumulate' is not a member of 'std'
int64_t expected_size = std::accumulate(
^~~~~~~~~~
CMakeFiles/objdlr.dir/build.make:62: recipe for target 'CMakeFiles/objdlr.dir/src/dlr.cc.o' failed
make[2]: *** [CMakeFiles/objdlr.dir/src/dlr.cc.o] Error 1
CMakeFiles/Makefile2:107: recipe for target 'CMakeFiles/objdlr.dir/all' failed
make[1]: *** [CMakeFiles/objdlr.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

DLR inference issue on raspberry pi (custom trained model)

System Information

  • Framework (e.g. MXNET): MXNET
  • Framework Version: latest
  • Python Version: 2.7
  • CPU or GPU: CPU
  • Python SDK Version: latest
  • Are you using a custom image: no

Describe the problem

  1. Trained a custom bird classification model using this notebook: https://github.com/gabehollombe-aws/jupyter-notebooks

  2. Followed the steps in https://aws.amazon.com/blogs/aws/amazon-sagemaker-neo-train-your-machine-learning-models-once-run-them-anywhere/ to convert an MXNet ResNet-50 model to one for the Raspberry Pi 3B.

  3. Testing image: /dlr-1.0-py2-armv7l/tests/dog.npy
    It runs without issue if I use the resnet50 model under /dlr-1.0-py2-armv7l/models.

  4. If I use my own custom-trained model, probabilities of zero are returned for bird images as well as for dog.npy. See the code I used below.

  5. As you can see at https://github.com/jwpwhite/birdpi-rp-dlr/blob/master/neotest.py#L40, this is how I'm trying to run the inference, but as mentioned it gives probabilities of zero when using my own custom-trained model linked below.

Code: https://github.com/jwpwhite/birdpi-rp-dlr
Default model: resnet50 model included in /home/pi/dlr-1.0-py2-armv7l/models/resnet50
Custom model: https://www.dropbox.com/s/nc35wwuhemcrfjp/model-rasp3b.tar.gz?dl=0

Model inference results on Inf1 inconsistent with expected results

I was able to compile, deploy, and run inference on Inf1 for a model created with MXNet 1.6.0. I am using a pre-trained GluonCV I3D ResNet model: https://gluon-cv.mxnet.io/api/model_zoo.html#gluoncv.model_zoo.i3d_resnet50_v1_kinetics400.
The inference does find the right predicted class; however, the probability value is only 20.1%. If I run this model standalone as in https://gluon-cv.mxnet.io/build/examples_action_recognition/demo_i3d_kinetics400.html, it gives the same predicted class but with a probability of 99.9%.
So clearly the Neo compilation is not getting it right here? How can we explain this? Thanks.

Build error on Raspberry Pi

Board: Raspberry Model 3 B+
OS: Raspbian

model name	: ARMv7 Processor rev 4 (v7l)
BogoMIPS	: 38.40
Features	: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xd03
CPU revision	: 4

make -j4 fails at:

[ 28%] Building CXX object 3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/src/predictor.cc.o
In file included from /home/pi/Downloads/neo-ai-dlr/3rdparty/treelite/runtime/native/src/predictor.cc:9:0:
/home/pi/Downloads/neo-ai-dlr/3rdparty/treelite/include/treelite/common.h:372:15: error: redefinition of ‘T treelite::common::TextToNumber(const string&) [with T = unsigned int; std::__cxx11::string = std::__cxx11::basic_string<char>]’
 inline size_t TextToNumber(const std::string& str) {
               ^~~~~~~~~~~~
/home/pi/Downloads/neo-ai-dlr/3rdparty/treelite/include/treelite/common.h:357:17: note: ‘T treelite::common::TextToNumber(const string&) [with T = unsigned int; std::__cxx11::string = std::__cxx11::basic_string<char>]’ previously declared here
 inline uint32_t TextToNumber(const std::string& str) {
                 ^~~~~~~~~~~~
3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/build.make:158: recipe for target '3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/src/predictor.cc.o' failed
make[2]: *** [3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/src/predictor.cc.o] Error 1
CMakeFiles/Makefile2:695: recipe for target '3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/all' failed
make[1]: *** [3rdparty/treelite/runtime/native/CMakeFiles/objtreelite_runtime.dir/all] Error 2

It then continues building other targets until:

[ 92%] Linking CXX static library libtvm_runtime_static.a
[ 92%] Built target tvm_runtime_static
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

ERROR error in DLRModel instantiation TVMError: key 0 is not supported

Hello,

I am building a FastAI model with SageMaker Neo for my Raspberry Pi 4 Model B, and I am trying to deploy it with AWS Greengrass. DLR is installed directly on my RPi 4 using this link from this page.

If I connect to my RPI4 with SSH and load the model with the python interpreter, it works.

pi@raspberrypi:~/tmp $ python3
Python 3.7.3 (default, Dec 20 2019, 18:57:59) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlr
>>> model = dlr.DLRModel('./cpu')
>>> model
<dlr.api.DLRModel object at 0xb66b1490>
>>> exit()

Unfortunately, the same model, downloaded from S3 in the Lambda deployed by AWS Greengrass, gives me this error.

[2020-05-26T19:09:49.82+02:00][INFO]-main.py:48,Starting main.
[2020-05-26T19:09:52.2+02:00][INFO]-main.py:55,Webcam successfully initialized, triggering the processing.
[2020-05-26T19:09:52.2+02:00][INFO]-main.py:26,Starting webcam processing and inference.
[2020-05-26T19:09:52.2+02:00][INFO]-main.py:30,Initializing the process with image sampling number (15), face min confidence (0.9), mask min confidence (0.9).
[2020-05-26T19:09:52.201+02:00][INFO]-inference_wrapper.py:24,The model archive does not exist at /tmp/model.zip. Downloading from S3
[2020-05-26T19:09:58.352+02:00][INFO]-inference_wrapper.py:30,Unzipping done in /tmp
[2020-05-26T19:09:58.352+02:00][INFO]-inference_wrapper.py:32,Loading the face detection model.
[2020-05-26T19:09:58.435+02:00][INFO]-inference_wrapper.py:38,Loading the mask detection model.
[2020-05-26T19:09:58.981+02:00][ERROR]-init.py:1037,2020-05-26 19:09:58,831 ERROR error in DLRModel instantiation TVMError: key 0 is not supported
[2020-05-26T19:09:58.981+02:00][ERROR]-Stack trace:
[2020-05-26T19:09:58.981+02:00][ERROR]- File "/home/pi/workplace/neo-ai-dlr/3rdparty/tvm/src/runtime/graph/graph_runtime.h", line 405
[2020-05-26T19:09:58.981+02:00][ERROR]- [bt] (0) /usr/local/dlr/libdlr.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x38) [0xaa4f8338]
[2020-05-26T19:09:58.981+02:00][ERROR]- [bt] (1) /usr/local/dlr/libdlr.so(tvm::runtime::GraphRuntime::Load(dmlc::JSONReader*)+0x1c78) [0xaa5a3d6c]
[2020-05-26T19:09:58.981+02:00][ERROR]- [bt] (2) /usr/local/dlr/libdlr.so(tvm::runtime::GraphRuntime::Init(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, tvm::runtime::Module, std::vector<DLContext, std::allocator > const&)+0x190) [0xaa591be0]
[2020-05-26T19:09:58.981+02:00][ERROR]- [bt] (3) /usr/local/dlr/libdlr.so(dlr::TVMModel::SetupTVMModule(std::vector<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > >)+0xc88) [0xaa50bb64]
[2020-05-26T19:09:58.981+02:00][ERROR]- [bt] (4) /usr/local/dlr/libdlr.so(CreateDLRModel+0x1bd4) [0xaa4f4dc0]
[2020-05-26T19:09:58.981+02:00][ERROR]-Traceback (most recent call last):
[2020-05-26T19:09:58.981+02:00][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr/api.py", line 82, in init
[2020-05-26T19:09:58.981+02:00][ERROR]- self._impl = DLRModelImpl(model_path, dev_type, dev_id)
[2020-05-26T19:09:58.981+02:00][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr/dlr_model.py", line 111, in init
[2020-05-26T19:09:58.981+02:00][ERROR]- c_int(dev_id)))
[2020-05-26T19:09:58.981+02:00][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr/dlr_model.py", line 26, in _check_call
[2020-05-26T19:09:58.981+02:00][ERROR]- raise DLRError(_LIB.DLRGetLastError().decode('ascii'))
[2020-05-26T19:09:58.981+02:00][ERROR]-dlr.dlr_model.DLRError: TVMError: key 0 is not supported

I am trying to find out what this error means to see if I can fix it myself. Would you have any idea why I get this error in the Greengrass Lambda but not in the Python interpreter?

Thank you.

Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json

Trying to compile neo-ai-dlr as per documentation.

When trying to run python load_and_run_tvm_model.py, it fails on:

Preparing model artifacts for resnet18_v1 ...
Preparing model artifacts for mobilenet_v1_0.75_224_quant ...
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 10
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 9
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 8
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 7
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 6
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 5
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 4
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 3
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 2
Error downloading https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/mobilenet_v1_0.75_224_quant/ec2_a1.json on try 1
Traceback (most recent call last):
File "/home/tim/Downloads/neo-ai-dlr/tests/python/integration/test_utils.py", line 45, in get_models
urlretrieve(s3_path, local_path)
File "/usr/lib/python3.7/urllib/request.py", line 247, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/usr/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.7/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.7/urllib/request.py", line 641, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.7/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/lib/python3.7/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

Support loading the TVM model from one so file

Currently, when using relay.build, TVM shows this DeprecationWarning: "legacy graph runtime behavior of producing json / lib / params will be removed in the next release. Please see documents of tvm.contrib.graph_runtime.GraphModule for the new recommended usage." But in DLR's backend detection, the params file is required to create a TVM model:

if (EndsWith(filename, ".params")) {
  return DLRBackend::kTVM;
}

Does DLR support reading a single .so file to create a TVM model?
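For context, the single-file flow the deprecation warning points to looks roughly like this on the TVM side (a sketch of the TVM 0.7-era API, not the DLR API; mod and params are placeholders for the Relay module and weights):

import tvm
from tvm import relay
from tvm.contrib import graph_runtime

# relay.build now returns a factory module that bundles graph, parameters and compiled code
lib = relay.build(mod, target="llvm", params=params)
lib.export_library("compiled_model.so")          # everything ends up in a single .so

# Loading back: no separate .json / .params files are needed
loaded = tvm.runtime.load_module("compiled_model.so")
module = graph_runtime.GraphModule(loaded["default"](tvm.cpu(0)))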

TypeError: __init__() takes 2 to 4 positional arguments but 5 were given

System Information

Device: Raspberry Pi 3 Model B Rev 1.2, ARMv7 Processor rev 4 (v7l)
Framework (e.g. MXNET): PyTorch
Framework Version: latest
Python Version: 3.8.0
CPU or GPU: CPU
Python SDK Version: latest
Are you using a custom image: no

Describe the problem

I was trying to load my compiled PyTorch model on my Raspberry Pi 3 by following:

https://aws.amazon.com/blogs/aws/amazon-sagemaker-neo-train-your-machine-learning-models-once-run-them-anywhere/

I tried to install the DLR using this script running in a pyenv virtualenv with Python 3.4.10:

https://github.com/neo-ai/neo-ai-dlr/tree/master/install/dlr-1.0-py2.py3-armv7l

This failed because the egg is no longer available, and there is no easy_install3 either. So I installed DLR from:

https://neo-ai-dlr.readthedocs.io/en/latest/install.html

using:

https://s3-us-west-2.amazonaws.com/neo-ai-dlr-release/v1.0/pi-armv7l-raspbian4.14.71-glibc2_24-libstdcpp3_4/dlr-1.0-py2.py3-none-any.whl

for rasp2b.

When running the test-dlr.py in:

https://github.com/neo-ai/neo-ai-dlr/blob/master/install/dlr-1.0-py2.py3-armv7l/test-dlr.py

I encountered the following error:

TypeError: __init__() takes 2 to 4 positional arguments but 5 were given

at the line: model = DLRModel(model_path, input_shape, output_shape, device)

It doesn't seem to be able to take the input_shape and output_shape variables as arguments.
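For comparison, newer dlr releases read the input and output shapes from the compiled artifacts, so the constructor only takes the model path, device type and device id. A minimal sketch assuming that newer API (the model path and dummy input below are placeholders):

import numpy as np
from dlr import DLRModel

model = DLRModel('/home/pi/compiled_model', dev_type='cpu')   # no input/output shape arguments
print(model.get_input_names())                                # check the expected input name

x = np.zeros((1, 3, 224, 224), dtype=np.float32)              # placeholder input
out = model.run({model.get_input_names()[0]: x})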

[RFC] Support alternative runtimes

I'd like to discuss how DLR can support alternative runtimes, e.g. TensorFlow, TensorFlow Lite, etc.

Currently we have the DLRModel class, which accepts model_path and device parameters.
We can use the model file format to determine which runtime should be used, e.g. if the model is represented by a .pb file, that indicates DLR needs to use the TensorFlow runtime.

In order to support the TensorFlow runtime while preserving the current API, we can do the following refactoring:

  • add an IDLRModel abstract class with @abc.abstractmethod run(self, input_data) and other "public" methods
  • move all of the DLRModel implementation to a DLRModelImpl class
  • make DLRModelImpl a subclass of IDLRModel
  • add a TFModelImpl(IDLRModel) class which encapsulates the details of running TensorFlow models
  • make DLRModel a wrapper on top of the particular implementation; __init__ will decide which impl to create based on the format of the model files

The diagram and the code below illustrate the idea

Diagram

(class diagram of DLRModel and its implementations)

Code Example

import abc

# Interface
class IDLRModel:
    __metaclass__=abc.ABCMeta

    @abc.abstractmethod
    def get_input_names(self):
        return


    @abc.abstractmethod
    def get_input(self, name, shape=None):
        return


    @abc.abstractmethod
    def run(self, input_data):
        return

# Wrapper class
class DLRModel(IDLRModel):
    def __init__(self, model_path, device):
        if model_path.endswith(".pb"):
            self._impl = TFModelImpl(model_path, device)
        else:
            self._impl = DLRModelImpl(model_path, device) 


    def run(self, input_data):
        return self._impl.run(input_data)


# Current DLRModel code will be moved to  DLRModelImpl
class DLRModelImpl(IDLRModel):
    def __init__(self, model_path, device):
        self.model_path = model_path
        self.device = device


    def run(self, input_data):
        return "TVM/Treelite model run {}".format(input_data)


# New class which support .pb models execution on tensorflow runtime
class TFModelImpl(IDLRModel):
    def __init__(self, model_path, device):
        self.model_path = model_path
        self.device = device


    def run(self, input_data):
        # import tensorflow as tf
        # res = sess.run(output_tensors, feed_dict={input_tensor: np_input})
        # return res
        return "TF model run {}".format(input_data)

Test run

# Test original DLR models
d = DLRModel("mymodel", "cpu")
print(type(d))
print(d.run("test"))

# Test TF models
d = DLRModel("mymodel.pb", "cpu")
print(type(d))
print(d.run("test"))

<class '__main__.DLRModel'>
TVM/Treelite model run test
<class '__main__.DLRModel'>
TF model run test

Add support for AlphaPose model

I am trying to implement AlphaPose on Android. According to this repo, I need a compiled .so file along with the .json and .params files. I am looking to obtain the shared object file (with its json and params) for the GluonCV AlphaPose model. If anyone has already compiled AlphaPose, please share it; or, if it is already available in DLR, please share the DLR model name, like the names provided for MobileNet and the other nets in the repo. Otherwise, please guide me on how to obtain it myself.

Loading DLR models compiled with Sagemaker Neo

Hi,

I've built the Neo runtime in a CentOS Docker container based on ci_build.

I am getting an error when performing inference on my custom model which I compiled with SageMaker Neo targeting the ml_m4 instance family:

  File "/root/.local/lib/python2.7/site-packages/dlr-1.0-py2.7.egg/dlr/api.py", line 115, in __init__
    c_int(dev_id)))
  File "/root/.local/lib/python2.7/site-packages/dlr-1.0-py2.7.egg/dlr/api.py", line 72, in _check_call
    raise DLRError(_LIB.DLRGetLastError().decode('ascii'))
dlr.api.DLRError: [06:11:18] /workspace/3rdparty/tvm/src/runtime/dso_module.cc:93: Check failed: lib_handle_ != nullptr Failed to load dynamic shared library compiled/model.so /lib64/libc.so.6: version `GLIBC_2.14' not found (required by compiled/model.so)

I am able to successfully load and run inference on the test models in this project. Is there a different instance family I should target, or another way for me to compile my MXNet model so that it can be loaded by DLR?

Thanks,
Julian.

Can't install on Windows: error: can't copy '..\build\lib\libdlr.so': doesn't exist or not a regular file

I ran the following command after python setup.py build completed successfully:

python setup.py install

I get the following error

D:\ProgramData\Anaconda3\envs\tfod\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'root_script_source_version'
warnings.warn(msg)
D:\ProgramData\Anaconda3\envs\tfod\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'default_python'
warnings.warn(msg)
D:\ProgramData\Anaconda3\envs\tfod\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'test_command'
warnings.warn(msg)
D:\ProgramData\Anaconda3\envs\tfod\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'doc_command'
warnings.warn(msg)
running install
running bdist_egg
running egg_info
writing dlr.egg-info\PKG-INFO
writing dependency_links to dlr.egg-info\dependency_links.txt
writing requirements to dlr.egg-info\requires.txt
writing top-level names to dlr.egg-info\top_level.txt
reading manifest file 'dlr.egg-info\SOURCES.txt'
writing manifest file 'dlr.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
installing package data to build\bdist.win-amd64\egg
running install_data
error: can't copy '..\build\lib\libdlr.so': doesn't exist or not a regular file

SageMaker Neo compilation produces different models for different AWS regions

Hi,

I compiled an object detection model in the Frankfurt and Ohio regions, but the compiled model files are different, and I could only run the model from the Frankfurt region on my target device. I used the following parameters for model compilation:

target_device = Jetson Nano
Framework = MXNet
input = {"data":[1,3,512,512]}

The compiled models differ: the model from the Frankfurt region is 42.5 MB, whereas the model from the Ohio region is 91.9 MB. The file details from both regions are attached below.

Frankfurt region: (screenshot: model_files_frankfurt)

Ohio region: (screenshot: model_files_ohio)

Apart from compiled.meta, the other files have different sizes. DLR version 1.2 was used to run both models, and the files dlr.h and libdlr.so were removed from the compiled models. When I tried to run the compiled model from the Ohio region on the Jetson Nano, the following error occurred:

ERROR: TVMError: Check failed: f != nullptr: Loader of metadata(runtime.module.loadbinary_metadata) is not presented.#012Stack trace:#012 File "/packages/neo-ai-dlr/3rdparty/tvm/src/runtime/library_module.cc", line 150#012 [bt] (0) /usr/local/dlr/libdlr.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x38) [0x7ef6d17698]#012 [bt] (1) /usr/local/dlr/libdlr.so(tvm::runtime::ProcessModuleBlob(char const*, tvm::runtime::ObjectPtr<tvm::runtime::Library>)+0x2448) [0x7ef72df4a8]#012 [bt] (2) /usr/local/dlr/libdlr.so(tvm::runtime::CreateModuleFromLibrary(tvm::runtime::ObjectPtr<tvm::runtime::Library>)+0x1d0) [0x7ef72e0480]#012 [bt] (3) /usr/local/dlr/libdlr.so(+0x244bb8) [0x7ef6d3fbb8]#012 [bt] (4) /usr/local/dlr/libdlr.so(tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1a0) [0x7ef72e5328]#012 [bt] (5) /usr/local/dlr/libdlr.so(dlr::TVMModel::SetupTVMModule(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)+0x1e30) [0x7ef6d70e98]#012 [bt] (6) /usr/local/dlr/libdlr.so(CreateDLRModel+0x1428) [0x7ef6d46a88]

Why do the compiled models differ based on AWS regions? How can I solve it?
Thanks in advance.

DLR Library (libdlr.so) could not be loaded

Hardware: Jetson Nano
DLR version: 1.2

Hi,
when I compile a model with SageMaker Neo nowadays, I get the following error:

Aug 27 08:29:05 darkknight 
neo[13481]: [File : run.py, Function : main, Line : 142] ERROR: DLR library (libdlr.so) could not be loaded.
Likely causes:
* OpenMP runtime is not installed (vcomp140.dll or libgomp-1.dll for Windows, libgomp.so for UNIX-like OSes)
* You are running 32-bit Python on a 64-bit OS
Error message(s): libnvinfer.so.5: cannot open shared object file: No such file or directory

On my host system I am using DLR 1.2 (built from source), and I think SageMaker Neo is using a newer version, because in my newly compiled model tar file I find the following new files:

  • dlr.h
  • libdlr.so

I don't have any problem running older models (without dlr.h and libdlr.so) on my host.
If I remove the two mentioned files from my newly compiled models, the error doesn't occur and they run successfully.
Is it somehow possible to force SageMaker Neo not to include the files, or to use an older version? Do you have any other ideas how to solve it?
Thanks in advance!

dlr.dlr_model.DLRError: TVMError: Check failed: read_size == expected_size

Hi,

I am following this tutorial: https://docs.amazonaws.cn/en_us/greengrass/latest/developerguide/ml-dlc-console.html, using a Jetson Nano. Greengrass is installed correctly, and DLR and OpenCV are built from source using CMake. The local Lambda function can be triggered, but I see the error below in the Lambda function log. Could anyone help with this issue?

[2020-11-17T14:34:05.231Z][ERROR]- return self._impl.run(input_values)
[2020-11-17T14:34:05.231Z][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/dlr_model.py", line 466, in run
[2020-11-17T14:34:05.231Z][ERROR]- self._set_input(self.input_names[0], input_values)
[2020-11-17T14:34:05.231Z][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/dlr_model.py", line 353, in _set_input
[2020-11-17T14:34:05.231Z][ERROR]- c_int(len(shape))))
[2020-11-17T14:34:05.231Z][ERROR]- File "/usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/dlr_model.py", line 185, in _check_call
[2020-11-17T14:34:05.231Z][ERROR]- raise DLRError(self._lib.DLRGetLastError().decode('ascii'))
[2020-11-17T14:34:05.231Z][ERROR]-dlr.dlr_model.DLRError: TVMError: Check failed: read_size == expected_size (5760000 vs. 150528) : Mismatch found in input data size. Value read: 5760000, Expected: 150528
[2020-11-17T14:34:05.231Z][ERROR]-Stack trace:
[2020-11-17T14:34:05.231Z][ERROR]- File "/home/nvidia/neo-ai-dlr/src/dlr_tvm.cc", line 172
[2020-11-17T14:34:05.231Z][ERROR]- [bt] (0) /usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/libdlr.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x88) [0x7f65fe8ef0]
[2020-11-17T14:34:05.231Z][ERROR]- [bt] (1) /usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/libdlr.so(dlr::TVMModel::SetInput(char const*, long const*, void const*, int)+0x488) [0x7f66048ea8]
[2020-11-17T14:34:05.231Z][ERROR]- [bt] (2) /usr/local/lib/python3.7/dist-packages/dlr-1.5.0-py3.7.egg/dlr/libdlr.so(SetDLRInput+0x20) [0x7f65ffca50]
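The numbers in the failed check are a hint: 150528 = 1 x 3 x 224 x 224, while 5,760,000 matches a raw camera frame (e.g. 1600 x 1200 x 3), so the frame most likely needs to be resized and laid out as NCHW before calling run. A rough preprocessing sketch (the frame source, the input name 'data' and the normalization are assumptions; the real values depend on the tutorial's model and can be checked with model.get_input_names()):

import cv2
import numpy as np

frame = cv2.imread('frame.jpg')                       # placeholder for the captured frame
img = cv2.resize(frame, (224, 224))                   # match the model's spatial size
img = img[:, :, ::-1].astype('float32')               # BGR -> RGB; most models also normalize here
img = np.transpose(img, (2, 0, 1))[np.newaxis, ...]   # NCHW, shape (1, 3, 224, 224) = 150528 values

out = model.run({'data': img})                        # 'data' is assumed; verify with model.get_input_names()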

Unable to load model on AWS P3 target

I get the following message when I try to load a model:

$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlr
>>> m=dlr.DLRModel('inception_v3', dev_type='gpu')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/neo-ai-dlr/python/dlr/api.py", line 66, in __init__
    from .dlr_model import DLRModelImpl
  File "/home/ubuntu/neo-ai-dlr/python/dlr/dlr_model.py", line 64, in <module>
    _LIB = _load_lib()
  File "/home/ubuntu/neo-ai-dlr/python/dlr/dlr_model.py", line 59, in _load_lib
    'Error message(s): {}\n'.format(os_error_list))
dlr.dlr_model.DLRError: DLR library (libdlr.so) could not be loaded.
Likely causes:
  * OpenMP runtime is not installed (vcomp140.dll or libgomp-1.dll for Windows, libgomp.so for UNIX-like OSes)
  * You are running 32-bit Python on a 64-bit OS
Error message(s): ['libcudart.so.10.0: cannot open shared object file: No such file or directory']

The file libdlr.so appears to depend on multiple versions of CUDA, namely, 9.0 and 10.0:

$ ldd lib/libdlr.so 
	linux-vdso.so.1 =>  (0x00007ffd66dfb000)
	libcudart.so.10.0 => /usr/local/cuda/lib64/libcudart.so.10.0 (0x00007f4d54db2000)
	libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f4d53c5b000)
	libnvinfer.so.4 => /home/ubuntu/TensorRT-4.0.1.6/lib/libnvinfer.so.4 (0x00007f4d4df33000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4d4dd2f000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f4d4d9ad000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4d4d6a4000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4d4d48e000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4d4d0c4000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f4d553a4000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4d4cea7000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f4d4cc9f000)
	libnvidia-fatbinaryloader.so.418.40.04 => /usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.418.40.04 (0x00007f4d4ca51000)
	libcudnn.so.7 => /usr/local/cuda/lib64/libcudnn.so.7 (0x00007f4d37c2a000)
	libcublas.so.9.0 => not found
	libcudart.so.9.0 => not found

Since /usr/local/cuda points to /usr/local/cuda-10.0, the latter two dependencies libcublas.so.9.0 and libcudart.so.9.0 cannot be detected. The only fix available is to copy these two files into /usr/local/cuda/lib64 and run ldconfig.

Slow first inference on jetson nano with dlr 1.2

Hi,

I am currently working with the jetson nano and dlr 1.2.

I have a ssd-mobilenet-v1 net for object detection and the first inference takes up 5 minutes.

2020-07-22 14:06:56,178 INFO Could not find libdlr.so in model artifact. Using dlr from /usr/local/dlr/libdlr.so
2020-07-22 14:06:56,178 INFO Could not find libdlr.so in model artifact. Using dlr from /usr/local/dlr/libdlr.so
[14:07:02] /packages/neo-ai-dlr/src/dlr_tvm.cc:67: Loading metadata file: ./model/compiled.meta
[14:07:23] /packages/neo-ai-dlr/src/dlr_tvm.cc:240: [json.exception.out_of_range.403] key 'Outputs' not found
[14:07:23] /packages/neo-ai-dlr/src/dlr.cc:141: Output node with index 0 was not found in metadata file!
[14:07:23] /packages/neo-ai-dlr/3rdparty/tvm/src/runtime/contrib/tensorrt/tensorrt_module.cc:80: Building new TensorRT engine for subgraph tensorrt_5
[14:11:58] /packages/neo-ai-dlr/3rdparty/tvm/src/runtime/contrib/tensorrt/tensorrt_module.cc:90: Finished building TensorRT engine for subgraph tensorrt_5

I know that most of the time in the first inference is spent on TensorRT engine compilation, but 5 minutes is still really slow.
Is there any way to speed that up, or am I maybe missing something?
Information about my system:
Embedded board: Nvidia Jetson Nano b01
OS: Jetpack 4.3
Cuda 10.0
TensorRT 6.0.1
cudnn 7.6.3

Side note: Running the same model with Jetpack 4.4 (TRT 7, cudnn 8, cuda 10.2) the first inference takes about 2 minutes.

slow inference performance on jetson tx2

Hello,
On a Jetson TX2 I am getting about 1m18.137s wall-clock time running inference with the VGG19 model. I confirmed the target was built for the Jetson TX2 and that it is running on the GPU:
"TargetDevice": "jetson_tx2"
Inference correctly identifies the image; however, some warnings are printed, and I'm not sure whether they contribute to the slowness.

[04:07:24] /root/neo-ai-dlr/3rdparty/tvm/src/contrib/subgraph/tensorrt_executor.cc:608: Tensor flatten0_output0 is an input of FullyConnected layer dense0 in TensorRT. Its ndim is 2, while TensorRT can only accept ndim >= 3. Reset ndim to 4 by expanding two trailing dims with dim size = 1.
[04:07:45] /root/neo-ai-dlr/3rdparty/tvm/src/contrib/subgraph/tensorrt_executor.cc:608: Tensor dropout0_output0 is an input of FullyConnected layer dense1 in TensorRT. Its ndim is 2, while TensorRT can only accept ndim >= 3. Reset ndim to 4 by expanding two trailing dims with dim size = 1.
[04:07:47] /root/neo-ai-dlr/3rdparty/tvm/src/contrib/subgraph/tensorrt_executor.cc:608: Tensor dropout1_output0 is an input of FullyConnected layer dense2 in TensorRT. Its ndim is 2, while TensorRT can only accept ndim >= 3. Reset ndim to 4 by expanding two trailing dims with dim size = 1.
Class: albatross, mollymawk, probability: 0.976578

Environment
Ubuntu 16.04
module 'dlr' from '/usr/local/lib/python3.5/dist-packages/dlr-1.0-py3.5.egg/dlr/init.py
TensorRT 4.1.3-1+cuda9.0
dlr compile flags: -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_TENSORRT=/usr/src/tensorrt
Model: VGG19 (MXNet Model Zoo)
Framework: MXNET

input_shape = {'data': [1, 3, 224, 224]} # A single RGB 224x224 image
output_shape = [1, 1000]                 # The probability for each one of the 1,000 classes
device = 'gpu'                           # Go, Jetson, Go!
model = DLRModel('model', device)

Perf monitor
I ran perf on the code and see that 45%+ of the time is spent in the TensorRT 4 libraries:

# Overhead  Command         Shared Object                        Symbol                                                
# ........  ..............  ...................................  ......................................................
#
    29.01%  python          libnvinfer.so.4.1.3                  [.] 0x000000000044d46c                                
    16.31%  python          libnvinfer.so.4.1.3                  [.] 0x00000000004003f4                                
     4.86%  python          libc-2.23.so                         [.] 0x0000000000078b54                                
     4.23%  python          libc-2.23.so                         [.] 0x0000000000078b6c                                
     3.38%  python          libnvinfer.so.4.1.3                  [.] 0x000000000044d470                                
     2.87%  python          [kernel.kallsyms]                    [k] el0_da                                            
     2.05%  python          [kernel.kallsyms]                    [k] clear_page                                        
     1.85%  python          libc-2.23.so                         [.] 0x0000000000078b58

Support for Android GPU

I can see that you only support CPUs on Android devices.
How can I run models on an Android GPU?

Looking for pthread.h - not found

CMakeError.log
CMakeOutput.log

I am getting the errors below

CMake Error at CMakeLists.txt:131 (add_subdirectory):
The source directory E:/Documents/Projects/neo-ai-dlr/3rdparty/neo-ai-treelite does not contain a CMakeLists.txt file.

I tried a git clone of neo-ai-treelite into that directory, but I still get this error:

-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- TVM_RUNTIME_LINKER_LIBS:
-- Configuring incomplete, errors occurred!
See also "E:/Documents/Projects/neo-ai-dlr/build/CMakeFiles/CMakeOutput.log".
See also "E:/Documents/Projects/neo-ai-dlr/build/CMakeFiles/CMakeError.log".

Not sure how to fix this

neo-ai-dlr installation error

Python 3 error while installing dlr:
Reading https://pypi.python.org/simple/ No local packages or download links found for dlr-1.0-py3.7-linux-x86_64.egg

Python 2.7 error while installing dlr:
Reading https://pypi.python.org/simple/ No local packages or download links found for dlr-1.0-py2.7-linux-x86_64.egg

I used the dlr-1.0-py2.7-linux-x86_64.egg available in the Neo runtime from the AWS console (ML inference on Greengrass).

1)What about DLR support for Python 3.5?

2) We ran the pre-compiled model from the Neo runtime (link: https://console.aws.amazon.com/iot/home?region=us-east-1#/software/machine-learning ) DLR for Intel Atom (Apollo Lake). We were able to run inference on an Intel Core i5 and an Intel Atom (Bay Trail) by changing the target device from "opencl" to "cpu" (in both cases with the open-source neo-ai-dlr), but the accuracy was very poor (for all test images the labels were the same, with probability zero).
So, is it that the model is optimized for the Intel Atom Apollo Lake family and does not work properly on other platforms, or do any other configurations need to be done on our end?

3) Is the DLC fully configured in neo-ai? I am unable to import tvm and nnvm in the sample script (link: https://github.com/neo-ai/neo-ai-dlr/blob/master/tests/test_dlc_python_test.py ); it throws "module not found".
Is there any additional setup needed to perform optimization?

Executing load_and_run_tvm_model.py failed

Hi all,
I installed DLR with the wheel [2] from the docs [1].
Next, I did a git clone and ran python3 load_and_run_tvm_model.py to validate DLR.

However, I got an error message like this:

nick@AWS-JN:~/AI/neo-ai-dlr/tests/python/integration$ python3 load_and_run_tvm_model.py 
Preparing model artifacts for resnet18_v1 ...
Preparing model artifacts for 4in2out ...
Preparing model artifacts for assign_op ...
Traceback (most recent call last):
  File "/home/nick/AI/neo-ai-dlr/tests/python/integration/test_utils.py", line 43, in get_models
    urlretrieve(s3_path, local_path) 
  File "/usr/lib/python3.6/urllib/request.py", line 248, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.6/urllib/request.py", line 532, in open
    response = meth(req, response)
  File "/usr/lib/python3.6/urllib/request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.6/urllib/request.py", line 570, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "load_and_run_tvm_model.py", line 67, in <module>
    get_models(model_name, arch, kind='tvm')
  File "/home/nick/AI/neo-ai-dlr/tests/python/integration/test_utils.py", line 45, in get_models
    raise ValueError('Downloading of model artifacts from %s failed' % s3_path)
ValueError: Downloading of model artifacts from https://s3-us-west-2.amazonaws.com/neo-ai-dlr-test-artifacts/assign_op/ec2_a1.json failed

Have you met this problem before?

Here is my test bed information:

Hardware : Nvidia Jetson Nano
OS : Ubuntu 18.04.2 LTS
Kernel : 4.9.140-tegra
python3, pip3 : python 3.6.8
dlr : version 1.0

[1] : https://neo-ai-dlr.readthedocs.io/en/latest/install.html
[2] : https://s3-us-west-2.amazonaws.com/neo-ai-dlr-release/v1.0/jetsonnano-aarch64-cu10-ubuntu18_04-glibc2_27-libstdcpp3_4/dlr-1.0-py2.py3-none-any.whl

Install dlr python package failure

I had successfully built neo-ai-dlr, but when I tried to install it, I met the following error. Can I have your suggestion?

root@aad2441cbcdb5598851a5f655d0a70f7:~/neo-ai-dlr/install/dlr-1.0-py2.py3-cuda90-aarch64# ./install-py2.sh
python is installed
easy_install is installed
pip is installed

Installing numpy...
Collecting numpy
Downloading https://files.pythonhosted.org/packages/2b/26/07472b0de91851b6656cbc86e2f0d5d3a3128e7580f23295ef58b6862d6c/numpy-1.16.1.zip (5.1MB)
100% |################################| 5.1MB 128kB/s
Building wheels for collected packages: numpy
Running setup.py bdist_wheel for numpy ... done
Stored in directory: /root/.cache/pip/wheels/04/64/e1/283a3672c2865608968594c02a6923311f44d033bcece2683b
Successfully built numpy
Installing collected packages: numpy
Found existing installation: numpy 1.16.0
Uninstalling numpy-1.16.0:
Successfully uninstalled numpy-1.16.0
Successfully installed numpy-1.16.1

Installing dlr for python...
Searching for dlr-1.0-py2.7-linux-aarch64.egg
Reading https://pypi.org/simple/dlr-1.0-py2.7-linux-aarch64.egg/
Couldn't find index page for 'dlr-1.0-py2.7-linux-aarch64.egg' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
No local packages or working download links found for dlr-1.0-py2.7-linux-aarch64.egg
error: Could not find suitable distribution for Requirement.parse('dlr-1.0-py2.7-linux-aarch64.egg')
root@aad2441cbcdb5598851a5f655d0a70f7:~/neo-ai-dlr/install/dlr-1.0-py2.py3-cuda90-aarch64#

Sagemaker Neo Compilation for ARM64

Hi,

I need to compile a model for an ARM64-based robotics board. My board is not in the list of target instances. Can I still compile? Also, my board runs Linux kernel 4.9 with GLIBC_2.23.

A model that I could compile by choosing one of the target instances generated an ARM64 .so file, but it references a version symbol LIBC which is not found in libc.so.6:

/home/model # ldd model.so
./model.so: /lib/aarch64-linux-gnu/libc.so: version `LIBC' not found (required by ./model.so)
linux-vdso.so.1 => (0x0000007f7ddc1000)
libm.so => /lib/aarch64-linux-gnu/libm.so (0x0000007f7dca1000)
libdl.so => /lib/aarch64-linux-gnu/libdl.so (0x0000007f7dc8e000)
libc.so => /lib/aarch64-linux-gnu/libc.so (0x0000007f7db47000)
/lib/ld-linux-aarch64.so.1 (0x0000007f7dd96000)

Any help in this regard will be greatly appreciated

thanks a ton

error running DLR with target=opencl

Hi,
I compiled DLR with USE_OPENCL=ON, and when trying to run a TVM model that was compiled with the opencl target I get the following error:
TVMError: Check failed: f != nullptr: Loader of opencl(runtime.module.loadbinary_opencl) is not presented.

I am running the code on a laptop with an i7 CPU and an NVIDIA GPU.

The compilation was done with:

target = 'opencl'
target_host = "llvm"  # -device=arm_cpu

print('target:', target)
print("Compiling...")

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target, target_host, params=params)

When running clinfo:

clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 10.2.178
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics
Platform Extensions function suffix NV

Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce RTX 2070
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 440.95.01
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 36
Max clock frequency 1440MHz
Compute Capability (NV) 7.5
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 8370061312 (7.795GiB)
Error Correction support No
Max memory allocation 2092515328 (1.949GiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 1179648 (1.125MiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 268435456 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 32768x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 32
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 3
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics

NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) NVIDIA CUDA
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [NV]
clCreateContext(NULL, ...) [default] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform

ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.11
ICD loader Profile OpenCL 2.1

CreateDLRModel failed for cuda model

I used the latest TVM to compile my model for x86_64 and for cuda_x86_64 targets (CUDA 10.2, sm_70).

After that I tried to use model_peeker to check the model folders. model_peeker works fine for the x86_64 folder but fails for cuda_x86_64.

Error: Loader of _lib(module.loadbinary__lib) is not presented:

[04:38:07] /home/ubuntu/neo-ai-dlr/demo/cpp/model_peeker.cc:92: TVMError: Check failed: f != nullptr: Loader of _lib(module.loadbinary__lib) is not presented.
Stack trace:
  File "/home/ubuntu/neo-ai-dlr/3rdparty/tvm/src/runtime/library_module.cc", line 131
  [bt] (0) ./model_peeker(+0xaf5e4) [0x56423d40e5e4]
  [bt] (1) ./model_peeker(+0xb0d48) [0x56423d40fd48]
  [bt] (2) ./model_peeker(+0x3442e) [0x56423d39342e]
  [bt] (3) ./model_peeker(+0xb5b4a) [0x56423d414b4a]
  [bt] (4) ./model_peeker(+0x2fefd) [0x56423d38eefd]
  [bt] (5) ./model_peeker(+0x16a8e) [0x56423d375a8e]
  [bt] (6) ./model_peeker(+0xbebd) [0x56423d36aebd]
  [bt] (7) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7ff9b58fcb97]
  [bt] (8) ./model_peeker(+0xd97a) [0x56423d36c97a]


terminate called after throwing an instance of 'std::runtime_error'
  what():  Could not load DLR Model
Aborted (core dumped)

I got the same error when I tried to call CreateDLRModel directly:

  int device_type = 2;
  std::string input_name = "input";
  std::string  model_dir = "./mobilenet_v1_1.0_224/cuda_102_sm_70_x86_64";
  std::cout << "Loading model... " << std::endl;
  DLRModelHandle model;
  if (CreateDLRModel(&model, model_dir.c_str(), device_type, 0) != 0) {
    std::clog << DLRGetLastError() << std::endl;
    throw std::runtime_error("Could not load DLR Model");
  }

Error:

Loading model... 
TVMError: Check failed: f != nullptr: Loader of _lib(module.loadbinary__lib) is not presented.
Stack trace:
  File "/home/ubuntu/neo-ai-dlr/3rdparty/tvm/src/runtime/library_module.cc", line 131
  [bt] (0) ./libdlr.so(tvm::runtime::ImportModuleBlob(char const*, std::vector<tvm::runtime::Module, std::allocator<tvm::runtime::Module> >*)+0x21f4) [0x7f331285fb34]
  [bt] (1) ./libdlr.so(tvm::runtime::CreateModuleFromLibrary(tvm::runtime::ObjectPtr<tvm::runtime::Library>)+0x198) [0x7f3312861298]
  [bt] (2) ./libdlr.so(+0x5b6ae) [0x7f33127e46ae]
  [bt] (3) ./libdlr.so(tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x52a) [0x7f331286609a]
  [bt] (4) ./libdlr.so(dlr::TVMModel::SetupTVMModule(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)+0x1e3d) [0x7f33127e017d]
  [bt] (5) ./libdlr.so(CreateDLRModel+0x1dee) [0x7f33127c527e]
  [bt] (6) ./run-style-trans-dlr(+0xef6) [0x560aa80e9ef6]
  [bt] (7) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f3311e18b97]
  [bt] (8) ./run-style-trans-dlr(+0x104a) [0x560aa80ea04a]

terminate called after throwing an instance of 'std::runtime_error'
  what():  Could not load DLR Model
Aborted (core dumped)

Compiled models for x86_64 and cuda can be downloaded from here

Build error on macos

On trunk 35ed4fa:
neo-ai-dlr/build$ CC=gcc-9 CXX=g++-9 cmake ..
-- CMAKE_BUILD_TYPE: Release
-- Build with RPC support...
-- Build with Graph runtime support...
-- Build VTA runtime with target: sim
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
-- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
CMake Error at 3rdparty/treelite/CMakeLists.txt:47 (include):
include could not find load file:

dmlc-core/cmake/Utils.cmake

==========================================
gcc-9 is installed with brew and supports OpenMP, as the following builds without error:

gcc-9 -fopenmp /tmp/a.c

Can't compile TensorFlow .pb model with SageMaker Neo

I used SageMaker Neo to compile an InceptionV1 model. The model is a frozen model with one '.pb' file. I compressed the .pb file into a .tar.gz file and then compiled it from the SageMaker console. The compile job completes; however, when I check the compiled files in the .tar.gz, there is just one .pb file, the same as before compilation. I understand that there should be three files ('.so', '.params' and '.json') after compilation.

Where is the problem? Please help!

Here are my job settings: (screenshot)

This is the model file in .tar.gz format before compilation: (screenshot)

And this is the compiled file: (screenshot)

Amazon Linux - version `GLIBC_2.27' not found

Hi,
Pre-trained image classification models compiled using SageMaker Neo and run with DLR work just fine, but the object detection models (e.g. yolo3 and ssd512), when compiled and run on an Amazon Linux instance, error out as below:

  File "/home/ec2-user/.local/lib/python3.6/site-packages/dlr/dlr_model.py", line 185, in _check_call
    raise DLRError(self._lib.DLRGetLastError().decode('ascii'))
dlr.dlr_model.DLRError: TVMError: Check failed: lib_handle_ != nullptr: Failed to load dynamic shared library /home/ec2-user/ssd/compiled.so /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home/ec2-user/ssd/compiled.so)

The behavior is the same for Python 3.6/3.7, DLR 1.0-1.3.0, and any Amazon Linux 2 instance, as AL2 ships with GLIBC 2.26 and these compiled models cannot find GLIBC 2.27.

Both the image classification and object detection models are from the GluonCV model zoo and are compiled from the AWS SageMaker console for the platform linux / x86_64.

hash mismatch of resnet_v1.5_50-ml_c4.tar.gz file when building DLR

I am building from source for x86, with:

git clone --recursive https://github.com/neo-ai/neo-ai-dlr.git
cd neo-ai-dlr
mkdir build
cd build
cmake ..

Then, I got the following error:

...

-- Added Test: dlr_relayvm_test
-- Added Test: dlr_test
-- Added Test: dlr_treelite_test
-- Added Test: dlr_tvm_test
-- Downloading: https://neo-ai-dlr-test-artifacts.s3-us-west-2.amazonaws.com/tflite-models/cat224-3.txt to /home/denis/TEMP/neo-ai-dlr/build/cat224-3.txt
-- Downloading: https://neo-ai-dlr-test-artifacts.s3-us-west-2.amazonaws.com/test-data/street_small.npy to /home/denis/TEMP/neo-ai-dlr/build/street_small.npy
-- Downloading: https://neo-ai-dlr-test-artifacts.s3-us-west-2.amazonaws.com/compiled-models/resnet_v1.5_50-ml_c4.tar.gz to /tmp/resnet_v1.5_50-ml_c4.tar.gz
CMake Error at cmake/Utils.cmake:48 (file):
  file DOWNLOAD HASH mismatch

    for file: [/tmp/resnet_v1.5_50-ml_c4.tar.gz]
      expected hash: [447c22239e63882a2bc754db550131756373d4df]
        actual hash: [42c01a36f6ec0eea01f7a40e4217b88ffc503179]
             status: [28;"Timeout was reached"]

Call Stack (most recent call first):
  CMakeLists.txt:462 (download_file)

I changed line 49 of cmake/Utils.cmake from:

         TIMEOUT 60  # seconds

to:

         TIMEOUT 600  # seconds

and then it built correctly.
I suggest increasing this default to avoid errors on slower connections.

Unable to compile ONNX model using Sagemaker Neo

Hi,

I am unable to compile my ONNX model using the SageMaker Neo web console. I am getting the following error:

ClientError: InputConfiguration: TVM cannot convert ONNX model. Please make sure the framework you selected is correct. list index out of range

The following are the screenshots of my compilation job and the error I get. The input shape is {"input.1":[1,3,800,1280]}.

1

2

3

The ONNX model graph is:
onnx graph
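
As a sanity check when Neo reports it cannot convert an ONNX model, the graph's input names and static shapes can be listed and compared against the data input configuration (a minimal sketch; the file name is a placeholder):

import onnx

model = onnx.load("model.onnx")  # placeholder path to the exported model
onnx.checker.check_model(model)  # confirm the model itself is structurally valid

for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expect e.g. input.1 [1, 3, 800, 1280]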

WITH_HEXAGON build is broken

It looks like the recent changes in dlr_hexagon.cc cannot be built on Ubuntu 18.

$ cmake .. -DWITH_HEXAGON=1

$ make     
[  0%] Building CXX object CMakeFiles/objdlr.dir/src/dlr.cc.o
[  6%] Building CXX object CMakeFiles/objdlr.dir/src/dlr_common.cc.o
[  6%] Building CXX object CMakeFiles/objdlr.dir/src/dlr_relayvm.cc.o
[  6%] Building CXX object CMakeFiles/objdlr.dir/src/dlr_treelite.cc.o
[  6%] Building CXX object CMakeFiles/objdlr.dir/src/dlr_tvm.cc.o
[  6%] Building CXX object CMakeFiles/objdlr.dir/src/dlr_hexagon/dlr_hexagon.cc.o
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc: In constructor 'dlr::HexagonModel::HexagonModel(const string&, const DLContext&, int)':
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:185:3: error: expected ',' or ';' before 'if'
   if (!metadata.empty() && !IsFileEmpty(metadata)) {
   ^~
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:189:5: error: 'else' without a previous 'if'
   } else {
     ^~~~
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc: In member function 'virtual void dlr::HexagonModel::SetInput(const char*, const int64_t*, void*, int)':
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:234:15: error: 'GetInputIndex' was not declared in this scope
   int index = GetInputIndex(name);
               ^~~~~~~~~~~~~
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:234:15: note: suggested alternative: 'GetInputId'
   int index = GetInputIndex(name);
               ^~~~~~~~~~~~~
               GetInputId
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:243:31: error: no matching function for call to 'dlr::HexagonModel::SetInput(const char*&, const int64_t&, void*&)'
   SetInput(name, *shape, input);
                               ^
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:232:6: note: candidate: virtual void dlr::HexagonModel::SetInput(const char*, const int64_t*, void*, int)
 void HexagonModel::SetInput(const char* name, const int64_t* shape, void* input,
      ^~~~~~~~~~~~
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:232:6: note:   candidate expects 4 arguments, 3 provided
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc: At global scope:
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:252:6: error: redefinition of 'void dlr::HexagonModel::SetInput(const char*, const int64_t*, void*, int)'
 void HexagonModel::SetInput(const char* name, const int64_t* shape,
      ^~~~~~~~~~~~
/root/workplace/neo-ai-dlr/src/dlr_hexagon/dlr_hexagon.cc:232:6: note: 'virtual void dlr::HexagonModel::SetInput(const char*, const int64_t*, void*, int)' previously defined here
 void HexagonModel::SetInput(const char* name, const int64_t* shape, void* input,
      ^~~~~~~~~~~~
CMakeFiles/objdlr.dir/build.make:278: recipe for target 'CMakeFiles/objdlr.dir/src/dlr_hexagon/dlr_hexagon.cc.o' failed
make[2]: *** [CMakeFiles/objdlr.dir/src/dlr_hexagon/dlr_hexagon.cc.o] Error 1
CMakeFiles/Makefile2:134: recipe for target 'CMakeFiles/objdlr.dir/all' failed
make[1]: *** [CMakeFiles/objdlr.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2

Should libdlr_static.a contain all needed symbols from TVM and treelite?

I tried to use libdlr_static.a instead of libdlr.so to build my program, which uses the CreateDLRModel function.
Linking works fine with libdlr.so but fails with libdlr_static.a because of undefined references to tvm::runtime::GraphRuntime, etc.
The Makefile I use to build my programs:

all : run-style-trans-dlr run-style-trans-dlr2

run-style-trans-dlr2 : run-style-trans-dlr.o libdlr_static.a
	g++ -o run-style-trans-dlr2 run-style-trans-dlr.o libdlr_static.a -ldl -lpthread

run-style-trans-dlr : run-style-trans-dlr.o libdlr.so
	g++ -o run-style-trans-dlr run-style-trans-dlr.o -L. -ldlr -ldl -lpthread

run-style-trans-dlr.o : run-style-trans-dlr.cpp
	g++ -o run-style-trans-dlr.o -c -O3 -std=c++11 -march=native -mtune=native -I. run-style-trans-dlr.cpp

clean :
	rm -rf run-style-trans-dlr run-style-trans-dlr2 run-style-trans-dlr.o

Linking against libdlr.so works fine:

$ make run-style-trans-dlr
g++ -o run-style-trans-dlr.o -c -O3 -std=c++11 -march=native -mtune=native -I. run-style-trans-dlr.cpp
g++ -o run-style-trans-dlr run-style-trans-dlr.o -L. -ldlr -ldl -lpthread

Linking against libdlr_static.a fails:

$ make run-style-trans-dlr2
g++ -o run-style-trans-dlr2 run-style-trans-dlr.o libdlr_static.a -ldl -lpthread
libdlr_static.a(dlr.cc.o): In function `RunDLRModel':
dlr.cc:(.text+0x2f8): undefined reference to `TVMAPIHandleException(std::runtime_error const&)'
libdlr_static.a(dlr.cc.o): In function `GetDLRBackend':
dlr.cc:(.text+0x35e): undefined reference to `TVMAPIHandleException(std::runtime_error const&)'
libdlr_static.a(dlr.cc.o): In function `GetDLRVersion':
dlr.cc:(.text+0x81b): undefined reference to `TVMAPIHandleException(std::runtime_error const&)'
libdlr_static.a(dlr.cc.o): In function `SetDLRNumThreads':
dlr.cc:(.text+0xc87): undefined reference to `TVMAPIHandleException(std::runtime_error const&)'
libdlr_static.a(dlr.cc.o): In function `GetDLRNumWeights':
dlr.cc:(.text+0x117d): undefined reference to `TVMAPIHandleException(std::runtime_error const&)'
libdlr_static.a(dlr.cc.o):dlr.cc:(.text+0x166d): more undefined references to `TVMAPIHandleException(std::runtime_error const&)' follow
libdlr_static.a(dlr.cc.o): In function `DLRGetLastError':
dlr.cc:(.text+0x321): undefined reference to `TVMGetLastError'
libdlr_static.a(dlr_common.cc.o): In function `dlr::ListDir(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&)':
dlr_common.cc:(.text+0x3b4): undefined reference to `dmlc::io::FileSystem::GetInstance(dmlc::io::URI const&)'
libdlr_static.a(dlr_treelite.cc.o): In function `dlr::TreeliteModel::Run()':
dlr_treelite.cc:(.text+0x2955): undefined reference to `TreelitePredictorPredictBatch'
dlr_treelite.cc:(.text+0x2fb8): undefined reference to `TreeliteGetLastError'
libdlr_static.a(dlr_treelite.cc.o): In function `dlr::TreeliteModel::SetInput(char const*, long const*, float*, int)':
dlr_treelite.cc:(.text+0x78bc): undefined reference to `TreeliteAssembleSparseBatch'
dlr_treelite.cc:(.text+0x7b87): undefined reference to `TreeliteGetLastError'
libdlr_static.a(dlr_treelite.cc.o): In function `dlr::TreeliteModel::SetupTreeliteModule(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)':
dlr_treelite.cc:(.text+0x8b4b): undefined reference to `TreelitePredictorLoad'
dlr_treelite.cc:(.text+0x8f80): undefined reference to `TreelitePredictorQueryNumFeature'
dlr_treelite.cc:(.text+0x9609): undefined reference to `TreeliteGetLastError'
dlr_treelite.cc:(.text+0x9b8a): undefined reference to `TreeliteGetLastError'
dlr_treelite.cc:(.text+0x9cfa): undefined reference to `TreelitePredictorQueryNumOutputGroup'
dlr_treelite.cc:(.text+0xa390): undefined reference to `TreeliteGetLastError'
dlr_treelite.cc:(.text+0xa9cb): undefined reference to `TreelitePredictorPredictInst'
dlr_treelite.cc:(.text+0xac8d): undefined reference to `TreeliteGetLastError'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::GetWeightNames[abi:cxx11]() const':
dlr_tvm.cc:(.text+0x1ad): undefined reference to `tvm::runtime::GraphRuntime::GetWeightNames[abi:cxx11]() const'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::Run()':
dlr_tvm.cc:(.text+0xd8c): undefined reference to `tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)'
dlr_tvm.cc:(.text+0xeb6): undefined reference to `tvm::runtime::ExtTypeVTable::Get(int)'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::GetOutput(int, float*)':
dlr_tvm.cc:(.text+0x104c): undefined reference to `tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)'
dlr_tvm.cc:(.text+0x11ae): undefined reference to `tvm::runtime::ExtTypeVTable::Get(int)'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::GetInput(char const*, float*)':
dlr_tvm.cc:(.text+0x154c): undefined reference to `tvm::runtime::GraphRuntime::GetInputIndex(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
dlr_tvm.cc:(.text+0x156e): undefined reference to `tvm::runtime::GraphRuntime::GetInput(int) const'
dlr_tvm.cc:(.text+0x15c1): undefined reference to `tvm::runtime::NDArray::CopyFromTo(DLTensor*, DLTensor*, void*)'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::SetInput(char const*, long const*, float*, int)':
dlr_tvm.cc:(.text+0x31c2): undefined reference to `tvm::runtime::GraphRuntime::GetInputIndex(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
dlr_tvm.cc:(.text+0x31e5): undefined reference to `tvm::runtime::GraphRuntime::GetInput(int) const'
dlr_tvm.cc:(.text+0x3bf9): undefined reference to `tvm::runtime::ModuleNode::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)'
dlr_tvm.cc:(.text+0x3d66): undefined reference to `tvm::runtime::ExtTypeVTable::Get(int)'
libdlr_static.a(dlr_tvm.cc.o): In function `dlr::TVMModel::SetupTVMModule(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)':
dlr_tvm.cc:(.text+0x712a): undefined reference to `vtable for tvm::runtime::GraphRuntime'
dlr_tvm.cc:(.text+0x74a1): undefined reference to `tvm::runtime::GraphRuntime::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module, std::vector<DLContext, std::allocator<DLContext> > const&)'
dlr_tvm.cc:(.text+0x7642): undefined reference to `tvm::runtime::GraphRuntime::LoadParams(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
dlr_tvm.cc:(.text+0x7702): undefined reference to `tvm::runtime::GraphRuntime::NumInputs() const'
dlr_tvm.cc:(.text+0x7745): undefined reference to `tvm::runtime::GraphRuntime::GetInputName[abi:cxx11](int) const'
dlr_tvm.cc:(.text+0x7807): undefined reference to `tvm::runtime::GraphRuntime::GetWeightNames[abi:cxx11]() const'
dlr_tvm.cc:(.text+0x7ed3): undefined reference to `tvm::runtime::GraphRuntime::NumOutputs() const'
dlr_tvm.cc:(.text+0x7f30): undefined reference to `tvm::runtime::GraphRuntime::GetOutput(int) const'
dlr_tvm.cc:(.text+0x8509): undefined reference to `tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
libdlr_static.a(dlr_tvm.cc.o): In function `tvm::runtime::TVMRetValue::Clear()':
dlr_tvm.cc:(.text._ZN3tvm7runtime11TVMRetValue5ClearEv[_ZN3tvm7runtime11TVMRetValue5ClearEv]+0x88): undefined reference to `tvm::runtime::ExtTypeVTable::Get(int)'
libdlr_static.a(dlr_tvm.cc.o): In function `tvm::runtime::GraphRuntime::~GraphRuntime()':
dlr_tvm.cc:(.text._ZN3tvm7runtime12GraphRuntimeD2Ev[_ZN3tvm7runtime12GraphRuntimeD5Ev]+0x11): undefined reference to `vtable for tvm::runtime::GraphRuntime'
dlr_tvm.cc:(.text._ZN3tvm7runtime12GraphRuntimeD2Ev[_ZN3tvm7runtime12GraphRuntimeD5Ev]+0xbbf): undefined reference to `vtable for tvm::runtime::ModuleNode'
collect2: error: ld returned 1 exit status
Makefile:4: recipe for target 'run-style-trans-dlr2' failed
make: *** [run-style-trans-dlr2] Error 1

neo-ai-dlr/build/lib content and file sizes

-rwxrwxr-x 1 ubuntu ubuntu 1762400 Dec 31 03:50 libdlr.so
-rw-rw-r-- 1 ubuntu ubuntu 1192284 Dec 31 03:50 libdlr_static.a
-rw-rw-r-- 1 ubuntu ubuntu  259464 Dec 31 03:50 libtreelite_runtime_static.a
  1. libdlr_static.a is only 67% of the size of libdlr.so.
  2. Why do we need libtreelite_runtime_static.a here? Is it just an intermediate archive created during the build process that is needed to produce the libdlr libraries?
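
A possible workaround (a sketch, not a verified fix) is to also link the TVM and treelite static runtime archives produced elsewhere in the build tree, and, because the TVM runtime registers its functions through static initializers, to wrap the archives in --whole-archive; if the build also produces a dmlc-core archive (e.g. libdmlc.a), it may need to be appended as well:

g++ -o run-style-trans-dlr2 run-style-trans-dlr.o \
    -Wl,--whole-archive libdlr_static.a libtvm_runtime_static.a libtreelite_runtime_static.a -Wl,--no-whole-archive \
    -ldl -lpthread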

Error with running DLR on RPi

I am trying to perform machine learning at the edge using a SageMaker Neo model in an AWS Greengrass deployment package, as per the tutorial here: https://docs.aws.amazon.com/greengrass/latest/developerguide/ml-dlc-console.html

I installed the DLR package for the Raspberry Pi 3 Model B+ using the pre-built wheel here: https://neo-ai-dlr.readthedocs.io/en/latest/install.html

While running the following code, the inference appears to be successful (test-dlr.log), but the following error occurs: /home/pi/neo-ai-dlr/src/dlr_tvm.cc:71: No metadata found

#!/usr/bin/env python
import os
from dlr import DLRModel
import numpy as np
import time
import logging

logging.basicConfig(filename='test-dlr.log', level=logging.DEBUG)

current_milli_time = lambda: int(round(time.time() * 1000))

def run_inference():
    model_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models/resnet50')
    device = 'cpu'
    model = DLRModel(model_path, device)

    synset_path = os.path.join(model_path, 'synset.txt')
    with open(synset_path, 'r') as f:
        synset = eval(f.read())

    image = np.load(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'dog.npy')).astype(np.float32)
    input_data = {'data': image}

    for rep in range(4):
        t1 = current_milli_time()
        out = model.run(input_data)
        t2 = current_milli_time()

        logging.debug('done m.run(), time (ms): {}'.format(t2 - t1))

        top1 = np.argmax(out[0])
        logging.debug('Inference result: {}, {}'.format(top1, synset[top1]))
    
    import resource
    logging.debug("peak memory usage (bytes on OS X, kilobytes on Linux) {}".format(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss))

    return {
        'synset_id': top1,
        'prediction': synset[top1],
        'time': t2 - t1
    }

if __name__ == '__main__':
    res = run_inference()
    cls_id = res['synset_id']
    exp_cls_id = 151
    assert cls_id == exp_cls_id, "Inference result class id {} is incorrect, expected class id is {}".format(cls_id, exp_cls_id)
    print("All tests PASSED!")

test-dlr.log

After deployment in a Lambda function through AWS Greengrass, the same error is observed in the log file, but the inference did not run successfully (optimizedImageClassification.log).

optimizedImageClassification.log

What can I do to resolve this error?
