TensorRT 5.0.2.6 onnx run error

Hi,
I’ve upgraded TensorRT to version 5.0.2.6, and now I can’t run the sampleOnnxMNIST sample or my own ONNX model; I get a core dumped error.
You can see the error below:

ERROR: Network must have at least one output
sample_onnx_mnist: sampleOnnxMNIST.cpp:64: void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine’ failed.
Aborted (core dumped)

Can anyone help me?

Hello,

What platform (Windows/Linux) are you on? I’m able to use the TensorRT 18.11 container, and it seems to run the MNIST sample fine.

NVIDIA Release 18.11 (build 817536)

NVIDIA TensorRT 5.0.2 (c) 2016-2018, NVIDIA CORPORATION.  All rights reserved.
Container image (c) 2018, NVIDIA CORPORATION.  All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

root@2ca610df4db7:/workspace# ls
README.md  tensorrt
root@2ca610df4db7:/workspace# cd tensorrt/
TensorRT-Release-Notes.pdf  bin/                        data/                       doc/                        python/                     samples/
root@2ca610df4db7:/workspace# cd tensorrt/samples/
Makefile            getDigits/          sampleFasterRCNN/   sampleINT8API/      sampleMNISTAPI/     sampleNMT/          sampleSSD/          trtexec/
Makefile.config     python/             sampleGoogleNet/    sampleMLP/          sampleMovieLens/    sampleOnnxMNIST/    sampleUffMNIST/
common/             sampleCharRNN/      sampleINT8/         sampleMNIST/        sampleMovieLensMPS/ samplePlugin/       sampleUffSSD/
root@2ca610df4db7:/workspace# cd tensorrt/samples/sampleOnnxMNIST/
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# ls
Makefile  README  sampleOnnxMNIST.cpp
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# cat README
This sample demonstrates conversion of an MNIST network in ONNX format to
a TensorRT network. The network used in this sample can be found at https://github.com/onnx/models/tree/master/mnist
(model.onnx)
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# make
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist_debug
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist
# Copy every EXTRA_FILE of this sample to bin dir
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# ../../bin/sample_onnx_mnist
----------------------------------------------------------------
Input filename:   ../../data/mnist/mnist.onnx
ONNX IR version:  0.0.3
Opset version:    1
Producer name:    CNTK
Producer version: 2.4
Domain:
Model version:    1
Doc string:
----------------------------------------------------------------
 ----- Parsing of ONNX model ../../data/mnist/mnist.onnx is Done ----

---------------------------

@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@#-:.-=@@@@@@@@@@@@@@
@@@@@%=     . *@@@@@@@@@@@@@
@@@@%  .:+%%% *@@@@@@@@@@@@@
@@@@+=#@@@@@# @@@@@@@@@@@@@@
@@@@@@@@@@@%  @@@@@@@@@@@@@@
@@@@@@@@@@@: *@@@@@@@@@@@@@@
@@@@@@@@@@- .@@@@@@@@@@@@@@@
@@@@@@@@@:  #@@@@@@@@@@@@@@@
@@@@@@@@:   +*%#@@@@@@@@@@@@
@@@@@@@%         :+*@@@@@@@@
@@@@@@@@#*+--.::     +@@@@@@
@@@@@@@@@@@@@@@@#=:.  +@@@@@
@@@@@@@@@@@@@@@@@@@@  .@@@@@
@@@@@@@@@@@@@@@@@@@@#. #@@@@
@@@@@@@@@@@@@@@@@@@@#  @@@@@
@@@@@@@@@%@@@@@@@@@@- +@@@@@
@@@@@@@@#-@@@@@@@@*. =@@@@@@
@@@@@@@@ .+%%%%+=.  =@@@@@@@
@@@@@@@@           =@@@@@@@@
@@@@@@@@*=:   :--*@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@

Prob 0  0.0000 Class 0:
 Prob 1  0.0000 Class 1:
 Prob 2  0.0000 Class 2:
 Prob 3  1.0000 Class 3: **********
 Prob 4  0.0000 Class 4:
 Prob 5  0.0000 Class 5:
 Prob 6  0.0000 Class 6:
 Prob 7  0.0000 Class 7:
 Prob 8  0.0000 Class 8:
 Prob 9  0.0000 Class 9:

root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST#

Thank you for your response.
I use Ubuntu 18 and upgraded TensorRT to 5.0.2.6. I also installed onnx-tensorrt to run the YOLO ONNX model in Python; now I want to run the YOLO ONNX model in the C++ framework.
It seems to be because of the ONNX IR version: as you can see, it is ONNX IR version 0.0.3, and the TensorRT Developer Guide mentions that only ONNX IR version 7 is supported!
And now I don’t know how to convert the ONNX model to version 7!

Hello,

In general, the ONNX parser is designed to be backward compatible; therefore, a model file produced by an earlier version of an ONNX exporter should not cause a problem.

In cases where it’s not compatible, convert the earlier ONNX model file into a later supported version. For more information on this subject, see the ONNX Model Opset Version Converter: https://github.com/onnx/onnx/blob/master/docs/OpsetVersionConverter.md
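
As a minimal sketch of that converter (assuming the onnx Python package is installed; the file names here are placeholders):

import onnx
from onnx import version_converter

# Load the original model (file name is a placeholder)
model = onnx.load("model.onnx")

# Convert the default (ai.onnx) domain to a newer opset, e.g. opset 7,
# and save the result
converted = version_converter.convert_version(model, 7)
onnx.save(converted, "model_opset7.onnx")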

For more details, please reference the TensorRT Developer Guide: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html

Hi there, building the sampleOnnxMNIST sample on TensorRT 5.0.2 GA is not working for me. I’ve installed TRT via the deb package and built onnx-tensorrt from source via their git repo, but when I try to make the sample I get the following:

../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
sampleOnnxMNIST.cpp: In function ‘void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&)’:
sampleOnnxMNIST.cpp:45:63: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
     auto parser = nvonnxparser::createParser(*network, gLogger);
                                                               ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleOnnxMNIST.o' failed
make: *** [../../bin/dchobj/sampleOnnxMNIST.o] Error 1

Any ideas what I’m missing?

Hi,
that’s exactly my problem, even though I could run this sample on TRT 5 before upgrading to 5.0.2!

A note on this: I still encounter this error natively, but in the TensorRT container it works. I’ve tried to find the Dockerfile to see if I have a version mismatch, but I can’t find it.

The container has this TRT version:

ii  tensorrt                      5.0.2.6-1+cuda10.0                    amd64        Meta package of TensorRT

The pip list output:

Package             Version   
------------------- ----------
absl-py             0.6.1     
appdirs             1.4.3     
astor               0.7.1     
atomicwrites        1.2.1     
attrs               18.2.0    
certifi             2018.10.15
chardet             3.0.4     
decorator           4.3.0     
gast                0.2.0     
graphsurgeon        0.3.2     
grpcio              1.16.1    
h5py                2.8.0     
idna                2.7       
Keras-Applications  1.0.6     
Keras-Preprocessing 1.0.5     
Mako                1.0.7     
Markdown            3.0.1     
MarkupSafe          1.0       
more-itertools      4.3.0     
numpy               1.15.3    
onnx                1.3.0     
pathlib2            2.3.2     
Pillow              5.3.0     
pip                 18.1      
pluggy              0.8.0     
protobuf            3.6.1     
py                  1.7.0     
pycuda              2018.1.1  
pytest              3.9.3     
pytools             2018.5.2  
requests            2.20.1    
setuptools          40.5.0    
six                 1.11.0    
tensorboard         1.12.0    
tensorflow          1.12.0    
tensorflow-gpu      1.12.0    
tensorrt            5.0.2.6   
termcolor           1.1.0     
torch               0.4.1     
torchvision         0.2.1     
typing              3.6.6     
typing-extensions   3.6.6     
uff                 0.5.5     
urllib3             1.24.1    
Werkzeug            0.14.1    
wget                3.2       
wheel               0.32.2

And it’s on Python 3.5.2.

The Python API doesn’t work, so no luck with onnx-tensorrt (https://github.com/onnx/onnx-tensorrt).
I can just use the C++ libraries, but the issue I posted above still bothers me on a native install.

Hello,

Is there a reason you are installing onnx-tensorrt (https://github.com/onnx/onnx-tensorrt) in addition to standalone TensorRT? TensorRT already includes an ONNX parser.
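
You can confirm the bundled parser from Python with a quick check (a sketch; the commented values are what I’d expect on a 5.0.2 install):

import tensorrt as trt

print(trt.__version__)             # e.g. 5.0.2.6
print(hasattr(trt, "OnnxParser"))  # True: the ONNX parser ships with TensorRT itself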

Hello, regarding the Dockerfile: on NGC (ngc.nvidia.com) you can look at the “layers” of the image you are pulling, which should give you an idea of how the image is constructed.

Hi @NVES, I thought we needed to install onnx-tensorrt to get the ONNX parser working. I uninstalled onnx-tensorrt, reinstalled TRT 5 GA, and tried to build the samples, but they’re still failing on the ONNX part:

Compiling: sampleINT8API.cpp
g++ -Wall -std=c++11 -I"/usr/local/cuda/include" -I"/usr/local/include" -I"../include" -I"../common" -I"/usr/local/cuda/include" -I"../../include"  -D_REENTRANT -g -c -o ../../bin/dchobj/sampleINT8API.o sampleINT8API.cpp
sampleINT8API.cpp: In member function ‘bool sampleINT8API::build()’:
sampleINT8API.cpp:448:102: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
     auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
                                                                                                      ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleINT8API.o' failed
make[1]: *** [../../bin/dchobj/sampleINT8API.o] Error 1
make[1]: Leaving directory '/home/bpinaya/Documents/tensorrt/samples/sampleINT8API'
Makefile:37: recipe for target 'all' failed
make: *** [all] Error 2

I checked the layers of the docker image, but they reference a script, /nvidia/build-scripts/installTRT.sh, that I can’t find.

On TRT 4, building the (ONNX) samples worked from scratch. I think I’ll try again from a clean install, but it’d be awesome if you released the Dockerfiles eventually. I’ll update this if I have luck with a clean build.

OK, so if I understand correctly: with TensorRT we can use the C++ parser API to parse ONNX models without having to install onnx-tensorrt (https://github.com/onnx/onnx-tensorrt), right?
But for the Python API we need to install it?
From the container you linked I can start python and type:

import onnx

And that works, of course, but trying to import:

import onnx_tensorrt.backend as backend

results in:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'onnx_tensorrt'

I guess I’m confused about the purpose of onnx-tensorrt (https://github.com/onnx/onnx-tensorrt).
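
For reference, what I was hoping to run is roughly the usage from the onnx-tensorrt README (a sketch; the model path and input shape are placeholders for my own network):

import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load an ONNX model and wrap it in a TensorRT-backed ONNX runtime
model = onnx.load("model.onnx")
engine = backend.prepare(model, device="CUDA:0")

# Run inference on a dummy input (shape is a placeholder)
input_data = np.random.random(size=(1, 3, 224, 224)).astype(np.float32)
output = engine.run(input_data)[0]
print(output.shape)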

My goal is to take the network I have (trained in PyTorch), export it to ONNX, and then parse it into TensorRT.
I’ve serialized networks before, but only from Caffe; loading from ONNX is where I seem to be struggling. Thanks for your time!

Correct, you get the ONNX parser with the standard TRT install. For more details, reference https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_onnx_python
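
The import path described there looks roughly like this (a minimal sketch against the TRT 5 Python API; the model path and workspace size are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build a TensorRT engine from an ONNX file using the bundled parser
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed; the network would have no outputs")
    builder.max_workspace_size = 1 << 28  # placeholder: 256 MiB
    engine = builder.build_cuda_engine(network)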

You are right: from a clean setup, all the samples build correctly. I think it was onnx-tensorrt that messed things up.
Thanks for the help!

I want to install Keras on the NVIDIA Jetson Xavier, but the version of CUDA is 10.0. It’s a pity: I can’t install Keras successfully. I don’t know how to install it on the Xavier, though I have installed TensorFlow successfully. Can anyone help me?

Hello, please post your question on the Jetson AGX Xavier board on the NVIDIA Developer Forums; you’ll get support coverage there.

Thanks for your reply!
