Parser Error Output

Hi,

I am currently exploring different models to use with DeepStream. In the process, I have often run into the TensorRT OnnxParser refusing to work (likely due to unsupported layers). I am using the following code, slightly adapted from the introductory examples:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def GiB(val):
    # Helper from the TensorRT samples: convert gibibytes to bytes.
    return val * (1 << 30)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:

    builder.max_workspace_size = GiB(1)

    # Load the ONNX model and parse it in order to populate the TensorRT network.
    with open(onnx_path, 'rb') as model:
        success = parser.parse(model.read())
        assert success, f'{parser.num_errors} errors detected during parsing'
    …

When running this on an ONNX file that contains a layer not supported by TensorRT, an assertion error is raised from the last line. What I’d like to know is: is there any way to get more information about why the parser fails?

Currently, I am looking at what the parser writes to stdout, which stops right before the layer I suspect is not supported by TensorRT. Then I need to look at the ONNX file to figure out which layer that is.
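For reference, here is a minimal sketch of how the parser’s recorded errors could be dumped instead of asserting. It assumes the Python OnnxParser exposes num_errors/get_error() and that the returned error object provides desc() and node() accessors, mirroring the C++ IParserError interface:

# Sketch: report the parser's recorded errors instead of asserting.
# Assumes get_error() returns an object with desc()/node() accessors,
# as in the C++ IParserError interface.
with open(onnx_path, 'rb') as model:
    if not parser.parse(model.read()):
        for i in range(parser.num_errors):
            err = parser.get_error(i)
            print(f'Parser error at node {err.node()}: {err.desc()}')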

Here’s a minimal working example to reproduce the error: https://gist.github.com/dseuss/bd4f3385451241a48338c0e01f74d4fc

Note that the logger is set to INFO, yet no detailed error message is printed; success simply evaluates to False (running on TensorRT 5.0.2.6 and 5.1.0.5).

However, I did make some progress using the master branch of onnx-tensorrt (https://github.com/onnx/onnx-tensorrt) – see this minimal example: https://gist.github.com/dseuss/131c209daa707ea4d13f810385f3b049 (a sketch of the approach follows the list below):

  • using TensorRT 5.0.2.6, I get the expected error message (that there’s no importer for the Slice op)
  • using TensorRT 5.1.5.0, I can actually parse the example
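For completeness, a minimal sketch of that onnx-tensorrt route (the model path is a placeholder; backend.prepare() is onnx-tensorrt’s API for building an engine from an ONNX model):

import onnx
import onnx_tensorrt.backend as backend

# Sketch: parse an ONNX model via onnx-tensorrt's Python backend.
# 'model.onnx' is a placeholder path.
model = onnx.load('model.onnx')
# prepare() builds a TensorRT engine and raises with a descriptive message
# (e.g. "No importer registered for op: Slice") when an op is unsupported.
engine = backend.prepare(model, device='CUDA:0')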

Bit of background: I need to run a model with Slice/Gather ops in the DeepStream SDK, which requires TensorRT 5.0.2.6. Up to now this was impossible since I could not import the model. With the solution outlined above, there are two possible ways forward:

  1. Update the DeepStream ONNX importer to the newest version of onnx-tensorrt
  2. Create serialised TensorRT engines using Python + onnx-tensorrt and import those into DeepStream (see the sketch after this list)
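Option 2 would amount to something like the following sketch, using the TensorRT 5.x Python API on top of the builder/network from the first snippet above (the output path is arbitrary):

# Sketch: build and serialise an engine for later import.
# Assumes builder/network are populated as in the first snippet;
# 'model.engine' is an arbitrary output path.
with builder.build_cuda_engine(network) as engine:
    with open('model.engine', 'wb') as f:
        f.write(engine.serialize())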

Unfortunately, I can’t do option 1 since we don’t have access to the full source code. Option 2 doesn’t work either, since the DeepStream binaries are linked against TensorRT 5.0.2.6, and an engine serialised with TensorRT 5.1 cannot be deserialised by the 5.0 runtime.

Is there anything we can do to implement one of these solutions? Or is there any chance of getting an updated version of DeepStream from Nvidia linked against a newer version of TensorRT?

I need to run a model with Slice/Gather ops
TensorRT 5.0 does not support Slice/Gather ops. TensorRT 5.1 supports them.

Can you wait for DeepStream 4.0, which is based on TensorRT 5.1? We will release DeepStream 4.0 soon.

Thanks Chris, eagerly awaiting the release…

You can also check out https://github.com/onnx/onnx-tensorrt/tree/v5.0 to build a new libnvonnxparser.so to replace the stock one.

It looks like this version has added Gather/Slice importers, while the official TensorRT 5.0 parser has not:
https://github.com/onnx/onnx-tensorrt/blob/v5.0/builtin_op_importers.cpp

DEFINE_BUILTIN_OP_IMPORTER(Gather)
    ctx->network()->addGather()
DEFINE_BUILTIN_OP_IMPORTER(Slice)
    ctx->network()->addPadding()

That’s a great suggestion. Unfortunately, onnx-tensorrt only supports those ops when compiled against TRT 5.1 – with TRT 5.0 it raises an error during import. That makes it impossible to use inside DeepStream, since that is linked against TRT 5.0.