failed to parse ONNX model

Hi

I tried to train my model. After calling the following command to test my training:
imagenet-camera --model=/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt

I got the following assertion failure:
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 69 [Gather]:
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT] failed to parse ONNX model 'MyModel/resnet18.onnx'
[TRT] device GPU, failed to load MyModel/resnet18.onnx
[TRT] failed to load MyModel/resnet18.onnx
[TRT] imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet

What could cause this error?

Hi,

It looks like imagenet-camera fails when converting the Gather node to TensorRT:

While parsing node number 69 [Gather]:
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
 [8] Assertion failed: axis >= 0 && axis < nbDims

Which JetPack version are you using?
The ONNX parser supports the Gather layer from TensorRT 5.1, which is only available in JetPack 4.2.1.
If you haven't tried v4.2.1 yet, would you mind giving it a try first?

Thanks.

Hi rolf.gasser, this issue was patched in the ResNet model definition in torchvision. You should uninstall torchvision and reinstall it from my fork. See this related GitHub issue for the procedure: Re-training on the Cat/Dog Dataset · Issue #370 · dusty-nv/jetson-inference

Then re-train and it should load for you in TensorRT.

Hi,

Thanks for the fast responses.

@AastaLLL: Yesterday I downloaded the latest JetPack 4.2.1 with TensorRT 5.1.

@dusty_nv: I uninstalled torchvision, reinstalled the latest version, then retrained and converted the PyTorch model to ONNX again. After running the following command again:
imagenet-camera --model=/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt

I got the same error again:

WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 69 [Gather]:
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT] failed to parse ONNX model 'MyModel/resnet18.onnx'
[TRT] device GPU, failed to load MyModel/resnet18.onnx
[TRT] failed to load MyModel/resnet18.onnx
[TRT] imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet

After searching on Google, I found the following hint:

and replaced x = x.reshape(x.size(0), -1) with x = torch.flatten(x, 1)

After that, imagenet-camera seems to run.

Yes, that is also the same patch that I applied to my fork of torchvision v0.3.0 here:

https://github.com/dusty-nv/vision/tree/v0.3.0

It is this branch that jetson-inference will install (if you select PyTorch to be installed by the automated script). If you use the upstream torchvision master from PyTorch, it won’t have this patch.

Ok, thanks for your support!

Hi

I think I have a similar issue.

I'm using PyTorch v1.1.0 + torchvision v0.3.0 + TensorRT v5.1.6.1.
I tried to run a custom model.

Should I apply the same fix? I mean, should I change something in vgg.py?
I'm not sure what to do yet.

This is what I got:

While parsing node number 28 [Gather]:
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT] failed to parse ONNX model './vgg16-ssd.onnx'
[TRT] device GPU, failed to load ./vgg16-ssd.onnx
detectNet -- failed to initialize.

Thanks

Hi, I am not sure about the change(s) required for the VGG network, but see here for the patch used for ResNet:

https://github.com/dusty-nv/vision/commit/5c461366585df964503df4d05df00aea65deb0a9

Essentially, I replaced x = x.reshape(x.size(0), -1) with x = torch.flatten(x, 1).