Failing to parse ONNX file generated by PyTorch

I generated several models in ONNX format using PyTorch, and every one of them failed to parse with TensorRT.

I got the same error for every model I tried to parse:

While parsing node number 153 [Gather]:
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[E] failed to parse onnx file
[E] Engine could not be created
[E] Engine could not be created
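(For context, the output above is from the TensorRT ONNX parser; I ran something along the lines of the following trtexec command, exact flags aside. The file name is just a placeholder for my exported model.)

trtexec --onnx=model.onnx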

I generated the onnx files like this:

import torch
import torch.nn as nn
from torchvision import models

model = models.resnext50_32x4d(pretrained=False).to(device)
model.fc = nn.Linear(model.fc.in_features, len(stats['train']) * 3).to(device)
dummy_input = torch.rand(batch_size, *image_size).to(device)  # the deprecated Variable wrapper is not needed
torch.onnx.export(model, dummy_input, onnx_path, verbose=True, input_names=['input'], output_names=['output'])
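To see which operation "node number 153" actually is, the exported graph can be inspected directly with the onnx package. A small sketch (the index reported by the parser should correspond to the position in graph.node; onnx_path is the same path passed to torch.onnx.export):

import onnx

m = onnx.load(onnx_path)
onnx.checker.check_model(m)                 # basic structural validation of the exported graph
for i, node in enumerate(m.graph.node):
    if node.op_type == 'Gather':
        print(i, node.op_type, list(node.input), list(node.output))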

Hi! I am seeing the same error, but for Upsample:

----------------------------------------------------------------
Input filename:   model.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
Parsing model
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 69 [Gather -> "208"]:
ERROR: /home/alex/tools/onnx-tensorrt/onnx2trt_utils.hpp:335 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
  %206 : Long() = onnx::Constant[value={2}](), scope: ResNet18_OneConvDecoder/DecoderBlock/Sequential[block]/Upsample[0]
  %207 : Tensor = onnx::Shape(%205), scope: ResNet18_OneConvDecoder/DecoderBlock/Sequential[block]/Upsample[0]
  %208 : Long() = onnx::Gather[axis=0](%207, %206), scope: ResNet18_OneConvDecoder/DecoderBlock/Sequential[block]/Upsample[0]
  %209 : Tensor = onnx::Constant[value={2}]()
  %210 : Tensor = onnx::Mul(%208, %209)

Hello, it is easy to fix this problem. Change "x.view(x.size(0), -1)" to "x.flatten(1)" in the resnet.py file of torchvision.
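To illustrate the difference, here is a small standalone sketch (the module and output file names are only placeholders); the actual one-line change goes into the forward method of ResNet in torchvision's resnet.py:

import torch
import torch.nn as nn

class Head(nn.Module):
    # stand-in for the end of torchvision's ResNet forward pass
    def forward(self, x):
        # old code: x.view(x.size(0), -1) -- x.size(0) is traced as
        # Shape -> Gather, which is the node the TensorRT parser rejects
        # return x.view(x.size(0), -1)
        return x.flatten(1)  # exported as a single Flatten node instead

torch.onnx.export(Head(), torch.rand(1, 2048, 1, 1), 'head.onnx', verbose=True)

Running it with verbose=True shows the exported graph, so you can check that the Shape/Gather pattern is gone.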

Thank you.
It worked!

And then you will find that the PyTorch output and the TensorRT output do not match when you parse a classification model. The VGG output indices will be the same, but the ResNet and DenseNet output indices will be quite different. And none of the scores match between the two platforms unless you feed in all-zero input data.
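One way to narrow down where the mismatch comes from is to first compare the PyTorch model against the exported ONNX file with onnxruntime; if those two agree, the difference is introduced on the TensorRT side. A rough sketch (the model variable, input shape, file name and the 'input' name are placeholders matching the export call above):

import numpy as np
import onnxruntime as ort
import torch

model = model.cpu().eval()                      # eval() also matters before torch.onnx.export (batchnorm/dropout)
x = torch.rand(1, 3, 224, 224)                  # same shape as the dummy input used for export
with torch.no_grad():
    ref = model(x).numpy()

sess = ort.InferenceSession('model.onnx')
out = sess.run(None, {'input': x.numpy()})[0]   # 'input' matches input_names in torch.onnx.export

print('max abs diff :', np.abs(ref - out).max())
print('top-1 matches:', (ref.argmax(1) == out.argmax(1)).all())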

This does not make sense. I hope that NVIDIA can fix this problem.