I converted a Faster RCNN model with a GoogLeNet feature extractor (trained with Caffe) to TensorRT 4, and I got the following error:
ERROR: inception_5a/output: all concat input tensors must have the same dimensions except on the concatenation axis
RPROIFused outputs a blob that is passed to an inception module (inception_5a/), and inception_5a/output then concatenates four tensors from that module.
How can I solve the error above?
How can I get the dimensions of the output blob from RPROIFused?
Is it possible to print the input/output dimensions of the blobs in each layer while building a TensorRT network?
(Update)
Hi, the output dimensions of RPROIFused look normal. The error seems to occur when inception_5a/output tries to concatenate the four tensors. Each blob is a 4D tensor (NCHW), and another error message shows: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3' failed.
Does the message above mean that the Concat layer in TensorRT doesn't support concatenating 4D tensors?
With TRT 5, engineering was able to get past the parsing error by modifying the prototxt to something similar to what we do in the shipped fasterRCNN sample (please refer to that sample).
This is the change I made to the prototxt; you may have to update the param values based on your network.
By doing this, in TRT 5, the caffe parser will add the plugin to the network using the plugin registry and also populate all the necessary plugin params from the prototxt. This is similar to what we do in the fasterRCNN sample.
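For reference, here is a sketch of what such a fused RPROI layer could look like in the deploy prototxt, modeled loosely on the shipped fasterRCNN sample. Everything in it is a placeholder: the bottom/top blob names, the exact sub-field names inside region_proposal_param and roi_pooling_param (only the two block names appear in this thread), and all values must be adapted to your network and to what the TRT 5 caffe parser actually expects.

```prototxt
layer {
  name: "RPROIFused"
  type: "RPROI"
  bottom: "rpn_cls_prob_reshape"
  bottom: "rpn_bbox_pred"
  bottom: "im_info"
  bottom: "inception_4e/output"   # placeholder: your last shared feature map
  top: "rois"
  top: "pool5"
  region_proposal_param {
    feature_stride: 16            # total stride of your backbone up to the feature map
    prenms_top: 6000
    nms_max_out: 300
    iou_threshold: 0.7
    min_box_size: 16
    anchor_ratio_count: 3
    anchor_scale_count: 3
    anchor_ratio: 0.5
    anchor_ratio: 1.0
    anchor_ratio: 2.0
    anchor_scale: 8.0
    anchor_scale: 16.0
    anchor_scale: 32.0
  }
  roi_pooling_param {
    pooled_h: 7
    pooled_w: 7
    spatial_scale: 0.0625         # 1 / feature_stride
  }
}
```

In particular, feature_stride and spatial_scale must agree with each other and with the customized GoogLeNet backbone, or the ROI pooling will sample the wrong feature-map locations.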
Regarding the question: to get the output dimensions of the plugin, you can call the getOutputDimensions function on the created plugin object. There is no way to print the input/output dimensions of every layer by default with the caffe parser.
I've changed my prototxt settings based on the content you provided above and executed the Faster RCNN sample in TensorRT 5.0.2.6.
However, the error messages remain:
ERROR: inception_5a/output: all concat input tensors must have the same dimensions except on the concatenation axis
nvinfer1::DimsCHW enginehelper::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >=3' failed.
I'm wondering whether the Concat layer named inception_5a/output simply doesn't accept 4D input tensors. Must the input tensors to a Concat layer in TensorRT be 3D or 2D?
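For what it's worth, the networks built by TensorRT's caffe parser use an implicit batch dimension, so each blob is described as CHW (3 dims), which is what the `d.nbDims >= 3` assertion in getCHW checks. The concat error itself states the rule directly: every input must match on all dimensions except the concatenation axis. A small pure-Python illustration of that rule (not TensorRT code, just the shape check the builder enforces):

```python
def check_concat_inputs(shapes, axis):
    """Validate concat inputs the way the error message describes:
    all shapes must be equal on every dimension except `axis`."""
    first = shapes[0]
    for s in shapes[1:]:
        if len(s) != len(first):
            raise ValueError("all concat inputs must have the same rank")
        for d in range(len(first)):
            if d != axis and s[d] != first[d]:
                raise ValueError(
                    "all concat input tensors must have the same dimensions "
                    "except on the concatenation axis")
    # Output shape: sum the sizes along the concatenation axis.
    out = list(first)
    out[axis] = sum(s[axis] for s in shapes)
    return tuple(out)

# Four CHW blobs (batch dim implicit) concatenated on the channel axis,
# as inception_5a/output would do; channel counts are made-up examples:
print(check_concat_inputs([(256, 7, 7), (320, 7, 7), (128, 7, 7), (128, 7, 7)], axis=0))
# -> (832, 7, 7)

# A spatial mismatch between the inputs reproduces the reported error:
# check_concat_inputs([(256, 7, 7), (320, 6, 7)], axis=0)  # raises ValueError
```

So the concat layer is fine with CHW inputs; the error indicates that the four branch outputs reaching inception_5a/output do not agree on H and W, which points back at the blob produced upstream.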
I can confirm that I have the same problem as RahnRYHuang; I get exactly the same error message.
I am using TensorRT 5.0.2.6 with CUDA 10 and cuDNN 7.4 on Ubuntu 18.04 with an RTX 2080.
When I try to add region_proposal_param and roi_pooling_param in the prototxt, I get the following error:
libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 1990:25: Message type "ditcaffe.LayerParameter" has no field named "region_proposal_param".
ERROR: CaffeParser: Could not parse deploy file
The concat error may originate from incorrect handling in the RPROIFused plugin.
The dimensions of rpn_bbox_pred change with the customized GoogLeNet feature layers, and some parameters in the RPROIFused plugin also need to be updated accordingly.
Could you share the original .prototxt used for training with us?
We will check how to customize the plugin based on your model.
Attached are the training and testing prototxt files of our customized Faster RCNN model. We also found another error when placing a pooling layer after the RPROIFused layer:
ERROR: cudnnPoolingLayer.cpp (130) - Cudnn Error in execute: 3
I am also trying to deploy Faster RCNN with GoogLeNet as the feature extractor, and I am having a similar problem.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2004:25: Message type "ditcaffe.LayerParameter" has no field named "region_proposal_param".
[TensorRT] ERROR: CaffeParser: Could not parse deploy file
[TensorRT] ERROR: Failed to parse caffe model
Traceback (most recent call last):
  File "", line 7, in
  File "/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py", line 360, in caffe_to_trt_engine
    raise AssertionError('Caffe parsing failed on line {} in statement {}'.format(line, text))
AssertionError: Caffe parsing failed on line 352 in statement assert(blob_name_to_tensor)
Can you please look into the error? Here are my prototxt file and caffemodel:
faster_rcnn - Google Drive