I am trying to use TensorRT 2.1 with my own trained Faster R-CNN model, which is somewhat different from the sample Faster R-CNN model provided. While building the engine, I hit the following error message, which I can't debug:
Begin parsing model…
End parsing model…
Begin building engine…
sample_fasterRCNN: NvPluginFasterRCNN.cu:81: virtual void nvinfer1::plugin::RPROIPlugin::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `inputDims[0].d[0] == (2 * A) && inputDims[1].d[0] == (4 * A)' failed.
Aborted (core dumped)
It seems the shape of the tensors feeding the RoI pooling plugin differs from what is defined in NvPluginFasterRCNN.cu, and that file's source is not available to me.

How can I work around this error so that I can run inference with my own Faster R-CNN model?
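For reference, my reading of the assertion is that A is the number of anchors per spatial location (anchor scales × aspect ratios), so the RPN classification score blob must have 2·A channels and the bbox regression blob 4·A channels. A quick sanity check of that arithmetic (the scale/ratio values below are just the usual sample defaults, not necessarily yours):

```python
# Sanity-check RPN output channel counts against the plugin's assertion:
#   inputDims[0].d[0] == 2 * A   (rpn_cls_score: object/background per anchor)
#   inputDims[1].d[0] == 4 * A   (rpn_bbox_pred: dx, dy, dw, dh per anchor)
# A = anchors per location = len(scales) * len(ratios).

anchor_scales = [8, 16, 32]     # assumption: sample defaults; substitute your model's values
anchor_ratios = [0.5, 1.0, 2.0]

A = len(anchor_scales) * len(anchor_ratios)
expected_cls_channels = 2 * A
expected_bbox_channels = 4 * A

print(f"A = {A}: rpn_cls_score needs {expected_cls_channels} channels, "
      f"rpn_bbox_pred needs {expected_bbox_channels} channels")
```

If your network uses a different number of scales or ratios than the plugin was configured with, these channel counts will not match and the assertion fires.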
Can you please test with the latest version of TensorRT, 4.0 RC, and see if your issue still exists? If it does, please file a bug here: https://developer.nvidia.com/nvidia-developer-program

Please include the steps and files needed to reproduce the problem, along with the output of infer_device.