Using TensorRT 2.1 with my own trained caffemodel

I am trying to use TensorRT 2.1 with my own trained Faster R-CNN model, which differs somewhat from the sample Faster R-CNN model provided. While building the engine, I run into the following error message, which I can't debug:

Begin parsing model...
End parsing model...
Begin building engine...
sample_fasterRCNN: NvPluginFasterRCNN.cu:81: virtual void nvinfer1::plugin::RPROIPlugin::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `inputDims[0].d[0] == (2 * A) && inputDims[1].d[0] == (4 * A)' failed.
Aborted (core dumped)

It seems the shapes of the input tensors to the RoI pooling plugin differ from what NvPluginFasterRCNN.cu expects, and that file is not visible to me. Reading the assertion, the plugin apparently requires the first input (the RPN class scores) to have 2*A channels and the second (the RPN box regressions) to have 4*A channels, where A is presumably the number of anchors, so my anchor configuration may not match what the plugin was created with.
How can I get past this error so I can run inference with my own Faster R-CNN model?
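
For context, here is a minimal sketch of how the sample's PluginFactory creates the RPROI plugin through createFasterRCNNPlugin from NvInferPlugin.h. The parameter values are the sample's PASCAL VOC defaults rather than anything confirmed for my network, and the helper names createRPROI and destroyPlugin are mine, not part of the API:

#include <memory>

#include "NvInfer.h"
#include "NvInferPlugin.h"

using namespace nvinfer1;
using namespace nvinfer1::plugin;

// Anchor configuration: the plugin's assertion requires that
// A = anchorsRatioCount * anchorsScaleCount, with rpn_cls_score
// producing 2*A channels and rpn_bbox_pred producing 4*A channels.
static const int anchorsRatioCount = 3;
static const int anchorsScaleCount = 3; // A = 9 for the stock VOC model
static const float anchorsRatios[anchorsRatioCount] = {0.5f, 1.0f, 2.0f};
static const float anchorsScales[anchorsScaleCount] = {8.0f, 16.0f, 32.0f};

// INvPlugin objects are released with destroy() rather than delete.
static void destroyPlugin(INvPlugin* p) { p->destroy(); }

// Hypothetical helper wrapping the sample's plugin creation call.
std::unique_ptr<INvPlugin, void (*)(INvPlugin*)> createRPROI()
{
    return std::unique_ptr<INvPlugin, void (*)(INvPlugin*)>(
        createFasterRCNNPlugin(
            16,            // featureStride
            6000,          // preNmsTop
            300,           // nmsMaxOut
            0.7f,          // iouThreshold
            16.0f,         // minBoxSize
            1.0f / 16.0f,  // spatialScale
            DimsHW(7, 7),  // RoI pooling output size
            Weights{DataType::kFLOAT, anchorsRatios, anchorsRatioCount},
            Weights{DataType::kFLOAT, anchorsScales, anchorsScaleCount}),
        destroyPlugin);
}

If the network was trained with a different anchor set (e.g. more scales), the counts and values above would presumably have to change so that A = anchorsRatioCount * anchorsScaleCount matches the 2*A and 4*A channel counts of rpn_cls_score and rpn_bbox_pred in the deploy prototxt.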

Hi,

Have you figured out this problem? I ran into the same issue when porting a Faster R-CNN model trained on COCO to TensorRT.

We created a new "Deep Learning Training and Inference" section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

Topic URLs will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth

Could you please test with the latest version of TensorRT, 4.0 RC, and see whether the issue still exists? If it does, please file a bug here: https://developer.nvidia.com/nvidia-developer-program
Please include the steps/files used to reproduce the problem along with the output of infer_device.