I am using TensorRT 3 on the TX2 platform. I am trying to convert a Caffe model to a GIE engine, but the model contains Batch Normalization and PReLU layers, which TensorRT does not support natively. The TensorRT user guide says such layers can be handled with plugin modules, but I cannot find a detailed solution.
I would really appreciate any additional information on how to handle custom layers using plugins.
When I use TensorRT to parse the network, there is an error:
"Error parsing text-format ditcaffe.NetParameter: 65:12: Message type "ditcaffe.LayerParameter" has no field named "bn_param"".
If I rename the field "bn_param" to "param", another error appears:
"Error parsing text-format ditcaffe.NetParameter: 66:18: Message type "ditcaffe.ParamSpec" has no field named "scale_filler"".
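Those `bn_param`/`scale_filler` fields come from a custom Caffe fork, so the stock parser's ditcaffe.proto simply does not know them; renaming one field just moves the error to the next unknown field. One workaround that avoids a plugin for the BN part: rewrite the deploy prototxt to use the standard BVLC Caffe layers, which (as far as I know) the TensorRT Caffe parser does accept. A sketch, with layer/blob names invented for illustration:

```
# custom fused BN layer replaced by the two standard Caffe layers
layer {
  name: "bn0_1"
  type: "BatchNorm"
  bottom: "conv0_1"
  top: "conv0_1"
  batch_norm_param { use_global_stats: true }
}
layer {
  name: "scale0_1"
  type: "Scale"
  bottom: "conv0_1"
  top: "conv0_1"
  scale_param { bias_term: true }
}
```

The caffemodel weights must be split/renamed to match (mean and variance into the BatchNorm blobs, gamma and beta into the Scale blobs), which a short pycaffe script can do.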
I converted the Caffe model to a GIE engine, using plugin modules to implement the Batch Normalization layers. When TensorRT builds the engine (via buildCudaEngine), there is an error: "Custom layer bn0_1 returned non-zero initialization" (bn0_1 is the BN layer's name).
Hi, I succeeded in processing the Caffe model with TensorRT, thanks for your help.
But there is an error when I use FP16 mode: "cudnnLayerUtils.cpp:98: void* nvinfer1::cudnn::getTensorMem(const nvinfer1::cudnn::EngineTensor&, void**, void**): Assertion `start[vectorIndex]%spv == 0' failed". I am using TensorRT 3.0, CUDA 8.0, and cuDNN 7.
I installed 3.0.4 with CUDA 9 and cuDNN 7, but I still get: python: cudnnLayerUtils.cpp:98: void* nvinfer1::cudnn::getTensorMem(const nvinfer1::cudnn::EngineTensor&, void**, void**): Assertion `start[vectorIndex]%spv == 0' failed.