How to handle custom layers in TensorRT

I am using TensorRT 3 on the TX2 platform. I am trying to convert a Caffe model to GIE, but the model contains Batch Normalization layers and PReLU layers, which TensorRT does not support natively. The TensorRT user guide says this can be solved with plugin modules, but I cannot find a detailed solution.

I would really appreciate any additional information on how to handle custom layers using plugins.

Thanks.

Hi,

  1. The TensorRT scale layer supports batch normalization.

  2. A custom layer can be implemented with the plugin API. Please check our face recognition sample for details; a minimal skeleton is sketched below.
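
For reference, here is a rough, untested sketch of what a plugin looks like with the TensorRT 3 IPlugin / nvcaffeparser1::IPluginFactory interfaces. The class name and the "prelu" layer-name matching rule are made up for illustration, and the stub simply copies its input to its output, so the real per-layer math still has to be filled in:

// Minimal sketch only: an identity "pass-through" plugin wired into the Caffe parser.
// Assumes the TensorRT 3 IPlugin / IPluginFactory interfaces; names here are hypothetical.
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <cuda_runtime_api.h>
#include <cstring>

using namespace nvinfer1;

class PassThroughPlugin : public IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        return inputs[0];                          // same shape as the input
    }

    void configure(const Dims* inputDims, int nbInputs,
                   const Dims* outputDims, int nbOutputs, int maxBatchSize) override
    {
        mCount = 1;
        for (int i = 0; i < inputDims[0].nbDims; ++i)
            mCount *= inputDims[0].d[i];           // elements per batch item
    }

    int initialize() override { return 0; }        // must return 0, or the engine build fails
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void*, cudaStream_t stream) override
    {
        // A real plugin launches its kernel here (e.g. PReLU); this stub just copies data.
        size_t bytes = batchSize * mCount * sizeof(float);
        cudaMemcpyAsync(outputs[0], inputs[0], bytes, cudaMemcpyDeviceToDevice, stream);
        return 0;
    }

    size_t getSerializationSize() override { return sizeof(mCount); }
    void serialize(void* buffer) override { std::memcpy(buffer, &mCount, sizeof(mCount)); }

private:
    size_t mCount{0};
};

// Tells the Caffe parser which layers should be handed over to the plugin.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        return std::strstr(layerName, "prelu") != nullptr;   // example matching rule
    }

    IPlugin* createPlugin(const char* layerName,
                          const Weights* weights, int nbWeights) override
    {
        mPlugin = new PassThroughPlugin();
        return mPlugin;
    }

private:
    IPlugin* mPlugin{nullptr};
};

The factory is passed to the Caffe parser (parser->setPluginFactory(&factory)) before parsing the deploy prototxt, as shown in the samples below.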

Thanks.


Hi AastaLLL,

Thanks for your support. I will look into the face recognition sample for details on using the plugin API.

Could you provide an example of how to use scale layers to implement batch normalization layers?

Thanks.

Hi,

TensorRT can parse the 'BatchNorm' layer directly.

For example,

layer {
  name: "Layer1/BatchNorm"
  type: "BatchNorm"
  bottom: "Layer1"
  top: "Layer1/BatchNorm"
  ...
}

Thanks.

AastaLLL, thanks for your reply.

The structure of the BatchNorm layer that I use is shown below:

layer {
  name: "bn0_1"
  type: "BN"
  bottom: "concat0_1"
  top: "bn0_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  bn_param {
    scale_filler {
      type: "constant"
      value: 1.0
    }
    shift_filler {
      type: "constant"
      value: 0.001
    }
    bn_mode: INFERENCE
  }
}

When I use TensorRT to parse the layer, there is an error:
"Error parsing text-format ditcaffe.NetParameter: 65:12: Message type "ditcaffe.LayerParameter" has no field named "bn_param"".

I renamed the field "bn_param" to "param", and another error appears:
"Error parsing text-format ditcaffe.NetParameter: 66:18: Message type "ditcaffe.ParamSpec" has no field named "scale_filler"".

How can I solve it?

Thanks.

Hi,

Please change bn_param to batch_norm_param and give it a try.
Thanks.

Hi, AastaLLL,

I changed bn_param to batch_norm_param as you suggested, and TensorRT now recognizes the field. Thanks for your help.

But there are two more errors:

"ditcaffe.BatchNormParameter" has no field named "shift_filler"

"ditcaffe.BatchNormParameter" has no field named "bn_mode"

How do I change those field names?

Thanks.

Hi,

  1. bn_mode can simply be removed, since TensorRT always runs in INFERENCE mode.

  2. Not sure if shift_filler is identical to bias_filler. Maybe you can give it a try; a possible rewrite of the layer is sketched below.
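
As a sketch only (unverified, with bias_filler guessed as the substitute for shift_filler), the layer above would then look something like this:

layer {
  name: "bn0_1"
  type: "BN"
  bottom: "concat0_1"
  top: "bn0_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  batch_norm_param {
    scale_filler {
      type: "constant"
      value: 1.0
    }
    bias_filler {          # guessed substitute for shift_filler
      type: "constant"
      value: 0.001
    }
    # bn_mode removed; TensorRT always runs in inference mode
  }
}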

Thanks.

Hi,

I converted a Caffe model to GIE, and I used plugin modules to implement the Batch Normalization layers. When TensorRT builds the engine (with the buildCudaEngine function), there is an error: "Custom layer bn0_1 returned non-zero initialization" (bn0_1 is the BN layer name).

How can I solve this issue?

Thanks.

Hi,

Looks like there is some unexpected behaviour in your plugin layer.
You can find more implementation hints in our samples:

Native samples:
/usr/src/tensorrt/samples/samplePlugin/samplePlugin.cpp
/usr/src/tensorrt/samples/sampleCharRNN/sampleCharRNN.cpp
/usr/src/tensorrt/samples/sampleFasterRCNN/sampleFasterRCNN.cpp

TX2 sample:

Thanks.

Hi, I succeeded in processing the Caffe model with TensorRT, thanks for your help.

But there is an error when I use FP16 mode: "cudnnLayerUtils.cpp:98: void* nvinfer1::cudnn::getTensorMem(const nvinfer1::cudnn::EngineTensor&, void**, void**): Assertion `start[vectorIndex]%spv == 0' failed". I am using TensorRT 3.0, CUDA 8.0, and cuDNN 7.

Hi,

This is a known issue and has already been fixed.
Could you try our latest TensorRT package? (should be v3.0.4)
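
If you are not sure which version is actually loaded at run time, a small check program along these lines prints it (assuming getInferLibVersion() is available in your NvInfer.h); alternatively, dpkg -l | grep nvinfer shows the installed package version.

// Prints the TensorRT version linked at run time.
// getInferLibVersion() returns MAJOR * 1000 + MINOR * 100 + PATCH (e.g. 3004 for 3.0.4).
#include <NvInfer.h>
#include <cstdio>

int main()
{
    int v = getInferLibVersion();
    std::printf("TensorRT %d.%d.%d\n", v / 1000, (v / 100) % 10, v % 100);
    return 0;
}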

Thanks.

Hi AastaLLL,

I installed 3.0.4 with CUDA 9 and cuDNN 7, but I still get: python: cudnnLayerUtils.cpp:98: void* nvinfer1::cudnn::getTensorMem(const nvinfer1::cudnn::EngineTensor&, void**, void**): Assertion `start[vectorIndex]%spv == 0' failed.

Thanks

Hi,

We assume you are using TensorRT in an x86-based environment.
Could you test our latest TensorRT package, which should be version 4.0?

Thanks.

Could you please suggest how to implement a BatchNorm layer with pretrained weights saved in a .wts file while constructing the model with the TensorRT C++ API?

Thanks

Hi,

You can achieve this with the scale layer:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_network_definition.html#a37cf24c7c620aa661de167f302559289

virtual IScaleLayer* addScale(ITensor& input, ScaleMode mode, Weights shift, Weights scale, Weights power) = 0;
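
A rough, untested sketch of how the BatchNorm statistics could be folded into addScale is shown below. The helper name is made up, gamma/beta/mean/var are float arrays you would read from your .wts file yourself, and the scale/shift buffers must stay alive until the engine is built:

// Sketch only: turn BatchNorm statistics into a per-channel scale layer.
// y = gamma * (x - mean) / sqrt(var + eps) + beta
//   = scale * x + shift,  with
//   scale = gamma / sqrt(var + eps),  shift = beta - mean * scale
#include <NvInfer.h>
#include <cmath>
#include <vector>

using namespace nvinfer1;

IScaleLayer* addBatchNorm(INetworkDefinition& network, ITensor& input,
                          const float* gamma, const float* beta,
                          const float* mean, const float* var,
                          int channels, float eps,
                          std::vector<float>& scaleBuf,   // must outlive the engine build
                          std::vector<float>& shiftBuf)
{
    scaleBuf.resize(channels);
    shiftBuf.resize(channels);
    for (int c = 0; c < channels; ++c)
    {
        scaleBuf[c] = gamma[c] / std::sqrt(var[c] + eps);
        shiftBuf[c] = beta[c] - mean[c] * scaleBuf[c];
    }

    Weights scale{DataType::kFLOAT, scaleBuf.data(), channels};
    Weights shift{DataType::kFLOAT, shiftBuf.data(), channels};
    Weights power{DataType::kFLOAT, nullptr, 0};   // empty weights default to power = 1

    // kCHANNEL applies one (shift, scale) pair per channel.
    return network.addScale(input, ScaleMode::kCHANNEL, shift, scale, power);
}

Note that if the weights come from a stock Caffe BatchNorm layer, its mean/variance blobs are stored scaled by a third scale-factor blob, so divide them by that factor first.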

Thanks.

AastaLLL, thanks for your help. I have solved this issue.