TensorRT BatchNorm Error

Hey team,

I am trying to run TensorRT on a classification model of mine. I have supplied the model and deploy files as required, and I get the following error:

Batch normalization moving average is zero
error parsing layer type BatchNorm index 4
Engine could not be created

Any TensorRT pros out there?

BTW, it works great with my standard GoogLeNet models.

Don’t worry, I figured it out (it seems obvious now). The moving-average factors in my BatchNorm layers were set to zero. I modified them to be non-zero using Python, and now it works.

I am getting a new error now. Any ideas?

caffeParser.cpp:94: virtual nvinfer1::Dims2 CaffeParserPoolingDimsCallback::compute(nvinfer1::Dims2, nvinfer1::Dims2, nvinfer1::Dims2, nvinfer1::Dims2, const char*): Assertion `input.h + 2 * padding.h >= kernel.h' failed.
Aborted
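
Reading the assertion, it seems to say a pooling kernel cannot be taller than its padded input, so presumably one of my pooling layers sees a feature map that has shrunk below its kernel size. A minimal sketch of the check as I understand it (the sizes here are made up for illustration):

def pooling_dims_ok(input_h, kernel_h, padding_h):
    # the parser asserts: input.h + 2 * padding.h >= kernel.h
    return input_h + 2 * padding_h >= kernel_h

# e.g. a 7x7 kernel over a 4x4 feature map with no padding fails the assertion
print(pooling_dims_ok(input_h=4, kernel_h=7, padding_h=0))  # False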

I know it has been a while, but are you still having this problem?

Hmm, I thought the Batch Normalization layer was not supported in TensorRT. From the documentation:

“Batch Normalization can be implemented using the TensorRT Scale layer.”


Did you use a Scale layer?
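
For context, the fold is possible because an inference-time BatchNorm is just a per-channel affine transform, which is exactly what a Scale layer computes. A minimal NumPy sketch of the idea (variable names are mine, not from TensorRT):

import numpy as np

# BatchNorm at inference: y = (x - mean) / sqrt(var + eps)
# An optional Scale layer then applies gamma and beta on top.
# Both collapse into one per-channel scale and shift: y = x * scale + shift
def fold_bn_into_scale(mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    scale = gamma / np.sqrt(var + eps)
    shift = beta - mean * scale
    return scale, shift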

If I have a network containing a BN layer, do I need to re-train it after changing the BN layer to a Scale layer?

Thanks.

Hi, here is how I solved this issue:

import caffe

# load the Caffe model in test mode
net_model = caffe.Net(deploy_file_path, caffemodel_path, caffe.TEST)
# map layer names to layer types so the BatchNorm layers can be found
layer_types = dict(zip(net_model._layer_names,
                       (layer.type for layer in net_model.layers)))
# set every BatchNorm moving-average factor (the third blob) to 1
# (or any other non-zero value you want)
for name in net_model.params:
    if layer_types.get(name) == "BatchNorm":
        net_model.params[name][2].data[0] = 1
# then save it back
net_model.save(caffemodel_path)
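
If I remember the Caffe source correctly, a BatchNorm layer stores three blobs: the running mean (index 0), the running variance (index 1), and a moving-average scale factor (index 2) that the first two are divided by at inference time, which is why a zero in that third blob breaks the normalization.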

If you need any more help with this issue, please let me know.

Thanks
Best regards,

I also get this error. How do I solve it?

caffeParser.cpp:94: virtual nvinfer1::Dims2 CaffeParserPoolingDimsCallback::compute(nvinfer1::Dims2, nvinfer1::Dims2, nvinfer1::Dims2, nvinfer1::Dims2, const char*): Assertion `input.h + 2 * padding.h >= kernel.h' failed.
Aborted