TensorRT 6.0.1 + TensorFlow 1.14 - No conversion function registered for layer: FusedBatchNormV3 yet

Hi,

I had my models working fine with TensorFlow 1.13.0 + TensorRT 5.1.

Now I updated to TensorRT 6, and since the release notes say it was tested with TensorFlow 1.14.0, I also updated TensorFlow to 1.14.0.

When I try my deployment procedure now, the UFF converter complains:

Converting to UFF graph
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.

  • Why is this the case? The release notes say TensorRT 6 was successfully tested with TF 1.14, which doesn’t seem to hold here.

  • Is there a website listing the layers supported by the UFF converter? I can only find such a list for TensorRT itself, not for the UFF converter.

Thanks!

Hi,
The FusedBatchNormV3 operation is currently not supported by the UFF parser.

You can try to convert your model to ONNX instead of UFF using tf2onnx:
https://github.com/onnx/tensorflow-onnx
tf2onnx supports converting this op to BatchNormalization op in ONNX:
https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/nn.py#L470
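
As a concrete sketch, the conversion can be run with the tf2onnx command-line tool. The file name and tensor names below are placeholders, not taken from this thread; substitute your own model's frozen graph and input/output tensor names:

```shell
# Convert a frozen TensorFlow graph to ONNX with tf2onnx.
# "frozen_model.pb", "input:0" and "output:0" are hypothetical names --
# replace them with your graph's actual file and tensor names.
python -m tf2onnx.convert \
    --input frozen_model.pb \
    --inputs input:0 \
    --outputs output:0 \
    --output model.onnx
```

The resulting model.onnx can then be fed to TensorRT's ONNX parser instead of the UFF parser.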

And BatchNormalization op is supported by the TensorRT ONNX parser:
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
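
For reference, the BatchNormalization op that tf2onnx maps FusedBatchNormV3 to computes the standard inference-time normalization, y = gamma * (x - mean) / sqrt(var + eps) + beta, per channel. A minimal pure-Python sketch of that formula (one value per channel, illustrative numbers only):

```python
import math

def batch_norm(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-time batch normalization, as defined by the ONNX
    BatchNormalization op: y = gamma * (x - mean) / sqrt(var + eps) + beta,
    applied independently per channel."""
    return [g * (xi - m) / math.sqrt(v + eps) + b
            for xi, g, b, m, v in zip(x, gamma, beta, mean, var)]

# Three channels with made-up statistics; each input equals its channel
# mean, so the normalized part is zero and only beta survives.
y = batch_norm(x=[1.0, 2.0, 3.0],
               gamma=[1.0, 1.0, 2.0],
               beta=[0.0, 0.5, 0.0],
               mean=[1.0, 2.0, 3.0],
               var=[1.0, 1.0, 1.0])
print(y)  # approximately [0.0, 0.5, 0.0]
```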

Thanks

That’s great, thanks a lot for the detailed answer!

Hi

Did you solve the problem? ONNX supports this layer.

https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/nn.py#L501

I don’t understand: when we work with TensorFlow models, why do we use a UFF/ONNX converter and then parse the result into TensorRT, when we could use the TF-TRT API directly? What is the advantage of the UFF/ONNX converter approach?

It’s because a TF-TRT model is still a TensorFlow graph with only some TensorRT optimizations applied, whereas a model converted from UFF/ONNX is fully optimized for TensorRT and can be deployed anywhere TensorRT runs, for example in DeepStream.