unsupported operation _FusedBatchNormV3

L4T: R32 (release), REVISION: 2.1
UFF: 0.6.3
TensorRT: 5.1.6.1
CUDA: 10

I receive the error message “unsupported operation _FusedBatchNormV3” from the script below. However, the UFF model was created successfully, and the parser successfully registered the inputs and outputs. Is this operation truly unsupported, and if so, is there an existing plugin that I can pull in from somewhere?

import tensorrt as trt
import uff

output_nodes = [args.output_node_names]
input_node = args.input_node_name
frozen_graph_pb = args.frozen_graph_pb

uff_model = uff.from_tensorflow(frozen_graph_pb, output_nodes)  # successfully creates the UFF model

G_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(G_LOGGER)
builder.max_batch_size = 10
builder.max_workspace_size = 1 << 30
network = builder.create_network()

data_type = trt.DataType.FLOAT

parser = trt.UffParser()
input_verified = parser.register_input(input_node, (1, 234, 234, 3))   # returns True
output_verified = parser.register_output(output_nodes[0])              # returns True
buffer_verified = parser.parse_buffer(uff_model, network, data_type)   # returns False

input_verified and output_verified are True, buffer_verified is False, and the following output is produced:

[TensorRT] ERROR: UffParser: Validator error: sequential/batch_normalization_1/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3

Is there an existing plugin for the FusedBatchNormV3 operation, or is there a workaround for this?

Hi,
The FusedBatchNormV3 operation is currently not supported by the UFF parser.

You can try converting your model to ONNX instead of UFF using tf2onnx:
https://github.com/onnx/tensorflow-onnx
tf2onnx supports converting this op to the BatchNormalization op in ONNX:
https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/nn.py#L470
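
For example, assuming a frozen graph file frozen_graph.pb with hypothetical tensor names input_node:0 and output_node:0 (substitute your own), a command-line conversion could look like this:

python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input_node:0 \
    --outputs output_node:0 \
    --output model.onnx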

The BatchNormalization op is supported by the TensorRT ONNX parser:
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
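
Once you have the ONNX model, parsing it looks much like your UFF script. Here is a minimal sketch against the TensorRT 5.1 Python API (model.onnx is a placeholder file name):

import tensorrt as trt

G_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(G_LOGGER)
builder.max_batch_size = 10
builder.max_workspace_size = 1 << 30

network = builder.create_network()
parser = trt.OnnxParser(network, G_LOGGER)  # ONNX parser instead of trt.UffParser()

# Input shapes and output tensors are stored in the ONNX model itself,
# so there is no register_input/register_output step.
with open("model.onnx", "rb") as f:
    parsed = parser.parse(f.read())  # returns True on success

if not parsed:
    for i in range(parser.num_errors):
        print(parser.get_error(i))
else:
    engine = builder.build_cuda_engine(network)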

Thanks