Seeing "Unsupported binary op max with constant right" when converting TensorFlow graph to TensorRT engine.

Does this mean that using a constant as one of the operands in tf.Maximum() is not supported?
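
For reference, a minimal TF 1.x sketch of the pattern I mean (shapes and names are illustrative, not my actual model):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(1, 3, 224, 224), name="input")
    sum_sq = tf.reduce_sum(tf.square(x), axis=1, keepdims=True)
    # Maximum with a scalar constant as the right operand -- this is the
    # node the UFF parser rejects as "Unsupported binary op max with
    # constant right".
    y = tf.maximum(sum_sq, 1e-12)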

Hi~ Is there any update on this issue? I'm getting the same problem…

I replaced it with a custom layer during UFF conversion.

Hi~ Can you elaborate on the solution? @yuan159852

The same problem occurred when I converted a TensorFlow model to a TensorRT engine: I got the .pb file and the .uff file, but the conversion from the .uff file to the TensorRT engine failed.

The original error string is:
[TensorRT] ERROR: UFFParser: Parser error: head/l2_normalize/Maximum: Unsupported binary op max with constant right

Environment:
Ubuntu 16.04
NVIDIA GeForce GTX 1060
TensorRT version: 5.0.2.6
TensorFlow model: resnet_v1_50
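
The failing pipeline, roughly (a sketch; file names and the input/output node names are placeholders for my actual model):

    import uff
    import tensorrt as trt

    # Step 1: frozen .pb -> .uff (file and node names are placeholders)
    uff.from_tensorflow_frozen_model(
        frozen_file="resnet_v1_50.pb",
        output_nodes=["head/l2_normalize"],
        output_filename="resnet_v1_50.uff",
    )

    # Step 2: .uff -> TensorRT engine; parse() is where the
    # "Unsupported binary op max with constant right" error appears.
    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network() as network, \
            trt.UffParser() as parser:
        parser.register_input("input", (3, 224, 224))
        parser.register_output("head/l2_normalize")
        parser.parse("resnet_v1_50.uff", network)
        engine = builder.build_cuda_engine(network)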

I am receiving the same error. Any movement on this issue other than creating a custom layer? tf.maximum is supposed to be supported under TensorRT 5.1.2.2.

Facing the same issue. Any solution?

Original topic: https://devtalk.nvidia.com/default/topic/1055138/tensorrt/error-parsing-uff-model-quot-unsupported-operation-_leakyrelu-quot-/

In my case (because l2_normalize is the last layer), I was able to remove it and successfully convert to TRT:

dynamic_graph.remove("orientation/l2_normalize")
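
For completeness, a fuller sketch of that removal step with graphsurgeon (node paths are specific to my graph, and the new output node name below is hypothetical; the engine then outputs the un-normalized tensor, so normalize it yourself after inference):

    import graphsurgeon as gs
    import uff

    dynamic_graph = gs.DynamicGraph("model.pb")  # placeholder path

    # Drop the whole l2_normalize namespace from the graph.
    l2_nodes = dynamic_graph.find_nodes_by_path("orientation/l2_normalize")
    dynamic_graph.remove(l2_nodes, remove_exclusive_dependencies=True)

    # Export to UFF with the layer before l2_normalize as the new output.
    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        output_nodes=["orientation/logits"],  # hypothetical node name
        output_filename="model.uff",
    )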

The TensorRT documentation says:

If either input is a constant, then at least one of the inputs must be 4 dimensional.

Can someone from NVIDIA please share insights into why that is, and what would be the best solution?

Edit:
As per the TensorFlow docs, l2_normalize is

output = x / sqrt(max(sum(x**2), epsilon))

which basically means that both the left and right operands of “max” are scalars, and epsilon is also a constant, so neither input is 4-dimensional as TensorRT requires.
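
A possible workaround (my assumption, not an official fix) is to rebuild the normalization from primitive ops so that no Maximum with a constant operand is emitted, trading the clamp for an epsilon add:

    import tensorflow as tf

    def l2_normalize_trt_friendly(x, axis=-1, epsilon=1e-12):
        # Numerically close to tf.nn.l2_normalize, but adds epsilon
        # instead of clamping with max(sum, epsilon), so no Maximum
        # node with a constant operand appears in the exported graph.
        sum_sq = tf.reduce_sum(tf.square(x), axis=axis, keepdims=True)
        # sqrt + divide rather than rsqrt, since Rsqrt reportedly also
        # trips the UFF parser (see the plugin post below).
        return x / tf.sqrt(sum_sq + epsilon)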

Hey there, I created an l2norm_helper plugin that contains the two operations required to successfully convert tf.nn.l2_normalize to a TRT plan. Please see the GitHub repo for instructions.

The plugin overcomes two issues:

  1. l2_normalize/Maximum: Unsupported binary op max with constant right
  2. l2_normalize/Rsqrt: Unary not supported for other non-constant node

TODO: FP16 still not supported
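
For anyone wiring in a plugin like this, the usual pattern during UFF conversion is to collapse the offending namespace into a single plugin node with graphsurgeon. A sketch only: the op name "L2NormHelper_TRT", the epsilon parameter, and the node paths below are hypothetical; the repo's conversion script defines the real ones.

    import graphsurgeon as gs
    import uff

    dynamic_graph = gs.DynamicGraph("model.pb")  # placeholder path

    # Replace the whole l2_normalize namespace with one plugin node.
    l2norm_plugin = gs.create_plugin_node(
        name="head/l2_normalize",
        op="L2NormHelper_TRT",  # hypothetical registered plugin op
        epsilon=1e-12,
    )
    dynamic_graph.collapse_namespaces({"head/l2_normalize": l2norm_plugin})

    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        output_nodes=["head/l2_normalize"],
        output_filename="model.uff",
    )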
