convert-to-uff throws invalid data type TypeError

I am using TRT 4.0.1.6 and the convert-to-uff utility to convert a .pb file into a .uff file. If I do this for the frozen .pb file from sampleUFFSSD, it still converts to .uff, skipping over the unsupported layers. But if I do it for another model, I get an “invalid” data type error. How do I figure out which layer is causing this? My model is huge, with over 2000 nodes. Here is the traceback:

DEBUG: convert reshape to flatten node
Traceback (most recent call last):
  File "/home/dhingratul/.virtualenvs/trt4/bin/convert-to-uff", line 11, in <module>
    sys.exit(main())
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/bin/convert_to_uff.py", line 105, in main
    output_filename=args.output
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 63, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 42, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter_functions.py", line 429, in convert_sum
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="sum", **kwargs)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter_functions.py", line 416, in _reduce_helper
    array = tf2uff.convert_tf2numpy_const_node(tf_axes_node)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 121, in convert_tf2numpy_const_node
    np_dtype = cls.convert_tf2numpy_dtype(tf_node.attr['dtype'].type)
  File "/home/dhingratul/.virtualenvs/trt4/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 95, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: data type "invalid" not understood

I have the same error for ConcatV2. Did you solve this problem?

No, I don’t know where the error is coming from, and my graph is too big to check one node at a time. In your case, since you know ConcatV2 is the problem, you can turn it into a custom layer by adding it to the graphsurgeon preprocessor script, along the lines of the sketch below.
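For reference, a minimal preprocessor sketch in the style of the sampleUFFSSD config.py; the node name, plugin op name, and file names here are placeholders, not taken from a real model:

import graphsurgeon as gs

# Placeholder node that the UFF converter will emit as a custom op;
# the op string must match whatever your TensorRT plugin factory handles
concat_plugin = gs.create_plugin_node(
    name="concat_plugin",
    op="ConcatPlugin")

# Map the offending TF node (hypothetical name) to the plugin node
namespace_plugin_map = {
    "my_model/ConcatV2": concat_plugin,
}

def preprocess(dynamic_graph):
    # convert-to-uff calls this hook before conversion
    dynamic_graph.collapse_namespaces(namespace_plugin_map)

You then pass this script to the converter with the -p flag (see convert-to-uff --help for the exact invocation in your version).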

In my case, the error occurred when a ConcatV2 was the input of a Transpose. This kind of error seems to happen when the converter tries to convert an input that is not a constant operation but is generally expected to be a constant.
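To narrow this down without inspecting 2000+ nodes by hand, one option is a small scan of the frozen GraphDef. A sketch, assuming the graph loads with plain TensorFlow 1.x (frozen.pb is a placeholder path); it flags reduction/concat nodes whose axis input is not produced by a Const op:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

nodes_by_name = dict((n.name, n) for n in graph_def.node)

def producer(input_name):
    # Strip the control-dependency "^" prefix and ":output_slot" suffix
    return nodes_by_name[input_name.lstrip("^").split(":")[0]]

# Ops whose last input (the axis/indices) the UFF converter reads as a constant
for node in graph_def.node:
    if node.op in ("Sum", "Mean", "Max", "Min", "Prod", "ConcatV2"):
        axis_node = producer(node.input[-1])
        if axis_node.op != "Const":
            print("suspect: %s (%s), axis from %s (%s)"
                  % (node.name, node.op, axis_node.name, axis_node.op))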

@NVES

GPU type: 1080Ti 11GB (dual)
NVIDIA driver version: 396.54
CUDA version: CUDA 9.0
cuDNN version: cuDNN 7.1
Python version: Python 2.7
TensorFlow version: tensorflow-gpu 1.7, 1.8, 1.9, 1.10 (persists with all versions)
TensorRT version: TensorRT 4.0.1.6

Hello,

It looks like a tf.reduce_sum layer.

Per the traceback, it is calling convert_sum, which in turn calls _reduce_helper, meaning the failing node is a reduction whose op is sum, i.e. reduce_sum.

Usually, this type of error is due to the axis node of the reduction being non-constant.
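For illustration, a minimal TF 1.x sketch of the difference (the variable names are mine, not from the model above):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10])

# Constant axis: frozen as a Const node, so the converter can read it
ok = tf.reduce_sum(x, axis=1)

# Computed axis: frozen as a non-Const node, which is the pattern
# that makes convert-to-uff fail on the axis dtype
dyn_axis = tf.rank(x) - 1
bad = tf.reduce_sum(x, axis=dyn_axis)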

Thank you for the quick response and for pointing out which layer might be the issue. Do you mean the case where the axis changes dynamically? Also, is there a “log” option that produces verbose failure output from convert-to-uff?

Yes, that’s what I meant.

We have plans for a converter --debug option in a future version to make this kind of debugging easier. Sorry, we cannot share more information about future releases here; please watch our announcements for details.

@NVES I checked the code: the axis is either None (the default) or set with proper “axis” and “keep_dims” arguments. However, the shapes of the operations are not defined. Do I need to freeze the shape of the input tensor to the reduce_sum for the conversion to work? Is there a workaround that avoids fixing the shape? I believe fixing it won’t be possible in my case. Also, with the TF-TRT integration I am actually able to convert the TF graph to a TRT graph using

trt.create_inference_graph()
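Roughly like this, with a hypothetical output node name and settings (TF 1.7+, tensorflow.contrib.tensorrt):

import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,   # the frozen tf.GraphDef
    outputs=["output_node"],            # hypothetical output tensor name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP32")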

Hello,

It’s not the shape of the input tensor, but instead the axis you’re doing the reduction on. You would need to know the axis (should just be a single value) at engine build time.
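If the axis is actually known ahead of time but ends up as a computed node in the frozen graph, one possible workaround (a sketch, not an official recipe; the node name and the axis value are placeholders) is to splice a Const node into the GraphDef before conversion:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Build a Const node holding the axis you know at build time
axis_const = tf.NodeDef()
axis_const.name = "sum_axis_const"
axis_const.op = "Const"
axis_const.attr["dtype"].type = tf.int32.as_datatype_enum
axis_const.attr["value"].tensor.CopyFrom(
    tf.make_tensor_proto([1], dtype=tf.int32))  # placeholder axis
graph_def.node.extend([axis_const])

# Rewire the Sum node's axis input (its last input) to the new Const
for node in graph_def.node:
    if node.op == "Sum" and node.name == "my_reduce_sum":  # placeholder name
        node.input[-1] = axis_const.name

with tf.gfile.GFile("frozen_fixed.pb", "wb") as f:
    f.write(graph_def.SerializeToString())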

Hi,

Was this issue ever solved? I have a similar problem with the tf.reduce_sum operator, and I don’t quite understand what you mean by

“It’s not the shape of the input tensor, but instead the axis you’re doing the reduction on. You would need to know the axis (should just be a single value) at engine build time.”

If I have a fully convolutional network (i.e. the image input size is variable), does that mean this does not work?