sampleUffSSD conversion fails? (KeyError: 'image_tensor')

I downloaded TensorRT-4.0.1.6 and tried sampleUffSSD.

As a first step, I tried to convert the frozen graph to UFF as described in the instructions (README.txt in the sampleUffSSD folder):

convert-to-uff tensorflow --input-file frozen_inference_graph.pb -O NMS -p config.py

The error looks like this:

Loading frozen_inference_graph.pb
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS yet.
Converting as custom op NMS NMS
name: "NMS"
op: "NMS"
input: "Input"
input: "Squeeze"
input: "concat_priorbox"
input: "concat_box_conf"
attr {
  key: "iouThreshold_u_float"
  value {
    f: 0.6000000238418579
  }
}
attr {
  key: "maxDetectionsPerClass_u_int"
  value {
    i: 100
  }
}
attr {
  key: "maxTotalDetections_u_int"
  value {
    i: 100
  }
}
attr {
  key: "numClasses_u_int"
  value {
    i: 91
  }
}
attr {
  key: "scoreConverter_u_str"
  value {
    s: "SIGMOID"
  }
}
attr {
  key: "scoreThreshold_u_float"
  value {
    f: 9.99999993922529e-09
  }
}

Warning: No conversion function registered for layer: concat_box_conf yet.
Converting as custom op concat_box_conf concat_box_conf
name: "concat_box_conf"
op: "concat_box_conf"
input: "BoxPredictor_0/Reshape_1"
input: "BoxPredictor_1/Reshape_1"
input: "BoxPredictor_2/Reshape_1"
input: "BoxPredictor_3/Reshape_1"
input: "BoxPredictor_4/Reshape_1"
input: "BoxPredictor_5/Reshape_1"

Traceback (most recent call last):
  File "/home/abc/.virtualenvs/tf/bin/convert-to-uff", line 11, in <module>
    sys.exit(main())
  File "/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff/bin/convert_to_uff.py", line 105, in main
    output_filename=args.output
  File "/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 58, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'image_tensor'

It complains there is no 'image_tensor' key, but I checked and there is an 'image_tensor' node in the frozen graph. What is the problem?
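For reference, here is roughly how I checked (a minimal sketch, assuming TensorFlow 1.x):

import tensorflow as tf

# Load the frozen GraphDef and list every node whose name mentions image_tensor.
graph_def = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if "image_tensor" in n.name])  # prints ['image_tensor'] here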

Suspect 1:

In config.py, the preprocessor seems to replace "Preprocessor" → "Input", "ToFloat" → "Input", and "image_tensor" → "Input":

namespace_plugin_map = {
    ...
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    ...
}

The conversion then deletes the original input placeholder "image_tensor". However,
since "ToFloat" has "image_tensor" as an input, its replacement "Input" will still reference the input "image_tensor", which has already been deleted.

I found that this is where the error comes from. Is it a bug, or was there an update? I am wondering how the stock sample can hit this error. Wasn't this a problem before? Could you please verify?
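To illustrate the suspicion, here is a hypothetical reproduction (assuming graphsurgeon from the TensorRT tarball, with the Input plugin node defined as in the sample's config.py):

import graphsurgeon as gs
import tensorflow as tf

# The same Input placeholder that the sample's config.py creates.
Input = gs.create_plugin_node(name="Input", op="Placeholder",
                              dtype=tf.float32, shape=[1, 3, 300, 300])

dynamic_graph = gs.DynamicGraph("frozen_inference_graph.pb")
dynamic_graph.collapse_namespaces({"Preprocessor": Input,
                                   "ToFloat": Input,
                                   "image_tensor": Input})

# After collapsing, some node still lists the now-deleted "image_tensor" as an
# input; that is exactly the lookup that raises the KeyError in the converter.
for node in dynamic_graph.as_graph_def().node:
    if "image_tensor" in node.input:
        print(node.name, "still references image_tensor")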


Suspect 2:

One thing I suspect is inconsistent versions of the 'uff' module, as shown below:

(tf) $ python -m pip show uff    
Name: uff
Version: 0.4.0
Summary: Toolkit for working with the Universal Framwork Format (UFF). Provides a converter from the tensorflow graph format to UFFwith support for more frameworks coming.
Home-page: https://developer.nvidia.com/tensorrt
Author: NVIDIA Corporation
Author-email: cudatools@nvidia.com
License: NVIDIA Software License
Location: /home/abc/.virtualenvs/tf/lib/python3.5/site-packages
Requires: protobuf, numpy
Required-by: 

(tf) $ python                
Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
>>> uff.__version__
'0.3.0'
>>> uff.__path__
['/home/abc/.virtualenvs/tf/lib/python3.5/site-packages/uff']

Is this normal?
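For what it's worth, the two numbers can be compared from inside one interpreter (a small sketch; pkg_resources ships with setuptools):

import uff
import pkg_resources

print(uff.__version__)                                # version string hardcoded in the module: '0.3.0'
print(pkg_resources.get_distribution("uff").version)  # metadata that `pip show` reads: '0.4.0'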


Just in case, here are the files I used:
(However, I downloaded the graph as instructed, and I didn't touch config.py, so there is nothing special.)

frozen_inference_graph: https://drive.google.com/open?id=1PhSquQXaC2Cs1H_TwicCbNYT6Wvgt-E9
config.py: https://drive.google.com/open?id=14JgQh8vOYR0cjMtiRRyXGnXdiEn0zOa-

I have the same issue. I used the same config.py as the example, and the ssd_inception_v2_coco_2018_01_28 model referenced in the README.

I get this error:

Traceback (most recent call last):
  File "/usr/local/bin/convert-to-uff", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/site-packages/uff/bin/convert_to_uff.py", line 105, in main
    output_filename=args.output
  File "/usr/local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/usr/local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 58, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'image_tensor'

I then removed image_tensor from config.py, and it converts correctly, but then I get a different error when building the engine:

./bin/sample_uff_ssd
data/ssd/sample_ssd.uff
Begin parsing model...
ERROR: UFFParser: Parser error: image_tensor: Invalid DataType value!
ERROR: sample_uff_ssd: Fail to parse
sample_uff_ssd: sampleUffSSD.cpp:667: int main(int, char**): Assertion `tmpEngine != nullptr' failed.
Aborted (core dumped)
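For reference, the config.py change was just dropping that one mapping (a sketch against the sample's file; the plugin nodes are defined earlier in it):

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    # "image_tensor": Input,  # removed -- this entry triggered the KeyError
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}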

I've noticed the same problem here. Any solution would be greatly appreciated!

I've been able to do the UFF conversion successfully. See my setup here: https://devtalk.nvidia.com/default/topic/1037060/tensorrt/trt-4-0-sampleuffssd-int8-calibration-failing/post/5268434/#5268434

I did not have to modify the config.py at all.

During installation, however, to get convert-to-uff to show up in /usr/bin, I had to install it from the .whl file in the tar package, while the rest of TensorRT was installed via the .deb files.
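Something like this, with the exact wheel name depending on your tar layout:

python -m pip install TensorRT-4.0.1.6/uff/uff-*.whl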

Also note, I see the same odd version inconsistency for uff, where pip show uff reports 0.4.0 while uff.__version__ in Python reports 0.3.0.

@paulkwon Your files on Drive require permission; if you make them public, I'll check them out.
config.py.log (1.31 KB)

Hey joe, I have the latest and greatest:

ii  graphsurgeon-tf                                             4.1.2-1+cuda9.0                                             amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                              4.1.2-1+cuda9.0                                             amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          4.1.2-1+cuda9.0                                             amd64        TensorRT samples and documentation
ii  libnvinfer4                                                 4.1.2-1+cuda9.0                                             amd64        TensorRT runtime libraries
ii  python-libnvinfer                                           4.1.2-1+cuda9.0                                             amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                       4.1.2-1+cuda9.0                                             amd64        Python development package for TensorRT
ii  python-libnvinfer-doc                                       4.1.2-1+cuda9.0                                             amd64        Documention and samples of python bindings for TensorRT
ii  python3-libnvinfer                                          4.1.2-1+cuda9.0                                             amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                      4.1.2-1+cuda9.0                                             amd64        Python 3 development package for TensorRT
ii  python3-libnvinfer-doc                                      4.1.2-1+cuda9.0                                             amd64        Documention and samples of python bindings for TensorRT
ii  tensorrt                                                    4.0.1.6-1+cuda9.0                                           amd64        Meta package of TensorRT
ii  uff-converter-tf                                            4.1.2-1+cuda9.0                                             amd64        UFF converter for TensorRT package

Also, when I list the tensors with the -l flag, I don't see any image_tensor node anywhere.
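(That is, something along these lines:)

convert-to-uff tensorflow --input-file frozen_inference_graph.pb -l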

Here's the exact output I get when running the converter:

$ convert-to-uff tensorflow --input-file stock_frozen_inference_graph.pb -O NMS -p config.py
Loading stock_frozen_inference_graph.pb
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS yet.
Converting as custom op NMS NMS
name: "NMS"
op: "NMS"
input: "concat_box_loc"
input: "concat_priorbox"
input: "concat_box_conf"
attr {
  key: "iouThreshold_u_float"
  value {
    f: 0.600000023842
  }
}
attr {
  key: "maxDetectionsPerClass_u_int"
  value {
    i: 100
  }
}
attr {
  key: "maxTotalDetections_u_int"
  value {
    i: 100
  }
}
attr {
  key: "numClasses_u_int"
  value {
    i: 91
  }
}
attr {
  key: "scoreConverter_u_str"
  value {
    s: "SIGMOID"
  }
}
attr {
  key: "scoreThreshold_u_float"
  value {
    f: 9.99999993923e-09
  }
}

Warning: No conversion function registered for layer: concat_box_conf yet.
Converting as custom op concat_box_conf concat_box_conf
name: "concat_box_conf"
op: "concat_box_conf"
input: "BoxPredictor_0/Reshape_1"
input: "BoxPredictor_1/Reshape_1"
input: "BoxPredictor_2/Reshape_1"
input: "BoxPredictor_3/Reshape_1"
input: "BoxPredictor_4/Reshape_1"
input: "BoxPredictor_5/Reshape_1"

Warning: No conversion function registered for layer: concat_priorbox yet.
Converting as custom op concat_priorbox concat_priorbox
name: "concat_priorbox"
op: "concat_priorbox"
input: "PriorBox"
attr {
  key: "axis_u_int"
  value {
    i: 2
  }
}
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: PriorBox yet.
Converting as custom op PriorBox PriorBox
name: "PriorBox"
op: "PriorBox"
input: "strided_slice_6"
input: "strided_slice_7"
attr {
  key: "aspectRatios_u_flist"
  value {
    list {
      f: 1.0
      f: 2.0
      f: 0.5
      f: 3.0
      f: 0.330000013113
    }
  }
}
attr {
  key: "featureMapShapes_u_ilist"
  value {
    list {
      i: 19
      i: 10
      i: 5
      i: 3
      i: 2
      i: 1
    }
  }
}
attr {
  key: "layerVariances_u_flist"
  value {
    list {
      f: 0.10000000149
      f: 0.10000000149
      f: 0.20000000298
      f: 0.20000000298
    }
  }
}
attr {
  key: "maxScale_u_float"
  value {
    f: 0.949999988079
  }
}
attr {
  key: "minScale_u_float"
  value {
    f: 0.20000000298
  }
}
attr {
  key: "numLayers_u_int"
  value {
    i: 6
  }
}

Warning: No conversion function registered for layer: concat_box_loc yet.
Converting as custom op concat_box_loc concat_box_loc
name: "concat_box_loc"
op: "concat_box_loc"
input: "Squeeze"
input: "Squeeze_1"
input: "Squeeze_2"
input: "Squeeze_3"
input: "Squeeze_4"
input: "Squeeze_5"

No. nodes: 795
UFF Output written to stock_frozen_inference_graph.pb.uff

You can find my config.py attached to my previous comment. I just added the .log so it could be uploaded. Is there anything else about my setup you want to know?

Hey joe, thanks for the quick response.
We have identical config.py files and installed packages, so that leaves the actual model. I'm using this: download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz

Ahh yes, that appears to be the difference. I haven't pulled the latest model and am still using:
ssd_inception_v2_coco_2017_11_17

Use this to get the older model:
curl -L http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz | tar -xz -C .

OK, this works, thanks a lot.
So there are changes in the graph-freezing process; the team maintaining the guide should mention that somewhere.

Indeed, it should be added.

Were you able to get the INT8 calibration to work?

I'm getting the following error: https://devtalk.nvidia.com/default/topic/1037060/tensorrt/trt-4-0-sampleuffssd-int8-calibration-failing/post/5268434/#5268434

ERROR: Tensor FeatureExtractor/InceptionV2/InceptionV2/Mixed_5c/Branch_3/Conv2d_0b_1x1/Relu6/relu2 is uniformly zero; network calibration failed.
sample_uff_ssd: ../builder/cudnnBuilder2.cpp:1227: nvinfer1::cudnn::Engine* nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, const nvinfer1::cudnn::HardwareContext&, const nvinfer1::Network&): Assertion `it != tensorScales.end()' failed.
Aborted (core dumped)

We are not really interested in INT8, only FP16, since accuracy is paramount in our case.
But I'll give it a go.
Now I need to figure out what went wrong with the newer model so I can re-freeze my own custom SSD models.

Yes, I also verified that the old version of the frozen graph works. Thanks for the link!

@paulkwon,

I solved the new-model problem. The following code works with both the old and the new model. Please try it and let me know if you run into any problems.

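# Note: the plugin nodes referenced below (Input, PriorBox, NMS, concat_priorbox,
# concat_box_loc, concat_box_conf) are the ones defined earlier in the sample's
# config.py via gs.create_plugin_node.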
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    # "ToFloat": Input,
    # "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

namespace_remove = {
    "ToFloat",
    "image_tensor",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3",
}

def preprocess(dynamic_graph):
    # Remove the unrelated or erroneous layers
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)

    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

    # Remove the Squeeze to avoid "Assertion `isPlugin(layerName)' failed"
    Squeeze = dynamic_graph.find_node_inputs_by_name(dynamic_graph.graph_outputs[0], 'Squeeze')
    dynamic_graph.forward_inputs(Squeeze)

This works like a charm, thanks a lot!

@liuyoungshop, thanks for providing a more sustainable answer! It was a great help!

If you are using the newest TensorFlow Object Detection API to export the frozen graph, please change

"MultipleGridAnchorGenerator/Concatenate": concat_priorbox,

to

"Concatenate": concat_priorbox,

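With that change, the map from the earlier post becomes (a sketch; the plugin nodes are still the ones from the sample's config.py):

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "Concatenate": concat_priorbox,  # was "MultipleGridAnchorGenerator/Concatenate"
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}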

@liuyoungshop, the earlier fix for the newer model worked like a charm.
I am a newcomer to this whole TensorRT business, and I am having a hard time understanding a few things:

  • How did you know that you had to remove those nodes for the newer model?
  • How do you know what to replace an unsupported layer with? Any advice and/or resources would be greatly appreciated.

Thanks

Hi All,

I am trying to convert a model (.pb) to UFF on my host PC to do inference on a Jetson TX2. I tried converting it on the TX2 as well, but received the same KeyError: 'NMS'.

Model: ssd_inception_v2_coco_2017_11_17
TensorFlow versions tried: 1.9 | 2.0 | 1.13 | 1.14
uff version: '0.2.0'

Command:

python3 convert_to_uff.py --input-file frozen_inference_graph.pb -O NMS -p config.py

Error:

Loading frozen_inference_graph.pb
Using output node NMS
Converting to UFF graph
Traceback (most recent call last):
  File "convert_to_uff.py", line 93, in <module>
    main()
  File "convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 42, in convert_tf2uff_node
    tf_node = tf_nodes[name]
KeyError: 'NMS'

Any leads would be great. Thank you.

@rohit167 May I know which TensorRT version and which L4T version? I am using JetPack 4.2.2 and had no problem with TensorFlow 1.12.

Thanks @liuyoungshop and @paulkwon. It is working with TensorFlow version 1.15.2, the latest Object Detection API, and TensorRT 6.