No attr named 'identical_element_shapes' in NodeDef (Solved)

    precision_mode='FP32')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 153, in create_inference_graph
    int(msg[0]))
tensorflow.python.framework.errors_impl.NotFoundError: No attr named 'identical_element_shapes' in NodeDef:
[[Node: map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=, tensor_array_name=""]]] for 'map/TensorArray' (op: 'TensorArrayV3') with input shapes: .

List of Packages
TensorRT: 4.0.1.6
CUDA: 9.0
Ubuntu: 16.04
Python: 3.5
TensorFlow: 1.10
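For context, the traceback above comes from a TF-TRT conversion call along these lines (a minimal sketch; the graph path and output node names below are placeholders, not taken from the original post):

```python
# Minimal sketch of the TF-TRT conversion that produces the traceback above.
# GRAPH_PB and OUTPUT_NODES are hypothetical; adjust for your model.
GRAPH_PB = "frozen_inference_graph.pb"
OUTPUT_NODES = ["detection_boxes", "detection_scores",
                "detection_classes", "num_detections"]

def convert(pb_path, output_nodes):
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt  # TF 1.x contrib API

    # Load the frozen graph from disk.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())

    # Ask TF-TRT to replace supported subgraphs with TensorRT engines.
    return trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=output_nodes,
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP32")

if __name__ == "__main__":
    trt_graph = convert(GRAPH_PB, OUTPUT_NODES)
```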

Hello p146103,

Thank you for posting on TensorRT Forum. In order for the community to effectively help you, please describe the issue you are seeing.

What is your platform/GPU? What is the usage scenario? What commands are you executing when you see the error? Any usage/source file you can provide will help us debug too.

regards,
NVIDIA Enterprise Support

Simply put, I want to convert a frozen graph to a TRT graph and then use that graph for object detection.

Link to code: Optimize frozen tensorflow graph using TensorRT · GitHub

List of Packages
GPU: GTX 1080
TensorRT: 4.0.1.6
CUDA: 9.0
Ubuntu: 16.04
Python: 3.5
TensorFlow: 1.10

Thanks. Can you share the .pb with us?

This is very likely because you are using a layer that is not supported by TensorRT. What model are you using?

Frozen file

Perfect, thanks. I can repro this locally. Will let you know what we find.

Any update on my problem?

Hello @NVES, any update on my problem?
Thanks

Hello,

I apologize for the wait. We are able to reproduce the issue and engineers are reviewing. Will keep you updated.

Hi @NVES, any update on my problem?

Or, can you suggest any code to convert a frozen graph to a TRT engine?

List of Packages
GPU: GTX 1080
TensorRT: 4.0.1.6
CUDA: 9.0
Ubuntu: 16.04
Python: 3.5
TensorFlow: 1.7

Thanks

To convert a TF frozen graph to a TRT engine, please reference:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/workflows/tf_to_tensorrt.html#Converting-the-TensorFlow-Model-to-UFF
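The linked docs go through UFF as an intermediate format. A minimal sketch of that step, assuming the TensorRT 4 era `uff` Python package (the model path and output node name here are hypothetical placeholders):

```python
# Sketch of the frozen-graph-to-UFF step from the linked docs.
# FROZEN_PB and OUTPUT_NODE are hypothetical; use your own model's values.
FROZEN_PB = "frozen_model.pb"
OUTPUT_NODE = "logits"

def frozen_to_uff(pb_path, output_node, uff_path="model.uff"):
    import uff  # ships alongside the TensorRT Python bindings

    # Convert the frozen TensorFlow graph to a UFF file, which the
    # TensorRT UFF parser can then load to build an engine.
    uff.from_tensorflow_frozen_model(
        pb_path, [output_node], output_filename=uff_path)
    return uff_path

if __name__ == "__main__":
    frozen_to_uff(FROZEN_PB, OUTPUT_NODE)
```

Note that UFF conversion has the same limitation mentioned above: any op in the graph that the UFF parser does not support will fail the conversion.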

What about the file I provided you?

Hi @NVES, any update on my problem?
Are you working on it or not?
I am waiting for your response.

Hello

My apologies for the delay. We just confirmed the fix has been upstreamed to tensorflow/tensorflow:1.12.0-rc0-devel-gpu-py3. This was a TF-TRT bug.
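One way to pick up that fix (assuming Docker with the NVIDIA runtime is already set up on the host):

```shell
# Pull the TensorFlow build that contains the upstreamed TF-TRT fix.
docker pull tensorflow/tensorflow:1.12.0-rc0-devel-gpu-py3

# Run it with GPU access (nvidia-docker syntax of that era).
nvidia-docker run -it --rm tensorflow/tensorflow:1.12.0-rc0-devel-gpu-py3
```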

@p146103 I am having the same issue. Were you able to get this resolved?

FYI, I was able to fix the problem using the NVIDIA-IOT code base, which makes very specific modifications to common networks to get them to run. See this answer: https://devtalk.nvidia.com/default/topic/1043918/jetson-tx2/using-tf-trt-to-convert-mobilenet-ssdlite-model-gives-errors/post/5296261/#5296261