TensorRT 3.0 RC uff-converter-tf tool fails on colon (:) in node name
If there is a colon (:) at the end of a node name in the .pb file, the uff-converter-tf tool stops at:
raise UffException("%s input doesn't exist" % i)

we just edit graph.py to:
def _check_and_get_node(self, node):
    node = node.to_uff()
    for i in node.inputs:
        i = i.rsplit(':', 1)[0]  # we copied this from split_node_name_and_output()
        if i not in self.nodes:
            raise UffException("%s input doesn't exist" % i)
    self.meta_graph.descriptor.check_node(node, self.meta_graph.referenced_data)
    return node


The uff file is generated, but when we import it with the UffParser in TensorRT, it reports:
../data/proj/flow10.uff
ERROR: UFFParser: Graph error: This input (preprocess_image/split:1) doesn't exist
ERROR: sample_flow10: Fail to parse
ERROR: sample_flow10: Model load failed

It seems we can't leave the ':' in the node names of the uff file. What should we do to go further?
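For reference, the ':N' suffix in a TensorFlow input reference is the output index of the producing op, not part of the op name; a bare name means output 0. A minimal plain-Python sketch of that splitting logic (named after the converter helper mentioned above; the real helper's exact signature is an assumption):

```python
def split_node_name_and_output(name):
    """Split a TensorFlow tensor reference such as 'preprocess_image/split:1'
    into (op_name, output_index). A bare name refers to output 0."""
    if ':' in name:
        op_name, index = name.rsplit(':', 1)
        return op_name, int(index)
    return name, 0

print(split_node_name_and_output("preprocess_image/split:1"))  # ('preprocess_image/split', 1)
print(split_node_name_and_output("add"))                       # ('add', 0)
```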

#1
Posted 12/05/2017 07:08 AM   
Hi,

The colon is a special symbol in TensorFlow.
You can use '/' instead of ':'.

Thanks.

#2
Posted 12/05/2017 08:55 AM   
We can't do it in the UFFParser, since it is a compiled C++ class in the TensorRT 3.0 SDK. Can we modify the uff-converter-tf Python source, converter.py, to make it replace ':' with '/', or should we update our TensorFlow freezing code to do that for us?
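If the goal is to pre-process the graph before conversion rather than patch the compiled parser, one option is to drop the ':N' suffix from every input reference in the GraphDef. A sketch of that rewrite (the node dicts here are a simplified stand-in for the real NodeDef protobuf, so this illustrates the idea rather than actual converter.py code):

```python
def strip_output_suffixes(nodes):
    """Drop the ':N' output-index suffix from every input reference.
    Caution: this collapses distinct outputs of multi-output ops such as
    SplitV, so it is only safe when every producer has a single output."""
    for node in nodes:
        node["input"] = [
            inp if inp.startswith("^") or ":" not in inp  # keep control deps intact
            else inp.rsplit(":", 1)[0]
            for inp in node["input"]
        ]
    return nodes

graph = [
    {"name": "split", "op": "SplitV", "input": ["Placeholder", "Const", "split/split_dim"]},
    {"name": "Square_1", "op": "Square", "input": ["split:1"]},
]
strip_output_suffixes(graph)
print(graph[1]["input"])  # ['split']
```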

#3
Posted 12/05/2017 09:04 AM   
When I went back to the TensorRT download page, I realized TensorRT 3.0 has been released and is no longer an RC, but our problem wasn't solved after I updated TensorRT.
I tried to replace ':' with '/', but nodes[input] raised an error, so I just removed the suffix, and then I got the uff file. However, the UFFParser doesn't recognize the ExpandDims operation. As far as we know it is a custom layer, so should it be okay that ExpandDims is unsupported?
../../data/mnist/hello.uff
ERROR: UFFParser: Validator error: stream0/.../.../.../ExpandDims: Unsupported operation _ExpandDims
ERROR: sample_uff_hello: Fail to parse
ERROR: sample_uff_hello: Model load failed
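Note that removing the suffix is lossy whenever the producer has several outputs: 'split_1' and 'split_1:1' collapse to the same name, so a RealDiv over two split outputs silently becomes a tensor divided by itself. A tiny numeric illustration of that failure mode (plain Python, not converter code):

```python
# Two distinct outputs of a hypothetical split op, keyed by tensor reference.
outputs = {"split_1:0": [4.0, 9.0], "split_1:1": [2.0, 3.0]}

def lookup(ref):
    """Resolve an input reference; a bare name means output 0."""
    return outputs[ref if ":" in ref else ref + ":0"]

# With the output index intact, RealDiv sees two different tensors...
correct = [a / b for a, b in zip(lookup("split_1"), lookup("split_1:1"))]
print(correct)  # [2.0, 3.0]

# ...but after stripping ':1', both operands resolve to output 0.
stripped = "split_1:1".rsplit(":", 1)[0]
wrong = [a / b for a, b in zip(lookup("split_1"), lookup(stripped))]
print(wrong)    # [1.0, 1.0]
```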

#4
Posted 12/06/2017 01:49 PM   
Hi,

We want to know more about this issue first.

Usually, we name TensorFlow layers without special symbols.
Is the particular symbol you mentioned added by TensorFlow automatically?
If so, the UFF parser should be fine with this naming strategy.

From the last comment, it looks like the error comes from an unsupported operation rather than a particular symbol.
Is this correct?

Thanks.

#5
Posted 12/07/2017 06:00 AM   
We generated the .pb file by freezing our trained TensorFlow model. The network is VGG16 with a preprocessing step; some node inputs such as "input":"name:3" (the "3" is the output index, which TensorFlow adds automatically) come from TensorFlow itself.
My team leader is preparing the simplified .pb file, which includes the custom operations and the particular symbol. You will receive an email soon containing the TensorFlow test.py and the frozen .pb file. Would you help us convert it to uff and load it with the UffParser?

#6
Posted 12/07/2017 12:41 PM   
Hi,
We post the test.py that generates all our custom layers and the special symbol; I paste it here:
test.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys
import tensorflow as tf
import numpy as np
from tensorflow.python.platform import gfile

slim = tf.contrib.slim

#####################
# tf.split operation
# tf.square operation
# tf.real_div operation
a = tf.placeholder(dtype=tf.float32, shape=[10, 20, 30])
a1, a2 = tf.split(a, [10, 10], 1)
a3 = tf.square(a1) + tf.square(a2)
a4, a5 = tf.split(a3, [5, 5], 0)
a6 = tf.realdiv(a4, a5)

#####################
# tf.expand_dims and squeeze operation
b = tf.placeholder(dtype=tf.float32, shape=[10, 20, 30])
b1 = tf.expand_dims(b, 0)
b2 = tf.squeeze(b1, 0)

#####################
# tf.slice operation
c = tf.placeholder(dtype=tf.float32, shape=[10, 20, 30])
c1 = tf.slice(c, [0, 0, 0], [1, 1, 30])

#####################
# tf.unstack operation
d = tf.placeholder(dtype=tf.float32, shape=[10, 20, 30])
d1 = tf.unstack(d)


#####################
# Launch the default graph.
with tf.Session() as sess:
    sess.run(b2, feed_dict={b: np.random.rand(10, 20, 30)})


#####################
# export the graph
my_graph_def = tf.get_default_graph().as_graph_def(add_shapes=True)
with gfile.FastGFile("test.pb", 'wb') as f:  # 'wb': SerializeToString() returns bytes
    f.write(my_graph_def.SerializeToString())


test.pb.txt
node {
name: "Placeholder"
op: "Placeholder"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
node {
name: "Const"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 2
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
dim {
size: 2
}
}
tensor_content: "\n\000\000\000\n\000\000\000"
}
}
}
}
node {
name: "split/split_dim"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
}
int_val: 1
}
}
}
}
node {
name: "split"
op: "SplitV"
input: "Placeholder"
input: "Const"
input: "split/split_dim"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "Tlen"
value {
type: DT_INT32
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 10
}
dim {
size: 30
}
}
shape {
dim {
size: 10
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
attr {
key: "num_split"
value {
i: 2
}
}
}
node {
name: "Square"
op: "Square"
input: "split"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "Square_1"
op: "Square"
input: "split:1"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "add"
op: "Add"
input: "Square"
input: "Square_1"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "Const_1"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 2
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
dim {
size: 2
}
}
tensor_content: "\005\000\000\000\005\000\000\000"
}
}
}
}
node {
name: "split_1/split_dim"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
}
int_val: 0
}
}
}
}
node {
name: "split_1"
op: "SplitV"
input: "add"
input: "Const_1"
input: "split_1/split_dim"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "Tlen"
value {
type: DT_INT32
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 5
}
dim {
size: 10
}
dim {
size: 30
}
}
shape {
dim {
size: 5
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
attr {
key: "num_split"
value {
i: 2
}
}
}
node {
name: "RealDiv"
op: "RealDiv"
input: "split_1"
input: "split_1:1"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 5
}
dim {
size: 10
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "Placeholder_1"
op: "Placeholder"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
node {
name: "ExpandDims/dim"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
}
int_val: 0
}
}
}
}
node {
name: "ExpandDims"
op: "ExpandDims"
input: "Placeholder_1"
input: "ExpandDims/dim"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "Tdim"
value {
type: DT_INT32
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 1
}
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "Squeeze"
op: "Squeeze"
input: "ExpandDims"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "squeeze_dims"
value {
list {
i: 0
}
}
}
}
node {
name: "Placeholder_2"
op: "Placeholder"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
node {
name: "Slice/begin"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 3
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
dim {
size: 3
}
}
tensor_content: "\000\000\000\000\000\000\000\000\000\000\000\000"
}
}
}
}
node {
name: "Slice/size"
op: "Const"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 3
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "value"
value {
tensor {
dtype: DT_INT32
tensor_shape {
dim {
size: 3
}
}
tensor_content: "\001\000\000\000\001\000\000\000\036\000\000\000"
}
}
}
}
node {
name: "Slice"
op: "Slice"
input: "Placeholder_2"
input: "Slice/begin"
input: "Slice/size"
attr {
key: "Index"
value {
type: DT_INT32
}
}
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 1
}
dim {
size: 1
}
dim {
size: 30
}
}
}
}
}
}
node {
name: "Placeholder_3"
op: "Placeholder"
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 10
}
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
node {
name: "unstack"
op: "Unpack"
input: "Placeholder_3"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
attr {
key: "_output_shapes"
value {
list {
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
shape {
dim {
size: 20
}
dim {
size: 30
}
}
}
}
}
attr {
key: "axis"
value {
i: 0
}
}
attr {
key: "num"
value {
i: 10
}
}
}
versions {
producer: 22
}

We will also send you an email attaching test.py, test.pb, test.pb.txt, and the uff-converter-tf tool Python code we edited.
You can try to reproduce our problems. Thanks.

#7
Posted 12/07/2017 01:38 PM   
The email is sent; we hope you received it. Thanks.

#8
Posted 12/07/2017 01:58 PM   
Hi,

It looks like there are some unsupported layers in your model.
Do you plan to implement it with Plugin API?

It's not easy to parse such a frozen .pb file with the UFF parser.
A better idea is to remove the unsupported layers, convert the weights to UFF format, and add the unsupported layers back via the Plugin API.

Thanks
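The suggested workflow (remove the unsupported ops before conversion, then add them back via the Plugin API) can be sketched as graph surgery on a simplified node list. This illustrates only the pruning-and-rewiring step, uses plain dicts instead of the real NodeDef protobuf, and assumes each removed op passes its first input through unchanged:

```python
UNSUPPORTED = {"ExpandDims", "Squeeze", "Unpack"}  # example set, not the parser's actual list

def bypass_unsupported(nodes):
    """Drop nodes whose op is unsupported and rewire each consumer to the
    removed node's first input (valid only for shape-only, pass-through ops)."""
    redirect = {}
    kept = []
    for node in nodes:
        if node["op"] in UNSUPPORTED:
            redirect[node["name"]] = node["input"][0]
        else:
            kept.append(node)

    def resolve(name):
        # Follow chains of removed nodes back to a surviving producer.
        while name in redirect:
            name = redirect[name]
        return name

    for node in kept:
        node["input"] = [resolve(i) for i in node["input"]]
    return kept

graph = [
    {"name": "Placeholder_1", "op": "Placeholder", "input": []},
    {"name": "ExpandDims", "op": "ExpandDims", "input": ["Placeholder_1", "ExpandDims/dim"]},
    {"name": "Squeeze", "op": "Squeeze", "input": ["ExpandDims"]},
    {"name": "Relu", "op": "Relu", "input": ["Squeeze"]},
]
pruned = bypass_unsupported(graph)
print([n["name"] for n in pruned])  # ['Placeholder_1', 'Relu']
print(pruned[1]["input"])           # ['Placeholder_1']
```

After converting the pruned graph, the removed operations would be re-inserted between the corresponding network tensors as TensorRT plugin layers.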

#9
Posted 12/08/2017 06:36 AM   
We think the Plugin API should be a good choice for these unsupported layers. The .pb file is not parsed directly by the UFF parser; it needs to be converted to a .uff file first and then parsed by the UFF parser, which plays the same role as the Caffe parser in TensorRT.
If we remove all the unsupported layers, the network's connections will be broken; do we then need to reconnect them with custom layers (plugins) in TensorRT?
If there are too many unsupported layers, would it be a better choice to construct the full network in TensorRT, identical to the TensorFlow one? I think section "2.3.3. Converting A Model From An Unsupported Framework To TensorRT With The TensorRT Python API" in the TensorRT 3.0 User Guide (DU-08602-001_v3.0 | November 2017) constructs just such a matching network, but with PyTorch.
I created a new topic to follow up on our TensorFlow-to-TensorRT problem, where we can discuss each line of data and each line of code in detail. Would you like to help us?

#10
Posted 12/08/2017 01:39 PM   