Error converting TF model for Jetson Nano using tf.trt

I am trying to convert a TF 1.14.0 SavedModel to TensorRT on the Jetson Nano. I saved my model via tf.saved_model.save, but when I run the conversion on the Nano I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from acoustic_cnn/conv2d_seq_layer/conv3d/kernel:0 incompatible with expected resource.
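
For context, the export step on my training machine looked roughly like this (a sketch; a small Keras model stands in for my actual acoustic CNN, and 'tst' is the directory I later point the converter at):

import tensorflow as tf

tf.enable_eager_execution()

# Stand-in model: same input shape as the (32, 18, 63, 8) batches I feed in.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, padding='same', input_shape=(18, 63, 8)),
    tf.keras.layers.GlobalAveragePooling2D(),
])
_ = model(tf.ones((1, 18, 63, 8)))   # trace the model once before exporting
tf.saved_model.save(model, 'tst')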

I have seen this issue discussed on the web, but no solution works for me. I tried:

  1. setting tf.keras.backend.set_learning_phase(0) (source)

  2. Using is_dynamic_op=True, precision_mode='FP32' (source), and I still
     get the error.

  3. Also, I am using TF eager execution, so I don't see how I would modify
     the GraphDef as suggested here (see the sketch after this list).
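
For reference, this is the graph-mode way I understand one would get at the GraphDef (a sketch, assuming the default SERVING tag and a separate script that does not enable eager execution); I don't know what the equivalent is for a model exported from eager code:

import tensorflow as tf

# Load the SavedModel in graph mode and pull out its GraphDef.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'tst')
    graph_def = sess.graph.as_graph_def()
    print('loaded graph with', len(graph_def.node), 'nodes')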

Let me know what else you think I should try.

For reference, below is the code I use for conversion, and here is the link to my saved_model.

Conversion code

import numpy as np
import tensorflow as tf
from ipdb import set_trace
from tensorflow.python.compiler.tensorrt import trt_convert as trt

INPUT_SAVED_MODEL_DIR = 'tst'
OUTPUT_SAVED_MODEL_DIR = 'tst_out'

tf.enable_eager_execution()

def load_run_savedmodel():
    # Sanity check: load the original SavedModel and run one batch through it.
    mod = tf.saved_model.load_v2(INPUT_SAVED_MODEL_DIR)
    inp = tf.convert_to_tensor(np.ones((32, 18, 63, 8)), dtype=tf.float32)
    out = mod(inp)
    return out

def convert_savedmodel():

    tf.keras.backend.set_learning_phase(0)

    # Note: these V2-style conversion params are not passed to the
    # TrtGraphConverter below (left over from experimentation).
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        # precision_mode='FP16',
        # is_dynamic_op=True
    )

    converter = trt.TrtGraphConverter(input_saved_model_dir=INPUT_SAVED_MODEL_DIR,
                                      is_dynamic_op=True,
                                      precision_mode='FP32')

    converter.convert()
    converter.save(OUTPUT_SAVED_MODEL_DIR)

    load_infer_savedmodel()

    return None

def load_infer_savedmodel():
    with tf.Session() as sess:
        # First load the converted SavedModel into the session
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], OUTPUT_SAVED_MODEL_DIR)
        set_trace()
        # input_tensor / output_tensor / input_data are placeholders; I look
        # them up interactively from the loaded graph at the breakpoint above.
        output = sess.run([output_tensor], feed_dict={input_tensor: input_data})
        return output


if __name__ == '__main__':
    convert_savedmodel()
    # load_infer_savedmodel()

I get the same error on a TX2 with TF 1.14.
Did you find a solution?
Thanks

Unfortunately, I did not. Some help from the Nvidia team would be greatly appreciated.

Hi,

This looks similar to the issue below:
https://github.com/onnx/tensorflow-onnx/issues/77

It may also occur if a different TF version is used.
We would recommend using the same TF version that generated the .pb file.
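
For example, one quick way to check which TF version wrote the SavedModel is to read the version string recorded in the proto (a sketch; 'tst' is the export directory from the post above):

from tensorflow.core.protobuf import saved_model_pb2

# Print the TensorFlow version recorded inside saved_model.pb.
sm = saved_model_pb2.SavedModel()
with open('tst/saved_model.pb', 'rb') as f:
    sm.ParseFromString(f.read())
for mg in sm.meta_graphs:
    print(mg.meta_info_def.tensorflow_version)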

Thanks