Incorrect number of kernel weights when converting a Keras-derived .pb model to TensorRT (expecting 3x as many as available)

I have successfully gotten the examples (inception_v1, inception_v2, etc.) to run in TensorRT on the TX2, but I am having trouble getting my own model to run.

Here is a simplified Keras model:

#!/usr/bin/env python

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

import tensorflow as tf
import datetime

# tie Keras to an explicit TensorFlow session
tf_sess = tf.Session()
K.set_session(tf_sess)

# network parameters
x_len = 64
y_len = 64
input_shape = (x_len, y_len, 1)  # single-channel 64x64 input
latent_dim = 200

# build model: flatten the image and project it to the latent space
inputs = Input(shape=input_shape, name='input')
interm = Flatten()(inputs)
outputs = Dense(latent_dim, name='latent')(interm)

# instantiate model
model = Model(inputs, outputs, name='model')
model.summary()

# save the model under a timestamped filename
base_filename = datetime.datetime.now().strftime("%Y%m%d_%H%M%S") + '_'
model.save(base_filename + 'toy_model.h5')

Then I freeze it with:

#!/usr/bin/env python

import sys

from keras.models import load_model
import keras.backend as K
from tensorflow.python.framework import graph_io
from tensorflow.python.tools import freeze_graph
from tensorflow.core.protobuf import saver_pb2
from tensorflow.python.training import saver as saver_lib

def convert_keras_to_pb(keras_model, out_names, models_dir, model_filename):
    # set the learning phase to "test" *before* loading the model so that
    # training-only ops (e.g. dropout) are never built into the graph
    K.set_learning_phase(0)
    model = load_model(keras_model)
    sess = K.get_session()
    # checkpoint the weights, dump the graph, then freeze the two together
    saver = saver_lib.Saver(write_version=saver_pb2.SaverDef.V2)
    checkpoint_path = saver.save(sess, './saved_ckpt', global_step=0,
                                 latest_filename='checkpoint_state')
    graph_io.write_graph(sess.graph, '.', 'tmp.pb')
    freeze_graph.freeze_graph('./tmp.pb', '',
                              False, checkpoint_path, out_names,
                              "save/restore_all", "save/Const:0",
                              models_dir + model_filename, False, "")

if __name__ == "__main__":
    keras_model = sys.argv[1]
    out_names = sys.argv[2]
    models_dir = sys.argv[3]
    model_filename = keras_model.replace('.h5', '.pb')
    convert_keras_to_pb(keras_model, out_names, models_dir, model_filename)
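
The script is invoked with the model file, the output node name, and the output directory; for example (assuming it is saved as freeze.py, a name of my choosing):

python freeze.py 20180802_094247_toy_model.h5 latent/BiasAdd ./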

Then I attempt to build a TensorRT plan with:

~/tf_to_trt_image_classification$ python scripts/convert_plan.py ./20180802_094247_toy_model.pb ./20180802_094247_toy_model.plan input 64 64 latent/BiasAdd 1 0 float
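
For reference, the argument order here, as I understand it from the tf_to_trt_image_classification README, is:

convert_plan.py <frozen_graph.pb> <output.plan> <input_name> <input_height> <input_width> <output_name> <max_batch_size> <max_workspace_size> <data_type>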

gives:

Using output node latent/BiasAdd
Converting to UFF graph
DEBUG: convert reshape to flatten node
No. nodes: 7
UFF Output written to data/tmp.uff
UFFParser: parsing latent/bias
UFFParser: parsing input
UFFParser: parsing flatten_1/Reshape
UFFParser: parsing latent/kernel
UFFParser: parsing latent/MatMul
UFFParser: parsing latent/BiasAdd
UFFParser: parsing MarkOutput_0
latent/BiasAdd: kernel weights has count 819200 but 2457600 was expected
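
Some back-of-the-envelope arithmetic on the two counts in the error, using the layer sizes from the model above:

# what the frozen graph actually contains: the Dense kernel for a
# flattened single-channel 64x64 input
64 * 64 * 1 * 200   # = 819200
# what the parser apparently expected: the same kernel for a
# 3-channel input
64 * 64 * 3 * 200   # = 2457600

So the parser appears to be assuming a 3-channel input shape, which would account for the 3x in the title.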

I have tried several configurations of the model, including one with 3 input channels (as opposed to the single channel here), and I get the same error.

Any help would be greatly appreciated.

Hi,

Could you share which TensorRT version you are using?
If you are not on our latest TensorRT 4.0, could you give it a try?

Thanks.

Yep, I am using TensorRT 4.0.4.

Hi,

We wrote a simple converter for your use case and it works correctly.
(Sorry that we didn't try convert_plan.py, since we don't have a *.plan file.)

import tensorflow as tf
import tensorrt as trt
from tensorrt.parsers import uffparser
import uff

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
MAX_WORKSPACE = 1 << 20
MAX_BATCHSIZE = 1

FILE = '20180807_165049_toy_model.pb'

# load the frozen graph from disk
graph = tf.Graph()
with graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(FILE, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

  sess = tf.Session()
  graphdef = tf.get_default_graph().as_graph_def()
  # re-freeze (a no-op here since the graph is already frozen) and
  # strip any training-only nodes
  frozen_graph = tf.graph_util.convert_variables_to_constants(sess, graphdef, ['latent/BiasAdd'])
  tf_model = tf.graph_util.remove_training_nodes(frozen_graph)

# convert to UFF and register a single-channel CHW input, matching the model
uff_model = uff.from_tensorflow(tf_model, ['latent/BiasAdd'])
parser = uffparser.create_uff_parser()
parser.register_input('input', (1, 64, 64), 0)
parser.register_output('latent/BiasAdd')

# build the TensorRT engine
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, MAX_BATCHSIZE, MAX_WORKSPACE)

Thanks.

I will try the converter you specified above on an x86 machine to get the .pb file converted to an engine that can be run with the Python API. I was trying to use the convert_plan.py script installed with JetPack to convert it to an engine that can run on the Jetson TX2 (my understanding is that the Python API is not currently available on the Jetson).

The ultimate goal is to get models that I develop (in Keras) and train on a high-performance x86 machine deployed to the Jetson TX2. The script above will help me deploy this model to x86 via TensorRT, which does increase my understanding of the workflow; however, it falls short of that ultimate goal.

I will try it and let you know if it works, but I am still looking for a solution to get Keras → frozen model (.pb) → Jetson TX2.
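
In the meantime, one route I plan to try (my own sketch, not yet verified): as far as I understand, the .uff file itself is hardware-independent, so it should be possible to write it to disk on the x86 machine and parse it on the TX2 with the C++ UFF parser. Something like this on the x86 side, reusing tf_model from your converter:

import uff

# write the UFF model to disk so it can be copied to the TX2 and
# parsed there with the C++ IUffParser; I believe output_filename
# is accepted by the TensorRT 4.x uff converter, but I have not
# verified this yet
uff.from_tensorflow(tf_model, ['latent/BiasAdd'],
                    output_filename='toy_model.uff')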

As an aside, convert_plan.py takes the .plan filename as an argument but generates that file rather than reading it, so it doesn't require the file to exist beforehand. The resulting .plan file is then used to run inference on the graph via the C++ API.

I was able to generate the engine from the frozen .pb model with the code you provided, thank you. From the forum entries it doesn't seem like the TensorRT Python API will be available for the TX2 anytime soon, so is there any guidance on getting my custom model running on the TX2 with the C++ API?

Thank you,

Hi,

We have a sample that illustrates the plugin API with a UFF model.
Please check this sample for more information:
/usr/src/tensorrt/samples/sampleUffSSD

Thanks.