How to feed a 3-channel image to TensorRT

The format of my TensorFlow model is NHWC. I convert it to UFF and import it into TensorRT.
How do I feed a 3-channel image to TensorRT? Should I change the format of the image to CHW?

Hi,

If your TensorFlow model is trained with the NHWC format, please feed an HWC image into TensorRT.
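For example, a minimal NumPy sketch of preparing the input buffer (the image size here is a placeholder; substitute your model's dimensions):

    import numpy as np

    # Placeholder 224x224 RGB image in HWC order, matching how the
    # model was trained.
    img = np.random.rand(224, 224, 3).astype(np.float32)

    # If your engine input were NCHW instead, you would transpose first:
    # img = img.transpose(2, 0, 1)  # HWC -> CHW

    # TensorRT consumes a flat, contiguous buffer per binding.
    flat = np.ascontiguousarray(img).ravel()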

The UFF parser is able to parse most of my graph, but it fails on Concat – I think this is because my model is in NHWC format while the “matching non-channel dimensions” check is done assuming NCHW. The error message I get is below:

Parameter check failed at: Network.cpp::addConcatenation::152, condition: first->getDimensions().d[j] == dims.d[j] && "All non-channel dimensions must match across tensors.

I’m so close to getting this working - is there a simple workaround that would let me convert a Concat op in NHWC format?

Please help!

Hi,

As you said, TensorRT expects the NCHW format: the concatenation layer joins multiple tensors of the same height and width along the channel dimension.
A possible alternative is to implement your concat layer with the UFF plugin API from TensorRT 4.0, or to convert your model to the NCHW format.
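If you go the NCHW route, the rebuild mostly amounts to switching data_format and concatenating on axis 1. A minimal TF 1.x sketch (shapes and variable names are placeholders):

    import tensorflow as tf

    # NCHW placeholder: batch, channels, height, width.
    x = tf.placeholder(tf.float32, [None, 3, 224, 224], name='input')
    w = tf.get_variable('w', shape=[3, 3, 3, 16])

    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME',
                     data_format='NCHW')

    # In NCHW the channel axis is 1, so concatenate there.
    z = tf.concat([y, y], axis=1)

Note that NCHW convolutions in TF 1.x generally require a GPU; the graph above builds either way.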

Thanks

Could you please tell me why TensorRT expects the input format to be NCHW?

I have a model which uses the NHWC format, so it calls the tf.nn.conv2d function with ‘NHWC’ as a parameter inside the model, e.g. my model:

[input_NHWC => conv2d_NHWC => output].

So is it OK if I add a tf.transpose layer in front of the old model to convert NCHW to NHWC? The rest of my model would still run in ‘NHWC’; is that the right way to meet TensorRT’s expectation? E.g. the new model:

[input_NCHW => transpose() => INPUT_NHWC => conv2d_NHWC => output]
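A sketch of that wrapper in TF 1.x (the shapes and variable names are placeholders):

    import tensorflow as tf

    # New NCHW-ordered input as TensorRT expects.
    input_nchw = tf.placeholder(tf.float32, [None, 3, 224, 224],
                                name='input')

    # Transpose back to NHWC so the original graph is unchanged.
    input_nhwc = tf.transpose(input_nchw, perm=[0, 2, 3, 1])

    # The original NHWC convolution, exactly as before.
    w = tf.get_variable('w', shape=[3, 3, 3, 16])
    output = tf.nn.conv2d(input_nhwc, w, strides=[1, 1, 1, 1],
                          padding='SAME', data_format='NHWC')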

P.S.: my code uses TensorFlow.
Thanks

Hi,

We use the NCHW format for performance.
TensorFlow uses NHWC since it runs a little faster on CPU.
Here is some explanation from the TensorFlow team:
https://www.tensorflow.org/performance/performance_guide#data_formats

I think you can just try to convert your model without any special handling.
We already have a mechanism to deal with the different formats.
But please remember that it won’t be able to cover all use cases.
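For reference, later TensorRT releases expose this through UffParser.register_input, which accepts an input order. A minimal sketch assuming the TensorRT 5.x-style Python API (input/output names and the shape are placeholders):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network()

    parser = trt.UffParser()
    # Declare that the framework input was NHWC; the parser handles the
    # reordering. (TensorRT samples pass the shape in CHW order even for
    # NHWC inputs; check the docs for your version.)
    parser.register_input("input", (3, 224, 224), trt.UffInputOrder.NHWC)
    parser.register_output("output")
    parser.parse("model.uff", network)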

Thanks.

@noitq.hust, I have exactly the same question as you. Did you need to transpose your input from NCHW to NHWC in your TF graph before converting it to UFF? That is, did you do the following before the conversion:
[input_NCHW => transpose() => INPUT_NHWC => conv2d_NHWC => output]

Right now I have a TensorFlow model which was trained with NHWC RGB input images, and the converted .uff.pbtxt shows that the input order is NHWC (in addition to the .uff, I also generated a human-readable .uff.pbtxt to inspect the converted nodes in the UFF model). I don’t know if I need to re-generate the UFF taking CHW as input and transposing it to HWC. Your prompt reply is highly appreciated. Right now, the detection result is not correct if I supply an HWC-order input.
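For reference, both files can be generated in one call with the uff converter package that ships with TensorRT; a minimal sketch (paths and node names are placeholders):

    import uff

    uff.from_tensorflow_frozen_model(
        "frozen_model.pb",          # placeholder path to the frozen graph
        output_nodes=["output"],    # placeholder output node name
        output_filename="model.uff",
        text=True,                  # also writes model.uff.pbtxt
    )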

I tried that solution but it didn’t work. Actually, I stopped converting my model to TensorRT because I encountered other problems like “ERROR: conv0/conv0_relu: at least one non-batch dimension is required for input” or “ERROR: conv0_conv: kernel weights has count 864 but 27648 was expected” or something like this. I think those problems come from use cases which aren’t supported by TensorRT.