How to fit multiple image-pyramid scales in TensorRT?

I use TensorRT to accelerate my MTCNN. Before PNet I have to preprocess the input to build an image pyramid, e.g. 768×432, 480×270, … , so do I have to build multiple prototxt files and send each one to TensorRT to parse, in order to fit every image scale? Because PNet is a fully convolutional network (FCN), it can accept input at any scale; in Caffe, I can just set

input_layer->Reshape(1, num_channels_, img.rows, img.cols);

But in TensorRT, it seems a network has a fixed input dimension once it is parsed.

If I change my input image width and height, should I change the input dims in my prototxt? For example, 12×12 as below:

name: "PNet"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 12
input_dim: 12

Hi,

Sorry, we are not clear about your question.
Could you explain it in more detail?

Here are some suggestions we can give:
1. Please decide the input dimension when creating a TensorRT engine. Dynamic resolution is not supported currently.
2. TensorRT can take multiple inputs. Check here for details:
[url]https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp#L143[/url]

Thanks

Hi,

My question is: I use the same Caffe model, and I want to input two images, one with width=1920, height=1080, and another with width=960, height=540. In Caffe I can reshape the net, so how can I reshape in TensorRT?

Hi,

If you only have ONE Caffe model, the input dimension should be identical.

Usually, we resize the input image to the model size via CUDA.
In your use case, you can resize the two images with a CUDA kernel and feed TensorRT a BATCH=2 buffer.

ex.

  1. resize image A from 3x1920x1080 to 3x12x12
  2. resize image B from 3x960x540 to 3x12x12
  3. Prepare an input buffer of resized image A and B: 2x3x12x12
  4. Launch TensorRT with Batchsize=2

Check here for CUDA resize code:
[url]https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.cu#L96[/url]

Thanks.

Hi, AastaLLL,

You misunderstood my meaning. What I mean is that I only have ONE Caffe model, and I want the model to have a dynamic input dimension that supports both 1920×1080 and 960×540, without resizing.

Hi,

As mentioned in #3, the dynamic input is not supported by TensorRT.
We are checking the possibility, but can’t disclose more information here.

Thanks and sorry for the inconvenience.

Hi, AastaLLL

What can I do in this case? Is there a convenient way to solve this problem?

Thanks.

Hi,

Happy New Year and sorry for the late reply.

Is your data dimension dynamic across all the layers?
If yes, this use case is not supported by TensorRT currently.
Please use NvCaffe / BVLC Caffe as a workaround.

If your model is only dynamic in the input layer, you can try to implement it with plugin API.
Here is the information of our plugin samples:
Native sample:
/usr/src/tensorrt/samples/samplePlugin/samplePlugin.cpp
/usr/src/tensorrt/samples/sampleCharRNN/sampleCharRNN.cpp
/usr/src/tensorrt/samples/sampleFasterRCNN/sampleFasterRCNN.cpp
TX2 sample:

Thanks.

@ClancyLian So, did you find a solution to this?

Hi AastaLLL,

Any news about this dynamic input feature? Is it available in TensorRT 4.0.1?

Is there really no workaround for this?

Thanks.

Hi,

Sorry that this is still NOT available.
Thanks.

Any plan to support this situation?

Hi AastaLLL,

Can you confirm if the dynamic input feature is supported yet? Is it available in TensorRT 5.1.5.0?

Thanks.

Hi,

No, it is not available in TensorRT 5.x.

Thanks.