How do I use createStreamTensor to create an output tensor whose maxLen depends on a dynamic value?

In the DeepStream decoding stage I get an output tensor with dimensions (n, c, w, h),
and I want to create an output tensor of the same dimensions in the next module.
How can I do that?

From the example code I’ve read, I should invoke createStreamTensor in the initialize function of the next module.
But by the time the code enters the initialize function, I can’t predict the (n, c, w, h) of the decoded frame. Can I reserve a buffer big enough for the decoding output? And how big is big enough?

Or can I adjust the output tensor size dynamically? If so, how?
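
For context, the pattern I see in the sample looks roughly like this (paraphrased from memory, so the names and signatures are my approximation rather than the verbatim SDK API):

#include <cstddef>

// Paraphrased from the sample, not verbatim SDK code: the output
// stream tensor is created once in initialize(), before any frame
// has been seen, so its maximum length must be fixed up front.
class MyNextModule /* : public IModule */ {
public:
    void initialize() {
        // Problem: (n, c, w, h) of the decoded frames is unknown here,
        // yet the tensor's maximum length has to be chosen now.
        std::size_t maxLen = 0; // n * c * w * h * sizeof(float), but with what values?
        (void)maxLen;
        // m_pOut = createStreamTensor(maxLen, ...); // as in the sample
    }
    int getNbOutputs() const { return 1; }
    // getOutputTensor(0) must return the tensor prepared above,
    // so allocation cannot be deferred until the first frame arrives.
};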

The stream tensor shape of the color space converter is the same as the video dimensions if there is no inference task. But that doesn’t help here, because you must prepare your task/module’s output stream tensor before execute runs, and getNbOutputs and getOutputTensor will use it.

The shape is always fixed in any DNN framework, and DeepStream’s IStreamTensor inherits this. Think about whether you need to adjust the tensor shape at arbitrary times, or only once at the beginning.

Let’s set the exact dimensions aside. The issue is the buffer size I need to allocate for the output tensor, which depends heavily on the previous module’s output size. I must either allocate a buffer big enough for any situation, or allocate it dynamically every time, so that I can make sure no data is lost.

The resolution of the input video may vary from 1080p to 4K, which requires different buffer sizes: a single RGB float frame, for example, takes about 25 MB at 1080p but about 100 MB at 4K.

Have you ever met this problem? How do you handle it?

If there are several video sources with different resolutions, and they all connect to one inference task, then DeepStream will scale all frames to the DNN input size.

I actually hit this problem when working with videos of different resolutions but without an inference task; DeepStream just rejects this workflow. Maybe in the future I will create a fake DNN just to make DeepStream scale the videos to a normalized size for me.
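
Something like this minimal pass-through Caffe deploy prototxt is what I have in mind (untested on my side; the Power layer and the 1080p input shape are just my guess at an identity network that would force DeepStream to scale):

name: "fake_scaler"
# Fixed network input: every decoded frame would be scaled to this
# shape before "inference".
input: "data"
input_shape { dim: 1 dim: 3 dim: 1080 dim: 1920 }
layer {
  # Power with its defaults (power=1, scale=1, shift=0) is a no-op.
  name: "identity"
  type: "Power"
  bottom: "data"
  top: "out"
}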

How do I set the scaling options?

Hi,

Here is some clarification:
1. For TensorRT, the network size is fixed at creation time.
Dynamic input is not supported yet.

2. Images may come in different resolutions.
DeepStream will resize them to the network size for TensorRT.
E.g., the network size is 224x224 and the image is 640x480:
DeepStream will resize the image from 640x480 to 224x224.

As a result, the output buffer of TensorRT is fixed, since the network size is decided at creation time.
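
To illustrate with concrete numbers (the batch size here is only an example):

#include <cstddef>

// Because the network size is fixed at creation time, every TensorRT
// buffer size is a constant, independent of the source resolution
// (640x480, 1080p, 4K, ...).
constexpr int kBatch   = 4;    // example batch size
constexpr int kChannel = 3;    // RGB planar after color conversion
constexpr int kHeight  = 224;  // network input height
constexpr int kWidth   = 224;  // network input width
constexpr std::size_t kInputBytes =
    static_cast<std::size_t>(kBatch) * kChannel * kHeight * kWidth * sizeof(float);
// 4 * 3 * 224 * 224 * 4 = 2,408,448 bytes, fixed for the whole run.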
Thanks.

Should I use a fake inference task?

I have the same scenario as you. I use only the decoding and color space conversion functions of DeepStream. Our training models are based on Caffe.
I wonder whether there will be any technical problems.

It seems we are in the same scenario. I have posted on other forums to discuss various data layout problems with NVIDIA, and in the end we just call TensorRT directly from our own code, using only DeepStream’s decoding and color space conversion functions.

I can tell you this: you can’t decode videos of different resolutions in one device worker instance, with or without an inference task connected.

Thanks for sharing the information with us. My previous reply was not correct; sorry for the confusion.

By saying ‘you can’t decode different-resolution videos in one device worker instance’, do you mean that the color space conversion of different-resolution videos requires different GPU memory buffers?
I can’t see any video resolution setting in the example code.

Hi,

1. In DeepStream, we provide a color space converter for NV12 to RGB planar:

4.2.1.1 Adding a module into the analysis pipeline

// Add color space converter
IModule* pConvertor = pDeviceWorker->addColorSpaceConvertorTask(BGR_PLANAR);

2. You can design a custom module by referring to the color space converter.
Dynamic resolution can also be handled via custom modules.
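
For example, a custom module could allocate its output tensor once for the largest resolution it expects and track the actual frame shape per batch. The sketch below uses simplified placeholder types rather than the exact SDK interfaces:

#include <cstddef>
#include <cstdio>

// Placeholder for the SDK's IStreamTensor; illustration only.
struct TensorStub {
    std::size_t capacity = 0;          // bytes allocated up front
    int n = 0, c = 0, h = 0, w = 0;    // shape of the current batch
};

class DynamicResolutionModule {
public:
    // initialize() runs before any frame is seen, so size the buffer
    // for the worst case (here: batch of 4, 4K RGB planar, float).
    void initialize() {
        m_out.capacity =
            static_cast<std::size_t>(kMaxBatch) * 3 * 2160 * 3840 * sizeof(float);
        // In the SDK this is where createStreamTensor would be called,
        // with this capacity as the maximum length.
    }

    // execute() runs per batch: record the actual shape so downstream
    // modules know how much of the worst-case buffer is valid.
    void execute(int n, int c, int h, int w) {
        m_out.n = n; m_out.c = c; m_out.h = h; m_out.w = w;
        std::size_t used = static_cast<std::size_t>(n) * c * h * w * sizeof(float);
        std::printf("batch uses %zu of %zu bytes\n", used, m_out.capacity);
    }

private:
    static constexpr int kMaxBatch = 4; // assumed pipeline batch size
    TensorStub m_out;
};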

Thanks.

I don’t think I was asking for a solution for different-resolution videos; I just found that they are not allowed in DeepStream. Here is my test:
https://devtalk.nvidia.com/default/topic/1028120/deepstream-for-tesla/what-is-shape-of-converters-output-with-two-video/post/5229640/#5229640