Can I call the inference and display modules without waiting for all channels to complete decoding?

Hi,

I have a problem with DeepStream in my application. Basically, I define four channels to decode, run inference on, and display four videos from network streams simultaneously. It works well when all four videos have the same frame rate.

However, both the inference and display modules wait for all videos to finish decoding before the images are passed to inference (with a batch size of 4). Can inference be run individually for each channel as soon as that channel finishes decoding, without waiting for all channels to complete?

In my use case the videos have different frame rates, so I don't want a lower-frame-rate video to delay inference on a higher-frame-rate one.
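For what it's worth, if the pipeline were built on GStreamer with DeepStream's nvstreammux element (an assumption on my part; my application may be on a different SDK version), the batching stage would look roughly like the sketch below. The batched-push-timeout property is the knob that could let a batch be pushed downstream before every channel has contributed a frame:

/* Minimal sketch of the batching stage, assuming a GStreamer-based
 * DeepStream pipeline using nvstreammux; element names, resolutions,
 * and the timeout value are illustrative, not my actual code. */
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GstElement *pipeline = gst_pipeline_new("multi-channel-pipeline");
  GstElement *mux = gst_element_factory_make("nvstreammux", "batcher");
  if (!pipeline || !mux)
    return -1;

  /* Batch frames coming from the 4 decoded channels. */
  g_object_set(G_OBJECT(mux),
               "batch-size", 4,
               /* Push whatever frames have arrived after this many
                * microseconds, even if the batch is incomplete, so a
                * slow source does not stall the fast ones.
                * 40000 us is roughly one frame at 25 fps. */
               "batched-push-timeout", 40000,
               "width", 1280,
               "height", 720,
               NULL);

  gst_bin_add(GST_BIN(pipeline), mux);
  /* ... link decoders -> mux -> inference -> display here ... */

  gst_object_unref(pipeline);
  return 0;
}

With a setup like this, the trade-off would be between latency (a short timeout pushes partial batches quickly) and GPU efficiency (a long timeout keeps batches full), but I'm not sure whether this applies to my SDK version, hence the question.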

Thanks

Hi,

It looks like you also posted another topic on our Tesla board:
https://devtalk.nvidia.com/default/topic/1032407/

Could you first share which platform you are using?
Thanks.

Sorry for the duplication; I replied to you in the other thread. Thanks.

Let’s track this issue in topic 1032407, since you are using Tesla.

Thanks.