Running samples with a video data stream as input

Hello,

Is it possible to pass a video data stream as an argument to the pre-compiled samples on the DRIVE PX?

For example, something like ./sample_drivenet --video=rtsp://ip.address:port/video_stream

I’ve tried that directly, but the application fails since it expects a .h264 or .raw video file as input.

Thank you,
Fabio

Dear Reway,

It should be able to take a compressed stream.
May I know why you would send a compressed stream for inference?
Compression helps with storage, but while you are inferencing, why not just push the raw (i.e. uncompressed) format? Thanks.

Dear SteveNV,

I would send a compressed stream because the samples expect a .h264 (or .raw) file as input.

Regarding .raw: I’ve tried passing my video stream through a named pipe and then passing that pipe as an argument to the samples (since then I have a path on the file system); however, either the samples won’t open the file or I get an error that they can’t seek in the file.

What would be the best approach to get a live stream into these samples?

Thanks!

Hello,

I am facing the same issue right now :) . Did you find a solution for taking a video stream instead of a .h264 or .raw video as input?

Thank you

Dear Reway,

./sample_drivenet --video=rtsp://ip.address:port/video_stream

You are trying to push a network stream to a sample; this is not supported by the samples.

What would be the best approach to get a live stream into these samples?

The sample is shared as source code. The best approach would be for you to implement a layer that does whatever it needs to do to get video from the network stream or pipe, create a dwImageCUDA out of it in the right format (see the source of the sample), and give it to DriveNet for inference. Thanks.

Dear SteveNV, Reway,

To do this I plan to use OpenCV to handle each video frame before it is sent over the network and after it is received on the DriveNet desktop side. Those frames are received in YUV format, which is supported by DriveWorks according to the documentation.

My question is: how do I convert the OpenCV frame object (YUV) to a dwImageCUDA frame?

May I use this function: dwSensorCamera_getImageCUDA(dwImageCUDA **image, dwCameraOutputType format, dwCameraFrameHandle_t frameHandle), in which case the frameHandle variable would point to my OpenCV image?

I am having difficulty finding out how to convert an OpenCV YUV frame to a dwImageCUDA frame, or how to convert an OpenCV YUV frame to a “dwImageGeneric” frame and then to a “dwImageCUDA” frame.

Thank you