It should be able to take a compressed stream.
May I know why you would send a compressed stream for inference?
Compression helps with storage, but while you are running inference, why not just push the raw format (i.e. uncompressed)? Thanks.
I would send a compressed stream because the samples expect a .h264 file as input (or .raw).
Regarding .raw: I’ve tried passing my video stream through a named pipe and then passing the pipe’s path as an argument to the samples (since that gives me a path on the file system). However, either the samples won’t open the file, or they report that they can’t seek in the file.
What would be the best approach to get a live stream into these samples?
You tried to push a network stream to a sample; this is not supported by the sample.
What would be the best approach to get a live stream into these samples?
The sample is shared as source. The best approach would be for you to implement a layer which does whatever it needs to do to get video frames from the network stream or pipe, create a dwImageCUDA out of each frame in the right format (see the source of the sample), and give it to DriveNet for inference. Thanks.
To do this I plan to use OpenCV to handle each video frame before it is sent over the network, and again when it is received on the DriveNet desktop side. Those frames are received in YUV format, which DriveWorks supports according to the documentation.
My question is: how do I convert the OpenCV frame object (YUV) to a dwImageCUDA frame?
May I use the function dwSensorCamera_getImageCUDA(dwImageCUDA **image, dwCameraOutputType format, dwCameraFrameHandle_t frameHandle), with the frameHandle argument pointing to my OpenCV image?
I am having difficulty finding out how to convert an OpenCV YUV frame to a dwImageCUDA frame, or how to convert an OpenCV YUV frame to a dwImageGeneric frame and then to a dwImageCUDA frame.