How to push a frame from OpenCV to a DeepStream pipeline as video packets?

Hi, we are trying to build a product which reads streaming video from the webcam using OpenCV.
We are following the DeepStream SDK samples, in particular /deepstream/samples/nvDecInfer_detection. The problem is to implement the same pipeline for streaming video rather than reading from a file.

  1. We read each frame using OpenCV's VideoCapture (reference code below).
  2. As per the DeepStream docs, we need to push the video packets into a packet cache from a user thread. The snippet from the sample:

     // what the users need to do is
     // push video packets into a packet cache
     std::vector<std::thread> vUserThreads;
     for (int i = 0; i < g_nChannels; ++i) {
         vUserThreads.push_back(std::thread(userPushPacket, vpDataProviders[i], pDeviceWorker, i));
     }

  3. A video frame is not the same as a video packet. So what is the right way to push video frames captured with OpenCV? (A hedged encoding sketch follows the reference code below.)
  4. Reference:

Sample code we are using to read frames from the webcam with OpenCV:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    using namespace cv;
    using namespace std;

    int openWebCam() {
        // Create a VideoCapture object and open the input.
        // If the input is the web camera, pass 0 instead of the video file name.
        VideoCapture cap(0);

        // Check if the camera opened successfully
        if (!cap.isOpened()) {
            cout << "Error opening video stream or file" << endl;
            return -1;
        }

        while (1) {
            Mat frame;
            // Capture frame-by-frame
            cap >> frame;

            // If the frame is empty, break immediately
            if (frame.empty())
                break;

            // Display the resulting frame
            imshow("Frame", frame);

            // Press ESC on the keyboard to exit
            char c = (char)waitKey(25);
            if (c == 27)
                break;
        }

        // When everything is done, release the video capture object
        cap.release();

        // Close all the frames
        destroyAllWindows();

        return 0;
    }
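
To make the frame-vs-packet question concrete: packets are what an encoder emits, so the frames captured above would first have to be encoded (e.g. to h264) before anything can be pushed into the packet cache. Below is a minimal sketch of that step using FFmpeg's libavcodec, assuming libavcodec/libswscale are installed and built with an H.264 encoder such as libx264. pushPacket() is a hypothetical placeholder for whatever hands data to the sample's packet cache (it is not a DeepStream API).

    // frame -> packet sketch: encode webcam frames to h264 with libavcodec
    #include <opencv2/opencv.hpp>
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    }

    // Hypothetical hook: forward one encoded packet to the packet cache.
    void pushPacket(const uint8_t* data, int size);

    int encodeWebcamToPackets() {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return -1;

        int w = (int)cap.get(cv::CAP_PROP_FRAME_WIDTH);
        int h = (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT);

        const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        ctx->width = w;
        ctx->height = h;
        ctx->pix_fmt = AV_PIX_FMT_YUV420P;
        ctx->time_base = AVRational{1, 30};
        if (avcodec_open2(ctx, codec, nullptr) < 0) return -1;

        AVFrame* frame = av_frame_alloc();
        frame->format = ctx->pix_fmt;
        frame->width = w;
        frame->height = h;
        av_frame_get_buffer(frame, 0);
        AVPacket* pkt = av_packet_alloc();

        // Converter from OpenCV's BGR layout to the encoder's YUV420P input
        SwsContext* sws = sws_getContext(w, h, AV_PIX_FMT_BGR24,
                                         w, h, AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, nullptr, nullptr, nullptr);
        cv::Mat bgr;
        int64_t pts = 0;
        while (cap.read(bgr)) {
            av_frame_make_writable(frame);
            const uint8_t* src[] = { bgr.data };
            int srcStride[] = { (int)bgr.step };
            sws_scale(sws, src, srcStride, 0, h, frame->data, frame->linesize);
            frame->pts = pts++;

            // One frame in, zero or more packets out
            avcodec_send_frame(ctx, frame);
            while (avcodec_receive_packet(ctx, pkt) == 0) {
                pushPacket(pkt->data, pkt->size);
                av_packet_unref(pkt);
            }
        }

        // Flush the encoder, then clean up
        avcodec_send_frame(ctx, nullptr);
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            pushPacket(pkt->data, pkt->size);
            av_packet_unref(pkt);
        }
        sws_freeContext(sws);
        av_packet_free(&pkt);
        av_frame_free(&frame);
        avcodec_free_context(&ctx);
        cap.release();
        return 0;
    }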
    

Hi,

DeepStream only supports raw video streams/files (e.g. h264).
You can convert the camera output into a non-decoded h264 stream and feed it into the DeepStream SDK.

Another alternative is to use DeepStream for decoding directly.
There are simpler applications that can convert the camera input into an h264 stream (e.g. GStreamer).
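
For example, a minimal GStreamer pipeline along these lines (assuming GStreamer 1.x with the x264enc element installed) could capture the webcam and write a raw h264 elementary stream:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency ! h264parse ! filesink location=out.h264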

Thanks.

Just for documentation -

I found a way: using ffmpeg, I can stream the webcam directly to a file, and then point the NVIDIA DeepStream SDK sample at that file.

How to stream the webcam to a file:

ffmpeg -pix_fmt yuv420p -y -f v4l2 -vcodec h264 -video_size 1280x1020 -i /dev/video0 out.h264

But the above approach is slow; there is a delay of about 45 seconds.
So I used ffmpeg with NVIDIA hardware acceleration:

ffmpeg -pix_fmt yuv420p -y -f v4l2 -vcodec h264_cuvid -video_size 1280x1020 -i /dev/video0  out.h264

More on the above command: FFmpeg | NVIDIA Developer
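
One thing worth noting: h264_cuvid is FFmpeg's NVDEC-based h264 decoder. For hardware-accelerated encoding, the NVENC encoder is selected on the output side instead, along these lines (assuming an ffmpeg build with NVENC support):

ffmpeg -f v4l2 -video_size 1280x1020 -i /dev/video0 -c:v h264_nvenc out.h264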

But I am still unable to get zero lag; all of the above methods have some lag (it depends on the machine).

Maybe you can try GStreamer.