OpenCV VideoCapture and hardware-accelerated video encoding support

Hello!
I need help configuring cv::VideoCapture (OpenCV) for the hardware-accelerated video codec (Jetson/H264 Codec - eLinux.org), for example with a Logitech C920 webcam (Full HD, USB 2.0).
All experiments were carried out on the Jetson TK1 in performance mode (Jetson/Performance - eLinux.org).

Case 1.
cv::VideoCapture cap(0);
Result:
OpenCV 2.4.12 - 27 fps
OpenCV 3.1.0 - 13 fps

Case 2.
cv::VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)I420 ! ffmpegcolorspace ! appsink");
Result:
OpenCV 2.4.12 - does not work
OpenCV 3.1.0 - 18 fps

Case 3.
cv::VideoCapture cap("v4l2src queue-size=5, always-copy=false, device=/dev/video0 ! video/x-raw-yuv, width=1920, height=1080, framerate=30/1, format=(fourcc)I420 ! nvomxh264enc ! video/x-h264, stream-format=(string)byte-stream ! video/x-raw-bgr, width=1920, height=1080, framerate=30/1, format=(fourcc)BGRx ! appsink sync=true");
Result:
OpenCV 2.4.12 - does not work
OpenCV 3.1.0 - 15 fps

In the console we have 30 fps ( Jetson/H264 Codec - eLinux.org ):
gst-launch -e v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=1280,height=720,framerate=30/1' ! nv_omx_h264enc quality-level=2 ! mp4mux ! filesink location=test.mp4

Please tell me how to configure hardware video encoding so as to maximize the frame rate of the received frames.

In the console with 1080p we also have 30 fps ( Jetson/H264 Codec - eLinux.org ):
gst-launch -e v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=1920,height=1080,framerate=30/1' ! nv_omx_h264enc quality-level=2 ! mp4mux ! filesink location=test.mp4

Hi dborisoglebskiy,

Thank you for reporting this issue.
For further investigation, could you please provide your test code for the three use cases?
We need to reproduce the issue first; then we can help find what might be wrong.

Thanks

#include "opencv2/opencv.hpp"
#include <iostream>
#include <sstream>

int to_int(char* text_number)
{
  int result;
  std::istringstream ss(text_number);
  ss >> result;

  return result;
}

double get_time(const int64 &t)
{
  return double(t) * 1000.0 / cv::getTickFrequency();
}

int main(int argc, char** argv)
{
  int frames_cnt = 100;
  if (argc == 2) {
    frames_cnt = to_int(argv[1]);
  }
  std::cout << "Frames: " << frames_cnt << std::endl;

  //15 fps
  cv::VideoCapture cap("v4l2src queue-size=5, always-copy=false, device=/dev/video0 ! video/x-raw-yuv, width=1920, height=1080, framerate=30/1, format=(fourcc)I420 ! ffmpegcolorspace ! appsink sync=true");

  // OpenCV 3
  cap.set(cv::CAP_PROP_FPS, 30);
  cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
  cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);

/*
  // OpenCV 2.4.12
  cap.set(CV_CAP_PROP_FPS, 30);
  cap.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
  cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
*/

  // check if we succeeded
  if(!cap.isOpened()) {
    std::cout << "!cap.isOpened()" << std::endl;
    return -1;
  }
  
  int64 start_time = cv::getTickCount();
  for(int i = 0; i < frames_cnt; i++)
  {
    cv::Mat frame;
    cap >> frame; // get a new frame from camera
  }
  int64 finish_time = cv::getTickCount();

  double full_time_in_ms = get_time(finish_time - start_time);
  double time_per_frame = full_time_in_ms / frames_cnt;
  double fps = 1000.0 / time_per_frame; 
  std::cout << "Full time: " << full_time_in_ms / 1000.0 << " seconds" << std::endl;
  std::cout << "Time per frame: " << time_per_frame << " ms" << std::endl;
  std::cout << "Frames per second: " << fps << std::endl;

  return 0;
}

Hi dborisoglebskiy,

The original pipeline is:

cv::VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)I420 ! ffmpegcolorspace ! appsink");

In the above pipeline, two software color-space conversions happen on the CPU:

first YUYV is converted to I420 by v4l2src, and then I420 to RGB by ffmpegcolorspace, both on the CPU.

The OpenCV appsink accepts only RGB (opencv-3.1.0/modules/videoio/src/cap_gstreamer.cpp) for GStreamer 0.10.

Can you please try the following and let us know the findings.
For GStreamer 0.10:

cv::VideoCapture cap("v4l2src device=\"/dev/video0\" ! video/x-raw-rgb,width=1920,height=1080,framerate=30/1,format=(fourcc)RGB ! appsink");

For GStreamer 1.0:

cv::VideoCapture cap("v4l2src device=\"/dev/video0\" ! video/x-raw,width=1920,height=1080,format=BGR ! appsink");

In the course of the experiment, there were several warnings:

For gstreamer-0.10

(camera_timer:6135): GStreamer-CRITICAL **: gst_element_get_static_pad: assertion 'GST_IS_ELEMENT (element)' failed
(camera_timer:6135): GStreamer-CRITICAL **: gst_pad_get_caps_reffed: assertion 'GST_IS_PAD (pad)' failed
(camera_timer:6135): GStreamer-CRITICAL **: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(camera_timer:6135): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed
(camera_timer:6135): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed
(camera_timer:6135): GStreamer-CRITICAL **: gst_structure_get_fraction: assertion 'structure != NULL' failed

For gstreamer-1.0

(camera_timer:2559): GStreamer-CRITICAL **: gst_element_get_static_pad: assertion 'GST_IS_ELEMENT (element)' failed
(camera_timer:2559): GStreamer-CRITICAL **: gst_pad_get_current_caps: assertion 'GST_IS_PAD (pad)' failed
(camera_timer:2559): GStreamer-CRITICAL **: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(camera_timer:2559): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed
(camera_timer:2559): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed
(camera_timer:2559): GStreamer-CRITICAL **: gst_structure_get_fraction: assertion 'structure != NULL' failed
libv4l2: warning v4l2 mmap buffers still mapped on close()

In all cases, the FPS value did not change.

My camera is a "Logitech C920 - Full HD, USB 2.0".
The command "v4l2-ctl --list-formats" shows 3 supported formats:

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUV 4:2:2 (YUYV)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'H264' (compressed)
	Name        : H.264

	Index       : 2
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : MJPEG

Maybe instead of "video/x-raw" it is better to use "video/x-h264"?

Hi dborisoglebskiy,

The GStreamer warnings may indicate a setup issue; please check on a fresh setup. We are observing around 27 fps at our end.

> My camera is a "Logitech C920 - Full HD, USB 2.0"; maybe instead of "video/x-raw" it is better to use "video/x-h264"?
It depends on the use case. For lower CPU utilization you can use the video/x-h264 format.

Thanks

This comment by Andrey1984 was accidentally removed; I'm reposting it as I cannot restore it:


Could you help with acquiring the left and right images from a stereo camera separately, presuming the only way to acquire both at once is the luvcview -d /dev/video0 -f MJPG -S 1280x480 output?
If using /dev/video0 or cap(0), it returns just a single sensor image.

Can a trick like the one mentioned above be used, or something like the following:

VideoCapture cap;
cap.open(device);
cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

OpenCV version is 2.4.9.
Thanks
Andrey

Hi dborisoglebskiy,
For hardware-accelerated video encoding, please use the GStreamer plugin or the Multimedia API.

br
Chenjian