Has anybody tried to change the resolution of a video capture device (i.e., a camera) using the CV_CAP_PROP_FRAME_WIDTH and CV_CAP_PROP_FRAME_HEIGHT properties with OpenCV's VideoCapture::set() function? I am only able to stream VGA from the camera.
I'm using opencv4tegra 2.4.12 on a Jetson TK1 running L4T 21.2. I tried the following code.
//Build using g++ opencv.cpp `pkg-config --cflags --libs opencv`
#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);

    namedWindow("Preview", 1);
    Mat frame;
    for (;;)
    {
        cap >> frame; // grab a new frame from the camera
        imshow("Preview", frame);
        if (waitKey(2) >= 0) break;
    }
    return 0;
}
I tried this code with a few USB cameras and made sure that each of them supports 1280x720. But it's not working; in fact, I am only able to use 640x480.
Can someone verify whether this is a bug in opencv4tegra or whether I am doing something wrong?
Thanks for the info. I tried out your suggestion to use gstreamer in opencv. This is my modified code.
//Build using g++ opencv.cpp `pkg-config --cflags --libs opencv`
#include "opencv2/opencv.hpp"
#include <cstdlib>

using namespace cv;
using namespace std;

int main(int, char**)
{
    // putenv() keeps a pointer to the string it is given, so use a
    // static buffer rather than casting away const on a string literal
    static char env[] = "GST_DEBUG=*:3";
    putenv(env);

    const char* gst = "v4l2src device=/dev/video0 queue-size=4 always-copy=false ! "
                      "image/jpeg, width=(int)1280, height=(int)720 ! "
                      "jpegdec ! ffmpegcolorspace ! video/x-raw-rgb ! appsink";
    VideoCapture cap(gst);
    if (!cap.isOpened())
    {
        cout << "can't open camera" << endl;
        return -1;
    }

    namedWindow("Preview", 1);
    Mat frame;
    for (;;)
    {
        cap >> frame; // grab a new frame from the camera
        imshow("Preview", frame);
        if (waitKey(2) >= 0) break;
    }
    return 0;
}
The camera fails to initialize, i.e., it fails at cap.isOpened(). I also traced the system calls, and the device node is never even opened. Am I missing something?
Also, the same GStreamer pipeline works fine when I tested it with gst-launch-0.10. I wondered whether opencv4tegra is built with GStreamer support, so I ran a few tests.
The command returned nothing, which shows that opencv4tegra is built without GStreamer support. To verify this further, I built OpenCV 2.4.9 with GStreamer support myself.
The code I posted above also works fine with my custom-built OpenCV with GStreamer support.
Hence, using GStreamer with opencv4tegra is also out of the question. Is there any other way to use a camera in opencv4tegra at a resolution of my choosing?
Until this issue is fixed in opencv4tegra, I worked around it to get higher resolutions from the camera: I use V4L2 ioctls directly to grab frames, then use the Mat constructor to wrap the raw buffer in an OpenCV Mat, and finally do the appropriate colorspace conversion to BGR before doing any processing on it.
Mat yuyv_frame(CAM_HEIGHT, CAM_WIDTH, CV_8UC2, ptr_cam_frame);
cvtColor(yuyv_frame, bgr_frame, COLOR_YUV2BGR_YUY2);
We were having a similar problem with a third-party USB3 video camera. We fixed it with help from the camera manufacturer. Details are on this website:
The fix for opencv4tegra on the TK1 is still being planned; there is no firm date yet.
Have you tried the suggestion from “SpaceVRTimD” to see whether increasing the vmalloc size helps?
Hi dilipkumar25,
I tried to build OpenCV with GStreamer, but when I use ldd to check the linked libraries, it shows nothing. Could you please post the method for enabling GStreamer in OpenCV?
The opencv4tegra version provided by NVIDIA does not support GStreamer; I'm not sure about the latest one (2.4.13), though. You will have to build OpenCV from source. Refer to the following links to build OpenCV from source:
Just to confirm: I have a Logitech 615 USB cam. Are you saying that I cannot get more than 640x480 out of the box on the NVIDIA TK1? (I'm using a USB 2.0 hub.)