nvarguscamerasrc OpenCV (Solved)

I built OpenCV 4.0 with GStreamer support, but my CSI camera cannot be read by OpenCV.
I installed L4T R31 with JetPack 4.1 and replaced the stock OpenCV.
I’ve verified that nvgstcapture-1.0 works with my camera.
Can anyone provide an example of reading a CSI camera?

My Python code is:

import sys
import cv2

gst_str = ('nvarguscamerasrc ! '
    'video/x-raw(memory:NVMM), '
    'width=(int)1280, height=(int)720, '
    'format=(string)NV12, framerate=30/1 ! '
    'nvvidconv ! '
    'video/x-raw, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)RGB ! appsink')

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print('Failed to open camera!')
    sys.exit()
while True:
    ret, img = cap.read()  # grab the next image frame from the camera
    if not ret:
        break
    cv2.imshow("cam", img)
    key = cv2.waitKey(10)

Output is:

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 59.999999 fps; Analog Gain range min 1.000000, max 44.400002; Exposure Range min 44000, max 666637000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps; Analog Gain range min 1.000000, max 177.000000; Exposure Range min 58000, max 184611000;
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps; Analog Gain range min 1.000000, max 30.000000; Exposure Range min 57000, max 20480000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps; Analog Gain range min 1.000000, max 177.000000; Exposure Range min 56000, max 666479000;
GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 3 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 59.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success

(python:9915): GStreamer-CRITICAL **: 17:07:59.361: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
Failed to open camera!

Hi,
Could you check if this C++ code works?
[url]https://devtalk.nvidia.com/default/topic/1024245/jetson-tx2/opencv-3-3-and-integrated-camera-problems-/post/5210735/#5210735[/url]

The sample uses nvcamerasrc; please replace it with nvarguscamerasrc.

Hi DaneLLL,
I tried the C++ code, but it still doesn’t work. I’ve attached my code. Can you provide any ideas?

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main() {
	// open the CSI camera through a GStreamer pipeline
	VideoCapture cap("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");

	if (!cap.isOpened()) { // check if we succeeded
		cout << "Fail to open camera" << endl;
		return -1;
	}
	for (;;) {
		Mat frame;
		cap >> frame; // get a new frame from the camera
		imshow("original", frame);
		waitKey(1);
	}
	// the camera is deinitialized automatically in the VideoCapture destructor
	return 0;
}

The output is “Fail to open camera”.

I changed ‘format=(string)I420’ to ‘format=(string)NV12’, but it still doesn’t work.

It might be because you are requesting 1280x720 @24 fps, but this mode is not natively supported by the sensor.
I see 3840 x 2160 @60 fps or @30 fps and (duplicated) 1920 x 1080 @60 fps.
You may try to use one of these for nvarguscamerasrc.
If your app expects 1280x720 input, you may try to convert with nvvidconv.
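The mode check above can be sketched in a few lines of Python (a hypothetical helper, not from this thread; the mode list is copied from the GST_ARGUS log above, and it assumes nvarguscamerasrc only accepts native sensor resolutions, which is what the failure here suggests):

```python
# Sensor modes reported by GST_ARGUS above: (width, height, max fps)
SENSOR_MODES = [(3840, 2160, 60), (3840, 2160, 30), (1920, 1080, 60)]

def mode_supported(width, height, fps):
    """True if the requested caps match a native sensor mode
    (a lower framerate than the mode's maximum is still accepted)."""
    return any(w == width and h == height and fps <= f
               for (w, h, f) in SENSOR_MODES)

print(mode_supported(1280, 720, 24))   # the failing request -> False
print(mode_supported(1920, 1080, 30))  # a natively supported request -> True
```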

Hi guodebby,
For Xavier, it should be

CUDA_ARCH_BIN="7.2"

Do you change CUDA_ARCH_BIN to 7.2?

Please follow the steps below:
1 Do not install OpenCV 3.3.1 via JetPack; it is installed by default. Please un-check OpenCV 3.3.1
2 Get the script https://github.com/AastaNV/JEP/blob/master/script/install_opencv3.4.0_Xavier.sh
3 Modify CUDA_ARCH_BIN in the script

CUDA_ARCH_BIN="7.2"

4 Execute the script

$ mkdir OpenCV
$ ./install_opencv3.4.0_Xavier.sh OpenCV
$ sudo ldconfig -v

5 Build and run the sample

$ g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --libs opencv)
$ export DISPLAY=:0
$ ./simple_opencv

simple_opencv.cpp (611 Bytes)

Did anyone verify whether GStreamer works with OpenCV 3.4.3?

We have verified 3.4.0. Ideally it should work fine with 3.4.3. Other users may share their experience.

I tested it with 3.4.3 and it doesn’t work. It gives me the same error as above, i.e. the ‘GST_IS_ELEMENT’ assertion failed.

Your pipeline is wrong for nvarguscamerasrc. You may see this with gst-launch:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink
WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to nvvconv0, nvarguscamerasrc0 can't handle caps video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1

However, using 1080p resolution @30 fps in NV12 format, this works:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink

So you would just change your example to:

VideoCapture cap("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");

If you need to resize, nvvidconv will probably be able to do it, just specify wanted resolution in caps after nvvidconv and test with gst-launch (quoting caps).
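As a sketch of that resize path (a hypothetical helper, not from the thread): capture at a native sensor mode and put the wanted output size in the caps after nvvidconv:

```python
def csi_pipeline(capture_w=1920, capture_h=1080, fps=30,
                 out_w=1280, out_h=720, flip=2):
    """Build a GStreamer pipeline string for cv2.VideoCapture:
    capture at a native sensor mode, downscale with nvvidconv."""
    return (
        'nvarguscamerasrc ! '
        'video/x-raw(memory:NVMM), width=(int){cw}, height=(int){ch}, '
        'format=(string)NV12, framerate=(fraction){fps}/1 ! '
        'nvvidconv flip-method={flip} ! '
        'video/x-raw, width=(int){ow}, height=(int){oh}, format=(string)BGRx ! '
        'videoconvert ! video/x-raw, format=(string)BGR ! appsink'
    ).format(cw=capture_w, ch=capture_h, fps=fps, flip=flip,
             ow=out_w, oh=out_h)

# With OpenCV built against GStreamer, you would then open it with:
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
print(csi_pipeline())
```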

For the installation script: using make -j8 instead of plain make makes the build dramatically faster.
On the other hand, are there any ideas on how to get the scripted installation to work with Python cv2 and cv2.dnn?
Thanks
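Regarding Python: a quick way to check whether the resulting cv2 build picked up GStreamer is to grep the build information dump (a sketch; it assumes the single-line `GStreamer: YES` format of recent OpenCV builds. Older 3.x builds list per-module sub-lines like `base: YES` instead, so adapt the pattern if needed):

```python
import re

def has_gstreamer(build_info):
    """True if an OpenCV build-information dump reports GStreamer support."""
    m = re.search(r'GStreamer\s*:\s*(\S+)', build_info)
    return bool(m) and m.group(1).upper().startswith('YES')

# With OpenCV installed you would call:
#   import cv2; print(has_gstreamer(cv2.getBuildInformation()))
print(has_gstreamer('  GStreamer:                   YES (1.14.5)'))  # True
print(has_gstreamer('  GStreamer:                   NO'))            # False
```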

What am I missing to play a GStreamer stream from localhost?

#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
}
./simple_opencv
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Failed to open camera.

In another terminal:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main(void)
{
    cv::VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test ! videoconvert ! videoscale ! appsink");

    if( !cap.isOpened() )
    {
        std::cout << "Not good, open camera failed" << std::endl;
        return 0;
    }

    cv::Mat frame;
    while(true)
    {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}

references:
https://devtalk.nvidia.com/default/topic/1031294/jetson-tx1/opencv-videocapture-failed-in-capture-rtsp-video-stream/post/5247561/#5247561
https://devtalk.nvidia.com/default/topic/1007962/jetson-tx2/that-proceesing-to-open-ip-camera-with-gstreamer-and-opencv-only-display-a-still-picture-how-to-solve-it-/post/5149913/#5149913
[url]https://devtalk.nvidia.com/default/topic/1004914/gstreamer-pipeline-failed-to-open-ip-camera-in-cv-videocapture-function-/[/url]

I’d suggest trying videoconvert instead of nvvidconv.

What worked is:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
}
./simple_opencv
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480 
Gtk-Message: 00:03:21.734: Failed to load module "canberra-gtk-module"
---> NVMEDIA: Video-conferencing detected !!!!!!!!!

Thank you for pointing that out!

The next challenge seems to be improving the quality of the video.

You may set a higher bitrate. Check this post.

Hi Honey_Patouceul,
Thank you for sharing the link.
Do you mean somewhat like

#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! omxh265enc bitrate=50000000 ! h265parse ! omxh265dec ! nvoverlaysink ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
}

and could it remove the green artifacts and make the picture more similar to the output of

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue ! decodebin ! videoconvert ! xvimagesink

Thanks.
That one doesn’t seem to open a pop-up window.
I will try further combinations until the picture is better than in the screenshots attached to the previous posts.

./simple_opencv
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480 
Framerate set to : 0 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
NVMEDIA: H265 : Profile : 1 
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480

The terminal execution of the GStreamer pipeline seems fine. The quality issue arises when the stream is processed by simple_opencv over RTSP.
I mean that the line below returns fine video:

VideoCapture cap("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw, format=I420 ! appsink");

but the line below seems to miss some parameters needed to return the same quality of video:

VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! appsink");

I meant to adjust the bitrate for omxh265enc in the sender, i.e. in the pipeline passed to test-launch.
The default bitrate is very low, so the encoding loses much quality.

You may improve it by using a higher bitrate. It won’t be exactly the same as the original, but it may be enough depending on what you intend to do.
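For example, on the sender side (a hypothetical helper that rebuilds the test-launch pipeline from this thread; 8 Mbit/s is just an example value, and it assumes omxh265enc takes its bitrate property in bits per second):

```python
def rtsp_sender_pipeline(bitrate=8000000, out_w=640, out_h=480):
    """Build the pipeline string passed to test-launch, with an explicit
    bitrate on omxh265enc instead of the low default."""
    return (
        'nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, '
        'width=1920, height=1080, framerate=30/1 ! nvvidconv ! '
        'video/x-raw, width={w}, height={h}, format=NV12, framerate=30/1 ! '
        'omxh265enc bitrate={br} ! '
        'rtph265pay name=pay0 pt=96 config-interval=1'
    ).format(w=out_w, h=out_h, br=bitrate)

# You would then run:  ./test-launch "<the string printed below>"
print(rtsp_sender_pipeline())
```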

Thank you for pointing that out.
However, it seems that I have no arguments to add to the receiver below, so I will work on the transmitter parameters instead.

rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! appsink

At this point I just wanted to play with a CSI stream from a remote Jetson in OpenCV. But then I intend to process a remote CSI camera stream with OpenCV, e.g. to combine the streams of two cameras into one, or to build a kind of 360° view with more cameras.