v4l2src with GStreamer not working

I have a camera, controlled by an FPGA, which sends data in RAW10 format. I’m able to capture RAW images with v4l2 using the following command:

nvidia@nvidia-desktop:~/Pictures$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=300 --stream-to=img.raw
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.00 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.16 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.12 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 50.09 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I’ve captured 100 images and converted them to a GIF, which looks like this: [animated GIF attachment]

Now I’d like to stream the video, so I thought of using GStreamer. First, I checked the available formats, which are these:

nvidia@nvidia-desktop:~/Pictures$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'RG10'
	Name        : 10-bit Bayer RGRG/GBGB
		Size: Discrete 1920x1080
			Interval: Discrete 0.020s (50.000 fps)

I then try to run the following command to capture live video, but I get the error below. How should I proceed? Do I need to change something in the driver?

nvidia@nvidia-desktop:~/Pictures$ gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)BGRx" ! xvimagesink -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Try placing nvvidconv between the source and caps filter.
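
Something along these lines (just a sketch of that placement; the exact caps may still need adjusting, since v4l2src has to produce a format nvvidconv can accept):

gst-launch-1.0 v4l2src device="/dev/video0" ! nvvidconv ! "video/x-raw, width=1920, height=1080, format=(string)BGRx" ! xvimagesink -e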

I tried it, but I still get the same error:

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)BGRx" ! nvvidconv ! xvimagesink -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Is the data from the FPGA really BGRx? Is this Bayer data?

It is not; it is monochrome, but I haven’t found a way to specify that.
The data coming from the FPGA is RAW10 monochrome, with 10-bit pixel depth.

For a Bayer sensor you would need the bayer2rgb GStreamer element to debayer it.
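
For reference, such a pipeline would look roughly like this (a sketch only: bayer2rgb and the video/x-bayer caps cover 8-bit Bayer formats such as rggb, so a 10-bit stream would not negotiate as-is):

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-bayer, width=1920, height=1080, format=rggb" ! bayer2rgb ! videoconvert ! xvimagesink -e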

But it is not a Bayer sensor; it is monochrome. Anyway, is it possible to capture the stream without processing, and then use a third-party tool to convert it to grayscale? I ask because I’m able to capture frames with v4l2-ctl.
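
If offline processing is acceptable, one option (an untested sketch) is to keep dumping frames with v4l2-ctl as above and convert them with a third-party tool such as ffmpeg. This assumes each 10-bit sample sits in the low bits of a little-endian 16-bit word (the usual V4L2 layout for RG10/Y10); if the capture hardware left-justifies the samples instead, gray16le would be the closer match:

# convert the first 1920x1080 frame of the raw dump to a PNG
ffmpeg -f rawvideo -pixel_format gray10le -video_size 1920x1080 -i img.raw -frames:v 1 img.png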

I think you may need to modify v4l2src to support it.

OK, I’ll take a look at it. Still, is there really no way to capture the raw video? Maybe not with nvarguscamerasrc, but with some other tool…

nvarguscamerasrc only supports Bayer sensors.

So which tool would you recommend for a monochrome sensor? I don’t mind processing the data later with a third-party tool.

After reviewing again, it looks like the output is a Bayer format. I think you can use the Argus camera.
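
If the sensor really were Bayer, an Argus-based capture would look something like this (a sketch; nvarguscamerasrc routes the Bayer stream through the ISP, which is why it only works with Bayer sensors):

gst-launch-1.0 nvarguscamerasrc ! "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=50/1, format=NV12" ! nvvidconv ! xvimagesink -e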

Are you referring to the output of “v4l2-ctl -d /dev/video0 --list-formats-ext” that I listed in comment #1? That output does NOT define the driver. I set it up like this in the device tree mostly because I saw no other way to define my sensor. I understand that v4l2-ctl works because I bypass the ISP, but my sensor is not Bayer. How should I define it in the DT?

As far as I can see, there has been some progress in this thread.

To summarize: I have a monochrome (grayscale) sensor with 10-bit pixel depth, outputting data in RAW8, RAW10, RAW12, or RAW14 format.

If it’s not a Bayer sensor, v4l2src should be the only option; otherwise you need to implement your own v4l2 user-space app for it.

I’ve managed to add grayscale support by modifying the DT as follows and applying the changes described in this post:

mode_type = "raw";
pixel_phase = "y10";
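
For context, the full mode entry now looks roughly like this (only the two properties above are the actual changes; the other fields are the usual Jetson sensor-mode properties, and the values shown here are placeholders for my sensor):

mode0 {
    mode_type = "raw";
    pixel_phase = "y10";
    csi_pixel_bit_depth = "10";
    active_w = "1920";
    active_h = "1080";
    /* remaining mode properties unchanged */
};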

Now v4l2-ctl --list-formats shows the following:

nvidia@nvidia-desktop:~$ v4l2-ctl --list-formats
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'Y10 '
	Name        : 10-bit Greyscale

I’m still able to capture frames, although there is still no way to stream with gst-launch-1.0 and v4l2src.

It may be that v4l2src does not support 10-bit grayscale.
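
For what it’s worth, v4l2src does understand GRAY8 and GRAY16_LE (the V4L2 GREY and Y16 formats), so if the driver could be made to expose Y16 instead of Y10, a pipeline along these lines might negotiate (untested sketch):

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=GRAY16_LE" ! videoconvert ! xvimagesink -e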

https://devtalk.nvidia.com/default/topic/1061806/jetson-nano/how-to-display-grayscale-camera/post/5377549/#5377549