ov5693 through V4L2 interface not working in 23.2

Hello,

Anyone got a working ov5693 camera through the V4L2 interface? I followed the steps prescribed in the documentation (section “Video for Linux User Guide”), but I am getting errors from the VI module.

So what I did was:

  1. Downloaded kernel 3.10, branch “l4t/l4t-r23.2”
  2. Applied patch 0001-ARM64-adding-OV5693-V4L2-on-E3326-jetson_cv.patch, which I found in a post here (https://devtalk.nvidia.com/default/topic/920739/l4t-r23-2-for-jetson-tx1-released/)
  3. Followed the steps in the docs exactly.

Here is where the trouble starts:

  • Using the Yavta tool, there is no SRGGB10 format :(
root@tegra-ubuntu:~# yavta --enum-formats /dev/video0 
Device /dev/video0 opened: vi ().
- Available formats:
	Format 0: RG10 (30314752)
	Type: Video capture (1)
	Name: Bayer 10 RGRG.. GBGB..

	Format 1: RGGB (42474752)
	Type: Video capture (1)
	Name: Bayer 8 RGRG.. GBGB..

	Format 2: UYVY (59565955)
	Type: Video capture (1)
	Name: YUV422 (UYVY) packed

	Format 3: VYUY (59555956)
	Type: Video capture (1)
	Name: YUV422 (VYUY) packed

	Format 4: YUYV (56595559)
	Type: Video capture (1)
	Name: YUV422 (YUYV) packed

	Format 5: YVYU (55595659)
	Type: Video capture (1)
	Name: YUV422 (YVYU) packed

	Format 6: YU12 (32315559)
	Type: Video capture (1)
	Name: YUV420 (YU12) planar

	Format 7: YV12 (32315659)
	Type: Video capture (1)
	Name: YVU420 (YV12) planar

	Format 8: RGB4 (34424752)
	Type: Video capture (1)
	Name: RGBA 8-8-8-8

Video format: RG10 (30314752) 2592x1944
root@tegra-ubuntu:~#
  • and of the available formats, only YUYV seems to be supported
root@tegra-ubuntu:~# yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Unsupported video format 'SRGGB10'
root@tegra-ubuntu:~# yavta /dev/video0 -c1 -n1 -s1920x1080 -fRGB4 -Fov.raw
Unsupported video format 'RGB4'
root@tegra-ubuntu:~# yavta /dev/video0 -c1 -n1 -s1920x1080 -fYU12 -Fov.raw
Unsupported video format 'YU12'
root@tegra-ubuntu:~# yavta /dev/video0 -c1 -n1 -s1920x1080 -fYVYU -Fov.raw
Unsupported video format 'YVYU'
root@tegra-ubuntu:~# yavta /dev/video0 -c1 -n1 -s1920x1080 -fYUYV -Fov.raw
Device /dev/video0 opened: vi ().
Video format set: width: 1920 height: 1080 buffer size: 4147200
Video format: YUYV (56595559) 1920x1080
1 buffers requested.
length: 4147200 offset: 0
Buffer 0 mapped at address 0xf7298000.
0 (0) [E] 0 4147200 bytes 1458034399.005346 1458034399.005936
Captured 0 frames in 0.069844 seconds (0.000000 fps, 59378042.494702 B/s).
1 buffers released.
root@tegra-ubuntu:~#

But even then it doesn’t record anything, as the frame is empty:

root@tegra-ubuntu:~# hexdump -C ov.raw-000000.bin 
00000000  55 55 55 55 55 55 55 55  55 55 55 55 55 55 55 55  |UUUUUUUUUUUUUUUU|
*
003f4800

dmesg is not very talkative; it says:

[ 1775.699465] vi vi: MW_ACK_DONE syncpoint time out!

Needless to say, the gstreamer pipeline doesn’t start either.

Any idea what might be the problem?

The first format enumerated in yavta, “RG10”, is the user-friendly name for SRGGB10. All the other enumerated formats do not work with this sensor; only RG10 is supported by the sensor hardware.
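
For reference, the number yavta prints after “RG10” is just the FourCC value of V4L2_PIX_FMT_SRGGB10. A minimal sketch (not driver code, just an illustration of the mapping):

/* V4L2 pixel formats are 32-bit FourCC codes; V4L2_PIX_FMT_SRGGB10 is
 * built from the characters 'R','G','1','0', which yavta abbreviates
 * as "RG10". */
#include <stdio.h>
#include <stdint.h>

#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

int main(void)
{
	uint32_t srggb10 = v4l2_fourcc('R', 'G', '1', '0');
	printf("%08x\n", srggb10);	/* prints 30314752, matching the yavta listing */
	return 0;
}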

This error seems strange. You shouldn’t be getting this error unless your sensor hardware is disconnected. Make sure the camera board is connected properly.

The above command worked fine for me and I was able to capture the image data. I verified the image data using Irfanview. I obtained the yavta tool from here:

$ git clone git://git.ideasonboard.org/yavta.git
$ cd yavta
$ make

I just remembered. You might get that error sometimes during VIDIOC_STREAMOFF. Looks like some issue in the cleanup part of the driver. I guess you can ignore that error safely.

Many thanks. It seems the git repo for yavta I chose was a weird mod (do not use this at home: GitHub - fastr/yavta, a fork of git://git.ideasonboard.org/yavta.git). The command above captures frames now, and the “MW_ACK_DONE” error is gone as well.

But the gstreamer problem still exists. Shouldn’t this pipeline be correct? It gives an internal error:

root@tegra-ubuntu:~/src/yavta# gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420' ! omxh265enc ! fakesink
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingPipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.000488066
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
root@tegra-ubuntu:~/src/yavta#

Hi

Your gstreamer pipeline forces the I420 format on the v4l2 source. Since your v4l2src apparently only supports UYVY, it will not be able to negotiate a suitable format. Try leaving out the format completely:

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1920, height=(int)1080' ! omxh265enc ! fakesink

or alternatively use the video converter (nvvidconv) to convert the video stream to a supported format:

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420' ! nvvidconv ! omxh265enc ! fakesink

Tried these lines, but the result is the same. I think the problem is with the configuration of the v4l2 element itself, since I gave it only

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1920, height=(int)1080' ! fakesink

and it fails the same way :(

So with the test source “soc_camera_platform” I could start only this pipeline:

gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1280, height=(int)720, format=(string)RGBx, framerate=(fraction)30/1' ! fakesink

Format is RGBx.

Tried to use the encoder, but this required the “nvvidconv” element, which unfortunately only accepts:

video/x-raw
                 format: { I420, UYVY, NV12, GRAY8, BGRx, RGBA }

but not RGBx, so this attempt hit rock bottom.


Attempts with the real sensor were also not fruitful, pffff

Using all the verbosity possible in gst-launch-1.0 and not giving any format to the v4l2 source, as kamm suggested, I found it is automatically set to format=(string)YUY2. Moreover, for the pipeline not to give an internal error, the default resolution has to be specified, i.e. 2592x1944. Thus the following starts:

gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1' ! fakesink

Unfortunately, this cannot be used with an encoder as I want, because the src and sink formats between components do not match. “nvvidconv” does not support YUY2, and neither does the encoder “omxh265enc”:

video/x-raw
                 format: { I420, NV12 }

Any idea what to do to get an encoded video?

Hi

Since the hardware-accelerated converter (nvvidconv) does not support the video format of your video source you could use the videoconvert element instead:

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert ! omxh265enc ! fakesink

Unfortunately videoconvert has no hardware acceleration and you might not get full 30 FPS.

Thanks, kamm :)

This seems to be another step forward, as the pipeline is actually started.

root@tegra-ubuntu:~# gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh265enc ! filesink location=test_v4l2.mp4 -e
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingPipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)YUY2, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)YUY2, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, format=(string)I420
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, format=(string)I420
/GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0.GstPad:sink: caps = video/x-raw, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, format=(string)I420
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, format=(string)I420
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, format=(string)YUY2, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)YUY2, framerate=(fraction)30/1, width=(int)2592, height=(int)1944, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8 
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
===== NVENC blits (mode: 1) into block linear surfaces =====
/GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0.GstPad:src: caps = video/x-h265, alignment=(string)au, width=(int)2592, height=(int)1944, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/x-h265, alignment=(string)au, width=(int)2592, height=(int)1944, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline

And this is the last you get from this console.

This time, unfortunately, the gstreamer process hangs, and sometimes the whole system becomes unresponsive (serial console, ssh). Moreover, the system log fills with error messages.

I just cannot get an output file (using filesink), even though the pipeline tries to produce one. Stuck again!

NVIDIA guys, please comment :)

Unfortunately I could not reproduce the hang. As far as I am aware, the problem is that the output capabilities of omxh265enc are not compatible with the input capabilities of filesink. For that you need the h265parse plugin to create a proper video stream and then a muxer to pack the video stream into a container, e.g. matroskamux. There is also qtmux to create an MP4 container, but I am not sure if it works with h265parse.

Example pipeline:

$ gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh265enc ! h265parse ! matroskamux ! filesink location=test_v4l2.mkv -e

The problem (at least on my TX1 with L4T R23.1) is that you don’t have access to the h265parse plugin with the gstreamer version you get by default from NVIDIA (1.2.4). Therefore you need to build a newer version yourself (see https://devtalk.nvidia.com/default/topic/901967/jetson-tx1/gstreamer-rtp-h-265-elements-missing/)

$ gst-launch-1.0 --version
gst-launch-1.0 version 1.2.4
GStreamer 1.2.4
https://launchpad.net/distros/ubuntu/+source/gstreamer1.0

Hope I could help. Regards.

As I pointed out earlier, the OV5693 is a Bayer-only raw sensor. It does not support YUYV output from the hardware.
http://www.ovt.com/download_document.php?type=sensor&sensorid=185

And as far as I am aware, the V4L2 driver architecture does not support Bayer-to-YUYV conversion. The reason all the other unsupported formats are enumerated is that a common file is used to enumerate them. Refer to drivers/media/platform/soc_camera/camera_common.c:

static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	{V4L2_MBUS_FMT_SRGGB10_1X10, V4L2_COLORSPACE_SRGB},
	{V4L2_MBUS_FMT_SRGGB8_1X8, V4L2_COLORSPACE_SRGB},
	{V4L2_MBUS_FMT_UYVY8_2X8, V4L2_COLORSPACE_SRGB},
	{V4L2_MBUS_FMT_RGBA8888_4X8_LE, V4L2_COLORSPACE_SRGB},
	{V4L2_MBUS_FMT_YUYV8_2X8, V4L2_COLORSPACE_SRGB},
};

I am not sure if gstreamer-1.0 supports 10-bit Bayer. If it does, your best bet is to do the following in gstreamer:

v4l2src -> bayer2rgb -> autovideosink

Otherwise you’ll have to write your own V4L2 code to capture the camera data and convert it to a viewable format.
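
If you go that route, a bare-bones starting point could look like the sketch below (roughly what yavta does internally; error handling is minimal, and the device path and resolution are just assumptions for the example). It only captures one raw SRGGB10 frame to a file; converting it to a viewable format would be a separate step.

/* Minimal sketch: capture one SRGGB10 frame with the raw V4L2 API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

static void die(const char *msg) { perror(msg); exit(1); }

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	if (fd < 0) die("open");

	/* Ask the driver for 1920x1080 10-bit Bayer (RG10). */
	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 1920;
	fmt.fmt.pix.height = 1080;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;
	fmt.fmt.pix.field = V4L2_FIELD_NONE;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) die("VIDIOC_S_FMT");

	/* One memory-mapped buffer is enough for a single frame. */
	struct v4l2_requestbuffers req;
	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) die("VIDIOC_REQBUFS");

	struct v4l2_buffer buf;
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = 0;
	if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) die("VIDIOC_QUERYBUF");

	void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, buf.m.offset);
	if (mem == MAP_FAILED) die("mmap");

	/* Queue the buffer, start streaming, wait for one filled frame. */
	if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) die("VIDIOC_QBUF");
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) die("VIDIOC_STREAMON");
	if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) die("VIDIOC_DQBUF");

	/* Dump the raw Bayer data; convert/demosaic it offline. */
	FILE *out = fopen("frame.raw", "wb");
	if (!out) die("fopen");
	fwrite(mem, 1, buf.bytesused, out);
	fclose(out);

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	munmap(mem, buf.length);
	close(fd);
	return 0;
}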

Hi kamm and thanks for the help.

I was able to compile all the gstreamer-1.0 stuff (v1.7.91, the latest so far), so I got the missing elements. However, starting the line you suggest (yes, it starts OK) results in the same behaviour as yesterday: the system starts outputting a lot of errors and finally every console freezes.

I see you are using the 23.1 release, so how did you get the v4l2 interface for the camera running? I am asking because I use 23.2 with the patch prescribed in the manual, and maybe that is causing the trouble.

I put here some of the output produced in the system log:

...
[  316.638610] vi vi: initialized
[  316.642407] soc-camera-pdrv soc-camera-pdrv.0: Probing soc-camera-pdrv.0
[  316.660858] [OV5693]: probing v4l2 sensor.
[  317.535089] vi vi: Tegra camera driver loaded.
[  322.401611] compat_ioctl32: unknown ioctl 'V', dir=3, #26 (0xc050561a)
[  322.429432] compat_ioctl32: unknown ioctl 'V', dir=3, #26 (0xc050561a)
[  322.443003] compat_ioctl32: unknown ioctl 'V', dir=3, #25 (0xc0485619)
[  326.524066] vi vi: MW_ACK_DONE syncpoint time out!
[  328.524369] vi vi: MW_ACK_DONE syncpoint time out!
[  329.043360] vi vi: CSI 2 syncpt timeout, syncpt = 7, err = -11
[  329.043365] vi vi: TEGRA_CSI_CSI_CIL_STATUS 0x00000010
[  329.043369] vi vi: TEGRA_CSI_CSI_CILX_STATUS 0x00040041
[  329.043372] vi vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000080
[  329.043376] vi vi: TEGRA_VI_CSI_ERROR_STATUS 0x00000004
[  331.044214] vi vi: MW_ACK_DONE syncpoint time out!
[  331.063964] vi vi: CSI 2 syncpt timeout, syncpt = 7, err = -11
[  331.573196] Host read timeout at address 5408113c
[  331.573640] vi vi: TEGRA_CSI_CSI_CIL_STATUS 0xffffffff
[  332.076352] vi vi: TEGRA_CSI_CSI_CILX_STATUS 0xffffffff
[  332.579193] vi vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0xffffffff
[  333.081902] vi vi: MW_ACK_DONE syncpoint time out!
[  333.081948] vi vi: TEGRA_VI_CSI_ERROR_STATUS 0xffffffff
...

And then the “Host read timeout” message repeats.

@dilipkumar25
Yes, for some reason the v4l2 implementation skips the ISP modules which should convert Bayer to YUV. I still do not know the reason for this, or whether it will be updated in the future. The NVIDIA component “nvcamerasrc” does this conversion, but it is hardcoded to use only the default camera name with a predefined configuration such as resolution.

Hi again

So I have not implemented the V4L2 patches on my system, but I simulated your situation by replacing the v4l2src with a videotestsrc element, which should generate the same output as the capabilities ‘video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1’.

Now, as dilipkumar25 suggests, the camera can only generate Bayer output instead of YUY2. If this is true, you need to add a bayer2rgb plugin after the v4l2src, as he suggests. So it would be good to verify that you can indeed get a Bayer image out of the v4l2src by displaying it, before adding the additional complexity of the encoding. Could you try, as he suggests, to display the video on an HDMI display:

gst-launch-1.0 v4l2src device="/dev/video0" ! bayer2rgb ! autovideosink

Or maybe

gst-launch-1.0 v4l2src device="/dev/video0" ! bayer2rgb ! nvhdmioverlaysink

If this works, I think the v4l2src cannot output YUY2. Then you could try something like:

gst-launch-1.0 v4l2src device="/dev/video0" ! bayer2rgb ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh265enc ! h265parse ! matroskamux ! filesink location=test_v4l2.mkv -e

About the error log and crash: I have seen the ‘unknown ioctl’ errors before. We have resolved them by modifying v4l2 for a different driver. I suppose it could help in your situation as well.

diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
index e2b0a09..36b7740 100644
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -1008,104 +1008,14 @@ long v4l2_compat_ioctl32(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!file->f_op->unlocked_ioctl)
 		return ret;
 
-	switch (cmd) {
-	case VIDIOC_QUERYCAP:
-	case VIDIOC_RESERVED:
-	case VIDIOC_ENUM_FMT:
-	case VIDIOC_G_FMT32:
-	case VIDIOC_S_FMT32:
-	case VIDIOC_REQBUFS:
-	case VIDIOC_QUERYBUF32:
-	case VIDIOC_G_FBUF32:
-	case VIDIOC_S_FBUF32:
-	case VIDIOC_OVERLAY32:
-	case VIDIOC_QBUF32:
-	case VIDIOC_EXPBUF:
-	case VIDIOC_DQBUF32:
-	case VIDIOC_STREAMON32:
-	case VIDIOC_STREAMOFF32:
-	case VIDIOC_G_PARM:
-	case VIDIOC_S_PARM:
-	case VIDIOC_G_STD:
-	case VIDIOC_S_STD:
-	case VIDIOC_ENUMSTD32:
-	case VIDIOC_ENUMINPUT32:
-	case VIDIOC_G_CTRL:
-	case VIDIOC_S_CTRL:
-	case VIDIOC_G_TUNER:
-	case VIDIOC_S_TUNER:
-	case VIDIOC_G_AUDIO:
-	case VIDIOC_S_AUDIO:
-	case VIDIOC_QUERYCTRL:
-	case VIDIOC_QUERYMENU:
-	case VIDIOC_G_INPUT32:
-	case VIDIOC_S_INPUT32:
-	case VIDIOC_G_OUTPUT32:
-	case VIDIOC_S_OUTPUT32:
-	case VIDIOC_ENUMOUTPUT:
-	case VIDIOC_G_AUDOUT:
-	case VIDIOC_S_AUDOUT:
-	case VIDIOC_G_MODULATOR:
-	case VIDIOC_S_MODULATOR:
-	case VIDIOC_S_FREQUENCY:
-	case VIDIOC_G_FREQUENCY:
-	case VIDIOC_CROPCAP:
-	case VIDIOC_G_CROP:
-	case VIDIOC_S_CROP:
-	case VIDIOC_G_SELECTION:
-	case VIDIOC_S_SELECTION:
-	case VIDIOC_G_JPEGCOMP:
-	case VIDIOC_S_JPEGCOMP:
-	case VIDIOC_QUERYSTD:
-	case VIDIOC_TRY_FMT32:
-	case VIDIOC_ENUMAUDIO:
-	case VIDIOC_ENUMAUDOUT:
-	case VIDIOC_G_PRIORITY:
-	case VIDIOC_S_PRIORITY:
-	case VIDIOC_G_SLICED_VBI_CAP:
-	case VIDIOC_LOG_STATUS:
-	case VIDIOC_G_EXT_CTRLS32:
-	case VIDIOC_S_EXT_CTRLS32:
-	case VIDIOC_TRY_EXT_CTRLS32:
-	case VIDIOC_ENUM_FRAMESIZES:
-	case VIDIOC_ENUM_FRAMEINTERVALS:
-	case VIDIOC_G_ENC_INDEX:
-	case VIDIOC_ENCODER_CMD:
-	case VIDIOC_TRY_ENCODER_CMD:
-	case VIDIOC_DECODER_CMD:
-	case VIDIOC_TRY_DECODER_CMD:
-	case VIDIOC_DBG_S_REGISTER:
-	case VIDIOC_DBG_G_REGISTER:
-	case VIDIOC_DBG_G_CHIP_IDENT:
-	case VIDIOC_S_HW_FREQ_SEEK:
-	case VIDIOC_S_DV_TIMINGS:
-	case VIDIOC_G_DV_TIMINGS:
-	case VIDIOC_DQEVENT:
-	case VIDIOC_DQEVENT32:
-	case VIDIOC_SUBSCRIBE_EVENT:
-	case VIDIOC_UNSUBSCRIBE_EVENT:
-	case VIDIOC_CREATE_BUFS32:
-	case VIDIOC_PREPARE_BUF32:
-	case VIDIOC_ENUM_DV_TIMINGS:
-	case VIDIOC_QUERY_DV_TIMINGS:
-	case VIDIOC_DV_TIMINGS_CAP:
-	case VIDIOC_ENUM_FREQ_BANDS:
-	case VIDIOC_SUBDEV_G_EDID32:
-	case VIDIOC_SUBDEV_S_EDID32:
+	if (_IOC_TYPE(cmd) == 'V' && _IOC_NR(cmd) < BASE_VIDIOC_PRIVATE)
 		ret = do_video_ioctl(file, cmd, arg);
-		break;
+	else if (vdev->fops->compat_ioctl32)
+		ret = vdev->fops->compat_ioctl32(file, cmd, arg);
 
-	default:
-		if (vdev->fops->compat_ioctl32)
-			ret = vdev->fops->compat_ioctl32(file, cmd, arg);
-
-		if (ret == -ENOIOCTLCMD)
-			printk(KERN_WARNING "compat_ioctl32: "
-				"unknown ioctl '%c', dir=%d, #%d (0x%08x)\n",
-				_IOC_TYPE(cmd), _IOC_DIR(cmd), _IOC_NR(cmd),
-				cmd);
-		break;
-	}
+	if (ret == -ENOIOCTLCMD)
+		pr_debug("compat_ioctl32: unknown ioctl '%c', dir=%d, #%d (0x%08x)\n",
+			 _IOC_TYPE(cmd), _IOC_DIR(cmd), _IOC_NR(cmd), cmd);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(v4l2_compat_ioctl32);

Has anyone had any further success with this?

I am currently stuck in the same situation, where I can’t get anything in gstreamer-1.0 to negotiate with the ov5693 camera, because the ov5693 only produces 10-bit raw Bayer (RGGB10).

It seems that when using v4l2src with the ov5693 camera at /dev/video0, the only capability string that negotiates is the default:

"video/x-raw, format=(string)YUY2, framerate=(fraction)100/1, width=(int)2592, height=(int)1944,interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)1:4:7:1"

But even though this negotiates, it is not the actual format and it only produces 0x0 for every byte (not real data).

I can successfully capture RGGB data with yavta using the following line:

yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw

So I know that the actual underlying ov5693 driver works; it is just the integration with the v4l2src element where the problem lies.

Logically, I believe the capabilities for the v4l2src element should be:

"video/x-bayer, format=(string)rggb, framerate=(fraction)30/1, width=(int)1920, height=(int)1080"

since the data is RGGB, but just as with every other capability string that isn’t the default, this error gets thrown:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2943): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)

As discussed in previous posts here, eventually the data will require a video conversion element to successfully negotiate with other elements (like omxh264/5), but we can’t get to this stage yet because the v4l2src won’t negotiate with any of these converters.

Anyone have any ideas?

Also, I have seen posts saying that the proprietary NVIDIA camera source element (nvcamerasrc) does the Bayer-to-NV12 conversion internally. Is this actually true, and if so, does it use a hardware accelerator to do so?

I have the same issue as other folks in this thread. yavta can capture data, and hexdump shows valid data, but my Irfanview only shows a gray pattern, which makes it look like the settings are incorrect.

yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw

How did you guys check the RAW file? I tried other tools like dcraw and raw2bmp, but they all show decode errors.
My Irfanview settings are “10BPP, not normalized, Bayer pattern, Bayer pattern start RG”. I also tried other settings, and only this combination lets me see the grayed, imperfect image.
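
For a rough sanity check that does not depend on any particular viewer, something like the sketch below can be used (assuming a 1920x1080 capture, the usual SRGGB10 layout of one 10-bit sample per little-endian 16-bit word, and the output file name from the yavta run above); it simply drops the two low bits and writes the undemosaiced Bayer mosaic as an 8-bit grayscale PGM:

/* Rough sketch: convert a 1920x1080 SRGGB10 raw dump into an 8-bit
 * grayscale PGM of the raw Bayer mosaic, just to sanity-check the data.
 * File names and resolution are assumptions for this example. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const int width = 1920, height = 1080;
	FILE *in = fopen("ov.raw-000000.bin", "rb");
	FILE *out = fopen("ov.pgm", "wb");
	if (!in || !out) { perror("fopen"); return 1; }

	fprintf(out, "P5\n%d %d\n255\n", width, height);
	for (long i = 0; i < (long)width * height; i++) {
		uint8_t b[2];
		if (fread(b, 1, 2, in) != 2) { fprintf(stderr, "short read\n"); return 1; }
		uint16_t sample = (uint16_t)(b[0] | (b[1] << 8));	/* little-endian 16-bit word */
		fputc((sample >> 2) & 0xff, out);			/* 10 bit -> 8 bit */
	}
	fclose(in);
	fclose(out);
	return 0;
}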

A gstreamer pipeline like the one below causes a system crash:

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh265enc ! h265parse ! filesink location=test_v4l2.mp4 -e

OK, after rebooting my Windows VM, I am able to use Irfanview to check the yavta capture, and it is correct.

To me it looks like the current Tegra VI/V4L2 driver has some issue. I am just trying to capture one frame via v4l2src and it fails:

gst-launch-1.0 -vvv v4l2src device="/dev/video0" num-buffers=1 ! fakesink

Here is the error from the syslog when running this pipeline:

gst-launch-1.0 --verbose v4l2src device="/dev/video0" ! 'video/x-raw, width=(int)2592, height=(int)1944, format=(string)YUY2, framerate=(fraction)30/1' ! videoconvert  ! fakesink

ubuntu@tegra-ubuntu:~$ tail -f /var/log/syslog | grep -i 'vi\|camera\|csi\|timeout\|time out'
Apr  5 21:45:51 tegra-ubuntu kernel: [  577.785105] vi vi: vi_remove: ++
Apr  5 21:46:02 tegra-ubuntu kernel: [  588.871062] vi vi: initialized
Apr  5 21:46:02 tegra-ubuntu kernel: [  588.871178] soc-camera-pdrv soc-camera-pdrv.0: Probing soc-camera-pdrv.0
Apr  5 21:46:03 tegra-ubuntu kernel: [  589.679759] vi vi: Tegra camera driver loaded.
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.036177] vi vi: CSI 2 syncpt timeout, syncpt = 7, err = -11
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.042178] vi vi: TEGRA_CSI_CSI_CIL_STATUS 0x00000010
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.047709] vi vi: TEGRA_CSI_CSI_CILX_STATUS 0x00040041
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.053060] vi vi: TEGRA_CSI_PIXEL_PARSER_STATUS 0x00000080
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.059148] vi vi: TEGRA_VI_CSI_ERROR_STATUS 0x00000004
Apr  5 21:46:11 tegra-ubuntu kernel: [  598.065420] vi vi: MW_ACK_DONE syncpoint time out!
Apr  5 21:46:13 tegra-ubuntu kernel: [  600.055596] vi vi: CSI 2 syncpt timeout, syncpt = 7, err = -11

Could anyone from NVIDIA provide some analysis on this? Thanks

How to apply 0001-ARM64-adding-OV5693-V4L2-on-E3326-jetson_cv.patch for 23.2?

I have downloaded “kernel_src.tbz2”, extracted it, and cross-compiled in that folder.
But to apply the patch I need to init git in that folder, so what is the procedure for applying the patch?

Any success in getting the gstreamer pipeline working?

I can successfully get an image using yavta.

root@tegra-ubuntu:/home/ubuntu/yavta# ./yavta /dev/video0 -c1 -n1 -s1920x1080 -fSRGGB10 -Fov.raw
Device /dev/video0 opened.
Device `vi' on `' is a video capture (without mplanes) device.
Video format set: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
Video format: SRGGB10 (30314752) 1920x1080 (stride 3840) field none buffer size 4147200
1 buffers requested.
length: 4147200 offset: 0 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0xf6b4b000.
0 (0) [-] none 0 4147200 B 1460369667.629424 3405.333283 0.014 fps ts mono/EoF
Captured 1 frames in 0.059598 seconds (16.779059 fps, 69586112.134214 B/s).
1 buffers released.

But the gstreamer pipeline gives this kind of error:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:

Hi gents,
I’m still not able to configure my kernel so that I get the /dev/video0 device with the onboard camera.

Can one of you give me the correct kernel defconfig?
I think I’m doing it correctly, but when I activate TEGRA_CAMERA, I get compile errors…

It would be very helpful to get a config…

Second, are you guys able to use the camera and get it working with OpenCV4Tegra?

Thanks
Peter

Do you want to share your compile errors?
I followed the L4T documentation by setting these configs and did not see any issue (a .config sketch follows the list):

  • CONFIG_SOC_CAMERA_OV5693=m
  • CONFIG_VIDEO_TEGRA_VI=m
  • Disable CONFIG_SOC_CAMERA_PLATFORM
  • Disable CONFIG_SOC_CAMERA_OV13860
  • Disable CONFIG_SOC_CAMERA_TC358840
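
In .config terms, the list above corresponds roughly to the fragment below (a sketch only; the exact symbol names, in particular the TC358840 one, may differ between kernel versions, so check your own Kconfig):

CONFIG_VIDEO_TEGRA_VI=m
CONFIG_SOC_CAMERA_OV5693=m
# CONFIG_SOC_CAMERA_PLATFORM is not set
# CONFIG_SOC_CAMERA_OV13860 is not set
# CONFIG_SOC_CAMERA_TC358840 is not set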