Streaming issues with gst-launch-1.0 with a custom camera device

I have been having trouble getting streaming going with gst-launch-1.0. I have a custom one-lane CSI panoramic camera, 1920x480 at 30 FPS. v4l2-ctl works fine at 30 FPS, with no errors.

Running the "cheese" application streams, but I see a noticeable lag in the stream.
Next we tried gst-launch-1.0 with v4l2src:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=1920, height=480, framerate=30/1' ! xvimagesink -ev

I see the first frame, then it freezes (no errors in dmesg).

Next I tried running gst-launch-1.0 with nvcamerasrc (which uses the nvcamera daemon):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)480, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)I420' ! xvimagesink -ev

It immediately fails, so I started the nvcamera daemon manually. These are the failures I see from the daemon:

NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
PCLHW_DTParser
LoadOverridesFile: looking for override file [/Calib/camera_override.isp] 1/16LoadOverridesFile: looking for override file [/data/nvcam/settings/camera_overrides.isp] 2/16LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/camera_overrides.isp] 3/16LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/camera_overrides.isp] 4/16LoadOverridesFile: looking for override file [/data/nvcam/camera_overrides.isp] 5/16LoadOverridesFile: looking for override file [/data/nvcam/settings/pffcam_center_lipffcam.isp] 6/16LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/pffcam_center_lipffcam.isp] 7/16LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/pffcam_center_lipffcam.isp] 8/16---- imager: No override file found. ----
(NvOdmDevice) Error ModuleNotPresent: V4L2Device not available (in dvs/git/dirty/git-master_linux/camera-partner/imager/src/V4L2Device.cpp, function findDevice(), line 231)
(NvOdmDevice) Error ModuleNotPresent: (propagating from dvs/git/dirty/git-master_linux/camera-partner/imager/src/V4L2Device.cpp, function initialize(), line 54)
(NvOdmDevice) Error ModuleNotPresent: (propagating from dvs/git/dirty/git-master_linux/camera-partner/imager/src/devices/V4L2SensorViCsi.cpp, function initialize(), line 97)
NvPclDriverInitializeData: Unable to initialize driver v4l2_sensor
NvPclInitializeDrivers: error: Failed to init camera sub module v4l2_sensor
NvPclStartPlatformDrivers: Failed to start module drivers
NvPclStateControllerOpen: Failed ImagerGUID 2. (error 0xA000E)
NvPclOpen: PCL Open Failed. Error: 0xf
SCF: Error BadParameter: Sensor could not be opened. (in src/services/capture/CaptureServiceDeviceSensor.cpp, function getSourceFromGuid(), line 596)
SCF: Error BadParameter: (propagating from src/services/capture/CaptureService.cpp, function addSourceByGuid(), line 781)
SCF: Error BadParameter: (propagating from src/api/CameraDriver.cpp, function addSourceByIndex(), line 276)
SCF: Error BadParameter: (propagating from src/api/CameraDriver.cpp, function getSource(), line 439)
Segmentation fault (core dumped)

For better advice, you may post the output of the following (assuming your camera is /dev/video0):

v4l2-ctl -d /dev/video0 --list-formats-ext

nvidia@tegra-ubuntu:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'RGGB'
Name        : 8-bit Bayer RGRG/GBGB
	Size: Discrete 1920x480
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)

Index       : 1
Type        : Video Capture
Pixel Format: 'RG10'
Name        : 10-bit Bayer RGRG/GBGB
	Size: Discrete 1920x480
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)

Index       : 2
Type        : Video Capture
Pixel Format: 'BG10'
Name        : 10-bit Bayer BGBG/GRGR
	Size: Discrete 1920x480
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)

Index       : 3
Type        : Video Capture
Pixel Format: 'RG12'
Name        : 12-bit Bayer RGRG/GBGB
	Size: Discrete 1920x480
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)

nvidia@tegra-ubuntu:~$

I was reading through some of the other, somewhat related topics on this forum (gst-launch-1.0 not working). Someone from NVIDIA stated that nvcamerasrc has been deprecated as of R32.1. Should I try upgrading to your latest code base to see if the problem resolves itself?

Upgrading is usually a good thing, as new releases come with fixes for previous bugs as well as new features and optimizations. Though I am not confident it will solve your problem.

It seems you are using a driver similar to ov5693 with an adapted size. You may tell us a bit more about your camera and driver, and whether any additional firmware/config or device tree modifications were needed.
Please post the working v4l2-ctl command. Also, be sure to use

--set-ctrl bypass_mode=0

with v4l2-ctl, otherwise it may interfere with the nvcamera daemon.
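For example, a simple v4l2-ctl capture test might look like this (a sketch only: the resolution and RGGB format come from your camera, while the frame count and output file name are arbitrary placeholders):

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=480,pixelformat=RGGB --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 --stream-to=test.raw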

Does your camera output these Bayer formats?
Note that gstreamer may only be able to handle 8-bit Bayer frames, so you may first try to get it working in that mode. Note, however, that you would then have to debayer the frames with the gstreamer bayer2rgb plugin, which runs on the CPU.
If your camera rather outputs 10-bit Bayer (RG10), then you would have to try the ISP path through the nvcamera stack. If you go this way, it would make sense to upgrade and start your development with argus_camera instead.
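For illustration, a minimal 8-bit Bayer pipeline of this kind could look like the following (a sketch only; the caps are assumed from the formats listed in your v4l2-ctl output):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer, format=rggb, width=1920, height=480, framerate=30/1' ! bayer2rgb ! videoconvert ! xvimagesink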


Our driver is very similar to the OV5693, although initially we were aiming for IMX185 compatibility. The FPGA design we used only had support for 1 or 2 lanes, however, so we quickly abandoned the IMX185 look and feel.

Our camera only outputs bayer RG8 format. We have the capability to output YUV, but have not tried that yet.

We have only tried to get 8-bit bayer going, since this is the only format that the camera device FPGA supports.

Here is the device-tree configuration for our camera:

		i2c@0 {
		pffcam_a@1a {
			compatible = "nvidia,pffcam";

			reg = <0x1a>;
			devnode = "video0";

			/* Physical dimensions of sensor */
			physical_w = "15.0";
			physical_h = "12.5";

			sensor_model ="pffcam";
			/* Define any required hw resources needed by driver */
			/* ie. clocks, io pins, power sources */

			/* Defines number of frames to be dropped by driver internally after applying */
			/* sensor crop settings. Some sensors send corrupt frames after applying */
			/* crop co-ordinates */
			post_crop_frame_drop = "0";

			/* Convert Gain to unit of dB (decibel) before passing to kernel driver */
			use_decibel_gain = "true";

			/* if true, delay gain setting by one frame to be in sync with exposure */
			delayed_gain = "true";

			/* enable CID_SENSOR_MODE_ID for sensor modes selection */
			use_sensor_mode_id = "true";

			/**
			* A modeX node is required to support v4l2 driver
			* implementation with NVIDIA camera software stack
			*
			* mclk_khz = "";
			* Standard MIPI driving clock, typically 24MHz
			*
			* num_lanes = "";
			* Number of lane channels sensor is programmed to output
			*
			* tegra_sinterface = "";
			* The base tegra serial interface lanes are connected to
			*
			* discontinuous_clk = "";
			* The sensor is programmed to use a discontinuous clock on MIPI lanes
			*
			* dpcm_enable = "true";
			* The sensor is programmed to use a DPCM modes
			*
			* cil_settletime = "";
			* MIPI lane settle time value.
			* A "0" value attempts to autocalibrate based on mclk_multiplier
			*
			* active_w = "";
			* Pixel active region width
			*
			* active_h = "";
			* Pixel active region height
			*
			* dynamic_pixel_bit_depth = "";
			* sensor dynamic bit depth for sensor mode
			*
			* csi_pixel_bit_depth = "";
			* sensor output bit depth for sensor mode
			*
			* mode_type="";
			* Sensor mode type, For eg: yuv, Rgb, bayer, bayer_wdr_pwl
			*
			* pixel_phase="";
			* Pixel phase for sensor mode, For eg: rggb, vyuy, rgb888
			*
			* readout_orientation = "0";
			* Based on camera module orientation.
			* Only change readout_orientation if you specifically
			* Program a different readout order for this mode
			*
			* line_length = "";
			* Pixel line length (width) for sensor mode.
			* This is used to calibrate features in our camera stack.
			*
			* mclk_multiplier = "";
			* Multiplier to MCLK to help time hardware capture sequence
			* TODO: Assign to PLL_Multiplier as well until fixed in core
			*
			* pix_clk_hz = "";
			* Sensor pixel clock used for calculations like exposure and framerate
			*
			*
			*
			*
			* inherent_gain = "";
			* Gain obtained inherently from mode (ie. pixel binning)
			*
			* min_gain_val = ""; (floor to 6 decimal places)
			* max_gain_val = ""; (floor to 6 decimal places)
			* Gain limits for mode
			* if use_decibel_gain = "true", please set the gain as decibel
			*
			* min_exp_time = ""; (ceil to integer)
			* max_exp_time = ""; (ceil to integer)
			* Exposure Time limits for mode (us)
			*
			*
			* min_hdr_ratio = "";
			* max_hdr_ratio = "";
			* HDR Ratio limits for mode
			*
			* min_framerate = "";
			* max_framerate = "";
			* Framerate limits for mode (fps)
			*
			* embedded_metadata_height = "";
			* Sensor embedded metadata height in units of rows.
			* If sensor does not support embedded metadata value should be 0.
			*/

			mode0 {/*mode PFFCAM_MODE_1920X480_30FPS*/
				mclk_khz = "12000";
				num_lanes = "1";
				tegra_sinterface = "serial_a";
				discontinuous_clk = "no";
				dpcm_enable = "false";
				cil_settletime = "0";
				dynamic_pixel_bit_depth = "8";
				csi_pixel_bit_depth = "8";
				mode_type = "bayer";
				pixel_phase = "rggb";
				pixel_t = "bayer_bggr";

				active_w = "1920";
				active_h = "480";
				readout_orientation = "0";
				line_length = "2200";
				inherent_gain = "1";
				mclk_multiplier = "3";
				pix_clk_hz = "36000000";

				min_gain_val = "0";
				max_gain_val = "12";
				min_hdr_ratio = "16";
				max_hdr_ratio = "16";
				min_framerate = "1.5";
				max_framerate = "30";
				min_exp_time = "30";
				max_exp_time = "660000";
				embedded_metadata_height = "0";

				/* WDR related settings */
				num_control_point = "4";
				control_point_x_0 = "0";
				control_point_x_1 = "2048";
				control_point_x_2 = "16384";
				control_point_x_3 = "65536";
				control_point_y_0 = "0";
				control_point_y_1 = "2048";
				control_point_y_2 = "2944";
				control_point_y_3 = "3712";
			};
			mode1 {/*mode PFFCAM_MODE_1920X1080_30FPS_TP*/
				mclk_khz = "12000";
				num_lanes = "1";
				tegra_sinterface = "serial_a";
				discontinuous_clk = "no";
				dpcm_enable = "false";
				cil_settletime = "0";
				dynamic_pixel_bit_depth = "8";
				csi_pixel_bit_depth = "8";
				mode_type = "bayer";
				pixel_phase = "rggb";
				pixel_t = "bayer_bggr";

				active_w = "1920";
				active_h = "1080";
				readout_orientation = "0";
				line_length = "2200";
				inherent_gain = "1";
				mclk_multiplier = "3";
				pix_clk_hz = "36000000";

				min_gain_val = "0";
				max_gain_val = "12";
				min_hdr_ratio = "16";
				max_hdr_ratio = "16";
				min_framerate = "1.5";
				max_framerate = "30";
				min_exp_time = "30";
				max_exp_time = "660000";
				embedded_metadata_height = "0";

				/* WDR related settings */
				num_control_point = "4";
				control_point_x_0 = "0";
				control_point_x_1 = "2048";
				control_point_x_2 = "16384";
				control_point_x_3 = "65536";
				control_point_y_0 = "0";
				control_point_y_1 = "2048";
				control_point_y_2 = "2944";
				control_point_y_3 = "3712";
			};
			ports {
				#address-cells = <1>;
				#size-cells = <0>;
				port@0 {
					reg = <0>;
					lipffcam_pffcam_out0: endpoint {
						csi-port = <0>;
						bus-width = <1>;
						remote-endpoint = <&lipffcam_csi_in0>;
						};
					};
				};
			};
		};

If your camera provides 8-bit RGGB, you may try this pipeline:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)480, framerate=(fraction)30/1' ! bayer2rgb ! videoconvert ! xvimagesink -ev

Note that bayer2rgb is in the gst-plugins-bad package, in case it is not yet installed. It may be slow, so it may be better to boost your Jetson with the MAXN nvpmodel and max clocks (see the commands after the next pipeline).
If you also want to try nvcamerasrc, you may try 1080p first:

gst-launch-1.0 -v nvcamerasrc ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1080,framerate=30/1' ! nvvidconv ! xvimagesink
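For the boost mentioned above, a minimal sequence would be the following (assuming a stock L4T install; the jetson_clocks script location differs between releases):

sudo nvpmodel -m 0
sudo /home/nvidia/jetson_clocks.sh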

Thanks for these suggestions. I will try both plus the v4l2-ctl command and send the output of all 3 to you. I assume you would like me to run the nvcamera daemon manually so that I can capture any errors that occur?

hello pholden,

we don’t support RAW8 input for nvcamerasrc, please refer to Topic 1015837 and Topic 1046071 for details.
I suggest you work with v4l2src for your sensor format.
thanks

v4l2-ctl --all
Driver Info (not using libv4l2):
	Driver name   : tegra-video
	Card type     : vi-output, pffcam 30-001a
	Bus info      : platform:15700000.vi:0
	Driver version: 4.4.38
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 0: no power)
Format Video Capture:
	Width/Height      : 1920/480
	Pixel Format      : 'RGGB'
	Field             : None
	Bytes per Line    : 2048
	Size Image        : 983040
	Colorspace        : sRGB
	Transfer Function : Default
	YCbCr Encoding    : Default
	Quantization      : Default
	Flags             :

Camera Controls

                 group_hold (intmenu): min=0 max=1 default=0 value=0
                 hdr_enable (intmenu): min=0 max=1 default=0 value=0
                    fuse_id (str)    : min=0 max=12 step=2 value='000085010101' flags=read-only, has-payload
                sensor_mode (int64)  : min=0 max=0 step=0 default=0 value=0 flags=slider
                       gain (int64)  : min=0 max=0 step=0 default=0 value=0 flags=slider
                   exposure (int64)  : min=0 max=0 step=0 default=0 value=125 flags=slider
                 frame_rate (int64)  : min=0 max=0 step=0 default=0 value=125829120 flags=slider
                bypass_mode (intmenu): min=0 max=1 default=0 value=0
            override_enable (intmenu): min=0 max=1 default=0 value=0
               height_align (int)    : min=1 max=16 step=1 default=1 value=1
                 size_align (intmenu): min=0 max=2 default=0 value=0
           write_isp_format (int)    : min=1 max=1 step=1 default=1 value=1

The attached JPG is the output from:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)480, framerate=(fraction)30/1' ! bayer2rgb ! videoconvert ! xvimagesink -ev

It looks more like a test pattern than an image, but it is moving, which we didn't get before (previously we only saw a clear first frame and then a freeze).

Output from gst-launch command:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)480, framerate=(fraction)30/1' ! bayer2rgb ! videoconvert ! xvimagesink -ev

WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.

Output from the following gst-launch command using nvcamerasrc:

gst-launch-1.0 -v nvcamerasrc ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=480,framerate=30/1' ! nvvidconv ! xvimagesink

[ 3247.047079] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6
[ 3247.556411] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6 after delay
[ 3247.564843] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6
[ 3247.624382] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6 after wait
[ 3247.632210] pffcam 30-001a: pffcam_power_off: power off, not busy, write shutdown bit to FPGA VIDREG
[ 3247.642665] pffcam 30-001a: pffcam_power_off: power off
[ 3247.900467] pffcam 30-001a: pffcam_power_off: power off, busyflag = 6 after writing shutdown bit
[ 3247.916812] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6
[ 3248.428412] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6 after delay
[ 3248.436746] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6
[ 3248.496415] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6 after wait
[ 3248.504354] pffcam 30-001a: pffcam_power_off: power off, not busy, write shutdown bit to FPGA VIDREG
[ 3248.514868] pffcam 30-001a: pffcam_power_off: power off
[ 3248.772443] pffcam 30-001a: pffcam_power_off: power off, busyflag = 6 after writing shutdown bit
[ 3248.826809] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6
[ 3249.336454] pffcam 30-001a: pffcam_power_on: power on, I2C_CTRL reg = 6 after delay
[ 3249.344630] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6
[ 3249.404420] pffcam 30-001a: pffcam_power_off: power off, I2C_CTRL reg = 6 after wait
[ 3249.412591] pffcam 30-001a: pffcam_power_off: power off, not busy, write shutdown bit to FPGA VIDREG
[ 3249.425144] pffcam 30-001a: pffcam_power_off: power off
[ 3249.684444] pffcam 30-001a: pffcam_power_off: power off, busyflag = 6 after writing shutdown bit
[ 3249.695134] nvcamera-daemon[9550]: unhandled level 2 translation fault (11) at 0x00000000, esr 0x92000006
[ 3249.705019] pgd = ffffffc03edcc000
[ 3249.708554] [00000000] *pgd=00000000ac19c003, *pud=00000000ac19c003, *pmd=0000000000000000

[ 3249.718657] CPU: 4 PID: 9550 Comm: nvcamera-daemon Not tainted 4.4.38 #8
[ 3249.725473] Hardware name: quill (DT)
[ 3249.729221] task: ffffffc1e3c2a580 ti: ffffffc0eaa38000 task.ti: ffffffc0eaa38000
[ 3249.736754] PC is at 0x402efc
[ 3249.739745] LR is at 0x402ef8
[ 3249.742765] pc : [<0000000000402efc>] lr : [<0000000000402ef8>] pstate: 60000000
[ 3249.750193] sp : 0000007f952692d0
[ 3249.753562] x29: 0000007f9526d9d0 x28: 0000000000000000
[ 3249.758930] x27: 0000000000000003 x26: 0000007f9526d340
[ 3249.764335] x25: 0000000000404000 x24: 0000000000000334
[ 3249.769710] x23: 0000007f9526b340 x22: 0000007f9526a340
[ 3249.775083] x21: 0000007f9526b264 x20: 0000007f95269350
[ 3249.780485] x19: 0000007f9526b950 x18: 0000000000000014
[ 3249.785843] x17: 0000007f96adffb0 x16: 0000007f96834540
[ 3249.791227] x15: 0000007f974d5000 x14: 7265766972446172
[ 3249.796582] x13: 656d61432f697061 x12: 2f637273206d6f72
[ 3249.801912] x11: 6620676e69746167 x10: 61706f7270282020
[ 3249.807242] x9 : 3a726574656d6172 x8 : 0000000000000040
[ 3249.812581] x7 : 0000007f906b3290 x6 : 0000000000000001
[ 3249.817924] x5 : 0000000000000000 x4 : 0000007f90000b10
[ 3249.823257] x3 : 0000000000000000 x2 : 0000000000000001
[ 3249.828592] x1 : 0000000000000000 x0 : 0000000000000000

[ 3249.835411] Library at 0x402efc: 0x400000 /usr/sbin/nvcamera-daemon
[ 3249.841677] Library at 0x402ef8: 0x400000 /usr/sbin/nvcamera-daemon
[ 3249.847942] vdso base = 0x7f974d4000

As mentioned by @JerryChang, you would need an RG10 mode in order to try nvcamerasrc (and even that might not be enough). You should avoid any request to nvcamera-daemon for now.

As your camera provides 8-bit Bayer RGGB, you may try to use that.

You may first try to capture 10 s with both v4l2-ctl and gstreamer. Be aware that each capture uses 921,600 bytes per frame (1920x480 at 8 bits per pixel), so about 275 MB for 300 frames; it may be better to use an external disk, and in any case be sure to avoid filling your rootfs:

#check known partitions usage
df -H -T
# Only if it shows that partition of current directory has more than 300MB available
v4l2-ctl --set-fmt-video=width=1920,height=480,pixelformat=RGGB --stream-mmap -d /dev/video0 --set-ctrl bypass_mode=0 --stream-count=300 --stream-to=v4l2.rggb

df -H -T
# Only if it shows that partition of current directory still has more than 300MB available
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)480, framerate=(fraction)30/1' ! filesink location=gst-v4l2src.rggb

As previously said, bayer2rgb debayers on the CPU, so on a TX2 it may be slow and some frames may be dropped. If not yet done, you may first boost your Jetson with:

sudo nvpmodel -m0                    # switch to MAXN mode, enabling all cores
sudo /home/nvidia/jetson_clocks.sh   # boost clocks to max; in recent releases this has moved to /usr/bin/jetson_clocks

Now you can try to play the captures from file with gstreamer:

gst-launch-1.0 filesrc location=v4l2.rggb blocksize=921600 ! 'video/x-bayer, width=1920, height=480, framerate=30/1, format=rggb' ! bayer2rgb ! videoconvert ! xvimagesink

gst-launch-1.0 filesrc location=gst-v4l2src.rggb blocksize=921600 ! 'video/x-bayer, width=1920, height=480, framerate=30/1, format=rggb' ! bayer2rgb ! videoconvert ! xvimagesink

Does this work in both cases?

Currently the FPGA only supports RG8. We are working on getting it to output RG10 mode and will try again and report the output.

This might give you a chance to perform debayering on the ISP, which would really improve performance. I have no experience in porting another sensor through the ISP, but I think there may be some config files to edit for this.
Someone from NVIDIA, or someone more experienced with this, could advise better.
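For what it's worth, the daemon log above shows where it searches for ISP override files; a sensor-specific override would presumably have to land in one of those locations (this is only an inference from the log output, not from documentation), e.g.:

/var/nvidia/nvcam/settings/camera_overrides.isp
/var/nvidia/nvcam/settings/pffcam_center_lipffcam.isp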

I applied the performance boost commands you suggested. I'm no longer getting the warnings about dropped frames or the computer being too slow from gst-launch. The output is still garbled, but I can see the outline of my hand when I wave it over the camera, and it seems to have very little lag now. Hoping that RG10 will clean up the output.

Do you know which config files I would need to edit to support our sensor?

After switching to RG10 and adjusting the device tree to reflect the change, I ran:

gst-launch-1.0 -v nvcamerasrc ! 'video/x-raw(memory:NVMM),format=I420,width=1920,height=1080,framerate=30/1' ! nvvidconv ! xvimagesink

It failed, so I started the nvcamera daemon manually and re-ran gst-launch. I got the following from the nvcamera daemon:

sudo /usr/sbin/nvcamera-daemon
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
PCLHW_DTParser
LoadOverridesFile: looking for override file [/Calib/camera_override.isp] 1/16LoadOverridesFile: looking for override file [/data/nvcam/settings/camera_overrides.isp] 2/16LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/camera_overrides.isp] 3/16LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/camera_overrides.isp] 4/16LoadOverridesFile: looking for override file [/data/nvcam/camera_overrides.isp] 5/16LoadOverridesFile: looking for override file [/data/nvcam/settings/pffcam_center_lipffcam.isp] 6/16LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/pffcam_center_lipffcam.isp] 7/16LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/pffcam_center_lipffcam.isp] 8/16---- imager: No override file found. ----
(NvOdmDevice) Error ModuleNotPresent: V4L2Device not available (in dvs/git/dirty/git-master_linux/camera-partner/imager/src/V4L2Device.cpp, function findDevice(), line 231)
(NvOdmDevice) Error ModuleNotPresent: (propagating from dvs/git/dirty/git-master_linux/camera-partner/imager/src/V4L2Device.cpp, function initialize(), line 54)
(NvOdmDevice) Error ModuleNotPresent: (propagating from dvs/git/dirty/git-master_linux/camera-partner/imager/src/devices/V4L2SensorViCsi.cpp, function initialize(), line 97)
NvPclDriverInitializeData: Unable to initialize driver v4l2_sensor
NvPclInitializeDrivers: error: Failed to init camera sub module v4l2_sensor
NvPclStartPlatformDrivers: Failed to start module drivers
NvPclStateControllerOpen: Failed ImagerGUID 2. (error 0xA000E)
NvPclOpen: PCL Open Failed. Error: 0xf
SCF: Error BadParameter: Sensor could not be opened. (in src/services/capture/CaptureServiceDeviceSensor.cpp, function getSourceFromGuid(), line 596)
SCF: Error BadParameter: (propagating from src/services/capture/CaptureService.cpp, function addSourceByGuid(), line 781)
SCF: Error BadParameter: (propagating from src/api/CameraDriver.cpp, function addSourceByIndex(), line 276)
SCF: Error BadParameter: (propagating from src/api/CameraDriver.cpp, function getSource(), line 439)
Segmentation fault (core dumped)

Running the following:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-bayer, format=(string)rggb, width=(int)1920, height=(int)480, framerate=(fraction)30/1' ! bayer2rgb ! videoconvert ! xvimagesink -ev

This now produces SOF errors, even though v4l2-ctl works and runs at 30 FPS.

Is there a different decoder needed for rggb10?
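For reference, one way to sanity-check the new 10-bit mode independently of gstreamer would be a raw v4l2-ctl capture, adapting the earlier command (the RG10 fourcc comes from the --list-formats-ext output above; the frame count and file name are placeholders):

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=480,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=300 --stream-to=v4l2.rg10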