Video tearing: how to set XV_SYNC_TO_VBLANK in nvidia_drv.so

We are having video tearing problems using the xvimagesink gstreamer element.
We can’t use nvoverlaysink, even though that provides smooth video, because of
severe image distortion (a red screen) if it is used on both HDMI outputs.
The problem does seem to be that updates are not synchronized with vertical
blanking: we get good video on one screen if /sys/module/window/parameters/no_vsync = 0,
and the tearing problem if it is = 1.

Searching suggests that the X11 Xv extension supports an XV_SYNC_TO_VBLANK attribute (atom).

Searching the shared libraries used by Xorg seems to indicate this attribute does exist:

# lsof -p 18344 | cut -c 70- | egrep '(/usr|/lib)' | while read lib; do echo $lib; strings $lib | grep XV_SYNC; done                          
/usr/lib/xorg/Xorg
/usr/lib/aarch64-linux-gnu/libevdev.so.2.1.12
/usr/lib/aarch64-linux-gnu/libmtdev.so.1.0.0
/usr/lib/xorg/modules/input/evdev_drv.so
/usr/lib/xorg/modules/libwfb.so
/usr/lib/xorg/modules/libfb.so
/usr/lib/xorg/modules/drivers/nvidia_drv.so
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
XV_SYNC_TO_VBLANK
/usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so

The open questions are: is this flag supported, how can it be set, and does it actually work?

Setting the environment variable XV_SYNC_TO_VBLANK=1 before starting Xorg doesn’t change anything.

It does seem like it is possible to set this programmatically (mplayer patch):

Atom xv_atom = xv_intern_atom_if_exists("XV_SYNC_TO_VBLANK");
if (xv_atom == None)
    return -1;
return XvSetPortAttribute(mDisplay, xv_port, xv_atom, 1) == Success;
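
For reference, here is a minimal standalone sketch along the same lines. This is untested against nvidia_drv.so, and the port selection is simplified (it just takes the first image port of each adaptor, whereas a real client such as xvimagesink grabs a free port with XvGrabPort() first):

/* Sketch: intern XV_SYNC_TO_VBLANK and try to set it on each Xv image
 * adaptor's first port.  Build with: gcc xv_vblank.c -o xv_vblank -lX11 -lXv */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    unsigned int nadaptors = 0;
    XvAdaptorInfo *adaptors = NULL;
    unsigned int i;

    if (!dpy)
        return 1;
    if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy), &nadaptors, &adaptors) != Success)
        return 1;

    /* The atom only exists if the driver registered the attribute. */
    Atom sync_atom = XInternAtom(dpy, "XV_SYNC_TO_VBLANK", True);
    if (sync_atom == None) {
        fprintf(stderr, "XV_SYNC_TO_VBLANK is not registered\n");
        return 1;
    }

    for (i = 0; i < nadaptors; i++) {
        if (!(adaptors[i].type & XvImageMask))
            continue;
        XvPortID port = adaptors[i].base_id;   /* first port of this adaptor */
        if (XvSetPortAttribute(dpy, port, sync_atom, 1) == Success)
            printf("Set XV_SYNC_TO_VBLANK=1 on port %lu\n", (unsigned long)port);
    }

    XvFreeAdaptorInfo(adaptors);
    XCloseDisplay(dpy);
    return 0;
}

Whether an attribute set from a separate client like this persists once xvimagesink later grabs the port is itself an open question.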

The gstreamer xvimagesink does set some xv_port attributes, but not XV_SYNC_TO_VBLANK.

(Port attributes that xvimagesink sets, from sys/xvimage/xvcontext.c)

static const char autopaint[] = "XV_AUTOPAINT_COLORKEY";
static const char dbl_buffer[] = "XV_DOUBLE_BUFFER";
static const char colorkey[] = "XV_COLORKEY";
static const char iturbt709[] = "XV_ITURBT_709";

... (the code goes on to use XInternAtom() and XvSetPortAttribute() to set these)

So…

a) Can XV_SYNC_TO_VBLANK be set in nvidia_drv.so using XvSetPortAttribute()?

b) If not, how can it be set?

Thanks in advance,

Cary

Hi cobrien,
Are you able to try nveglglessink?

Thanks for the suggestion.

Unfortunately, using this sink I see exactly the same problem: tearing across
the display when there is fast horizontal panning of the camera.

This happens even if I have

export __GL_SYNC_TO_VBLANK=1

set before starting the X server. Note that I am only running Xorg, with no window manager.
This is a display-only application (control is external).

Cary

Update:

The XV_SYNC_TO_VBLANK attribute does appear to be supported by the Xv extension. This can be
seen by running a pipeline with GST_DEBUG=xcontext:6 and xvimagesink:

0:00:00.106360624 22731       0x61f860 DEBUG               xcontext xvcontext.c:85:gst_lookup_xv_port_from_adaptor: XV Adaptor NV17 Video Texture with 32 ports
0:00:00.106527151 22731       0x61f860 DEBUG               xcontext xvcontext.c:165:gst_xvcontext_get_xv_support: Checking 7 Xv port attributes
0:00:00.106541135 22731       0x61f860 DEBUG               xcontext xvcontext.c:173:gst_xvcontext_get_xv_support: Got attribute XV_SET_DEFAULTS
0:00:00.106552815 22731       0x61f860 DEBUG               xcontext xvcontext.c:173:gst_xvcontext_get_xv_support: Got attribute XV_ITURBT_709
0:00:00.106562511 22731       0x61f860 DEBUG               xcontext xvcontext.c:173:gst_xvcontext_get_xv_support: Got attribute XV_SYNC_TO_VBLANK
...

It was fairly easy to rebuild gstreamer using the script in /usr/bin/gst-install, modified
to pull gstreamer 1.8.1. I modified gst-plugins-base-1.8.1/sys/xvimage/xvcontext.c
to set XV_SYNC_TO_VBLANK to 1.
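
The change amounted to something like this, mirroring the way xvcontext.c sets the attributes listed above (a sketch rather than the exact diff; dpy and port stand in for the context's display connection and grabbed Xv port):

    static const char sync_to_vblank[] = "XV_SYNC_TO_VBLANK";
    /* Only set the attribute if the driver actually exposes it. */
    Atom atom = XInternAtom (dpy, sync_to_vblank, True);
    if (atom != None)
      XvSetPortAttribute (dpy, port, atom, 1);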

Unfortunately this had no effect on the tearing and jagged edges of the video during
fast horizontal panning.

Any suggestions?

Is there a comprehensive list of nvidia driver parameters that can be set in Xorg.conf?

Thanks in advance.

Please share a pipeline so that we can reproduce the issue.

Here is the pipeline that integrates with our display framework and is efficient, but
exhibits tearing with fast horizontal panning across scenes with vertical edges:

rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp ! rtph264depay ! h264parse ! queue  ! omxh264dec ! videoconvert ! textoverlay font-desc="Sans, 12" ! xvimagesink sync=false

The individual videos are set to render in separate X GUI windows.

This pipeline works well full-screen, with no tearing; however, we can only display on one
screen. If we display on two screens, one screen turns red and the other gets occasional red
horizontal ribbons across it:

rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp ! rtph264depay ! h264parse ! queue  ! omxh264dec ! nvvidconv ! nvoverlaysink display-id=2 sync=false

The source is an Axis camera running at 1920x1080, 30 Hz. It is on an office chair so we can
swivel it back and forth quickly to simulate the customer's pan-tilt-zoom operation.

Note that in the application we run without a window manager; control is external.
With lightdm running the video quality is somewhat better, but because of the compositing
window manager the CPU utilization (of compiz) for 3x HD streams is very high and the video lags.

Thanks,

Cary

Hi cobrien,
Do you also see the issue if you replace the rtsp source with a video file source? Does it also occur with 1080p30 local video playback, or only with rtsp streaming?

The tearing is visible when playing back a video file, although how noticeable it is depends on the amount
of motion in the video. Note that for playing back video from a local file I set sync=true, not sync=false
as we do for live playback to minimize latency.

filesrc location=/opt/data/sintel-1280-surround.mp4 ! decodebin ! videoconvert ! textoverlay font-desc="Sans, 12" ! xvimagesink sync=true

Our carrier board doesn’t have a camera, so I can’t play back local video.

Cary

I have uploaded a 33 MB transport stream file (about 10 seconds) that can be used
to test this problem.

https://send.firefox.com/download/44935637b4/#RhOQNNnOmC4I2FH8Z5lMnw

Note this will be deleted after 24 hours or two downloads.

Playing this file with

gst-launch-1.0 filesrc location=file3.ts ! decodebin ! videoconvert ! xvimagesink

on the TX2 shows tearing of vertical edges. Playing the same pipeline on a PC does not.

Hope this helps,

Cary

Hi cobrien,

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink

We have run the above pipelines without seeing tearing. Are you on a clean r28.2.1?

I went through these configurations very carefully with fresh installs of 28.2.1 on both
the NVIDIA eval board (1x HDMI) and our carrier board (2x HDMI, 1x DSI->HDMI).

Cornet carrier, window manager stopped (just Xorg)

systemctl stop lightdm
Xorg &
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
Visible tearing
CPU 33%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink
Tearing, edge distortion
CPU 7%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
Good quality video
CPU 2.5%

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink
Tearing
CPU 3%

NVIDIA eval board: same results.

Added a second HDMI display to the Cornet board.

With the window manager running, tried the overlay video sink.
First screen:
gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink
Good quality video

Second screen:
vxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552

This has to do with overlay allocations per screen. Ran the following script:

fbs="fb0 fb1"
# blank both displays
for fb in $fbs
do
        echo 4 > /sys/class/graphics/$fb/blank
done

# disconnect the hardware windows from both framebuffers...
echo 0x00 > /sys/class/graphics/fb0/device/win_mask
echo 0x00 > /sys/class/graphics/fb1/device/win_mask
# ...and reconnect, splitting the windows between the two heads
echo 0x03 > /sys/class/graphics/fb0/device/win_mask
echo 0x0c > /sys/class/graphics/fb1/device/win_mask

# unblank
for fb in $fbs
do
        echo 0 > /sys/class/graphics/$fb/blank
done

Now I could run, individually, both

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink

and

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink display-id=1

However they couldn’t be run concurrently. Doing this resulted in bright red banding covering the second display, as described in this topic (with attached picture).

https://devtalk.nvidia.com/default/topic/1042576/problem-with-nvoverlaysink-on-3rd-display-and-with-textoverlay/?offset=6#5288994

This happens with or without the window manager running.

So that rules out using nvoverlaysink. Even if this were solved, we would still have
the problem of running a screen with four videos, which is one of the requirements.

The next option that provides good quality video is to run the application while lightdm/unity/compiz
are running.

It is possible to use a pipeline terminating in xvimagesink; however, the CPU utilization of the compiz
process approaches 90%, i.e. nearly all of one core. The video doesn’t show tearing, but it does show
jerky behavior and occasional lag when displaying live video.

So right now we don’t have a good way forward.

Any ideas would be helpful.

Thanks in advance,

Cary

Hi cobrien,
xvimagesink is a third-party element and there can be memcpy operations occupying CPU cycles.

Have you tried nveglglessink? Below is an example of launching multiple windows:
https://devtalk.nvidia.com/default/topic/976743/jetson-tx1/get-rgb-frame-data-from-nvvidconv-gstreamer-1-0/post/5022878/#5022878

Running just Xorg, and the following pipeline:

gst-launch-1.0 filesrc location=file3.ts ! tsdemux ! h264parse ! omxh264dec ! nvegltransform ! nveglglessink

I would still see horizontal dislocations (tearing) of the video.

Just to add one more piece of information to this puzzle, I tried rendering the test file using
the MMAPI example program in tegra_multimedia_api/samples/00_video_decode (installed with JetPack).
Looking at the code, the underlying rendering pipeline appears to be based on EGL
(i.e. it uses NvEglRenderer).

Running it on the eval board, with the window manager running, there was no tearing.

Running it on our custom board, with 1 HDMI connected, there was no tearing.

As soon as the second HDMI monitor was connected and initialized, the tearing started up
on the first display where the video was being shown. Unplugging the second
monitor made the tearing go away.

We have a requirement for a multi-monitor display, so this is yet another problem for us.

Cary

https://devtalk.nvidia.com/default/topic/1025021/jetson-tx1/screen-tearing-when-dual-monitor/1

Looks like this is a “known bug” with TX1. I guess this is the same with the newer TX2 as well. Wow!

Dual display output is not a general case. If you need further support, please contact an NVIDIA salesperson so we can understand your use case and prioritize the issue. Thanks.

Hello DaneLLL,

This seems to be a classic NVIDIA answer to any question which reveals an issue with the product. Can you please provide me a contact name and number to reach out to prioritize the issue?

Thanks a lot.

Please go to Contact NVIDIA Sales Representatives | NVIDIA

Thanks

The topic you posted in the previous link is an old issue with GLUT. However, it looks like we are not talking about GLUT in the current one.

I wonder if you could share the app you are using. Is it just a sample from MMAPI?
Please try nvoverlaysink or the DRM renderer in the MMAPI samples first. These lightweight sinks should work without tearing.