Problem with nvoverlaysink on 3rd display, and with textoverlay

We are trying to get higher quality video than we can with xvimagesink. The major problem
with that sink is a horizontal 'tear' across the video, possibly due to a lack of sync between
frame updates and display updates. This is particularly noticeable with fast horizontal panning
across vertical edges. We need to use sync=false to cut down on latency. This is with HD
(1920x1080) video from an Axis camera.

As an alternative, we are looking at nvoverlaysink, which does not seem to exhibit this problem.
I can get it to display on two of the three displays (2x HDMI, one DSI to HDMI converter).
I'm using display-id=0,1,2 to select the displays.

The error I get on the last one is:

NvxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552

Which is similar to the error reported here:

https://devtalk.nvidia.com/default/topic/1037264/?comment=5270892

I've distributed the overlay windows between the framebuffers as follows:

cd /sys/class/graphics/fb0
echo 4 > blank                 # power down the head (FB_BLANK_POWERDOWN)
echo 0x3 > device/win_mask     # assign overlay windows to this head
echo 0 > blank                 # unblank
cd /sys/class/graphics/fb1
echo 4 > blank
echo 0x0c > device/win_mask
echo 0 > blank
cd /sys/class/graphics/fb2
echo 4 > blank
echo 0x30 > device/win_mask
echo 0 > blank

As suggested in the reference.
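To confirm the assignments took effect, the masks can be read back (a quick sanity check; win_mask is a readable sysfs attribute):

cat /sys/class/graphics/fb*/device/win_mask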

Also, I cannot seem to put a "textoverlay" element in the pipeline without getting an error.
Here's the pipeline I'm using (without the textoverlay):

gst-launch-1.0 rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp \
 ! queue \
 ! decodebin \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=I420' \
 ! nvoverlaysink sync=false overlay-w=1918 overlay-h=1048 overlay-x=2 overlay-y=2 display-id=1
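For reference, the full set of nvoverlaysink properties (including the display-id, overlay, and overlay-profile ones asked about below) can be listed with the standard GStreamer inspector:

gst-inspect-1.0 nvoverlaysink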

My questions are:

a) Is there a way to prevent 'tearing' using xvimagesink or another renderer that will render directly into an X window buffer?

b) Is there a limit on the number of overlays, or on how they are distributed or accessed?

c) In nvoverlaysink, what is the difference between display-id, overlay, and overlay-profile?

d) How do these map to bits in device/win_mask?

e) What elements or capsfilters would I need to insert a "textoverlay" element into the above pipeline?

f) Any other recommendations for smooth video display?

Thanks in Advance

Cary

Hi cobrien,

  1. We are not using xvimagesink since it is a third-party element. If you still need to use it, please share a way to reproduce this issue with us. Also, would you mind using eglglessink? This sink is EGL + X based (see the sketch after this list).

  2. The total number of overlays on TX2 is 6, no matter how many displays you have connected.
    Under a normal use case, lightdm occupies one. Thus, if you don't disable lightdm, there are 5 overlay/display windows left. If you want to know more about windows/overlays, you could refer to TRM → Display Controller.

  3. display-id indicates the physical monitor. If you have 3 monitors, the ids should be 0~2.
    overlay indicates the display controller window, as described in answer 2.
    overlay-profile should give out the calculated jitter of the rendering.

  4. The mapping of win_mask is not hard to understand. As mentioned, there are 6 overlays on a TX2 system.
    A byte is used to indicate the window distribution for each monitor.

For example, say I have 3 monitors and would like to share these 6 overlays among them.

I could give the two windows with ids 5 and 6 to display 1, ids 3 and 4 to display 2, and ids 1 and 2 to display 3:

id   xx123456  
d1   00000011  -> 0x3 for display 1
d2   00001100  -> 0xc for display 2
d3   00110000  -> 0x30 for display 3

  5. What is the textoverlay here? Are you talking about the nvosd library? There is a sample among the MMAPI samples.

  6. You could try either GStreamer with nvoverlaysink or an MMAPI sample.
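For item 1, a minimal test of the EGL sink could look like the following (a sketch: the element name is assumed to be nveglglessink on L4T 28.x releases; older releases shipped it as eglglessink):

gst-launch-1.0 rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp \
 ! queue \
 ! decodebin \
 ! nvvidconv \
 ! 'video/x-raw, format=I420' \
 ! nveglglessink sync=false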

Just found this thread for textoverlay.
https://devtalk.nvidia.com/default/topic/982988/jetson-tx1/nvvidconv-plugin-problem-with-higher-gstreamer-version-than-1-2-4/post/5044127/#5044127

Thanks for the feedback.

a) eglglessink: I tried this. It will only run with Xorg running, not without X, and not with
lightdm. Even running under Xorg there is a noticeable 'tear', a horizontal shift in the video,
about 1/4 of the way down from the top, when the camera pans.

b) I could not get nvvideosink to work; it is unclear how to pass the "display" pointer parameter.

c) The highest quality video by far is with nvoverlaysink; however, this only works on 2 of the
3 monitors. I am investigating the overlay window allocations. One problem is that there
is a requirement for a quad (2x2) display, which would be a problem with just two overlays
per display. It will take a while to restructure the existing xvimagesink-based
application to work with nvoverlaysink.

d) Interestingly, xvimagesink had better quality video running inside lightdm. When run outside,
with just Xorg running, there was the tearing problem. There is perhaps some interaction with
compiz behind the scenes that makes it work better; I couldn't isolate it.

e) I'm using the gstreamer textoverlay element to display text on the video stream. The pipeline
I would like to be able to run is something like this:

gst-launch-1.0 rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp \
 ! decodebin \
 ! videoconvert \
 ! textoverlay text=ABCD \
 ! nvvidconv ! nvoverlaysink

However I get an error from the decoder:

ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0
  /GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0: Internal data stream error.

Even though, if I just drop the data after textoverlay, it runs fine:

gst-launch-1.0 rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp \
 ! decodebin ! videoconvert ! textoverlay text=ABCD ! fakesink

Why this causes problems for the decoder isn't clear to me.

Thanks for any insight you can provide.

Cary

You may try adding the verbose option -v to gst-launch-1.0 and looking at the negotiated formats and framerates, especially the source and sink caps of omxh264dec. If you see differences, you may try specifying the caps to match the working case.
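For example, pinning the caps explicitly around textoverlay might look like this (a sketch; the exact formats to specify are whatever -v reports for the working fakesink case):

gst-launch-1.0 -v rtspsrc location=rtsp://56.168.8.200/axis-media/media.amp \
 ! decodebin \
 ! videoconvert \
 ! 'video/x-raw, format=I420' \
 ! textoverlay text=ABCD \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=I420' \
 ! nvoverlaysink sync=false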

WayneWWW

With your suggestion in #2 about setting device/win_mask to allocate the overlays, I was
able to run 3 nvoverlaysink pipelines, one on each display. Note that I had to
clear all the overlays first, then set the ones I wanted for each fb/display, as outlined below.
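The sequence, in outline (a sketch; it assumes fb0..fb2 map to the three displays as in the earlier example, and must run as root):

# First blank every framebuffer and clear all overlay assignments,
# so that no window is claimed by two heads at once.
for fb in /sys/class/graphics/fb0 /sys/class/graphics/fb1 /sys/class/graphics/fb2; do
   echo 4 > $fb/blank           # FB_BLANK_POWERDOWN
   echo 0 > $fb/device/win_mask
done
# Then hand out two overlay windows per head and unblank.
echo 0x03 > /sys/class/graphics/fb0/device/win_mask
echo 0x0c > /sys/class/graphics/fb1/device/win_mask
echo 0x30 > /sys/class/graphics/fb2/device/win_mask
for fb in /sys/class/graphics/fb0 /sys/class/graphics/fb1 /sys/class/graphics/fb2; do
   echo 0 > $fb/blank           # unblank
done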

Unfortunately, I’ve got a video problem on HDMI-2. There is a huge red ragged line that
appears across the screen, gets bigger and bigger, fills the whole screen, and then
goes away. The cycle takes less than a minute. I don’t seem to be able to attach
a picture.

Thanks in advance.

Cary

cobrien,

For this red line issue, could you try to disable lightdm and try again?
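On the Ubuntu-based L4T image this would be something like the following (the service name lightdm is an assumption based on the stock rootfs):

sudo systemctl stop lightdm      # stop the display manager for this session
sudo systemctl disable lightdm   # optionally, keep it from starting at boot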

An alternative way to resolve this is to add this line to Xorg.conf:

Option      "TegraReserveDisplayBandwidth" "false"

WayneWWW

I tried adding this to Xorg.conf:

...
Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
# Allow X server to be started even if no display devices are connected.
    Option      "AllowEmptyInitialConfiguration" "true"
    Option      "TegraReserveDisplayBandwidth" "false"
EndSection

But it didn't seem to be accepted, based on Xorg.0.log:

...
[  4440.103] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[  4440.111] (II) Module glx: vendor="NVIDIA Corporation"
[  4440.111]    compiled for 4.0.2, module version = 1.0.0
[  4440.111]    Module class: X.Org Server Extension
[  4440.111] (II) NVIDIA GLX Module  28.1.0  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-553)  Thu Jul 20 00:50:32 PDT 2017
[  4440.111] (II) LoadModule: "nvidia"
[  4440.111] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[  4440.112] (II) Module nvidia: vendor="NVIDIA Corporation"
[  4440.112]    compiled for 4.0.2, module version = 1.0.0
[  4440.112]    Module class: X.Org Video Driver
[  4440.112] (II) NVIDIA dlloader X Driver  28.1.0  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-553)  Thu Jul 20 00:51:56 PDT 2017
...
[  4445.689] (WW) NVIDIA(0): Option "TegraReserveDisplayBandwidth" is not used

The problem with the red streaks is still there if I try
to run 2 decode streams terminating in nvoverlaysink elements.

It is worse if I change from 2 SD streams to 1 SD and 1 HD stream.
With 2 SD streams I get periodic red striping across one display.
With 1 SD and 1 HD stream I get partly transparent red over the
whole second screen.

Note that our carrier board supports 3 monitors:

0 (DSI-0) goes to a DSI to HDMI converter circuit
1 (HDMI-0)
2 (HDMI-1)

Also note that using gstreamer pipelines terminating in xvimagesink elements
does not show the red banding, but unfortunately the video shows tearing
and ragged edges with fast motion (which is what we were trying to fix).

Ideas?

Thanks in Advance,

Cary

I don't know why you have this line in the above log. What release are you using?

[  4445.689] (WW) NVIDIA(0): Option "TegraReserveDisplayBandwidth" is not used

You suggested trying this two messages up.

So what release version of the BSP are you using? Please read my comment carefully. I don't want to waste your time.

If this log is shown, it looks like you are using a BSP that is prior to rel-28.2. Could you confirm?

Finally, some good progress. I now understand why you were asking about the
"TegraReserveDisplayBandwidth" flag.

Our current firmware is based on:

Tegra186_Linux_R28.1.0_aarch64.tbz2
Tegra_Linux_Sample-Root-Filesystem_R28.1.0_aarch64.tbz2

That flag is not supported.

In Xorg.0.log:

[    17.326] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[    17.359] (II) Module nvidia: vendor="NVIDIA Corporation"
[    17.359]    compiled for 4.0.2, module version = 1.0.0
[    17.359]    Module class: X.Org Video Driver
[    17.359] (II) NVIDIA dlloader X Driver  28.1.0  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-553)  Thu Jul 20 00:51:56 PDT 2017

...

[    23.284] (WW) NVIDIA(0): Option "TegraReserveDisplayBandwidth" is not used

I completely cleared out the build area and started over with:

Tegra186_Linux_R28.2.1_aarch64.tbz2
Tegra_Linux_Sample-Root-Filesystem_R28.2.1_aarch64.tbz2

Now it is running the newer driver and the flag is accepted:

[    55.285] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[    55.307] (II) Module nvidia: vendor="NVIDIA Corporation"
[    55.307]    compiled for 4.0.2, module version = 1.0.0
[    55.307]    Module class: X.Org Video Driver
[    55.307] (II) NVIDIA dlloader X Driver  28.2.1  Release Build  (integ_stage_rel)  (buildbrain@mobile-u64-773)  Thu May 17 00:16:09 PDT 2018
[    55.307] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
...

[   790.905] (**) NVIDIA(0): Option "TegraReserveDisplayBandwidth" "false"

I believe it is possible that I had old drivers and a new rootfs: the nvidia drivers are copied
out of the Linux_for_Tegra directory into the rootfs, so the result depends on the order in
which the changes were made.
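For reference, the sequence I should have followed after unpacking both archives (a sketch from the L4T quick start; the TX2 flash target names are as I recall them for R28.x):

cd Linux_for_Tegra
sudo ./apply_binaries.sh              # copies the NVIDIA drivers into rootfs/
sudo ./flash.sh jetson-tx2 mmcblk0p1  # flashes the board over USB recovery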

Now that the new drivers are in place, if I adjust the masks in

/sys/class/graphics/fb?/device/win_mask

so that each display has 2 overlay bits, I can finally run nvoverlaysink (which
provides the best video performance) on all 3 monitors without tearing and without
any of the monitors turning red.

Since I can only run one nvoverlaysink, I'm going to have to redo the quad screen using
the gstreamer 'videomixer' element. An initial test (sketched below) shows this to be
possible, but it does seem to use a lot of CPU.
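This is roughly what the initial test looked like (a sketch with videotestsrc standing in for the real decoded camera branches; the positions assume a 1920x1080 output split into 960x540 quadrants):

gst-launch-1.0 videomixer name=mix \
   sink_0::xpos=0   sink_0::ypos=0 \
   sink_1::xpos=960 sink_1::ypos=0 \
   sink_2::xpos=0   sink_2::ypos=540 \
   sink_3::xpos=960 sink_3::ypos=540 \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=I420' \
 ! nvoverlaysink sync=false display-id=1 \
 videotestsrc ! 'video/x-raw, width=960, height=540' ! mix.sink_0 \
 videotestsrc ! 'video/x-raw, width=960, height=540' ! mix.sink_1 \
 videotestsrc ! 'video/x-raw, width=960, height=540' ! mix.sink_2 \
 videotestsrc ! 'video/x-raw, width=960, height=540' ! mix.sink_3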

Note that without

Option      "TegraReserveDisplayBandwidth" "false"

the red monitor problem returns, even with the newer rootfs and drivers.

Sorry about the confusion, and thanks for your patience. Our customer will
be very glad to hear there has been some progress.

Next project is to get the textoverlay back working.

Cary