nvivafilter: Could not get EGL display connection

Since I’m planning to use nvivafilter in one of my pipelines, I wanted to get the simplest possible pipeline working with the sample cuda process. This is the pipeline I tried:

gst-launch-1.0 -e videotestsrc ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink

But the pipeline gets stuck at the “Pipeline is PREROLLING” stage, so I set GST_DEBUG to 4 to see what is happening. This is what I get:

0:00:00.065806689 23007       0x553960 WARN                     omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search
dirs (searched in: /home/nvidia/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
Setting pipeline to PAUSED ...
0:00:00.080519027 23007       0x553960 ERROR            nvivafilter gstnvivafilter.c:684:gst_nvivafilter_egl_init: gst_nvivafilter_egl_init: Could not get EGL display connection

0:00:00.080586868 23007       0x553960 ERROR            nvivafilter gstnvivafilter.c:1404:gst_nvivafilter_start: gst_nvivafilter_start: failed gst_nvivafilter_egl_init

0:00:00.080627540 23007       0x553960 WARN                GST_PADS gstpad.c:1106:gst_pad_set_active:<nvivafilter0:sink> Failed to activate pad
Pipeline is PREROLLING ...
0:00:00.081538327 23007       0x49a8f0 WARN                GST_PADS gstpad.c:4092:gst_pad_peer_query:<videotestsrc0:src> could not send sticky events

Something about not being able to get EGL display connection. Any ideas as to what is wrong here?

You’d better use NVMM memory as input for nvivafilter (although gst-inspect says it can receive CPU memory, I have only been able to use it with contiguous memory, which makes sense for CUDA processing).

So these pipelines should work up to R28.2.0 (not sure about later releases):

#For NV12 format processing
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvivafilter customer-lib-name="./libnvsample_cudaprocess.so" cuda-process=true ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink

#For RGBA format processing
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvivafilter customer-lib-name="./libnvsample_cudaprocess.so" cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvegltransform ! nveglglessink

Thanks for the reply. Unfortunately, neither of those pipelines works for me. I’m on R28.2.1. The first pipeline fails like before, with the same “Could not get EGL display connection” message:

0:00:00.031316237  2982   0x58a860 WARN   omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search
dirs (searched in: /home/nvidia/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
Setting pipeline to PAUSED ...
0:00:00.052542402  2982   0x58a860 ERROR   nvivafilter gstnvivafilter.c:684:gst_nvivafilter_egl_init: gst_nvivafilter_egl_init: Could not get EGL display connection

0:00:00.052638050  2982   0x58a860 ERROR   nvivafilter gstnvivafilter.c:1404:gst_nvivafilter_start: gst_nvivafilter_start: failed gst_nvivafilter_egl_init

0:00:00.052694754  2982   0x58a860 WARN   GST_PADS gstpad.c:1106:gst_pad_set_active:<nvivafilter0:sink> Failed to activate pad
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.056371307  2982   0x58f5e0 WARN   GST_PADS gstpad.c:4092:gst_pad_peer_query:<capsfilter0:src> could not send sticky events

The second gives an error like this:

Setting pipeline to PAUSED ...

Using winsys: x11
ERROR: Pipeline doesn't want to pause.
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to NULL ...
Freeing pipeline ...

Since it says “using x11” and I don’t have X running, that’s not very surprising.

You may try to set DISPLAY:

export DISPLAY=:0

and retry.
On R28.2.1, you may use NV12 instead of I420.
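For example, something like this (only a sketch: the earlier pipeline with the nvvidconv output caps changed from I420 to NV12; I have not verified it on R28.2.1):

gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvivafilter customer-lib-name="./libnvsample_cudaprocess.so" cuda-process=true ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink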

As it happens, the DISPLAY environment variable is already set since I’m using X forwarding over ssh, but that hasn’t helped. That said, I want this to work without X, as that is the environment we’re using in production (and in development, too, most of the time).

Or do you mean that nvivafilter will not work without X?

Well, the nvivafilter plugin is intended to be used for CUDA processing, which in turn may require an X server…

In my case I logged into the TX2 from a Xavier with ssh -Y. The pipeline was failing to run, but setting DISPLAY to :0 made it work (there was no display attached to the TX2, so I wasn’t able to see any result, but the pipeline was running).

“ssh -Y” is supposed to set DISPLAY. It certainly does in my case. And I’m pretty sure CUDA does not require X, since I’ve already written, compiled and run CUDA code on this very machine without having X installed.

The trouble with “ssh -Y” is that the requirements for OpenGL, CUDA, and anything else the GPU does will migrate to the host machine instead of running on the Jetson. It is true that “-Y” sets the equivalent of a remote DISPLAY…but it won’t be the DISPLAY you expect. @Honey_Patouceul is correct that most uses of the GPU go through the X server. The GPU driver actually uses the X ABI and the X server loads the driver…most (though not all) GPU calls go through that ABI even when it isn’t a graphics call. You would be surprised at how many CUDA programs require an X server and a DISPLAY (basically a context naming a buffer) to work.

When you use “-Y”, and it works, most likely it is because your host PC is doing the GPU work, and not the Jetson. How many of your programs were you able to run without the “-Y”?

I mentioned “ssh -Y” only because I was told to set the DISPLAY variable. OpenGL has nothing to do with this discussion as far as I can tell; we were talking about EGL, which was specifically created to decouple APIs like OpenGL from windowing systems like X.

CUDA, on the other hand, I am sure does not need X since, as I said, I have already compiled and run CUDA programs without X. Those programs did not have any graphical output (we’re talking about CUDA after all, not OpenGL), so the fact that they were run under “ssh -Y” has no relevance.

But I feel we’re getting sidetracked. Am I to understand that X is a requirement for nvivafilter? I cannot find anything about this in the documentation, but then again, the documentation for nvivafilter is not that extensive, or at least I haven’t managed to find anything besides the gst-inspect output and a sample app, plus a few mentions in a number of other documents. If you can point me to a reference, I’d be much obliged.

FWIW, I do not believe nvivafilter depends on X. It just doesn’t make sense to me. But of course I could be dead wrong.

My only point is that much of the software you are considering goes through the X11 ABI even if it isn’t used with any kind of physical display. I am aware there are lower-level cases where CUDA can run without X11 being involved, but most of the software you see will require some sort of context…which is what DISPLAY provides, and “ssh -Y” is an alternative way of providing it: it is also a DISPLAY, but it sets up a remote DISPLAY instead of a local one. Using both DISPLAY and forwarding is a conflict…you have to pick just one of those at a time. If you pick forwarding, then it is no longer the Jetson doing the work…it won’t pick some parts for local GPU work and others for remote; all GPU work will be redirected.

If you look at the logs in “/var/log/Xorg.0.log” you will find mention of a few ABI versions. These are standards for X. The NVIDIA GPU driver has the same ABI version built into it, and the Xorg binary is generally the means of interfacing with that driver. As an example:

[     9.410] (II) Module ABI versions:
[     9.410]    X.Org ANSI C Emulation: 0.4
[     9.410]    X.Org Video Driver: 20.0
[     9.411]    X.Org XInput driver : 22.1
[     9.411]    X.Org Server Extension : 9.0

Calling it a Video Driver is misleading. It would make more sense to call it something like a “GPU Interface Driver”.

So what about nvivafilter? Are you suggesting installing X would fix the problem? Should I have it running, too?

I don’t know about nvivafilter, but when you get an error similar to yours, the first thing I would do on a headless system is install a virtual X server and run it, logged in as the same user that runs the application (an actual X server with a display is simpler). Consider it a test which is necessary due to your earlier log content. See this excerpt:

Using winsys: x11

What the DISPLAY variable does (and “ssh -Y” is just a remote/distributed variation on this) is provide a context. That context is essentially a buffer with an interface accessed via the X server. The interface doesn’t actually care if there is video hardware examining that buffer or not, and it doesn’t care if the operations imposed on that buffer are performed by CPU or GPU. The number of cases where an X server is not used is rather tiny, and I think people assume that having an X server is slow because it is a desktop system, but this isn’t the case: An X server does not provide a desktop…an X server simply runs a single program, and if it turns out that program is a login manager or a desktop, then you get a normal GUI environment. Consider a virtual X server. I don’t have any advice on a particular server.
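As a rough sketch only (Xvfb is just one example of a virtual X server, not a specific recommendation, and the package name and display number :1 are assumptions), such a headless test could look something like this:

# Install and start a virtual framebuffer X server on display :1
sudo apt-get install xvfb
Xvfb :1 -screen 0 1280x720x24 &
# Point the pipeline at the virtual display and retry
export DISPLAY=:1
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvivafilter customer-lib-name="./libnvsample_cudaprocess.so" cuda-process=true ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink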

Okay. Thank you for all the effort you are putting into this. Unfortunately, this is not working. I was really hoping one of the Nvidia people, who might actually know how nvivafilter works, would weigh in here. As to X, I really think we should let that line of thinking rest. I do know how X works.

So I managed to find the issue on my own. The problem was actually the existence of the DISPLAY environment variable. Unset that, and everything works fine.
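In other words, taking the pipeline suggested above as an example:

unset DISPLAY
gst-launch-1.0 videotestsrc is-live=true ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvivafilter customer-lib-name="./libnvsample_cudaprocess.so" cuda-process=true ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink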


But how can I still use X11 forwarding? What if I want EGL to forward its display to the X11 server? Is there a way to keep $DISPLAY set and still make it work? Thanks.

If you use “ssh -Y ip_of_jetson” or “ssh -X ip_of_jetson”, then apps started on command line will forward to the host PC. Beware that this may not do what you think it does if CUDA is involved. Forwarding implies any X events (including CUDA) are forwarded to the host PC GPU, and do not run on the Jetson (and so for example there will be a missing library error if the host PC lacks either the X library or CUDA library of the proper version).

If, on the command line of the Jetson (even if over ssh), you manually “export DISPLAY=:0” (or maybe “:1”, that varies), then it will run and display directly on the Jetson.
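If you are unsure which display number the local X server is using, one generic way to check (not Jetson-specific, and assuming a default X setup) is to list the X sockets:

# A socket named "X0" means the local server is DISPLAY=:0
ls /tmp/.X11-unix/
export DISPLAY=:0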

If you require remote viewing of a CUDA app then you should consider a virtual desktop environment where the desktop runs on the Jetson, but is virtual instead of going to a real monitor…and then the client software on the host PC interacts with this (X events don’t forward). I have no recommendations on virtual desktop software.

Thanks for the reply!

I switched to using xRDP. In the remote desktop there is no error from ‘glxinfo’, but I still get an error when I try to run the DeepStream sample with display (it runs well if I attach a monitor directly to the Jetson). It seems xRDP is still using X11 and the problem still exists. Should I look for a virtual remote desktop app? Thank you very much!

I couldn’t give you advice on which virtual desktop, but in the long run, if you want everything on the Jetson (and especially with the ability to display elsewhere), then a virtual desktop is pretty much your only way of proceeding.