Create a GL 3+ Context without X

Been really struggling with this, and found a lot of unhelpful information that seems like it doesn’t apply to the TX1. I have a piece of GL 3 based mapping software that we are trying to compile on the TX1.

I’ve managed to get an OpenGL 2.1 context using GLFW 3.2 (and X), but if I try to request a GL 3.0 or higher context it fails (I tried 3.0, 3.2, 3.3, 4.1, and 4.5). I’ve also tried using EGL, but eglQueryString() returns only “OpenGL_ES”.
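For reference, here is a minimal sketch of the kind of context request that fails (the 3.3 hints are just one of the versions I tried; error handling is trimmed):

// Sketch: request a GL 3.3 core context with GLFW.  Leaving the hints at
// their defaults gives the 2.1 context mentioned above; any 3.0+ request
// fails at glfwCreateWindow().
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void) {
    if (!glfwInit()) return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    GLFWwindow *win = glfwCreateWindow(640, 480, "test", NULL, NULL);
    if (!win) {
        fprintf(stderr, "context creation failed\n");
        glfwTerminate();
        return 1;
    }

    glfwMakeContextCurrent(win);
    printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));
    glfwTerminate();
    return 0;
}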

I did install Mesa as I was too lazy to download the GL and EGL headers (why doesn’t it just ship with them?), but I’m certain I’m not accidentally linking against the mesa libs. I explicitly link against the tegra lib using the absolute path, and I even went as far as to uninstall the mesa libs and still got the same error (and not a link error). The vendor strings also don’t mention Mesa.

The whole point of the Tegra platform is great GPU support. It’s not supposed to be this hard to create a mildly modern GL context on it, is it?

On x86 I pretty often run into issues like this when Mesa components get installed on top of an NVIDIA driver (e.g. after installing Steam), which usually requires me to reinstall the NVIDIA driver, but I’ve never had to do that on the Jetson.

Maybe you’re running into what’s described here: [url]https://devtalk.nvidia.com/default/topic/946136/building-an-opengl-application/?offset=4#4914221[/url]?

If Mesa overwrote something, it’s probably libglx.so (either directly or by breaking a symlink). This will usually verify whether a package overwrote the NVIDIA files:

sha1sum -c /etc/nv_tegra_release

If you know which particular file is broken, it’s easy enough to put the correct file back in place.

After an extra-long weekend/vacation, I did figure out what the issue was. There seems to be a bug in the latest version of GLFW, and downgrading to 3.1 fixed it. I can now create GL 3 core contexts. I’m still not quite sure how to make X-less contexts, but I can burn that bridge when I get to it.

Thanks for the suggestions. They were at least helpful in finding the right direction.

Food for thought…libGLX (and its content) relates to setting up OpenGL for an X11 server…libGL itself is independent of environment when using GLX or equivalent for setup. I’m not sure what the new Vulkan API does, but running OpenGL in Vulkan means some sort of replacement for GLX while GL remains constant. If you explore the changes which had to be made for OpenGL in Vulkan you are simultaneously studying changes to any other non-X11 environment.

On the other hand, it may be the windowing system you are really wanting to get rid of…if you just want to get rid of the windowing environment and have your app always in the foreground while dumping the typical window manager overhead and bugs, then you can replace the window manager session started at login (e.g. via lightdm) with directly running your application…the app would talk directly to the X11 server with no intermediate window manager layer. Do you really need to get rid of X11, or do you just need to get rid of the window manager?

We don’t strictly need to get rid of X, which is why I’m not stressing out about it too much yet. The code will be running on an embedded TX1 without a display hooked up to it. The maps it renders will be streamed to a tablet (or other low powered client) for display. X isn’t going to completely ruin our performance or anything, but it’s a hassle we probably want to avoid if we can.

You could set up the Jetson to boot to console/text mode only, and then use “ssh -Y” (or similar) to remotely access the Jetson from the tablet and launch your application…this would display on the tablet, assuming the tablet has the correct OpenGL support on it. X11 would never run on the Jetson, but the packages supporting OpenGL would still be available for compiling on it.

FYI, “ssh -Y” could be tested without actually modifying the Jetson to run text mode.

NOTE: Remote display of OpenGL this way would use the tablet’s GPU. I suspect CUDA may also attempt to run on the tablet (based on CUDA GPU operations being mistaken for graphics GPU operations).

VirtualGL and TurboVNC may be able to help you.
I haven’t tested them on the Jetson TX1, but they worked on the Jetson TK1.
[HowTo] Install VirtualGL and TurboVNC to Jetson TK1 - Jetson TK1 - NVIDIA Developer Forums

TurboVNC:
[url]http://www.turbovnc.org/[/url]
The VirtualGL Project:
[url]http://www.virtualgl.org/[/url]

TurboVNC is a remote desktop server, and VirtualGL sends the OpenGL-rendered images from the server to the client.
I can run X on the Jetson TK1 without a display connected, and TurboVNC + VirtualGL let me run the CUDA samples that use OpenGL on the Jetson while viewing the rendered screen on an Android tablet.

Well, SSH isn’t really an option in this case. We need to render map tiles to send to a client. The client (an Android tablet) does the actual display rendering, along with the UI and such.

With the TX1, we have more than enough power to generate and serve preview-resolution map tiles as the aerial photos are taken. We are trying to cut down the time it takes to get feedback on the photos, so the pilot can check coverage and such without needing to transmit them to a server and wait for a full-resolution mosaic to come back.

It sounds like the VNC method might work best, but how is the tablet communicating? I’m assuming the issue isn’t using ssh so much as it is Android not having X11.

Speaking of servers, this may be a bit out of the box, but there is no reason you couldn’t put a web server on the Jetson and drop the preview images into a directory it serves. A simple CGI program could automatically detect and present new images as they reach the folder. Multiple devices could preview what’s going on, and it would be OS- and device-independent.
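For example, a minimal sketch of such a CGI program (the directory path and the .png extension are just placeholder assumptions for illustration):

/* List whatever preview images are currently in a drop folder as an HTML
 * page.  /var/www/previews is a hypothetical path. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *dir_path = "/var/www/previews";

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><body><h1>Latest previews</h1>\n");

    DIR *dir = opendir(dir_path);
    if (dir) {
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            const char *name = entry->d_name;
            size_t len = strlen(name);
            if (len > 4 && strcmp(name + len - 4, ".png") == 0)
                printf("<p><img src=\"previews/%s\" alt=\"%s\"></p>\n", name, name);
        }
        closedir(dir);
    }

    printf("</body></html>\n");
    return 0;
}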

No no, I think it’s still not clear what we are doing.

The application started out doing flight planning for aerial photography and uses slippy map tiles (the 256 × 256 pixel images that Google Maps and everybody else uses). We can now display the stitched mosaics directly in the application as a second tile layer over the generic satellite imagery, projected onto actual terrain meshes. The stitching happens server side after uploading all of the photos. That’s what we are trying to shortcut.

@linuxdev The plan is more or less exactly what you said. The TX1 stitches the photos in real time as they are captured and we can run a trivial web server on it to serve them to the client device (which will eventually be more than just Android). We definitely don’t want to run remote X or VNC since the GUI runs natively on the client and can operate from several data sources, not just the TX1.

I think the confusion might be that we are running GL on the TX1 to perform the stitching, but it’s not doing any realtime rendering.

GL has to have a context to render to, even if it is virtual (a real video card buffer, or a frame buffer in RAM). Consider that the X11 server does not need to be one with real video hardware connected to it…there are X11 servers out there used for things like testing X11 server code which are entirely software…basically frame buffers faking being a real video display device. Such software servers would respond normally to GLX, and anything capable of generating the tiles could save to a file…there just wouldn’t be a physical monitor, although the GL software would not know that. Assuming you are required to use OpenGL, you’re required to have something like X11 or Vulkan…there is no requirement a display actually be connected if you can force the server to pretend it has a given mode (resolution, color mode, so on).
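As a concrete illustration of that last point, here is a rough sketch of drawing a tile into an offscreen framebuffer and reading the pixels back…this assumes a GL 3.0+ context is already current (however it was created), that the GL entry points are loaded (e.g. via GLEW), and draw_tile() is just a placeholder for whatever actually renders the tile:

/* Rough sketch: draw one 256x256 tile into an offscreen framebuffer and
 * read the pixels back so they can be written to a file. */
#include <GL/glew.h>

void render_tile_offscreen(void (*draw_tile)(void), unsigned char *out_rgba)
{
    GLuint fbo = 0, tex = 0;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &tex);

    /* Color target for the tile. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    glViewport(0, 0, 256, 256);
    draw_tile();                       /* whatever renders the map tile */

    /* out_rgba must hold 256*256*4 bytes; it can then be encoded and
     * dropped wherever the web server picks it up. */
    glReadPixels(0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, out_rgba);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
}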

You can access the GUI of programs running on the TurboVNC server from a VNC client on an Android tablet.
If you make a GUI program for X11 and run it on the TX1, you can see and interact with it from the Android tablet.
You don’t need to make an Android client app, and the TurboVNC server/client handle encoding/decoding to transfer the images efficiently.

If it doesn’t work, the following page explains how to allow X to start without an HDMI display at boot by adding an option to /etc/X11/xorg.conf:
http://jetsonhacks.com/2016/05/03/jetson-allow-graphics-without-hdmi/
If you run your OpenGL app on the X server on your Jetson from ssh, set the DISPLAY environment variable like this:

export DISPLAY=:0
firefox &

If you really don’t want to use X, how about using CUDA instead of OpenGL?
CUDA works without X. You can test this with the CUDA samples, though some of them use OpenGL.
You would need to rewrite your program, including the GLSL shaders.
As far as I know, CUDA doesn’t have an API for the rasterizer, but it does have texture functions.
There are sample CUDA programs that do image processing.

For posterity, I figured out a solution. The TX1 we had came with an older version of L4T (23.something?).

Once we upgraded to 24.something, EGL did support OpenGL as a client API, and we were able to go the usual route of eglQueryDevicesEXT/eglGetPlatformDisplayEXT to create an X-less GL context (a rough sketch follows the list below). We ran into a couple of problems, though:

  • If we didn't link against libGLESv2 in addition to libGL, binding a GL 3.3 core context would silently fail. No EGL errors were thrown, but all the GL functions tended to fail, including glGetString(). We are definitely not using GLES at all, so it seems odd that we need to link it.
  • The EGL extension functions we call to get an offscreen EGL display are provided by extensions that aren't listed by eglQueryString(display, EGL_EXTENSIONS). We just call eglGetProcAddress() anyway and assert that the returned pointers aren't NULL.
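For reference, here is a rough sketch of the setup described above (not our exact code; error handling is trimmed and the attribute values are illustrative):

/* Create an X-less desktop GL context via EGL_EXT_platform_device.
 * The extension entry points are fetched with eglGetProcAddress() because,
 * as noted above, they are not advertised by eglQueryString() here. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <assert.h>
#include <stddef.h>

EGLContext create_headless_gl_context(EGLDisplay *out_dpy) {
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    assert(queryDevices && getPlatformDisplay);

    EGLDeviceEXT devices[4];
    EGLint num_devices = 0;
    queryDevices(4, devices, &num_devices);
    assert(num_devices > 0);

    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    eglInitialize(dpy, NULL, NULL);

    eglBindAPI(EGL_OPENGL_API);   /* desktop GL, not GLES */

    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n = 0;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    const EGLint ctx_attribs[] = {   /* from EGL_KHR_create_context */
        EGL_CONTEXT_MAJOR_VERSION_KHR, 3,
        EGL_CONTEXT_MINOR_VERSION_KHR, 3,
        EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
        EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT_KHR,
        EGL_NONE
    };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

    /* With EGL_KHR_surfaceless_context the context can be made current
     * without a surface; otherwise create a small pbuffer first. */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    *out_dpy = dpy;
    return ctx;
}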

I feel I should make a formal bug report about these somewhere, but I’m not really sure where.

You could send a report here:
linux-tegra-bugs@nvidia.com