Jetson Nano Headless Rendering

I am trying to use graphical applications to do “offscreen” rendering on a headless board. Inspecting the xorg.conf shows the following:

# Allow X server to be started even if no display devices are connected.
Option "AllowEmptyInitialConfiguration" "true"

This seems to indicate that an X server should start even when no monitor is plugged into the HDMI port. However, when the board is running headless and I SSH into the Nano, I get errors:

> glxinfo
Error: unable to open display

> DISPLAY=:0.0 glxinfo
No protocol specified
Error: unable to open display :0.0

To be clear, I am not looking to do X forwarding or VNC. Rather, I want to render on the Jetson Nano itself (the application will save its output as an image sequence), which requires an X server.

I’ve already looked at a couple of other solutions, but without complete success. Xvfb did allow headless rendering, but it fell back to Mesa’s software renderer (reported as VMware) and only supported OpenGL 3.3 (instead of the 4.6 actually supported by the Nano’s GPU). I also tried adding a virtual screen to my xorg.conf, but had no success there either.
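
For anyone wanting to reproduce the Xvfb comparison, this is roughly the check I used (the :99 display number is arbitrary):

> Xvfb :99 -screen 0 1280x720x24 &
> DISPLAY=:99 glxinfo | grep -i opengl

On my setup this reported Mesa’s software renderer (a VMware vendor string) at OpenGL 3.3.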

So it turns out that the Jetson Nano’s display is at :1, and “AllowEmptyInitialConfiguration” allows X to start without a monitor, but doesn’t make it start automatically on boot.

The following commands work:

> startx &
> DISPLAY=:1.0 glxinfo
name of display: :1.0
display: :1  screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
...

Does anyone know how to automatically start X on boot?

I tried adding the following to my ~/.bashrc:

# Start X if no DISPLAY
if [ -z "$DISPLAY" ]
then
  startx > /dev/null 2>&1 &
  export DISPLAY=:1.0
fi

This seems to work, but I am not sure what will happen if multiple users log in simultaneously - will the $DISPLAY variable already be set for the second user? If so, is it ok to share the display? If not, will the new display be :2.0 instead of :1.0?

Each login session should set that as an environment variable for that session only. As long as it is the same user, it should be ok to export “:1.0” for each login. Failure would occur if two different users tried this at the same time.

Note that you can test it and find out. Just log in to that user via two separate SSH connections and see if you get “:1.0” from “echo $DISPLAY”. If both sessions show this, then you are set. If not, then one of the sessions may not be a “login” session. “:2.0” would require an actual separate X server. Multiple logins can share a single server unless different users are connected simultaneously.
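
In other words, from each of two separate SSH sessions to the same user:

> echo $DISPLAY
:1.0

If the second session prints nothing instead, it did not inherit the variable and you would need to export it there as well.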

Thanks!

I tried some things out: DISPLAY is not automatically set for a second login of the same user, but that login can use the same X server started from the first shell. Also, I switched from graphical.target to multi-user.target, which makes the X server launch at :0.0. Here is my updated script:

# Start X if no DISPLAY
if [ -z "$DISPLAY" ]
then
  # Probe :0.0 to see whether an X server is already running there
  DISPLAY=:0.0 glxinfo > /dev/null 2>&1
  if [ $? -ne 0 ]
  then
    echo "starting new x server at :0.0"
    startx > /dev/null 2>&1 &
  else
    echo "connecting to existing x server at :0.0"
  fi
  export DISPLAY=:0.0
else
  echo "already configured for x server at $DISPLAY"
fi
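
For reference, the two paths look like this across logins (output sketched, not a verbatim capture):

# first login after boot - no server running yet
starting new x server at :0.0
# second login of the same user while the server is up
connecting to existing x server at :0.0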

However, when a second user attempts to use the X server started by the first user, it gives an error about not being able to connect to the DISPLAY. Some searching led me to see this as a permissions issue, with many people suggesting to call the following right after starting the X server:

xhost +

This does allow the second user to launch programs that require X (no complaints about a missing DISPLAY), but they end up crashing with a segfault. Any ideas what might be causing that?

My hope is to create some default user account that auto logs in and starts an X server. Then any user who connects via SSH can run programs that depend on X without first having to spawn their own X server. This will prevent numerous X servers from running simultaneously.

I found a solution! Turns out the .Xauthority file was only readable by the user, which was at the root of the segfault. Here are the steps I took to enable a shared X server that is automatically started on a headless board.

  1. Switch from graphical target to multi-user target
    > sudo systemctl set-default multi-user.target
    
  2. Create new dummy user that will auto login and launch X
    > sudo adduser startup
    

    choose a password (can leave other fields blank)

  3. Add the following at the end of the dummy user's .bashrc to automatically start X on login
    # Start X Server if no DISPLAY
    if [ -z "$DISPLAY" ]
    then
      DISPLAY=:0.0 glxinfo > /dev/null 2>&1
      if [ $? -ne 0 ]
      then
        echo "starting new x server at :0.0" > .xout.log
        startx -- :0 > /dev/null 2>&1 &
        sleep 5 # give the X server time to come up before calling xhost
        DISPLAY=:0.0 xhost + >> .xout.log 2>&1
      else
        echo "connecting to existing x server at :0.0"
      fi
      export DISPLAY=:0.0
    else
      echo "already configured for x server at $DISPLAY"
    fi
    
  4. Create .Xauthority file with proper read permissions
    > touch .Xauthority
    > chmod 644 .Xauthority
    
  5. Automatically log in dummy user on boot
    > sudo systemctl edit getty@tty1
    

    Add the following and save

    [Service]
    ExecStart=
    ExecStart=-/sbin/agetty -a startup --noclear %I $TERM
    
  6. Reboot
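
After the reboot, a quick sanity check from any SSH session (output abbreviated):

> who
startup  tty1   ...
> DISPLAY=:0.0 glxinfo | grep "direct rendering"
direct rendering: Yes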

After this, any other user can connect via SSH and run programs that depend on X. One note: the DISPLAY variable is not set automatically, so users will have to either prepend commands with DISPLAY=:0.0 or run export DISPLAY=:0.0 first (a per-user snippet for this is sketched below).
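
For example, each user could add this to their own ~/.bashrc (assuming the shared server stays at :0.0):

# Use the shared X server started by the "startup" user
if [ -z "$DISPLAY" ]
then
  export DISPLAY=:0.0
fi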

Can anybody get nbody running over SSH with the approach mentioned above, in a way that the output will be displayed?

# Allow containers to communicate with Xorg
$ sudo xhost +si:localuser:root
$ sudo docker run --runtime nvidia --network host -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r32.3.1

root@nano:/# apt-get update && apt-get install -y --no-install-recommends make g++
root@nano:/# cp -r /usr/local/cuda/samples /tmp
root@nano:/# cd /tmp/samples/5_Simulations/nbody
root@nano:/# make
root@nano:/# ./nbody
root@linux:/tmp/samples/5_Simulations/nbody# DISPLAY=:0.0
root@linux:/tmp/samples/5_Simulations/nbody# ./nbody 
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance) 
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation) 
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "NVIDIA Tegra X1" with compute capability 5.3

> Compute 5.3 CUDA device: [NVIDIA Tegra X1]

However, the video output never appears.
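
One detail worth double-checking in the transcript above: DISPLAY=:0.0 on a line by itself only sets an unexported shell variable, so the subsequent ./nbody may never see it. Exporting it (or prefixing the command, as in the earlier posts) rules that out:

root@linux:/tmp/samples/5_Simulations/nbody# export DISPLAY=:0.0
root@linux:/tmp/samples/5_Simulations/nbody# ./nbody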

Docker wasn’t really meant for graphical applications. It can be made to work, but doing so is hacky and requires exposing a lot of host capabilities. That being said, apps such as gstreamer should run fine in a container with no caps whatsoever, provided you use a network sink and just EXPOSE the necessary port. I don’t know about your simulation, but if you can hook it up to an encoder, you can probably send the video around as you choose, even through something as basic as netcat.
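
To sketch what I mean by a network sink (the elements, host, and port here are placeholders, not tested against your app):

$ gst-launch-1.0 videotestsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=192.168.1.100 port=5000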

As to your issue, is “localuser” your actual username? If not, I think it might work if you replaced it with yours.

I agree that it is tricky to run e.g. Ubuntu Desktop with a VNC server within Docker. However, it works. It allows connections from outside. It is terrific.

" localuser Local connection user id"
from xserver - What are xhost and xhost +si? - Ask Ubuntu

  • To me, it seems that localuser is not meant to be changed to a user name.
  • The sample is from the CUDA Samples. I believe I was previously able to run nbody over ssh -X.

Re: localuser, I have no idea. Just a guess.
Re: X11 in Docker, VNC and ssh -X will also work fine, but probably with a huge performance penalty for your kind of app on the Nano. It’s certainly the more secure option… But why not build and run it outside the container?