PRIME offloading not working on Ubuntu with Nvidia 440 driver

Trying glxinfo using Intel:
$ glxinfo | grep vendor

server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center

Then trying glxinfo using PRIME offloading and getting an error:
$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor

X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  39
  Current serial number in output stream:  40

$ lsmod | grep nvidia

nvidia_uvm            954368  0
nvidia_drm             45056  2
nvidia_modeset       1114112  1 nvidia_drm
nvidia              20402176  2 nvidia_uvm,nvidia_modeset
drm_kms_helper        184320  2 nvidia_drm,i915
drm                   487424  16 drm_kms_helper,nvidia_drm,i915
ipmi_msghandler       106496  2 ipmi_devintf,nvidia

$ nvidia-smi

Mon Feb  3 02:40:46 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce MX150       Off  | 00000000:02:00.0 Off |                  N/A |
| N/A   40C    P8    N/A /  N/A |      0MiB /  2002MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

$ ls -l /etc/X11/

total 84
-rwxr-xr-x 1 root root   709 Jan 20  2017 Xreset
drwxr-xr-x 2 root root  4096 Jan 10 21:10 Xreset.d
drwxr-xr-x 2 root root  4096 Jan 10 21:10 Xresources
-rwxr-xr-x 1 root root  3730 Dec 14  2018 Xsession
drwxr-xr-x 2 root root  4096 Feb  2 00:32 Xsession.d
-rw-r--r-- 1 root root   265 Jan 20  2017 Xsession.options
-rw-r--r-- 1 root root    13 Dec  5  2016 XvMCConfig
-rw-r--r-- 1 root root   630 Apr 16  2019 Xwrapper.config
drwxr-xr-x 2 root root  4096 Nov 30 01:49 app-defaults
drwxr-xr-x 2 root root  4096 Oct  8 21:48 cursors
-rw-r--r-- 1 root root    15 Oct  8 21:41 default-display-manager
drwxr-xr-x 4 root root  4096 Apr 16  2019 fonts
-rw-r--r-- 1 root root 17394 Jan 20  2017 rgb.txt
drwxr-xr-x 2 root root  4096 Oct  8 21:48 xinit
drwxr-xr-x 2 root root  4096 Oct 25  2018 xkb
-rw-r--r-- 1 root root   448 Feb  3 02:49 xorg.conf
drwxr-xr-x 2 root root  4096 Apr 16  2019 xsm

$ cat /etc/X11/xorg.conf

Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    ModulePath "/usr/lib/nvidia/xorg"
EndSection

Section "ServerLayout"
	Identifier "layout"
	Screen 0 "iGPU"
	Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
	Identifier "iGPU"
	Driver "modesetting"
	BusID "PCI:0:2:0"
EndSection

Section "Screen"
	Identifier "iGPU"
	Device "iGPU"
EndSection

Section "Device"
	Identifier "nvidia"
	Driver "nvidia"
	BusID "PCI:1:0:0"
EndSection

$ ls -l /usr/share/X11/xorg.conf.d/

total 28
-rw-r--r-- 1 root root  190 Feb  3 02:07 10-nvidia.conf
-rw-r--r-- 1 root root 1350 Jan 14 16:43 10-quirks.conf
-rw-r--r-- 1 root root  149 Feb  3 02:19 11-nvidia-offload.conf
-rw-r--r-- 1 root root 1429 Aug 13 14:13 40-libinput.conf
-rw-r--r-- 1 root root  590 Aug 24  2018 51-synaptics-quirks.conf
-rw-r--r-- 1 root root 1751 Aug 24  2018 70-synaptics.conf
-rw-r--r-- 1 root root 3458 Nov 21 19:13 70-wacom.conf

$ cat /usr/share/X11/xorg.conf.d/10-nvidia.conf

Section "OutputClass"
	Identifier "nvidia"
	MatchDriver "nvidia-drm"
	Driver "nvidia"
	Option "AllowEmptyInitialConfiguration"
	ModulePath "/usr/lib/x86_64-linux-gnu/nvidia/xorg"
EndSection

$ cat /usr/share/X11/xorg.conf.d/11-nvidia-offload.conf

# DO NOT EDIT. AUTOMATICALLY GENERATED BY gpu-manager

Section "ServerLayout"
    Identifier "layout"
    Option "AllowNVIDIAGPUScreens"
EndSection

The files .local/share/xorg/Xorg.1.log and nvidia-bug-report.log.gz are attached:
nvidia-bug-report.log.gz (283 KB)
Xorg.1.log (50.4 KB)

Hi Hant0508, what is the output of the following command on your machine, please?

$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 0 name:modesetting
Provider 1: id: 0x1ee cap: 0x0 crtcs: 0 outputs: 0 associated providers: 0 name:NVIDIA-G0
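For reference, the cap field in that output is a bitmask of the RandR provider capabilities, so cap: 0x0 on the NVIDIA-G0 provider means it currently advertises no source/sink capability at all. A small sketch to decode it (the bit values come from the RandR protocol; decode_caps is just an illustrative name):

```shell
#!/bin/sh
# Decode an xrandr provider capability bitmask.
# Bit values per the RandR protocol: 1 = Source Output, 2 = Sink Output,
# 4 = Source Offload, 8 = Sink Offload.
decode_caps() {
    caps=$(( $1 ))
    out=""
    [ $(( caps & 1 )) -ne 0 ] && out="$out Source-Output"
    [ $(( caps & 2 )) -ne 0 ] && out="$out Sink-Output"
    [ $(( caps & 4 )) -ne 0 ] && out="$out Source-Offload"
    [ $(( caps & 8 )) -ne 0 ] && out="$out Sink-Offload"
    printf '%s\n' "${out# }"
}

decode_caps 0xf   # all four capabilities, as on the modesetting provider
decode_caps 0x0   # empty: the provider advertises no capability
```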

It seems to me that the first failure was to be expected, given that your NVIDIA card isn't properly detected by the X server (the nvidia-smi output should at least mention Xorg). Do you really need all of those *.conf files in /etc/X11/xorg.conf.d, and your /etc/X11/xorg.conf? Usually the X server is pretty good at guessing the proper values. In your case, Xorg.0.log shows that one of the NVIDIA modules was loaded and then unloaded. Could you try moving every one of your conf files out of /etc/X11/xorg.conf.d (and xorg.conf as well) and see what happens? Then, if you get the following message in Xorg.0.log:

[    93.021] (**) NVIDIA(G0): Enabling 2D acceleration
[    93.021] (EE) NVIDIA(G0): GPU screens are not yet supported by the NVIDIA driver

you could add the file /etc/X11/xorg.conf.d/10-nvidia-gpu-screens.conf. It is the only file I had to create in order to turn the NVIDIA card into a second OpenGL renderer; I did not need any xorg.conf either. PRIME render offloading should then work just as you expected. By the way, did you install the xserver-xorg-video-nvidia package? I do not see any mention of libglxserver_nvidia.so in your Xorg.0.log file. I'll provide mine as a reference; just be advised that I'm running Debian, so a few paths or library names might differ.
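For reference, that file only needs to enable GPU screens; mine contains something along these lines (the layout identifier is my own, yours may differ):

```
Section "ServerLayout"
    Identifier "layout"
    Option "AllowNVIDIAGPUScreens"
EndSection
```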

$ tree /etc/X11/xorg.conf*
/etc/X11/xorg.conf.d
└── 10-nvidia-gpu-screens.conf

0 directories, 1 file

About the nvidia-persistenced daemon, I'm not certain you'll benefit from it with only one NVIDIA GPU.

Attached: /etc/X11/xorg.conf.d/10-nvidia-gpu-screens.conf and /var/log/Xorg.0.log
archive.tar.gz (7.93 KB)

Anyway, you'll find everything you need in the NVIDIA documentation; Chapter 34, PRIME Render Offload, is especially relevant: https://download.nvidia.com/XFree86/Linux-x86_64/440.59/README/primerenderoffload.html
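Typing the two environment variables every time gets old, so a tiny wrapper function helps; this is just a sketch (the name nvrun is my own, not a standard tool), using the variables from the chapter linked above:

```shell
#!/bin/sh
# Run any command on the NVIDIA GPU via PRIME render offload.
# nvrun is an illustrative name; only the two __* variables matter.
nvrun() {
    __NV_PRIME_RENDER_OFFLOAD=1 \
    __GLX_VENDOR_LIBRARY_NAME=nvidia \
    "$@"
}

# Example: nvrun glxinfo | grep "OpenGL vendor"
# Even without an NVIDIA stack, we can check the variables are exported:
nvrun sh -c 'echo "$__NV_PRIME_RENDER_OFFLOAD $__GLX_VENDOR_LIBRARY_NAME"'
```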

You can make sure it works as expected using nvidia-smi, for instance:

$ nvidia-smi stats -c 1 -d gpuUtil,temp
0, temp    , 1583095494175489, 42
0, gpuUtil , 1583095493634034, 22

And of course with glxgears:

$ glxgears 
29162 frames in 5.0 seconds = 5832.225 FPS
29454 frames in 5.0 seconds = 5890.655 FPS
29366 frames in 5.0 seconds = 5873.187 FPS

For the glxgears test you might need to create the following file to disable vsync, otherwise the frame rate stays capped at the display's refresh rate:

$ cat ~/.drirc
<driconf>
	<device screen="0" driver="dri2">
		<application name="Default">
			<option name="vblank_mode" value="0"/>
		</application>
	</device>
</driconf>

By the way, the PCI bus address you used in your xorg.conf for the NVIDIA card is not correct: your lspci output (and the Bus-Id 02:00.0 in your nvidia-smi output) gives PCI:2:0:0, but you've written PCI:1:0:0. That is the kind of mistake that is avoided when you let Xorg find the correct values by itself; the modesetting driver is rather trustworthy.
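The mistake is easy to make because lspci prints the address in hexadecimal while Xorg's BusID expects decimal components. A small conversion sketch (pci_to_busid is an illustrative name, not an existing tool):

```shell
#!/bin/sh
# Convert an lspci-style slot (hex "bus:dev.fn", e.g. "02:00.0")
# to the decimal "PCI:bus:dev:fn" form expected in xorg.conf.
pci_to_busid() {
    slot=$1
    bus=$(printf '%d' "0x${slot%%:*}")    # hex bus -> decimal
    rest=${slot#*:}
    dev=$(printf '%d' "0x${rest%%.*}")    # hex device -> decimal
    fn=${slot##*.}                        # function is a single digit
    printf 'PCI:%d:%d:%d\n' "$bus" "$dev" "$fn"
}

pci_to_busid "02:00.0"   # -> PCI:2:0:0
pci_to_busid "3b:00.0"   # -> PCI:59:0:0 (hex 3b is decimal 59)
```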