NVIDIA 364.12 release: Vulkan, GLVND, DRM KMS, and EGLStreams

NVIDIA’s 364.12 GPU driver release series brings together a lot of technologies that users and developers may be interested in:

First, 364.12 is the first mainline NVIDIA GPU driver release to include Vulkan support. We’re fully committed to supporting all of OpenGL, GLX, EGL, and Vulkan, but Vulkan is definitely the most forward-looking. The full 1.0 spec, including the Window System Integration (WSI) extensions, is here:

https://www.khronos.org/registry/vulkan/specs/1.0-wsi_extensions/pdf/vkspec.pdf

It is worth noting that Vulkan defines window system bindings itself (see, e.g., VK_KHR_xcb_surface, VK_KHR_xlib_surface, and VK_KHR_wayland_surface) and thus is independent of GLX or EGL. The Vulkan WSI extensions VK_KHR_display and VK_KHR_display_swapchain define how to present in the absence of a window system, which is interesting in the context of the following sections.

In 364.12, NVIDIA’s Vulkan driver supports VK_KHR_xcb_surface and VK_KHR_xlib_surface, though not yet VK_KHR_wayland_surface, VK_KHR_display, or VK_KHR_display_swapchain; those are still in development.
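
For reference, here is a minimal sketch of what creating a presentable surface through VK_KHR_xcb_surface looks like (just an illustration, not code from this release; error handling is omitted, and it assumes a VkInstance created with the VK_KHR_surface and VK_KHR_xcb_surface extensions enabled):

    #define VK_USE_PLATFORM_XCB_KHR
    #include <vulkan/vulkan.h>

    /* Sketch: create a VkSurfaceKHR for an existing xcb window. The instance
     * must have been created with VK_KHR_surface and VK_KHR_xcb_surface. */
    VkSurfaceKHR create_xcb_surface(VkInstance instance,
                                    xcb_connection_t *connection,
                                    xcb_window_t window)
    {
        VkXcbSurfaceCreateInfoKHR info = {
            .sType      = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR,
            .connection = connection,
            .window     = window,
        };
        VkSurfaceKHR surface = VK_NULL_HANDLE;
        if (vkCreateXcbSurfaceKHR(instance, &info, NULL, &surface) != VK_SUCCESS)
            return VK_NULL_HANDLE;
        return surface; /* Present to it with the usual VK_KHR_swapchain machinery. */
    }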

Next, OpenGL Vendor-Neutral Dispatch (GLVND) is important for two major reasons:

(1) It redefines the Linux OpenGL ABI in such a way that multiple OpenGL implementations can cleanly coexist on the file system: over time, this should put an end to the age-old Linux libGL.so collision problems.

(2) It cleanly defines which symbols each library should export, so that EGL can be used with full OpenGL rather than just EGL + OpenGL ES. Using EGL with full OpenGL on Linux isn’t new, but the GLVND division of libOpenGL.so for OpenGL symbols, libGLX.so for GLX symbols, and libEGL.so for EGL symbols is nice. The sample code referenced below links against libEGL.so and libOpenGL.so; a rough sketch of that path follows.
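
To illustrate (this is a minimal sketch, not the actual sample code): the key step for getting full OpenGL through EGL is eglBindAPI(EGL_OPENGL_API), and with GLVND you link against -lEGL and -lOpenGL instead of the monolithic -lGL:

    #include <EGL/egl.h>

    /* Sketch: create a desktop OpenGL (not OpenGL ES) context through EGL. */
    EGLContext create_gl_context(EGLDisplay dpy)
    {
        if (!eglInitialize(dpy, NULL, NULL))
            return EGL_NO_CONTEXT;

        /* The key step: bind full OpenGL rather than the default OpenGL ES. */
        if (!eglBindAPI(EGL_OPENGL_API))
            return EGL_NO_CONTEXT;

        static const EGLint cfg_attribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n;
        if (!eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n) || n < 1)
            return EGL_NO_CONTEXT;

        return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    }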

I gave a GLVND talk at XDC 2013:

http://www.x.org/wiki/Events/XDC2013/XDC2013AndyRitgerVendorNeutralOpenGL/

And a status report at XDC 2014:

http://www.x.org/wiki/Events/XDC2014/XDC2014RitgerGLABI/

The GLVND implementation is maturing. We shipped it experimentally starting in 361.16, and enabled it by default in 364.12 (it can still be disabled at install time, if desired). There has been a lot of interest, feedback, and contributions from Mesa developers and distribution packagers. Thanks! Based on recent feedback, we’re about to make an ABI-breaking change to GLVND, which will hopefully make future ABI compatibility easier to manage. Distros should probably hold off on packaging the upstream GLVND until after that ABI change has settled.

There are a lot more GLVND packaging details here:

https://devtalk.nvidia.com/default/topic/915640/unix-graphics-announcements-and-news/multiple-glx-client-libraries-in-the-nvidia-linux-driver-installer-package/

After the next round of GLVND ABI issues are ironed out, we hope distros will start packaging GLVND. The NVIDIA driver .run installer will install its own copy of GLVND, if it doesn’t detect a distro-provided copy on the filesystem.

GLVND source is here:

https://github.com/NVIDIA/libglvnd

If you want to participate, there is some discussion in the GitHub “issues” link on that page, and other discussions have taken place on the mesa-dev mailing list.

Lastly, in 364.12 we are finally providing DRM KMS support.

Our display programming support is centralized in a kernel module named nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, gsync, etc) initiate from our various user-mode driver components and flow to nvidia-modeset.ko. This has been shipping since 358.09.

New in 364.12, we’ve added a kernel module named nvidia-drm.ko which registers as a DRM driver. It provides GEM and PRIME DRM capabilities, to support graphics display offload on optimus notebooks. It also, on new enough kernels (>= Linux kernel 4.1 with CONFIG_DRM and CONFIG_DRM_KMS_HELPER), provides MODESET and ATOMIC DRM capabilities to support atomic DRM KMS.
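
For a DRM KMS client, opting into the atomic interface that nvidia-drm exposes looks roughly like the following sketch using libdrm (error handling is trimmed, and the /dev/dri/card0 path is an assumption; the card number depends on the system):

    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>

    /* Sketch: open the DRM device and request the atomic KMS interface. */
    int open_atomic_drm_device(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return -1;

        /* Atomic clients are expected to enable universal planes as well. */
        if (drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1) != 0 ||
            drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }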

The DRM KMS support in nvidia-drm.ko is still unproven, and has some interaction issues with SLI, so it is disabled by default. You can enable it with nvidia-drm.ko’s “modeset” kernel module parameter. E.g.,

modprobe -r nvidia-drm ; modprobe nvidia-drm modeset=1

This much should be sufficient for simple DRM KMS clients that use the “dumb buffer” mechanism (DRM_IOCTL_MODE_{CREATE_DUMB,MAP_DUMB,DESTROY_DUMB}) to create surfaces and present them through DRM KMS, such as the xf86-video-modesetting X driver and boot splash screen managers like Plymouth.
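
For illustration, creating and mapping such a dumb buffer looks roughly like this (a minimal sketch; it assumes a DRM fd opened as above and omits the drmModeAddFB()/modeset steps needed to actually scan it out):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xf86drm.h>

    /* Sketch: allocate a dumb buffer and map it for CPU drawing. The returned
     * handle can be turned into a framebuffer with drmModeAddFB(). */
    uint8_t *create_dumb_buffer(int fd, uint32_t width, uint32_t height,
                                uint32_t *out_handle, uint32_t *out_pitch)
    {
        struct drm_mode_create_dumb create = {
            .width = width, .height = height, .bpp = 32,
        };
        if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) != 0)
            return NULL;

        struct drm_mode_map_dumb map = { .handle = create.handle };
        if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) != 0)
            return NULL;

        void *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, map.offset);
        if (pixels == MAP_FAILED)
            return NULL;

        *out_handle = create.handle;
        *out_pitch  = create.pitch;
        return pixels;
    }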

However, more sophisticated DRM KMS clients in the Linux ecosystem, such as most Wayland compositors, currently use gbm to allocate and manage graphics buffers. We do not currently provide a gbm backend driver as part of NVIDIA’s GPU driver package. To ease migration of the existing ecosystem, that is something we’re exploring for a future release.

But, really, we feel that gbm isn’t quite the right API for applications to express their surface presentation requests. At XDC 2014, I made the case for a family of EGLStreams-based EGL extensions to be used instead:

http://www.x.org/wiki/Events/XDC2014/XDC2014RitgerEGLNonMesa/

The concept is that an application creates an EGL object, an EGLOutputLayer, that corresponds to a specific DRM KMS plane. Then, the application creates an EGLStream where the stream’s producer is an EGLSurface and the stream’s consumer is the EGLOutputLayer. Calling eglSwapBuffers() on the EGLSurface presents the content from the EGLSurface to the DRM KMS plane.
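
In terms of actual EGL calls, the flow looks roughly like this (a heavily abbreviated sketch in the spirit of the eglstreams-kms-example linked below; the extension entry points come from eglGetProcAddress, and the EGLDisplay/EGLConfig setup, DRM plane selection, and error handling are all omitted):

    #include <stdint.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Sketch: connect an EGLSurface producer to a DRM KMS plane consumer via
     * an EGLStream. Assumes dpy/cfg were set up for the EGL device platform
     * and that drm_plane_id names a plane on the DRM fd behind that display. */
    EGLSurface surface_for_plane(EGLDisplay dpy, EGLConfig cfg,
                                 uint32_t drm_plane_id, int width, int height)
    {
        PFNEGLGETOUTPUTLAYERSEXTPROC pGetOutputLayers =
            (PFNEGLGETOUTPUTLAYERSEXTPROC)eglGetProcAddress("eglGetOutputLayersEXT");
        PFNEGLCREATESTREAMKHRPROC pCreateStream =
            (PFNEGLCREATESTREAMKHRPROC)eglGetProcAddress("eglCreateStreamKHR");
        PFNEGLSTREAMCONSUMEROUTPUTEXTPROC pStreamConsumerOutput =
            (PFNEGLSTREAMCONSUMEROUTPUTEXTPROC)eglGetProcAddress("eglStreamConsumerOutputEXT");
        PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC pCreateStreamProducerSurface =
            (PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC)eglGetProcAddress("eglCreateStreamProducerSurfaceKHR");

        /* 1. Find the EGLOutputLayer corresponding to the DRM KMS plane. */
        EGLAttrib layer_attribs[] = { EGL_DRM_PLANE_EXT, (EGLAttrib)drm_plane_id, EGL_NONE };
        EGLOutputLayerEXT layer;
        EGLint n;
        pGetOutputLayers(dpy, layer_attribs, &layer, 1, &n);

        /* 2. Create a stream whose consumer is that output layer. */
        EGLStreamKHR stream = pCreateStream(dpy, NULL);
        pStreamConsumerOutput(dpy, stream, layer);

        /* 3. Create an EGLSurface as the stream's producer. */
        EGLint surf_attribs[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
        EGLSurface surf = pCreateStreamProducerSurface(dpy, cfg, stream, surf_attribs);

        /* 4. After eglMakeCurrent(), each eglSwapBuffers(dpy, surf) presents
         *    the rendered frame to the DRM KMS plane through the stream. */
        return surf;
    }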

There are several nice properties of this approach:

  • EGLStreams have explicit producers and consumers.

    ◦ If the driver knows exactly how a buffer will be used, it can select the optimal memory format and auxiliary resources that best suit the needs of the specified producer and consumer.

    ◦ Otherwise, the driver may have to assume the least common denominator of all possible producers and consumers.

  • EGLStreams have explicit transition points between the producer’s production and the consumer’s consumption.

    ◦ When the driver knows exactly when a surface is being handed off from the producer to the consumer, it can resolve any synchronization or coherency requirements.

    ◦ As an example, NVIDIA GPUs use color compression to reduce memory bandwidth usage (this is particularly important on Tegra). The 3D engine understands color compression, but display does not. We need to decompress using the 3D engine before handing the surface off to display, but decompression is expensive, so we only want to do it when necessary. E.g., it would be wasteful and unnecessary to decompress if the consumer were texturing rather than display.

  • EGLStreams encapsulate details that may differ between GPU vendors or GPU generations.

    ◦ E.g., when performing multisampled rendering on NVIDIA GPUs, we can downsample the multisampled rendering using either the 3D engine or the display engine. If presentation from rendering through display is encapsulated within an API, the driver implementation has the flexibility to take advantage of downsample-on-scanout when possible.

This family of EGLStreams-based EGL extensions is implemented in 364.12. Here is an example of how to use them for presentation:

https://github.com/aritger/eglstreams-kms-example

We have also posted Weston patches to the wayland-devel mailing list, to demonstrate how a Wayland compositor could take advantage of this:

https://lists.freedesktop.org/archives/wayland-devel/2016-March/027547.html

I should also acknowledge that the current EGL extensions are not yet a perfect solution: an EGLStream targets a single DRM KMS plane as its consumer, but there is currently no EGL-specified way for all the DRM KMS planes to consume from their respective EGLStreams atomically. This certainly needs to be addressed, but for all the reasons described above, we feel an EGLStream-based approach is the right trajectory.

For what it is worth, the sort of explicitness of EGLStreams is also the direction taken in Vulkan: the VK_KHR_display and VK_KHR_display_swapchain extensions allow applications to create surfaces associated with specific display planes, and queue swaps to them. The graphics driver therefore has knowledge of how the surface is going to be used at surface allocation time, and the graphics driver is in the call chain when the surface is enqueued to be displayed.
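
For comparison, here is a rough sketch of the VK_KHR_display path (which, as noted above, is not yet supported in 364.12). The application creates a surface bound to a specific display plane and then presents to it with an ordinary swapchain; the helper below assumes a display mode and plane index have already been chosen via vkGetPhysicalDeviceDisplayPropertiesKHR and related queries:

    #include <vulkan/vulkan.h>

    /* Sketch: create a VkSurfaceKHR bound to a specific display plane. The
     * surface is then used with VK_KHR_swapchain / VK_KHR_display_swapchain
     * like any other presentable surface. */
    VkSurfaceKHR create_display_plane_surface(VkInstance instance,
                                              VkDisplayModeKHR displayMode,
                                              uint32_t planeIndex,
                                              VkExtent2D imageExtent)
    {
        VkDisplaySurfaceCreateInfoKHR info = {
            .sType           = VK_STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR,
            .displayMode     = displayMode,
            .planeIndex      = planeIndex,
            .planeStackIndex = 0,
            .transform       = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
            .globalAlpha     = 1.0f,
            .alphaMode       = VK_DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR,
            .imageExtent     = imageExtent,
        };
        VkSurfaceKHR surface = VK_NULL_HANDLE;
        if (vkCreateDisplayPlaneSurfaceKHR(instance, &info, NULL, &surface) != VK_SUCCESS)
            return VK_NULL_HANDLE;
        return surface;
    }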

Anyway, for users interested in running a Wayland compositor on top of NVIDIA’s Linux driver:

  • Install 364.12 on a recent distro with DRM Atomic KMS.
  • Enable NVIDIA’s DRM KMS with the “modeset” nvidia-drm.ko kernel module parameter.
  • Build and run Weston with the patches we posted to the wayland-devel mailing list.

I should note that Wayland clients shouldn’t require any modification for gbm vs EGLStreams: the difference between the two approaches should only affect the Wayland compositor implementation and the EGL driver.

Going forward, our hope is that:

  • We can have some discussion with the DRM community about how to better incorporate EGLStreams into atomic KMS.
  • Other EGL implementors will consider implementing EGLStreams and friends.
  • Wayland compositor authors will consider adding a path for EGLStreams-based presentation, using eglstreams-kms-example and/or our Weston patches as example.

Does the DRM kernel module allow me to use a custom EDID by adding “drm_kms_helper.edid_firmware=DFP-0:edid/edid.bin” to the kernel command line after I’ve placed the file in /usr/lib/firmware/edid? I have a 1440p monitor that I can overclock significantly by supplying it a handcrafted EDID. On Xorg I do this through /etc/X11/xorg.conf, but for Weston the only way would be the kernel parameter I mentioned. That’s how the open source drivers do it.

EDIT: When I run the eglstreams-kms-example I only get 60 FPS instead of 96. I first modprobed the drm_kms_helper module with this option and then modprobed nvidia_drm with modeset=1. For some reason, just putting nvidia_drm.modeset=1 on the kernel command line (the module isn’t built in, so of course that doesn’t work) or “options nvidia_drm modeset=1” in a /etc/modprobe.d/*.conf doesn’t work either. I always have to remove the module and manually modprobe it with the option.

Impressive release guys!

Many thanks for all your hard work :)

This is huge! Congratulations!

Great Step !!

Big Thanks to all devs

blackout24: I think in your modprobe.d/*.conf file, you need a hyphen, not an underscore:

“options nvidia-drm modeset=1”

(with that, it works for me).

Good point on the edid_firmware drm_kms_helper kernel module parameter. That is intended to work, but from a quick skim of the nvidia-drm source, I don’t think it is wired up properly right now. I’ll try to get that fixed for a follow-on release.

This is truly awesome, huge props to NVIDIA !

However, reloading the nvidia-drm module with modeset=1 doesn’t seem to do anything. The kernel console stays at a very low resolution even though the journal shows the KMS module loading successfully.

This is normal. Reposting here what I posted at the Phoronix forums:

The console uses fbdev. The presence of KMS does not imply the presence of fbdev. That the open KMS drivers include fbdev compatibility is merely an implementation detail, not something inherent in KMS. It seems NVIDIA does not provide fbdev compatibility, which I actually expected.

When using KMS, the driver name in xorg.conf should not be “nvidia”.

So there won’t be an x86 version? Or is it available somewhere other than the Index of /XFree86/Linux-x86 listing?

Update: oh yeah, it’s there on FTP server :)

PS. All links in the release announcement are broken (us.download.nvidia.com is dead)

Good work! Just in time for the new Ubuntu 16.04 :)

Edit: I updated the original post with a link to the Weston patches:
https://lists.freedesktop.org/archives/wayland-devel/2016-March/027547.html

For the fbdev question: yes, that is something we hope to provide eventually.

“New in 364.12, we’ve added a kernel module named nvidia-drm.ko which registers as a DRM driver. It provides GEM and PRIME DRM capabilities, to support graphics display offload on optimus notebooks.”

Great news, could you please explain how that works exactly?

The doc doesn’t seem to be up to date:
http://us.download.nvidia.com/XFree86/Linux-x86_64/364.12/README/optimus.html

EDIT:
Is it like nouveau with the DRI_PRIME=1 env var?

The existing PRIME support code just moved from nvidia.ko to nvidia-drm.ko. There is no additional PRIME functionality in 364.12 that wasn’t available in earlier drivers.

I can’t figure out how to get the stupid forum software to un-ghost Andy’s replies in this thread, so I’ll just quote them here:


Can you please confirm that “Support for Vulkan” includes Fermi (GTX 400/500 series) class GPUs? Based on my reading of the documentation, I’m ecstatic.

No, Fermi is not supported by the Vulkan driver at this time.

Even with “options nvidia-drm modeset=1” instead of nvidia_drm in my modprobe.d/*.conf, it doesn’t apply the parameter at boot. I even added this file to my initramfs, and it should be picked up by the modconf hook in mkinitcpio on Arch. I noticed that I can use both nvidia-drm and nvidia_drm when I manually load the module. I first thought it might be a typo in the README, because the module is called nvidia_drm. I’m out of ideas at this point.

Hello! I remember that GLVND wasn’t enabled by default in the 361.xx drivers. I think a few applications have a check like “if the GLX vendor is NVIDIA, use this code path, else use the other one”. Now the NVIDIA check succeeds, but the special code path is no longer needed, and the application fails.

It reminds me of Windows NT history. NT 4.0 was missing many components, and games released in the ’90s checked for WinNT. Then in WinNT 5.0 all the components were added, but the old games still wouldn’t start.

http://www.microsoft.com/library/media/1033/technet/images/prodtechnol/winxppro/maintain/lgcya01_big.gif

Do you remember that? The fix there was just changing the reported OS version! Could you provide something similar, like:

__GL_OLD_APP=1 ./limbo.x86

That’s great news. Thank you guys for your hard work!