30-bit depth with Linux driver does not produce 30-bit output on monitor

Hello devs,

I am not able to get a GTX 750 Ti + DisplayPort (internal) + Dell U2713H to run in Deep Colour, aka 30-bit depth RGB.

The LCD should have a 10-bit input, a 14-bit internal LUT and an 8-bit + FRC panel. Based on reviews, FRC makes the LCD visually appear almost like a native 10-bit panel (just as 6-bit + FRC panels are pretty much comparable to native 8-bit panels).

The KDE display manager shows weird colours (expected); after logging in to KDE, the majority of apps display correctly, as described on the Oyranos blog [1]. According to that article, the graphics editor Krita should be capable of actually displaying 10-bit output (with the OpenGL backend active).

[1] Image Editing with 30-bit Monitors | Oyranos

However, the 1024-step greyscale, red, green and blue ramps are displayed with only 8-bit gradation - exactly the same as with Depth 24 instead of Depth 30 in xorg.conf, or as in any other image viewer that supports only 8-bit output (GIMP, Gwenview).
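
For completeness, the Depth 30 setting referred to here is the usual xorg.conf Screen-section entry; a minimal sketch (the Identifier name is just an example - see the driver's depth30.html README linked further down for the authoritative description) looks like this:

Section "Screen"
    # "Screen0" is only an example identifier - keep whatever your config already uses
    Identifier   "Screen0"
    DefaultDepth 30
    SubSection "Display"
        Depth 30
    EndSubSection
EndSection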

I do not have experience with nVidia Quadro cards or with native 10-bit LCD panels, unfortunately.

The questions are:

- are GeForce cards really expected to support 30-bit output in X?

- what is the test setup the NVIDIA devs use to verify that it works and delivers smooth gradation on Linux?
nvidia-bug-report.log.gz (148 KB)

ftp://download.nvidia.com/XFree86/Linux-x86_64/340.32/README/depth30.html

ftp://download.nvidia.com/XFree86/Linux-x86_64/340.32/README/README.txt

openSUSE has a couple of patches for xorg-server (in the xorg-x11-server package) which might be applicable here:

U_fb-Fix-invalid-bpp-for-24bit-depth-window.patch
u_render-Cast-color-masks-to-unsigned-long-before-shifting-them.patch

Does the problem still occur if you use something other than KDE, or if you disable its compositor? With a problematic window open, please run xwininfo, click on the window in question, and verify that it prints “Depth: 30” rather than “Depth: 24” or “Depth: 32”. Finally, please check the nvidia-settings display device page and make sure that dithering is disabled.

@aplattner: the default compositor in KDE is KWin, and it does not support 30-bit display with OpenGL effects enabled (due to the 8-bit alpha channel). I actually tested the setup with Compiz.

Also, nvidia-settings confirms that dithering is disabled for the display.

Running xwininfo confirms that some windows have Depth: 30 (Krita, nvidia-settings, other KDE apps), while some have Depth: 32 (Firefox).

What caught my attention is that other items differ as well, namely Visual and Colormap:

Krita                       Firefox
...
  Depth: 30                    Depth: 32
  Visual: 0x21                 Visual: 0x23
  Visual Class: TrueColor      Visual Class: TrueColor
...
  Colormap: 0x20 (installed)   Colormap: 0x4000003 (not installed)
...

The output of glxinfo -v | grep -A7 -e "(ID: 21)|(ID: 23)" provides some more info; however, I am not skilled enough to say whether it is okay or not (e.g. the depthSize value):

Visual ID: 21  depth=30  class=TrueColor, type=(none)
    bufferSize=30 level=0 renderType=rgba doubleBuffer=1 stereo=0
    rgba: redSize=10 greenSize=10 blueSize=10 alphaSize=0 float=N sRGB=Y
    auxBuffers=4 depthSize=24 stencilSize=0
    accum: redSize=16 greenSize=16 blueSize=16 alphaSize=16
    multiSample=0  multiSampleBuffers=0
    visualCaveat=None
    Opaque.
--
Visual ID: 23  depth=32  class=TrueColor, type=(none)
    bufferSize=30 level=0 renderType=rgba doubleBuffer=1 stereo=0
    rgba: redSize=10 greenSize=10 blueSize=10 alphaSize=0 float=N sRGB=Y
    auxBuffers=4 depthSize=24 stencilSize=0
    accum: redSize=16 greenSize=16 blueSize=16 alphaSize=16
    multiSample=0  multiSampleBuffers=0
    visualCaveat=None
    Opaque.
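
For what it's worth, an application ends up on one of these visuals by asking GLX for 10-bit channel sizes; the following is a minimal sketch of such a request (not one of the programs discussed in this thread), which simply prints what the driver hands back:

/* Sketch: request a 10-bit-per-channel GLX visual and report what we got.
 * Build with: cc glx10.c -o glx10 -lGL -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int attribs[] = {
        GLX_RGBA,              /* RGBA rendering                      */
        GLX_RED_SIZE,   10,    /* at least 10 bits per colour channel */
        GLX_GREEN_SIZE, 10,
        GLX_BLUE_SIZE,  10,
        GLX_DOUBLEBUFFER,
        None
    };

    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        printf("no 10-bit GLX visual available\n");
    } else {
        printf("got visual 0x%lx, depth %d, bits_per_rgb %d\n",
               vi->visualid, vi->depth, vi->bits_per_rgb);
        XFree(vi);
    }

    XCloseDisplay(dpy);
    return 0;
}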

@brebs: I guess that as long as the window compositor and the app itself use OpenGL, the X server patches do not have any impact, do they? Anyway, I may try to patch my xorg-server later this week and re-test.

EDIT: I got confirmation that the output of glxinfo and xwininfo is the same on a working setup with a Quadro and a native 10-bit monitor.

I am experiencing the same issue. With a bit of help from #nvidia, here’s what I’ve managed to figure out so far:

For reference,
glxinfo -v: http://sprunge.us/YfNL
xdpyinfo: http://sprunge.us/ZLWb

According to the former, visuals 0x21 etc. should all be 10 bit, but according to the latter, visual 0x21 has “8 significant bits in color specification”, whereas 0x23 has 10. Not sure where this discrepancy is coming from, but I’ve made sure to try both visuals for all of my testing.
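
As far as I can tell, the "significant bits in color specification" line in xdpyinfo comes from the visual's bits_per_rgb field, so a quick way to cross-check the discrepancy is to enumerate the visuals from Xlib directly. A minimal sketch (not one of my actual test programs):

/* Sketch: list every visual on the default screen with its depth and
 * bits_per_rgb.  Build with: cc visuals.c -o visuals -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    XVisualInfo tmpl;
    tmpl.screen = DefaultScreen(dpy);

    int n = 0;
    XVisualInfo *v = XGetVisualInfo(dpy, VisualScreenMask, &tmpl, &n);
    for (int i = 0; i < n; i++)
        printf("visual 0x%02lx  depth=%d  bits_per_rgb=%d\n",
               v[i].visualid, v[i].depth, v[i].bits_per_rgb);

    XFree(v);
    XCloseDisplay(dpy);
    return 0;
}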

Hardware used: Asus GTX 970 Strix, LG 31MU97-b (connected via DisplayPort), Spyder 3 Express colorimeter.
Drivers used: x11-drivers/nvidia-drivers-346.35
Other info: I'm running xmonad directly inside X11, with no compositor in between.

First off, I'm using ArgyllCMS to independently verify the bit depth of the display. ArgyllCMS works by drawing a test patch (via Xrender, as far as I can tell), adjusting its colors (not sure whether numerically or via the video LUT, but it seems to be the former), measuring them with the colorimeter and checking whether it sees a difference. I've gotten conflicting results from it: sometimes 8 bits, sometimes 10 bits. I'm not sure whether to attribute that to measurement error or not.
xwininfo output on its render window: http://sprunge.us/FICc

Second, I'm using mpv (a video player) to test. I've modified mpv to render a grayscale ramp using OpenGL at rgb32f precision, and according to GLX it's outputting with visual 0x21; I've also modified it to pick 0x23 instead. Result: I get visible 8-bit banding.
xwininfo output on its render window: http://sprunge.us/UIYc

Third, I’m using this program which I adapted from a GLUT example: http://sprunge.us/PQAX
Result: I get visible 8 bit banding.
xwininfo output: http://sprunge.us/bfQh
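
For anyone who wants to reproduce this without the paste, a minimal sketch of the same kind of GLUT ramp test (not the exact program from the link; it assumes freeglut and uses glutInitDisplayString to ask for at least 10 bits per channel) looks roughly like this:

/* Sketch: draw a horizontal grey ramp with 1024 steps.  On a true 10-bit
 * pipeline the ramp should look smooth; at 8 bits you see ~256 bands.
 * Build with: cc ramp.c -o ramp -lglut -lGL */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= 1024; i++) {
        float v = i / 1024.0f;
        glColor3f(v, v, v);                  /* grey level for this column */
        glVertex2f(v * 2.0f - 1.0f, -1.0f);
        glVertex2f(v * 2.0f - 1.0f,  1.0f);
    }
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* ask for a double-buffered visual with >= 10 bits per colour channel */
    glutInitDisplayString("red>=10 green>=10 blue>=10 double");
    glutInitWindowSize(1024, 256);
    glutCreateWindow("10-bit ramp test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}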

Fourth, I'm using this Xrender-based program, which I adapted from an xrender example: http://sprunge.us/bfjY
The first version just used DefaultVisual, which gave it 0x21, but I have also extended it to use XMatchVisualInfo. The output I get from that program confirms that pVisual->bits_per_rgb = 10.
Result: I get visible 8 bit banding.
xwininfo output: http://sprunge.us/XiIb
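
The visual-selection part looks roughly like the sketch below (again, not the exact program from the link). A window using a non-default visual also needs its own colormap, which is why the Colormap lines differ between the xwininfo outputs earlier in the thread:

/* Sketch: pick the depth-30 TrueColor visual, create a window with it and a
 * matching colormap, then hand it over to Xrender/OpenGL for the actual ramp.
 * Build with: cc win30.c -o win30 -lX11 */
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    int scr = DefaultScreen(dpy);

    XVisualInfo vinfo;
    if (!XMatchVisualInfo(dpy, scr, 30, TrueColor, &vinfo)) {
        fprintf(stderr, "no depth-30 TrueColor visual\n");
        return 1;
    }
    printf("bits_per_rgb = %d\n", vinfo.bits_per_rgb);  /* expect 10 */

    XSetWindowAttributes attrs;
    attrs.colormap = XCreateColormap(dpy, RootWindow(dpy, scr),
                                     vinfo.visual, AllocNone);
    attrs.border_pixel = 0;
    attrs.background_pixel = 0;

    Window win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 1024, 256, 0,
                               vinfo.depth, InputOutput, vinfo.visual,
                               CWColormap | CWBorderPixel | CWBackPixel,
                               &attrs);
    XMapWindow(dpy, win);
    XFlush(dpy);

    /* ... draw the ramp into 'win' here ... */
    pause();
    return 0;
}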

Fifth, I'm displaying a 16-bit grayscale ramp (stored as a PNG) directly using ImageMagick's display program.
Result: I get visible 8 bit banding.
xwininfo output: http://sprunge.us/RUhK

I think I've run out of things to try. Has anybody ever managed to get 10-bit output working on GeForce cards, or is it a marketing lie? There is a similar enquiry on the Digital Photography Review PC Talk forum ("Anybody has hands-on experience with 30bit LCD + nVidia GeForce on Linux?"), but it does not seem like anybody has answered.

Can we please get an official response on this topic? Is this a bug that will be fixed any time soon, or are we being intentionally misled?

Update: Curious to see whether the problem was on the OpenGL end or on the driver/output end, I took a raw screen capture of the mpv test window using ImageMagick. I analyzed the resulting 16-bit values from the PGM file, and they were:

0 0 64 64 128 128 192 192 256 256 ...

The step (64) is 1/1024 of 65536, so the ramp corresponds exactly to the 10-bit values

0 0 1 1 2 2 3 3 4 4 ...

In other words, OpenGL is indeed rendering a true 10-bit ramp into the X11 window, but either X.org or the GeForce drivers are not correctly delivering it to my display.
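
A small sketch of this kind of check (it assumes the capture is a binary 16-bit PGM, i.e. P5 with maxval 65535, and that the ramp runs horizontally) could look like this:

/* Sketch: read the first row of a 16-bit PGM and report the smallest step
 * between neighbouring pixels.  64 = 65536/1024 means effectively 10-bit
 * data; 256 would mean 8-bit.  Build with: cc pgmstep.c -o pgmstep */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s ramp.pgm\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    int w, h, maxval;
    if (fscanf(f, "P5 %d %d %d", &w, &h, &maxval) != 3 || maxval != 65535) {
        fprintf(stderr, "expected a binary 16-bit PGM (P5, maxval 65535)\n");
        return 1;
    }
    fgetc(f);                      /* the single whitespace byte after the header */

    unsigned prev = 0, minstep = 65536;
    for (int i = 0; i < w; i++) {  /* PGM stores 16-bit samples big-endian */
        unsigned v = (unsigned)fgetc(f) << 8;
        v |= (unsigned)fgetc(f);
        if (i > 0 && v != prev) {
            unsigned step = v > prev ? v - prev : prev - v;
            if (step < minstep)
                minstep = step;
        }
        prev = v;
    }
    fclose(f);

    printf("smallest non-zero step: %u\n", minstep);
    return 0;
}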

Here’s what bothers me about the whole situation:

ftp://download.nvidia.com/XFree86/Linux-x86_64/346.35/README/depth30.html

This also seems pretty far from the truth. In reality, even if I enable dithering to 8 bits, I get no difference in the output - it just truncates the 10-bit values (which, as shown above, are true 10-bit values) to 8 bits. (Furthermore, if I enable 6-bit dithering, it dithers the truncated 8-bit values to 6 bits of precision.)

This happens both on a display connected via DP and a display connected via DVI.

Another confusing thing is this article on the Nvidia support database: Error | NVIDIA

It says:

This answer is quite confusing in general. It mentions "10-bit per color out to a full screen Direct X surface", but I thought GeForce cards did not support 10-bit color on Windows at all? Also, how come this statement is not qualified by factors like "if connected via DisplayPort" or similar, while other sentences are? What does "10-bit per color out" mean in this context?

It also suggests that 10-bit OpenGL output requires Quadro GPUs, which, as far as I'm aware, is still the status quo on Windows. From the earlier documentation it would seem that the situation is different on Linux, but in practice this does not appear to be the case.

Another thing I am wondering: could this depend on the specific board manufacturer? I have a non-reference design from Asus; maybe that is the reason 10-bit output does not actually work.

Since I don't have the luxury of easily replacing it with another manufacturer's card, the best I can do is ask here and hope for an answer. If I do not get one within the next week or so, I will unfortunately be forced to return the card, as the promised advantage (10-bit output support) is simply not present. I may try my luck with another version of the GTX 970, but I want to avoid the hassle if at all possible.

Update:

I talked to a friend of mine, and it seems that the situation is different on his Gainward 9800 GT. He can configure Depth 30 in xorg.conf, and after doing so, the 10-bit gradients (e.g. from the above programs) actually get dithered down to 8-bit depth when dithering is enabled, which is not a result I can replicate on my Asus GTX 970 Strix.

Unfortunately, he does not have any cards with DisplayPort outputs, nor a 10-bit capable monitor, so he can’t confirm whether or not true 10-bit output works.

Based on this result, I'm starting to genuinely suspect that the issue may be due to Asus' non-reference design. I really, really want to know whether the situation would improve with a different card design, and if so, which ones have working 10-bit support and which ones do not.

Problem persists on a Gainward GTX 960, so it’s probably not related to Asus in particular, nor is it related to the GTX 970 in particular.

Most likely this is either a fundamental problem with the GTX 9xx series, or an issue with the drivers, although I’m not sure which other series are affected.

Problem also persists on the 349.16 drivers.

I've tried using the OpenGL "exclusive mode" to bypass as much of the normal compositing logic as possible, and it didn't seem to make a difference. However, I ran this test with a 10-bit and an 8-bit monitor both attached (well, two 10-bit monitors, but one was over HDMI), and the "exclusive fullscreen" stretched itself over the entire X screen (i.e. over both monitors), so maybe that negatively influenced it.

(I also noticed that the 10-bit display turned off for a while when starting the test, whereas the 8-bit display did not. So maybe it was actually setting both monitors to 8 bit? Or maybe both were 8 bit before and it was setting the 10-bit monitor to 10-bit?)

Either way, I don't think this made a significant difference, since it's still drawing to the X screen itself, which I can already confirm receives correct 10-bit content even from a basic OpenGL window.

Specifically, I applied this patch: http://sprunge.us/ZQEH

Also, I hear that 10-bit output works in exclusive mode on Windows, which is part of the reason I tested it; a (different) friend of mine has confirmed that it works on a GTX 970 under Windows (using madVR in DirectX exclusive mode - OpenGL was not tested), but he also gets banding under Linux.

I continue to believe this is purely a driver/software issue, particularly given this extra bit of evidence of it working under Windows.

(Driver version I tested is still 352.09)

I tested the 352.30 drivers, problem remains unfixed.

Can confirm that the issue also happens on a Zotac GTX 750. (No 10-bit output, no working dithering)

Dear all,
A month ago I tried almost everything possible to get real 10-bit output. I never managed to get real 10-bit output over DisplayPort at 4K resolution and 60 Hz - always 8-bit at most. I also found that Linux and most of the apps are not ready for this step at the moment, and dithering did not always do what it was intended to do. I guess we will have to wait years to come...

Stay with 24-bit or change OS.

I think I lost about 150 hours trying.

My last attempt was a GTX 750 Ti with NVIDIA 352.3 on kernel 4.0.5.

Did you manage to get it working with any other resolution setting?

I found that almost all of the programs I care about are either 10-bit compatible or can easily be made 10-bit compatible. (This is all free software, after all, so we can fix breakage)

The only thing I can't fix is NVIDIA's driver, which is where the problem seems to be: 10-bit output works fine via DirectX in fullscreen mode, the X screen demonstrably contains 10-bit content, and Quadro cards seem to work fine even on Linux (so it can't be a bug on the non-NVIDIA side of things). It must simply be a software issue in the driver; almost everything else can be ruled out.

That seems to be the moral that nvidia is giving us Linux users. I personally think I will wait for AMDGPU/mesa to become stable, and then look into implementing 10-bit support for those - so that Linux users can have at least one choice of 10-bit capable graphics hardware.

I have working 10bpc (no banding) on a GTX 580 via dithering to an 8bpc monitor in a dual-GPU setup. I also have real 30-bit on a Quadro 4000 over DP with a 10bpc monitor. I have also tested Xinerama with the Quadro + GTX and 4 monitors, 2 per GPU; it also allows 30-bit on all displays, and dithered 30-bit from the GTX looks the same as native or dithered 30-bit from the Quadro.


Tested with openSUSE Factory 13.2, KDE4 using the Qt native hack, Krita with the OpenGL renderer, and a Photoshop 16bpc gradient picture as the test sample.

Hardware:

MSI GTX 580 3gb: Eizo FS2331 (8bpc dithering, DVI) / Eizo EV2335W (8bpc dithering, DVI) / Samsung SyncSomeCrap whatever (8bpc dithering, HDMI)

PNY Quadro 4000: Eizo FS2331 (8bpc dithering, DVI) / Eizo CX240 (10bpc native, DP)


They all look the same (in terms of banding), and there is a noticeable difference between dithering enabled and disabled.

Expected issues:
- messed-up colors in KDE before applying the Qt native hack
- Gwenview should work ONLY in software transitions mode when the Qt hack is active (no matter whether 10bpc or 8bpc)
- digiKam thumbnails should still be messed up despite the Qt native hack
- Krita with the OpenGL renderer should display no banding, whereas with OpenGL disabled banding is visible again
- Compiz should work only in Xrender mode, with tiny glitches, but is in general usable

Maybe NVIDIA removed 10bpc support from GPUs with DP output to promote Quadro xD

[EDIT:] It was meant as a joke, but it actually seems quite possible - I've just talked with a friend who has exactly the same openSUSE release and the same software (basically identical from the software side), but he has a GTX 680 (with DP output, whereas the 580 has only HDMI and DVI) and it does not seem to work on his machine. So it seems that everything from the 6xx series upwards doesn't work, while everything from the 580 downwards does. Of course you won't get "real" 30-bit then, because those cards didn't have DP yet, BUT dithering works so well that I wouldn't really care about it - especially since you can then use a monitor with an 8-bit panel and it'll still work fine.

If we assume that the 9xx and 7xx cards mentioned before are representative, it means that the 5xx, 6xx, 7xx and 9xx series are covered for now.

If anyone has a GPU from another series, please also report whether it works, as it seems to be really unclear which cards support 10bpc and which don't.

Just to cheer up people disappointed by the lack of 10bpc support on DP cards: I actually bought the Quadro just to have 30-bit color on Linux (I was skeptical about GTX 30-bit support, which turned out to be unnecessary since I have the 580 anyway). I looked for a second-hand Quadro 4000 (just a few months ago, so it is a pretty old card) and got it for about $150 - they are really cheap, like most used professional-grade hardware. As it performs better in OpenGL than the GTX 580, I find it a pretty good and in fact quite cheap option, and you can always keep a second GPU for gaming (or for CUDA, as in my case).

I guess it's a really reasonable option, both economically (the PC runs much cooler than when I used the GTX for the X server, and the Quadro consumes less power) and in terms of convenience (if someone wants to use a card for CUDA, they need a second GPU for X anyway, because while the card is running CUDA work, X drops to about 1 fps and is pretty unusable). So yeah - a cheap, used Quadro + GTX is a pretty nice option on Linux for someone looking for both 30-bit color and high-performance computing/gaming.

Nothing really to add, but I've tried to get 30-bit colour working with my 760 connected via DisplayPort to a Dell 2709W monitor, with the same results as haasn. Everything appears to be at 30-bit colour depth (nvidia-settings, xdpyinfo, xwininfo), but banding still occurs.

I’ve confirmed via the DirectX10 sample ‘10BitScanout10’ on Windows that 10-bit scan out does actually work, but on Linux it still looks like it’s being forced to 8-bit somewhere.

I am concerned that xdpyinfo says “significant bits in color specification: 8 bits” for the default visual. That seems to indicate that it’s ignoring 2 bits of information, but even when forcing it to one of the visuals that says it has 10 bits it doesn’t make any difference.

Regards
elFarto

With driver version 364.12-r1 THIS FINALLY WORKS!

All of my tests now succeed, and I can see a noticeable difference when turning dithering on or off.

Follow the instructions at
http://us.download.nvidia.com/XFree86/Linux-x86_64/390.59/README/depth30.html
My graphics card is a GTX 1080 with a Dell UP2516D screen connected via mDP.
Software: kernel 4.16.9-200.fc27.x86_64 and NVIDIA driver 390.59.
It seems to work, but as I use MATE with Compiz as my desktop environment, the window control bar shows incorrect colors.