I’m relatively new to NVIDIA on Linux (I’d used NVIDIA cards under Windows in the past, but my Linux experience for the past several years has all been Intel Mesa) and have been getting used to the various tweaks exposed through nvidia-settings (Coolbits, etc.).
One thing I’ve noticed is that setting system-wide MSAA or FXAA for OpenGL applications doesn’t really work in practice. Because most modern Linux desktops are themselves OpenGL applications, enabling MSAA or FXAA results in heavily antialiased text (as though desktop applications render to textures, which I believe is actually the default behaviour in Compiz). The result is often illegible, and in any case completely unnecessary.
I wouldn’t bother with the system-wide settings at all if not for the fact that many modern Linux titles don’t seem to have their own internally defined MSAA or FXAA options for some reason, as though they’re expecting you to use system-wide settings instead.
There are already application profile rules in recent drivers to disable G-SYNC for most compositors, so it might make sense to disable antialiasing there too. I’ll run the idea by my coworkers.
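For reference, such a rule would go in an application-profile file (e.g. under /etc/nvidia/). This is only a sketch: the procname pattern and the assumption that GLFSAAMode is a supported profile key should be checked against the “Application Profiles” chapter of the driver README.

```json
{
    "rules": [
        {
            "pattern": {
                "feature": "procname",
                "matches": "compiz"
            },
            "profile": "no-fsaa-for-compositor"
        }
    ],
    "profiles": [
        {
            "name": "no-fsaa-for-compositor",
            "settings": [
                { "key": "GLFSAAMode", "value": 0 }
            ]
        }
    ]
}
```

The idea is the same as the existing G-SYNC compositor rules: match the compositor by process name and override the global antialiasing setting for that process only.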
The PowerMizer thing isn’t actually a bug: the driver needs to lock the GPU to a high enough performance state so that there is enough memory bandwidth available to drive all of the displays.
Sorry to derail this thread, but I still think the implementation could be more efficient. In my case I’m running a Titan X, and I’d be very surprised if going from two displays to three necessitated jumping from PowerMizer state 0 to state 3, as is the current behaviour. Surely it would be enough to change the rule for very powerful GPUs so that, instead of automatically locking the highest state, it just blocks the lowest one: for the Titan X, say, only allow it to drop to state 1 rather than state 0 when three displays are connected. That would still be an enormous improvement in power consumption over forcing state 3, while offering significantly more memory bandwidth than state 0. The PowerMizer code generally seems pretty all-or-nothing at the moment: when it drops down from state 3 after exiting a game, it cycles through states 2 and 1 in a few seconds on the way back to 0, but I’ve never observed it actually staying in 1 or 2 for an extended period.
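For anyone who wants to reproduce my observations, the performance levels can be inspected from the command line (the GPU index 0 is an assumption; adjust for your system):

```shell
# Current PowerMizer performance level (0 = lowest)
nvidia-settings -q '[gpu:0]/GPUCurrentPerfLevel' -t

# Table of available performance levels with their clock ranges
nvidia-settings -q '[gpu:0]/GPUPerfModes' -t

# Watch transitions in real time, e.g. while exiting a game
watch -n 1 "nvidia-settings -q '[gpu:0]/GPUCurrentPerfLevel' -t"
```

This is how I saw the driver stepping down through states 2 and 1 within a few seconds rather than lingering in them.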
It’s a little more complicated than that. In order to change the memory clocks, the driver has to pause the memory interface, re-train the links, and then turn it on again. Depending on the configuration, it might not have enough time to do that before the display underflows. So if that situation arises, it just locks it to the highest speed to avoid glitching the displays.
Additionally, please provide screenshots with and without the overrides. If the issue appears on screen but not in the screenshots, please let me know that as well.