1070 Ti cards will not enter Performance State P3 - only P0, P1, P2.

I have been running a 1070 Ti for a while now under Arch (Manjaro technically, and others) and cannot get the card past performance level 2 (of levels 0, 1, 2, 3) through any method I have tried.

I have tried varying setups with three different 1070 Ti cards (from two different manufacturers) in three different computers on a few distros.

I have tried using PowerMizer, as well as xorg.conf settings, and sending CLI commands through nvidia-settings. My current system uses /etc/X11/mhwd.d/nvidia.conf instead of xorg.conf, just to eliminate the question of “are you editing the right config file”, because that seems to be a common issue people have.
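For reference, the Device section I have been experimenting with looks roughly like this (the Coolbits and RegistryDwords values are just the ones commonly suggested in other threads, so treat this as a sketch rather than something I can confirm helps):

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    # enable clock control from nvidia-settings
    Option     "Coolbits" "28"
    # commonly suggested PowerMizer hints; semi-documented, may do nothing
    Option     "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefaultAC=0x1"
EndSection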

My 1060 and 950 both hit the appropriate performance levels in the same systems without issue.

Pretty soon these are all going to be shoved into their respective boxes (they already are, but I can still poke a stick at one for the moment) and I won’t be able to make changes to configs anymore, so I am hoping to get this figured out in the next few days if life allows.

Driver: 384.98
Kernel: 4.9.54-1-lts (64-bit) (also tried with 4.14, etc.)

My guess? I suspect it’s a driver issue. Perhaps I need to be running 387 for the 1070 Ti to be properly supported? I thought I tried that already, but if necessary I can try that again.
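For what it’s worth, this is how I am double-checking which driver is actually loaded on each box (nothing fancy, just confirming I am not fooling myself):

# version of the loaded kernel module
cat /proc/driver/nvidia/version
# version reported by the userspace tools
nvidia-smi --query-gpu=driver_version --format=csv,noheader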

Notes when viewing the nvidia-bug-report.log:

A heads up: there is almost certainly some "funk" in there from trying to get them to run headless (unsuccessfully). Just know that’s not causing the issue - fresh install, no changes to anything, same issue.

Ignore the *** /etc/X11/xorg.conf stuff... it’s not used in this distro, as mentioned above. I am including the one that is in use at this moment.

The report is from a box running a 1st-gen i3, because it’s the one I am testing on. It is limited to PCIe 2, but the same issue occurs on a brand-new Ryzen build running in an x16 PCIe 3 slot.

SEO: Performance State P2

I am putting the nvidia-bug-report.log and nvidia.conf (aka xorg.conf) in separate comments to make it a bit easier to read through.

/// removed because it’s not relevant to the problem

see answer in 4th post.

/// removed because it’s not relevant to the problem

see answer in 4th post.

Just an update:

I finally thought of some decent search terms:
“linux driver 1070 ti”

It brought me to:

“NVIDIA 387.22 Linux Driver Released With GTX 1070 Ti Support”

So, yeah, that is probably the issue. I’d swear I tried it already, but I will try that again and report back.

https://www.geforce.com/drivers, however, goes nowhere at the moment (tried 3 browsers) - just a blank white page. So… I’ll have to wait, or get it from the AUR.

Your description is a bit chaotic. Some hints:
Performance level: displayed by nvidia-settings, ranges from 0 (lowest) to 2-5 (highest) depending on the GPU.
Performance state: displayed by nvidia-smi, ranges from P15 (near sleep) to P0 (highest). (Commands to check both are below.)
Drivers can be obtained here:
https://http.download.nvidia.com/XFree86/Linux-x86_64/
You don’t say whether you’re running only X or also CUDA workloads on the 1070 Ti. In the case of CUDA, it is by design limited to reaching only P2 (performance state!) on Pascal consumer GPUs.
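A quick way to see both at the same time while your CUDA job is running (GPU index 0 assumed, adjust as needed):

# performance state plus current clocks, refreshed every second
nvidia-smi --query-gpu=pstate,clocks.sm,clocks.mem,utilization.gpu --format=csv -l 1
# performance level as nvidia-settings sees it
nvidia-settings -q [gpu:0]/GPUCurrentPerfLevel -q [gpu:0]/GPUPerfModes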

nvidia-smi shows P2 at its greatest; nvidia-settings shows levels 0, 1, 2, 3 (four levels) and at its greatest only ever hits 2. I appreciate knowing that they are separate things - they just happened to line up in this case.

As for Pascal being a factor, my 1060 doesn’t have this issue.

Original Post: “My 1060 and 950 both hit the appropriate performance levels in the same systems without issue.”

All of that said, I did find a copy of the 387.34 driver (off the top of my head) that I had already tried, on a disk, and I have the same issue with it as with 384. However, it will hit its highest power state on initial load (I hadn’t noticed that before - maybe it’s a change, maybe it’s not), so perhaps it has to do with the workload type on just the 1070 Ti, since I am running CUDA loads (while running X) but the 1060 still hits its highest power level regardless.
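In case it helps anyone following along, this is roughly how I am watching the cards side by side (the index numbers are just whatever nvidia-smi assigns in that box):

# compare performance state and utilization across all GPUs, refreshed every 2 seconds
nvidia-smi --query-gpu=index,name,pstate,utilization.gpu --format=csv -l 2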

I still suspect this is a bug, not a feature, by how it presents itself. That said, perhaps it is seeing a CUDA workload and limiting itself. I’ll poke it with a pointy stick later and see if I can convince myself it’s working as designed. Bug or feature, I want to get it figured out, but I suspect it is not hurting anything. I wouldn’t mind getting the full speeds out of the framebuffer, but if that is the only way it is limiting itself, then I think I can live with it if it is by design.

I think that since this is an artificial limit, it’s not always working as expected. Another user reported that while running only X/OpenGL loads, his card thought he was running CUDA and the limiter kicked in.
Run a plain GL benchmark like one of the Unigine ones on it and see if it hits the higher levels (a quick sketch is below).
There are a lot of threads here about overclocking the 1070 which discuss this issue.
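Something along these lines is enough for a first check (glxgears is a weak load and a Unigine run is the better test, but either keeps the card on the pure GL path with no CUDA involved):

# GL-only load, vsync disabled so it actually works the GPU
__GL_SYNC_TO_VBLANK=0 glxgears &
# watch whether the performance state climbs past P2 while it runs
nvidia-smi --query-gpu=pstate,clocks.sm --format=csv -l 1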

Good idea. Will do.