Having Trouble Overclocking GTX 1070

Hi there folks.

I am running Ubuntu Linux with 6 GTX 1070s and am having some problems overclocking my GPUs. See the output below:

owocki@owocki-desktop:~$         nvidia-smi -i ${i} -pl 170
Failed to set power management limit for GPU 0000:01:00.0: Insufficient Permissions
Terminating early due to previous errors.
owocki@owocki-desktop:~$
owocki@owocki-desktop:~$ sudo !!
sudo         nvidia-smi -i ${i} -pl 170
Power limit for GPU 0000:01:00.0 was set to 170.00 W from 170.00 W.

Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.

All done.
owocki@owocki-desktop:~$ nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=200

ERROR: The attribute 'GPUGraphicsClockOffset' specified in assignment
       '[gpu:0]/GPUGraphicsClockOffset[3]=200' cannot be assigned (it is a read-only attribute).

owocki@owocki-desktop:~$ nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[0]=200

ERROR: The attribute 'GPUGraphicsClockOffset' specified in assignment
       '[gpu:0]/GPUGraphicsClockOffset[0]=200' cannot be assigned (it is a read-only attribute).

owocki@owocki-desktop:~$ nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[0]=1500

ERROR: The attribute 'GPUMemoryTransferRateOffset' specified in assignment
       '[gpu:0]/GPUMemoryTransferRateOffset[0]=1500' cannot be assigned (it is a read-only
       attribute).

owocki@owocki-desktop:~$ sudo nvidia-smi -ac 1911,4004
Setting applications clocks is not supported for GPU 0000:01:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 0000:02:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 0000:03:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 0000:04:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 0000:05:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 0000:06:00.0.
Treating as warning and moving on.
All done.

Here are the specs of my rig:

owocki@owocki-desktop:~$ nvidia-smi
Fri Nov 18 07:30:56 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
| 80%   48C    P2    95W / 170W |   1832MiB /  8106MiB |     95%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 0000:02:00.0     Off |                  N/A |
| 80%   46C    P2   118W / 170W |   1823MiB /  8113MiB |     82%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 1070    Off  | 0000:03:00.0     Off |                  N/A |
| 80%   48C    P2   121W / 170W |   1823MiB /  8113MiB |     85%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 1070    Off  | 0000:04:00.0     Off |                  N/A |
| 80%   46C    P2   114W / 170W |   1823MiB /  8113MiB |     79%      Default |
+-------------------------------+----------------------+----------------------+
|   4  GeForce GTX 1070    Off  | 0000:05:00.0     Off |                  N/A |
| 80%   49C    P2   109W / 170W |   1823MiB /  8113MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   5  GeForce GTX 1070    Off  | 0000:06:00.0     Off |                  N/A |
| 80%   47C    P2   104W / 170W |   1823MiB /  8113MiB |     82%      Default |
+-------------------------------+----------------------+----------------------+

Can anyone tell me what I might do to get my GPUGraphicsClockOffset and GPUMemoryTransferRateOffset to update?

Thanks,
@owocki

You might need to set the coolbits flag in your xorg.conf if you haven’t already. If you can overclock via nvidia-xsettings, you should be able to do the same via nvidia-settings.

The relevant section of the latest README, explaining Coolbits for the Linux drivers:
http://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/README/xconfigoptions.html
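
For reference, a minimal sketch of what that usually looks like; the Identifier will differ on your system. Either add the option to the Device section of /etc/X11/xorg.conf:

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "Coolbits" "31"
EndSection

or let nvidia-xconfig write it for you and then restart X:

$ sudo nvidia-xconfig --cool-bits=31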

Thanks for the note. Unfortunately, I’ve already set the “Coolbits” option in this GPU’s X config to 31, so I’m fairly confident that’s not the issue.

Maybe I’ll try nvidia-xsettings next, since nvidia-settings doesn’t seem to be able to do the trick.

As far as I am aware, consumer GPUs like the GTX 1070 don’t support setting specific application clocks; they only use automatic clock boosting. Professional GPUs, i.e. Teslas, support setting application clocks, and I think (not sure!) so do prosumer cards such as the Titan.

Thanks for the response.

Do you have a source on this? I tried googling but can’t seem to find anything.

Any way to trick my mobo into thinking this is a prosumer-level card?

I have been using all kinds of GPUs (consumer, Tesla, Quadro) for many years. As I said, I cannot state authoritatively that no consumer card supports application clocks, but based on experience I believe this is so. If you look at the nvidia-smi output above, it explicitly tells you that application clocks are not supported on your particular GPU. I have a lower-end Quadro K2200 here, and it, too, doesn’t support application clocks.

I am not sure what controls the availability of application clocks. Possibly the VBIOS, otherwise likely the driver, based on the GPU it detects. Possibly both in combination.

The primary purpose for application clocks is to ensure that all GPUs in a cluster or supercomputer (Tesla GPUs) can run at exactly the same speed. With just autoboost, one could have GPUs running at up to four or five different speeds, as they boost to different clock frequencies depending on local temperature and manufacturing tolerances. That makes things difficult when one tries to split work evenly across the nodes of a cluster. This situation does not typically come up with consumer GPUs.
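
If you want to check what a given card reports, a quick query is below; on consumer cards it typically just shows N/A, matching the “not supported” warning above.

$ nvidia-smi -i 0 -q -d SUPPORTED_CLOCKS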

Figured this out. For anyone finding this via Google, I’m reposting the answer from a reddit /r/nvidia thread.

You may need to update your driver: the latest Linux driver is 375.20, Pascal overclocking support was introduced in 370.x, and it seems you are using 367.57.
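
In case it helps anyone else on Ubuntu, one common route to a newer driver is the graphics-drivers PPA. This is only a sketch, not the only way, and the package name changes as new versions are released:

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
$ sudo apt-get install nvidia-375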

Hi all,
I have updated to the latest version (375.20) but I still cannot enable overclocking on the GTX 1070.
I am now using Ubuntu 16.04, with 6 Palit GTX 1070 GPUs.

The purpose of this setup is mining.

Please suggest.
Aoddy

Sorry for necroposting, but I’ve just added a 1070 to my existing 970 rig and I cannot budge the 1070 off Performance Level 2, yet I can move the 970 however I need. Xorg and Coolbits are all set, and the nvidia-settings app gives me all the necessary control options for both cards. I’m using Ubuntu 16.04, and when I issue…

$ nvidia-smi -L
GPU 0: GeForce GTX 1070 (UUID: GPU-b847f7f4-0732-52bf-7d0a-d9fe3699de77)
GPU 1: GeForce GTX 970 (UUID: GPU-345e8749-a649-b5f5-c9f7-555b22f79350)

$ sudo nvidia-smi -i 0 -ac 4004,1911
Setting applications clocks is not supported for GPU 0000:00:06.0.
Treating as warning and moving on.
All done.

There are lots of posts about driver versions, but I’ve tested this on both the nvidia-375 and nvidia-381 packages and they behave the same. As far as I can tell my 1070 is on the latest VBIOS, so I’m a little stumped. Has anyone gotten this to work?
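
For what it’s worth, a quick way to see the VBIOS and driver version the card itself reports is:

$ nvidia-smi --query-gpu=name,vbios_version,driver_version --format=csv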

I am having this same issue on Ubuntu 16.04 with a GTX 970 and a GTX 1070.

I tried the binary 381 and 375 packages; neither does the trick.

Same problem for me… Using the latest version (375.66).

So I’ve noticed something odd between the nvidia-settings GUI and the command-line app for my 1070. I’ll try to explain clearly; comments/questions welcome.

nvidia-smi (CLI) reports the card running at P2, which I infer means Performance Level 2.
nvidia-settings (GUI) reports the same.
I set my app running, which is genoil’s ethminer using CUDA 8.0, and take a note of how fast it’s running.
nvidia-settings (CLI): I issue “nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1200”.
nvidia-settings (GUI) shows no change to the memory clock speed, and P2 is still the case?! (I close and reopen the app to be sure.) The ethminer app does NOT show any performance improvement.
nvidia-settings (CLI): I issue “nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1200”.
nvidia-settings (GUI) now reports that I’ve changed the memory clock speed by populating the text box; Performance Level 2 is still in effect, but ethminer is now showing a significant improvement in performance, i.e. the memory clock change has most certainly made an impact.

Looking at the PerfModes, it seems my 970 requires P3 to be able to amend the graphics and memory clocks, whereas the 1070 will allow these to be amended at any performance level. So the oddity I find is why I need to issue a P3-level command (/GPUMemoryTransferRateOffset[3]) for this to take effect when nvidia-smi shows the card to be in P2.
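
For anyone comparing cards, the per-level clock ranges can be listed via the read-only GPUPerfModes attribute, and a crude workaround is to apply the same offset at every level the driver reports. This is only a sketch, assuming levels 0–3 exist; some levels may be rejected depending on the card and the Coolbits value:

$ nvidia-settings -q [gpu:0]/GPUPerfModes
$ for lvl in 0 1 2 3; do nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[${lvl}]=1200"; done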

Similar error trying to OC this:

MSI GTX 1060 3GT OC
Ubuntu Server 16.10
NVIDIA v375.66
lightdm installed

$ nvidia-smi -i 0 -ac 4004,900
Setting applications clocks is not supported for GPU 0000:01:00.0.
Treating as warning and moving on.
All done.

This card was previously installed in a Windows 10 rig and overclocked with no problem at all, so it’s not a firmware/hardware issue. I believe the problem is poorly developed Linux drivers (I’m not blaming the developers’ capabilities; I assume a lack of resources).

Same issue here:

Fresh install of Ubuntu 16.04.2 Server x64
NVIDIA driver version 375.66
Palit GameRock GTX 1070

When I run the command “sudo nvidia-smi -ac 4004,1961”, I get:

Setting applications clocks is not supported for GPU 0000:01:00.0.
Treating as warning and moving on.
All done.

When running nvidia-smi, it shows the power state as P2.

Please advise when a fix will be available.

I am also having the same issue of not being able to change clock settings on my GTX 1070.

I have a suspicion about the cause. When you run

nvidia-smi -q -d SUPPORTED_CLOCKS

are you all getting N/A like I am?

It doesn’t seem to be recognizing the GPU clocks for some reason, but it can load and run CUDA code fine.

I’m running Debian x64 with a 4.11 kernel, and I have an MSI GTX 1070 GAMING X 8G with NVIDIA driver 375.66.

Yes, I am also seeing that. The command I ran to find the max clock speed was

nvidia-smi -q -d CLOCK

and those were the values I used for the other command. However, I am about to nuke this install and put Windows on there :(

Yeah, it seems it will be a while before this is fixed in the driver :(

Interestingly, one can still set the ‘max supported clocks’ for the Maxwell GTX 980 in Windows 7/8/10. However, for Pascal this currently does not seem to be supported. For the GTX 980, setting the boost clock to ‘max supported’ (via NVSMI) boosts performance from ~5.1 teraflops to ~5.7 teraflops.

On Windows at least, the boost clock for Pascal does kick in for compute tasks, and I have seen an EVGA GTX 1080 Ti SC hit over 12.7 teraflops with boost (measured via CUDA-Z) when it otherwise runs at around 11 teraflops. In general I tend to see slightly better performance on Windows (7/8/10) than on Linux (Ubuntu 14.04), but there could be other unknown factors influencing this observation.

FozzyB, try this: “The optimized code by David Li” by davilizh · Pull Request #18 · ethereum-mining/ethminer · GitHub

Thanks, but I am running Ubuntu Server with only a command line and no X server installed. Below is the output of the nvidia-settings command:

ERROR: libgtk-3.so.0: cannot open shared object file: No such file or directory
libnvidia-gtk3.so: cannot open shared object file: No such file or directory
libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
libnvidia-gtk2.so: cannot open shared object file: No such file or directory

ERROR: A problem occured when loading the GUI library. Please check your installation and library path. You may need to specify this library when
calling nvidia-settings. Please run nvidia-settings --help for usage information.

Usually errors like this mean I am missing some packages. Does this mean I will need a GUI installed, or can someone advise which packages I need to install?
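
Not an authoritative answer, but the error messages suggest two separate things: the missing GTK libraries (on Ubuntu, libgtk-3.so.0 is provided by the libgtk-3-0 package), and the fact that nvidia-settings needs a running X server to apply these attributes even from the command line. A rough sketch of the usual headless approach, assuming an X server such as the one lightdm starts is available:

$ sudo apt-get install libgtk-3-0
$ sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=31
$ sudo service lightdm restart
$ DISPLAY=:0 nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=1200"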