overclocking issues

Hello,
I installed the latest beta driver today to try the long-awaited overclocking feature, but without success: nothing changes…
I tried Coolbits values of both 8 and 12. nvidia-settings correctly shows me the performance-level editing checkbox and I can even enter values for the GPU and memory clocks (I tried 100,300 and 200,500), but I can't notice even a slight improvement.
Even querying GPU3DClockFreqs shows that the frequencies didn't change:

  Attribute 'GPU3DClockFreqs' (mescalito:1.0): 562,2800.
    The valid values for 'GPU3DClockFreqs' are in the ranges 562 - 1124, 700 - 3360 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.
  Attribute 'GPU3DClockFreqs' (mescalito:1[gpu:0]): 562,2800.
    The valid values for 'GPU3DClockFreqs' are in the ranges 562 - 1124, 700 - 3360 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.

If I try to set the frequencies from the CLI:

$ nvidia-settings -a GPUOverclockingState=1 -a GPU3DClockFreqs=600,700
ERROR: Error assigning value 39322300 to attribute 'GPU3DClockFreqs' (mescalito:1.0) as specified in assignment 'GPU3DClockFreqs=600,700' (Unknown Error).


ERROR: Error assigning value 39322300 to attribute 'GPU3DClockFreqs' (mescalito:1[gpu:0]) as specified in assignment 'GPU3DClockFreqs=600,700' (Unknown Error).

What did I do wrong?
Thank you.

PS: this is an Alienware X51 with a GTX 660 OEM.

If you go back to the nvidia-settings GUI, make sure you hit the Enter key after typing the new offset values, otherwise they won't stick. This isn't immediately obvious since there's no Apply button.

I noticed the same behaviour with my GTX 750 Ti. Sure enough, you have to hit Enter, and it works flawlessly for the memory clock but not for the GPU clock. Does that have something to do with the default TDP limit?

That was it! Thank you!

I just noticed it only seems to work in Adaptive mode on the GM107. When I set it to Prefer Maximum Performance, the overclock doesn't seem to have any effect on the Max Graphics Clock. Also, when Fedora 20 comes back from hibernate, the driver stays in Adaptive mode and cannot be switched back to Prefer Maximum Performance mode.

I saw some strange behavior as well when changing back and forth between settings. I think one of the things I tried was the performance level, and I managed to get my GTX Titan Black to stop responding to clock changes until I rebooted. It seemed stuck in some sort of throttling state (clocks around 600 MHz), even though temperatures were fine.

Having the same issue here:

:~/ > nvidia-settings -a GPUOverclockingState=1 -q GPU3DClockFreqs

  Attribute 'GPU3DClockFreqs' (lessid:0.0): 700,1800.
    The valid values for 'GPU3DClockFreqs' are in the ranges 405 - 1400, 450 - 2160 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.
  Attribute 'GPU3DClockFreqs' (lessid:0[gpu:0]): 700,1800.
    The valid values for 'GPU3DClockFreqs' are in the ranges 405 - 1400, 450 - 2160 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.

~/ > nvidia-settings -a GPUOverclockingState=1 -a GPU3DClockFreqs=800,4200

    The valid values for 'GPU3DClockFreqs' are in the ranges 405 - 1400, 450 - 2160 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.
    The valid values for 'GPU3DClockFreqs' are in the ranges 405 - 1400, 450 - 2160 (inclusive).
    'GPU3DClockFreqs' can use the following target types: X Screen, GPU.

~/ > nvidia-settings -a GPUOverclockingState=1 -a GPU3DClockFreqs=800,2100

ERROR: Error assigning value 52430900 to attribute 'GPU3DClockFreqs' (lessid:0.0) as specified in assignment 'GPU3DClockFreqs=800,2100' (Unknown Error).


ERROR: Error assigning value 52430900 to attribute 'GPU3DClockFreqs' (lessid:0[gpu:0]) as specified in assignment 'GPU3DClockFreqs=800,2100' (Unknown Error).

So what is the correct way to use the CLI? Doing this in the GUI works as expected.

At the moment, there isn't a way to adjust clocks for GTX 400 series and later GPUs (i.e. "Coolbits" "8" as opposed to "Coolbits" "1") from the CLI. The "GPU3DClockFreqs" attribute is only valid for older GPUs that use "Coolbits" "1" overclocking.

“Coolbits” “8” CLI control should be coming in a future driver release: it will use different attributes, due to fundamental differences in the way reclocking works on GPUs compatible with “Coolbits” “1” and those compatible with “Coolbits” “8”.
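For anyone following along: whichever Coolbits mode your GPU uses, the option itself goes in the Device section of your X configuration. A minimal sketch (the Identifier string and file location are assumptions; adjust to match your own xorg.conf or conf.d snippet):

```
Section "Device"
    Identifier "Nvidia Card"
    Driver     "nvidia"
    # Coolbits is a bitmask; bit 3 (value 8) enables the per-performance-level
    # clock offsets discussed in this thread on supported GPUs.
    Option     "Coolbits" "8"
EndSection
```

You need to restart X after changing this for the controls to appear in nvidia-settings.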

[EDIT: deleted the previous text in this post, which incorrectly stated that the recently released 337.19 driver does not have the nvidia-settings command line support for “Coolbits” “8” overclocking]

Looks like “a future driver release” is available already. The recently released 337.19 driver includes command line support for “Coolbits” “8” overclocking. See the text about “GPUGraphicsClockOffset” and “GPUMemoryTransferRateOffset” in the nvidia-settings(1) man page for information on how to use the new command line interface.

Thanks ddadap, you pointed me in the right direction. There's nothing in the README yet about GPUGraphicsClockOffset etc., but I guess that will come in the future. I admit I didn't think to check the man page for nvidia-settings, since the README didn't mention it.

Success story:

I’m now running this script successfully on GTX460/337.19/Debian Jessie:

#!/bin/bash

# Set fan speed to 60%
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \
                -a "[fan:0]/GPUCurrentFanSpeed=60"

# Enable overclock
# nvidia-settings --query GPUPerfModes
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=100" \
                -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=400"

I would like to report, though, that if you query a perf level which doesn't exist (e.g. nvidia-settings --query "[gpu:0]/GPUGraphicsClockOffset[4]" while you only have perf levels 0-3), X (or presumably the driver) will crash.
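Until that's fixed, a defensive sketch that checks the highest available perf level before assigning an offset. The sample GPUPerfModes string below is hypothetical (the real format varies by driver and GPU); on a live system you would populate it with `nvidia-settings -t -q "[gpu:0]/GPUPerfModes"`:

```shell
#!/bin/bash
# Hypothetical sample of GPUPerfModes output; replace with the real query:
#   modes=$(nvidia-settings -t -q "[gpu:0]/GPUPerfModes")
modes='perf=0, nvclock=135; perf=1, nvclock=405; perf=2, nvclock=705; perf=3, nvclock=1110'

# Pull out the highest "perf=N" token so we never index past it.
max_level=$(printf '%s\n' "$modes" | grep -o 'perf=[0-9]*' | cut -d= -f2 | sort -n | tail -1)
echo "highest perf level: $max_level"

# Only then assign, using the validated index:
# nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[$max_level]=100"
```

This avoids ever handing nvidia-settings an out-of-range perf-level index.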

:~/ > nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[3]=100" --assign "[gpu:0]/GPUMemoryTransferRateOffset[3]=600"

works as expected. Thank you!

royttt,

Thanks for reporting the crash. I filed NVIDIA bug #1511340 to track this issue.