Clock rate on my GTX 680 is only 706 MHz. Is it broken?
I've just bought a new EVGA GTX680 and found that according to deviceQuery its GPU Clock rate is only 706 MHz. Isn't it supposed to be 1000+ MHz? Did they send me a broken part? Help!!

#1
Posted 04/19/2012 10:07 AM   
You are getting the same issue as this reviewer: http://www.phoronix.com/scan.php?page=article&item=nvidia_geforce_gtx680&num=3

[quote]
At least PowerMizer does work with the binary driver for automatically switching between performance levels. While PowerMizer works, after running the tests I realized there was a slight problem... The third (highest) performance level indicates a 705MHz core clock, 3004MHz memory clock, and 1411MHz processor clock. Okay, the GDDR5 memory clock is right, but the rest are not; the graphics core clock is some 300MHz too low. The performance level two is also the same as the performance level three. In checking what the Phoronix Test Suite was reporting, which reads its values using the nvidia-settings extension and in the case of clock frequencies via the "GPU3DClockFreqs", it too found the GK104 core topping out at 705MHz rather than 1006MHz.

In contacting the NVIDIA Linux team, they investigated and at first thought it might have been a defective video BIOS or other issue. However, in the end the NVIDIA Linux developers believe the card is operating correctly, it's just not being reported as such. With Kepler each of the GPU's performance levels has a range of frequencies and so they think it's basically just showing the low-end values. However, the reporting should be improved in a future release. For more details see this news posting: http://www.phoronix.com/scan.php?page=news_item&px=MTA4ODc
[/quote]

#2
Posted 04/19/2012 01:16 PM   
It is a bug in the runtime, it will be fixed in the next CUDA release.
CUDA 4.2 is now officially out, it should be in the release notes.

#3
Posted 04/19/2012 01:56 PM   
[quote name='mfatica' date='19 April 2012 - 06:56 AM' timestamp='1334843769' post='1398303']
It is a bug in the runtime, it will be fixed in the next CUDA release.
CUDA 4.2 is now officially out, it should be in the release notes.
[/quote]
Can't find it in CUDA_Toolkit_Release_Notes.txt.

I wrote a simple benchmark to measure the clock rate:
[code]
__global__ void wait( long long int *a, int n )
{
    long long int c0 = clock64(), c1;
    while( (c1 = clock64() - c0) < n );
    *a = c1;
}
[/code]
It doesn't do anything but wait for n cycles. I run it with 1 thread and a large n, measure its runtime, and get 1.23 GHz. Now I wonder if it is overclocked ;)

Also I benchmarked SGEMM and got 1.3 Tflop/s so far (the 'N','T' case). Not much compared to the declared 3 Tflop/s peak. Is this what I'm supposed to get?

#4
Posted 04/19/2012 02:47 PM   
The official 4.2 release was posted yesterday (but it seems no notice was sent to developers).


http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_Toolkit_Release_Notes_And_Errata.txt
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
NVIDIA CUDA Toolkit v4.2 Release Notes Errata
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
----------------------------------------
Known Issues
----------------------------------------
* Functions cudaGetDeviceProperties, cuDeviceGetProperties, and
cuDeviceGetAttribute may return the incorrect clock frequency for the SM clock
on Kepler GPUs. [Windows and Linux]


1.3 TFlops seems in the right ballpark.

#5
Posted 04/19/2012 02:57 PM   
[quote name='vvolkov' date='19 April 2012 - 07:47 AM' timestamp='1334846845' post='1398320']
I wrote a simple benchmark to bench the clock rate:
...
and get 1.23 GHz. Now I wonder if it is overclocked ;)
[/quote]
Just realized it could be the new dynamic clocking feature a.k.a. "GPU Boost". Can't reproduce 1.23 GHz though. Now I get 1.123 GHz. I guess that was a typo ;)

#6
Posted 04/28/2012 10:28 PM   
[quote name='vvolkov' date='28 April 2012 - 03:28 PM' timestamp='1335652112' post='1402070']
Just realized it could be the new dynamic clocking feature a.k.a. "GPU Boost". Can't reproduce 1.23 GHz though. Now I get 1.123 GHz. I guess that was a typo ;)
[/quote]
That's right - I just checked it using [url="http://www.evga.com/Precision/Default.asp"]EVGA Precision[/url]. It does adjust the clock dynamically!

I get 324 MHz when nothing much is happening on screen, 1005 MHz during some activity, and 1123 MHz when running CUDA. The memory clock also jumps up and down, but it doesn't seem to depend on the memory intensity of the kernel.

#7
Posted 04/29/2012 03:49 AM   