GTX Titan drivers for Linux 32/64 bit release?
Hello, I've just acquired a GTX Titan, but the current beta 313.09 drivers do not detect the card. See here for details:

http://www.evga.com/forums/fb.ashx?m=1875567

Any timeframe for release?

#1
Posted 03/02/2013 07:07 PM   
Try 313.18 drivers, they supposedly work.

#2
Posted 03/03/2013 05:31 PM   
Thanks for the answer, I wasn't even aware those existed... they don't come up by default when you search for them on NVIDIA's website. They do work with the GTX Titan as you mentioned; however, the PowerMizer settings for the Titan only show performance levels P0 - P3, with P2 = P3 at 575/3004 MHz (core/memory) clocks. P0 is 324/324 and P1 is 540/810, respectively... let's see if I can change PowerMizer with Coolbits...
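For reference, a minimal sketch of how Coolbits is typically enabled in the Device section of /etc/X11/xorg.conf -- the bit value here is an assumption on my part, since which bits unlock clock/fan controls varies by driver version (check the driver README), and X needs a restart afterwards:

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    # Assumed value; which Coolbits bits expose clock/fan controls
    # differs across driver releases -- see the driver README
    Option     "Coolbits" "4"
EndSection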

Edit: If this is right, then maybe the clocks are actually running at the correct frequency:
http://www.phoronix.com/scan.php?page=news_item&px=MTA4ODc

I will have to check with a known CUDA benchmark I already performed in Windows 7 x64...

#3
Posted 03/04/2013 02:39 AM   
[quote="vacaloca"]I will have to check with a known CUDA benchmark I already performed in Windows 7 x64...[/quote] Here are some {GTX 680, K20c} numbers for SDK "nbody" demo, [url]https://devtalk.nvidia.com/default/topic/525539/cuda-setup-and-installation/the-performance-of-nvidia-gtx-650-in-nbody-example/post/3725569/#3725569[/url] Would be fun to see some Titan numbers! [quote="rjl"]device 0 is k20c device 1 is EVGA GTX680 FTW, 1150mhz nbody -benchmark -numbodies=212992 -device=0 [1305.439 single-precision GFLOP/s at 20 flops per interaction] nbody -benchmark -numbodies=212992 -device=0 -fp64 [604.975 double-precision GFLOP/s at 30 flops per interaction] nbody -benchmark -numbodies=262144 -device=1 [1333.898 single-precision GFLOP/s at 20 flops per interaction] nbody -benchmark -numbodies=262144 -device=1 -fp64 [113.706 double-precision GFLOP/s at 30 flops per interaction] [/quote]
vacaloca said: I will have to check with a known CUDA benchmark I already performed in Windows 7 x64...


Here are some GTX 680 and K20c numbers for the SDK "nbody" demo (a build/run sketch follows the quote below):

https://devtalk.nvidia.com/default/topic/525539/cuda-setup-and-installation/the-performance-of-nvidia-gtx-650-in-nbody-example/post/3725569/#3725569

Would be fun to see some Titan numbers!

rjl said: device 0 is K20c
device 1 is EVGA GTX 680 FTW, 1150 MHz

nbody -benchmark -numbodies=212992 -device=0
[1305.439 single-precision GFLOP/s at 20 flops per interaction]

nbody -benchmark -numbodies=212992 -device=0 -fp64
[604.975 double-precision GFLOP/s at 30 flops per interaction]

nbody -benchmark -numbodies=262144 -device=1
[1333.898 single-precision GFLOP/s at 20 flops per interaction]

nbody -benchmark -numbodies=262144 -device=1 -fp64
[113.706 double-precision GFLOP/s at 30 flops per interaction]
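(For anyone wanting to reproduce these: a rough sketch of building and running the nbody sample from the CUDA samples tree -- the path below is an assumption and depends on where your samples were installed.)

cd /usr/local/cuda/samples/5_Simulations/nbody
make
./nbody -benchmark -numbodies=212992 -device=0        # single precision
./nbody -benchmark -numbodies=212992 -device=0 -fp64  # double precision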

#4
Posted 03/04/2013 05:41 AM   
Haha, okay, you twisted my arm... here you go:

(Also, based on the results, it seems it's probably running at the stock clocks, which is good! This is an EVGA SuperClocked version.)

device 0 is GTX Titan (detected as D15U-50)

nbody -benchmark -numbodies=212992 -device=0
[1828.241 single-precision GFLOP/s at 20 flops per interaction]

nbody -benchmark -numbodies=212992 -device=0 -fp64
[796.249 double-precision GFLOP/s at 30 flops per interaction]

nbody -benchmark -numbodies=229376 -device=0
[1848.131 single-precision GFLOP/s at 20 flops per interaction]

nbody -benchmark -numbodies=229376 -device=0 -fp64
[801.974 double-precision GFLOP/s at 30 flops per interaction]

Also... apparently it runs at PCI-E 3.0 speeds under Linux... that's certainly a surprise. ;)

[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: D15U-50
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 11202.8

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 11802.7

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 221209.6
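(If you want to double-check the negotiated link speed yourself, lspci can show it; the bus ID below is just an example -- "Speed 8GT/s" in the LnkSta line corresponds to PCI-E 3.0:)

lspci | grep -i nvidia                   # find the card's bus ID
sudo lspci -s 01:00.0 -vv | grep LnkSta  # example bus ID; look for "Speed 8GT/s"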

#5
Posted 03/04/2013 06:24 AM   
[quote="vacaloca"]Haha, okay, you twisted my leg... here you go: ... nbody -benchmark -numbodies=229376 -device=0 [1848.131 single-precision GFLOP/s at 20 flops per interaction][/quote] Oh my, [i]two TFLOPS per card[/i]. [quote]... runs at PCI-E 3.0 speeds under Linux[/quote] Gets better and better. Thanks vacaloca, [b]and NVidia[/b]!
vacaloca said: Haha, okay, you twisted my arm... here you go:

...

nbody -benchmark -numbodies=229376 -device=0
[1848.131 single-precision GFLOP/s at 20 flops per interaction]

Oh my, two TFLOPS per card.

... runs at PCI-E 3.0 speeds under Linux

Gets better and better. Thanks vacaloca, and NVIDIA!

#6
Posted 03/04/2013 03:00 PM   
You're welcome. :) It actually does hit a bit more than 2 TFLOPS when DP support is disabled. I noticed that in Windows this morning when I installed the new 314.14 drivers and the settings reset to defaults. Performance-wise in nbody, the new Windows drivers did not make any difference.

I thought I'd mention that the 313.26 drivers are out, which officially support the GTX Titan:
https://devtalk.nvidia.com/default/topic/533558

#7
Posted 03/04/2013 07:44 PM   
So I tested the new 313.26 drivers, and by default the GTX Titan runs at PCI-E 2.0 speeds instead of PCI-E 3.0:

root@Tesla:/usr/local/cuda/samples/1_Utilities/bandwidthTest$ ./bandwidthTest 
[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: GeForce GTX TITAN
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 5864.8

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 6396.7

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 221238.9


To enable PCI-E 3.0 speeds, there is now a flag that can be passed to the nvidia kernel module: NVreg_EnablePCIeGen3=1
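(Normally a module option like this would go in a modprobe configuration file -- a sketch below, with an assumed file name -- but as described next, I had to use the kernel command line for it to actually take effect:)

# /etc/modprobe.d/nvidia-pcie-gen3.conf (file name is arbitrary)
options nvidia-313 NVreg_EnablePCIeGen3=1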

I had to set it using the kernel options in the generated grub.cfg for it to work... So, on Ubuntu 12.10, I edited /etc/default/grub and modified the kernel boot options line* to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvidia-313.NVreg_EnablePCIeGen3=1"

After that, I ran:
sudo update-grub
and after a reboot, PCI-E 3.0 speeds are enabled again. =)
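(As a sanity check that the module actually picked up the flag, the loaded parameter values can be inspected -- the exact output format here is an assumption:)

cat /proc/driver/nvidia/params | grep -i Gen3   # should show EnablePCIeGen3: 1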

For those curious, I'm running the card on an MSI X79A-GD45 (8D) motherboard -- its successor seems to be the X79A-GD45 Plus. I have to enable PCI-E 3.0 support manually in Windows as well, because the X79 platform is not officially supported at PCI-E 3.0 speeds.

*Note: The module is named nvidia-313 because I am using the xorg-edgers repository, and that's what they named the module to differentiate it from the default nvidia-current drivers.

#8
Posted 03/06/2013 04:02 PM   