GTX Titan drivers for Linux 32/64 bit release?

Hello, I’ve just acquired a GTX Titan, but the current 313.09 beta drivers do not detect the card. See here for details:

[url]http://www.evga.com/forums/fb.ashx?m=1875567[/url]

Any timeframe for release?

Try the 313.18 drivers; they supposedly work.

Thanks for the answer, I wasn’t even aware those existed… they don’t come up by default when you search for them on NVIDIA’s website. They do work with the GTX Titan as you mentioned; however, the PowerMizer settings for the Titan only show P0 - P3, with P2 = P3 at 575/3004 MHz (core/memory), P0 at 324/324, and P1 at 540/810. Let’s see if I can change PowerMizer with Coolbits…
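For anyone wanting to try the same, Coolbits is set through an option in the Device section of xorg.conf. A minimal sketch, assuming an existing “Device0” identifier; which bit values the driver actually honors varies by version, so check the driver README before relying on this:

Section "Device"
    Identifier "Device0"         # use whatever identifier your config already has
    Driver     "nvidia"
    Option     "Coolbits" "1"    # supported bit values vary by driver version
EndSection

X has to be restarted for the option to take effect.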

Edit: If this is right, then the clocks may actually be running at the correct frequencies:
[url]http://www.phoronix.com/scan.php?page=news_item&px=MTA4ODc[/url]

I will have to check against a known CUDA benchmark I already ran on Windows 7 x64…
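In the meantime, the clocks the driver reports can be sanity-checked from a terminal; nvidia-smi has a clock report, though on GeForce boards some of its fields may just read N/A:

nvidia-smi -q -d CLOCK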

Here are some GTX 680 and K20c numbers for the SDK “nbody” demo,

https://devtalk.nvidia.com/default/topic/525539/cuda-setup-and-installation/the-performance-of-nvidia-gtx-650-in-nbody-example/post/3725569/#3725569

Would be fun to see some Titan numbers!
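For anyone who wants to post comparable numbers: the sample prints GFLOP/s when run in benchmark mode. The body count below is only an example, and the -fp64 switch (for a double precision run) is only present in newer SDK versions:

./nbody -benchmark -numbodies=262144
./nbody -benchmark -numbodies=262144 -fp64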

Haha, okay, you twisted my arm… here you go:

(Also, based on the results, it seems the card is probably running at its stock clocks, which is good! This is an EVGA SuperClocked version.)

Also… apparently it runs at PCI-E 3.0 speeds under Linux… that’s certainly a surprise. ;) (The ~11 GB/s pinned host transfers below are beyond PCI-E 2.0 x16, which tops out around 6 GB/s in practice.)

[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: D15U-50
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			11202.8

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			11802.7

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			221209.6

Oh my, two TFLOPS per card.

Gets better and better. Thanks vacaloca, and NVidia!

You’re welcome. :) It actually does hit a bit more than 2 TFLOPS when DP support is disabled. I noticed that on Windows this morning when I installed the new 314.14 drivers and the settings reset to defaults. Performance-wise in nbody, the new drivers on Windows did not make any difference.

Thought I’d mention that 313.26 drivers are out that officially support GTX Titan:
Linux, Solaris, and FreeBSD driver 313.26 - Announcements and News - NVIDIA Developer Forums

So I tested the new 313.26 drivers, and by default the GTX Titan runs at PCI-E 2.0 speeds instead of PCI-E 3.0:

root@Tesla:/usr/local/cuda/samples/1_Utilities/bandwidthTest$ ./bandwidthTest 
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX TITAN
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			5864.8

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			6396.7

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			221238.9

To enable PCI-E 3.0 speeds, there is now a flag that can be passed to the nvidia kernel module: NVreg_EnablePCIeGen3=1
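(A module option like that would normally go in a modprobe.d snippet, sketched below with an example filename; in my case, though, only the kernel command line approach actually stuck.)

# /etc/modprobe.d/nvidia-313.conf  (example filename)
options nvidia-313 NVreg_EnablePCIeGen3=1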

I had to set it using the kernel options in the generated grub.cfg for it to work… So, on Ubuntu 12.10, I edited /etc/default/grub and modified the kernel boot options line* to something like the following (keeping the stock quiet/splash defaults; your existing options may differ):
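GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvidia-313.NVreg_EnablePCIeGen3=1"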

After that, I ran (update-grub is what regenerates grub.cfg on Ubuntu):
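sudo update-grub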

and after a reboot, PCI-E 3.0 speeds are enabled again. =)
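For anyone wanting to double-check the negotiated link speed outside of bandwidthTest, lspci shows it; the bus address below is just an example (find yours with lspci | grep -i nvidia), and keep in mind the link can train down to a slower speed while the GPU is idle, so check it under load. 8GT/s corresponds to PCI-E 3.0, 5GT/s to 2.0:

sudo lspci -vv -s 01:00.0 | grep LnkSta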

For those curious, I’m running the card on an MSI X79A-GD45 (8D) motherboard; its successor seems to be the X79A-GD45 Plus. I have to enable PCI-E 3.0 support manually in Windows as well, because the X79 platform is not officially supported at PCI-E 3.0 speeds.

*Note: The module is named nvidia-313 because I am using the xorg-edgers repository, and that’s what they named the module to differentiate it from the default nvidia-current drivers.