Dev driver 5.0 for Mac

Hi,
I just downloaded the CUDA 5.0 preview, but there is no 5.0 dev driver.

PS: I am running Snow Leopard.

I have exactly the same problem.

SOLVED:
The dev driver is in the new developer zone!

Hey guys,

I’m curious if/when the driver for OS X will include support for the GeForce 6xx cards.
It would be nice to upgrade the Mac Pro to one of these powerful but energy-efficient cards.

The main obstacle here is the need for an EFI version of the GeForce card itself. Even with an OS X driver, if you grab a GTX 680 off the shelf and stick it in a Mac Pro, it won’t work, because the card firmware assumes the existence of a PC BIOS. If you then use Boot Camp (which emulates a BIOS), you can use the card in Windows (or Linux, I assume) on the Mac Pro, but not in OS X. This is why they sell special Mac versions of graphics cards.

Thanks for your reply, seibert.

Do you know if all this will change as PC motherboards move over to EFI instead of BIOS? If someone buys a motherboard that has EFI, do they have to buy a special graphics card too, or how does it work?

I hear that “non-Mac” GeForce 580 cards work fine in a Mac Pro and OS X with the current release of the CUDA driver. How can that be? These cards must have a PC BIOS, right?

I have no idea how this will work when the PC world switches over.

I have not heard this. All I’ve seen are reports that non-Mac cards work with Boot Camp. Can you point me to someone’s report that this is working?

Here: http://www.youtube.com/watch?v=xYa39Y5nR6A

“AS OF 10.7.3 AND The new Nvidia drivers, you NO LONGER need to install ATY Init nor do you need to edit anything!”

Wow, that’s nice to see. Perhaps the growing support on the PC side for EFI has silently brought the two product lines together. Now we just need Kepler in the MacBook Pro lineup…

Thanks for the info!

The 500-series cards work now in the Mac Pro, except that you have to “blind boot”: you don’t see anything until the OS is fully loaded. That means you can’t use any of the EFI tools, such as holding down the Option key to select a startup disk when you boot. It’s not that much of a problem, though, assuming you still have your Apple-supplied card around to use in an emergency.

That’s great news! I’m considering purchasing a GTX 560 Ti, as they are on sale right now. If I leave my existing 8800 GT in the system (Mac Pro 3.1), will it show graphics during boot if the GTX 560 is installed too?

I just got my new Retina MacBook Pro with the Kepler 650m but haven’t been able to get CUDA working yet.

I’ve installed the developer preview driver, the toolkit, and the code samples, got the nvcc paths set up, and compiled all the examples.
But when run, all the samples fail with “CUDA driver version is insufficient for CUDA runtime version.”
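
In case it helps anyone else rule this out quickly, here is a minimal sketch that prints the two versions side by side, using the standard runtime calls cudaDriverGetVersion and cudaRuntimeGetVersion (the file name version_check.cu is just my choice); the runtime reports exactly this error when the driver is older than the runtime:

// version_check.cu: compare the installed CUDA driver version against the
// runtime version this app was built with. Build with: nvcc version_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // installed CUDA driver
    cudaRuntimeGetVersion(&runtimeVersion);  // runtime linked into this app
    printf("Driver:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    if (driverVersion < runtimeVersion)
        printf("Driver is older than the runtime: that is the mismatch.\n");
    return 0;
}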

So it sounds like I’m using the wrong driver, but I keep checking and rechecking and it doesn’t seem so.
This is a fresh install on a virgin, one-day-old Mac, so there are no old stale drivers or anything else that could interfere.

I downloaded the latest CUDA driver from NVidia’s registered developer site (the same place as the toolkit and SDK).
The CUDA driver panel itself says CUDA driver 5.0.9, GPU driver 7.28.1.295.10.15f03

The toolkit is correct… I can tell by running nvcc --version, which gives the correct “release 5.0” response.

That GPU driver version looks very fishy, but it doesn’t look like it is updated separately from the CUDA driver… the developer site gives only ONE driver file. (That driver file is labelled devdriver_5.0.8_macos.dmg, though it seems to have installed 5.0.9.)

I had no trouble getting 5.0 running on Linux and Windows, but my lack of previous OS X experience may be hurting me with this new toy.

Any suggestions about what stupid thing I missed?
Thanks!

It looks like my Retina MBP CUDA problem is a little deeper than I thought… CUDA 4.2 also has (different) problems. With CUDA 4.2, applications just say they can’t find any CUDA devices. (Enumeration via the runtime or driver API shows no GPUs.)

But I’ve figured out that problem. Normally the RMBP uses the integrated Ivy Bridge GPU, which is enough to drive the 2D screen. The display is handed off to the Kepler 650m when there’s 3D work to do, or to drive multiple displays. But it seems that when the integrated GPU is being used, the 650m becomes unavailable to CUDA as well! It is so unavailable that it isn’t even identified during device enumeration, so any CUDA program fails. OpenGL CUDA apps (like smokeParticles, simpleOpenGL, or volumeFiltering) will still work, since they first kick the OS into 3D mode and wake up the GPU, and then CUDA can see it. But that only works if the app opens OpenGL before CUDA. For example, nbody checks for CUDA before opening its OpenGL window, so it aborts and fails.
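
For anyone who wants to reproduce the enumeration failure, here is a minimal sketch using the standard runtime calls cudaGetDeviceCount and cudaGetDeviceProperties (enum_devices.cu is just an illustrative name); run it once with the integrated GPU active and again after waking the 650m:

// enum_devices.cu: list every CUDA device the runtime can currently see.
// Build with: nvcc enum_devices.cu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // With the 650m powered down this is the path you hit.
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) found\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  %d: %s (compute %d.%d)\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}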

A temporary, ugly workaround is to force OS X to always use the 650m for the display (you can force this in the Energy Saver panel). But then, of course, the CUDA apps are sharing the GPU with the display, giving screen stutters as the running CUDA kernel freezes the display. It also means the watchdog timer is running.
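
You can confirm the watchdog part directly: cudaDeviceProp has a kernelExecTimeoutEnabled field that is set whenever kernels on that GPU have a run-time limit (i.e. it is driving a display). A quick sketch, with watchdog_check.cu again just an illustrative name:

// watchdog_check.cu: report whether the display watchdog timer is active
// on device 0. Build with: nvcc watchdog_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device visible.\n");
        return 1;
    }
    printf("%s: watchdog timer %s\n", prop.name,
           prop.kernelExecTimeoutEnabled ? "ENABLED" : "disabled");
    return 0;
}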

Bugs filed, and NVidia engineers are already digging into them! As allanmac also found this week with his compiler reports, NVidia is really responding quickly to CUDA bug reports… Kudos as always to our NV heroes!

Excellent! It would be very nice if the NVIDIA GPU could be used exclusively for CUDA.