Forcing Retina MacBook Pro to use Intel Graphics for desktop, freeing memory on CUDA device?
I recently got one of the 15-inch Retina MacBook Pro laptops for CUDA development, and it automatically switches between the integrated Intel HD 4000 GPU and the discrete GeForce GT 650M when I start a CUDA program. This works quite smoothly, but I'm noticing that depending on what applications I have running on my desktop, I can easily have less than 250 MB of device memory available for CUDA applications.

Has anyone discovered a way to force the operating system to continue using the Intel graphics for the desktop, freeing the NVIDIA GPU to be used exclusively for CUDA applications?
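For anyone wanting to measure how much the desktop is eating, here's a minimal sketch using the standard CUDA runtime API (compile with nvcc) that reports free vs. total device memory:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Queries the current device's free and total memory in bytes.
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Free:  %zu MB\n", freeBytes  / (1024 * 1024));
    printf("Total: %zu MB\n", totalBytes / (1024 * 1024));
    return 0;
}
```

Running this with a few browser windows open vs. a clean desktop should show exactly how much memory the display is holding.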

#1
Posted 02/01/2013 03:12 PM   
I have the same laptop for the same reason.

The critical tool to help is gfxCardStatus (http://gfx.io/).
It will allow you to FORCE the integrated GPU on all the time, so the discrete GPU can always be available for CUDA.

But, unfortunately, the NV CUDA drivers are really quite unfriendly on the Mac.
Even with the above tool helping, the drivers still don't disable the watchdog timer. So your kernels will be killed if they run too long. And live debugging may not work either, though I have not tried much after a few failures.
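You can check whether the watchdog applies to your device with a quick property query (standard runtime API, compile with nvcc):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device visible\n");
        return 1;
    }
    // Nonzero means the display watchdog applies to this device:
    // long-running kernels will be killed by the driver.
    printf("%s: kernelExecTimeoutEnabled = %d\n",
           prop.name, prop.kernelExecTimeoutEnabled);
    return 0;
}
```

If the flag stays at 1 even with the display forced onto the integrated GPU, that confirms the driver isn't treating the 650M as a compute-only device.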


This is still a lot better than it was in the 5.0 beta. When I first got the laptop, you could not use CUDA unless the discrete GPU was used for display! So only the OpenGL SDK examples ever worked. :-)

#2
Posted 02/01/2013 05:16 PM   
Ah, I should have mentioned that I've already been playing with gfxCardStatus, but it seems that in recent versions it prevents you from switching on the integrated GPU if a process requires the discrete GPU:

http://gfx.io/switching.html#integrated-only-mode-limitations

I downloaded the source for gfxCardStatus and manually removed the check, so I could force the integrated GPU on while a CUDA program was running. Unfortunately, as soon as the program terminated, the unloading of the CUDA context crashed the entire system and I had to reboot. (Everything went black!)
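For what it's worth, one thing that might be worth trying (no guarantee it avoids the crash, but it makes the context teardown happen at a point you control, rather than in the driver's at-exit cleanup while the OS may be powering the GPU down):

```cuda
#include <cuda_runtime.h>

int main() {
    int *d_buf = 0;
    cudaMalloc((void **)&d_buf, 1 << 20);  // ... run your CUDA work here ...
    cudaFree(d_buf);

    // Explicitly destroy the context before the process exits, instead of
    // relying on the runtime's implicit cleanup at process teardown.
    cudaDeviceReset();
    return 0;
}
```

This is just a sketch; if the black-screen crash happens inside the driver's context destruction itself, calling it explicitly won't help.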

How are you using gfxCardStatus, and which version is it?

#3
Posted 02/01/2013 07:02 PM   
I'm using the same version of gfxCardStatus.
But the important rule is to force the integrated GPU on *FIRST* and LEAVE it there forever.
Trying to switch it later causes unpredictable issues (as you found).



More bad news about the RMBP and CUDA (Apple's blame, not NV's). If you install Linux or Windows, the Apple EFI bootloader will disable the integrated GPU entirely, forcing the sideloaded OS to run on the discrete GPU full-time. This is likely just to give OS X a huge battery-life advantage over other OSes. But unfortunately it sucks for CUDA developers. I was planning to develop on all three OSes with one machine.

#4
Posted 02/01/2013 09:40 PM   
When I use gfxCardStatus to set the GPU mode to "Integrated Only", I can't run any CUDA programs because they don't find any CUDA devices on the system. Is there something else I'm forgetting to do?
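A quick way to see exactly what the runtime reports in that state is to check the error code from device enumeration (sketch, standard runtime API):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    // With the discrete GPU powered down, this typically fails with an
    // error (e.g. cudaErrorNoDevice) rather than returning a count of zero.
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", count);
    return 0;
}
```

Knowing which error string comes back would also be useful detail for a bug report.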

#5
Posted 02/02/2013 03:17 PM   
I used to have that problem (CUDA didn't see the GPU unless it was actively being used for display) but that was fixed for me with CUDA Toolkit 5.0 final. You can see some of my troubleshooting back in this old thread: https://devtalk.nvidia.com/default/topic/519429/dev-driver-5-0-for-mac/

#6
Posted 02/04/2013 03:56 AM   
Hmm, the latest CUDA 5.0 drivers still show the same problem of the CUDA device not being visible unless the GUI is running on it, so I think I'll need to file a bug report.

#7
Posted 02/04/2013 03:28 PM   
I just checked again... and I confirm your results! The GPU isn't seen by CUDA unless the GPU is actively being used.

I remember this working months ago, but obviously that's not the case now. My hypothesis: I usually work with a web browser open at the CUDA docs page, and that usually kicks on the discrete GPU, so CUDA would silently work properly.

I submitted this as a bug back in June; it's case 174841 if you want to reference it in your own report.

#8
Posted 02/04/2013 07:53 PM   