command line GPU load display on OS X

Hi all,

I need a command line based tool that runs on OS X and shows me the CUDA load on each GPU in the system. Is there anything like that for OS X at all?

Thanks

  • Balt

by “the CUDA load” you mean…?

a utilization figure/percentage, as reported by the nvidia x server for instance, or something more detailed, perhaps?

just a figure showing how many cores are at work and how much memory is in use would be useful. Similar to CPU load for a processor.

memory might not be too difficult; i think cores would be more difficult

i see the nvidia x server reports utilization (as a percentage) as well as memory use

hypothetically, you have 2 options: use the given apis, or write custom code
i can perhaps conceive of code that can determine whether a gpu has kernels running, but not code that can determine how many (unless the code records such primary data itself, from the start; a sketch of that follows below)
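
for illustration, a minimal sketch of the "records such primary data itself" approach, using the cuda runtime api: keep your own in-flight counter, bump it at launch, and decrement it from a stream callback once the kernel completes. busyKernel and kernelsInFlight are made-up names, and this only ever sees kernels launched by this process, nothing system-wide:

#include <cstdio>
#include <cuda_runtime.h>

// made-up counter of this process's own kernels in flight; the point is
// that the app has to record this itself -- no api reports it for you
static volatile int kernelsInFlight = 0;

__global__ void busyKernel(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * 2.0f + 1.0f;
}

// host callback: runs once all prior work in the stream has completed
// (stream callbacks must not make cuda calls themselves)
static void CUDART_CB onKernelDone(cudaStream_t stream, cudaError_t status, void *userData)
{
    (void)stream; (void)status; (void)userData;
    kernelsInFlight--;
}

int main()
{
    const int n = 1 << 20;
    float *d = 0;
    cudaMalloc(&d, n * sizeof(float));

    kernelsInFlight++;                          // record the launch ourselves
    busyKernel<<<(n + 255) / 256, 256>>>(d, n);
    cudaStreamAddCallback(0, onKernelDone, NULL, 0);

    printf("kernels in flight: %d\n", kernelsInFlight);   // likely still 1
    cudaDeviceSynchronize();
    printf("kernels in flight: %d\n", kernelsInFlight);   // 0 by now

    cudaFree(d);
    return 0;
}

(error checking omitted, and the decrement is not thread-safe; this is a sketch of the idea, not a monitor)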

What’s the nvidia x server?

I’m pretty sure the X server is the part of the OS that handles all graphics requests made by programs (OpenGL, I’m assuming) and draws to the screen. Or something to that effect; I’m pretty ignorant myself.

Basically, servers sit and listen for clients to give them requests.

there is x server, as alluded to by MutantJohn, and there is nvidia x server

because i have cuda installed, under applications i find:
nsight eclipse
nvidia visual profiler
nvidia x server

it is a gui that reports on the gpus; there might be a command line version too, i do not know

Hmmm… I have nothing in Applications; it’s all in /Developer/NVIDIA. Or does that require v7.0? I’m still at 6.5 because I can’t upgrade OS X due to some other apps I need that would no longer work.

i have this annoying habit of reading os x as centos
and you were all “command line this”, “command line that”
so, for a moment i was deeply under the impression that you were running linux

however, i am sure i have seen a similar tool under windows
if that is the case, surely os x must have such a tool as well

herewith a snapshot of the nvidia x server
i use it when debugging code that runs iterations that can take minutes/hours to complete
it is a cheap way to get a rough idea of the device memory footprint, and of how hard an individual gpu is working (or not)

thanks for this. I’ll take a look at what v7 delivers on OS X. I may just have to bite the bullet and upgrade.

I’m not sure you’ll find much of anything for OS X.

The canonical tool for this would be something based on NVML (which is what nvidia-smi uses, for example), but that library is not supported on OS X.
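
On Linux or Windows, where NVML is supported, the ready-made version of what you’re asking for is just nvidia-smi in a loop, e.g.:

nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total --format=csv -l 1

which prints per-GPU utilization and memory once a second. On OS X, that is exactly the piece that’s missing.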

You could write a fairly trivial CUDA app that just monitors and reports memory usage by calling cudaMemGetInfo repeatedly. You could derive a fairly basic “in use/not in use” indicator based on whether the memory in use is above a certain baseline value; see the sketch below.
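
A sketch of that idea, assuming the CUDA runtime API. The 200 MB threshold is an arbitrary number you would tune per system, and note that cudaMemGetInfo reports free/total memory for the whole device, so it also reflects allocations made by other processes:

#include <cstdio>
#include <unistd.h>        // sleep()
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // arbitrary assumed threshold: call a device "in use" once more than
    // ~200 MB beyond the bare context footprint is allocated
    const size_t threshold = (size_t)200 * 1024 * 1024;

    for (;;) {                                  // poll until killed
        for (int dev = 0; dev < count; ++dev) {
            size_t freeB = 0, totalB = 0;
            cudaSetDevice(dev);                 // cudaMemGetInfo acts on the current device
            cudaMemGetInfo(&freeB, &totalB);
            size_t used = totalB - freeB;
            printf("GPU %d: %zu / %zu MB used (%s)\n",
                   dev, used >> 20, totalB >> 20,
                   used > threshold ? "in use" : "idle");
        }
        sleep(1);
    }
    return 0;
}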

AFAIK there aren’t any apps anywhere, for any platform, that show you “how many cores are at work”. NVML does offer methods to assess compute utilization, but these are rather coarse and, again, not available on OS X (see the sketch below).
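
For reference, on a platform where NVML is available, that coarse utilization query looks roughly like this (link with -lnvidia-ml; nvmlUtilization_t gives percentages sampled over a short window, which is the coarseness referred to above):

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int count = 0;
    nvmlInit();                                  // error checking omitted
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlUtilization_t util;                  // .gpu and .memory, both in percent
        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetUtilizationRates(dev, &util);
        printf("GPU %u: %u%% compute, %u%% memory bandwidth\n",
               i, util.gpu, util.memory);
    }

    nvmlShutdown();
    return 0;
}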

Thanks. v7 on OS X 10.10 didn’t really bring anything to the table. And then I had to downgrade to 10.8.5 again because Apple, bafflingly, has disabled proper external GPU support from 10.9 onwards, so my external GPU cluster used for CUDA-based rendering no longer works. I may just have to get Linux running on that machine… it’s a shame really, because the Mac Pro hardware is incredibly well finished.