GPU Priority

Hi,
I'm a bit of a newbie to GPU processing, CUDA, etc.; however, I've got a new machine with two NVIDIA graphics cards running Ubuntu 10.10. I've got everything installed and working fine, and both GPUs show up in the NVIDIA X Server Settings.

The two cards I have are a GeForce GTX 470 and a Tesla C2050. The Tesla is obviously the more powerful compute card, so I was expecting GPU programs to run on it rather than the GeForce. However, when I run the supplied example codes, they seem to run on the GeForce (judging by the GPU usage shown when I run nvidia-smi -a from a terminal). My monitor is connected to the GeForce card.

Is there a way I can basically prioritise the cards so that the Tesla is used as my primary GPU for computation? It seems slightly silly to run things on the lower-spec card, but that seems to be the situation at the minute: the GeForce shows up as GPU 0 while the Tesla is GPU 1.

Any help with this would be appreciated.

Thanks
Jonathan

Use nvidia-smi to put the GTX 470 into compute prohibited mode. The CUDA runtime will then automagically push contexts onto the C2050.
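
From memory, the syntax is something like:

nvidia-smi -g <gpu id> -c 2

where -g selects the GPU and -c 2 sets compute prohibited mode, but check the nvidia-smi man page, as the exact flags vary between driver releases.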

Hi avidday,

Just to check that I've got this right after looking at the nvidia-smi docs, I'll want to run the following (the GTX 470 is GPU 0 and the C2050 is GPU 1):

nvidia-smi -g 0 -c 2

Thanks

Jonathan

Yes, that should be right.

Hey,
I've made the compute mode change you suggested, but now none of the compiled example codes work with the GPU. I'm getting errors when I try to run them, such as:

OceanFFT:

Error on run:
CUFFT error in file 'oceanFFT.cpp' in line 313.

Code:
cudaGLSetGLDevice( cutGetMaxGflopsDeviceId() );

// create FFT plan
CUFFT_SAFE_CALL( cufftPlan2d(&fftPlan, meshW, meshH, CUFFT_C2R) ); <-- This line here

nbody:

Error on run:
Compute 2.0 CUDA device: [GeForce GTX 470]
bodysystemcuda_impl.h(131) : cudaSafeCall() Runtime API error : invalid device ordinal.

Code:
glBindBuffer(GL_ARRAY_BUFFER, 0);
cutilSafeCall(cudaGraphicsGLRegisterBuffer(&m_pGRes[i],
                                           m_pbo[i],
                                           cudaGraphicsMapFlagsNone)); <-- This line here

I've tried recompiling the code after making that compute mode change, but the errors are still there. Any suggestions?

I was hoping the mode change would effectively act like setting CUDA_VISIBLE_DEVICES=1 for every program, but apparently not. If I could find something like that, it would be really helpful.
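
For instance, assuming my CUDA version supports that environment variable, I believe I could launch each sample as:

CUDA_VISIBLE_DEVICES=1 ./nbody

so that only the C2050 is visible to the runtime (renumbered as device 0), but I'd rather not have to prefix every command.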

Thanks
Jonathan

Interestingly, it looks like the examples are specifying devices manually rather than just allowing the CUDA context to go to whatever default device is available. For example:

cudaGLSetGLDevice( cutGetMaxGflopsDeviceId() );

This fragment is selecting the device with the highest GFLOPS rating, which is actually your GTX 470 (448 CUDA cores @ 1.215 GHz, vs. 1.15 GHz for the C2050). Since that card is now in compute prohibited mode, the samples fail as soon as they try to create a context on it.
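
For what it's worth, the helper boils down to something like this (a rough sketch of the heuristic, not the verbatim cutil source):

#include <cuda_runtime.h>

// Sketch of a "max GFLOPS" device pick: score each device by
// multiprocessor count x clock rate and return the highest scorer.
int maxGflopsDeviceIdSketch(void)
{
    int count = 0, best = 0, bestScore = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        int score = prop.multiProcessorCount * prop.clockRate;  // clockRate is in kHz
        if (score > bestScore) {
            bestScore = score;
            best = dev;
        }
    }
    return best;
}

With both cards having 14 multiprocessors, the GTX 470's higher shader clock makes it win.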

However, in your own code, if you do not call cudaSetDevice(), then the right thing will happen if the GTX 470 is set to compute prohibited mode.
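
For example, a minimal program like this (my sketch, assuming your compute mode change is in effect) should land on the C2050 without any explicit device selection:

#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    // No cudaSetDevice() call: the runtime creates its context on the
    // first device whose compute mode permits one, which should skip
    // the prohibited GTX 470.
    cudaFree(0);  // force context creation
    int dev = -1;
    cudaGetDevice(&dev);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Context is on device %d: %s\n", dev, prop.name);
    return 0;
}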
