GPU Priority
Hi,
I'm a bit of a newbie to GPU processing, CUDA, etc.; however, I've got a new machine with two NVIDIA graphics cards running Ubuntu 10.10. I got everything installed and working fine, and both GPUs show up in the NVIDIA X Server Settings.

The two cards I have are a GeForce GTX 470 and a Tesla C2050. Obviously the Tesla is the more powerful of the two, so I was expecting GPU programs to run on that card rather than the GeForce; however, when I run the supplied example codes, they seem to run on the GeForce (judging by running nvidia-smi -a from a terminal and looking at the GPU usage). My monitor is connected to the GeForce card.

I wondered if there is a way I can basically prioritise the cards so that the Tesla is used as my primary GPU? It seems slightly stupid to use the lower-spec card to run things, but that seems to be the situation at the minute, as the GeForce is showing up as GPU0 while the Tesla is GPU1.

Any help with this would be appreciated.

Thanks
Jonathan

#1
Posted 11/09/2010 05:22 PM   
Use nvidia-smi to put the GTX 470 into compute prohibited mode. Then the CUDA runtime will automagically push contexts onto the C2050.
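
Something like the following (an untested sketch, plain CUDA runtime API only, no SDK helpers) should let you confirm which device the runtime actually picks once the mode is set:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    // Force the runtime to create a context on whichever device it considers
    // usable; devices in compute prohibited mode should be skipped.
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess) {
        printf("context creation failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Report which device the context ended up on.
    int dev = -1;
    cudaGetDevice(&dev);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("runtime chose device %d: %s\n", dev, prop.name);
    return 0;
}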

#2
Posted 11/09/2010 05:33 PM   
[quote name='avidday' date='09 November 2010 - 05:33 PM' timestamp='1289324039' post='1143958']
use nvidia-smi to put the GTX470 into compute prohibited mode. Then the cuda runtime will automagically push contexts onto the C2050.
[/quote]

Hi avidday,
Just to check I've got this right after looking at the nvidia-smi docs: I'll want to run the following (the GTX 470 is GPU 0 and the C2050 is GPU 1):

nvidia-smi -g 0 -c 2
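
(I'm assuming I can then verify the change by running nvidia-smi -a again and checking the compute mode reported for GPU 0; please correct me if that's not how it shows up.)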

Thanks
Jonathan

#3
Posted 11/09/2010 11:06 PM   
Yes, that should be right.

#4
Posted 11/10/2010 04:21 AM   
Hey,
I've set the GPU priority you suggested, but now none of the compiled example codes work with the GPU. I'm getting errors when I try to run them, such as:

[b][u]OceanFFT:[/u][/b]

[u]Error on run: [/u]
CUFFT error in file 'oceanFFT.cpp' in line 313.

[u]Code: [/u]
cudaGLSetGLDevice( cutGetMaxGflopsDeviceId() );

// create FFT plan
CUFFT_SAFE_CALL(cufftPlan2d(&fftPlan, meshW, meshH, CUFFT_C2R) ); <--- This line here


[b][u]nbody:[/u][/b]

[u]Error on run:[/u]
Compute 2.0 CUDA device: [GeForce GTX 470]
bodysystemcuda_impl.h(131) : cudaSafeCall() Runtime API error : invalid device ordinal.

[u]Code:[/u]
glBindBuffer(GL_ARRAY_BUFFER, 0);
cutilSafeCall(cudaGraphicsGLRegisterBuffer(&m_pGRes[i],
m_pbo[i],
cudaGraphicsMapFlagsNone)); <-- This line here



I've tried recompiling the code after making that GPU priority change but the errors are still there. Any suggestions?

I was hoping that this would just do something like set CUDA_VISIBLE_DEVICES=1 for all codes, but apparently not. If I could find something like that, it would be really helpful.
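
(For what it's worth, I think the invocation would just be something like CUDA_VISIBLE_DEVICES=1 ./nbody from the sample's directory, assuming my toolkit honours that variable; with only the Tesla visible it would become device 0 inside the process, so even samples that pick a device themselves should land on it. I haven't been able to confirm that it's supported here, though.)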

Thanks
Jonathan

#5
Posted 11/10/2010 10:08 AM   
Interestingly, it looks like the examples are specifying devices manually rather than just allowing the CUDA context to go to whatever default device is available. For example:

[quote name='jc8654' date='10 November 2010 - 04:08 AM' timestamp='1289383680' post='1144366']
[b][u]OceanFFT:[/u][/b]

[u]Error on run: [/u]
CUFFT error in file 'oceanFFT.cpp' in line 313.

[u]Code: [/u]
cudaGLSetGLDevice( cutGetMaxGflopsDeviceId() );
[/quote]

This fragment selects the device with the highest GFLOPS rating, which is actually your GTX 470: both cards have 448 CUDA cores, but the GTX 470 runs them at 1.215 GHz versus 1.15 GHz for the C2050.

However, in your own code, if you do not call cudaSetDevice(), then the right thing will happen if the GTX 470 is set to compute prohibited mode.
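
If you would rather pin your own programs to the Tesla explicitly instead of relying on the compute mode, something along these lines (an untested sketch; the name check is just a convenient heuristic) works with the plain runtime API:

#include <stdio.h>
#include <string.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // Default to device 0, then pick the first device whose name
    // contains "Tesla" (illustrative heuristic only).
    int chosen = 0;
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (strstr(prop.name, "Tesla") != NULL) {
            chosen = i;
            break;
        }
    }

    cudaSetDevice(chosen);
    printf("using device %d\n", chosen);
    // ... rest of the application ...
    return 0;
}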

#6
Posted 11/10/2010 12:26 PM   