CUDA accessing ALL devices, even those which are blacklisted
Hi all,

I'm using Linux cgroups (through SLURM, http://www.schedmd.com/) to control CUDA and OpenCL jobs on machines with multiple NVIDIA GPUs. Cgroups are nice since they isolate processes and prevent them from accessing unallocated resources.

I can effectively prevent code from running on unallocated GPUs, but it seems that just initializing CUDA requires visiting every NVIDIA GPU in the system. Because cgroups prevents this, every run fails. For example, running /opt/cuda/sdk/C/bin/linux/release/deviceQuery through this setup fails:
standard output:
[code]/opt/cuda/sdk/C/bin/linux/release/deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 10
-> invalid device ordinal[/code]
error output:
[code][deviceQuery] starting...
[deviceQuery] test results...
FAILED
> exiting in 3 seconds: 3...2...1...done!
srun: error: shockwave: task 0: Exited with exit code 1[/code]

Running deviceQuery outside of the queueing system works just fine:
[code]$ /opt/cuda/sdk/C/bin/linux/release/deviceQuery
[deviceQuery] starting...

/opt/cuda/sdk/C/bin/linux/release/deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Found 3 CUDA Capable device(s)

Device 0: "GeForce GTX 580"
CUDA Driver Version / Runtime Version 4.1 / 4.1
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 3072 MBytes (3220897792 bytes)
(16) Multiprocessors x (32) CUDA Cores/MP: 512 CUDA Cores
GPU Clock Speed: 1.54 GHz
Memory Clock rate: 2004.00 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "GeForce 210"
CUDA Driver Version / Runtime Version 4.1 / 4.1
CUDA Capability Major/Minor version number: 1.2
Total amount of global memory: 512 MBytes (536674304 bytes)
( 2) Multiprocessors x ( 8) CUDA Cores/MP: 16 CUDA Cores
GPU Clock Speed: 1.23 GHz
Memory Clock rate: 600.00 Mhz
Memory Bus Width: 64-bit
Max Texture Dimension Size (x,y,z) 1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(8192) x 512, 2D=(8192,8192) x 512
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Concurrent copy and execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: No
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): No
Device PCI Bus ID / PCI location ID: 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 2: "GeForce GTX 580"
CUDA Driver Version / Runtime Version 4.1 / 4.1
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 3072 MBytes (3220897792 bytes)
(16) Multiprocessors x (32) CUDA Cores/MP: 512 CUDA Cores
GPU Clock Speed: 1.54 GHz
Memory Clock rate: 2004.00 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 65 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.1, CUDA Runtime Version = 4.1, NumDevs = 3, Device = GeForce GTX 580, Device = GeForce 210
[deviceQuery] test results...
PASSED

> exiting in 3 seconds: 3...2...1...done![/code]

Is there a way to prevent CUDA from trying to access every device on the system? Or at least to keep it from failing completely when it cannot access a device (one it shouldn't be accessing anyway)?
#1
Posted 02/01/2012 12:33 AM   
You can use the CUDA_VISIBLE_DEVICES environment variable to control visibility. If you run:

[code]CUDA_VISIBLE_DEVICES=0,2 /opt/cuda/sdk/C/bin/linux/release/deviceQuery[/code]

you will see output only for the two GTX 580s; the driver will not try to initialize the GT 210.

(See http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Best_Practices_Guide.pdf Section 12.5 for a complete explanation of this variable and its effects)
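For batch setups like the one described above, this can be wrapped in a small launcher so each job only ever sees its allocated ordinals. The sketch below is hypothetical (the `with_devices` helper is not part of CUDA or SLURM); only the CUDA_VISIBLE_DEVICES variable itself is real. Note that the listed devices are renumbered from 0 inside the launched process:

```shell
#!/bin/sh
# with_devices: illustrative helper that restricts CUDA device visibility
# for a single launched command, without polluting the caller's environment.
with_devices() {
    devices="$1"; shift
    # Only the listed physical ordinals are visible to the CUDA runtime;
    # inside the process they appear renumbered as 0, 1, ...
    CUDA_VISIBLE_DEVICES="$devices" "$@"
}

# Example: expose only physical devices 0 and 2 to the launched process.
with_devices 0,2 sh -c 'echo "$CUDA_VISIBLE_DEVICES"'   # prints 0,2
```

In a SLURM context, the same idea would go in a job prolog or wrapper script, with the device list derived from the job's GPU allocation.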
#2
Posted 04/25/2012 04:05 AM   