Two GPUs, but 2nd GPU not detected. How to fix?

Hi, I have two cards, and only one of them is detected when I run the utility ‘deviceQuery’. Interestingly, when I run ‘nvidia-smi’ on the command line, it sees both cards. On Ubuntu 14.04, there is an application called NVIDIA X Server Settings. In its settings, both cards are reported as present.

NVS 310 (Device 0)
GeForce GTX 660 Ti (Device 1)

Hardware: Dell Precision T5810.

Thanks.

HT

Which version of CUDA do you have installed?

CUDA 8.0.

Here is what I see:

rspace@mybox:$ nvidia-smi -L
GPU 0: NVS 310 (UUID: GPU-4d878303-a8b3-17ae-0b7d-bf3e6559662c)
GPU 1: GeForce GTX 660 Ti (UUID: GPU-cf228eb7-8e58-72b1-c7f8-b8403f656998)

rspace@mybox:$ ./deviceQuery

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: “NVS 310”
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 962 MBytes (1008271360 bytes)
( 1) Multiprocessors, ( 48) CUDA Cores/MP: 48 CUDA Cores
GPU Max Clock rate: 1046 MHz (1.05 GHz)
Memory Clock rate: 875 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 65536 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = NVS 310
Result = PASS

What driver version?

What is the result of:

echo $CUDA_VISIBLE_DEVICES

(should be blank)

What happens if you reboot?

What happens if you stop the X server?
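
If you want a count check that is independent of the deviceQuery sample, a minimal sketch along these lines works (the file name check_devices.cu is just a placeholder; build with nvcc check_devices.cu -o check_devices). It prints the count and names the runtime actually sees, and reports the error string if enumeration fails outright, which usually points at a driver/runtime problem rather than a masked device:

// check_devices.cu -- minimal runtime-API enumeration check (sketch)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // An outright failure here suggests a driver or runtime problem,
        // not an environment-variable mask.
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Runtime sees %d device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess)
            printf("  Device %d: %s\n", i, prop.name);
    }
    return 0;
}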

Driver: 375.66

I actually set CUDA_VISIBLE_DEVICES to 1 in .bashrc. This was my attempt to use the GTX 660, but it failed.

I’ve rebooted it a few times. Same deal.

I’m not sure how to stop the X server (Ctrl-Alt-F1, sudo service lightdm stop, is that how you do it??)

That may be the problem. Don’t do that. Remove any instance of setting CUDA_VISIBLE_DEVICES in your system.

I’m sure you can find plenty of descriptions of how to stop the X server on Ubuntu if you do a bit of searching.
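
Some background on why a stray setting can hide a card: the CUDA runtime only enumerates the devices listed in CUDA_VISIBLE_DEVICES, renumbered from 0, and its enumeration order need not match nvidia-smi’s. A minimal sketch that shows whether a process inherited a mask from somewhere like .bashrc:

// mask_check.cu -- report the inherited device mask and resulting count (sketch)
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    // Non-NULL here means the process inherited a mask from the environment;
    // cudaGetDeviceCount() will then only count the devices named in it.
    const char* mask = getenv("CUDA_VISIBLE_DEVICES");
    printf("CUDA_VISIBLE_DEVICES = %s\n", mask ? mask : "(unset)");

    int count = 0;
    if (cudaGetDeviceCount(&count) == cudaSuccess)
        printf("Runtime enumerates %d device(s)\n", count);
    return 0;
}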

OK, I will get rid of that in .bashrc. With the X server stopped, what do I expect to find out? Should I reboot and see what happens?

Hey hey, guess what, it’s now working… All I did was reboot. Now deviceQuery finds both GPUs. And what’s even better is, the GTX 660 is now Device 0!!! I just tested a TensorFlow script, calling it with the environment variable in front:

CUDA_VISIBLE_DEVICES=0 python myscript.py 

While it was running, I ran nvidia-smi and saw memory usage shoot up to ~2.7 GB on the GTX 660, proof that it’s being used.
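
Side note: for native CUDA code, the same per-process selection can also be done programmatically with cudaSetDevice instead of the environment-variable prefix. A minimal sketch, where the empty kernel is just a stand-in to force context creation:

// pick_device.cu -- select a device in code instead of via the environment (sketch)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void noop() {}

int main() {
    // With CUDA_VISIBLE_DEVICES unset, device 0 is whichever GPU the
    // runtime enumerates first (the GTX 660 here, after the reboot).
    cudaError_t err = cudaSetDevice(0);
    if (err != cudaSuccess) {
        printf("cudaSetDevice failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    noop<<<1, 1>>>();              // touching the device creates its context
    err = cudaDeviceSynchronize();
    printf("Device 0 is %s\n", err == cudaSuccess ? "usable" : cudaGetErrorString(err));
    return 0;
}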

Thank you SO MUCH. (You’re God…)

Hi,

I am having the same issue.

echo $CUDA_VISIBLE_DEVICES prints 0, and stopping the X server didn’t help.

The two GPUs are installed and detected by nvidia-smi as:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98                 Driver Version: 384.98                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:17:00.0 Off |                  N/A |
|  0%   43C    P0    63W / 250W |      0MiB / 11172MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:65:00.0 Off |                  N/A |
|  0%   44C    P0    63W / 250W |      0MiB / 11169MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

But deviceQuery detects only the first GPU.

Detected 1 CUDA Capable device(s)

Device 0: “GeForce GTX 1080 Ti”
CUDA Driver Version / Runtime Version 9.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11172 MBytes (11715084288 bytes)
(28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1607 MHz (1.61 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 23 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
Result = PASS

That is the problem.

You’ll need to find out where in your system scripts (.bashrc perhaps?) it is being set to zero.

You want it to be unset for normal operations (i.e. it should print nothing).

Try

export CUDA_VISIBLE_DEVICES="0,1"

then rerun your tests.


Thank you,

You are right, I found CUDA_VISIBLE_DEVICES being set to 0 somewhere in .bashrc. I set it to "0,1" as you advised and it worked.

Peer access from GeForce GTX 1080 Ti (GPU0) -> GeForce GTX 1080 Ti (GPU1) : Yes
Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1080 Ti (GPU0) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 2, Device0 = GeForce GTX 1080 Ti, Device1 = GeForce GTX 1080 Ti
Result = PASS
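
For reference, the peer-access lines above can be reproduced outside the sample with cudaDeviceCanAccessPeer; a minimal sketch:

// peer_check.cu -- query peer access in both directions (sketch)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Same query deviceQuery prints: can each GPU map the other's memory?
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    printf("GPU0 -> GPU1 : %s\n", can01 ? "Yes" : "No");
    printf("GPU1 -> GPU0 : %s\n", can10 ? "Yes" : "No");
    return 0;
}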