cudaGetDeviceCount() returns 1 for M60 GPU? Shouldn't it be 2?

Hi,

I ran the deviceQuery example with an M60 GPU, and it only shows device properties for one GPU, because cudaGetDeviceCount() returns 1.

Operating System: Windows 10
CUDA version: tested with 7.5 and 9.0
GPU: M60 (graphics mode)

I want to use both GPUs on the M60, but I can't figure out why only one of the two is detected; any help is appreciated. Device Manager shows the card as two GPUs.
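
For reference, the enumeration boils down to roughly what deviceQuery does; here is a minimal sketch (not the SDK sample verbatim):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA device count: %d\n", count);  // prints 1 here, expected 2

    // List name and total memory for each device the runtime can see.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %.0f MiB\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}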

Thanks and Regards,
Gaurav.

What does nvidia-smi show?

Are you in a virtualized environment, possibly an Amazon EC2 instance?

I think cudaGetDeviceCount() returns only the number of occupied PCIe slots, and that you can drive the whole Tesla board through one interface.

Thank you for responses.

@cbuchner1 - No, it is not in any virtualized environment.

nvidia-smi shows two GPUs:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 385.54                 Driver Version: 385.54                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           TCC  | 00000000:03:00.0 Off |                  Off |
| 32%   35C    P8    14W / 120W |     73MiB /  8124MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M60           TCC  | 00000000:04:00.0 Off |                  Off |
| 32%   31C    P8    15W / 120W |      1MiB /  8124MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

nvidia-smi --query-gpu=count --format=csv
count
2
2

Do you by chance have the CUDA_VISIBLE_DEVICES environment variable set, so that CUDA processes can only see one of the two GPUs?
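
A quick way to check from inside the app (a sketch; std::getenv just reads the same CUDA_VISIBLE_DEVICES variable the runtime honors):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    // If CUDA_VISIBLE_DEVICES is set (e.g. to "0"), the CUDA runtime only
    // enumerates the listed GPUs, even though nvidia-smi still shows both.
    const char* visible = std::getenv("CUDA_VISIBLE_DEVICES");
    std::printf("CUDA_VISIBLE_DEVICES = %s\n", visible ? visible : "(not set)");

    int count = 0;
    cudaGetDeviceCount(&count);
    std::printf("CUDA runtime sees %d device(s)\n", count);
    return 0;
}

If it turns out to be set, clearing it for the session (set CUDA_VISIBLE_DEVICES= in the same cmd window) or removing it from the system environment variables should make both GPUs visible again.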

@cbuchner1: Thanks mate. You pointed to the exact problem. CUDA_VISIBLE_DEVICES was set to 0, and that was causing the issue.