vkEnumeratePhysicalDevices returns 1 VkPhysicalDevice with 3 GTX 1080s installed

Hello,

I’m having trouble using Vulkan with 2 of my 3 GPUs. I’ve got three 1080 cards installed:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 381.22                 Driver Version: 381.22                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:03:00.0     Off |                  N/A |
| 26%   37C    P0    42W / 180W |      0MiB /  8114MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:04:00.0     Off |                  N/A |
| 27%   43C    P0    42W / 180W |      0MiB /  8114MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 1080    Off  | 0000:81:00.0     Off |                  N/A |
|  0%   40C    P0    39W / 180W |      0MiB /  8114MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+

However, I receive only 1 physical device from the query (pardon my Rust code):
let mut len = 0;
// First call: ask the loader how many physical devices there are.
assert_eq!(vk::SUCCESS, self.vk.EnumeratePhysicalDevices(self.id, &mut len, ptr::null_mut()));
let mut result = Vec::with_capacity(len as usize);
result.set_len(len as usize);
// Second call: fill the vector with the VkPhysicalDevice handles.
assert_eq!(vk::SUCCESS, self.vk.EnumeratePhysicalDevices(self.id, &mut len, result.as_mut_ptr()));
println!("{} physical GPU device(s) installed", len);

This always reports 1 device installed. I'm running Ubuntu 16.04 server (without X) on Vulkan 1.0.46. I'm aware that I won't be able to use device sharing unless I explicitly ask for the relevant extensions to be loaded; I just want to target a particular GPU and run code on it without sharing any resources. I am not loading any extensions.
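For reference, here is a minimal standalone sketch of the same enumeration written against the ash crate rather than the bindings used above (so the names differ from my code); it creates a bare instance with no layers or extensions, lists every VkPhysicalDevice the loader reports, and prints each device's name so a specific GPU could be picked by index:

use std::ffi::CStr;

use ash::vk;

fn main() {
    // Load libvulkan.so at runtime and create a bare instance (no extensions, no layers).
    let entry = unsafe { ash::Entry::load().expect("failed to load the Vulkan loader") };
    let create_info = vk::InstanceCreateInfo::default();
    let instance = unsafe {
        entry
            .create_instance(&create_info, None)
            .expect("vkCreateInstance failed")
    };

    // ash wraps the two-call vkEnumeratePhysicalDevices pattern in a single call.
    let devices = unsafe {
        instance
            .enumerate_physical_devices()
            .expect("vkEnumeratePhysicalDevices failed")
    };
    println!("{} physical GPU device(s) installed", devices.len());

    // Print each device's name so one can be selected by index.
    for (i, &pdev) in devices.iter().enumerate() {
        let props = unsafe { instance.get_physical_device_properties(pdev) };
        let name = unsafe { CStr::from_ptr(props.device_name.as_ptr()) };
        println!("  [{}] {}", i, name.to_string_lossy());
    }

    unsafe { instance.destroy_instance(None) };
}

With three cards installed and a working ICD this should list all three 1080s, but on my setup it prints only one.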

FWIW, here is my entire install procedure (minus my app) after a brand-new Ubuntu install:
sudo -i
echo blacklist nouveau | tee -a /etc/modprobe.d/blacklist-nouveau.conf
echo blacklist lbm-nouveau | tee -a /etc/modprobe.d/blacklist-nouveau.conf
echo options nouveau modeset=0 | tee -a /etc/modprobe.d/blacklist-nouveau.conf
echo alias nouveau off | tee -a /etc/modprobe.d/blacklist-nouveau.conf
echo alias lbm-nouveau off | tee -a /etc/modprobe.d/blacklist-nouveau.conf
echo options nouveau modeset=0 | tee -a /etc/modprobe.d/nouveau-kms.conf
update-initramfs -u
apt install -y build-essential linux-image-extra-virtual module-init-tools
curl -L -o /tmp/NVIDIA.run http://us.download.nvidia.com/XFree86/Linux-x86_64/381.22/NVIDIA-Linux-x86_64-381.22.run
sh /tmp/NVIDIA.run --accept-license --no-questions --ui=none
cd /tmp
curl -L -o /tmp/Vulkan.run https://vulkan.lunarg.com/sdk/download/1.0.46.0/linux/vulkansdk-linux-x86_64-1.0.46.0.run
sh ./Vulkan.run
cd /tmp/VulkanSDK/1.0.46.0/x86_64/lib
(tar cf - . | (cd /usr/lib; tar xf -))
rm /usr/lib/libVkLayer*

Also FWIW, here is the output of vulkaninfo:

VULKAN INFO

Vulkan API Version: 1.0.46

INFO: [loader] Code 0 : Found ICD manifest file /etc/vulkan/icd.d/nvidia_icd.json, version "1.0.0"

Instance Extensions:

Instance Extensions count = 12
VK_KHR_surface : extension revision 25
VK_KHR_xcb_surface : extension revision 6
VK_KHR_xlib_surface : extension revision 6
VK_KHR_display : extension revision 21
VK_EXT_debug_report : extension revision 5
VK_KHR_get_physical_device_properties2: extension revision 1
VK_KHX_device_group_creation : extension revision 1
VK_KHX_external_memory_capabilities : extension revision 1
VK_KHX_external_semaphore_capabilities: extension revision 1
VK_EXT_display_surface_counter : extension revision 1
VK_EXT_direct_mode_display : extension revision 1
VK_EXT_acquire_xlib_display : extension revision 1

Layers: count = 0

Presentable Surfaces:

'DISPLAY' environment variable not set... Exiting!

John, thanks for reporting this issue. I have raised it with our engineering team.

Wen Su,

Thank you. I should note that I still see this issue with the 384.47 beta driver.

We have fixed this issue in the upcoming r384_00 driver.

Hi Sandip,

I can confirm this is fixed for me in the 384.59 driver. Thanks!