OpenCL <-> DirectX11 Interop Issue - Please help!

Hi All,

I'm trying to migrate an OpenCL implementation (which worked on Intel HD Graphics) to an NVIDIA-based GPU (GTX 1070).

I’ve encountered a very basic problem which I’m hoping you can assist with.
When I try to map an ID3D11Texture2D texture to an OpenCL image, I always get a strange error, -1003 (CL_INVALID_D3D10_RESOURCE_KHR), which according to the documentation corresponds to DirectX 10.
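
For reference, these are the interop error codes as they appear in the Khronos cl_d3d10.h / cl_d3d11.h headers, wrapped in a small decode helper I use while debugging (the helper itself is mine, not part of the sample; the NV extension header may define equivalent *_NV names, so treat the constants as an assumption if you only include cl_d3d11_ext.h):

#include <CL/cl.h>

// Map the D3D10/D3D11 interop error codes to readable names.
// Values taken from the Khronos cl_d3d10.h / cl_d3d11.h headers.
const char* interopStatusToString(cl_int status) {
	switch (status) {
	case -1002: return "CL_INVALID_D3D10_DEVICE_KHR";
	case -1003: return "CL_INVALID_D3D10_RESOURCE_KHR";
	case -1004: return "CL_D3D10_RESOURCE_ALREADY_ACQUIRED_KHR";
	case -1005: return "CL_D3D10_RESOURCE_NOT_ACQUIRED_KHR";
	case -1006: return "CL_INVALID_D3D11_DEVICE_KHR";
	case -1007: return "CL_INVALID_D3D11_RESOURCE_KHR";
	case -1008: return "CL_D3D11_RESOURCE_ALREADY_ACQUIRED_KHR";
	case -1009: return "CL_D3D11_RESOURCE_NOT_ACQUIRED_KHR";
	default:    return "other CL error";
	}
}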

I’m probably doing something fundamentally wrong, and was hoping you can shed some light on the matter.

The simple code we are trying to run, where pclCreateFromD3D11Texture2DNV returns status -1003, is listed below.

The include and library paths:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64 (OpenCL.lib)
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include

Thanks,
Itzik.

#include <d3d11.h>
#include <d3d11_1.h>
#include <windows.h>
#include <string>
#include <vector>
#include <CL/cl.hpp>
#include <CL/cl_d3d11_ext.h>
int main() {
	HRESULT hr;

	// Create a D3D11 device on the default hardware adapter.
	D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_1 };
	UINT flags = 0;
	ID3D11Device * device;
	if (FAILED(hr = D3D11CreateDevice(
		NULL,
		D3D_DRIVER_TYPE_HARDWARE,
		NULL,
		flags,
		featureLevels,
		ARRAYSIZE(featureLevels),
		D3D11_SDK_VERSION,
		&device,
		NULL,
		NULL
	)))
	{
		return 0;
	}


	// Describe a 3840x2160 BGRA render target created as a shared resource.
	D3D11_TEXTURE2D_DESC textureDesc = { 0 };
	textureDesc.Width = 3840;
	textureDesc.Height = 2160;
	textureDesc.MipLevels = 1;
	textureDesc.ArraySize = 1;
	textureDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
	textureDesc.SampleDesc.Count = 1;
	textureDesc.Usage = D3D11_USAGE_DEFAULT;
	textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET;
	textureDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
	textureDesc.CPUAccessFlags = 0;

	ID3D11Texture2D * pTexture;
	if (FAILED(hr = device->CreateTexture2D(
		&textureDesc,
		NULL,
		&pTexture)))
	{
		return 0;
	}

	// Pick the first OpenCL platform; we verify below that it exposes cl_nv_d3d11_sharing.
	std::vector<cl::Platform> platforms;
	cl::Platform::get(&platforms);

	const cl::Platform& p = platforms[0];

	// Build an OpenCL context tied to the D3D11 device via the NV sharing extension.
	cl_context_properties props[] = {
		CL_CONTEXT_D3D11_DEVICE_NV, reinterpret_cast<cl_context_properties>(device),
		CL_CONTEXT_PLATFORM, reinterpret_cast<cl_context_properties>(p()),
		0
	};

	cl::Context mContext = cl::Context(CL_DEVICE_TYPE_GPU, props);
	std::vector<cl::Device> devs = mContext.getInfo<CL_CONTEXT_DEVICES>();

	const cl::Device& d = devs[0];

	std::string dext = d.getInfo<CL_DEVICE_EXTENSIONS>();
	std::string pext = p.getInfo<CL_PLATFORM_EXTENSIONS>();
	
	if (dext.find("cl_nv_d3d11_sharing") == std::string::npos) {
		return 0;
	}

	if (pext.find("cl_nv_d3d11_sharing") == std::string::npos) {
		return 0;
	}

	// Resolve the NV extension entry point; bail out if the driver does not expose it.
	clCreateFromD3D11Texture2DNV_fn pclCreateFromD3D11Texture2DNV =
		(clCreateFromD3D11Texture2DNV_fn)clGetExtensionFunctionAddressForPlatform(p(), "clCreateFromD3D11Texture2DNV");
	if (pclCreateFromD3D11Texture2DNV == NULL) {
		return 0;
	}

	cl_int status = 0;

	// This is the call where status comes back as -1003 (CL_INVALID_D3D10_RESOURCE_KHR).
	cl::Memory clMem = cl::Memory(pclCreateFromD3D11Texture2DNV(mContext(), CL_MEM_READ_ONLY, pTexture, 0, &status));
	return 1;
}
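
For completeness, once the creation call succeeds, the usual interop pattern is to acquire the shared image before enqueueing kernels that use it and release it afterwards. This is only a sketch building on the sample above, and it assumes cl_d3d11_ext.h exposes clEnqueueAcquireD3D11ObjectsNV / clEnqueueReleaseD3D11ObjectsNV with matching *_fn typedefs, in the same way it exposes the create function:

	// Sketch: resolve the acquire/release entry points of cl_nv_d3d11_sharing.
	clEnqueueAcquireD3D11ObjectsNV_fn pclAcquire =
		(clEnqueueAcquireD3D11ObjectsNV_fn)clGetExtensionFunctionAddressForPlatform(p(), "clEnqueueAcquireD3D11ObjectsNV");
	clEnqueueReleaseD3D11ObjectsNV_fn pclRelease =
		(clEnqueueReleaseD3D11ObjectsNV_fn)clGetExtensionFunctionAddressForPlatform(p(), "clEnqueueReleaseD3D11ObjectsNV");

	cl::CommandQueue queue(mContext, d);
	cl_mem mem = clMem();

	// Hand the texture to OpenCL, run the kernels, then give it back to D3D11.
	pclAcquire(queue(), 1, &mem, 0, NULL, NULL);
	// ... enqueue kernels that read the image here ...
	pclRelease(queue(), 1, &mem, 0, NULL, NULL);
	queue.finish();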

A short update:

I did manage to get the sample to work when I changed the texture format to DXGI_FORMAT_R8G8B8A8_UNORM from its initial value of DXGI_FORMAT_B8G8R8A8_UNORM.

The problem is that I don't have control over the texture's format, and I would rather not perform a texture copy prior to my processing.

Do you have any idea why mapping a DXGI_FORMAT_B8G8R8A8_UNORM texture fails with this error?
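
For what it's worth, one check I can run on my side is whether the context reports BGRA as a supported 2D image format at all. This is a sketch building on the variables from the sample above and uses the standard clGetSupportedImageFormats call; I realise it only tells me what the context supports for ordinary images, and the D3D11 sharing path may have its own restrictions on top of that:

	// Sketch: list the 2D image formats the context supports for read-only images
	// and look for CL_BGRA / CL_UNORM_INT8.
	cl_uint numFormats = 0;
	clGetSupportedImageFormats(mContext(), CL_MEM_READ_ONLY, CL_MEM_OBJECT_IMAGE2D,
		0, NULL, &numFormats);

	std::vector<cl_image_format> formats(numFormats);
	if (numFormats > 0) {
		clGetSupportedImageFormats(mContext(), CL_MEM_READ_ONLY, CL_MEM_OBJECT_IMAGE2D,
			numFormats, &formats[0], NULL);
	}

	bool bgraSupported = false;
	for (size_t i = 0; i < formats.size(); ++i) {
		if (formats[i].image_channel_order == CL_BGRA &&
			formats[i].image_channel_data_type == CL_UNORM_INT8) {
			bgraSupported = true;
		}
	}
	// bgraSupported now indicates whether CL_BGRA/CL_UNORM_INT8 shows up at all.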

Thanks,
Itzik.

Bump.