CUDA, OpenGL and .NET interop
The following C++ code executes fine when invoked from a native C++ app:
extern "C" __declspec(dllexport) void Init()
{
CHECK(cuInit(0));

int argcc = 0;
glutInit(&argcc, NULL);
glutInitErrorFunc(foo);

glutCreateWindow("video");

glewInit();

CUdevice device = 0;
CHECK(cuDeviceGet(&device, 0));

CUcontext context = 0;
CHECK(cuCtxCreate(&context, CU_CTX_BLOCKING_SYNC, device));

GLuint gl_pbo[1];
glGenBuffersARB(1, gl_pbo);

glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, gl_pbo[0]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, 400, 0, GL_STREAM_DRAW_ARB);
cuGLRegisterBufferObject(gl_pbo[0]);

size_t texturePitch = 0;
CUdeviceptr pInteropFrame = 0;
CHECK(cuGLMapBufferObject(&pInteropFrame, &texturePitch, gl_pbo[0]));
}


However, when I put that C++ code in a DLL and invoke it from a C# application via interop, it works fine up to the cuGLMapBufferObject call (line 28 in my original source), which fails with CUresult = CUDA_ERROR_INVALID_VALUE.
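
To make the failure point visible on the C# side, I could rework the export to return the first failing CUresult instead of void. A minimal sketch (InitReporting is a hypothetical name; it runs the same sequence as Init() above, and additionally checks cuGLRegisterBufferObject, which Init() does not):

#include <GL/glew.h>
#include <GL/freeglut.h>
#include <cuda.h>
#include <cudaGL.h>

// Hypothetical variant of Init(): returns the first failing CUresult
// (0 == CUDA_SUCCESS) so the C# caller can log exactly which step broke.
extern "C" __declspec(dllexport) int InitReporting()
{
    CUresult r;
    if ((r = cuInit(0)) != CUDA_SUCCESS) return (int)r;

    int argcc = 0;
    glutInit(&argcc, NULL);
    glutCreateWindow("video");
    glewInit();

    CUdevice dev = 0;
    if ((r = cuDeviceGet(&dev, 0)) != CUDA_SUCCESS) return (int)r;
    CUcontext ctx = 0;
    if ((r = cuCtxCreate(&ctx, CU_CTX_BLOCKING_SYNC, dev)) != CUDA_SUCCESS) return (int)r;

    GLuint pbo = 0;
    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, 400, 0, GL_STREAM_DRAW_ARB);
    if ((r = cuGLRegisterBufferObject(pbo)) != CUDA_SUCCESS) return (int)r;

    size_t pitch = 0;
    CUdeviceptr ptr = 0;
    return (int)cuGLMapBufferObject(&ptr, &pitch, pbo);
}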

In both cases, gl_pbo[0] = 1 immediately before that call.

Any suggestions as to what might be wrong or how I might debug/diagnose the problem?
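
For context, this is the kind of state dump I can add on the same thread, immediately before the failing call (a sketch; wglGetCurrentContext comes from the WGL API on Windows, and the helper name is mine):

#include <cstdio>
#include <windows.h>   // wglGetCurrentContext
#include <cuda.h>
#include <GL/glew.h>

// Sketch: sanity checks to run just before cuGLMapBufferObject. Verifies that
// a GL context and a CUDA context are current on this thread and the PBO is valid.
static void DumpInteropState(GLuint pbo)
{
    printf("GL context current:   %p\n", (void*)wglGetCurrentContext()); // NULL => no GL context on this thread
    CUcontext cur = 0;
    cuCtxGetCurrent(&cur);
    printf("CUDA context current: %p\n", (void*)cur);                    // NULL => no CUDA context on this thread
    printf("glIsBuffer(%u):       %d\n", pbo, (int)glIsBuffer(pbo));     // 0 => buffer name invalid in this context
    printf("glGetError:           0x%x\n", glGetError());                // non-zero => an earlier GL call failed
}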

Note: this is an abstracted version of my original application, simplified to illustrate the core problem (most of it comes from the NVIDIA Video Decode OpenGL sample).

Using:
  • Windows 10, 64bit
  • Visual Studio 2017, generating 64bit code
  • CUDA 9.0
  • OpenGL 4.3
  • NVIDIA Quadro K1100M


Cheers, Wayne.

#1
Posted 12/19/2017 12:56 PM   