Here’s the code that crashes my program when calling createTextureSamplerFromGLImage:
// Create a 256x256 RGBA32F checkerboard texture in OpenGL.
GLuint texId = 0;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);

std::vector<GLfloat> img_data(256 * 256 * 4);
GLfloat* img = &img_data[0];
for (int j = 0; j < 256; j++) {
    for (int i = 0; i < 256; i++) {
        // Alternate between white and red in 8x8 pixel blocks.
        GLfloat c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) * 1.0f;
        img[0] = 1.0f;  // R
        img[1] = c;     // G
        img[2] = c;     // B
        img[3] = 1.0f;  // A
        img += 4;
    }
}

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 256, 256, 0, GL_RGBA, GL_FLOAT, &img_data[0]);
glBindTexture(GL_TEXTURE_2D, 0);

GLenum err = glGetError();  // returns GL_NO_ERROR (0) here

// The crash happens inside this call:
TextureSampler sampler = m_Context->createTextureSamplerFromGLImage(texId, RT_TARGET_GL_TEXTURE_2D);
When createTextureSamplerFromGLImage executes, the program simply crashes without any exception being thrown. I’m sure the OpenGL context is current, glGetError returns 0, and texId holds a valid texture handle.
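For what it’s worth, my understanding is that the OptiXpp wrapper reports failures by throwing optix::Exception, so a guard like the sketch below should at least print an error string if the call failed cleanly. Here the process dies inside the call before anything can be caught:

// Sketch: what I would expect a clean failure to look like with the
// OptiXpp wrapper, which throws optix::Exception on API errors.
// (Requires <optixu/optixpp_namespace.h> and <iostream>.)
try {
    TextureSampler sampler =
        m_Context->createTextureSamplerFromGLImage(texId, RT_TARGET_GL_TEXTURE_2D);
}
catch (const optix::Exception& e) {
    // Never reached in my case: the crash happens before any throw.
    std::cerr << "OptiX error: " << e.getErrorString() << std::endl;
}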
The code above is copied from the “simpleGLTexInterop” project in the OptiX SDK samples, and that sample runs correctly inside the SDK solution, so the problem shouldn’t be with my operating system or GPU hardware.
Something must be wrong with my setup then: the OpenGL setup, the GLEW initialization, or the OptiX context setup. I have no idea why, so does anyone know which setting could cause createTextureSamplerFromGLImage to fail? Any idea is appreciated.
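For reference, the initialization order I believe is required (and that the SDK samples follow) is: create the GL context and make it current, initialize GLEW, and only then create the OptiX context and the interop sampler. A minimal sketch of that order, using GLUT purely for illustration:

#include <GL/glew.h>
#include <GL/glut.h>
#include <optixu/optixpp_namespace.h>

int main(int argc, char** argv)
{
    // 1. Create the GL context first and make it current.
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("GL/OptiX interop");

    // 2. Initialize GLEW only after a GL context exists.
    if (glewInit() != GLEW_OK)
        return 1;

    // 3. Only then create the OptiX context. The GL context must be
    //    current on this thread when the interop calls are made.
    optix::Context context = optix::Context::create();

    // ... create the GL texture, then:
    // context->createTextureSamplerFromGLImage(texId, RT_TARGET_GL_TEXTURE_2D);

    return 0;
}

If anything in my application deviates from this order (e.g., the OptiX context is created before the GL context is current), could that alone explain a hard crash instead of an exception?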
System Information:
Windows 7 Ultimate 64-bit SP1, NVIDIA Titan X, driver version 358.91, OptiX 3.8.0, CUDA 7.0.27. The application is built as 32-bit.