OptiX Error: "_rtBufferCreateFromGLBO" caught exception ... error: cuGLGetDevice()

Dear all!
Not all of the SDK's precompiled sample executables (EXEs) run on certain notebooks and/or particular GeForce GPUs. Moreover, even after successfully building the executables from the SDK sources (with either VS2010 or VS2013), the same EXEs do not work. Examples:
Working: Ambocc.exe
Not working: cook.exe, whitted.exe
They fail with the following message:
OptiX Error: Unknown error (Details: Function "_rtBufferCreateFromGLBO" caught exception: Encountered a CUDA error: cuGLGetDevice() returned (304): Unknown, [3801489])

If any of you have met the same situation, please share your solution.
My platforms:
Desktop: Intel® Core 2 Quad @ 2.50 GHz, Win 8.1, x64, VS2010, CUDA 6.5, OptiX 3.7.0 (beta 2 & 3). GPU1: GeForce GTX 560 Ti, 448 CUDA cores.
All EXEs work properly.

PN1: Notebook Samsung NP550P5C-S03 15.6", Intel® Core i7-3630QM @ 2.40 GHz, Win 8.1, x64, VS2010, CUDA 6.5, OptiX 3.7.0 (beta 2). GPU2: GeForce GT 650M, 384 CUDA cores, 2 GB.
Working: Ambocc.exe. Not working: cook.exe, whitted.exe

As this notebook was my main platform, I supposed there might be errors from third-party software on the computer. My colleague recently decided to reinstall all of the software on his notebook from scratch. After installing Win 8.1, VS2013, and Office, I installed CUDA 6.5 + OptiX 3.7.0 (beta 3) and obtained the platform

PN2: Notebook ASUS N46VZ, Intel Core i7-3610QM @ 2.3 GHz, Win 8.1, x64, VS2013, CUDA 6.5, OptiX 3.7.0 (beta 3). GPU2: GeForce GT 650M, 384 CUDA cores, 2 GB. Driver: 344.75
Working: Ambocc.exe. Not working: cook.exe, whitted.exe

For the past 1.5 years I developed the computing part of my project; I removed all of the OpenGL code and worked successfully. Now I need graphics, so please help me.
Maybe some of you have an analogous notebook?

Make sure that the application runs on the discrete GPU.
If the OpenGL device is not found on a laptop, that could be because the wrong OpenGL implementation was picked. There are three available on an Optimus-capable laptop: NVIDIA (discrete GPU), Intel (integrated graphics), and Microsoft (software).
You must run the OptiX application in a mode where it picks the NVIDIA OpenGL implementation to be able to use OpenGL interoperability functions like rtBufferCreateFromGLBO.
Try forcing the NVIDIA GPU in the NVIDIA Control Panel, or right-click the application icon and select the option to run it on the discrete GPU if that context-menu entry is enabled. (Read the Control Panel guide on the driver download page for your installed driver version.)

THANK YOU very, very much!
Such a simple fix. Both notebooks now work properly.

Hi, I encounter the same error when running the optix_advanced_samples, but I think my case is different.

I am using a remote server and log in with "ssh -X".
I installed OptiX 5.0.0, then downloaded and compiled the optix_advanced_samples.
Link to optix_advanced_samples: https://github.com/nvpro-samples/optix_advanced_samples

I can run optixProgressivePhotonMap with the -n option and see the scene locally.
But when running without the -n option, it reports the following error:
OptiX Error: 'Unknown error (Details: Function "RTresult _rtBufferCreateFromGLBO(RTcontext, unsigned int, unsigned int, RTbuffer_api**)" caught exception: Encountered a CUDA error: cuGLGetDevices returned (999): Unknown)'

I checked the GPU status: it did use the discrete GPU (TITAN Xp) when running with the -n option.
I have no idea why OpenGL cannot find a device.

If it works with the -n or --nopbo option that clearly points to the OpenGL interop as root cause for the failure.
Maybe add code to the OptiX example to print all the information about the OpenGL implementation that you can query with glGetString:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGetString.xhtml
If that is not showing an NVIDIA OpenGL implementation, then that would be one problem.

Other than that, I do not know how remote rendering works exactly under Linux and what is sent to the client. If the client is getting the OpenGL calls for indirect rendering, the OptiX server side (CUDA) would also not be able to interop with that.
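The interop capability can also be probed directly with the CUDA driver API's cuGLGetDevices, the very call named in the error. A diagnostic sketch only (the helper name is mine; it assumes cuInit has already been called and an OpenGL context is current on this thread):

```c
#include <stdio.h>
#include <cuda.h>
#include <cudaGL.h>  /* driver-API OpenGL interop: cuGLGetDevices */

/* Ask the CUDA driver which CUDA devices can interoperate with the
 * current OpenGL context. On an indirect or Mesa/llvmpipe context this
 * is expected to fail or to report zero devices, matching the
 * _rtBufferCreateFromGLBO failure above. */
static void check_gl_interop_devices(void)
{
    CUdevice devices[8];
    unsigned int count = 0;
    CUresult res = cuGLGetDevices(&count, devices, 8, CU_GL_DEVICE_LIST_ALL);
    if (res != CUDA_SUCCESS)
        printf("cuGLGetDevices failed with CUresult %d\n", (int)res);
    else
        printf("%u CUDA device(s) usable for OpenGL interop\n", count);
}
```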

Not using OpenGL interop is probably the only working solution for remote rendering then.
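Skipping interop means reading the output buffer back to the host and pushing it through plain OpenGL each frame. A minimal sketch (the helper name is mine; it assumes an OptiX output buffer of format RT_FORMAT_UNSIGNED_BYTE4 and a current OpenGL context, and omits error reporting):

```c
#include <optix.h>
#include <GL/gl.h>

/* Display an OptiX output buffer without CUDA/OpenGL interop:
 * rtBufferMap exposes the buffer contents through a host pointer,
 * and glDrawPixels uploads them through ordinary OpenGL. */
static void display_without_interop(RTbuffer out_buffer, int width, int height)
{
    void* pixels = 0;
    if (rtBufferMap(out_buffer, &pixels) == RT_SUCCESS)
    {
        glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        rtBufferUnmap(out_buffer);
    }
}
```

This is slower than the PBO path because the image crosses the device-host boundary every frame, but it does not depend on cuGLGetDevices matching the GL context to a CUDA device.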

I ran the following code:

#include <optix.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sutil.h>

#ifdef __APPLE__
#  include <GLUT/glut.h>
#else
#  include <GL/glew.h>
#  if defined( _WIN32 )
#    include <GL/wglew.h>
#    include <GL/freeglut.h>
#  else
#    include <GL/glut.h>
#  endif
#endif

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("GLUT");

    glewInit();

    printf("%s\n", glGetString(GL_VENDOR));
    printf("%s\n", glGetString(GL_RENDERER));
    printf("%s\n", glGetString(GL_VERSION));
    printf("%s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
    return 0;
}

And it prints:
VMware, Inc.
Gallium 0.4 on llvmpipe (LLVM 3.4, 256 bits)
3.0 Mesa 17.0.0
1.30

So it is not the NVIDIA OpenGL implementation. Maybe this is the problem. I will try the NVIDIA implementation later.

Thank you all.
For some reason, I cannot install the NVIDIA implementation.
I am sorry, but whether the Mesa implementation was the problem remains a mystery.