OptiX error: a buffer with a size of 0 cannot be mapped

I was trying to play with the SDK samples. When running either the OptiX 3.7.0 final version or the OptiX 3.8.0 beta on Linux64, none of the samples except 1-8 and ambocc work.

Here’s the error message I got when running glass:

OptiX Error: Invalid value (Details: Function "RTresult _rtContextLaunch2D(RTcontext, unsigned int, RTsize, RTsize)" caught exception: A buffer with a size of 0 cannot be mapped (optix, opengl) = (786432, 0), [8978516])

Does anyone know what might cause this problem?

That looks like a problem with CUDA-OpenGL interop.
The OptiX example framework uses GLUT and renders into an OpenGL pixel buffer object (PBO), which is then uploaded to an OpenGL texture for display; that avoids a copy from the GPU to the host.
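
For reference, the interop path boils down to creating the OptiX output buffer from an OpenGL buffer object, roughly like the sketch below. This uses the optixu C++ wrapper; the helper name and the FLOAT4 format are assumptions for illustration, not the exact framework code:

#include <GL/glew.h>
#include <optixu/optixpp_namespace.h>

// Hypothetical helper illustrating the interop path. It assumes a valid
// OpenGL context is current and that 'context' is an existing optix::Context.
optix::Buffer createInteropOutputBuffer( optix::Context context,
                                         unsigned int   width,
                                         unsigned int   height )
{
  // Allocate the buffer object storage on the OpenGL side first.
  GLuint pbo = 0;
  glGenBuffers( 1, &pbo );
  glBindBuffer( GL_ARRAY_BUFFER, pbo );
  glBufferData( GL_ARRAY_BUFFER, width * height * 4 * sizeof(float), 0, GL_STREAM_DRAW );
  glBindBuffer( GL_ARRAY_BUFFER, 0 );

  // Register the buffer object with OptiX as the output buffer. If the
  // OpenGL implementation is not the NVIDIA one, this interop step can
  // fail and the launch later reports a buffer with a size of 0.
  optix::Buffer buffer = context->createBufferFromGLBO( RT_BUFFER_OUTPUT, pbo );
  buffer->setFormat( RT_FORMAT_FLOAT4 );
  buffer->setSize( width, height );
  return buffer;
}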

It could be that you do not have an NVIDIA OpenGL implementation running and the PBO allocations failed.

The OptiX SDK example framework offers a command line option that lets you switch off this OpenGL interop. I think it's -vbo or --vbo.
Please try the --help option to get all command line options listed.

If that works, you might want to check how to get a hardware-accelerated NVIDIA OpenGL driver running to get the full performance out of your GPU.

Hi, thanks for your reply!!

I can’t find any option related to switching off the OpenGL interop. Can you be a little bit more specific?

The OptiX example framework contains a member variable named m_use_vbo_buffer inside the SampleScene class which controls that.
The more advanced samples are derived from that class, and in each of their main() functions you'll find an initialization that looks something like the code below, taken here from the mantascene sample:

int main( int argc, char** argv )
{
  GLUTDisplay::init( argc, argv );

  bool use_vbo_buffer = true;
  for(int i = 1; i < argc; ++i) {
    std::string arg = argv[ i ];
    if(arg == "-P" || arg == "--pbo") {
      use_vbo_buffer = true;
    } else if( arg == "-n" || arg == "--nopbo" ) {
      use_vbo_buffer = false;
    } else if( arg == "-h" || arg == "--help" ) {
      printUsageAndExit( argv[0] );
    } else {
      std::cerr << "Unknown option '" << arg << "'\n";
      printUsageAndExit( argv[0] );
    }
  }

  if( !GLUTDisplay::isBenchmark() ) printUsageAndExit( argv[0], false );

  try {
    MantaScene scene;
    scene.setUseVBOBuffer( use_vbo_buffer );
    GLUTDisplay::run( "MantaScene", &scene );
  } catch( Exception& e ){
    sutilReportError( e.getErrorString().c_str() );
    exit(1);
  }
  
  return 0;
}

The call scene.setUseVBOBuffer( use_vbo_buffer ); selects whether OpenGL interop is used. It is enabled with the command line parameters -P or --pbo and disabled with -n or --nopbo.

It seems not all of the examples expose that option, though. The GlassScene class initializes the SampleScene base class with default parameters, which means OpenGL interop is enabled via m_use_vbo_buffer( true ).
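
If you want to force the non-interop path in one of those samples for a quick test, the try block in its main() could be changed along these lines. This is only a sketch; the GlassScene constructor arguments stand for whatever the sample's main() already passes:

try {
  GlassScene scene( /* same arguments as in the sample's main() */ );
  scene.setUseVBOBuffer( false );   // hard-disable OpenGL interop for this test
  GLUTDisplay::run( "GlassScene", &scene );
} catch( Exception& e ) {
  sutilReportError( e.getErrorString().c_str() );
  exit( 1 );
}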

Try running the mantascene example with --pbo and, if that fails, with --nopbo.
If the latter works, the selected OpenGL implementation on your system configuration is not suited for OpenGL interop.

You’d need to verify what NVIDIA display driver you have installed and check which OpenGL implementation is running when starting standard OpenGL applications.
If that is not an NVIDIA implementation, that would be the problem.

Please always list your exact OS name and version, installed GPU(s), and NVIDIA display driver version.

Thx!! It was indeed the interop not working.
Here’s my current driver version.

+------------------------------------------------------+
| NVIDIA-SMI 346.47     Driver Version: 346.47          |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K5000        Off  | 0000:02:00.0      On |                  Off |
| 30%   32C    P8    17W / 137W |    165MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20Xm         Off  | 0000:84:00.0     Off |                    0 |
| N/A   27C    P8    18W / 235W |     14MiB /  5759MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      9850     G   /usr/bin/X                                    153MiB |
+-----------------------------------------------------------------------------+

Ok, so a multi-GPU config with a Tesla board.

Ray tracing on the Tesla will not have OpenGL interop functionality because there shouldn't be an OpenGL implementation running on that board. That means when running on the Tesla, do not use OpenGL interop in your applications.

If you let the applications run on the Quadro K5000, OpenGL interop should work.
You can do that programmatically inside OptiX via the rtContextSetDevices() function.
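
In case it helps, here is a minimal sketch of that C API call. The device ordinal 1 for the Quadro is an assumption; verify it against the IDs sample3 reports:

#include <optix.h>

int main()
{
  RTcontext context = 0;
  rtContextCreate( &context );

  // Restrict the context to a single device (assumed ordinal 1 = Quadro K5000).
  int devices[] = { 1 };
  rtContextSetDevices( context, 1, devices );

  /* ... normal context setup and rtContextLaunch2D() would follow here ... */

  rtContextDestroy( context );
  return 0;
}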

For tests you can force the CUDA driver to use only one board or the other.
Under Windows that works either via the Control Panel or via the environment variable CUDA_VISIBLE_DEVICES, which should also work under Linux. Search the forum.

Run sample3, which prints information and device IDs for the installed CUDA devices, similar to the device query example in the CUDA toolkit.
I expect device 0 to be the Tesla board because that has the faster GPU, so try setting CUDA_VISIBLE_DEVICES=1 to pick only the Quadro board and rerun the example that failed the OpenGL interop test.
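
For example, something like

CUDA_VISIBLE_DEVICES=1 ./glass

run from the shell in the samples' binary directory should expose only the Quadro to CUDA and OptiX (the exact binary name depends on your build, and double check the device ID against the sample3 output).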

I think nvidia-smi enumerates according to the PCI-E device order, so the IDs could differ.

It worked!!
Great thanks!

PS: you are right, device 0 is Tesla ^. ^

One final note on multi-GPU setups like yours.
You have one of the configurations which doesn't allow using both GPUs together in one OptiX context because their compute capability versions are too far apart. Check the OptiX Release Notes and compare the output of sample3 for the streaming multiprocessor or compute capability versions.
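
If you prefer to query that programmatically instead of reading the sample3 output, a small sketch using the OptiX device attribute API might look like this (attribute names as in the OptiX 3.x headers; I assume the compute capability attribute returns a major/minor pair of ints):

#include <optix.h>
#include <stdio.h>

int main()
{
  // Print name and compute capability of every CUDA device OptiX can see,
  // similar to the information sample3 reports.
  unsigned int count = 0;
  rtDeviceGetDeviceCount( &count );

  for( unsigned int i = 0; i < count; ++i )
  {
    char name[256];
    int  cc[2];   // major, minor compute capability
    rtDeviceGetAttribute( (int)i, RT_DEVICE_ATTRIBUTE_NAME, sizeof(name), name );
    rtDeviceGetAttribute( (int)i, RT_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY, sizeof(cc), cc );
    printf( "Device %u: %s, compute capability %d.%d\n", i, name, cc[0], cc[1] );
  }
  return 0;
}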
