Update to OptiX 6.0 from 5.0.1 Crashes with CanonicalState still used in function

System:
OS: OpenSuse 42.3
GPU: Quadro M5000M
Driver: 418.30
RAM: 32G
CPU: Intel(R) Core™ i7-6920HQ CPU @ 2.90GHz
OptiX: 6.0.0
CUDA: 10.0
NVCC Build Settings: -m64 --use_fast_math -arch=sm_30 -DNVCC

Hi, our company is using OptiX for simulating LIDAR/RADAR sensors in real time, so we wanted to make use of the awesome RTX cards and the brand new OptiX 6.0 SDK. Unfortunately, after updating the OptiX SDK from 5.0.1 to OptiX 6.0, we get an exception in rtContextLaunch2D.

Uses:
  %6 = call float* @_ZN4cort11AABB_getPtrEPNS_14CanonicalStateE(%"struct.cort::CanonicalState.4"* %0) #1
  %7 = call i32 @_ZN4cort24Traversal_getCurrentNodeEPNS_14CanonicalStateE(%"struct.cort::CanonicalState.4"* %0) #1
  %8 = call i32 @_ZN4cort33GraphNode_convertToSelectorHandleEPNS_14CanonicalStateEj(%"struct.cort::CanonicalState.4"* %0, i32 %7) #1
  %9 = call i32 @_ZN4cort20Selector_getChildrenEPNS_14CanonicalStateEj(%"struct.cort::CanonicalState.4"* %0, i32 %8) #1
  call void @_ZN4cort33Runtime_computeGraphNodeGeneralBBEPNS_14CanonicalStateEjPNS_9GeneralBBE(%"struct.cort::CanonicalState.4"* %0, i32 %25, %"struct.cort::GeneralBB"* %childbb) #1
[bt] Execution path:....

>>> FATAL  : OptiXDrawable::drawImplementation() optix::Exception ErrorString: <Unknown error (Details: Function "RTresult _rtContextLaunch2D(RTcontext, unsigned int, RTsize, RTsize)" caught exception: Assertion failed: "false : CanonicalState still used in function: directcallable__bounds_selector_ptx0x340671464525ebbe", file: <internal>, line: 455)> ErrorCode: <0XFFFFFFFF>

We did not change anything on our side except building the PTX files with CUDA 10 (moved from CUDA 9.0.176.4) and OptiX 6 (moved from OptiX 5.0.1).

I cannot make anything of "CanonicalState still used in function", so I hoped someone in this forum has more insight into OptiX and what might be causing this issue.

Thank you!

Do the OptiX 6.0.0 SDK examples work on your machine?

418.30 was the first Linux driver with OptiX 6.0.0 support, but it contains a bug with motion blur as mentioned in the OptiX release notes and it’s recommended to upgrade to a newer driver.

The OptiX error output looks like an issue with a bindless callable program. Are you using any RT_CALLABLE_PROGRAM?

Your NVCC command line options contain neither --keep-device-functions nor --relocatable-device-code=true.
One of the two is required for NVCC to compile an OptiX RT_CALLABLE_PROGRAM, because since CUDA 8.0 functions for which no call is found inside a module are eliminated as dead code. That said, this would already have happened when you used CUDA 9.0.
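For reference, a hypothetical PTX build line with one of those flags added (the .cu/.ptx file names are placeholders, not from this thread):

```shell
# Hypothetical build line; sensor_programs.cu is a placeholder name.
# --keep-device-functions (or --relocatable-device-code=true) prevents
# NVCC from dead-stripping uncalled RT_CALLABLE_PROGRAM definitions (CUDA 8.0+).
nvcc -m64 --use_fast_math -arch=sm_30 -DNVCC --keep-device-functions -ptx sensor_programs.cu -o sensor_programs.ptx
```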

Are you using Selector nodes in your scene graph?
These are not supported anymore in the OptiX 6.0.0 default execution strategy.

More information about how to make use of the RT cores on the RTX boards for hardware accelerated BVH traversal and triangle intersection can be found here:
[url]https://devtalk.nvidia.com/default/topic/1047102/optix/optix-6-0-quot-rtx-acceleration-is-supported-on-maxwell-and-newer-gpus-quot-/[/url]
The GTC 2019 presentation mentioned at the end of that thread is available in the meantime.
Look for other OptiX 6 related posts in this forum as well.

Hi Detlef,
thank you very much for your reply!

I tried 430.14 with the same result. The SDK examples are working just fine. I’m not using RT_CALLABLE_PROGRAM but we do use Selector nodes everywhere.

You mentioned that Selector nodes are no longer supported in the default execution strategy → is there a strategy that does support them? If they were deprecated, where should I find such information, please? I searched the release notes but didn't find any hint about changes necessary to make 6.0 work. I also compared some examples between OptiX 5 and 6 and did not find any important changes. Also, if you could give us a hint on what to use instead of Selector nodes, that would be great.

All the best and thank you!

I’ve gone over the OptiX 6 documentation and it states:

Selector nodes are deprecated in RTX mode. Future updates to RTX mode will
provide a mechanism to support most of the use cases that required Selector nodes. See
Enabling RTX mode (page 14).

We did not (yet) enable RTX mode with rtGlobalSetAttribute, so Selector nodes should still be usable. For now, I just want to get ray tracing running with the OptiX 6 SDK. I only changed the SDK; no change to any code that worked in OptiX 5.0.1. Any ideas what might be wrong are more than welcome!

I increased the log level and found some RTX logs, which confused me since I did not enable RTX: rtx-_Z12compute_aabbv_ptx

>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message: Launch index 0.
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:     Node graph object summary:
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTprogram         : 121
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTbuffer          : 789
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTtexturesampler  : 6
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTacceleration    : 111
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTgroup           : 38
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTgeometrygroup   : 37
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTtransform       : 0
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTselector        : 36
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTgeometryinstance: 37
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTgeometry        : 37
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:             Total prim: 1368246
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [SCENE STAT], Message:         RTmaterial        : 38
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [1] Tag: [TIMING], Message:     Time to first launch: 2660.9 ms
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     Buffer GPU memory usage:
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |         Category |  Count |  Total MByte |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |           buffer |    853 |        606.0 |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |          texture |      6 |         18.2 |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |      gfx interop |      4 |         27.5 |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |     cuda interop |      0 |          0.0 |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     |   optix internal |    132 |          0.2 |
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [MEM USAGE], Message:     Buffer host memory usage: 519.6 Mbytes
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [1] Tag: [INFO], Message:     Compilation triggered
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message:     Module cache HIT  : rtx-_Z22compute_aabb_exceptionv_ptx0x3a8211000d658958-keya7d2e51302aad11b64720afa6c60c619-sm_52-drv430.14
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message:     Module cache HIT  : rtx-_Z12compute_aabbv_ptx0x3a8211000d658958-key70c652aa8074f0124a52568ad796fe91-sm_52-drv430.14
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message:     Module cache HIT  : rtx-bounds_rtcbvh_nomotion_ptx0x6fed3842ee4c0563-keyae89bb8f99985f20060f25e2ee196972-sm_52-drv430.14
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message:     Module cache miss : rtx-bounds_selector_ptx0x340671464525ebbe-keyae89bb8f99985f20060f25e2ee196972-sm_52-drv430.14
>>> DEBUG  : OptiXInterface::UsageReportLogger::log LVL: [2] Tag: [INFO], Message:     Module cache HIT  : rtx-_Z11mesh_boundsiPf_ptx0xee04f5dc3063a7be

Is the RTX a hint that I have to change the default execution strategy?

As you mentioned →

Are you using Selector nodes in your scene graph?
These are not supported anymore in the OptiX 6.0.0 default execution strategy.

I already provided the forum link with the necessary information in my first answer.

The OptiX 6.0.0 Release Notes (link directly beneath the OptiX download button on developer.nvidia.com) say this in the known issues: “Selectors are not implemented in RTX Mode”

No, the RTX execution strategy is the default now. Selectors won’t work by default.

I would not recommend switching to the old execution strategy. It won't be able to use RT cores for hardware triangle intersection, and GeometryTriangles require the RTX execution strategy.

Selector nodes have a rather high traversal performance impact in any case, and they cannot be mapped onto the hardware BVH traversal. Don't use them anymore.

Depending on what you're doing with them(?), you might be able to replace them with the new RTvisibilitymask on the Group and GeometryGroup nodes together with the corresponding rtTrace argument, but note it's only 8 bits wide:
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/group___group_node.html#ga6e09f89ab237e0eb35220af81452f3d4
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/group___geometry_group.html#ga61f7b257376a3ee95ef7dde8c16fba8c
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/optix__device_8h.html#a1b8e7c1a7450a6138fd6bc2b1c44bc1b

Note that there are also RTrayflags which allow other behaviour changes per ray now:
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/optix__declarations_8h.html#ab847419fd18642c5edc35b668df6f67d

For other use cases like configurators you would need to change the scene graph and rebuild/refit the acceleration structures which got faster with RTX.

Some things like per ray geometry LOD won’t be portable though.

Entry point to online ray tracing docs at NVIDIA: https://raytracing-docs.nvidia.com/

@“Selectors are not implemented in RTX Mode”
@“No, the RTX execution strategy is the default now”

From the OptiX 6 doc:

RTX mode is not enabled by default. RTX mode can be enabled with the
RT_GLOBAL_ATTRIBUTE_ENABLE_RTX attribute using rtGlobalSetAttribute when creating the
OptiX context. However, certain features of OptiX will not be available.

I will disable it for testing purposes anyway: even though the documentation states it's disabled by default, you mentioned that it's actually enabled by default.

@“I would not recommend to switch to the old execution strategy”
I totally agree, a complete rework of our OptiX plugin is planned for this summer, but I need to update to OptiX 6 before this rework is done due to customers requesting an update of the OptiX SDK shipped with our product. This means that I have to switch to the old execution strategy for a short while. I will give feedback after switching and request further help if the error remains. Of course, the fallback solution is to stick with OptiX 5 and just update to CUDA 10 (which works) till we rework the plugin, but I hope we can make it work with 6 even without the RTX support.

@"you might be able to replace them with the new RTvisibilitymask "
Perfect, I read the documentation and it's clear to me now, thanks!

Ok, disabling RTX resolves the crash at least.

Call before Context::create()

int RTX = 0; // 0 = disable the RTX execution strategy
if (rtGlobalSetAttribute(RT_GLOBAL_ATTRIBUTE_ENABLE_RTX, sizeof(RTX), &RTX) != RT_SUCCESS)
{
    m_ig->notify(vig::ImageGenerator::VIGNFL_DEBUG, "OptiXInterface::init() Could not disable RTX mode.");
}
else
{
    m_ig->notify(vig::ImageGenerator::VIGNFL_DEBUG, "OptiXInterface::init() RTX mode:%s.", RTX ? "on" : "off");
}

I need to run some performance tests but the first impression is that the performance dropped drastically with OptiX 6 RTX off compared to OptiX 5.0.1. Also, the OpenGL interop we had to display the results does not work anymore - the buffer is empty whatever I write into it. The raytracing works since I did some rtPrint calls and the LIDAR intensities are correct. Any changes to interop with OpenGL I should know of?

Also, please update the documentation: it states that the RTX execution strategy is off by default, but as you correctly said, RTX mode is actually on by default.

I see, yes, the OptiX programming guide inside the SDK itself is out of date.
Please always use the online documentation to get the up to date version: https://raytracing-docs.nvidia.com/

The recommended CUDA toolkit version for OptiX 5 is CUDA 9.0.
(See the release notes. The best version is normally the one with which each OptiX version was built.)
Though there shouldn’t really be a problem when using SM 3.0 as target to generate PTX code for all supported GPU architectures. I have seen issues with 9.1 or 9.2 reported on this forum.
For OptiX 6.0.0 it’s CUDA 10.0.

Is that in the exact same system configuration from the first post in this thread?
The OptiX 6.0.0 optixPathTracer or other SDK examples using a PBO for display work?

OptiX 6.0.0 contains some changes to the OpenGL interop but that should only affect multi-GPU use cases, which wasn’t possible in OptiX 5 before.
https://devtalk.nvidia.com/default/topic/1052690/optix/is-it-possible-for-optix-to-render-to-an-opengl-render-buffer-without-memcopy-/
Are you using an OpenGL core or compatibility context?
https://devtalk.nvidia.com/default/topic/1054960/optix/optix-fails-to-create-a-buffer-from-opengl-buffer-object-with-driver-430-86/

@Please always use the online documentation to get the up to date version:
Thanks, I will read the online documentation from now on :)

@The recommended CUDA toolkit version for OptiX 5 is CUDA 9.0.
You are right, I would like to avoid this by all means. I decided to port all to RTX now and not in the summer, so we will be using OptiX 6 and CUDA 10, which match.

@Is that in the exact same system configuration from the first post in this thread?
Yes, no change in hardware; I'm adjusting the source code though.

@The OptiX 6.0.0 optixPathTracer or other SDK examples using a PBO for display work?
optixPathTracer works fine, we do not use PBO though.

Creating a buffer:

Buffer OptiXInterface::createOutputBuffer(optix::Context context, RTformat format, size_t element_size,
                                          unsigned int width, unsigned int height)
{
    if (element_size == 0 || width == 0 || height == 0)
    {
        m_ig->notify(vig::ImageGenerator::VIGNFL_FATAL,
                     "OptiXInterface::createOutputBuffer has a 0 element size or width/height is 0.\n");
    }

    Buffer buffer;

    GLuint vbo = 0;

    glGenBuffers(1, &vbo);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    glBufferData(GL_ARRAY_BUFFER, element_size * width * height, 0, GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    buffer = context->createBufferFromGLBO(RT_BUFFER_OUTPUT, vbo);

    buffer->setFormat(format);
    if (format == RT_FORMAT_USER)
    {
        buffer->setElementSize(element_size);
    }
    buffer->setSize(width, height);

    return buffer;
}

Element Size:

size_t element_size;
context->checkError(rtuGetSizeForRTformat(format, &element_size));

Buffer Parameters:

m_context[OUTPUT_BUFFER_PARAM_NAME]->set(createOutputBuffer(m_context, RT_FORMAT_FLOAT3, element_size, m_width, m_height));

Drawing resulting image:

void OptiXDrawable::drawBuffer(osg::RenderInfo& renderInfo) const
{
    // Draw the resulting image
    Buffer buffer = m_optiXInterface->getBuffer(OptiXInterface::OUTPUT_BUFFER_PARAM_NAME);
    RTsize buffer_width_rts, buffer_height_rts;
    buffer->getSize(buffer_width_rts, buffer_height_rts);
    int buffer_width  = static_cast<int>(buffer_width_rts);
    int buffer_height = static_cast<int>(buffer_height_rts);
    RTformat buffer_format = buffer->getFormat();

    unsigned int vboId = buffer->getGLBOId();

    if (vboId)
    {
        glBindTexture(GL_TEXTURE_RECTANGLE, m_osgTex->getTextureObject(renderInfo.getContextID())->_id);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, vboId);

        RTsize elementSize = buffer->getElementSize();
        if      ((elementSize % 8) == 0) glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
        else if ((elementSize % 4) == 0) glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        else if ((elementSize % 2) == 0) glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
        else                             glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        if (buffer_format == RT_FORMAT_UNSIGNED_BYTE4) {
            glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, buffer_width, buffer_height, 0, GL_BGRA, GL_UNSIGNED_BYTE, 0);
        } else if (buffer_format == RT_FORMAT_FLOAT4) {
            glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA32F_ARB, buffer_width, buffer_height, 0, GL_RGBA, GL_FLOAT, 0);
        } else if (buffer_format == RT_FORMAT_FLOAT3) {
            glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB32F_ARB, buffer_width, buffer_height, 0, GL_RGB, GL_FLOAT, 0);
        } else if (buffer_format == RT_FORMAT_FLOAT) {
            glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_LUMINANCE32F_ARB, buffer_width, buffer_height, 0, GL_LUMINANCE, GL_FLOAT, 0);
        } else {
            m_ig->notify(vig::ImageGenerator::VIGNFL_FATAL, "OptiXDrawable::drawBuffer() unknown buffer format\n");
        }

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }
}

The code is the same as in OptiX 5 + CUDA 9. Since I also tried OptiX 5 with CUDA 10, this has to be caused by the OptiX 6 changes. I enabled all exceptions and usageReportLevel is set to 3; no problems detected. I do get an OpenGL warning, but it's hard to say where it's coming from - I might disable drawBuffer and see if it remains.

Warning: detected OpenGL error ‘invalid operation’ at after RenderBin::draw(…)

Forgot to post the gl texture creation - OSG (OpenSceneGraph)

        m_osgTex = new osg::TextureRectangle();

        setUseDisplayList( false );
        setUseVertexBufferObjects( true );

        unsigned int width, height;
        m_ig->getViewer()->getWindowSize( width , height );

        m_osgTex->setTextureSize(width, height);

        m_osgTex->setFilter(osg::Texture2D::MIN_FILTER,osg::Texture2D::LINEAR);
        m_osgTex->setFilter(osg::Texture2D::MAG_FILTER,osg::Texture2D::LINEAR);

        m_osgTex->setWrap(osg::Texture2D::WRAP_S,osg::Texture2D::CLAMP_TO_EDGE);
        m_osgTex->setWrap(osg::Texture2D::WRAP_T,osg::Texture2D::CLAMP_TO_EDGE);

        m_osgTex->setInternalFormat( GL_RGB32F_ARB );

VBO and PBO are just linear memory in OpenGL. You bind it to a GL_PIXEL_UNPACK_BUFFER, so it’s used as PBO.

“glBufferData(GL_ARRAY_BUFFER, element_size * width * height, 0, GL_STREAM_DRAW);”
That could also be GL_DYNAMIC_DRAW.

“createOutputBuffer(m_context, RT_FORMAT_FLOAT3, m_width, m_height)”
That’s bad. Avoid float3 for better performance, esp. in multi-GPU.
[url]https://raytracing-docs.nvidia.com/optix_6_0/guide_6_0/index.html#performance#performance-guidelines[/url]

I cannot say where the invalid operation comes from.
Add glGetError() calls in your code to find out if the error was raised earlier.

@VBO and PBO are just linear memory in OpenGL
True :)

@That could also be GL_DYNAMIC_DRAW.
Thank you, looked up the documentation and it’s better suited indeed!

@RT_FORMAT_FLOAT3
I know this is bad; it's used everywhere in our legacy code. With the big update to OptiX 6, I had planned to change all buffers to float4 anyway. Good that you mentioned it now, since I can do it right away - I don't know why the initial developer used float3 everywhere when all performance guides state this at the top.

@I cannot say where the invalid operation comes from.
I updated the code so that it checks for errors after each OpenGL call; the culprit is

glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB32F_ARB, buffer_width, buffer_height, 0, GL_RGB, GL_FLOAT, 0);

Error:

WARNING: OptiXDrawable::drawBuffer():glTexImage2D OpenGL Error: "invalid operation"

I might try another driver or set the compatibility mode as stated in the link you posted, or is there anything else you would suggest?

@Are you using an OpenGL core or compatibility context?
I need to dig into our codebase a little more; the context profile might be hidden inside OpenSceneGraph, since I did not find any place where we set one manually.

“glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB32F_ARB, buffer_width, buffer_height, 0, GL_RGB, GL_FLOAT, 0);”

Try to set the internal format to GL_RGBA32F.
The GPU doesn’t support any texture format with three components natively.

The glPixelStore GL_UNPACK_ALIGNMENT setting should be the default 4 and all other pixel store offsets need to be default as well to assume tightly packed user data.

@ Try to set the internal format to GL_RGBA32F.

I tried the GL_RGBA32F - unfortunately, no change.

@GL_UNPACK_ALIGNMENT
Also from the Optix advanced samples

        RTsize elmt_size = buffer->getElementSize();
        int align = 1;
        if      ((elmt_size % 8) == 0) align = 8; 
        else if ((elmt_size % 4) == 0) align = 4;
        else if ((elmt_size % 2) == 0) align = 2;
        glPixelStorei(GL_UNPACK_ALIGNMENT, align);

I will change the output_buffer to float4 and adjust all the parameters, maybe it will help :)

float4 did not make any difference, which was kind of expected.

Well, this is how all my examples are using it. I’m not using texture rectangles.
optix_advanced_samples/Application.cpp at master · nvpro-samples/optix_advanced_samples · GitHub
optix_advanced_samples/Application.cpp at master · nvpro-samples/optix_advanced_samples · GitHub

(Ignore the GL_STREAM_READ in the initial PBO creation, that’s an oversight.)

Fixed!

The issue was an OpenSceneGraph change: the container used to draw the buffer (osg::Drawable) changed and made a couple of OpenGL calls that clashed with our internal buffer code, leading to the invalid operation error.

Moving all code to a postDrawCallback resolved the issue.

Thank you very much for all the great support! I really appreciate the help from all the NVIDIA devs. I always learn a lot by writing to this forum! So again, many thanks for being here for us!