Strange intersection issue

I’ve encountered a strange intersection issue in a renderer I’m working on. The attached image shows the front and back faces of the tall box from inside a Cornell box. You can see that the back face is “in front of” the front face.

If I switch from Trbvh to NoAccel, I get the same result, unless I switch the order of the two faces in the indices array (I’m using the triangle_mesh.cu from the samples). I’m kinda stumped by this. It’s obviously something I’m doing wrong, but I don’t understand how this could happen in the first place, so I’m not sure how to debug it!

There is not enough information to say what’s going wrong.

This result shouldn’t even happen for coplanar faces. One guess would be that something isn’t normalized correctly. Also note that the anyhit and closesthit program domains operate in different coordinate spaces (object vs. world).
The triangle_mesh intersection routine has a special mode to refine the hit point. I would recommend starting simpler.
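
For reference, a minimal sketch (names and the payload struct are placeholders, not taken from the thread) of transforming an object-space normal attribute to world space inside a closesthit program:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData            // placeholder payload struct
{
    float3 result;
};

rtDeclareVariable(float3, geometric_normal, attribute geometric_normal, );
rtDeclareVariable(PerRayData, prd, rtPayload, );

RT_PROGRAM void closest_hit_radiance()
{
    // Attributes are produced by the intersection program in object space;
    // the closesthit program works on the world-space ray, so transform first.
    const float3 n = normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, geometric_normal));
    prd.result = n * 0.5f + make_float3(0.5f);  // visualize the world-space normal
}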

There is a more straightforward indexed triangle intersection routine, and a function which creates a cube from 12 triangles, inside my OptiX Introduction examples for comparison. Links to the presentation and source code are here:
[url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

When reporting any OptiX-related issues, please always provide at least these system configuration details:
OS version, installed GPU(s), display driver version, OptiX version (major.minor.micro), host compiler version, CUDA toolkit used to generate the PTX code.

Yes, sorry Detlef, there wasn’t much info there. I was hoping you’d say, “oh well, if that happens you’re obviously doing X wrong” :)

I get the same results across the following configurations:

  • GeForce 750M, macOS 10.13, driver 387, OptiX 5.0.0
  • GeForce 750M, Ubuntu 16.04, driver 390, OptiX 5.0.0 and OptiX 5.1.0
  • GeForce 1080 Ti, Ubuntu 16.04, drivers 390 & 410, OptiX 5.0.0 (tested programs compiled with both sm_30 and sm_60)

It’s the same regardless of whether I use the refining intersection, the non-refining variant, or a simple one I copied from the GitHub repo you linked:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

// Buffer, attribute and ray declarations used by this program (as in the SDK samples)
rtBuffer<float3> vertex_buffer;
rtBuffer<int3>   index_buffer;

rtDeclareVariable(float3, geometric_normal, attribute geometric_normal, );
rtDeclareVariable(float3, shading_normal,   attribute shading_normal, );
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );

RT_PROGRAM void mesh_intersect(int primIdx) {
    const int3 v_idx = index_buffer[primIdx];

    const float3 p0 = vertex_buffer[v_idx.x];
    const float3 p1 = vertex_buffer[v_idx.y];
    const float3 p2 = vertex_buffer[v_idx.z];

    // Intersect ray with triangle
    float3 n;
    float t, beta, gamma;

    if (intersect_triangle(ray, p0, p1, p2, n, t, beta, gamma)) {
        if (rtPotentialIntersection(t)) {
            shading_normal = geometric_normal = normalize(n);
            rtReportIntersection(0);
        }
    }
}

Adding some printfs, I get the following, which seems fine at first glance (note that I switched from a perspective camera to a simple orthographic one, just to rule out the perspective matrix doing something weird):

[bound] 0 area: 54449.996094
[bound] 1 area: 54449.996094
[bound] 2 area: 54449.996094
[bound] 3 area: 54449.996094
[bound] 0 (80.000000, 0.000000, -295.000000) -> (245.000000, 330.000000, -295.000000)
[bound] 1 (80.000000, 0.000000, -295.000000) -> (245.000000, 330.000000, -295.000000)
[bound] 2 (80.000000, 0.000000, -130.000000) -> (245.000000, 330.000000, -130.000000)
[bound] 3 (80.000000, 0.000000, -130.000000) -> (245.000000, 330.000000, -130.000000)

----
prim: 0	p0: (245.000000, 330.000000, -295.000000)
	p1: (245.000000, 0.000000, -295.000000)
	p2: (80.000000, 0.000000, -295.000000)
ray.origin: (127.910156, 238.476562, 0.000000)
ray.direction: (0.000000, 0.000000, -1.000000)
	miss
----
prim: 1	p0: (80.000000, 330.000000, -295.000000)
	p1: (245.000000, 330.000000, -295.000000)
	p2: (80.000000, 0.000000, -295.000000)
ray.origin: (127.910156, 238.476562, 0.000000)
ray.direction: (0.000000, 0.000000, -1.000000)
	hit
		t: 295.000031
		beta: 0.290365
		gamma: 0.277344
		n: (0.000000, 0.000000, -54450.000000)

Note that prim 2 and prim 3 are never considered, even though their bounding boxes show they’re clearly closer to the ray origin than prims 0 and 1. If I then switch the order of the prims so that 2 and 3 come first in the index buffer:

[bound] 0 area: 54449.996094
[bound] 1 area: 54449.996094
[bound] 2 area: 54449.996094
[bound] 3 area: 54449.996094
[bound] 0 (80.000000, 0.000000, -130.000000) -> (245.000000, 330.000000, -130.000000)
[bound] 1 (80.000000, 0.000000, -130.000000) -> (245.000000, 330.000000, -130.000000)
[bound] 2 (80.000000, 0.000000, -295.000000) -> (245.000000, 330.000000, -295.000000)
[bound] 3 (80.000000, 0.000000, -295.000000) -> (245.000000, 330.000000, -295.000000)

----
ray.origin: (127.910156, 238.476562, 0.000000)
ray.direction: (0.000000, 0.000000, -1.000000)
prim: 0	p0: (80.000000, 0.000000, -130.000000)
	p1: (245.000000, 0.000000, -130.000000)
	p2: (245.000000, 330.000000, -130.000000)
	miss
----
ray.origin: (127.910156, 238.476562, 0.000000)
ray.direction: (0.000000, 0.000000, -1.000000)
prim: 1	p0: (80.000000, 0.000000, -130.000000)
	p1: (245.000000, 330.000000, -130.000000)
	p2: (80.000000, 330.000000, -130.000000)
	hit
		t: 130.000000
		beta: 0.290365
		gamma: 0.432292
		n: (-0.000000, 0.000000, 54450.000000)

Here’s an OAC, if that helps: https://drive.google.com/open?id=1PFjP7bvpa3z-oWUks-ttb0ve40kJEUmO

What should the expected output in the result_buffer look like with that trace?
I see a dark yellow rectangle on a black background with either NoAccel or Trbvh.
(Windows 10, OptiX 5.1.0 DLL, 411.63 display drivers, Quadro P6000.)

What does your anyhit program look like in CUDA source?

When I omit the anyhit program, the result becomes dark magenta, which should be the front plane according to your image above.
That could be an instance of a known OptiX bug related to transparent objects (using rtIgnoreIntersection) under scaling transforms.

Regarding the system configuration, what are your host compiler and, especially, CUDA compiler versions?
The driver branch alone doesn’t help; the exact version numbers are required to reproduce your setup in case this is platform specific. I’m not able to test macOS or Linux.

I would not set variables like the scene_root object at ray generation program scope, but at the global context scope. Actually, I have never set any variables at program scope in my renderers.

Hi Detlef, the problem is now obvious thanks to you: I’d assigned both a closest hit and an any hit program to the material, which makes no sense here, due to a copy-paste error from previous code. I wonder: is there any use for having both assigned to the same ray type, or might this better be treated as a validation error (to save people like me from their own stupidity)?
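
For illustration, a hedged sketch of the kind of host-side assignment that produces this situation (names are placeholders, and whether the stray anyhit program calls rtIgnoreIntersection is an assumption, not something confirmed above):

#include <optixu/optixpp_namespace.h>

optix::Material makeMaterialWithLeftoverAnyHit(optix::Context context,
                                               optix::Program closest_hit_radiance,
                                               optix::Program any_hit_leftover)
{
    optix::Material material = context->createMaterial();

    // Intended: a closesthit program for the radiance ray (ray type 0).
    material->setClosestHitProgram(0, closest_hit_radiance);

    // Copy-paste leftover: an anyhit program on the same ray type. If it calls
    // rtIgnoreIntersection(), hits are silently discarded and a farther surface
    // (e.g. the back face) can end up shading the pixel.
    material->setAnyHitProgram(0, any_hit_leftover);

    return material;
}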

“I would not set variables like the scene_root object on the ray generation program scope but on the context global scope. Actually I have never set any variables on program scope in my renderers.”

This is interesting - the docs seem to warn against this:

Variables with definitions in multiple scopes are said to be dynamic and may incur a performance penalty. Dynamic variables are therefore best used sparingly.

but on closer inspection it seems it’s defining the same variable in multiple scopes that’s slow. So is defining a variable exactly once in some inherited scope OK, then?

The assignment of closesthit and anyhit programs to ray types depends solely on the kind of behaviour you want to implement.

For example, opaque materials only need a closesthit program on the radiance ray and a very simple anyhit program on the shadow ray, which simply marks the visibility test as failed and calls rtTerminateRay.
Cutout opacity materials, however, need the same closesthit program on the radiance ray again, but an anyhit program which evaluates the cutout opacity condition and calls rtIgnoreIntersection for holes. The same applies to the anyhit program on the shadow ray.
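
A minimal sketch of those two anyhit variants (the payload struct, the cutout_opacity variable and the program names are placeholders, not copied from the linked examples):

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData_shadow      // placeholder payload for the visibility test
{
    float3 attenuation;
};

rtDeclareVariable(PerRayData_shadow, prd_shadow, rtPayload, );
rtDeclareVariable(float, cutout_opacity, , );  // placeholder: 0 = hole, 1 = opaque

// Opaque material, shadow ray: any hit blocks the light, so just terminate.
RT_PROGRAM void any_hit_shadow_opaque()
{
    prd_shadow.attenuation = make_float3(0.0f);
    rtTerminateRay();
}

// Cutout material, radiance ray: ignore the hit where the cutout says "hole",
// otherwise accept it and let the closesthit program shade as usual.
RT_PROGRAM void any_hit_radiance_cutout()
{
    if (cutout_opacity < 0.5f)  // placeholder condition; usually a texture lookup
        rtIgnoreIntersection();
}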

My OptiX Introduction presentation and source code examples explain exactly that case on slides 28 and 38.
Links here: [url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

Code in example optixIntro_07 showing that:
[url]https://github.com/nvpro-samples/optix_advanced_samples/blob/master/src/optixIntroduction/optixIntro_07/shaders/anyhit.cu[/url]

Defining especially the scene root node object at program scope doesn’t make much sense, because you need it in all rtTrace calls.

For a simple path tracer there is normally one rtTrace call inside the ray generation program, handling all ray path segments, and one inside the closesthit program for the shadow ray, doing the visibility test for the direct lighting.
That means you would need to define the variable on every program object which needs it, which is not necessary when you simply define it at context scope.
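
For illustration, a sketch of that with the top-level object declared once and set at context scope (sysTopObject and the other names are illustrative, and the camera is a trivial placeholder):

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData_radiance    // placeholder payload
{
    float3 result;
};

// Declared once in the device code; filled on the host at context scope with
//   context["sysTopObject"]->set(top_group);
rtDeclareVariable(rtObject, sysTopObject, , );

rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtBuffer<float4, 2> sysOutputBuffer;

RT_PROGRAM void raygen()
{
    PerRayData_radiance prd;
    prd.result = make_float3(0.0f);

    // Placeholder orthographic camera: one ray straight down -z per pixel.
    const float3 origin    = make_float3((float)launch_index.x, (float)launch_index.y, 0.0f);
    const float3 direction = make_float3(0.0f, 0.0f, -1.0f);

    optix::Ray ray = optix::make_Ray(origin, direction, 0, 0.0f, RT_DEFAULT_MAX);
    rtTrace(sysTopObject, ray, prd);  // the same variable works in any program that traces

    sysOutputBuffer[launch_index] = make_float4(prd.result, 1.0f);
}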

The variable scope lookup order allows something like default parameters which can be overridden by the same variables at a higher scope. That is the case which is more expensive.
I recommend not doing that at all, and instead always declaring variables in a way which reduces the number of nodes and variables.
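
To make the expensive case concrete, a small host-side sketch (illustrative names) of the default-plus-override pattern described above:

#include <optixu/optixpp_namespace.h>

// "albedo" becomes a dynamic variable: programs reading it see the context
// value by default, but the GeometryInstance value where it is overridden.
// This per-scope lookup is what costs extra.
void setupAlbedo(optix::Context context, optix::GeometryInstance gi)
{
    context["albedo"]->setFloat(0.8f, 0.8f, 0.8f);  // default at context scope
    gi["albedo"]->setFloat(0.6f, 0.1f, 0.1f);       // override at a higher scope
}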

For example, in my introduction renderer I do not even declare parameter variables at material scope anymore; that allowed me to use only two materials in the whole renderer, with their parameters controlled by a single index on the GeometryInstance.

That’s possible because I have only one closesthit program and three anyhit programs. BSDFs are handled by calling into fixed functions implemented as buffers of bindless callable programs which do the sampling and evaluation. (OK, I have a separate closesthit program for the area light, but just because that made the examples more introductory.)

This can get a lot more sophisticated if the material configuration itself is done in another bindless callable program. I’m building these per unique shader from a shader network on the host at runtime to produce efficient code. Input parameters are not baked in, so that shaders can be reused for different material instances and, especially, so that the OptiX kernel size stays small.
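
A rough sketch of the buffer-of-bindless-callable-programs idea (names and the signature are simplified placeholders; the real examples pass full sampling/evaluation state):

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

// A buffer of bindless callable program IDs; which entry to call is selected
// by an index variable set on the GeometryInstance.
rtBuffer< rtCallableProgramId<float3(float3)> > sysEvalBSDF;

rtDeclareVariable(int, materialIndex, , );  // set per GeometryInstance

RT_CALLABLE_PROGRAM float3 eval_lambert(float3 albedo)
{
    return albedo * (1.0f / 3.14159265358979f);  // Lambert BRDF = albedo / pi
}

// Inside the closesthit program the dispatch then looks like, for example:
//   const float3 f = sysEvalBSDF[materialIndex](albedo);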

Thanks Detlef, that makes sense. It seems like another use of the anyhit program could be filtering out same-primitive intersections, if there’s a way of identifying which geometry you’re evaluating as well as which primitive.

Out of curiosity, what is it that makes defining a variable in multiple scopes expensive?

Yes, self-intersection avoidance is possible in an anyhit program.

The geometric primitive ID is known inside the intersection program and can be handled like any other attribute.
You can also store a unique triangle ID in the triangle index itself; for that I use uint4 triangle indices and put the ID into the .w component.

GeometryInstances can hold variables, so you can identify them with another ID, which is accessible inside the intersection program as well and can be handled as yet another attribute.

If you only have triangles, all you need as attributes are the two barycentric coordinates and these two IDs. For performance reasons it’s highly recommended to defer calculating any other attributes until the anyhit or closesthit program actually needs them.
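
Putting that together, a sketch of such an intersection program (names are illustrative) reporting only the barycentrics and a unique triangle ID carried in the .w component of uint4 indices:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

rtBuffer<float3> vertex_buffer;
rtBuffer<uint4>  index_buffer;  // .xyz = vertex indices, .w = unique triangle ID

rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );

// Minimal attribute set: barycentrics plus the IDs. Normals, texcoords etc. are
// computed deferred in the anyhit/closesthit programs when actually needed.
rtDeclareVariable(float2, barycentrics, attribute barycentrics, );
rtDeclareVariable(unsigned int, triangleId, attribute triangleId, );

RT_PROGRAM void triangle_intersect(int primIdx)
{
    const uint4 idx = index_buffer[primIdx];

    const float3 p0 = vertex_buffer[idx.x];
    const float3 p1 = vertex_buffer[idx.y];
    const float3 p2 = vertex_buffer[idx.z];

    float3 n;
    float  t, beta, gamma;

    if (intersect_triangle(ray, p0, p1, p2, n, t, beta, gamma))
    {
        if (rtPotentialIntersection(t))
        {
            barycentrics = make_float2(beta, gamma);
            triangleId   = idx.w;  // unique ID from the .w component
            rtReportIntersection(0);
        }
    }
}

// An ID set as a variable on the GeometryInstance can be exposed as yet another
// attribute in the same way.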

What does not work is identifying an instanced sub-tree, because Transform and GeometryGroup nodes cannot hold variables.
Being able to uniquely identify instances is a long-standing OptiX feature request.

But for self-intersection avoidance alone it’s not necessary to identify the hit geometric primitive uniquely. It’s enough to detect that it is not the same primitive the ray started from, and that can be done by evaluating the transform at the primitive you started from against the transform of the hit primitive when both IDs are the same.
How you do that is your choice; it’s normally enough to transform some arbitrary point with non-null components and compare the results.

For performance reasons I would not recommend that method. Although it allows hitting both coplanar faces (which doesn’t help you much, because the order is still traversal dependent), it’s really a lot slower than offsetting the ray origin so that it is off the surface.
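
For comparison, the origin-offset approach looks roughly like this (sysSceneEpsilon and the helper are illustrative, not taken from a specific sample):

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

rtDeclareVariable(float, sysSceneEpsilon, , );  // assumed context-scope variable

// Offsetting the continuation ray's origin along the world-space normal (and
// using the same epsilon as tmin) keeps it from re-hitting the surface it left.
static __device__ __inline__ optix::Ray make_continuation_ray(const float3& hit_point,
                                                              const float3& world_normal,
                                                              const float3& direction)
{
    const float3 offset_origin = hit_point + world_normal * sysSceneEpsilon;
    return optix::make_Ray(offset_origin, direction, 0, sysSceneEpsilon, RT_DEFAULT_MAX);
}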

Some related posts:
[url]https://devtalk.nvidia.com/default/topic/913414/?comment=4792621[/url]
[url]https://devtalk.nvidia.com/default/topic/997269/?comment=5098975[/url]
[url]https://devtalk.nvidia.com/default/topic/1025193/?comment=5214700[/url]
[url]https://devtalk.nvidia.com/default/topic/930666/?comment=5133996[/url]

Defining identical variables in multiple scopes along the variable scope search order simply increases the number of records and the overhead needed to identify the correct values in OptiX.