Applying Transforms to Geometries

I’m not sure I properly understand how Transforms work, even after reading the documentation and checking the samples.

  1. Let’s say I want to rotate (or translate) GeometryInstances individually. Do I need to create a GeometryGroup with a single GeometryInstance child and set that group as the child of the Transform itself?

  2. After I have the Transform ready, how do I properly set it as a child of a group? Should I expect issues if I try to set GeometryInstances, Transforms, and other GeometryGroups as children of a GeometryGroup?

Section 3.4 “Graph Nodes” of the Programming Guide offers a good explanation of what can be a child of what in the node graph.


For all intents and purposes you must have a Geometry, its parent must be a GeometryInstance, and that in turn must be the child of a GeometryGroup. This is the minimum viable structure for an object in a scene.

This is because materials (closest hit, any hit programs) can only be connected to GeometryInstances, and an acceleration structure must be connected to the GeometryGroup (any Group higher up in the hierarchy needs its own acceleration as well).
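For example, a minimal sketch of those attachment points; gi, closestHit, anyHit, and g_context are hypothetical names for an existing GeometryInstance, the two programs, and the context:

optix::Material mat = g_context->createMaterial();
mat->setClosestHitProgram(0, closestHit); // programs are set per ray type
mat->setAnyHitProgram(0, anyHit);

gi->setMaterialCount(1); // materials hang off the GeometryInstance
gi->setMaterial(0, mat);

optix::GeometryGroup gg = g_context->createGeometryGroup();
gg->setAcceleration(g_context->createAcceleration("Trbvh")); // the acceleration hangs off the GeometryGroup
gg->setChildCount(1);
gg->setChild(0, gi);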

You can then connect the GeometryGroup as a child of as many Transform nodes as you want, and finally connect those Transforms as children of a single root node of type Group. This is how OptiX does instancing of geometry.

So 1) You need at minimum one GeometryInstance and one GeometryGroup; you can then reuse these for as many instances of that object as you need, assuming they will all use the same closest hit and any hit programs.

  2. Yes, you will get errors. GeometryGroups can only have GeometryInstances and an Acceleration structure as children. If you want to organize a bunch of Transforms or GeometryGroups, you must use a regular Group, another Transform, or a Selector. However, it is recommended to use as few nodes above a GeometryGroup as possible; the fewer you use, the faster the BVH build and the ray tracing will be (in theory). See the sketch below for a minimal version of this hierarchy.
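Continuing the sketch from above, here is roughly how instancing that one GeometryGroup under two Transforms below a single root Group might look; sysTopObject is a hypothetical name for the context variable the ray generation program traces against:

optix::Group root = g_context->createGroup();
root->setAcceleration(g_context->createAcceleration("Trbvh")); // the root Group needs its own acceleration
root->setChildCount(2);

for (unsigned int i = 0; i < 2; ++i) {
  optix::Transform xform = g_context->createTransform();
  xform->setChild(gg); // both instances reuse the same GeometryGroup
  optix::Matrix4x4 m = optix::Matrix4x4::translate(optix::make_float3(2.0f * i, 0.0f, 0.0f));
  xform->setMatrix(false, m.getData(), m.inverse().getData());
  root->setChild(i, xform);
}

g_context["sysTopObject"]->set(root);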

Also watch the GTC 2018 OptiX Introduction presentation video and work through the accompanying example source code, which explains this with the smallest possible OptiX scene graphs.
Links here: [url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]

Thanks a lot for the replies.

I managed to make some Transforms work, but I believe something is off with my translation.

To the left is the translated box; to the right, the box in its original position without the translation. The box was intersecting some spheres in its original position, and apparently this intersection was… kept, even with the translation, which is not what I was expecting. My matrix function is:

// Builds a row-major translation matrix (optix::Matrix4x4 stores its
// elements row-major, so the offset goes in the last column of each row).
optix::Matrix4x4 translateMatrix(vec3f offset){
  float floatM[16] = {
      1.0f, 0.0f, 0.0f, offset.x,
      0.0f, 1.0f, 0.0f, offset.y,
      0.0f, 0.0f, 1.0f, offset.z,
      0.0f, 0.0f, 0.0f, 1.0f
    };
  optix::Matrix4x4 mm(floatM);

  return mm;
}
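(For what it’s worth, I believe the built-in helper optix::Matrix4x4::translate should produce the same matrix:

optix::Matrix4x4 mm = optix::Matrix4x4::translate(optix::make_float3(offset.x, offset.y, offset.z));

but I wrote it out by hand above.)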

The function I use to set up the translate transform is:

// Wraps a GeometryInstance in a GeometryGroup and parents that group
// to a Transform holding the given translation.
optix::Transform translate(optix::GeometryInstance gi, vec3f& offset, optix::Context &g_context){
  optix::Matrix4x4 matrix = translateMatrix(offset);

  optix::GeometryGroup d_world = g_context->createGeometryGroup();
  d_world->setAcceleration(g_context->createAcceleration("NoAccel"));
  d_world->setChildCount(1);
  d_world->setChild(0, gi);

  optix::Transform translateTransform = g_context->createTransform();
  translateTransform->setChild(d_world);
  translateTransform->setMatrix(false, matrix.getData(), matrix.inverse().getData());

  return translateTransform;
}

Finally, my ray-box intersection program is:

// Returns the axis-aligned normal of the box face whose slab distance equals t
static __device__ float3 boxnormal(float t, float3 t0, float3 t1) {
  float3 neg = make_float3(t == t0.x ? 1 : 0, t == t0.y ? 1 : 0, t == t0.z ? 1 : 0);
  float3 pos = make_float3(t == t1.x ? 1 : 0, t == t1.y ? 1 : 0, t == t1.z ? 1 : 0);
  return pos - neg;
}

// Program that performs the ray-box intersection (slab method)
RT_PROGRAM void hit_box(int pid) {
    // Per-axis distances to the two slab planes of the box
    float3 t0 = (boxmin - ray.origin) / ray.direction;
    float3 t1 = (boxmax - ray.origin) / ray.direction;
    float tmin = max_component(min_vec(t0, t1));
    float tmax = min_component(max_vec(t0, t1));

    if(tmin <= tmax) {
      bool check_second = true;

      // Report the near hit first
      if(rtPotentialIntersection(tmin)) {
        hit_rec_p = ray.origin + tmin * ray.direction;
        hit_rec_u = 0.f;
        hit_rec_v = 0.f;
        hit_rec_normal = boxnormal(tmin, t0, t1);

        if(rtReportIntersection(0))
            check_second = false;
      }

      // If the near hit was rejected (e.g. outside the valid t interval), try the far one
      if(check_second) {
        if(rtPotentialIntersection(tmax)) {
            hit_rec_p = ray.origin + tmax * ray.direction;
            hit_rec_u = 0.f;
            hit_rec_v = 0.f;
            hit_rec_normal = boxnormal(tmax, t0, t1);
            rtReportIntersection(0);
        }
      }
    }
}

Any idea what could be wrong?

Should I change the object normals or other parameters in any way if I apply a transform? What about the incoming ray parameters?

When changing any Transform node, you need to call rtAccelerationMarkDirty on all acceleration structures in your scene hierarchy above that Transform node to notify OptiX that it needs to rebuild or refit the BVH.
Read this: [url]http://raytracing-docs.nvidia.com/optix/guide/index.html#host#acceleration-structure-builds[/url]

If you want to change an existing Transform matrix, just call your translateTransform->setMatrix(false, matrix.getData(), matrix.inverse().getData()); again and call markDirty() on the accelerations above that Transform, which is normally only the root Group’s Acceleration.
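Sketched out, assuming the root Group is called root and reusing the translateTransform from above (newOffset is a hypothetical name):

// Update the Transform's matrix in place...
optix::Matrix4x4 matrix = translateMatrix(newOffset);
translateTransform->setMatrix(false, matrix.getData(), matrix.inverse().getData());

// ...then mark the acceleration above the Transform dirty so OptiX
// rebuilds or refits the top-level BVH on the next launch.
root->getAcceleration()->markDirty();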

I would also recommend using Trbvh as the builder for all your acceleration structures. Don’t use NoAccel unless you’re absolutely sure why.
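In your translate() helper above that would just mean swapping the builder string:

d_world->setAcceleration(g_context->createAcceleration("Trbvh"));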

But is that true when I first create and set the Transform nodes in the hierarchy as well? I only ever deal with them on creation; I don’t make any further changes.

Also, I just tried calling markDirty() as you mentioned, but nothing seems to have changed.

Is there a reason why you are recording the hit point within the intersection program?

Correct me if I am wrong, but I believe the intersection is done in object space, not world space. As I understand it, OptiX handles Transforms by computing the AABBs in world space but running the intersection programs in object space, for instancing purposes.

Looking at the advanced samples, they only record the normals, uvs, and barycentrics in the intersection program, and then transform the normals into world space as the first thing in the closest hit.

Maybe try moving your hit point computation to the closest hit.

If you just want to test, try transforming your hit point from object to world space with rtTransformPoint in the closest hit, before any other processing.

http://raytracing-docs.nvidia.com/optix/api/html/group___c_u_d_a_c_functions.html#gafc63cc5127a005f3e184ebb3ca53e7ad
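A minimal sketch of such a closest hit program, assuming the attribute names from the intersection program above and a hypothetical PerRayData payload struct:

#include <optix_world.h>

struct PerRayData { float3 hit_point; float3 normal; }; // hypothetical payload

rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(PerRayData, prd, rtPayload, );
rtDeclareVariable(float3, hit_rec_p, attribute hit_rec_p, );
rtDeclareVariable(float3, hit_rec_normal, attribute hit_rec_normal, );

RT_PROGRAM void closest_hit() {
  // The attributes were written in object space inside the intersection
  // program; transform them into world space before shading.
  prd.hit_point = rtTransformPoint(RT_OBJECT_TO_WORLD, hit_rec_p);
  prd.normal = optix::normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, hit_rec_normal));
}

Alternatively, skip recording the hit point in the intersection program entirely and reconstruct it in the closest hit as ray.origin + t_hit * ray.direction (with t_hit declared via the rtIntersectionDistance semantic), since the ray in a closest hit program is already in world space.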

Thanks a lot! Using rtTransformPoint solved the issue!