Detailed Documentation for Accel-Structures

Hi all,

I am looking for detailed documentation of the acceleration structures used by OptiX.
My special interest is the data structure of the accel-structure, which can be accessed using rtAccelerationGetData. So far I have not found any information online. I would be happy if you could provide some information about how the data is stored in RTacceleration (typedef struct RTacceleration_api* RTacceleration).

For example, a list like this would be awesome:
uint32 - Size of this data in bytes
uint8 - Type of accel-structure builder (0 = NoAccel …)
uint8 - Type of accel-structure traverser (0 = NoAccel …)

Thanks a lot already.

Sorry, that is internal, implementation-dependent information which could change with each OptiX version.

If you’re generally interested in the underlying theory, you can find publications from NVIDIA Research on acceleration structures among a huge number of other topics here: [url]https://research.nvidia.com/publications[/url]

Dear Detlef,

thank you for your fast reply.
Let me rephrase my question:
Is there a way to create the accel-structures on the CPU at runtime?

The reason for my request is that I need the complete processing power of the GPU (a K6000) in a single-GPU system for ray tracing the virtual scene while keeping the virtual environment interactive.
The full virtual scene would consist of ~0.8 x 10^12 triangles.
Since this large number of triangles cannot be rendered or stored at once, I plan to use LOD approaches to overcome this issue.
However, creating new accel-structures on the GPU at runtime slows down the system too much, so it is no longer interactive. Pre-calculating the accel-structures is not possible due to the size of the data which should be visualized.

Thanks already for your help.

Does the mesh really consist of individual triangles, or are there instances of the same geometry in the scene?
In the latter case sharing acceleration structures among identical geometry by using Transform nodes to instance them could save a lot of memory and AS build time.
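As a rough, untested sketch of what I mean (C API, error checks omitted, all names are placeholders), instancing a shared GeometryGroup could look like this:

[code]
#include <optix.h>

// Untested sketch: share one GeometryGroup (and its single acceleration
// structure) among many Transform nodes. Error checks omitted.
void instanceSharedGeometry(RTcontext       context,
                            RTgroup         topGroup,             // scene root group
                            RTgeometrygroup sharedGeometryGroup,  // triangles + one acceleration
                            const float*    matrices,             // one row-major 4x4 matrix per instance
                            unsigned int    instanceCount)
{
    rtGroupSetChildCount(topGroup, instanceCount);

    for (unsigned int i = 0; i < instanceCount; ++i)
    {
        RTtransform transform;
        rtTransformCreate(context, &transform);
        // transpose == 0: matrix given in row-major order; NULL lets OptiX derive the inverse.
        rtTransformSetMatrix(transform, 0, matrices + 16 * i, NULL);

        // Every Transform references the same GeometryGroup, so its
        // acceleration structure is built and stored only once.
        rtTransformSetChild(transform, sharedGeometryGroup);
        rtGroupSetChild(topGroup, i, transform);
    }
    // The top-level group still needs its own (small) acceleration over the transforms.
}
[/code]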

“Pre-calculating the accel-structures is not possible due to the size of the data which should be visualized.”

Are you saying that storing the individual LODs is not feasible either?
Otherwise, if the geometry data is not dynamic, you could generate the acceleration structures for the LODs once, store them on disk, and load them at runtime instead of rebuilding them.
Please have a look at the OptiX API Reference and read the chapters on rtAccelerationGetDataSize, rtAccelerationGetData and rtAccelerationSetData.
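For example, a minimal (untested) caching sketch using those entry points could look like this; file handling and error checks are omitted:

[code]
#include <optix.h>
#include <cstdio>
#include <vector>

// Untested sketch: serialize a built acceleration structure to disk and
// restore it later with rtAccelerationSetData(). Error checks omitted.
void saveAcceleration(RTacceleration accel, const char* filename)
{
    RTsize size = 0;
    rtAccelerationGetDataSize(accel, &size);

    std::vector<char> data(size);
    rtAccelerationGetData(accel, &data[0]);

    FILE* file = fopen(filename, "wb");
    fwrite(&data[0], 1, size, file);
    fclose(file);
}

void loadAcceleration(RTacceleration accel, const char* filename)
{
    FILE* file = fopen(filename, "rb");
    fseek(file, 0, SEEK_END);
    long size = ftell(file);
    fseek(file, 0, SEEK_SET);

    std::vector<char> data(size);
    fread(&data[0], 1, size, file);
    fclose(file);

    // OptiX rejects the data if it was built with a different OptiX version
    // or does not match the attached geometry.
    rtAccelerationSetData(accel, &data[0], (RTsize)size);
}
[/code]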

What’s your primitive count for these LODs?

Splitting the scene spatially into individual acceleration structures which are small enough to be updated on demand from disk is one step. You would also need to update the geometry inside the attribute buffers themselves to match the geometry to the loaded AS per LOD.
The OptiX Programming Guide explains this in chapter 3.5.5.
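As an untested sketch of what I mean per spatial cell (a vertex buffer of format RT_FORMAT_FLOAT3 is assumed, and all names are placeholders):

[code]
#include <optix.h>
#include <cstring>

// Untested sketch: swap one LOD into a spatial cell. Assumes a vertex buffer
// of format RT_FORMAT_FLOAT3 and a pre-built acceleration cached on disk.
void switchLod(RTbuffer       vertexBuffer,
               RTacceleration accel,
               const float*   lodVertices,       // 3 floats per vertex
               RTsize         lodVertexCount,
               const void*    cachedAccelData,   // loaded from disk, see the sketch above
               RTsize         cachedAccelSize)
{
    // 1) Replace the geometry in the attribute buffer with the new LOD.
    //    (The Geometry node's primitive count must be updated accordingly
    //    with rtGeometrySetPrimitiveCount().)
    rtBufferSetSize1D(vertexBuffer, lodVertexCount);
    void* mapped = 0;
    rtBufferMap(vertexBuffer, &mapped);
    memcpy(mapped, lodVertices, lodVertexCount * 3 * sizeof(float));
    rtBufferUnmap(vertexBuffer);

    // 2) Attach the matching pre-built acceleration data, or mark the
    //    acceleration dirty to trigger a rebuild if no cache is available.
    rtAccelerationSetData(accel, cachedAccelData, cachedAccelSize);
}
[/code]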

I have not done that before for individual AS nodes, only for the root.
Since the BVH topology of the root shouldn't change, there wouldn't be a need to rebuild it.
Otherwise, check the documentation on the "refit" property, which is faster than rebuilding.
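An untested sketch, assuming "accel" is an existing Bvh/Trbvh acceleration already attached to the node:

[code]
// Untested sketch: enable refitting so that small vertex changes only
// update the existing BVH instead of rebuilding it from scratch.
rtAccelerationSetProperty(accel, "refit", "1");  // "0" (default) = full rebuild when dirty

// After changing the vertex buffer, mark the node dirty; with refit
// enabled the next launch refits rather than rebuilds.
rtAccelerationMarkDirty(accel);
[/code]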

Which acceleration structure builder are you currently using?
The currently implemented BVH builders have quite different build and traversal speeds.
Please have a look at the rtAccelerationSetBuilder documentation.
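Selecting a builder is a one-liner per Acceleration node, for example (untested sketch; "context" and "geometryGroup" are assumed to exist):

[code]
// Untested sketch: create an Acceleration node and pick a builder.
RTacceleration accel;
rtAccelerationCreate(context, &accel);
rtAccelerationSetBuilder(accel, "Trbvh");   // fast build, good traversal performance
// rtAccelerationSetBuilder(accel, "Sbvh"); // slow build, high-quality traversal for static geometry
// rtAccelerationSetBuilder(accel, "Bvh");  // CPU-based builder
rtGeometryGroupSetAcceleration(geometryGroup, accel);
[/code]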

There is a CPU based implementation for the Trbvh AS builder inside the OptiX Commercial version. Look for the “build_type” acceleration property inside the OptiX Programming Guide. The OptiX Release Notes describe how to request an OptiX Commercial version if needed.
Though since that would go through the same OptiX context, and the OptiX API is not guaranteed to be thread-safe (OptiX Programming Guide, Chapter 12), you might not be able to build the AS in parallel at runtime anyway.
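For reference, requesting the CPU build would look something like this (untested sketch, assuming the "build_type" property values documented in the Programming Guide are "GPU" and "CPU"):

[code]
// Untested sketch: request the CPU-based Trbvh build so the GPU stays
// free for rendering. Assumes "build_type" accepts "GPU" or "CPU".
rtAccelerationSetBuilder(accel, "Trbvh");
rtAccelerationSetProperty(accel, "build_type", "CPU");
[/code]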

How many cores CPU builders use can be selected via the RT_CONTEXT_ATTRIBUTE_CPU_NUM_THREADS context attribute. The default is to use all CPU cores.
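For example (untested), to limit the CPU builder to four threads:

[code]
// Untested sketch: limit the CPU AS builder to four threads via the context attribute.
int numThreads = 4;
rtContextSetAttribute(context, RT_CONTEXT_ATTRIBUTE_CPU_NUM_THREADS,
                      sizeof(numThreads), &numThreads);
[/code]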