OptiX's potential in accelerating photon Monte Carlo simulation

Hello everyone,

Our group is trying to make the most of the OptiX engine to accelerate the ray tracing in Monte Carlo simulations of photon transport in turbid media, but we are not yet sure whether this is feasible.

Here is the story. In a turbid medium (represented by a volumetric domain, using either voxels or a mesh), each photon travels independently in a stochastic manner. Basically, the trajectory of each photon can be segmented into pieces, where the length and direction of each segment are determined by a stochastic process. Simultaneously, along the path the photon's energy is absorbed and deposited locally in the corresponding voxels/mesh elements. This ray-tracing process is therefore parallelizable, and we have already implemented it in CUDA with a significant speed improvement. Our code and software are available here: http://mcx.space/.

After a quick look at OptiX (the paper published by Nvidia and Williams College, July 2010), I found a few differences:

  1. In OptiX, the scene is represented using a hierarchy of nodes. I was wondering how to use this to handle a volumetric domain (voxels/mesh) with heterogeneous optical properties. (This heterogeneity significantly affects both the stochastic process and the energy-deposit calculation in the voxel/mesh element the ray has just traversed.)
  2. In OptiX, rays bounce back and forth between the surfaces of different objects, while in Monte Carlo, reflection/transmission happens not only at the boundaries between different media; the ray can also be scattered anywhere inside the volume (the nature of the stochastic process). We did hear that subsurface ray tracing was introduced at the most recent GTC. Still, we are not sure whether this stochastic process is feasible in OptiX, and we wonder how much improvement we could get.
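For context, the "scattered anywhere inside the volume" part is usually implemented by sampling a scattering angle from a phase function at each collision; tissue-optics Monte Carlo codes commonly use Henyey-Greenstein. A minimal sketch of the standard inversion formula (the anisotropy factor g and random number u are the inputs; this is illustrative, not MCX's exact code):

```c
#include <math.h>

/* Sample cos(theta) of the scattering angle from the Henyey-Greenstein
 * phase function via the standard inversion formula; g is the anisotropy
 * factor and u a uniform random number in (0,1). */
static float hg_costheta(float g, float u) {
    float f;
    if (fabsf(g) < 1e-6f)
        return 2.0f * u - 1.0f;                     /* isotropic limit */
    f = (1.0f - g * g) / (1.0f - g + 2.0f * g * u);
    return (1.0f + g * g - f * f) / (2.0f * g);
}
```

With g close to 1 (typical for biological tissue), the samples cluster near cos(theta) = 1, i.e. strongly forward-peaked scattering.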

If you have any experience using this technique in Monte Carlo simulations and are willing to share your code with us, please do not hesitate to contact me. Many thanks!
Also, any general ideas to clear this up would be much appreciated!

OptiX is a general purpose GPU accelerated ray casting SDK. It’s not a renderer and doesn’t know about rendering algorithms per se.

Volumetric scattering is just another algorithm which can be implemented by shooting rays and as such can be implemented with OptiX. I used it as an example of the SDK’s flexibility during my GTC presentations.

Volumetric scattering along a stochastic path would not involve surface hits while stepping through some volume. That can be handled by the OptiX ray-generation and miss program domains.

OptiX isn’t limited to surface geometry either.
The intersection program is invoked when the OptiX BVH traversal hits an axis-aligned bounding box (AABB). You program where these boxes are by specifying a number of primitives and a bounding-box program, which means OptiX actually doesn't know what it intersects.
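To illustrate (as standalone C, not actual OptiX API code), the core of an intersection test against such a reported AABB is the classic slab method:

```c
#include <math.h>

/* Slab-method ray/AABB test: returns 1 and writes the entry/exit
 * distances if the ray org + t*dir (t >= 0) hits the box. This is the
 * kind of logic an OptiX intersection program implements for the AABBs
 * reported by a user-supplied bounds program. Axes where dir[i] == 0
 * rely on IEEE infinities and are not handled for the degenerate case
 * of the origin lying exactly on a slab plane. */
static int ray_aabb(const float org[3], const float dir[3],
                    const float bmin[3], const float bmax[3],
                    float *tnear, float *tfar) {
    float t0 = 0.0f, t1 = INFINITY;
    int i;
    for (i = 0; i < 3; i++) {
        float inv = 1.0f / dir[i];
        float ta = (bmin[i] - org[i]) * inv;
        float tb = (bmax[i] - org[i]) * inv;
        if (ta > tb) { float tmp = ta; ta = tb; tb = tmp; }
        if (ta > t0) t0 = ta;
        if (tb < t1) t1 = tb;
        if (t0 > t1) return 0;            /* slabs don't overlap: miss */
    }
    *tnear = t0;
    *tfar  = t1;
    return 1;
}
```

OptiX only learns "something was hit at distance t"; what the box contains, and what happens at the hit, is entirely up to your programs.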

For example, if you have a volume with sparse density fields, you can build AABBs containing the data and implement whatever algorithm is required to sample that volume to produce volumetric effects.

Maybe have a look at the NVIDIA GVDB Voxels implementation which actually does some of that: [url]https://developer.nvidia.com/gvdb[/url]
Its developer forum is also under the Advanced Rendering section: [url]https://devtalk.nvidia.com/default/board/252/gvdb-voxels[/url]

For strict volumetric rendering, e.g. just rendering clouds with no other surfaces involved in the scene, you wouldn’t actually need OptiX because that would only involve miss events besides the stochastic path generation code.
But as said above, if you provide OptiX with some AABB structure which contains your volumetric data, you should be able to shoot rays/photons through that with whatever algorithm you need and deposit the resulting information in your output data structure.

Though if your volume's borders are defined by actual meshes, then OptiX becomes useful for detecting those boundaries and handling the volumetric effects inside.