Depth map

Good day,

I am trying to use OptiX to create a depth map for a mesh. Basically, I am trying to modify the optixMeshViewer sample so that instead of the colored mesh, I get the depth of each pixel saved to an image.

I’ve been reading through the samples in the OptiX SDK and I have a rough idea of how this library works.

In order to get the depth, should I write something like this in the closest hit program?

rtDeclareVariable(float, t_hit, rtIntersectionDistance, );
float depth = length(t_hit * ray.direction);
float3 depth_color;

depth_color.x = depth;
depth_color.y = depth;
depth_color.z = depth;

prd_radiance.result = depth_color;

How exactly do I need to configure the Context for this?

Thanks,
Andrei

That’s doing too much.
The ray.direction is a normalized vector in world space inside the closest hit program domain. Multiplying ray.direction by the intersection distance gives the vector from ray.origin to the surface hit point. Since ray.direction is normalized, the length() of that scaled vector is just the intersection distance again. That means in your code:

float depth = t_hit;

If you want to write the depth as a float value, simply use a per-ray payload that contains a “float depth;” member and fill it inside the closest hit program from the variable with the rtIntersectionDistance semantic when the primary ray hits something. That means the local depth variable isn’t needed:

prd_radiance.depth = t_hit;
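
As a minimal sketch, assuming a payload struct similar to the SDK samples (the struct layout, the depth member, and the program name closest_hit_depth are illustrative, not taken from the optixMeshViewer sources):

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

struct PerRayData_radiance
{
  float3 result;  // radiance color, as in the SDK samples
  float  depth;   // intersection distance of the primary ray
};

rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );
rtDeclareVariable(float, t_hit, rtIntersectionDistance, );

RT_PROGRAM void closest_hit_depth()
{
  // ray.direction is normalized, so the intersection distance
  // is already the radial depth of the hit point.
  prd_radiance.depth = t_hit;
}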

If the ray hits nothing, you could either initialize that per-ray payload member to your desired zFar value inside the ray generation program and use no miss program at all, or, if you have a miss program anyway, write your zFar value there.
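
A sketch of both options, assuming the usual pinhole camera variables (eye, U, V, W, scene_epsilon) from the SDK samples; scene_zfar, depth_buffer, and the program names are hypothetical placeholders:

// Ray generation program (same includes and payload struct as above).
// scene_zfar is a hypothetical context variable, e.g. set on the host
// with context["scene_zfar"]->setFloat(10000.0f).
rtDeclareVariable(float3, eye, , );
rtDeclareVariable(float3, U, , );
rtDeclareVariable(float3, V, , );
rtDeclareVariable(float3, W, , );
rtDeclareVariable(float, scene_epsilon, , );
rtDeclareVariable(float, scene_zfar, , );
rtDeclareVariable(rtObject, top_object, , );
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(uint2, launch_dim, rtLaunchDim, );

rtBuffer<float, 2> depth_buffer;

RT_PROGRAM void pinhole_camera_depth()
{
  // Standard pinhole camera ray setup as in the SDK samples.
  const float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.0f - 1.0f;
  const float3 ray_direction = normalize(d.x * U + d.y * V + W);
  optix::Ray ray = optix::make_Ray(eye, ray_direction, 0, scene_epsilon, RT_DEFAULT_MAX);

  PerRayData_radiance prd;
  prd.depth = scene_zfar;  // default in case nothing is hit and no miss program writes it

  rtTrace(top_object, ray, prd);

  depth_buffer[launch_index] = prd.depth;
}

// Alternative: if you have a miss program anyway (typically in its
// own .cu file), write the zFar value there instead.
rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );

RT_PROGRAM void miss_depth()
{
  prd_radiance.depth = scene_zfar;
}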

There is also no need to store the same value into a float3; you can save that bandwidth.
The output buffer for that could hold a single float per pixel, or, if you render colors as well, you could store RGB and depth in a single float4 (better performance than separate buffers).
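
Regarding the Context configuration, a sketch of that combined variant, using the C++ wrapper API on the host side (output_buffer, width, and height are placeholder names):

// Device side: one float4 per pixel, RGB in .xyz and depth in .w.
rtBuffer<float4, 2> output_buffer;

// ... at the end of the ray generation program, instead of the
// separate depth_buffer write:
output_buffer[launch_index] = make_float4(prd.result.x, prd.result.y, prd.result.z, prd.depth);

// Host side: create the buffer and attach it to the context,
// with width and height matching your launch dimensions.
optix::Buffer buffer = context->createBuffer(RT_BUFFER_OUTPUT, RT_FORMAT_FLOAT4, width, height);
context["output_buffer"]->set(buffer);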

Be careful about depth values in case you want to merge this with a rasterizer!
If you simply store the intersection distance as the depth value in a pinhole camera, you will get a radial depth, but that’s not how rasterizers do it. There the depth is the distance to a plane orthogonal to the view direction, measured along that view direction, not a radial distance.
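
If you need that planar depth, one way (a sketch, assuming the pinhole camera setup from above) is to project the radial distance onto the view axis:

// Projecting the radial distance onto the view direction yields the
// planar depth a rasterizer would produce. ray_direction is normalized
// and W is the pinhole camera's forward vector from above.
const float planar_depth = prd.depth * dot(ray_direction, normalize(W));

Note that this is still a linear distance in world units; a rasterizer’s depth buffer additionally applies the non-linear zNear/zFar mapping of the projection matrix, which you would have to replicate before comparing values.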