Camera movement in the SDK, Trying to find the focal length

I’m tinkering with the path_tracer sample from the OptiX 3.9.1 SDK and I would like to know the focal length of the camera. In SampleScene.h I found a note accompanying the RayGenCameraData struct, saying:

// W   - Viewing direction.             length(W) -> focal distance

I assumed that “focal distance” meant focal length and started measuring its value. However, I found that moving the camera forward/backward (RMB down, mouse forward/backward) changes the length of W. I would not expect that unless the camera were actually zooming instead of moving, but the camera also seems to change position.

I’m new to the OptiX SDK and don’t know where the camera movement code is located, so I’ve tried reverse engineering the meaning of W, as well as U and V (respectively the horizontal and vertical axes of the view plane), but have not managed to come to any meaningful conclusions. I did find that 0.5*(U+V) does not equal eye+W, which I would have expected. If you know how the camera works, how I can get its focal length, or where I could find out myself, I would be happy to hear it.

Hi, you might also want to check the camera code in the 4.0 samples, e.g. optixMeshViewer. That camera is simpler. But really none of the camera code in any of the samples is intended to be production strength. It might be a good exercise for you to rip it out of path_tracer and put in your own version.

Hi dlacewell. Thanks for your reply.
Luckily I don’t need a production-strength camera. I only need one for academic purposes.
I could of course write my own camera, but given my focus, I would rather understand how the existing one works. What if I simply misunderstand it?
I would check out the 4.0 samples as you advised, but sadly OptiX 4.0 doesn’t think my video card is supported.

The camera implementation in OptiX uses a point and three unnormalized, not necessarily orthogonal vectors in world coordinate space to place and orient a pinhole camera, which, by design, has no focal length.
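For reference, the RayGenCameraData struct in SampleScene.h boils down to exactly that point plus the three vectors. Paraphrased from the 3.x headers, so the exact declaration may differ slightly:

struct RayGenCameraData
{
  float3 eye; // the point: camera position in world space
  float3 U;   // horizontal extent vector of the camera plane
  float3 V;   // vertical extent vector of the camera plane
  float3 W;   // viewing direction, length(W) = focal distance
};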

The camera position (let’s name it P) is the world-space position of the anchor point of the frustum defined by U, V, W.
W is the vector from P to the center of the camera plane (generally a parallelogram, usually a rectangle); let’s call that center M = P + W.
U and V are the horizontal and vertical vectors from point M to the right and upper edges of that camera plane in world space (together spanning the first quadrant of the full camera plane).
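Spelled out in that notation, the plane’s center and four corners are:

M = P + W; // center of the camera plane
M + U + V; // upper-right corner
M - U + V; // upper-left corner
M + U - V; // lower-right corner
M - U - V; // lower-left corner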

Given these P and UVW you can calculate a point on the camera plane for each ray you want to shoot:
D = P + W + u * U + v * V; // with u and v in the interval [-1, 1].
The normalized vector from P to D is the ray direction.

Because for a pinhole camera ray.origin = P, the camera position, the P in D - P cancels out, and you get ray.direction = normalize(W + u * U + v * V). Simple as that.
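A minimal standalone sketch of that mapping (my own C++ with a small Vec3 helper type, not SDK code; the SDK’s ray generation program does the same thing with float3):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
float length(Vec3 a)   { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }
Vec3  normalize(Vec3 a){ float l = length(a); return {a.x / l, a.y / l, a.z / l}; }

// u and v are the pixel's normalized coordinates on the camera plane, in [-1, 1].
Vec3 rayDirection(Vec3 U, Vec3 V, Vec3 W, float u, float v)
{
    return normalize(W + u * U + v * V); // P has already cancelled out of D - P
}

int main()
{
    // Example UVW: square, axis-aligned camera looking down -z.
    Vec3 U = {1, 0, 0}, V = {0, 1, 0}, W = {0, 0, -1};
    Vec3 d = rayDirection(U, V, W, 0.0f, 0.0f); // center of the image
    std::printf("%f %f %f\n", d.x, d.y, d.z);   // prints 0 0 -1
}

For a pixel grid you would first map pixel (x, y) to u = 2*(x + 0.5f)/width - 1 and v = 2*(y + 0.5f)/height - 1.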

Since the vectors U, V, W are neither normalized nor necessarily orthogonal, this allows for sheared viewing frustums as well. This, for example, comes in handy when trying to reduce crosstalk in stereo images due to perspective foreshortening, but for monoscopic images you don’t need that and can simply make UVW orthogonal.
Then, if the aspect ratio of your rendered output matches aspectRatio = length(U) / length(V), you get square pixels.
The horizontal field of view (fovx) is also easy to calculate: fovx = 2.0f * atanf(length(U) / length(W));
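Going the other way, here is a sketch of building an orthogonal UVW from the usual eye/lookat/up/fov parameters (my own version reusing the Vec3 helpers above; the SDK’s camera setup is equivalent in spirit, but the names and the horizontal-fov convention here are my assumptions):

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

void calcUVW(Vec3 eye, Vec3 lookat, Vec3 up, float fovxDegrees, float aspect,
             Vec3& U, Vec3& V, Vec3& W)
{
    W = lookat - eye;                 // length(W) is the focal distance
    U = normalize(cross(W, up));      // camera right
    V = normalize(cross(U, W));       // camera up, orthogonal to U and W
    float ulen = length(W) * std::tan(0.5f * fovxDegrees * 3.14159265f / 180.0f);
    U = ulen * U;                     // so that fovx == 2 * atan(length(U) / length(W))
    V = (ulen / aspect) * V;          // so that aspect == length(U) / length(V)
}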

Related explanations on this OptiX forum:
optix ray direction - OptiX - NVIDIA Developer Forums
https://devtalk.nvidia.com/default/topic/743711/?comment=4220090

Thanks for the explanation, Detlef.
I hadn’t expected U and V to point to the upper and right edges of the camera plane. I thought they spanned the whole plane instead of just that quarter. Thanks!
With this understanding, I’m now going to replace the camera movement system in the path_tracer sample. I don’t know what the idea behind the zooming is (hold right mouse button), but it changes both (the length of) P and W, which I find rather counterintuitive.

Edit: I’m sure that I simply don’t understand the idea behind the default mouse controls, which change both P and W during a zoom, but I’ve found that the argument --game-cam makes GLUTDisplay handle (mouse) input in a way that I personally find more intuitive and that, more importantly, keeps the focal length (length(W)) constant between camera movements.
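For what it’s worth, a dolly that preserves the focal length only has to translate the eye along the viewing direction without rescaling W itself (my own sketch reusing the Vec3 helpers above, not the actual game-cam code):

// amount > 0 moves forward; length(W), and thus the field of view, stays constant.
void dolly(Vec3& eye, const Vec3& W, float amount)
{
    eye = eye + amount * normalize(W);
}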