Simple XYZ Projection, XYZ to Screen Space (for the OptiX Path Tracer)

Good evening,

I have been experimenting for some time with the “optixPathTracer” example code.

I understand how a ray is shot from the “eye” through a given pixel (launch_index).

But I need to do the reverse process: given a world position (x, y, z), I need to find the pixel it projects to (when one exists).

Is there a simple and efficient way in OptiX to get the pixel (uint2) associated with this position (x, y, z), i.e. to perform the projection of the point for the current camera and context?

Thank you very much in advance.

That’s pure linear algebra.

Let’s assume you have the pinhole camera definition from the OptiX SDK examples, which uses a camera position and a UVW basis to describe the projection.
There is an image of how it does that in my most recent GTC presentation, on slide 18: [url]http://on-demand-gtc.gputechconf.com/gtc-quicklink/4JAjAp[/url]. (Follow the link in the sticky post at the top of this sub-forum for the accompanying example code.)

Then you can simply construct the line between the camera position and the point in world space and find the intersection with the virtual camera plane spanned by the UVW vectors.
That means you do a “basis transformation” of the world-space vector into camera space, which is a 3x3 matrix * vector multiplication where the UVW vectors are the column vectors of the matrix. (Note that UVW is not an orthonormal basis.)
EDIT: Oops, that was the wrong transformation direction. The projection into that basis is given by the dot products of your direction vector with the basis vectors, each divided by the squared length of the respective basis vector, since UVW is orthogonal but not normalized.
The resulting xy-values at z == 1 (i.e. at the W-vector distance in camera space) are your normalized uv-coordinates on that plane. If these are in the range [-1, 1], your point is inside the view, and you can map them onto the underlying pixel grid with a simple scale and bias.
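
As a rough sketch in code (the helper name worldToPixel and its signature are my own invention, not SDK API; the eye/U/V/W variable names match the pinhole camera of the SDK samples, and I’m assuming U, V and W are mutually orthogonal, as set up by the sutil camera code, just not normalized):

[code]
#include <optix_world.h>

using namespace optix;

// Sketch: project a world-space point into pixel coordinates for the
// pinhole camera of the OptiX SDK samples (eye position plus a UVW basis
// whose vectors are mutually orthogonal but not normalized).
// Returns false if the point is behind the camera or outside the view.
static __device__ bool worldToPixel(const float3& point,      // world-space position
                                    const float3& eye,        // camera position
                                    const float3& U,          // camera "right"
                                    const float3& V,          // camera "up"
                                    const float3& W,          // camera "forward"
                                    const uint2&  launch_dim, // framebuffer size
                                    uint2&        pixel)      // resulting pixel
{
  const float3 dir = point - eye; // line from the camera position to the point

  // Basis coefficient along W; the division by dot(W, W) accounts for W not
  // being a unit vector. w <= 0 means the point is behind the camera.
  const float w = dot(dir, W) / dot(W, W);
  if (w <= 0.0f)
    return false;

  // Normalized uv-coordinates on the virtual camera plane at z == 1.
  const float u = dot(dir, U) / dot(U, U) / w;
  const float v = dot(dir, V) / dot(V, V) / w;
  if (u < -1.0f || u > 1.0f || v < -1.0f || v > 1.0f)
    return false; // outside the view

  // Scale and bias from [-1, 1] onto the underlying pixel grid.
  pixel.x = (unsigned int)((u * 0.5f + 0.5f) * launch_dim.x);
  pixel.y = (unsigned int)((v * 0.5f + 0.5f) * launch_dim.y);
  return true;
}
[/code]

If it returns true, pixel is the framebuffer coordinate your point (x, y, z) falls on.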

Thanks for the reply,

Actually, I am trying to mark (with a color code) certain pixels of the frame buffer to show information that I keep during path tracing, without slowing down the process.

My first attempt was something similar to the process you describe.

Applying an algebraic projection method, I got an offset in my projection. Apparently my re-projected pixels did not exactly match the geometry (obviously, I was doing something wrong).

Anyway, a few days ago I managed to solve it with another method (integrating it into the pinhole_camera program).

From what I’m seeing, OptiX is very low-level in some aspects. That seems great and allows us to optimize our software further.

Thanks a lot.

OK, if you need to tag pixels with information you store along a ray path, that information would go into the per-ray payload struct you define, and it can then be written to any output you like inside the ray generation program.
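
For example, here is a minimal sketch with the old OptiX device API as used by optixPathTracer (the PerRayData layout, the tag field, and the magenta color code are just placeholders for whatever information you actually track along the path):

[code]
#include <optix_world.h>

using namespace optix;

// Hypothetical payload: the path radiance plus a tag which any hit program
// along the path may set.
struct PerRayData
{
  float3       radiance;
  unsigned int tag;
};

rtBuffer<float4, 2>                         output_buffer;
rtDeclareVariable(uint2,    launch_index, rtLaunchIndex, );
rtDeclareVariable(rtObject, top_object, , );

RT_PROGRAM void ray_generation()
{
  PerRayData prd;
  prd.radiance = make_float3(0.0f);
  prd.tag      = 0;

  // Build and trace the primary ray exactly as in the optixPathTracer sample:
  // rtTrace(top_object, ray, prd);  // hit programs may set prd.tag

  float4 result = make_float4(prd.radiance, 1.0f);
  if (prd.tag != 0) // overwrite tagged pixels with a color code (magenta here)
    result = make_float4(1.0f, 0.0f, 1.0f, 1.0f);

  output_buffer[launch_index] = result;
}
[/code]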

If you’re using the optixPathTracer as a foundation for your own application and you’re just beginning with OptiX, you might want to look at the OptiX Introduction examples I wrote for GTC 2018, which explain step by step how to arrive at a nice and flexible path tracer architecture.

[quote]Introduction examples I wrote for GTC 2018 which explain how to arrive at a nice and flexible path tracer architecture step by step[/quote]

Wow, thanks for an extremely helpful talk. The new introduction examples are a great learning resource … they all compiled and ran on my machine, BTW. Dang! There goes my weekend. :)

That’s right, @Bird33!
@Detlef did a great job with the samples and the GTC presentation.