Progressive launches for scientific applications

I’m interested in using the OptiX progressive API for generating HDR images. However, a few built-in aspects of the progressive API seem intended for low-dynamic-range display, and I’m wondering whether it’s possible to work around them somehow.

First question: Does the compositing happen before or after the data is tone mapped? If the compositing is performed on the original HDR data, is there any way I can change how it’s done? For instance, store a sum instead of an average?

Second, can I change the tonemap operator? The documentation says the operator is:

final_value = clamp( pow( hdr_value, 1/gamma ), 0, 1 )

But the clamp operation discards a lot of data, since my HDR values can be much greater than 1. To preserve that information, I’d like to use a false-color or some other tone map. If the tone map is applied after compositing, I might even want the option to switch tone maps without restarting the progressive launch (though I suppose I could also save and reuse the original output buffer for this, if I could change the compositing function).

Alternatively, does anyone know of a strategy for encoding data directly into the stream buffer so that I can interpret it later? For instance, if I want to transfer a set of floating-point values back to the CPU, could I read them into the stream buffer as bit strings? I know that defeats the compression, but I can give up a little speed for this.
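By "bit strings" I mean something like the usual memcpy round trip (plain C for illustration, nothing OptiX-specific):

```c
#include <stdint.h>
#include <string.h>

/* Round-trip a float through raw bytes via memcpy
   (well-defined type punning in C). */
static void float_to_bytes(float f, uint8_t out[4])
{
    memcpy(out, &f, sizeof f);
}

static float bytes_to_float(const uint8_t in[4])
{
    float f;
    memcpy(&f, in, sizeof f);
    return f;
}
```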

And third question: Is there a limit to the number of stream buffers I can create from one launch? Could a ray generation program write to multiple output buffers that each had an associated stream buffer?

The initial progressive API implementation focused on image-synthesis algorithms whose individual results are commutative, i.e. they can be averaged into a final image in any order.

1.) Compositing happens on the full HDR data. Tone mapping is done during the conversion to the LDR stream buffer.

2.) The compositor and tone map operations are not programmable in this first version.
As a special case, gamma == 1.0f omits the power calculation for performance reasons.

Note that you can also get the full HDR averaged result (see the last paragraph of chapter 3.6.3 in the OptiX Programming Guide); that is required for any post-processing pipeline anyway. When running with the progressive API, simply read back the float4 HDR buffer you bound to the stream buffer: instead of the individual single-frame output buffers, it contains the averaged HDR result.
Getting that averaged HDR buffer will stop the progressive launch though, so this needs to be your final frame.

No, you cannot write to the stream buffer directly. Data is only written there by the tone-mapping step from the averaged HDR buffer.
I would not consider this feasible without programmable compositor and tone mapper domains, and even then you would need lossless compression for the data transfer.

3.) It should be possible to write to multiple output buffers, each with a stream buffer assigned. The client checks with rtBufferGetProgressiveUpdateReady() whether the queried stream is ready. Keep in mind that every stream buffer will be composited and tone mapped, which costs performance.
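A rough pseudocode sketch of that client-side polling, under the assumption that each output buffer has its own bound stream buffer (untested; consult the OptiX host API reference for the exact signatures):

```
int ready = 0;
unsigned int subframe_count = 0;
unsigned int max_subframes = 0;

for (each RTbuffer stream in my_stream_buffers)
{
    /* Check whether this stream has a new composited frame. */
    rtBufferGetProgressiveUpdateReady(stream, &ready, &subframe_count, &max_subframes);
    if (ready)
    {
        /* Map and consume this stream's latest frame here. */
    }
}
```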

I would also recommend using the progressive API only on NVIDIA VCA hardware. Running the progressive API on a local system is meant for development, so that you can code a VCA-capable program without being constantly logged in to a VCA machine.
On a local system, compositing and tone mapping inside a custom ray generation program or a post-processing shader will be faster than using the progressive API.