Timeout with 50+ lights in the scene

Hello.

I have a problem when my scene has 50+ lights. The procedure that loops over all the lights in a single launch locks up the process, and the timeout callback function is never called (I think). If fewer lights are used, everything works fine.
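Simplified, the loop looks roughly like this (names are illustrative, not my exact code):

[code]
#include <optix_world.h>

using namespace optix;

rtBuffer<float3> lightPositions;  // one entry per light

// Every pixel shades against every light in a single launch, so the
// launch time grows with the light count until it hits the timeout.
static __device__ float3 directLighting(const float3& hitPoint)
{
    float3 result = make_float3(0.0f);
    for (unsigned int i = 0; i < lightPositions.size(); ++i)
    {
        // ... cast one shadow ray per light and accumulate its contribution ...
    }
    return result;
}
[/code]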

I thought of rendering sets of lights in different frames, but that would require a large refactoring.

Any idea how I can solve this?

OptiX 4.0.2
GeForce GTX 1080
All drivers updated.

Thanks!

This is on Windows? I believe your options are either (1) make the launches shorter so they don’t time out, or (2) move the GPU to a non-display PCIe slot and put it in TCC mode. I think Detlef has posted information on (2) before, so I’ll let him reply to that.

Rather than sampling all 50 lights, a common approach is to sample a small number of them randomly per frame, weighted by a heuristic such as area or solid angle that estimates each light’s importance. Then, as long as you don’t move the camera, you keep accumulating sub-frames, and the noise in the scene converges as all lights eventually get sampled.
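Here’s a minimal sketch of that selection step, assuming a CDF over per-light weights built on the host; the buffer name and the rnd() and estimateDirectLight() helpers are placeholders rather than actual OptiX API:

[code]
#include <optix_world.h>

rtBuffer<float> lightCdf;  // inclusive prefix sums of the per-light weights

// Pick one light from the CDF; 'u' is uniform in [0,1). Returns the light
// index and writes its selection probability to 'pdf', so the caller can
// divide that light's contribution by it and keep the estimate unbiased.
static __device__ int sampleLightIndex(float u, float& pdf)
{
    const int   n      = static_cast<int>(lightCdf.size());
    const float total  = lightCdf[n - 1];
    const float target = u * total;

    int lo = 0, hi = n - 1;
    while (lo < hi)  // binary search for the first CDF entry >= target
    {
        const int mid = (lo + hi) / 2;
        if (lightCdf[mid] < target) lo = mid + 1;
        else                        hi = mid;
    }

    const float prev = (lo > 0) ? lightCdf[lo - 1] : 0.0f;
    pdf = (lightCdf[lo] - prev) / total;
    return lo;
}

// Per sample in the closest-hit program:
//   float pdf;
//   const int i = sampleLightIndex(rnd(seed), pdf);
//   radiance += estimateDirectLight(i, hitPoint) / pdf;  // one shadow ray
[/code]

With one shadow ray per sample instead of 50+, each launch stays short, and the accumulated sub-frames converge to the same image.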

There are advanced methods to improve convergence by discovering which light paths are important as time goes on, although I think these can introduce bias if you’re not careful. This is still an active research area in production rendering. It’s tough to handle a city scene at night with thousands of potentially visible lights. But random sampling will at least converge correctly.

Unfortunately you cannot switch a consumer board into Tesla Compute Cluster (TCC) driver mode and you wouldn’t want that when it’s the only board in your Windows system. ;-) There is no display attached to that driver mode.

Another method to work around the 2 second Timeout Detection and Recovery (TDR) limit would be to simply increase the TdrDelay registry value, which should only be done during development! (A reboot might be needed for it to take effect.)
[url]https://msdn.microsoft.com/en-us/windows/hardware/drivers/display/timeout-detection-and-recovery[/url]
But let Microsoft explain that in the first two sentences under the “TDR Registry Keys” link on the site above.
[url]https://msdn.microsoft.com/en-us/windows/hardware/drivers/display/tdr-registry-keys[/url]

Yes, the same problem has come up in the past with many virtual point lights. Have a look at this thread for one incremental solution:
[url]https://devtalk.nvidia.com/default/topic/806609/?comment=4435008[/url]

Here’s another thread with a similar discussion about how to avoid timeouts, just for a completely different rendering topic:
[url]https://devtalk.nvidia.com/default/topic/923772/?comment=4861259[/url]

That’s really the only viable programming strategy I can recommend for such problems: do less work more often.
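For example, here’s a host-side sketch of that idea, splitting one full-frame launch into tiles so every launch returns to Windows quickly. The tile_origin variable is hypothetical, something your ray generation program would add to its launch index when addressing the output buffer:

[code]
#include <optixu/optixpp_namespace.h>
#include <algorithm>

// Split one big launch into 256x256 tiles so each sub-launch finishes
// well under the ~2 second TDR limit.
void renderTiled(optix::Context context, unsigned int width, unsigned int height)
{
    const unsigned int tileW = 256;
    const unsigned int tileH = 256;

    for (unsigned int y = 0; y < height; y += tileH)
    {
        for (unsigned int x = 0; x < width; x += tileW)
        {
            context["tile_origin"]->setUint(x, y);  // raygen adds this offset
            context->launch(0,                      // entry point index
                            std::min(tileW, width  - x),
                            std::min(tileH, height - y));
        }
    }
}
[/code]

The same applies to the light loop itself: process a subset of the lights (or samples) per launch and accumulate the results.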

Thank you all for the replies, I’ll go with the incremental approach.

Another question. I did some tests comparing an uber shader against more specialized shaders and got very different results. In my tests the uber shader, with all materials and branches in one program, got 2x the performance of the specialized shaders.

For example, I have split the large shader into

matte,
glossy,
transparency,
transparencyRefraction,
emissive
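Roughly, the uber shader I kept is structured like this (heavily simplified; the names and the material_type variable are just illustrative):

[code]
#include <optix_world.h>

enum MaterialType { MATTE = 0, GLOSSY, TRANSPARENCY, TRANSPARENCY_REFRACTION, EMISSIVE };

rtDeclareVariable(int, material_type, , );  // set per material on the host side

RT_PROGRAM void uber_closest_hit()
{
    // One program with one divergent switch, versus five separate
    // closest-hit programs that trade this branch for divergence
    // between different programs.
    switch (material_type)
    {
    case MATTE:                   /* diffuse shading */     break;
    case GLOSSY:                  /* glossy reflection */   break;
    case TRANSPARENCY:            /* alpha transparency */  break;
    case TRANSPARENCY_REFRACTION: /* refraction rays */     break;
    case EMISSIVE:                /* return emission */     break;
    }
}
[/code]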

The programming guide says:

“Each new Program object can introduce execution divergence. Try to reuse the same program with different variable values. However, don’t take this idea too far and attempt to create an “über shader”. This will create execution divergence within the program. Experiment with your scene to find the right balance.”

But I haven’t found a better choice than the uber shader when performance is the target (it always is :)).

Any ideas on what usually results in better performance? I thought of splitting the materials into those with textures/normal maps and those without.