Light Linked List broken on GeForce cards

Hello,
I developed the Light Linked List algorithm and I am currently running it flawlessly on consoles and PCs equipped with Radeons.
However, when I tried to run it on two new machines, one equipped with a GeForce GTX 670 and the other with a GeForce GTX 980, I got terrible artifacts.
I tried debugging my app with the latest NVIDIA Nsight, but the driver crashes when I attempt to capture a frame.
Here’s a link to a light-weight LLL demo:

I would appreciate some help solving this problem,
Thank you,
Abdul

I suspect the issues that I am seeing are related to the implementation of the InterlockedExchange function in a pixel shader.
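For context, the per-pixel insertion is roughly shaped like this (a simplified sketch with placeholder buffer names, not the demo's exact code):

RWStructuredBuffer<uint>  g_LightHead  : register(u1); // one list head per screen pixel
RWStructuredBuffer<uint2> g_LightNodes : register(u2); // x = light index, y = next node
RWStructuredBuffer<uint>  g_NodeCount  : register(u3); // single running node counter

void InsertLightNode(uint2 vPixel, uint iLightIndex, uint iScreenWidth)
{
    uint iPixel = vPixel.y * iScreenWidth + vPixel.x;

    // Allocate a fresh node for this light fragment.
    uint iNewNode;
    InterlockedAdd(g_NodeCount[0], 1, iNewNode);

    // Atomically make it the new head of this pixel's list and grab the old head.
    uint iPrevHead;
    InterlockedExchange(g_LightHead[iPixel], iNewNode, iPrevHead);

    // Link the new node to the rest of the list.
    g_LightNodes[iNewNode] = uint2(iLightIndex, iPrevHead);
}
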
Btw you can reload the shaders at runtime by pressing R.
Abdul

I changed the implementation and I no longer suspect the atomic functions: it’s almost as if some light shell primitives are not rasterized or something…
[url]http://i57.tinypic.com/359wpcw.png[/url]

Well, through sheer grinding, I found out what the issue was:
I am passing a unique light index in the 0-255 range from the vertex shader to the pixel shader. The light index is the same for all vertices in a given draw call.
Apparently, when my index value makes it to the pixel shader, it sometimes “loses” precision and drops just below the integer, so instead of, let’s say, index “5” I end up getting 4.xxx, which, when converted to a uint, turns into “4”.
My quick fix was to nudge the index in the vertex shader:

output.fLightIndex = instance.m_LightIndex + 0.5f;
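
On the pixel shader side the value just gets truncated back to an integer, so the bias keeps it on the correct side of the boundary, roughly:

uint iLightIndex = (uint)input.fLightIndex; // with the +0.5 bias 5.4999 still truncates to 5; without it 4.9999 dropped to 4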

Abdul

Or use an unsigned int data type and the “flat” interpolation modifier to keep it intact.
From 2010: [url]Passing integer from vertex to fragment shader? - OpenGL - Khronos Forums[/url]

Hey Detlef,
Thank you for the input. Since I am using DirectX and HLSL, I ended up using an unsigned int and the “nointerpolation” modifier:

struct VS_OUTPUT_LIGHT
{
    float4 vPosition : SV_POSITION;
    nointerpolation uint iLightIndex : TEXCOORD0;
};
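
The vertex shader then just copies the per-instance index straight through, something like this (simplified, input names made up):

VS_OUTPUT_LIGHT VSLightShell(float4 vPosition : POSITION, uint iLightIndex : LIGHTINDEX)
{
    VS_OUTPUT_LIGHT output;
    output.vPosition   = vPosition;   // the real shader applies the usual transforms here
    output.iLightIndex = iLightIndex; // reaches the pixel shader bit-exact, no +0.5 bias needed
    return output;
}

With nointerpolation the rasterizer takes the value from a single provoking vertex instead of interpolating it, which is exactly what a per-draw constant index needs.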

Abdul