When I compile my application with the verbose option on, it reports that I use no local memory.
However, when I profile with the memory statistics experiments on, the “overview” tab shows a lot of local memory traffic, and so does the “local” tab.
In the “caches” tab, the “local hit rate” is 100% for loads and 99.95% for stores. I would have expected it to be 0%.
Does anyone have an idea where this traffic would be coming from?
Would you mind posting the output from the compiler and your kernel source? Are you by chance using any arrays in your kernels? If not specifically indexed at compile time, arrays will use local memory.
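For example, something like this hypothetical sketch (the kernel and names are made up, not from the original code) will typically generate local memory traffic, because registers are not addressable and the compiler cannot map a dynamically indexed array onto them:

```cuda
// Hypothetical sketch: a per-thread array indexed with a value that is
// not known at compile time usually ends up in local memory.
__global__ void dynamicIndexing(const int *in, int *out, int n)
{
    int scratch[8];  // per-thread array
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    for (int i = 0; i < 8; ++i)
        scratch[i] = in[tid] + i;

    // The index depends on runtime data, so the compiler cannot keep
    // 'scratch' entirely in registers -> local loads/stores appear.
    out[tid] = scratch[in[tid] & 7];
}
```

If every index were a compile-time constant, the compiler could promote the array elements to registers and no local memory would be used.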
Hi, and thanks for the reply.
Unfortunately this is commercial code so I’m not at liberty to copy anything. I realize this limits your analysis abilities.
Register usage is at 46 (with maxrregcount set to 0) and a block size of 128, so there should be no spilling (?).
The kernel is not using any array, but it has a struct with 4 members and another with 12 members. Are those considered indexable arrays?
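As a hedged illustration (hypothetical names, not the actual kernel): a struct accessed member-by-member can normally live entirely in registers; it only behaves like an indexable array if the compiler is forced to put it in addressable memory, e.g. by taking its address or indexing through it:

```cuda
// Hypothetical sketch: named member accesses on a struct usually map to
// registers and generate no local memory traffic.
struct Params { float a, b, c, d; };

__global__ void structInRegisters(const Params *p, float *out)
{
    Params local = p[0];                   // each member can sit in a register
    out[threadIdx.x] = local.a + local.b;  // named accesses: no local memory

    // By contrast, taking the struct's address (or indexing into it through
    // a pointer) would force it into addressable local memory, e.g.:
    //   float *q = &local.a;
    //   out[threadIdx.x] = q[threadIdx.x & 3];  // likely local loads
}
```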
Also, the “local memory per thread” column in the Nsight profiler's “CUDA Launches” window states that it uses 0 bytes.
Hmm, at 46, your registers shouldn’t be spilling into local memory (although it could still occur as Greg noted above). I was unable to reproduce this myself. Would you mind posting screenshots of the overview, local, and caches tabs? If this is too sensitive for this public forum, I can send you a private message with my contact information.
Which GPU, Nsight version, and GPU driver are you using? And which CUDA Compute Capability version are you targeting?
Nsight VSE 3.0 will show all memory transactions correlated to your source code. In order to see this information you have to capture the CUDA Memory Transactions experiment. This was captured in your linked screenshots.
1. Navigate to the CUDA Launches page.
2. In the top table, select the kernel of interest.
3. In the bottom left pane, select CUDA Source Profiler\CUDA Memory Transactions.
4. In the bottom right pane, in the Memory Transactions table, click the filter icon on the Memory Type column and select Local and Generic, Local.
The table will now show all local memory accesses. You can click on the Line # cell to jump to the source line in the CUDA Source View to investigate what parts of your code are accessing the local memory.
Right, I had seen that, but I only had a view of the SASS (?) code, and it's a few thousand lines long, so I didn't feel up to the challenge! But once I compiled in debug mode and did a new trace, I had access to the original source code.
By the way, that is pretty damn cool.
Back to business.
I have local stores attributed to the signature of the kernel: the source line containing the signature is linked to 33 local stores. That I do not understand.
A line as simple as
unsigned int kvoxel(0u);
is associated with a local load, and
float3 onevariable = othervariable.float3member;
is associated with 3 local loads and 3 local stores.
By the way, when you right-click the source view window, Visual Studio crashes.
But right before that crash, I even saw a local store operation within the math_functions.cu file.
That code was using the old ‘volatile’ trick, which the programming guide clearly says could hurt performance “in the future” :) After removing the volatile qualifier, the local memory usage is gone.
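For anyone hitting the same thing, here is a hedged reconstruction of what the ‘volatile’ trick looks like (the kernel is hypothetical, not the original code). Marking a variable volatile tells the compiler every access must actually be performed, which can prevent it from keeping the value in a register:

```cuda
// Hypothetical sketch of the old 'volatile' trick and its removal.
__global__ void withVolatile(float *out)
{
    // volatile forbids caching the value in a register across accesses,
    // so the compiler may back it with addressable (local) memory.
    volatile unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    out[idx] = 0.0f;
}

__global__ void withoutVolatile(float *out)
{
    // Without volatile, idx stays in a register: no local traffic.
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    out[idx] = 0.0f;
}
```

Compiling with `-Xptxas -v` should show the local memory (lmem) usage difference, though the exact effect depends on the compiler version and target architecture.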
I still do not fully understand it, though, as the examples above did not have anything to do with the volatile variable (which I used for the idx variable). I'm also quite sure I wasn't reading 2.18 GB worth of idx values, as per the screenshot a few messages up.