Any resources on the instruction cache?

Hi,

Can anyone recommend readings about the instruction cache?

Thanks,
Susan

Old, but presumably not too much has changed since:

Demystifying GPU Microarchitecture through Microbenchmarking
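In case a concrete starting point helps: the core trick in that paper is to time a long, fully unrolled run of dependent arithmetic while growing the compiled code footprint, and to look for a jump in cycles per instruction once the footprint exceeds an instruction-cache level. Below is a rough, untested sketch of that idea, not code from the paper; the template parameter UNROLL, the constants, and the assumption that #pragma unroll fully unrolls the loop for these trip counts are my own, and the generated SASS should be checked (e.g. with cuobjdump) before trusting the numbers.

#include <cstdio>
#include <cuda_runtime.h>

// Dependent FMA chain; with a compile-time trip count, #pragma unroll
// requests full unrolling, so the compiled code size scales with UNROLL.
template<int UNROLL>
__global__ void icache_probe(float a, float b, float *out, long long *cycles)
{
    float x = out[threadIdx.x];
    long long start = clock64();
    #pragma unroll
    for (int i = 0; i < UNROLL; ++i)
        x = x * a + b;                 // one dependent FMA per unrolled iteration
    long long stop = clock64();
    out[threadIdx.x] = x;              // keep the chain from being optimized away
    if (threadIdx.x == 0)
        *cycles = stop - start;
}

template<int UNROLL>
void run()
{
    float *d_out = nullptr;
    long long *d_cycles = nullptr, cycles = 0;
    cudaMalloc(&d_out, 32 * sizeof(float));
    cudaMalloc(&d_cycles, sizeof(long long));
    cudaMemset(d_out, 0, 32 * sizeof(float));
    // One block, one warp: no other warps compete for the fetch path.
    icache_probe<UNROLL><<<1, 32>>>(1.000001f, 0.5f, d_out, d_cycles);
    cudaMemcpy(&cycles, d_cycles, sizeof(long long), cudaMemcpyDeviceToHost);
    printf("UNROLL=%6d  cycles/op=%.2f\n", UNROLL, (double)cycles / UNROLL);
    cudaFree(d_out);
    cudaFree(d_cycles);
}

int main()
{
    // A jump in cycles/op as the unrolled body grows suggests the code
    // footprint has exceeded an instruction-cache level.
    run<256>(); run<1024>(); run<4096>(); run<16384>();
    return 0;
}

The single-block, single-warp launch is deliberate: with no other resident warps sharing the fetch path, the transition should be easier to see.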

Cool! Thank you!

Susan

One more thing to confirm: are warps blocked at a barrier not considered for instruction fetching? I saw the following statement in the paper "Effect of instruction fetch and memory scheduling on GPU performance" (HPArch):

“Warps that are blocked at a barrier, are waiting for loads/stores to complete, or are waiting for a branch to be resolved are not considered for fetching.”

Thanks,
Susan

Note that the statement you are citing from the paper is a description of the simulated architecture, not necessarily of any actual NVIDIA card.

I would expect that, due to the pipelined nature of instruction execution, blocked warps on actual NVIDIA cards can still have a handful of follow-up instructions fetched, simply because at the time of instruction fetch it is not yet known whether a warp is blocked or not.