memory concept

Hi,
I am relatively new to CUDA. I am trying to create an application in CUDA and would like to understand more about memory allocation in CUDA.

Question 1:
As I was writing my program, I was wondering: when I declare a simple variable like “int” or “double” in the kernel, does that variable reside in host or device memory?

Question 2:
When I pass a single variable (again “int” or “double”) to the kernel as a reference parameter, will it run slowly if I use the parameter a lot within the kernel, since I imagine it will access host memory every time I use it? Should I create a copy inside the kernel for performance?

Thanks,
DWH

Just like on the CPU, each thread in a CUDA kernel has its own stack. So anything that you don’t dynamically allocate or load from global memory is placed on each thread’s stack.
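
For example, in a toy kernel like the one below (just a sketch, the names are made up), the “int” and “double” locals are per-thread, device-side variables; the compiler will usually keep them in registers:

```cuda
// Sketch only: "scale_kernel", "in", "out", "n" are illustrative names.
// "i" and "scale" are plain local variables declared inside the kernel,
// so each thread gets its own copy on the device (typically in registers,
// spilling to thread-local memory if needed) -- never in host memory.
__global__ void scale_kernel(const double* in, double* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // per-thread local int
    double scale = 2.0;                             // per-thread local double
    if (i < n)
        out[i] = in[i] * scale;
}
```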

I’m also not sure if you can pass references to kernels. I’ve never tried so I can’t really comment.

Thank you MutantJohn.

For ordinary CUDA programming without Unified Memory, you cannot pass host variables by reference as kernel parameters.

For the example you give, you can pass an ordinary single “int” or “double” variable by value.
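
Something along these lines (just a sketch with made-up names) is the usual pattern. The scalar kernel arguments are copied to the device when the kernel launches, so reading them repeatedly inside the kernel does not involve any per-access trip back to host memory, and there is no need to make your own copy for performance:

```cuda
// Sketch only: "saxpy", "factor", "d_x", "d_y" are illustrative names.
// "n" and "factor" are passed by value: each is copied once at launch,
// so using them many times inside the kernel is cheap.
__global__ void saxpy(int n, double factor, const double* x, double* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = factor * x[i] + y[i];
}

// Host-side launch: the scalars go by value, the arrays by device pointer.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0, d_x, d_y);
```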