Linking CUDA code to a C# application

Hi,

So today I’m trying to figure out how to compile a C# application (.cs file) together with CUDA code (.cu file).

I’ve got an application in C#, and I also have functions that will be used by this application; these functions are written in CUDA C/C++. So I’m wondering: how can I compile the C# and the CUDA C/C++ together to get one executable?

When I read other topics, I only see people saying “use that lib, it ‘translates’ your C# into CUDA […]” or something like that, but that’s not what I want.

I don’t know much about compilation, or linking, or anything of the sort… So if you know a website or a “how to”, I’d like to read it! Thank you =)

This question has been asked before quite recently.

Common frameworks to combine CUDA and C# include managedCUDA and CUDAfy.

Here is a recent question:

[url]https://devtalk.nvidia.com/default/topic/971066/cuda-setup-and-installation/cuda-on-c-/[/url]

Yes, but I don’t want to write CUDA using C#; I want my C# application to use my CUDA C/C++ code, which is already written!

What you’re showing me is that I can write CUDA using the C# language, via a “library” that “translates” CUDAfy code into CUDA.

managedCUDA allows you to use CUDA kernels and the CUDA runtime API from C#.

Isn’t there any “compilation trick”, like compiling the .cu file as an object and using it in the C# compilation, or, I don’t know, making it a sort of .dll to be used by C#?

You can certainly build DLLs with CUDA. Initially, all the libraries shipping with CUDA (e.g. CUBLAS, CUFFT) were only available in DLL form. How you would go about invoking functions in a DLL from C#, I don’t have the slightest idea as I have never used C#.
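As a sketch of the DLL route described above (all names here — the kernel, the exported function, the file names — are hypothetical examples, not code from this thread), a .cu file can export a plain C entry point that launches the kernel; nvcc builds it into a DLL, and C# can then call that entry point:

```cuda
// kernels.cu -- build with: nvcc -shared -o kernels.dll kernels.cu
// (on Linux: nvcc -shared -Xcompiler -fPIC -o libkernels.so kernels.cu,
//  and drop the Windows-specific __declspec(dllexport))
#include <cuda_runtime.h>

__global__ void addKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// extern "C" disables C++ name mangling, so the exported symbol is
// simply "vectorAdd" -- a name C# can bind to by string.
extern "C" __declspec(dllexport)
int vectorAdd(const float* a, const float* b, float* c, int n)
{
    float *dA, *dB, *dC;
    size_t bytes = (size_t)n * sizeof(float);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);
    addKernel<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return (int)cudaGetLastError();  // 0 on success
}
```

On the C# side, the standard mechanism for calling into a native DLL is P/Invoke, i.e. declaring the function with `[DllImport("kernels.dll")] static extern int vectorAdd(float[] a, float[] b, float[] c, int n);` — but as noted above, the C# specifics are outside my experience, so treat that declaration as an unverified sketch.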

Is there any good documentation about managedCUDA other than their GitHub?

I tried to use it following “NVidia CUDA ‘Hello world’ in managed C# and F# with use of managedCUDA” on AlgoSlaves, and it works well with that example, but it seems that we need to use a Func&lt;in T1, in T2, out TResult&gt; delegate to get the result of a kernel function. Does that mean I can’t launch a function that takes more than two parameters? Weird…

Because in my .cu file (the one where the kernels are defined) they are declared like:

__global__ void kernelSimulation(Propagator *CUDA_prop, EarthCoordinates *CUDA_earthStation, calcAngle angle, double eConstante)
__global__ void kernelPosGeo(Cartesian posStartGeo, Cartesian *CUDA_geo)

And Cartesian/Propagator/EarthCoordinates/calcAngle are classes!

If I understand correctly what managedCUDA is, it’s as if managedCUDA replaces the main of my .cu file.

So if I use managedCUDA, I need to keep the .ptx file previously compiled from my .cu code. But if it’s a multi-file app, will I get one .ptx? Or more?

And when I was reading the “how to” on that website, I saw that he chose “_Z6kerneliiPi” as the first argument of CudaKernel(), but in another program the name was different. How do we determine the name of our kernel function?

Thanks a lot

Recall that CUDA belongs to the C++ family of languages. The name of the kernel in the object file is constructed from the name the programmer chose for the function, plus the C++ name decoration added during name mangling to encode the types of the function’s arguments (note that name decoration schemes are not standardized).

To see the decorated name of a kernel, you can run cuobjdump --dump-elf-symbols on the executable. You can demangle decorated names through numerous online tools, e.g. [url]https://demangler.com/[/url]. For example, “_Z6kerneliiPi” resolves to “kernel(int, int, int*)”.

Thanks for the tips, but how do I tell the difference between:

_Z6kernelPfPf

and

_Z6kernelPfS_

?

And for the kernels that take class parameters instead of int/double/float/etc., how should I launch them from managedCUDA?