Library for complex numbers in device kernels

What is the best library to use for complex number algebra in device kernels? I need capability beyond multiplication and addition, such as powers of complex numbers and division.

I see that there is thrust::complex, but it looks dated. Is there something like std::complex?
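For reference, thrust::complex does appear to cover the operations I need in device code; here is a rough sketch of what I have in mind (demo_kernel and the values are just placeholders). I am mainly wondering whether there is something more current:

```cpp
#include <thrust/complex.h>

// sketch: the operations I need, written against thrust::complex
__global__ void demo_kernel(thrust::complex<double>* out)
{
    thrust::complex<double> a(3.0, 4.0);
    thrust::complex<double> b(1.0, -2.0);
    out[0] = a / b;                // complex division
    out[1] = thrust::sqrt(a);      // principal square root
    out[2] = thrust::pow(a, 1.5);  // complex raised to a real power
}
```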

Take a look at cuComplex.h: it defines float/double complex types and various functions/operations.
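A minimal device-side sketch of what the header offers (cucomplex_demo is a made-up name, and the values are arbitrary):

```cpp
#include <cuComplex.h>

// cuComplex.h provides construction, basic arithmetic, and magnitude
__global__ void cucomplex_demo(cuDoubleComplex* out)
{
    cuDoubleComplex a = make_cuDoubleComplex(3.0, 4.0);
    cuDoubleComplex b = make_cuDoubleComplex(1.0, -2.0);
    out[0] = cuCadd(a, b);  // addition
    out[1] = cuCmul(a, b);  // multiplication
    out[2] = cuCdiv(a, b);  // division is provided as well
    out[3] = make_cuDoubleComplex(cuCabs(a), 0.0);  // magnitude
}
```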

@episteme, it looks like there are no power functions in there (sqrt, pow(complex, float), etc.).
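In the meantime I suppose one could roll those on top of cuComplex.h via the polar form z^w = |z|^w · e^(i·w·arg(z)); an untested sketch (cpow_real is my own name, and this is not tuned for accuracy near branch cuts or for overflow in the magnitude):

```cpp
#include <cuComplex.h>
#include <math.h>

// sketch: complex base raised to a real power, via the polar form
__device__ cuDoubleComplex cpow_real(cuDoubleComplex z, double w)
{
    double r     = pow(cuCabs(z), w);                  // |z|^w
    double theta = w * atan2(cuCimag(z), cuCreal(z));  // w * arg(z)
    return make_cuDoubleComplex(r * cos(theta), r * sin(theta));
}
```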

You might want to take a look at the Bluebird Library (http://www.orangeowlsolutions.com/bluebird), which is under GPL. I just checked: it does have support for complex pow(), although a comment in the code says “to be optimized”, so it might not be particularly fast. I have not used this library myself, so I can only provide a pointer here; you will have to make your own determination as to its suitability for your use case.

You may also consider filing an enhancement request with NVIDIA to add complex support to the CUDA standard math library. Enhancement requests can be filed through the bug reporting form linked from the CUDA registered developer website; simply prefix the synopsis with “RFE:”.

@njuffa thank you! I'm not so concerned about speed right now, but numerical accuracy is important.

I will file a “bug request” next week.

Not sure if this is naive or not, but if numerical accuracy is your true desire, it might be more worthwhile to stick with the CPU. There are already arbitrary-precision libraries out there, and that kind of computation maps better to the CPU than it does to the GPU. Keep in mind, CUDA pushed for a 16-bit floating-point data type, so if that's any indication…

@mutantjohn NVIDIA GPUs support the IEEE 754 standard. That should be fine.

CUDA features are first and foremost driven by customer demand, and that applies to FP16 (half precision) as well. Filing enhancement requests is one way of expressing customer demand.

Since NVIDIA is a for-profit enterprise that provides CUDA and its associated libraries free of charge, one can naturally assume that the ability to drive GPU sales is one important criterion in the prioritization of feature requests by customers. As deep learning is a multi-billion dollar sales opportunity, NVIDIA’s push with respect to FP16 and machine learning libraries that make use of it is therefore easily explained.

As for a multiple-precision library for CUDA, I would suggest looking at the third-party library CAMPARY: http://homepages.laas.fr/mmjoldes/campary/. Again, I have not used the library myself.

CAMPARY paper: https://hal.archives-ouvertes.fr/hal-01312858/
related paper on algorithms: “Arithmetic Algorithms for Extended Precision Using Floating-Point Expansions” (IEEE Xplore)

CAMPARY presentation slide deck: http://perso.ens-lyon.fr/valentina.popescu/Talks/RAIM15.pdf
older slide deck: http://perso.ens-lyon.fr/valentina.popescu/Talks/CAPA14.pdf
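For background, libraries like CAMPARY build extended precision from floating-point expansions, whose basic building blocks are error-free transformations such as Knuth's TwoSum and an FMA-based TwoProd. A textbook sketch of those two primitives (this illustrates the underlying idea only; it is not CAMPARY's actual API):

```cpp
// error-free transformations: return the rounded result plus its exact error
__device__ double2 two_sum(double a, double b)
{
    double s   = a + b;
    double bv  = s - a;
    double err = (a - (s - bv)) + (b - bv);
    return make_double2(s, err);   // s + err equals a + b exactly
}

__device__ double2 two_prod(double a, double b)
{
    double p   = a * b;
    double err = fma(a, b, -p);    // exact residual via fused multiply-add
    return make_double2(p, err);
}
```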

Wow, I’ve actually never been happier to be wrong! Thanks, njuffa! That link is awesome.