One CUDA for all

Hello,

I’m not a programmer, but I have a question:
I noticed that developers who want to use CUDA end up developing a separate application for each use: for example, a special TMPGEnc and a special Photoshop just for CUDA. I suppose that’s because the GPU can’t handle more than one task, unlike the CPU.

Wouldn’t it be possible to develop ONLY ONE CUDA APPLICATION from which you could choose which application should use the GPU? That is, for the previous example, instead of developing a special Photoshop or TMPGEnc, you would launch a single central CUDA application and then decide that Photoshop or TMPGEnc should use the GPU.

Is what I’m saying nonsense? :huh:

Thanks,

Ok,

After reading the NVIDIA documentation, I understand much better why what I’m asking for seems impossible.
Unlike the CPU, which runs a single thread at a time, the GPU runs many threads in parallel.
So if you want to optimize an application, you need to know which calculations it performs so that they can be reorganized to run in parallel. This is why the program has to be rewritten.
Now I see that even if we managed to automatically hand operations from the CPU to the GPU, those calculations would still be single-threaded and would probably end up taking more time.
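For illustration only, here is a minimal sketch of what that rewriting looks like, using a made-up element-wise addition; the names vecAddCPU/vecAddGPU, the array size, and the launch configuration are arbitrary choices, not anything from NVIDIA’s documentation:

```cpp
// Illustrative only: the same element-wise addition written for the CPU
// (one thread walking a loop) and as a CUDA kernel (many threads, one
// element each). Names and sizes are arbitrary.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// CPU version: a single thread visits every element in sequence.
void vecAddCPU(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// GPU version: the loop disappears; each thread computes one element,
// and its index comes from the block/thread coordinates.
__global__ void vecAddGPU(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) data.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // The GPU has its own memory, so data must be copied over explicitly.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAddGPU<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The CPU version visits elements one after another; the CUDA version replaces the loop with thousands of threads that each handle one element, and deciding how to restructure a program this way is exactly the work that has to be done by hand.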
Any comments?

:ph34r:

The binaries for “all other applications” use the x86 instruction set. The GPUs run their own proprietary instruction set. The two are not compatible, so the minimum you would need is a recompilation, but as you noticed from the Programming Guide, the situation is far more complex.
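To make that concrete, here is a small illustration of mine (not from the post above): the same trivial operation written once as plain C, which a host compiler turns into x86 machine code, and once as a CUDA kernel, which nvcc turns into the GPU’s own code (PTX/SASS). The file name, function names, and build commands are just typical usage, nothing specific to Photoshop or TMPGEnc.

```cpp
// scale.cu -- one source file, two instruction sets.
//
// Typical builds (assumed, standard nvcc usage):
//   nvcc -ptx scale.cu   // emit PTX, the GPU's intermediate instruction set
//   nvcc -c   scale.cu   // object file: host code as x86, kernel as GPU code
//
// An existing x86 binary contains only code like scale_cpu below, so the
// GPU cannot execute it; the kernel has to exist in the source and be
// compiled for the GPU's own instruction set.

// Runs on the CPU: ordinary x86 machine code after compilation.
void scale_cpu(float* x, float s, int n) {
    for (int i = 0; i < n; ++i)
        x[i] *= s;
}

// Runs on the GPU: compiled to PTX/SASS and launched with <<<blocks, threads>>>.
__global__ void scale_gpu(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= s;
}
```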

Could a moderator move this to General CUDA GPU computing?