Mixing driver and runtime API code: using contexts in high-level CUDA code
Hey,

I wrote an app that uses the runtime API (cuda* functions), but I would really like control over the context parameters and over pushing and popping contexts. Can I add cuCtx* calls into my cuda* code without breaking anything, or should I migrate everything to the low-level driver API? According to the docs, the two APIs are mutually exclusive, so I should not use them together. Thoughts?
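For concreteness, this is roughly the mix I have in mind (just an untested sketch; the scheduling flag, the kernel, and the sizes are placeholders for my actual code):

[code]
#include <cuda.h>          // driver API
#include <cuda_runtime.h>  // runtime API

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    // Create the context myself with the flags I want, instead of letting
    // the runtime create one implicitly on the first cuda* call.
    CUcontext ctx;
    cuCtxCreate(&ctx, CU_CTX_SCHED_YIELD, dev);

    // From here on, plain runtime API code.
    const int n = 1 << 20;
    float *d_data = 0;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaThreadSynchronize();
    cudaFree(d_data);

    // Driver API again to pop and destroy the context explicitly.
    cuCtxPopCurrent(&ctx);
    cuCtxDestroy(ctx);
    return 0;
}
[/code]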

#1
Posted 04/23/2009 08:19 AM   
As far as I know it isn't possible; you will get a conflict. I ended up with two versions of the host code because of this, but I for one prefer the driver API code.

#2
Posted 04/23/2009 08:33 AM   
[quote name='bluebit' post='533574' date='Apr 23 2009, 09:19 AM']Hey,

I wrote an app that uses the runtime API (cuda* functions), but I would really like control over the context parameters and over pushing and popping contexts. Can I add cuCtx* calls into my cuda* code without breaking anything, or should I migrate everything to the low-level driver API? According to the docs, the two APIs are mutually exclusive, so I should not use them together. Thoughts?[/quote]
Hi,
I have written a small library that reimplements the CUDA runtime calls on top of the CUDA driver API. It lets you use the driver API and at the same time compile the code with nvcc using the kernel_name<<< >>>() syntax.
I will make it public soon; if you are interested, just ask me and I can send you the code.
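To give an idea of the approach (only a simplified sketch, not the actual library code; real error mapping and the Windows calling convention need more care), a runtime call such as cudaMalloc is forwarded to the driver API roughly like this:

[code]
// Simplified sketch only: intercept a runtime call and forward it to the
// driver API. Built as a replacement for libcudart, so these definitions are
// picked up instead of the real ones at link time.
#include <cuda.h>
#include <cuda_runtime.h>
#include <stdint.h>

extern "C" cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    CUdeviceptr dptr;
    if (cuMemAlloc(&dptr, size) != CUDA_SUCCESS)
        return cudaErrorMemoryAllocation;
    *devPtr = (void *)(uintptr_t)dptr;
    return cudaSuccess;
}

extern "C" cudaError_t cudaFree(void *devPtr)
{
    return cuMemFree((CUdeviceptr)(uintptr_t)devPtr) == CUDA_SUCCESS
               ? cudaSuccess
               : cudaErrorInvalidDevicePointer;
}
[/code]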

#3
Posted 12/20/2009 11:05 PM   
Driver/runtime API interop works with everything except context migration. Context migration still causes things to explode catastrophically.
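To be concrete about what context migration means here (sketch only, with placeholder sizes and pthreads for the threading): popping the context the runtime has been using off one thread and pushing it onto another, then continuing to make runtime calls against it, e.g.:

[code]
// Sketch of the "context migration" pattern: pop the context the runtime has
// been using in one thread, push it in another, keep making runtime calls.
#include <cuda.h>
#include <cuda_runtime.h>
#include <pthread.h>

static CUcontext ctx;

static void *worker(void *arg)
{
    cuCtxPushCurrent(ctx);          // migrate the context into this thread

    float *d = 0;
    cudaMalloc((void **)&d, 1024);  // runtime calls against the migrated
    cudaFree(d);                    // context -- the combination that breaks

    cuCtxPopCurrent(&ctx);
    return 0;
}

int main()
{
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    float *d = 0;
    cudaMalloc((void **)&d, 1024);  // runtime binds to ctx in this thread
    cudaFree(d);

    cuCtxPopCurrent(&ctx);          // float the context off this thread

    pthread_t t;
    pthread_create(&t, 0, worker, 0);
    pthread_join(t, 0);

    cuCtxPushCurrent(ctx);
    cuCtxDestroy(ctx);
    return 0;
}
[/code]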

#4
Posted 12/20/2009 11:29 PM   
Great, everything except the one thing I needed...

#5
Posted 12/21/2009 06:49 AM   
[quote name='Everton' post='968037' date='Dec 21 2009, 12:05 AM']I have written a small library that reimplements the CUDA runtime calls on top of the CUDA driver API. It lets you use the driver API and at the same time compile the code with nvcc using the kernel_name<<< >>>() syntax.[/quote]

Great news! :)
Another piece of CUDA gets an open replacement...

Is the whole CUDA Runtime supported?
Any chance that it could be altered to use OpenCL rather than the driver API? (Assuming some source-to-source translator takes care of the device code.)

If so, that would be... interesting. ;)
#6
Posted 12/21/2009 03:27 PM   
Could you explain in a little more detail what does not work?

[quote name='tmurray' post='968045' date='Dec 20 2009, 05:29 PM']Driver/runtime API interop works with everything except context migration. Context migration still causes things to explode catastrophically.[/quote]

#7
Posted 12/21/2009 06:21 PM   