pthreads vs. OpenMP?
I need to develop a CUDA application that should be easy to run on
the following computers:
1) a workstation with a 6-core Intel processor
2) a server with two six-core Intel Processors
3) a server with four 12-core AMD processors
where each computer will have multiple Fermi GPUs connected to it.

I want to be able to run multiple threads on the cores of each of the above computers,
and use the many GPUs.
The multiple CPU threads will have to communicate some.

Shall I use pthreads or OpenMP?

#1
Posted 02/14/2011 05:41 AM   
Right now I would strongly suggest pthreads or boost threads over OpenMP. Establishing and maintaining thread-context affinity in CUDA with OpenMP is notoriously difficult to get right. There are rumours that things will be changing in a future CUDA release, but today I wouldn't use OpenMP.

#2
Posted 02/14/2011 07:31 AM   
[quote name='avidday' date='14 February 2011 - 02:31 AM' timestamp='1297668667' post='1193258']
Right now I would strongly suggest pthreads or boost threads over OpenMP. Establishing and maintaining thread-context affinity in CUDA with OpenMP is notoriously difficult to get right. There are rumours that things will be changing in a future CUDA release, but today I wouldn't use OpenMP.
[/quote]

Where can I find an example of using boost threads with GPUs and CUDA?

Which one is better, then: pthreads or boost threads?

What do you mean by thread-context affinity, and where do I read how to do it for CUDA and pthreads?

#3
Posted 02/14/2011 07:47 AM   
[quote name='mv1' date='14 February 2011 - 09:47 AM' timestamp='1297669646' post='1193259']
Where can I find an example of using boost threads with GPUs and CUDA?[/quote]
GPUworker, formerly part of the HOOMD project, is often cited, and is still available [url="https://codeblue.umich.edu/hoomd-blue/trac/browser/trunk/libhoomd/utils?rev=3055"]here[/url].

[quote]Which one is better then: pthreads or boost threads?[/quote]
They both implement the POSIX threads model. pthreads is a Unix-style C API; boost threads is a portable C++ template library that was slated to become part of the C++ standard library.

[quote]What do you mean by thread-context affinity, and where do I read how to do it for CUDA and pthreads?[/quote]
I mean that the CUDA model relies on having one host thread tied to a given GPU context. GPU context establishment is expensive, so it is normal to have each thread establish a context at the beginning of an application and hold it for the life of the thread. Many OpenMP runtimes use persistent operating-system thread pools from which OpenMP operations draw threads to do parallel work. The problem is that a given logical OpenMP thread ID (what omp_get_thread_num returns) may not be associated with the same operating-system thread at all times. Trying to manage GPU contexts inside a pool of operating-system threads over which the programmer has no explicit control can be painful. The CUDA driver API has a context migration mechanism that lets contexts be moved from thread to thread, but it entails a lot of extra administrative overhead, both inside the driver at runtime and in your code.

There are supposed to be some quite big changes in the CUDA APIs to make this easier, but today pthreads or boost threads is still preferable, even though it requires more code than OpenMP does to get the same thread operation done.

#4
Posted 02/14/2011 08:37 AM   
Okay, but how do you use pthreads with nvcc on a Windows platform? I want to use pthreads for the host part of the C code, but it doesn't recognize #include <pthread.h> in the code or -pthread on the command line. Any suggestions?

Thanks,
Alan

#5
Posted 02/18/2013 07:19 PM   