Multi-GPU latencies on Linux
I am currently testing software that works with several CUDA GPUs.
The code supports both Linux and Windows.
What has been baffling me is the contradictory results between Linux and Windows.

I ran the CUDA profiler on both systems (output uploaded in the attachments).
On Windows it runs just as expected (the second GPU is slower because it is a slower card):
the calls from each host thread are nicely packed together, which increases the overall efficiency as expected.

On Linux, however, the GPUs seem to have a hard time running the host threads in parallel:
1. Either the calls are executed serially, as in the first three blocks, or
2. they are launched in parallel but spread out, so the overall computation time is about the same as serial execution, as in the six blocks after that.

Has anyone else experienced a similar problem? If so, is there anything I can do about it?
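For reference, the usual pattern for getting the launches "packed together" across devices is one stream per GPU with pinned host buffers and asynchronous copies, so the host loop never blocks between devices. A minimal sketch of that pattern (the kernel, buffer names, and sizes below are placeholders, not taken from the original code):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder kernel; the original post does not show its kernels.
__global__ void myKernel(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    const int N = 1 << 20;

    for (int dev = 0; dev < ndev; ++dev) {
        cudaSetDevice(dev);

        cudaStream_t s;
        float *h, *d;
        cudaStreamCreate(&s);
        cudaMallocHost(&h, N * sizeof(float)); // pinned: required for true async copies
        cudaMalloc(&d, N * sizeof(float));

        // Async copy + launch on this device's own stream; the host returns
        // immediately and can feed the next GPU without waiting.
        cudaMemcpyAsync(d, h, N * sizeof(float), cudaMemcpyHostToDevice, s);
        myKernel<<<(N + 255) / 256, 256, 0, s>>>(d, N);
        cudaMemcpyAsync(h, d, N * sizeof(float), cudaMemcpyDeviceToHost, s);
    }

    // Only wait at the very end, once all devices have work queued.
    for (int dev = 0; dev < ndev; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }
    printf("done\n");
    return 0;
}
```

If the profiler timeline shows gaps between the per-device launches even with this structure, the serialization is usually coming from a blocking call hidden somewhere in the loop rather than from the driver itself.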

#1
Posted 04/13/2012 12:53 AM   
I have a multi-GPU code working on Linux. We used the command-line profiler to reconstruct the timeline and everything was fine, including memcpy/kernel overlap. After that I used the Visual Profiler and noticed exactly your problem: the launches are serialized and the profiler says there is no memory-transfer/kernel overlap. At this point I suspect it is a bug in the Visual Profiler.
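For anyone wanting to reproduce the command-line-profiler timeline mentioned above: in the CUDA 4.x toolkits it is driven entirely by environment variables plus a small config file of counter names. A sketch of a typical setup (the application name is a placeholder):

```shell
# profile.cfg: request timestamps and stream ids so the
# per-device timeline can be reconstructed from the logs.
cat > profile.cfg <<'EOF'
gpustarttimestamp
gpuendtimestamp
streamid
EOF

# Enable the command-line profiler; one CSV log is written per CUDA context.
export COMPUTE_PROFILE=1
export COMPUTE_PROFILE_CSV=1
export COMPUTE_PROFILE_LOG=cuda_profile_%d.log   # %d expands to the context number
export COMPUTE_PROFILE_CONFIG=profile.cfg

./my_multi_gpu_app   # placeholder for the actual application
```

Comparing the `gpustarttimestamp`/`gpuendtimestamp` columns across the per-context logs shows directly whether the kernels really overlapped, independent of how the Visual Profiler draws them.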

#2
Posted 04/15/2012 11:51 AM   
Can you update to 4.1 and try using nvvp? NVVP will also show you the API calls, which might reveal where the extra synchronization is coming from.

#3
Posted 04/16/2012 02:50 PM   
I gave 4.1 RC2 a try and that solved my kernel/memory-transfer overlap issue: the profiler now shows overlapped transfers and kernels as expected.
By the way, I have another issue: "Event/metric collect failed", with kernels behaving differently between runs ...
I will open another post for that.

#4
Posted 04/18/2012 12:50 PM   
I'm facing something similar.

On my single-GPU system (CUDA 4.1, 295.20 driver, gcc 4.4.6), the driver call, memcpy, etc. takes 0.070 seconds.

On my multi-GPU system (CUDA 4.2, 295.45 driver, gcc 4.5.3), it takes 1.070 seconds!!

Does anybody know if there is a driver issue or something like that?


[quote name='alex_jin36' date='12 April 2012 - 09:53 PM' timestamp='1334278393' post='1395577']
I am currently testing software that works with several CUDA GPUs.
The code supports both Linux and Windows.
What has been baffling me is the contradictory results between Linux and Windows.

I ran the CUDA profiler on both systems (output uploaded in the attachments).
On Windows it runs just as expected (the second GPU is slower because it is a slower card):
the calls from each host thread are nicely packed together, which increases the overall efficiency as expected.

On Linux, however, the GPUs seem to have a hard time running the host threads in parallel:
1. Either the calls are executed serially, as in the first three blocks, or
2. they are launched in parallel but spread out, so the overall computation time is about the same as serial execution, as in the six blocks after that.

Has anyone else experienced a similar problem? If so, is there anything I can do about it?
[/quote]
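To narrow down whether that 0.070 s vs. 1.070 s gap is in the transfer itself or in the surrounding driver calls, it can help to time the copy in isolation with CUDA events. A sketch (the size is arbitrary and the buffer names are made up):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 64 << 20;   // 64 MiB, arbitrary test size
    float *h, *d;
    cudaMallocHost(&h, bytes);       // pinned, so only the copy itself is measured
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0, 0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1, 0);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("H2D copy: %.3f ms\n", ms);   // compare across the two systems
    return 0;
}
```

If the event-measured copy is similar on both machines but the wall-clock time differs by more than a factor of ten, the extra second is being spent outside the copy, e.g. in context creation or driver-level synchronization, which would point toward a driver rather than an application issue.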

#5
Posted 04/25/2012 03:04 AM   