GPU Direct 1 and 2
From the NVIDIA GPUDirect page:
http://developer.nvidia.com/gpudirect

[quote]
GPUDirect v2.0 adds support for peer-to-peer (P2P) communication between GPUs over PCIe in the same system, and lays the foundation for P2P communication between GPUs and other devices in a future release.
[/quote]

It appears to me that GPUDirect 2.0 includes the GPUDirect 1.0 features (it says "adds support").
Is this true?
- If yes, how can I test my MPI setup to see whether the GPUDirect 1.0 feature is there? I couldn't figure out anything from the example code, i.e. mpi_pinned.c, in the package (see the sketch below).
- If no, is it possible to install the driver from the website alongside the CUDA 4.0 driver? The file name "nvidia-gpudirect-3.2-1.tar.gz" suggests driver 3.2.
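
For reference, here is what I understand the GPUDirect 1.0 pattern to be. This is only my own minimal sketch, not code from mpi_pinned.c, and the function name is made up: the data is staged through a cudaMallocHost'd pinned buffer, which the patched IB driver can then reuse directly for RDMA.

[code]
/* Sketch only (not from mpi_pinned.c): stage a device buffer through a
 * cudaMallocHost'd pinned buffer and send it with MPI. With the GPUDirect
 * v1 patch, the IB driver can use that same pinned region, so no extra
 * host-side copy is needed. */
#include <mpi.h>
#include <cuda_runtime.h>

void send_device_buffer(const float *d_buf, int n, int dest, MPI_Comm comm)
{
    float *h_buf;
    cudaMallocHost((void **)&h_buf, n * sizeof(float));  /* pinned host memory */
    cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    MPI_Send(h_buf, n, MPI_FLOAT, dest, 0, comm);        /* sent straight from the pinned buffer */
    cudaFreeHost(h_buf);
}
[/code]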

Thank you

#1
Posted 05/18/2011 06:19 AM   
Could anyone enlighten me on this issue? Thanks a lot.

#2
Posted 05/19/2011 01:25 AM   
P2P has nothing to do with the old GPUDirect patch (needed only to use RDMA with the InfiniBand driver on Linux).

With 4.0 final, the GPU direct patch will be obsolete, just wait a little bit.

#3
Posted 05/19/2011 04:21 AM   
[quote name='mfatica' date='18 May 2011 - 09:21 PM' timestamp='1305778886' post='1239357']
With 4.0 final, the GPU direct patch will be obsolete, just wait a little bit.
[/quote]
You always spoil my surprises :(

#4
Posted 05/19/2011 05:04 AM   
Yeah that is twice he has done that :)

#5
Posted 05/19/2011 05:07 AM   
Yeah, thanks guys! I will wait for the final release.

Hopefully it also supports an "enhanced mode" of MPI, i.e. calling MPI_Send and MPI_Recv directly on CUDA device buffers.
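
To show what I mean by "enhanced mode", here is a purely hypothetical sketch; it assumes an MPI build that recognizes device pointers, which is exactly the part I am hoping for:

[code]
/* Hypothetical: assumes a CUDA-aware MPI that accepts a pointer
 * allocated with cudaMalloc; no host staging copy would be needed. */
#include <mpi.h>

void send_from_device(float *d_buf, int n, int dest, MPI_Comm comm)
{
    MPI_Send(d_buf, n, MPI_FLOAT, dest, 0, comm);
}
[/code]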

Cheers !

#6
Posted 05/19/2011 05:35 AM   
The CUDA 4.0 final release is out, but I couldn't find GPUDirect 1.0 mentioned anywhere. Is the earlier claim still correct?

#7
Posted 05/27/2011 01:51 AM   
With the 4.0 driver, just set this variable:

export CUDA_NIC_INTEROP=1

This will enable an alternate path for cudaMallocHost that is compatible with the IB drivers.
No need to change your code to use cudaHostRegister or to install the GPU direct v1 patch.
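
For example, a minimal sketch of that path (my illustration; the buffer size, tag, and two-rank layout are arbitrary, and CUDA_NIC_INTEROP=1 must be exported before launching):

[code]
/* export CUDA_NIC_INTEROP=1, then e.g. mpirun -np 2 ./a.out (4.0 driver) */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *h_buf;
    cudaMallocHost((void **)&h_buf, 1024 * sizeof(float)); /* pinned, usable by the IB driver */

    if (rank == 0)
        MPI_Send(h_buf, 1024, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(h_buf, 1024, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFreeHost(h_buf);
    MPI_Finalize();
    return 0;
}
[/code]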

#8
Posted 05/27/2011 02:36 AM   
Thanks mfatica, it helps a lot.
Performance over MPI improves by about 30%.

#9
Posted 06/15/2011 04:06 AM   