GPU-based cluster

Hello,

I’m new here :)
I have a question.

My task is to create a computing cluster application that connects several GPUs from different PCs into a single performance pool, sharing data between the graphics cards to speed up calculations (for example, operations on many multi-dimensional matrices).

Is it even possible to achieve this goal?
Communication would take place over the same network (LAN), where all hosts are directly connected.

I have two cards at this moment:

NVIDIA® GeForce® GT 540M 1GB/2GB DDR3
and
MSI GeForce GTX 970 GAMING 4G 4GB 256-bit GDDR5

This problem is for research purposes, to show how this kind of application would work; it doesn’t need to perform better than a single-GPU application.

I would be very grateful for any response on this problem :)
Thanks very much in advance :)

One approach might be to learn how to use MPI:

[url]https://en.wikipedia.org/wiki/Message_Passing_Interface[/url]

There is a CUDA sample that demonstrates simple usage of MPI with CUDA:

[url]http://docs.nvidia.com/cuda/cuda-samples/index.html#simplempi[/url]
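To give a feel for the pattern (this is only a minimal sketch, not the simpleMPI sample itself; the chunk size, the squaring kernel, and the one-GPU-per-rank mapping are assumptions made for illustration), each MPI rank below selects a local GPU, computes on its slice of the data, and rank 0 gathers the results:

[code]
// Sketch: every MPI rank picks a local GPU, squares its slice of the data
// on that GPU, and rank 0 gathers the results. Sizes and the kernel are
// illustrative only.
#include <mpi.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

__global__ void squareKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Assumes one GPU per rank; with several GPUs per host you would map
    // the local rank to a device index instead.
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    cudaSetDevice(deviceCount > 0 ? rank % deviceCount : 0);

    const int chunk = 1 << 20;          // elements per rank (assumed)
    std::vector<float> all;             // only meaningful on rank 0
    std::vector<float> mine(chunk);

    if (rank == 0) {
        all.resize(static_cast<size_t>(chunk) * size);
        for (size_t i = 0; i < all.size(); ++i) all[i] = static_cast<float>(i);
    }

    // Hand one contiguous chunk to every rank.
    MPI_Scatter(all.data(), chunk, MPI_FLOAT,
                mine.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // Compute on the local GPU.
    float* d = nullptr;
    cudaMalloc(&d, chunk * sizeof(float));
    cudaMemcpy(d, mine.data(), chunk * sizeof(float), cudaMemcpyHostToDevice);
    squareKernel<<<(chunk + 255) / 256, 256>>>(d, chunk);
    cudaMemcpy(mine.data(), d, chunk * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    // Collect the partial results back on rank 0.
    MPI_Gather(mine.data(), chunk, MPI_FLOAT,
               all.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("done, first element = %f\n", all[0]);
    MPI_Finalize();
    return 0;
}
[/code]

You would launch it across the machines on your LAN with something like mpirun -np 2 --host hostA,hostB ./app, where the hostnames are placeholders for your PCs.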

You should first think about the type of parallelism available in your problem: how much of it there is and in what form it is present (granularity, tasks, data). Then think about strategies for decomposing the problem space. This first step is not closely related to GPU acceleration, but it is required. Only then can you start thinking about what the programming models/techniques need to provide to solve the problem.
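As one concrete (and entirely assumed) example of such a decomposition, the sketch below splits an N x M matrix into row blocks, one per rank/GPU, and works out the element counts and offsets that an MPI_Scatterv call would need; N, M, and the rank count are made up for illustration:

[code]
// Row-block decomposition sketch: N rows spread as evenly as possible
// over the available ranks/GPUs. Dimensions are illustrative.
#include <cstdio>
#include <vector>

int main()
{
    const int N = 1000, M = 512;   // matrix dimensions (assumed)
    const int ranks = 3;           // number of MPI ranks / GPUs (assumed)

    std::vector<int> counts(ranks), displs(ranks);
    for (int r = 0; r < ranks; ++r) {
        int rows = N / ranks + (r < N % ranks ? 1 : 0);  // spread the remainder
        counts[r] = rows * M;                            // elements per rank
        displs[r] = (r == 0) ? 0 : displs[r - 1] + counts[r - 1];
        printf("rank %d: %d rows, starting at row %d\n", r, rows, displs[r] / M);
    }
    // counts/displs are exactly what MPI_Scatterv would take to hand each
    // rank its rows before the GPU work starts.
    return 0;
}
[/code]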

Using MPI can be a good strategy, but you may not need it if, for example, your problem is loosely coupled and distributed.