Need advice on building a GPGPU system

Good afternoon,

First let me say thank you to anybody who can give me assistance with this build.

Background: I’m a psychiatrist with a background in physics/math (undergrad), and I am finishing up a master’s in biomedical engineering.

Goal: looking for objective measures of psychiatric illness

Plan: I would like to build a GPGPU system with at least 2 to 3 GPUs. The point is not actually to get meaningful analysis toward the above goal; it is to learn the skills required to later build a system that can achieve that goal. I am hoping to learn the basics of GPGPU computing from this build: how to program and coordinate computation across the boards (will I need OpenMP across the 3 GPUs, or, since they are connected to the same motherboard, will that only be needed when I add nodes to the network?).
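From what I have read so far, coordinating GPUs that sit in the same machine should not require OpenMP or MPI at all: a single host thread can cycle through the devices with cudaSetDevice, and message passing only becomes necessary once separate nodes are added. Below is a minimal sketch of that pattern as I currently understand it (the scale kernel and the buffer sizes are placeholders I made up); I would appreciate confirmation that this is the right approach before I build around it.

[code]
// multi_gpu_sketch.cu -- single-node multi-GPU pattern: one host thread
// drives every device via cudaSetDevice; no OpenMP/MPI is involved.
// The "scale" kernel and the sizes below are placeholders for illustration.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    if (ngpus > 8) ngpus = 8;                // sketch assumes at most 8 GPUs

    const int n = 1 << 20;                   // elements per GPU (placeholder)
    float *dbuf[8];

    // Kernel launches are asynchronous, so issuing one per device from a
    // single host thread lets all GPUs compute concurrently.
    for (int dev = 0; dev < ngpus; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&dbuf[dev], n * sizeof(float));
        cudaMemset(dbuf[dev], 0, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(dbuf[dev], n, 2.0f);
    }

    // Wait for every device to finish, then release the buffers.
    for (int dev = 0; dev < ngpus; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(dbuf[dev]);
    }
    printf("Ran a placeholder kernel on %d GPU(s)\n", ngpus);
    return 0;
}
[/code]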

Specifically: I plan on analyzing fMRI and dtMRI data coming out of the Human Connectome Project (www.humanconnectomeproject.org). Eventually I would apply deep-learning algorithms to the data to define a “normal network” for processing. This would then be used as a metric against which to compare scans of individuals with well-documented psychiatric illness.

For a first baby system as a proof of concept, what GPUs would be best to use? I am hoping not to use Tesla cards, to keep costs down, since this initial build is going to be funded out of my own pocket. Once I have something that is publishable, and once I finish my engineering degree, I hope to qualify for a grant to continue the work. Which motherboard and CPU would you recommend? What type and amount of RAM should I purchase? The plan is to use Linux, so no cost will go into the OS. Any advice would be greatly appreciated! Thank you everybody for your time and consideration.

Rick

I have built six 2-GPU machines this year, and here are my opinions about building a good GPGPU workstation:

  1. You will need a 40-lane CPU in order to run two GPUs at full PCI-E 3.0 x16 bandwidth. There are not that many CPUs with 40 lanes, but one good candidate is the Intel Core i7-5930K (3.5 GHz). (A quick way to check the link bandwidth once the machine is built is sketched at the end of this post.)

  2. You will need a motherboard that is able to handle 2 GPUs at full PCI-E 3.0, such as this one:

ASRock X99 Professional LGA 2011-v3 Extended ATX Intel Motherboard - Newegg.com

  3. The EVGA Superclocked GTX 980 ACX cards are the best value for a two-GPU setup, unless you need more memory bandwidth; then your options are the GTX Titan X or the GTX 980 Ti.

  4. (Prepares for backlash…) IMO it is much easier to get CUDA up and running quickly and reliably with Windows 7 Pro or Windows 8.1. Also (IMO) the drivers tend to be better for GPUs on Windows than on Linux.

  5. At least 32 GB of DDR4 RAM.

  6. A good 500 GB Samsung EVO/Pro or Intel SSD; avoid the 1 TB versions.

Most of the systems I built cost (in parts) $2,500-$3,000, assuming two GTX 980 GPUs. We have these machines running 24/7, and the only issue we have had was with a bad 1 TB SSD.
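Once a machine is up, I usually run a quick check along the lines of the sketch below (my own rough example, nothing official) to confirm the runtime sees every card and that each one is getting full PCI-E bandwidth. With pinned host memory, a PCI-E 3.0 x16 slot should land somewhere around 11-12 GB/s host-to-device; a number well below that usually means a card ended up in an x8 slot.

[code]
// pcie_check.cu -- rough sanity check: list the CUDA devices and time a
// pinned host-to-device copy on each one to estimate PCI-E bandwidth.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    const size_t bytes = 256 << 20;          // 256 MB test buffer
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        void *h = 0, *d = 0;
        cudaMallocHost(&h, bytes);           // pinned host memory
        cudaMalloc(&d, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("GPU %d (%s): host->device %.1f GB/s\n",
               dev, prop.name, (bytes / 1e9) / (ms / 1e3));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d);
        cudaFreeHost(h);
    }
    return 0;
}
[/code]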

Thank you, CudaaduC, for your advice; I was able to put a system together for a total of about $2,750.00, including case and power supply.

Thank you for your advice.

No problem.

Did you get a liquid cooler for the CPU?
You do not necessarily need one, but I usually get a Corsair liquid cooler for the CPU and quite often they are hard to fit in the case. Just dealing with that issue can take the majority of the build time.

Good luck on your project! I worked on the UCSF/UCSD ‘Glass Brain’ project, which is somewhat related to your work.

You can still see our talk on the topic at GTC 2014 here:

[url]http://on-demand.gputechconf.com/gtc/2014/video/S4633-rt-functional-brain-imaging-gpu-acceleration.mp4[/url]