I’m having a hard time understanding what I’ll need to run TensorFlow economically.
I was looking at older cards like the Tesla M2090 and S1070.
I’ve heard TensorFlow needs lots of RAM, and these cards have 5 to 6 GB each – they can be had off eBay for $100 to $200. I have server rack space, so power is not an issue.
I figured I could use multiple S1070 cards as opposed to, say, a GTX 750.
Now, I can see that TensorFlow requires CUDA Toolkit 7.5+.
However, when I look at the Wikipedia page on CUDA, it says the following:
Compute capability is a property of NVIDIA GPUs. Modern versions of CUDA, including CUDA 7.5 (the latest non-experimental shipping version), require GPUs with compute capability >= 2.0.
The Wikipedia page you referenced has a handy table that lists GPUs ordered by compute capability. Note that some CUDA-enabled applications (and I believe this includes some deep learning packages!) require a compute capability higher than 2.0, so I would suggest checking requirements carefully.
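To make the point concrete, here is a small sketch comparing the cards mentioned in this thread against a minimum compute capability. The capability values are taken from NVIDIA's published specs as I recall them, and the 3.5 cutoff reflects what TensorFlow's prebuilt binaries have historically required (reportedly 3.0 if you build from source) – please double-check both against the Wikipedia table and TensorFlow's own documentation:

```python
# Compute capabilities for the cards under discussion, per NVIDIA's
# published specs (verify against the Wikipedia table before buying).
COMPUTE_CAPABILITY = {
    "Tesla S1070": 1.3,   # four Tesla T10 GPUs per 1U unit
    "Tesla M2090": 2.0,   # Fermi
    "GTX 750":     5.0,   # Maxwell
    "GTX Titan":   3.5,   # Kepler
    "GTX Titan Z": 3.5,   # Kepler, dual GPU
}

def meets_requirement(card, minimum=3.5):
    """Return True if the card clears the given compute-capability
    minimum (3.5 here, matching TensorFlow's historical binary
    requirement; CUDA 7.5 itself only needs >= 2.0)."""
    return COMPUTE_CAPABILITY[card] >= minimum

for card, cc in sorted(COMPUTE_CAPABILITY.items()):
    verdict = "OK" if meets_requirement(card) else "too old"
    print(f"{card} (compute capability {cc}): {verdict}")
```

On these numbers, both of the cheap eBay options would fall short of TensorFlow's cutoff even though the M2090 technically clears CUDA 7.5's own 2.0 minimum.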
I do not have any experience with TensorFlow, and I do not have experience with any of the GPUs on your list, so I am afraid I am unable to comment in detail on your part selection process.
As best I know, both the Titan and Titan Z are only available on the second-hand market at this point. Depending on your geographical location and the intended usage pattern and duration, you may want to consider not only the original purchase price but also operating costs (electricity).