50% off Tesla C1060s for developers

We’re running a promotional discount on the Tesla C1060 between now and July 24: 50% off the MSRP. For more information on the card itself, see this site. Limit 4 per developer.

To get your promotional code in North America, click here.

Outside of North America, contact a Tesla Preferred Partner.

Holy cow…I am definitely going to get one as soon as I get some money together for one. I was getting ready to buy a GTX260 or GTX280 to do some double precision work…but at 50% off I don’t think I can pass up a C1060!

Thanks.

Good news to upgrade my card.

:)

To me it’s not whether it’s “50% off” but whether it’s a good deal. That completely depends on whether you need the 4 GB of memory. Otherwise, it looks like it’s still around $500-600, and two GTX 285 cards would probably be better… if you have that sort of money. I’m happy with the GTX 260… I need to do a lot more optimization before I can justify a new card.

That’s very true. It’s really a toss-up between the GTX295 and the C1060, since with the discount they are close to the same price…but I’ve had some GPU programs in mind that will be very memory-intensive, so that’s where the Tesla will come in handy. If anything, I may even get a GTX260 too, so I can test out some multi-GPU apps as well.

There are only a few things stopping me from getting one right now, the most important being that my current dev machine has only one PCIe 1.0 x16 slot, so there’s no room for the Tesla. That, and I’m holding out a bit to see what news I hear about PCI Express 3.0 (which is supposedly due towards the end of the year).

Hi,

For my company the question (and it’s interesting to know what nVidia thinks of this) is how I can squeeze as many GPUs as possible into one machine. We recently bought two systems (which we still need to check and recheck before deciding), but the C1060/1070 was never even an option. With at most 4 PCIe slots on a board, we’d better put in as many GPUs as possible, meaning the GTX295 - resulting in 8 GPUs (on AMD with 4 PCIe slots) or 6 GPUs (on Intel with 3 PCIe slots).

As far as our benchmarks have shown, the 295 performs roughly the same as (and even a bit better than) the C1060, and without the discount costs ~1/2 the price. So even with the discount it’s still a better deal, price/performance-wise.

The big question for us is why build a C1060 GPU farm and not GTX295, for at least half the money (if not 1/4th)?

I have absolute trust in nVidia that the GTX295 wouldn’t go up in flames after a few days of calculating. Actually, the test boxes have already been running and calculating 24x7 for the last week.

Does anyone have experience with long-running clusters other than the Tesla?

eyal

My original 8800 GTX development machine (built 2 years ago) has been turned into a 24/7 job-running machine. I’ve got ~4 months of continuous run time on it now without a single issue. Although I did have some random crashes at the very beginning that turned out to be caused by overheating. A small case modification to get more airflow fixed that.

The biggest issues with a cluster of GTX 295s are 1) cooling, 2) power supply and 3) mechanics (fitting them in the case). If you can solve all three (which it sounds like you have) you shouldn’t have any other trouble.

Just curious, but why wasn’t the S1070 an option (besides cost)? It puts 2 GPUs on each PCIe connection, so you can build an 8-GPU box if you want to (somebody correct me if this is not a supported configuration). And it solves issues 1, 2, and 3 above while keeping it all packed in a very small rackmount case, making it trivial to deploy dozens of them.

Supported as of R180/CUDA 2.1.

The GTX295 has two GPUs per board, while the C1060 only has one. However, the Tesla has 4GB of on-board RAM while the GTX295 only has 896MB per GPU (on the currently available boards). The extra RAM is the difference…for some applications, allowing the developers to keep data in RAM rather than transferring it back and forth is a huge deal. You’ll have to decide whether it is worth the cost (and half the GPUs per box) for your applications.
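Whether the extra on-board RAM pays off comes down to how much time you’d otherwise spend re-staging data over PCIe. A rough back-of-envelope model (the bandwidth figure and the workload numbers below are illustrative assumptions, not specs or measurements for any particular board):

```python
# Back-of-envelope check of whether host<->device transfers dominate.
# The bandwidth figure is an assumed effective PCIe x16 throughput,
# not a measured number for any specific card or chipset.

PCIE_BANDWIDTH_BYTES_PER_S = 3e9  # assumption: ~3 GB/s effective

def transfer_fraction(bytes_moved_per_step, compute_seconds_per_step):
    """Fraction of each step spent moving data instead of computing."""
    transfer_s = bytes_moved_per_step / PCIE_BANDWIDTH_BYTES_PER_S
    return transfer_s / (transfer_s + compute_seconds_per_step)

# A kernel that crunches for 1 s per step but must re-stage 600 MB
# each step (because the working set doesn't fit on the card) loses
# roughly a sixth of its time to the bus:
print(round(transfer_fraction(600e6, 1.0), 2))  # -> 0.17
```

If that fraction is small for your workload, the cheaper cards win on price/performance; if it’s large, the 4GB Tesla starts to justify itself.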

Mainly cost, and as far as I understand that would require two machines (the host and the S1070), taking twice the space as well.

As for the memory issue, well, it’s true: 85% of the work we do takes a long time to compute, so the PCI overhead is not a big issue (10% of the time), and the work can be broken into smaller chunks to fit even into 800MB. For the remaining 15% I’m working on something :)

I guess that algorithms that can’t squeeze into 800MB-1GB will have to use the Tesla - or squeeze tighter ;)

eyal

gzip should work well, I guess :D

If I wanted a local vendor to provide the components or put together a PSC box using the “Build Your Own” specs, how would we/they be able to take advantage of this 50% offer?

We’re already in the process of getting quotes, so it would be useful to include the discount in our negotiations with the various vendors.

Ask your vendor for the discount on the C1060; only Tesla resellers can offer it.

bump because this has been extended to July 24

Hi guys,

Could you please enlighten me as to where I can purchase the card in order to enjoy the discount? For example, http://www.tigerdirect.com/applications/Se…&CatId=4044

is showing the card at $1.2k. This seems to be the full price instead of the 50% discount.

Have any of you already purchased the card with the 50% discount? Please do let me know where you purchased it.

Are there no current promotions?