Unified memory across multiple GPUs question

The CUDA documentation states:

“Managed allocations are automatically visible to all GPUs in a system via the peer-to-peer capabilities of the GPUs. If peer mappings are not available (for example, between GPUs of different architectures), then the system will fall back to using zero-copy memory in order to guarantee data visibility. This fallback happens automatically, regardless of whether both GPUs are actually used by a program.”

When running the simpleP2P example, it reports that NVIDIA TCC must be enabled. However, when trying to enable TCC on a GTX Titan or GTX Titan X, it reports that TCC is not supported on these GPUs.

So my questions are: if TCC cannot be enabled, will managed memory default to zero-copy memory between GPUs?
Can unified memory be used across multiple GTX Titan X cards?
If not, is there a way to use P2P on the GTX Titan?
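As a side note, whether the runtime can establish a peer mapping between two particular devices can be queried directly. The following is a minimal sketch (assuming a system with at least two GPUs) using the standard `cudaDeviceCanAccessPeer` runtime call:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: query whether GPU 0 and GPU 1 can access each other's
// memory directly (peer-to-peer). Assumes at least two GPUs.
int main() {
    int nDevices = 0;
    cudaGetDeviceCount(&nDevices);
    if (nDevices < 2) {
        printf("Need at least two GPUs for this check\n");
        return 0;
    }
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("GPU0 -> GPU1 peer access: %s\n", canAccess01 ? "yes" : "no");
    printf("GPU1 -> GPU0 peer access: %s\n", canAccess10 ? "yes" : "no");
    return 0;
}
```

On Windows, both devices generally need to be in TCC mode for this query to report peer access as available.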

Yes, I would expect the system to fall back to zero-copy memory in that multi-GPU scenario when using Unified Memory. This is not difficult to prove with a test case.
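Such a test case could look roughly like the following sketch (assuming a two-GPU system): allocate managed memory, write to it from one GPU, modify it from the other, and check the result on the host. Whether the runtime uses peer mappings or falls back to zero-copy, the program should behave the same:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical test kernels: write a value, then add to it.
__global__ void setVal(int *p, int v) { *p = v; }
__global__ void addVal(int *p, int v) { *p += v; }

int main() {
    int *data = nullptr;
    // Managed allocation, visible to all GPUs (via P2P or zero-copy fallback)
    cudaMallocManaged(&data, sizeof(int));

    cudaSetDevice(0);
    setVal<<<1, 1>>>(data, 40);
    cudaDeviceSynchronize();

    cudaSetDevice(1);
    addVal<<<1, 1>>>(data, 2);
    cudaDeviceSynchronize();

    // Expect 42 if the second GPU saw the first GPU's write
    printf("result = %d\n", *data);
    cudaFree(data);
    return 0;
}
```

If the second kernel sees the first kernel's write, data is being made visible across the devices one way or the other; profiling (e.g. with `nvprof`) can reveal whether P2P transfers or zero-copy accesses are actually used.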

Currently, AFAIK, there is no way to use P2P on a GTX Titan on Windows, because it cannot currently be placed in TCC mode. However, there may be developments in this respect in the near future (the next 60 days), so stay tuned.

“If not is there a way to use P2P on GTX titan?”

Yes - use Linux.

Wow, that would be fantastic! My work would have an immediate use for P2P in multi-GPU Windows setups.

It would be even better if the ability to invoke TCC mode were not limited to the GTX Titan X, and included the GTX 980 as well.

Hmm, could it have something to do with the release of Windows 10 with its shiny new WDDM 2.0? :)

Now that the CUDA 7.5 RC is out, it should be possible to put Titan family products in TCC mode on Windows,
which should make P2P possible.

Thanks for the information.