Updated cuDNN from 7.0.3 to 7.0.5 ...

on a P3 AWS machine - and now all processes using the GPU hang. This means TensorFlow's bazel ./configure will not run and I can no longer work. What could the issue be? And why are the cuDNN libs no longer located with the CUDA libs?

Regards
Holger

Holger,

Sorry for the delayed response. Are you using the AWS Volta image (AWS Marketplace: NVIDIA Deep Learning AMI - will be deprecated) and NGC containers?

No - no image, no container. Just a P3 instance, and I want to build everything from source.
I found that there were two versions of 7.0.5 - depending on CUDA 9.0 or 9.1.
And that the driver came with CUDA 9.1, which does not work with TensorRT. That all does not look very well thought through…
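In case it helps narrow this down: the two 7.0.5 packages differ only in which CUDA runtime they were built against, and that can be queried at run time. This is only a minimal sketch, assuming nvcc and the cuDNN 7 headers/libraries are on their default paths; the file name check_cudnn.cpp is just for illustration:

// check_cudnn.cpp - print which cuDNN build is installed and which CUDA
// runtime it was compiled against.
// Build (paths are an assumption, adjust for your install):
//   nvcc check_cudnn.cpp -lcudnn -o check_cudnn
#include <cstdio>
#include <cudnn.h>

int main() {
    // cudnnGetVersion() reports the cuDNN library version, e.g. 7005 for 7.0.5.
    std::printf("cuDNN version:                %zu\n", cudnnGetVersion());
    // cudnnGetCudartVersion() reports the CUDA runtime the library was built
    // against, e.g. 9000 for CUDA 9.0 or 9010 for CUDA 9.1.
    std::printf("cuDNN built for CUDA runtime: %zu\n", cudnnGetCudartVersion());
    return 0;
}

If the second number does not match the toolkit actually installed on the box, that mismatch is worth fixing before rebuilding TensorFlow.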

TensorRT 3.0 supports CUDA 9.

We have a version of TensorRT in development that will support 9.1.

Please use CUDA 9.0, cuDNN 7.0.5 and the drivers for CUDA 9.0. Let us know if you are still facing issues.
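To double-check that the driver and toolkit actually line up with CUDA 9.0 before rebuilding, a small sanity program along these lines can help. This is only a sketch, not an official tool; the file name check_cuda.cu is made up, and it assumes the CUDA 9.0 toolkit's nvcc is on your PATH:

// check_cuda.cu - print the driver and runtime CUDA versions and make one
// trivial runtime call to see whether the GPU responds at all.
// Build: nvcc check_cuda.cu -o check_cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0, deviceCount = 0;

    // Highest CUDA version the installed driver supports (9000 == 9.0, 9010 == 9.1).
    cudaDriverGetVersion(&driverVersion);
    // CUDA runtime version this binary was compiled against.
    cudaRuntimeGetVersion(&runtimeVersion);
    std::printf("Driver supports CUDA: %d.%d\n",
                driverVersion / 1000, (driverVersion % 100) / 10);
    std::printf("Runtime version:      %d.%d\n",
                runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    // First real call into the driver; a hang or error here points at the
    // driver/toolkit layer rather than at cuDNN or the TensorFlow build.
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Visible GPUs:         %d\n", deviceCount);
    return 0;
}

If the driver line reads 9.1 while everything else is 9.0, installing the driver that ships with CUDA 9.0 is the first thing to try, given that TensorRT 3.0 targets CUDA 9.0.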
