Hi everybody,
I’m having a problem here. I want to access the GPU from inside a Docker container. I have followed the tutorial "Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5" (Seven Story Rabbit Hole), but no luck.
The situation now is that I have installed CUDA on the host: NVIDIA driver 352.55 with CUDA 5.5, and the deviceQuery result is PASS.
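For reference, this is roughly how I run deviceQuery on the host (the path is just where the CUDA samples ended up on my machine):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
make
./deviceQuery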
Following the tutorial, the result of this command on the host
ls -la /dev | grep nvidia
is
crw-rw-rw- 1 root root 250, 0 Nov 3 06:55 nvidia-uvm
crw-rw-rw- 1 root root 195, 0 Nov 3 06:55 nvidia0
crw-rw-rw- 1 root root 195, 255 Nov 3 06:55 nvidiactl
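In case it matters, /dev/nvidia-uvm was created roughly the way the tutorial describes (I'm paraphrasing from memory; the major number, 250 here, comes from /proc/devices):

sudo modprobe nvidia-uvm
D=$(grep nvidia-uvm /proc/devices | awk '{print $1}')
sudo mknod -m 666 /dev/nvidia-uvm c $D 0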
The result of
cat /proc/driver/nvidia/version
shows the same result inside the container as on the host:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.55 Thu Oct 8 15:18:00 PDT 2015
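The container itself is started along the lines of the tutorial, roughly like this (the image name is just a placeholder for the CUDA image I'm using):

sudo docker run -ti \
  --device /dev/nvidia0:/dev/nvidia0 \
  --device /dev/nvidiactl:/dev/nvidiactl \
  --device /dev/nvidia-uvm:/dev/nvidia-uvm \
  my-cuda-image /bin/bash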
But when running deviceQuery inside the container, it fails with this result:
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
When I try to install the driver on the host, it shows this error:
An NVIDIA kernel module 'nvidia-uvm' appears to already be loaded in your kernel. This may be because it is in use (for example, by an X server, a CUDA program, or the NVIDIA Persistence Daemon), but
this may also happen if your kernel was configured without support for module unloading. Please be sure to exit any programs that may be using the GPU(s) before attempting to upgrade your driver. If
no GPU-based programs are running, you know that your kernel supports module unloading, and you still receive this message, then an error may have occured that has corrupted an NVIDIA kernel module's
usage count, for which the simplest remedy is to reboot your computer.
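For completeness, the install attempt that produces this message is roughly the following (the exact .run filename may differ on my machine):

sudo sh ./NVIDIA-Linux-x86_64-352.55.run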
I don't know what to do anymore. Can someone help me?