CUDA insufficient driver version

Hello,

I installed the latest JetPack, version 2.3.1, on the TX1, but when I try to run the CUDA samples I get an error that the CUDA driver version is insufficient for the CUDA runtime version.

For example, when I run deviceQuery I get:

./deviceQuery Starting...

     CUDA Device Query (Runtime API) version (CUDART static linking)

    cudaGetDeviceCount returned 35
    -> CUDA driver version is insufficient for CUDA runtime version
    Result = FAIL

or when I run oceanFFT:

CUDA error at ../../common/inc/helper_cuda.h:1133 code=35(cudaErrorInsufficientDriver) "cudaGetDeviceCount(&device_count)"

The host machine is running Ubuntu 14.04, and the commands above were run directly on the TX1, not over SSH. Until now I had been running JetPack 2.1 with no problems; I reinstalled it just to make sure that the examples above do run correctly there, and they do.
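For what it's worth, error 35 (cudaErrorInsufficientDriver) just means that the installed driver reports support for an older CUDA version than the runtime the binary was built against. A minimal check along these lines should print both versions side by side (a sketch using the standard cudaDriverGetVersion/cudaRuntimeGetVersion runtime calls; build with nvcc, untested on my board):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    // Version the installed driver supports (0 if no driver is loaded).
    cudaDriverGetVersion(&driverVersion);
    // Version of the CUDA runtime this binary was built against.
    cudaRuntimeGetVersion(&runtimeVersion);
    std::printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                driverVersion / 1000, (driverVersion % 100) / 10,
                runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    // When the driver version is lower than the runtime version, real CUDA
    // calls such as cudaGetDeviceCount() fail with error 35.
    return 0;
}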

Has anyone else had this problem? Is there a known solution?

Hi karachalios_s,

Your issue seems similar to [url]https://devtalk.nvidia.com/default/topic/969469/[/url].
Have you installed CUDA 8 from JetPack?

Hi kayccc,

I have seen that topic, but that user's problem was that the host machine was not running Ubuntu 14.04.

When I run the suggested commands:

sudo dpkg --list | egrep -i '(nvidia|cuda)'
sudo ldconfig -p | egrep -i '(nvidia|cuda)'

I get the same output as in the other post, so it seems that CUDA 8 is installed.
Yes, I installed CUDA 8 from JetPack. I did nothing apart from running the JetPack installer and selecting the full option to install everything.

At the end of the other thread there is another user with the same problem as mine, which is why I started this new thread: so we can focus on my specific issue and mark an accepted answer if we find one.

I'm getting the same error on a Windows 10 install with two Titan X cards in SLI (pre-Pascal). I went through the normal installer process. After that it wanted to update GeForce Experience, and that crashed with a runtime error.

C:\ProgramData\NVIDIA Corporation\CUDA Samples\v8.0\bin\win64\Debug>histogram.exe
[[histogram]] - Starting...
CUDA error at c:\programdata\nvidia corporation\cuda samples\v8.0\common\inc\helper_cuda.h:1133 code=35(cudaErrorInsufficientDriver) "cudaGetDeviceCount(&device_count)"

I installed CUDA 8 (the network installer from the NVIDIA website; it reported success) and compiled the sample programs with VS 2015:

========== Build: 145 succeeded, 10 failed, 0 up-to-date, 0 skipped ==========

I notice in the component list below that the CUDA driver DLL is "NVCUDA.DLL 8.17.13.5362 NVIDIA CUDA 7.5.15 driver", i.e. still a CUDA 7.5 driver.
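To double-check which CUDA version the installed driver DLL actually reports, the same kind of version query can be made against nvcuda.dll directly. A sketch, assuming the Win32 LoadLibrary/GetProcAddress interface (cuDriverGetVersion is the standard driver API entry point; untested here):

#include <windows.h>
#include <cstdio>

int main() {
    // Load the driver-side CUDA DLL that the runtime talks to.
    HMODULE h = LoadLibraryA("nvcuda.dll");
    if (!h) { std::printf("nvcuda.dll not found\n"); return 1; }

    // cuDriverGetVersion returns a CUresult, which is an int.
    typedef int (__stdcall *cuDriverGetVersion_t)(int *);
    cuDriverGetVersion_t getVersion =
        (cuDriverGetVersion_t)GetProcAddress(h, "cuDriverGetVersion");
    if (!getVersion) { std::printf("cuDriverGetVersion not exported\n"); return 1; }

    int v = 0;
    getVersion(&v);
    // A 7.5 driver paired with CUDA 8 samples produces exactly error 35.
    std::printf("Driver DLL supports CUDA %d.%d\n", v / 1000, (v % 100) / 10);
    FreeLibrary(h);
    return 0;
}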

In the past I have also done this a couple of times on a Jetson TK1, on Ubuntu, and on various Windows versions.

Although Windows had already been updated (the CUDA examples were failing after that update), Windows wanted to update itself again. That didn't change the error.

Did a full rebuild
========== Rebuild All: 145 succeeded, 10 failed, 0 skipped ==========
Same result.

NVIDIA Installer

cuda_8.0.44_win10_network
File Version 1.0.5.0
Product Version 1.0.5

Before, last year, I installed
cuda_7.5.18_win10_network

NVIDIA System Information report created on: 12/14/2016 14:31:12
System name: DESKTOP-C7EMGBP

[Display]
Operating System: Windows 10 Home, 64-bit
DirectX version: 12.0
GPU processor: GeForce GTX TITAN X
Driver version: 353.62
Direct3D API version: 12
Direct3D feature level: 12_1
CUDA Cores: 3072
Core clock: 1002 MHz
Memory data rate: 7010 MHz
Memory interface: 384-bit
Memory bandwidth: 336.48 GB/s
Total available graphics memory: 44967 MB
Dedicated video memory: 12288 MB GDDR5
System video memory: 0 MB
Shared system memory: 32679 MB
Video BIOS version: 84.00.1F.00.01
IRQ: Not used
Bus: PCI Express x16 Gen3
Device ID: 10DE 17C2 113210DE
Part Number: G600 0000
GPU processor: GeForce GTX TITAN X
Driver version: 353.62
Direct3D API version: 12
Direct3D feature level: 12_1
CUDA Cores: 3072
Core clock: 1002 MHz
Memory data rate: 7010 MHz
Memory interface: 384-bit
Memory bandwidth: 336.48 GB/s
Total available graphics memory: 44967 MB
Dedicated video memory: 12288 MB GDDR5
System video memory: 0 MB
Shared system memory: 32679 MB
Video BIOS version: 84.00.1F.00.01
IRQ: Not used
Bus: PCI Express x16 Gen3
Device ID: 10DE 17C2 113210DE
Part Number: G600 0000

[Components]

NvUpdtr.dll 2.11.4.0 NVIDIA Update Components
NvUpdt.dll 2.11.4.0 NVIDIA Update Components
nvui.dll 8.17.13.5362 NVIDIA User Experience Driver Component
nvxdsync.exe 8.17.13.5362 NVIDIA User Experience Driver Component
nvxdplcy.dll 8.17.13.5362 NVIDIA User Experience Driver Component
nvxdbat.dll 8.17.13.5362 NVIDIA User Experience Driver Component
nvxdapix.dll 8.17.13.5362 NVIDIA User Experience Driver Component
NVCPL.DLL 8.17.13.5362 NVIDIA User Experience Driver Component
nvCplUIR.dll 8.1.850.0 NVIDIA Control Panel
nvCplUI.exe 8.1.850.0 NVIDIA Control Panel
nvWSSR.dll 6.14.13.5362 NVIDIA Workstation Server
nvWSS.dll 6.14.13.5362 NVIDIA Workstation Server
nvViTvSR.dll 6.14.13.5362 NVIDIA Video Server
nvViTvS.dll 6.14.13.5362 NVIDIA Video Server
NVSTVIEW.EXE 7.17.13.5362 NVIDIA 3D Vision Photo Viewer
NVSTTEST.EXE 7.17.13.5362 NVIDIA 3D Vision Test Application
NVSTRES.DLL 7.17.13.5362 NVIDIA 3D Vision Module
nvDispSR.dll 6.14.13.5362 NVIDIA Display Server
NVMCTRAY.DLL 8.17.13.5362 NVIDIA Media Center Library
nvDispS.dll 6.14.13.5362 NVIDIA Display Server
PhysX 09.16.0318 NVIDIA PhysX
NVCUDA.DLL 8.17.13.5362 NVIDIA CUDA 7.5.15 driver
nvGameSR.dll 6.14.13.5362 NVIDIA 3D Settings Server
nvGameS.dll 6.14.13.5362 NVIDIA 3D Settings Server

Perhaps both the older and the newer CUDA are still installed. If so, only one of them is reached first in the default linker path. What do you see from:

sudo ldconfig -p | egrep -i cuda
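
If both versions do turn out to be present, one way to see which libcuda.so.1 the dynamic loader actually resolves at run time, independent of what ldconfig lists, is to dlopen it and ask where the symbol came from. A sketch, assuming glibc's dlfcn interface (build with g++ file.cpp -ldl):

#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for dladdr()
#endif
#include <dlfcn.h>
#include <cstdio>

int main() {
    // Load whichever libcuda.so.1 the runtime linker resolves first.
    void *h = dlopen("libcuda.so.1", RTLD_NOW);
    if (!h) { std::fprintf(stderr, "dlopen failed: %s\n", dlerror()); return 1; }

    // cuDriverGetVersion is part of the driver API; CUresult is an int.
    typedef int (*cuDriverGetVersion_t)(int *);
    cuDriverGetVersion_t getVersion =
        reinterpret_cast<cuDriverGetVersion_t>(dlsym(h, "cuDriverGetVersion"));
    if (!getVersion) { std::fprintf(stderr, "dlsym failed: %s\n", dlerror()); return 1; }

    int v = 0;
    getVersion(&v);
    std::printf("Loaded driver supports CUDA %d.%d\n", v / 1000, (v % 100) / 10);

    // dladdr() reports which file the symbol was actually resolved from.
    Dl_info info;
    if (dladdr(reinterpret_cast<void *>(getVersion), &info) && info.dli_fname)
        std::printf("Resolved from: %s\n", info.dli_fname);

    dlclose(h);
    return 0;
}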

I used the non-network installer with expert settings. It works now.

Sorry, scratch that :/ Windows 10 sneakily reinstalls the 7.5 CUDA driver in a background update.

Google directed me here as the most recent report of this problem. So if it happens to someone else, I thought this might be helpful, even though my problem is on Windows and not on a TX/TK.

Hi karachalios_s,

I confirmed that I can run deviceQuery and oceanFFT on my side.
Here are my steps for your reference:

  1. Install JetPack 2.3
  2. The NVIDIA CUDA Toolkit includes the sample programs in source form. Compile them by changing to the samples directory and running make:
    $ cd ~/NVIDIA_CUDA-8.0_Samples
    $ make
  3. $ cd ~/NVIDIA_CUDA-8.0_Samples/bin/aarch64/linux/release
  4. Run the binaries

Could you try uninstalling all the CUDA and driver packages, reinstalling them, and running make again?
It is likely that some files are missing.

Hello All,

Thank you @carolyuu, recompiling the sample programs solved the issue.
I am not sure why I had to recompile them, though. The steps that I followed were:

  1. Install JetPack 2.3.1
  2. Run the sample programs. -> They fail with the original errors.
  3. $ cd ~/NVIDIA_CUDA-8.0_Samples
  4. $ make
  5. $ cd ~/NVIDIA_CUDA-8.0_Samples/bin/aarch64/linux/release
  6. Run the sample programs. -> Correct, expected output.
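
In case it helps anyone who lands here later: the failing call is the very first runtime API call the samples make, so a stripped-down equivalent of what deviceQuery checks looks roughly like this (a sketch, build with nvcc; on a healthy install it lists the GPU instead of failing with 35):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // This is the call that fails with 35 (cudaErrorInsufficientDriver)
    // when the driver is older than the runtime the binary was built with.
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount returned %d\n-> %s\n",
                    static_cast<int>(err), cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}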

@linuxdev Just for completeness, here is the output that you asked for:

sudo ldconfig -p | egrep -i cuda

libnvsample_cudaprocess.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so
libnvrtc.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvrtc.so.8.0
libnvrtc.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvrtc.so
libnvrtc-builtins.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvrtc-builtins.so.8.0
libnvrtc-builtins.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvrtc-builtins.so
libnvgraph.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvgraph.so.8.0
libnvgraph.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvgraph.so
libnvblas.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvblas.so.8.0
libnvblas.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvblas.so
libnvToolsExt.so.1 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvToolsExt.so.1
libnvToolsExt.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnvToolsExt.so
libnpps.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnpps.so.8.0
libnpps.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnpps.so
libnppitc.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppitc.so.8.0
libnppitc.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppitc.so
libnppisu.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppisu.so.8.0
libnppisu.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppisu.so
libnppist.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppist.so.8.0
libnppist.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppist.so
libnppim.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppim.so.8.0
libnppim.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppim.so
libnppig.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppig.so.8.0
libnppig.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppig.so
libnppif.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppif.so.8.0
libnppif.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppif.so
libnppidei.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppidei.so.8.0
libnppidei.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppidei.so
libnppicom.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppicom.so.8.0
libnppicom.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppicom.so
libnppicc.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppicc.so.8.0
libnppicc.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppicc.so
libnppial.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppial.so.8.0
libnppial.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppial.so
libnppi.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppi.so.8.0
libnppi.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppi.so
libnppc.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppc.so.8.0
libnppc.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libnppc.so
libicudata.so.55 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libicudata.so.55
libicudata.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libicudata.so
libcusparse.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcusparse.so.8.0
libcusparse.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcusparse.so
libcusolver.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcusolver.so.8.0
libcusolver.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcusolver.so
libcurand.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcurand.so.8.0
libcurand.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcurand.so
libcuinj64.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcuinj64.so.8.0
libcuinj64.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcuinj64.so
libcufftw.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcufftw.so.8.0
libcufftw.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcufftw.so
libcufft.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcufft.so.8.0
libcufft.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcufft.so
libcudart.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcudart.so.8.0
libcudart.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcudart.so
libcuda.so.1 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1
libcuda.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libcuda.so
libcuda.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
libcublas.so.8.0 (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcublas.so.8.0
libcublas.so (libc6,AArch64) => /usr/local/cuda-8.0/targets/aarch64-linux/lib/libcublas.so

Other than the "/usr/lib/aarch64" entries, ldconfig sees it all as CUDA 8 (and perhaps the aarch64 entries are CUDA 8 as well, but the naming convention does not make this obvious). It looks like there is no issue of mixed versions here, at least not through any automatic linking mechanism (some software is compiled to look at a specific file and does not use the linker, so those projects could still look at the wrong location).

I am curious whether this file is a symbolic link, and if it is, what it points to:

ls -l /usr/lib/aarch64-linux-gnu/libicudata.so.55

On my R24.2.1 I have that file and it points at libicudata.so.55.1.

Also, what does the package manager say provides the file? (Files in "/usr/local" are usually not provided by a package, but files in "/usr/lib/aarch64-linux-gnu" probably all come from a package.)

sudo dpkg -S /usr/lib/aarch64-linux-gnu/libicudata.so.55.1