cuDNN Test did not pass

Hi All,

I followed the installation guide to install cuDNN, but I end up with:

ubuntu@ubuntu:~/cudnn_samples_v7/mnistCUDNN$ make clean && make
rm -rf *o
rm -rf mnistCUDNN
/usr/local/cuda/bin/nvcc -ccbin g++ -I/usr/local/cuda/include -IFreeImage/include  -m64    -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_53,code=compute_53 -o fp16_dev.o -c fp16_dev.cu
g++ -I/usr/local/cuda/include -IFreeImage/include   -o fp16_emu.o -c fp16_emu.cpp
g++ -I/usr/local/cuda/include -IFreeImage/include   -o mnistCUDNN.o -c mnistCUDNN.cpp
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
                 from /usr/local/cuda/include/cuda_runtime.h:90,
                 from /usr/include/cudnn.h:64,
                 from mnistCUDNN.cpp:30:
/usr/local/cuda/include/cuda_runtime_api.h:1683:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);
                                                                                                     ^
/usr/local/cuda/include/cuda_runtime_api.h:2930:102: error: use of enum ‘cudaFuncAttribute’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaFuncSetAttribute(const void *func, enum cudaFuncAttribute attr, int value);
                                                                                                      ^
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
                 from /usr/local/cuda/include/cuda_runtime.h:90,
                 from /usr/include/cudnn.h:64,
                 from mnistCUDNN.cpp:30:
/usr/local/cuda/include/cuda_runtime_api.h:5770:92: error: use of enum ‘cudaMemoryAdvise’ without previous declaration
 extern __host__ cudaError_t CUDARTAPI cudaMemAdvise(const void *devPtr, size_t count, enum cudaMemoryAdvise advice, int device);
                                                                                            ^
/usr/local/cuda/include/cuda_runtime_api.h:5827:98: error: use of enum ‘cudaMemRangeAttribute’ without previous declaration
 extern __host__ cudaError_t CUDARTAPI cudaMemRangeGetAttribute(void *data, size_t dataSize, enum cudaMemRangeAttribute attribute, const void *devPtr, size_t count);
                                                                                                  ^
/usr/local/cuda/include/cuda_runtime_api.h:5864:102: error: use of enum ‘cudaMemRangeAttribute’ without previous declaration
 extern __host__ cudaError_t CUDARTAPI cudaMemRangeGetAttributes(void **data, size_t *dataSizes, enum cudaMemRangeAttribute *attributes, size_t numAttributes, const void *devPtr, size_t count);
                                                                                                      ^
Makefile:200: recipe for target 'mnistCUDNN.o' failed
make: *** [mnistCUDNN.o] Error 1

I have installed CUDA 9.0:

ubuntu@ubuntu:~/NVIDIA_CUDA-9.0_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 560 Ti"
  CUDA Driver Version / Runtime Version          9.0 / 9.0
  CUDA Capability Major/Minor version number:    2.1
  Total amount of global memory:                 957 MBytes (1003159552 bytes)
MapSMtoCores for SM 2.1 is undefined.  Default to use 64 Cores/SM
MapSMtoCores for SM 2.1 is undefined.  Default to use 64 Cores/SM
  ( 8) Multiprocessors, ( 64) CUDA Cores/MP:     512 CUDA Cores
  GPU Max Clock rate:                            1645 MHz (1.64 GHz)
  Memory Clock rate:                             2004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 1
Result = PASS

I opened the file:
/usr/include/cudnn.h

and changed the line:
#include "driver_types.h"

to:
#include <driver_types.h>

and now it compiles…
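For anyone who prefers to script the same change, here is a minimal sketch. It assumes cudnn.h was installed to /usr/include (as with the .deb packages) and that a stale driver_types.h picked up by the quoted include is the cause; adjust paths if your install differs.

# See which driver_types.h the quoted include can resolve to; the directory of
# cudnn.h itself (/usr/include) is searched before /usr/local/cuda/include.
ls -l /usr/include/driver_types.h /usr/local/cuda/include/driver_types.h

# Back up the header, then switch the quoted include to an angle-bracket include,
# which is found via the -I/usr/local/cuda/include flag in the sample Makefile.
sudo cp /usr/include/cudnn.h /usr/include/cudnn.h.bak
sudo sed -i 's|#include "driver_types.h"|#include <driver_types.h>|' /usr/include/cudnn.h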


Thanks @peter_cz!

Your solution worked for me as well.

Had the same issue, with 2 x 1080ti + CUDA 9 + cudnn 7

Have the same issue here with CUDA9 + cudnn7 + GTX 1050ti

I would be very interested to know how such an error can go undetected.

Also, thanks to @peter_cz, your solution worked for me!

Your fix worked for me too on cuda-9.1 / cuDNN 7.0.5.15 when upgrading a Google Cloud instance.

Thanks a bunch, but you’d think NVIDIA would fix an obvious typo like this between releases.

@peter_cz your solution worked for me too on cuda-9.1 and cudnn7.0.5

I have the same issue here. I have CUDA 9.1 + cuDNN 7.1 + GTX 1080 Ti + nvidia driver 387.34

Works too.

Thanks. Works too.

I have CUDA 9.1 + cuDNN 7.1 + GTX 1050 Ti + nvidia driver 387.26 (included CUDA 9.1) + Ubuntu 16.04.4

That just solved my problem, bro! Thanks!

Thanks a lot! It works!

This worked for me also, thanks.

This works for me as well: CUDA 8 and cuDNN 7.1.3, NVIDIA 390 driver,
Ubuntu 16.04, Ryzen 1700X, 1070 8 GB

I am getting this error.

cudnnGetVersion() : 7103 , CUDNN_VERSION from cudnn.h : 7103 (7.1.3)
Cuda failurer version : GCC 5.4.0
Error: unknown error
error_util.h:93
Aborting…

Thanks it worked for me.

Same here cuda 9.1 cudnn 7.1.4 driver 390.30
ubuntu 16.04 geforce gtx 1050

Suggested solution worked!

I think that updating gcc and g++ to version 6 fixes this error
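In case it helps, a rough sketch of switching to gcc/g++ 6 on Ubuntu 16.04 (untested on my side; if gcc-6 is not in your default repositories you may need the ubuntu-toolchain-r/test PPA first):

# add the toolchain PPA only if gcc-6 is missing from the stock repos
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update

# install the newer toolchain and register it as an alternative
sudo apt-get install gcc-6 g++-6
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-6

# then select gcc-6 interactively
sudo update-alternatives --config gcc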

CUDA 9.2, cudnn 7.2 ubuntu 16.04 geforce gtx 1050
Suggested solution worked!
Why is this still bugged???

This worked for me also, using Ubuntu 18.04, CUDA 10.0, cuDNN 7.4.1.5.

Same with cuda 9.0 and cudnn 7.3.1.
The solution works for me also!

Thanks a lot, fix still works for Cuda 10.0 and cuDNN 7.4.1.5. Using Ubuntu 18.04.

Didn’t test with any compiler other than the default c++ 7.3.