Greetings, I am having trouble with an error when I run “make all” inside the Caffe source directory. Please let me know what to do or what direction I should take. Thank you! I am installing Caffe on the Jetson TX1 with CUDA 9.0.
I get this error:
CXX src/caffe/util/cudnn.cpp
In file included from ./include/caffe/util/cudnn.hpp:5:0,
from src/caffe/util/cudnn.cpp:2:
/usr/include/cudnn.h:63:26: fatal error: driver_types.h: No such file or directory
compilation terminated.
Makefile:581: recipe for target ‘.build_release/src/caffe/util/cudnn.o’ failed
make: *** [.build_release/src/caffe/util/cudnn.o] Error 1
Hi,
This error is usually caused by a problem with the cuDNN package installation.
Could you first check whether the cuDNN package is installed correctly?
The cuDNN package can be installed directly via JetPack.
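As a quick sanity check, you can also ask dpkg which cuDNN packages are present (a sketch; the `libcudnn` package names are an assumption and vary by JetPack release):

```shell
# List installed cuDNN Debian packages; JetPack typically installs
# libcudnn7 and libcudnn7-dev (names vary by release)
pkgs=$(dpkg -l 2>/dev/null | grep -i cudnn || true)
if [ -n "$pkgs" ]; then
  echo "$pkgs"
else
  echo "no cuDNN packages found via dpkg"
fi
```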
Thanks.
Hello, thanks for your response.
How can I check that cuDNN was installed correctly?
Thanks again for your help.
Hi,
First, please check if you have cuDNN installed in the ‘/usr/lib/aarch64-linux-gnu’ folder.
ll /usr/lib/aarch64-linux-gnu/libcudnn*
------------------------------------------------------------------
lrwxrwxrwx 1 root root        29 Jul 23 09:27 /usr/lib/aarch64-linux-gnu/libcudnn.so -> /etc/alternatives/libcudnn_so
lrwxrwxrwx 1 root root        17 Nov 17  2017 /usr/lib/aarch64-linux-gnu/libcudnn.so.7 -> libcudnn.so.7.0.5
-rw-r--r-- 1 root root 246459256 Nov 17  2017 /usr/lib/aarch64-linux-gnu/libcudnn.so.7.0.5
lrwxrwxrwx 1 root root        32 Jul 23 09:27 /usr/lib/aarch64-linux-gnu/libcudnn_static.a -> /etc/alternatives/libcudnn_stlib
-rw-r--r-- 1 root root 249273640 Nov 17  2017 /usr/lib/aarch64-linux-gnu/libcudnn_static_v7.a
------------------------------------------------------------------
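You can also read the installed cuDNN version straight from the header (a sketch; the `/usr/include/cudnn.h` path is the usual Jetson location for cuDNN 7 and is an assumption — adjust if your header lives elsewhere):

```shell
# Print the cuDNN version macros from the header; for cuDNN 7 these
# live directly in cudnn.h (later releases moved them to cudnn_version.h)
hdr=/usr/include/cudnn.h
if [ -f "$hdr" ]; then
  ver=$(grep -E '^#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$hdr" || true)
  echo "${ver:-version macros not found in $hdr}"
else
  echo "cudnn.h not found at $hdr (cuDNN headers may be missing)"
fi
```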
Then you can check cuDNN functionality with our official sample.
$ cp -r /usr/src/cudnn_samples_v7/ .
$ cd cudnn_samples_v7/mnistCUDNN/
$ make
$ ./mnistCUDNN
------------------------------------------------------------------
cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)
Host compiler version : GCC 5.4.0
… …
Test passed!
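As a side note on reading that output: `cudnnGetVersion()` packs the version as MAJOR×1000 + MINOR×100 + PATCHLEVEL, so the 7005 above decodes to 7.0.5. A quick shell decode:

```shell
# Decode a packed cuDNN version number (MAJOR*1000 + MINOR*100 + PATCHLEVEL)
v=7005
decoded="$((v / 1000)).$((v % 1000 / 100)).$((v % 100))"
echo "$decoded"   # prints 7.0.5
```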
Thanks.
I met the same problem when trying to install onnx-tensorrt:
/usr/include/cudnn.h:63:10: fatal error: driver_types.h: No such file or directory
#include "driver_types.h"
Testing the cuDNN function shows “Test passed”:
$ cp -r /usr/src/cudnn_samples_v7/ .
$ cd cudnn_samples_v7/mnistCUDNN/
$ make
$ ./mnistCUDNN
Any idea?
Hi,
I'm getting the same error as zjh.2008.09 when trying to compile onnx-tensorrt.
I’m using this image
https://developer.download.nvidia.com/training/nano/dlinano_v1-0-0_image_20GB.zip
Regards, Markus
Ok,
Problem probably solved; see this GitHub issue (opened 18 Dec 2018, closed 23 Jan 2019):
Hi there, this issue is related to #81 so I'll also tag @goldgeisser .
After I … fixed the symlink I was still having that `fatal error: driver_types.h: No such file or directory`, the complete log is:
```
[ 2%] Running gen_proto.py on onnx/onnx.in.proto
Processing /workspace/onnx-tensorrt/third_party/onnx/onnx/onnx.in.proto
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx.proto
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx.proto3
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx.pb.h
generating /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_pb.py
[ 4%] Running C++ protocol buffer compiler on /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx.proto
[ 4%] Built target gen_onnx_proto
[ 6%] Running gen_proto.py on onnx/onnx-operators.in.proto
Processing /workspace/onnx-tensorrt/third_party/onnx/onnx/onnx-operators.in.proto
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx.proto
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx.proto3
Writing /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators.pb.h
generating /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_operators_pb.py
[ 8%] Running C++ protocol buffer compiler on /workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx.proto
Scanning dependencies of target onnx_proto
[ 11%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx_onnx2trt_onnx.pb.cc.o
/workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx.pb.cc:598:13: warning: 'dynamic_init_dummy_onnx_2fonnx_5fonnx2trt_5fonnx_2eproto' defined but not used [-Wunused-variable]
static bool dynamic_init_dummy_onnx_2fonnx_5fonnx2trt_5fonnx_2eproto = []() { AddDescriptors_onnx_2fonnx_5fonnx2trt_5fonnx_2eproto(); return true; }();
^
[ 13%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx2trt_onnx.pb.cc.o
/workspace/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx.pb.cc:204:13: warning: 'dynamic_init_dummy_onnx_2fonnx_2doperators_5fonnx2trt_5fonnx_2eproto' defined but not used [-Wunused-variable]
static bool dynamic_init_dummy_onnx_2fonnx_2doperators_5fonnx2trt_5fonnx_2eproto = []() { AddDescriptors_onnx_2fonnx_2doperators_5fonnx2trt_5fonnx_2eproto(); return true; }();
^
[ 15%] Linking CXX static library libonnx_proto.a
[ 20%] Built target onnx_proto
[ 22%] Building CUDA object CMakeFiles/nvonnxparser_plugin.dir/FancyActivation.cu.o
[ 24%] Building CUDA object CMakeFiles/nvonnxparser_plugin.dir/ResizeNearest.cu.o
[ 26%] Building CUDA object CMakeFiles/nvonnxparser_plugin.dir/Split.cu.o
[ 28%] Building CXX object CMakeFiles/nvonnxparser_plugin.dir/InstanceNormalization.cpp.o
In file included from /workspace/onnx-tensorrt/InstanceNormalization.hpp:27:0,
from /workspace/onnx-tensorrt/InstanceNormalization.cpp:23:
/usr/include/cudnn.h:63:26: fatal error: driver_types.h: No such file or directory
compilation terminated.
CMakeFiles/nvonnxparser_plugin.dir/build.make:101: recipe for target 'CMakeFiles/nvonnxparser_plugin.dir/InstanceNormalization.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser_plugin.dir/InstanceNormalization.cpp.o] Error 1
CMakeFiles/Makefile2:185: recipe for target 'CMakeFiles/nvonnxparser_plugin.dir/all' failed
make[1]: *** [CMakeFiles/nvonnxparser_plugin.dir/all] Error 2
Makefile:151: recipe for target 'all' failed
make: *** [all] Error 2
```
To rule out installation problems I might have committed, I decided to reproduce it in a container.
I'm pulling this container (from `nvidia-docker` at https://ngc.nvidia.com)
```
docker pull nvcr.io/nvidia/tensorrt:18.11-py3
```
And running with:
```
nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:18.11-py3
```
so the versions are the following:
- cmake `cmake --version`:
```
cmake version 3.12.1
CMake suite maintained and supported by Kitware (kitware.com/cmake).
```
- gcc `gcc --version`:
```
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
- nvidia-smi:
```
Tue Dec 18 14:39:10 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48 Driver Version: 410.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:03:00.0 Off | N/A |
| 28% 25C P8 8W / 250W | 2MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:04:00.0 On | N/A |
| 23% 40C P8 12W / 250W | 1099MiB / 11175MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
- nvcc `nvcc --version`:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
```
- tensorrt `dpkg -l | grep -i tensorrt`
```
ii libnvinfer-dev 5.0.2-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.0.2-1+cuda10.0 all TensorRT samples and documentation
ii libnvinfer5 5.0.2-1+cuda10.0 amd64 TensorRT runtime libraries
ii python3-libnvinfer 5.0.2-1+cuda10.0 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.0.2-1+cuda10.0 amd64 Python 3 development package for TensorRT
ii tensorrt 5.0.2.6-1+cuda10.0 amd64 Meta package of TensorRT
```
- protobuf (latest version installed from [repo](https://github.com/protocolbuffers/protobuf))
Also, the output of `locate driver_types.h` is empty, but the symlink for cuda seems to be there, since the output of `ll /usr/local/` is:
```
total 64
drwxr-xr-x 1 root root 4096 Nov 3 01:57 ./
drwxr-xr-x 1 1000 1000 4096 Nov 27 2017 ../
drwxr-xr-x 1 root root 4096 Dec 18 14:24 bin/
lrwxrwxrwx 1 root root 9 Nov 3 01:33 cuda -> cuda-10.0/
drwxr-xr-x 1 root root 4096 Nov 3 01:45 cuda-10.0/
drwxr-xr-x 3 root root 4096 Nov 3 01:57 doc/
drwxr-xr-x 2 root root 4096 Oct 5 18:03 etc/
drwxr-xr-x 2 root root 4096 Oct 5 18:03 games/
drwxr-xr-x 1 root root 4096 Dec 18 14:24 include/
drwxr-xr-x 1 root root 4096 Dec 18 14:24 lib/
lrwxrwxrwx 1 root root 9 Oct 5 18:03 man -> share/man/
drwxr-xr-x 7 root root 4096 Nov 3 01:42 mpi/
drwxr-xr-x 2 root root 4096 Oct 5 18:03 sbin/
drwxr-xr-x 1 root root 4096 Nov 3 01:57 share/
drwxr-xr-x 2 root root 4096 Oct 5 18:03 src/
```
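Since `locate` depends on an index database that is often absent or stale inside a container, a direct filesystem search is more reliable (a sketch; searching under `/usr/local` assumes the CUDA toolkit was installed there):

```shell
# Search the filesystem directly instead of relying on locate's database
# (the /usr/local prefix is an assumption; CUDA may be installed elsewhere)
found=$(find /usr/local -name driver_types.h 2>/dev/null | head -n 1 || true)
echo "${found:-driver_types.h not found under /usr/local}"
```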
Only after passing the CUDA include dir variable to cmake was I able to solve it:
```
cmake -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include -DTENSORRT_ROOT=/opt/tensorrt ..
```
What I found weird is that even though the symlink seems to point to the correct location, I couldn't get it to build without passing that variable. Maybe a note in the README would suffice.
Or could it be some cmake shenanigans? I'll close the issue after I get some feedback, since it's easily fixable; I just wanted it to be here so that anyone who encounters something similar can get some insight.
Also, maybe some CI would be nice; I volunteer to set it up in either Travis, Circle, or maybe even Jenkins if NVIDIA (@benbarsdell) can provide a GPU-enabled container. @yinghai I could also set up code formatting and some linting so it'll be easier to contribute, maybe with an image from NGC? I'm planning on submitting some PRs I did for some layers.
So the include dirs must be passed explicitly:
cmake … -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53" -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include
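To double-check that the include path actually resolves the missing header before rerunning a long cmake build, a minimal compile probe can help (a sketch; the `/usr/local/cuda/include` path and the use of gcc are assumptions — substitute your toolkit path and compiler):

```shell
# Try to compile a one-line file that includes driver_types.h,
# with the CUDA include path passed explicitly (path is an assumption)
printf '#include "driver_types.h"\nint main(void){return 0;}\n' > /tmp/probe.c
if gcc -I/usr/local/cuda/include -c /tmp/probe.c -o /tmp/probe.o 2>/dev/null; then
  result="driver_types.h resolved with -I/usr/local/cuda/include"
else
  result="still unresolved: check that the CUDA toolkit headers are installed"
fi
echo "$result"
```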
Regards, Markus
Good to know this.
Thanks for the update.