Building Caffe with the patch for TensorRT INT8 batch generation fails

The TensorRT user guide (NVIDIA Documentation Center | NVIDIA Developer) explains how to generate batch files for TensorRT INT8 calibration for Caffe users.

I have followed the steps:

  • Clone Caffe at the specific commit (473f143f9422e7fc66e9590da6b2a1bb88e50b2f)
  • Apply the patch provided at /samples/sampleINT8
  • CMake & make using the Makefile.config that had previously worked on my machine.
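For concreteness, the steps above look roughly like this. The commit hash is the one from the TensorRT docs; the patch file name and TensorRT install path vary by release, so treat those as placeholders:

```shell
# Clone Caffe at the commit the TensorRT docs specify
git clone https://github.com/BVLC/caffe.git
cd caffe
git checkout 473f143f9422e7fc66e9590da6b2a1bb88e50b2f

# Apply the patch shipped under samples/sampleINT8 (name/path vary by release)
patch -p1 < <path-to-TensorRT>/samples/sampleINT8/<provided-patch>

# Build with a known-good Makefile.config
make -j"$(nproc)"
```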

But I am getting the following error:

CXX src/caffe/layer.cpp
In file included from ./include/caffe/util/device_alternate.hpp:40:0,
                 from ./include/caffe/common.hpp:19,
                 from ./include/caffe/blob.hpp:8,
                 from ./include/caffe/layer.hpp:8,
                 from src/caffe/layer.cpp:2:
./include/caffe/util/cudnn.hpp: In function ‘const char* cudnnGetErrorString(cudnnStatus_t)’:
./include/caffe/util/cudnn.hpp:21:10: warning: enumeration value ‘CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING’ not handled in switch [-Wswitch]
   switch (status) {
          ^
./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::setConvolutionDesc(cudnnConvolutionStruct**, cudnnTensorDescriptor_t, cudnnFilterDescriptor_t, int, int, int, int)’:
./include/caffe/util/cudnn.hpp:113:70: error: too few arguments to function ‘cudnnStatus_t cudnnSetConvolution2dDescriptor(cudnnConvolutionDescriptor_t, int, int, int, int, int, int, cudnnConvolutionMode_t, cudnnDataType_t)’
       pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
                                                                      ^
./include/caffe/util/cudnn.hpp:15:28: note: in definition of macro ‘CUDNN_CHECK’
     cudnnStatus_t status = condition; \
                            ^
In file included from ./include/caffe/util/cudnn.hpp:5:0,
                 from ./include/caffe/util/device_alternate.hpp:40,
                 from ./include/caffe/common.hpp:19,
                 from ./include/caffe/blob.hpp:8,
                 from ./include/caffe/layer.hpp:8,
                 from src/caffe/layer.cpp:2:
/usr/include/cudnn.h:500:27: note: declared here
 cudnnStatus_t CUDNNWINAPI cudnnSetConvolution2dDescriptor( cudnnConvolutionDescriptor_t convDesc,
                           ^
Makefile:575: recipe for target '.build_release/src/caffe/layer.o' failed
make: *** [.build_release/src/caffe/layer.o] Error 1

Any ideas why this is happening?
Thanks.

Hi jruizaranguren,

Have you solved this problem?

No. And when I try again to build batches for INT8, I will probably program it with the C++ API, unless TensorRT 3 brings a smoother path.

I just updated the cudnn.hpp file to solve this problem.
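For anyone hitting the same build error: it comes from cuDNN 6 adding a cudnnDataType_t argument to cudnnSetConvolution2dDescriptor(), which the pinned Caffe commit predates. The usual edit to ./include/caffe/util/cudnn.hpp guards the call on the cuDNN version, roughly like this (this mirrors the later upstream Caffe fix; check your cuDNN headers before copying):

```cpp
// In ./include/caffe/util/cudnn.hpp, inside setConvolutionDesc().
// cuDNN 6 added a cudnnDataType_t parameter to
// cudnnSetConvolution2dDescriptor(), so guard on the version macro.
#if CUDNN_VERSION_MIN(6, 0, 0)
  CUDNN_CHECK(cudnnSetConvolution2dDescriptor(*conv,
      pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION,
      dataType<Dtype>::type));
#else
  CUDNN_CHECK(cudnnSetConvolution2dDescriptor(*conv,
      pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
#endif
```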

Btw, how do you set the data type, like INT8 or FP16, in TensorRT?

Thanks

Hi Zimenglan,

There is an example, sampleINT8, in the samples/ folder of TensorRT for INT8 inference. You can look at that for reference.
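In the TensorRT 2/3-era C++ API that sampleINT8 uses, the precision is selected on the builder before the engine is built. A minimal sketch, assuming that API version (method names changed in later releases):

```cpp
// Sketch against the TensorRT 2/3-era nvinfer1::IBuilder API, as used
// in sampleINT8. 'builder' is an IBuilder*, 'calibrator' an IInt8Calibrator.
builder->setInt8Mode(true);               // build the engine in INT8
builder->setInt8Calibrator(&calibrator);  // source of calibration batches
// or, for FP16 on hardware that supports it:
builder->setHalf2Mode(true);
```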

I would like to ask you one question: when you generate batches, what exactly is in the batch files?
I guess they will be test images, but how many images are in one batch, and how are they selected?

Have you run the sampleINT8 example?