I tried to modify the deepstream_test1_app that ships with DeepStream 4.0.1 on a Jetson Nano to use the face-detection models from this repository: https://github.com/PKUZHO/MTCNN_FaceDetection_TensorRT.
Here is the content of the configuration file:
[property]
gpu-id=0
# net-scale-factor=0.0039215697906911373
model-file=det1_relu.caffemodel
proto-file=det1_relu.prototxt
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2;prob1
[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1
Here is the content of labels.txt:
Face
Here is the error:
Now playing:face_detection_config.txt
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:00.905058496 31713 0x558c656e10 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<nv-inference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.905254126 31713 0x558c656e10 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<nv-inference-engine> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:01:19.596060515 31713 0x558c656e10 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<nv-inference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /deepstream-test-applications/models/mtcnn_tensorrt_caffe/det1_relu.caffemodel_b1_fp16.engine
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
0:01:19.806082258 31713 0x7f44003d40 WARN nvinfer gstnvinfer.cpp:1149:convert_batch_and_push_to_input_thread:<nv-inference-engine> error: NvBufSurfTransform failed with error -2 while converting buffer
Error received from element nv-inference-engine: NvBufSurfTransform failed with error -2 while converting buffer.
Deleting pipeline
How can I fix this problem? And how can I dig deeper to find out what error -2 means?