Using DeepStream 4.0.2 tlt-encoded-model=... to avoid tlt-converter

Because the Jetson tlt-converter (which is TensorRT 5 based) does not work under the current production release JetPack 4.3 (which is TensorRT 6 based), I would like to use the DeepStream 4.0.2 feature in the nvinfer configuration file that allows loading the exported .etlt file (TLT-encoded model) directly.

I’ve just edited the configuration file of the deepstream-test3 app to add a TLT-encoded model (resnet10_detector.etlt) and its key:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-encoded-model=./resnet10_detector.etlt
tlt-model-key=##### (Key removed by intention)
labelfile-path=./labels.txt
int8-calib-file=./calibration.bin
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1
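
The app is then launched as usual, for example (the source URI is just a placeholder; any stream will do):

./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264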

Unfortunately, this approach does not work and throws the following errors:

Creating LL OSD context new
0:00:01.159710362  9428   0x55c0aa1640 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel:UFF input blob name not provided
0:00:01.159975422  9428   0x55c0aa1640 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:01.160014485  9428   0x55c0aa1640 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.160063705  9428   0x55c0aa1640 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:01.160088862  9428   0x55c0aa1640 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Deleting pipeline

Since we don’t provide a UFF model, the “UFF input blob name not provided” error above doesn’t seem to make sense.

Thanks for your help!

Well, my fault.
Adding the input dimensions and the input blob name via the UFF-style properties solved the issue; the exported .etlt apparently wraps a UFF model internally, which is why nvinfer still asks for the UFF input blob name:

uff-input-dims=3;320;480;0
uff-input-blob-name=input_1
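
For reference, here is a sketch of the [property] section with the two lines merged in, assuming everything else stays unchanged. The uff-input-dims fields are channel;height;width;input-order, where 0 means NCHW:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-encoded-model=./resnet10_detector.etlt
tlt-model-key=##### (Key removed by intention)
## UFF input parameters required for the TLT-encoded model:
## channel;height;width;input-order (0 = NCHW)
uff-input-dims=3;320;480;0
uff-input-blob-name=input_1
labelfile-path=./labels.txt
int8-calib-file=./calibration.bin
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd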