How to use ssd_mobilenet_v2

I downloaded the ssd_mobilenet_v2 pre-trained model. How do I convert it to UFF format?
I used “python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py -o sample_ssd_relu6.uff” to convert it to UFF format, but it failed to generate a TensorRT engine file.

error log:
Creating LL OSD context new
0:00:00.841806398 32440 0x557db70730 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.841893067 32440 0x557db70730 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:03.970414222 32440 0x557db70730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Parameter check failed at: …/builder/Layers.h::setAxis::333, condition: axis >= 0
0:00:04.021535531 32440 0x557db70730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Concatenate/concat: all concat input tensors must have the same dimensions except on the concatenation axis
0:00:04.021654910 32440 0x557db70730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): Could not compute dimensions for Concatenate/concat, because the network is not valid
0:00:04.022331021 32440 0x557db70730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed while building cuda engine for network
0:00:04.032484197 32440 0x557db70730 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:04.032852123 32440 0x557db70730 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:04.032893322 32440 0x557db70730 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/xiukd/sutpc_app_nano/config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR

can’t set pipeline to playing state.
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/nvidia/config/config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

Hi,

We are checking this issue internally.
Will update more information with you later.

Thanks.

Hi,

We can run ssd_mobilenet_v2 with deepstream-app successfully.
Here are our steps for your reference:

1. Compile objectDetector_SSD sample:

$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
$ make -C nvdsinfer_custom_impl_ssd

2. Prepare the ssd_mobilenet_v2 UFF model:

$ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
$ tar zxvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz

Download the attached config.py and generate the UFF model with (a sketch of a typical config.py follows this step):

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py ./ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb -o ssd_mobilenet_v2.uff -O NMS -p ./config.py
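
For reference, config.py here is a graphsurgeon preprocessing script that maps the TensorFlow Object Detection API sub-graphs onto the TensorRT UFF plugins (GridAnchor_TRT, NMS_TRT, FlattenConcat_TRT). The sketch below shows the typical structure only; the plugin parameters (featureMapShapes, inputOrder, thresholds, etc.) are assumptions based on the public sampleUffSSD config, so treat the attached config.py as the authoritative version.

# config.py -- sketch of a graphsurgeon preprocessing script for ssd_mobilenet_v2
# (assumed structure; the attached config.py is authoritative)
import graphsurgeon as gs
import tensorflow as tf

# Replace the TF preprocessor sub-graph with the plain NCHW placeholder the UFF parser expects.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

# Anchor generation is delegated to the GridAnchor_TRT plugin.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])

# Box decoding + non-maximum suppression is delegated to the NMS_TRT plugin;
# its node name "NMS" is what "-O NMS" and "output-blob-names=NMS" refer to.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[1, 0, 2],   # assumed; the loc/conf/priorbox input order differs between SSD graphs
    confSigmoid=1,
    isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)

# Whole TF namespaces are collapsed into the plugin nodes defined above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # convert_to_uff.py calls this hook (via -p config.py) before serializing the graph to UFF.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the original TF graph outputs so NMS becomes the single network output.
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)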

3. Customize the config file for ssd_mobilenet_v2:

diff --git a/config_infer_primary_ssd.txt b/config_infer_primary_ssd.txt
index bafdff7..9bed8de 100755
--- a/config_infer_primary_ssd.txt
+++ b/config_infer_primary_ssd.txt
@@ -62,9 +62,9 @@ gpu-id=0
 net-scale-factor=0.0078431372
 offsets=127.5;127.5;127.5
 model-color-format=0
-model-engine-file=sample_ssd_relu6.uff_b1_fp32.engine
+model-engine-file=ssd_mobilenet_v2.uff_b1_fp32.engine
 labelfile-path=ssd_coco_labels.txt
-uff-file=sample_ssd_relu6.uff
+uff-file=ssd_mobilenet_v2.uff
 uff-input-dims=3;300;300;0
 uff-input-blob-name=Input
 batch-size=1
@@ -74,7 +74,7 @@ num-detected-classes=91
 interval=0
 gie-unique-id=1
 is-classifier=0
-output-blob-names=MarkOutput_0
+output-blob-names=NMS
 parse-bbox-func-name=NvDsInferParseCustomSSD
 custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

Then execute it with:

$ deepstream-app -c deepstream_app_config_ssd.txt

Thanks.
config.py.txt (2 KB)

Hi AastaLLL,

I tested with the ssd_mobilenet_v2 model, and the FPS only reaches 21 frames per second. An NVIDIA contact told me this model can reach 39 frames per second on Jetson Nano.
What could the problem be?

I tested on Jetson Nano in max performance mode (sudo nvpmodel -m 0 && sudo jetson_clocks).
The performance was measured with TensorRT FP16 and a 300x300 input.

Hi,

Are you referring to the benchmark results shared on this page?
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks

The results above measure pure inference performance.
The DeepStream score, however, covers the whole pipeline, including camera I/O and display.

It’s recommended to tune the parameters (e.g., batch size) for your use case first; a sample of the relevant settings follows the link below.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_config.3.2.html
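
For illustration, the batching-related settings live in the application config (e.g. deepstream_app_config_ssd.txt). The section and key names below follow the DeepStream app config format, but the values are only placeholder assumptions for a single 300x300 source:

[streammux]
# Keep batch-size equal to the number of input sources feeding the muxer.
batch-size=1
width=1280
height=720
batched-push-timeout=40000

[primary-gie]
enable=1
# Should match the batch size the TensorRT engine is built with (batch-size in config_infer_primary_ssd.txt).
batch-size=1
# interval=N runs inference on every (N+1)-th frame; raising it trades detection granularity for throughput.
interval=0
gie-unique-id=1
config-file=config_infer_primary_ssd.txt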

Thanks.

Hi @AastaLLL, @Kevin_xiu,
If possible, could you share the maximum performance of the model used in this post, measured with trtexec in INT8?
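
For reference, pure inference throughput is typically measured with trtexec directly on the generated UFF model. A possible invocation is sketched below; it assumes a TensorRT release whose trtexec supports these flags and whose libnvinfer_plugin already registers the SSD plugins (GridAnchor_TRT, NMS_TRT, FlattenConcat_TRT). Note that --int8 without a calibration cache uses dummy scales, so it only indicates speed, not accuracy:

$ /usr/src/tensorrt/bin/trtexec --uff=ssd_mobilenet_v2.uff --uffInput=Input,3,300,300 --output=NMS --int8 --workspace=1024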