DeepStream bounding box parser function for sampleSSD in TensorRT example

I am having trouble creating a custom bounding box parsing function for the TensorRT sampleSSD model. How do I implement bounding box parsing for the layer below using nvdsinfer_custom_impl.h?

layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf_flatten"
  bottom: "mbox_priorbox"
  top: "detection_out"
  top: "keep_count"
  include {
    phase: TEST
  }
  detection_output_param {
    num_classes: 21
    share_location: true
    background_label_id: 0
    nms_param {
      nms_threshold: 0.45
      top_k: 400
    }
    save_output_param {
      label_map_file: "data/VOC0712/labelmap_voc.prototxt"
    }
    code_type: CENTER_SIZE
    keep_top_k: 200
    confidence_threshold: 0.01
  }
}

Did you refer to our sample sources/objectDetector_SSD/nvdsparsebbox_ssd.cpp?

This is how it reads the tensor output:

int keepCount = *((int *) outputLayersInfo[nms1LayerIndex].buffer);
float *detectionOut = (float *) outputLayersInfo[nmsLayerIndex].buffer;
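
Building on those two buffers, below is a minimal sketch of a complete parser. It assumes the standard Caffe DetectionOutput layout of 7 floats per detection ([image_id, label, confidence, xmin, ymin, xmax, ymax], with normalized box coordinates) and hard-coded output layer indices; the shipped sample looks the layers up by layerName instead, and struct field names such as perClassThreshold can differ between DeepStream releases, so treat this as an illustration rather than the exact sample code.

#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Sketch only: assumes outputLayersInfo[0] is "detection_out" and
 * outputLayersInfo[1] is "keep_count". */
extern "C" bool NvDsInferParseCustomSSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    int keepCount = *((int *) outputLayersInfo[1].buffer);
    float *detectionOut = (float *) outputLayersInfo[0].buffer;

    for (int i = 0; i < keepCount; ++i)
    {
        /* Each detection is 7 floats:
         * [image_id, label, confidence, xmin, ymin, xmax, ymax] */
        float *det = detectionOut + i * 7;
        int classId = (int) det[1];
        float confidence = det[2];

        /* Skip invalid entries and detections below the per-class
         * threshold configured for the nvinfer element. */
        if (classId < 0 ||
            classId >= (int) detectionParams.numClassesConfigured ||
            confidence < detectionParams.perClassThreshold[classId])
            continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = classId;
        obj.detectionConfidence = confidence;
        /* Boxes come out as normalized corners; scale them to the
         * network input resolution. */
        obj.left = det[3] * networkInfo.width;
        obj.top = det[4] * networkInfo.height;
        obj.width = (det[5] - det[3]) * networkInfo.width;
        obj.height = (det[6] - det[4]) * networkInfo.height;
        objectList.push_back(obj);
    }
    return true;
}

Compile this into a shared library and point parse-bbox-func-name and custom-lib-path in the nvinfer config file at it.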

You can also check the TensorRT samples sampleSSD (Caffe) and sampleUffSSD (UFF), which also have a detection output parser.

Hi, is there any walk-through on using TensorFlow MobileNet trained models within the DeepStream infrastructure?

Reference:

https://devtalk.nvidia.com/default/topic/1055954/deepstream-for-tesla/deepstream-bounding-box-parser-function-for-samplessd-in-tensorrt-example/

Hi Andrey,
DeepStream/TensorRT supports MobileNets. You can convert the model to a UFF model and deploy it with DeepStream.
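
For example (a sketch only; the file names here are placeholders, and the NMS output node plus the config.py preprocessor follow the TensorRT sampleUffSSD README, so adjust them for your MobileNet model):

convert-to-uff frozen_inference_graph.pb -O NMS -p config.py -o mobilenet_ssd.uff

The resulting .uff file can then be referenced via uff-file in the nvinfer config file, the same way the objectDetector_SSD sample does.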