Running nvcaffegie with gst-launch-1.0 (Solved)

Hi,

Has anyone tried to run nvcaffegie with the models and configuration provided by "DeepStream SDK on Jetson"? I obtained the pipeline graph and tried to build a gst-launch pipeline, but I am getting a segmentation fault.

I am running on a Jetson TX1 with JetPack 3.2.

This is the pipeline that I am currently using for testing:

gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' ! queue ! nvvidconv ! nvcaffegie model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" net-stride=16 batch-size=2 roi-top-offset="0,0:1,0:2,0:" roi-bottom-offset="0,0:1,0:2,0:" detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" ! fakesink silent=false -v

I am aware that other elements are required to actually draw boxes around objects, but I prefer to keep it simple for now. Is it possible to use nvcaffegie with gst-launch the way I am trying to?

Thank you for your time.

Hi Carlos,

In theory it is possible (only the parameters for nvcaffegie are a bit complicated). I will have a try on my side.

Thanks
wayne zhu

Hi Wayne,

I appreciate your effort and time on this, and any information you can provide.

Yes, the nvcaffegie parameters are a little complicated. I understand the class-id and the configuration for the nvcaffegie parameters, and in theory the configuration is the same as the one shown by:

  1. The GStreamer DOT file (a way to dump one is sketched after this list)
  2. Parameters from the config file
  3. Information printed when running nvgstiva-app
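
Since GStreamer itself can dump a DOT graph of a running pipeline, the first item is easy to reproduce. A minimal sketch (the videotestsrc pipeline is just a stand-in; any pipeline works, and the dump filenames vary per run):

export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dot
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
gst-launch-1.0 videotestsrc num-buffers=60 ! fakesink
# Render every dumped graph to a PNG next to its .dot file (needs Graphviz):
dot -Tpng -O /tmp/gst-dot/*.dot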

I have tried different versions of the pipeline that I sent, but with no success. I will continue checking on my side for possible differences and other configs.

Thank you for your support.
Regards.

Two more properties need to be specified for nvcaffegie:
output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov
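
For reference, a sketch of where these land in the test pipeline from the first post (all other nvcaffegie properties abbreviated here with "..."; paths and layer names as used in this thread):

gst-launch-1.0 ... ! nvcaffegie model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" ... output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov ! fakesink silent=false -v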

Thanks
wayne zhu

Hi,

I tried the pipeline with the mentioned parameters, and the gst-launch command with the nvcaffegie pipeline is running now. I will continue adding more elements.

thank you!

Hi Carlos001,

Could you show the gst-launch command line with the nvcaffegie pipeline running?

thank you!

Hi,

Sure! We have the pipelines documented at:
http://developer.ridgerun.com/wiki/index.php?title=GstInference_and_NVIDIA_Deepstream_1.5_nvcaffegie

You can also check the GstInference project that we are currently working on:
http://developer.ridgerun.com/wiki/index.php?title=GstInference

Regards,
Carlos A.

I got it, thank you very much.

Hi Carlos,

Thanks for your reply.

But there is a thing on which I am completely clueless.

GST_DEBUG=3 gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! nvcaffegie model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" net-stride=16 batch-size=2 roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov ! queue ! nvtracker \
! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! queue ! nvoverlaysink sync=false enable-last-sample=false

In the example above, how do I know to add these parameters (the ones in bold)?

Hi,

As Wayne Zhu indicated, the nvcaffegie parameters are a bit complicated, and a misconfiguration may cause the pipeline not to run. I suggest checking the output of:

gst-inspect-1.0 nvcaffegie

For example, for the properties you are asking about:

net-stride:
Convolutional Neural Network Stride
flags: readable, writable, changeable only in NULL or READY state
Unsigned Integer. Range: 0 - 4294967295 Default: 16

batch-size:
Number of units [frames(P.GIE) or objects(S.GIE)] to be inferred together in a batch
flags: readable, writable, changeable only in NULL or READY state
Unsigned Integer. Range: 1 - 4294967295 Default: 1

roi-top-offset:
Offset of the ROI from the top of the frame. Only objects within the ROI will be outputted.
Format: class-id,top-offset:class-id,top-offset:
e.g. 0,128:1,128
flags: readable, writable
String. Default: "0,0:"

roi-bottom-offset:
Offset of the ROI from the bottom of the frame. Only objects within the ROI will be outputted.
Format: class-id,bottom-offset:class-id,bottom-offset:…
e.g. 0,128:1,128
flags: readable, writable
String. Default: "0,0:"

and so on, as given by the gst-inspect output. As you may see, some of those configurations require knowing the CNN classes; for roi-bottom-offset, for example, you need to know the class-id and then configure the bottom offset for it. For the given pipeline this means:

roi-bottom-offset="0,0:1,0:2,0:" means: class-id=0, region of interest bottom offset = 0; class-id=1, region of interest bottom offset = 0; and so on.
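
To double-check a single property's format without scrolling through the whole output, something like this works (standard gst-inspect-1.0 piped through grep; the -A context count is arbitrary):

gst-inspect-1.0 nvcaffegie | grep -A 4 roi-bottom-offset

And a hedged variant of the setting itself, if for example you wanted to ignore objects in the bottom 128 pixels of the frame for classes 0 and 1 while leaving class 2 untouched:

roi-bottom-offset="0,128:1,128:2,0:"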

Just in case you have not seen the DeepStream webinars, there is a very interesting training available: Streamline Deep Learning for Video Analytics with DeepStream SDK 2.0

You can also check:
https://developer.ridgerun.com/wiki/index.php?title=GstInference
https://gstconf.ubicast.tv/videos/gstinference-a-gstreamer-deep-learning-framework/

Regards.