Hi,
Has anyone tried to run nvcaffegie with the models and configuration provided by the "DeepStream SDK on Jetson"? I obtained the pipeline graph and tried to build an equivalent gst-launch pipeline, but I am getting a segmentation fault.
I am running on a Jetson TX1 with JetPack 3.2.
This is the pipeline that I am currently using for testing:
gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' ! \
  'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' ! \
  queue ! nvvidconv ! \
  nvcaffegie model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
    protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
    model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
    labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" \
    net-stride=16 batch-size=2 \
    roi-top-offset="0,0:1,0:2,0:" roi-bottom-offset="0,0:1,0:2,0:" \
    detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
    interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 \
    class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" ! \
  fakesink silent=false -v

(Note: the quotes above are plain ASCII quotes; the original post had curly "smart" quotes from copy-paste, which the shell does not treat as quoting characters and which can themselves cause the command to misparse.)
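To narrow down where the segfault comes from, one thing I tried conceptually is running the same capture/convert front end with the inference element removed. This is a debugging sketch, not part of the original pipeline: if this reduced pipeline also crashes, the problem is in the camera path rather than in nvcaffegie or its model files.

```shell
# Sanity-check pipeline (assumption: same camera settings as the full
# command). nvcaffegie and its model properties are removed so that a
# crash here would implicate the capture/convert stages instead.
gst-launch-1.0 -v nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' ! \
  'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' ! \
  queue ! nvvidconv ! fakesink silent=false
```

If the reduced pipeline runs cleanly, the next step would be re-adding nvcaffegie with its properties one group at a time to find the one that triggers the fault.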
I am aware that other elements are required to actually draw boxes around detected objects, but I would prefer to keep the pipeline simple for now. Is it possible to use nvcaffegie the way I am trying to with gst-launch?
Thank you for your time.