Help with Deepstream Gst Pipeline

Good day everyone!

While running some tests with GStreamer pipelines, I have been able to stream an RTSP source (cropped) perfectly into nvoverlaysink:

gst-launch-1.0 rtspsrc location=rtsp://URL?tcp latency=0 protocols=tcp ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv left=$A right=$B top=$C bottom=$D ! video/x-raw, format=I420, width=$((B-A)), height=$((D-C)) ! videoconvert ! nvoverlaysink sync=false
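
Here A, B, C and D are shell variables holding the crop rectangle; for illustration only (made-up values, not from my actual stream):

# Hypothetical crop coordinates: a 1280x720 region of the source frame
A=100   # left
B=1380  # right   -> width  = B - A = 1280
C=50    # top
D=770   # bottom  -> height = D - C = 720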

I have also been able to successfully feed an mp4 file (captured previously from an RTSP stream) into the DeepStream nvinfer and nvdsosd elements:

gst-launch-1.0 filesrc location= streams/file.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=$((B-A)) height=$((D-C)) ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=1 columns=1 width=$((B-A)) height=$((D-C)) ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

But when I try to mix both pipelines, like this:

gst-launch-1.0 rtspsrc location=rtsp://URL?tcp latency=0 protocols=tcp ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv left=$A right=$B top=$C bottom=$D ! video/x-raw, format=I420, width=$((B-A)), height=$((D-C)) ! videoconvert ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=$((B-A)) height=$((D-C)) ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so ! nvinfer config-file-path= configs/deepstream-app/config_infer_secondary_carcolor.txt batch-size=16 unique-id=2 infer-on-gie-id=1 infer-on-class-ids=0 ! nvmultistreamtiler rows=1 columns=1 width=$((B-A)) height=$((D-C)) ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

several issues show up. First:

WARNING: erroneous pipeline: could not set property "width" in element "nvmultistreamtiler0" to "$((B-A)),"

I then tried several values to no avail; removing “nvmultistreamtiler” results in:

WARNING: erroneous pipeline: could not link videoconvert0 to nvv4l2decoder0
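
My best guess so far at a combined pipeline is something along these lines, but I am not sure it is the right approach (untested sketch: the second nvv4l2decoder is dropped, since the cropped output is already raw video, and the cropped frames are passed to nvstreammux as NVMM buffers; tracker, secondary nvinfer and tiler are omitted for brevity):

gst-launch-1.0 rtspsrc location=rtsp://URL?tcp latency=0 protocols=tcp ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv left=$A right=$B top=$C bottom=$D ! "video/x-raw(memory:NVMM), format=NV12, width=$((B-A)), height=$((D-C))" ! m.sink_0 nvstreammux name=m batch-size=1 width=$((B-A)) height=$((D-C)) ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink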

Could you please provide a hint on how to link both pipelines successfully? Thanks in advance.

Hi,
We suggest you run deepstream-app. It supports RTSP sources; you can simply set it in the config file.
NVIDIA DeepStream SDK Developer Guide — DeepStream 6.1.1 Release documentation
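
For example, an RTSP input can be set in the source group of the config file roughly like this (the uri is a placeholder for your stream):

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://URL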

Thanks a lot, I’ve succeeded in testing deepstream-app.

Is it possible to crop (not resize) the video to cover just a certain area?

Where in the config file is such a setting?

Best Regards

Hi,
There is a sample of saving detected objects in item 2 of the FAQ. Please check if it helps your case.

Thanks. I’ve tried sinking into a file by modifying the config file like this:

[sink0]
enable=0

[sink1]
enable=1
#Type - 3=File
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

I also tried mkv.

It fails with this error:

** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_encoder1: Cannot identify device '/dev/nvhost-msenc'.
Debug info: /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(642): gst_v4l2_open (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/nvv4l2h264enc:sink_sub_bin_encoder1:
system error: No such file or directory
ERROR from sink_sub_bin_encoder1: Could not initialize supporting library.
Debug info: gstvideoencoder.c(1627): gst_video_encoder_change_state (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/nvv4l2h264enc:sink_sub_bin_encoder1:
Failed to open encoder
App run failed

Could it be related to a missing plugin? I’ve already installed the required plugins in order to recompile the deepstream-app sources.

Best Regards

Hi,
Do you run in Docker? You should have the device node if you install the whole package/system through SDK Manager.

ERROR from sink_sub_bin_encoder1: Cannot identify device '/dev/nvhost-msenc'.
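
A quick way to check is to list the device node on the host and again inside the container, for example:

# Run on the Jetson host and then inside the running container;
# the encoder node has to be visible in both for nvv4l2h264enc to work.
ls -l /dev/nvhost-msenc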

Hi,

Oh, well, I will review the whole process of preparing the Docker container and its prerequisites in case I missed something. Thanks a lot for your help.

https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t

Best Regards

Hi,

Just one final note:

Indeed, /dev/nvhost-msenc was missing in my Docker container.

On the host computer (Jetson Nano) it does exist. I was using the docker run command as stated in this document:

https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t

The solution is to add --device /dev/nvhost-msenc to that command:

sudo docker run --device /dev/nvhost-msenc -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:4.0.1-19.09-samples

Good to hear that. Thanks for sharing.

Were you able to do this? Can you kindly share?

I didn’t actually crop the video in the end, since I moved away from this approach, but I hope this can point you in the right direction.

It appears that the config file should be supplemented with new settings, such as these:

[ds-example]
enable=1
processing-width=1280
processing-height=720
full-frame=1
unique-id=15
x-coordinate-top=642
y-coordinate-top=10
x-coordinate-bottom=618
y-coordinate-bottom=720

and that the plugin source gstdsexample.cpp needs to be customized as well to declare and handle those new settings.
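
After editing gstdsexample.cpp the plugin has to be rebuilt and reinstalled; on a DeepStream 4.0 Jetson install that should look roughly like this (paths and CUDA version are assumptions, adjust to your setup):

# Rebuild the customized dsexample plugin (assumes DeepStream 4.0 on Jetson with CUDA 10.0)
cd /opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-dsexample
make CUDA_VER=10.0
sudo make install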

https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_custom_plugin.html%23wwpID0E4HA

https://devtalk.nvidia.com/default/topic/1064850/deepstream-sdk/how-to-add-new-parameters-to-deepstream_app_config_yolov2_tiny-txt-file-/