GStreamer command lines for DeepStream 4.0

1. Transcoding

gst-launch-1.0 filesrc location=./sample_1080p_h265.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4

gst-launch-1.0 filesrc location=~/xqjq-hdranchor-1-002630-120s.mkv ! matroskademux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4

nvv4l2h265enc expects input in I420 format.
This will still generate corrupted output, because the low-level cuvid code (libcuvidv4l2.so) needs support for handling 10-bit output buffers.
After that, nvvideoconvert also needs support for 10-bit YUV input, so that it can convert from 10-bit to 8-bit and pass the buffer to the downstream components, which for now only understand 8-bit.
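To double-check which raw input formats the encoder advertises on your platform, you can inspect its sink pad template (the exact caps listing depends on the driver/SDK version):
gst-inspect-1.0 nvv4l2h265enc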

Refer to: http://nvbugs/200558653/9

2. dsexample usage before nvinfer in a DS pipeline

gst-launch-1.0 filesrc location= streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 ! dsexample processing-width=640 processing-height=480 full-frame=1 ! nvinfer config-file-path= configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nveglglessink

Refer to https://devtalk.nvidia.com/default/topic/1064542/deepstream-sdk/undistort-camera-input-non-360-deg-
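For the complementary case, dsexample can also be placed after nvinfer so that it operates on the detected objects rather than the full frame, by setting full-frame=0 (a sketch based on the pipeline above, untested here):
gst-launch-1.0 filesrc location=streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 ! nvinfer config-file-path=configs/deepstream-app/config_infer_primary.txt ! dsexample processing-width=640 processing-height=480 full-frame=0 ! nvvideoconvert ! nvdsosd ! nveglglessink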

3. nvvideoconvert does not support UYVY input, so it needs videoconvert in front of it.
gst-launch-1.0 \
v4l2src device=/dev/video0 do-timestamp=true ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1 ! \
videoconvert ! video/x-raw,format=NV12,width=1920,height=1080,framerate=30/1 ! \
nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" ! \
queue ! nvoverlaysink sync=false
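Before wiring up the pipeline, it is worth confirming what the camera actually outputs; v4l2-ctl from the v4l-utils package lists the supported formats, resolutions, and frame rates:
v4l2-ctl -d /dev/video0 --list-formats-ext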

nvvidconv uses the old nvbuf_utils rather than the unified NvBufSurface, so it can't be used with other DeepStream modules. It does support UYVY input, though.
gst-launch-1.0 \
v4l2src device=/dev/video0 ! "video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1" ! \
nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" ! \
nvoverlaysink sync=false

4. The following pipeline tested OK on ToT Xavier:
gst-launch-1.0 rtspsrc location=rtsp://10.24.217.30:8554/ ! rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h265enc ! h265parse ! filesink location=file.h265
An RTSP server streaming the H.265 file can be created on the host with:
cvlc -vvv ~/sample_1080p_h265.mp4 --loop --sout-keep --sout '#gather:rtp{sdp=rtsp://10.24.217.30:8554/}'
Refer to Topic 1065157
The above pipeline is similar to what the user is trying to do, except that the decoder's output should not be "video/x-raw(memory:NVMM), format=RGBA"; the decoder always outputs NV12.
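If a playable container is preferred over the raw elementary stream, the tail of the pipeline can mux into MP4 as in the transcoding examples above (a sketch, untested; -e makes Ctrl-C send EOS so qtmux can finalize the file):
gst-launch-1.0 -e rtspsrc location=rtsp://10.24.217.30:8554/ ! rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=file.mp4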

5. Debug nvstreammux
gst-launch-1.0 videotestsrc ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12, width=3088, height=2064' ! m.sink_0 nvstreammux name=m width=3080 height=2064 batch-size=1 nvbuf-memory-type=0 ! nvegltransform ! nveglglessink -e -v
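To get more detail out of the muxer while debugging, raise the GStreamer debug level for it (this assumes the plugin registers a debug category under its own name; *:3 is kept as a fallback to show warnings from all elements):
GST_DEBUG=nvstreammux:5,*:3 gst-launch-1.0 videotestsrc ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12, width=3088, height=2064' ! m.sink_0 nvstreammux name=m width=3080 height=2064 batch-size=1 nvbuf-memory-type=0 ! nvegltransform ! nveglglessink -e -v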

6. JPEG camera + RTP streaming
gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=1920,height=1080,framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw,format=I420' ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000
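On the receiving side, a stock pipeline along these lines should play the stream (a sketch; the caps assume rtph264pay's default payload type 96):
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp, media=video, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink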

Here are some pipelines I've played with (tested on V100). First, set DS_ROOT:

export DS_ROOT=/root/deepstream_sdk_v4.0.2_x86_64/

Optical Flow to file
gst-launch-1.0 filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! .sink_0 nvstreammux batch-size=1 width=1280 height=720 ! nvof ! queue ! nvofvisual ! queue ! nvvideoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./nvof.mp4
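To watch the flow visualization live instead of writing a file, the encode branch can be swapped for a video sink (a sketch, untested here):
gst-launch-1.0 filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! .sink_0 nvstreammux batch-size=1 width=1280 height=720 ! nvof ! queue ! nvofvisual ! queue ! nveglglessink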

nvinfer (primary + secondary), tracker
gst-launch-1.0 filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! decodebin ! .sink_1 nvstreammux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=$DS_ROOT/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=1280 tracker-height=720 ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvinfer config-file-path=$DS_ROOT/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! autovideosink
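To try the NvDCF tracker instead of KLT, the low-level library can be swapped out (a sketch, untested here; NvDCF also takes its own config via ll-config-file, e.g. the tracker_config.yml shipped with the SDK samples):
gst-launch-1.0 filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! decodebin ! .sink_1 nvstreammux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=$DS_ROOT/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker tracker-width=1280 tracker-height=720 ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so ll-config-file=tracker_config.yml enable-batch-process=1 ! nvinfer config-file-path=$DS_ROOT/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! autovideosink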

nvinfer tiled output
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=$DS_ROOT/samples/configs/deepstream-app/config_infer_primary.txt ! nvmultistreamtiler width=1280 height=720 name=nvm \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_1 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_2 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_3 \
nvm.src ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out5.mp4
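With four inputs, batch-size=4 on the muxer and a 2x2 tiler layout (rows=2 columns=2) would arguably be the more typical configuration; only the head of the pipeline changes (a sketch, untested here; the output filename is just a variant):
gst-launch-1.0 nvstreammux name=mux batch-size=4 width=1280 height=720 ! nvinfer config-file-path=$DS_ROOT/samples/configs/deepstream-app/config_infer_primary.txt ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 name=nvm \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_1 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_2 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_3 \
nvm.src ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out5_2x2.mp4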

nvinfer multiple streams to multiple files
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=$DS_ROOT/samples/configs/deepstream-app/config_infer_primary.txt ! nvstreamdemux name=demux \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 \
filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! decodebin ! queue ! mux.sink_1 \
demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out1.mp4 \
demux.src_1 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out2.mp4

TLT model
First go through the setup steps described here: https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps (DeepStream 4.x samples to deploy TLT training models)

gst-launch-1.0 filesrc location=$DS_ROOT/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! .sink_0 nvstreammux width=1280 height=720 batch-size=1 ! nvinfer config-file-path=pgie_ssd_uff_config.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nveglglessink

Thank you, Chris Hall.

Please feel free to share your commands if you have any.