1. Transcoding
gst-launch-1.0 filesrc location=./sample_1080p_h265.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4
gst-launch-1.0 filesrc location=~/xqjq-hdranchor-1-002630-120s.mkv ! matroskademux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4
nvv4l2h265enc expects input in I420 format.
This will still generate corrupted output, because support for handling 10-bit output buffers needs to be added to the low-level cuvid code (libcuvidv4l2.so).
After that, nvvideoconvert also needs support for 10-bit YUV input so that it can convert from 10-bit to 8-bit and pass the buffer to the downstream components, which for now only understand 8-bit.
Refer to: http://nvbugs/200558653/9
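The transcode pipeline above can be wrapped in a small reusable helper; this is a minimal sketch (the function name is hypothetical), assuming an MP4/H.265 input and that gst-launch-1.0 with the NVIDIA plugins is available when the pipeline is actually run. Only the pipeline-string construction is shown here.

```shell
#!/bin/sh
# Hypothetical helper: build the 8-bit H.265 transcode pipeline string for a
# given input and output file. This only assembles the string; running it
# requires a Jetson/dGPU with the NVIDIA GStreamer plugins installed.
build_transcode_pipeline() {
    in_file="$1"
    out_file="$2"
    printf '%s' "filesrc location=${in_file} ! qtdemux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! video/x-raw(memory:NVMM),format=I420 ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=${out_file}"
}

# Usage on the target device:
#   eval "gst-launch-1.0 $(build_transcode_pipeline sample_1080p_h265.mp4 test.mp4)"
```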
2. dsexample usage before nvinfer in DS pipeline
gst-launch-1.0 filesrc location=streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 ! dsexample processing-width=640 processing-height=480 full-frame=1 ! nvinfer config-file-path=configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nveglglessink
Refer to https://devtalk.nvidia.com/default/topic/1064542/deepstream-sdk/undistort-camera-input-non-360-deg-
3. nvvideoconvert does not support UYVY input, so videoconvert is needed in front of it.
gst-launch-1.0 \
v4l2src device=/dev/video0 do-timestamp=true ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1 ! \
videoconvert ! video/x-raw,format=NV12,width=1920,height=1080,framerate=30/1 ! \
nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" ! \
queue ! nvoverlaysink sync=false
nvvidconv uses the old nvbuf_utils API rather than the unified NvBufSurface API, so it cannot be used with other DeepStream modules. It does, however, support UYVY input.
gst-launch-1.0 \
v4l2src device=/dev/video0 ! "video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1" ! \
nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1" ! \
nvoverlaysink sync=false
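The choice between the two UYVY capture paths can be sketched as a small helper that picks the front end based on whether the downstream is DeepStream (which needs nvvideoconvert/NvBufSurface) or not (where nvvidconv suffices). The function name, device path, and resolution are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch: build the UYVY capture front end for a pipeline.
# "yes" -> videoconvert + nvvideoconvert path (DeepStream-compatible buffers)
# "no"  -> nvvidconv path (old nvbuf_utils, not usable with DeepStream modules)
build_uyvy_front_end() {
    use_deepstream="$1"
    caps="video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1"
    if [ "$use_deepstream" = "yes" ]; then
        printf '%s' "v4l2src device=/dev/video0 ! ${caps} ! videoconvert ! video/x-raw,format=NV12 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12"
    else
        printf '%s' "v4l2src device=/dev/video0 ! ${caps} ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12"
    fi
}

# Usage on the target device, e.g.:
#   eval "gst-launch-1.0 $(build_uyvy_front_end no) ! nvoverlaysink sync=false"
```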
4. The following pipeline was tested OK on ToT Xavier
gst-launch-1.0 rtspsrc location=rtsp://10.24.217.30:8554/ ! rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h265enc ! h265parse ! filesink location=file.h265
Create an RTSP server on the host to stream the H.265 file:
cvlc -vvv ~/sample_1080p_h265.mp4 --loop --sout-keep --sout '#gather:rtp{sdp=rtsp://10.24.217.30:8554/}'
Refer to Topic 1065157
The above pipeline is similar to what the user is trying to do, except that the decoder's output should not be "video/x-raw(memory:NVMM), format=RGBA"; the decoder always outputs NV12.
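The RTSP inference pipeline from item 4 can be parameterized so that the server address and output file are not hard-coded; a minimal sketch, where the function name is hypothetical and the config file path is assumed to be the one used above. Only the string construction is shown; running the result requires the DeepStream plugins.

```shell
#!/bin/sh
# Hypothetical helper: build the RTSP -> nvinfer -> H.265 file pipeline string
# for a given RTSP URL and output file. Assumes config_infer_primary.txt is in
# the working directory when the pipeline is actually launched.
build_rtsp_infer_pipeline() {
    url="$1"
    out_file="$2"
    printf '%s' "rtspsrc location=${url} ! rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h265enc ! h265parse ! filesink location=${out_file}"
}

# Usage on the target device:
#   eval "gst-launch-1.0 $(build_rtsp_infer_pipeline rtsp://10.24.217.30:8554/ file.h265)"
```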
5. Debug nvstreammux
gst-launch-1.0 videotestsrc ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12, width=3088, height=2064' ! m.sink_0 nvstreammux name=m width=3080 height=2064 batch-size=1 nvbuf-memory-type=0 ! nvegltransform ! nveglglessink -e -v
6. JPEG camera + RTP streaming
gst-launch-1.0 v4l2src device=/dev/video0 ! "image/jpeg,width=1920,height=1080,framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw,format=I420' ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000