Help converting gst-launch command to Python

I’m trying to convert the following shell script to Python:
(Taken from RidgeRun's GStreamer pipelines page)

CLIENT_IP=10.100.0.70
gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

I’m having trouble with these two lines:

'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
'video/x-h264, stream-format=(string)byte-stream'

I thought they were caps, so I tried Gst.caps_from_string and set_property('caps', ...), but nvvidconv, omxh264enc, and h264parse all tell me they don't have a caps property.

Here’s what I have so far. Any advice?

"""
CLIENT_IP=10.100.0.70
gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false
"""

CLIENT_IP="10.100.0.70"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Create elements
nvcamerasrc = Gst.ElementFactory.make('nvcamerasrc')
nvvidconv = Gst.ElementFactory.make('nvvidconv')
encoder = Gst.ElementFactory.make('omxh264enc')
parser = Gst.ElementFactory.make('h264parse')
payload = Gst.ElementFactory.make('rtph264pay')
udpsink = Gst.ElementFactory.make('udpsink')

# Configure elements
nvcamerasrc.set_property('fpsRange', "30 30")
nvcamerasrc.set_property('intent', 3)

caps = Gst.caps_from_string('video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1')
nvvidconv.set_property('caps', caps)  # fails: nvvidconv has no 'caps' property
nvvidconv.set_property('flip-method', 6)

caps = Gst.caps_from_string('video/x-h264, stream-format=(string)byte-stream')
encoder.set_property('caps', caps)  # fails: omxh264enc has no 'caps' property
encoder.set_property('control-rate', 2)
encoder.set_property('bitrate', 4000000)

payload.set_property('mtu', 1400)

udpsink.set_property('host', CLIENT_IP)
udpsink.set_property('port', 5000)
udpsink.set_property('sync', False)
udpsink.set_property('async', False)

Bump on this - did you ever find a solution?!

Trying something similar now :)

Oh yeah, this is from back when I had no idea how GStreamer worked. Let me move the script to GitHub and I'll post the link.

Here are the scripts for a vision server and client to set up streaming via UDP. The server listens for a TCP connection and uses that IP address to forward the stream. I also implemented functionality to send movement commands (WASD) to the robot via that TCP channel.

Still a bit rough, haven't done anything with it in a while.
https://github.com/DaxBot/DaxVision

The whole process was a lot simpler than I thought; you just pass the whole pipeline string into:

Gst.parse_launch(pipeline_string)

See DaxVisionServer.build_pipeline()
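
Since the repo later went offline (see downthread), here is roughly what that boils down to: a minimal reconstruction, not the actual repo code. The pipeline string is the one from the top of the thread, and the TCP handling is stripped to a single blocking accept().

import socket

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Pipeline from the top of the thread; {host} gets the client's address.
# Note: the single quotes around the caps are shell quoting only;
# Gst.parse_launch() takes the caps bare.
PIPELINE_TEMPLATE = (
    'nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 ! '
    'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, '
    'format=(string)I420, framerate=(fraction)30/1 ! '
    'omxh264enc control-rate=2 bitrate=4000000 ! '
    'video/x-h264, stream-format=(string)byte-stream ! '
    'h264parse ! rtph264pay mtu=1400 ! '
    'udpsink host={host} port=5000 sync=false async=false'
)

def build_pipeline(target_ip):
    pipeline = Gst.parse_launch(PIPELINE_TEMPLATE.format(host=target_ip))
    pipeline.set_state(Gst.State.PLAYING)
    return pipeline

# Wait for a TCP connection and stream video back at whoever connected
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 5000))  # TCP control socket; the video goes out over UDP
server.listen(1)
print('Socket bound successfully')

conn, (client_ip, _) = server.accept()  # blocks until a client connects
pipeline = build_pipeline(client_ip)

loop = GLib.MainLoop()  # keep the process alive while streaming
try:
    loop.run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)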

Thanks man – much appreciated here. It's not massively clear from the GStreamer docs that you can just use Gst.parse_launch…!

I will try this tomorrow. I'm trying to pass a stream from the Jetson into a pretrained classifier via OpenCV - it's been a bit of a headf*** compared to doing it on a local machine!

OK, gave it a whirl but not having much luck…

I tried saving DaxVisionServer to my repo.

The intention was to fire up Python on the Jetson and go as follows…

from DaxVisionServer import DaxVisionServer
dvs = DaxVisionServer(port)
dvs.build_pipeline(target_ip)

When I run dvs = DaxVisionServer(5000) it just seems to hang at "Socket bound successfully"

and I can't continue with the approach.

I did also try just a vanilla Gst.parse_launch via entering """nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false"""

however it's having a bit of a fit if I do this. I did try looking at the GStreamer docs / community but (hopefully I'm not offending anyone) they are not very user friendly!

Any help would be greatly appreciated!

edit:

Apologies, it does seem to be partially working, but my receiver side isn't getting any video. I get this output but can't see any video:

$ gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'autovideosink0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"(GstGLDisplayCocoa)\ gldisplaycocoa0";
Setting pipeline to PLAYING ...
New clock: GstSystemClock

Once I get to work I'll take a look. DaxVisionServer and DaxVision (the client) are meant to be run standalone. They're examples, not much more.

On the Jetson

python DaxVisionServer.py

On your computer

python DaxVision.py

Then you put the IP address of your Jetson into the client and connect. You should then see the pipeline get built by the server. You won't see anything besides "Socket bound successfully" until there is a client.

OK, tried both sides of things - just hitting "connection failed" in big letters on the black Dax Vision window :/

Thanks in advance for any steer here.

Yeah, I think the 1-second socket timeout is a bit too strict. Did you see the connection attempt on the server?

All I can see is "Socket bound successfully" - I did try adding a bit more patience and changing the socket timeout to 10, but nothing seems to be happening!

That sounds like a network error then. You should get the pipelines working with gst-launch-1.0 first before you try scripting it.

Make these pipelines work first

For example this on the jetson:

CLIENT_IP=<IP_ADDRESS>
gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
omxh264enc low-latency=1 control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

and this on your computer:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! xvimagesink sync=false async=false -e

Pipelines definitely working already! Just double-checked them…

CLIENT_IP=192.168.0.2

Sender:

gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

Receiver:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e

All OK with some slight lag - ideally I'm really looking for a way to get this passed through OpenCV in Python, so your solution seemed like a step on the way!

I'd put the Jetson pipeline into a Gst.parse_launch script and try to retrieve it on the other end with a normal gst-launch command. You may have to run Python as sudo in order to use the socket.
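
For the OpenCV end goal, one common pattern (a sketch assuming your OpenCV build was compiled with GStreamer support) is to hand the receive pipeline, ending in appsink, straight to cv2.VideoCapture:

import cv2

# Receive pipeline from this thread, but ending in appsink instead of a
# display sink; videoconvert gives OpenCV the BGR frames it expects.
gst_pipeline = (
    'udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! '
    'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
    'video/x-raw,format=BGR ! appsink'
)
cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()  # frame is a regular BGR numpy array
    if not ok:
        break
    cv2.imshow('stream', frame)  # or feed it to your classifier
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()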

The aforementioned GitHub code is no longer available, so just for the record, the following example shows how to add such caps into the pipeline:

# capsfilter is the element that actually carries a 'caps' property;
# it sits between two elements and constrains the format negotiated there
source_caps = Gst.Caps.from_string('video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1')
source_filter = Gst.ElementFactory.make('capsfilter', 'source_filter')
source_filter.set_property('caps', source_caps)
pipeline.add(source_filter)
source.link(source_filter)
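
The same pattern covers the second caps string from the original question; presumably a second capsfilter goes between the encoder and the parser:

encoder_caps = Gst.Caps.from_string('video/x-h264, stream-format=(string)byte-stream')
encoder_filter = Gst.ElementFactory.make('capsfilter', 'encoder_filter')
encoder_filter.set_property('caps', encoder_caps)
pipeline.add(encoder_filter)
encoder.link(encoder_filter)  # 'encoder' and 'parser' as in the first post's script
encoder_filter.link(parser)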

Hello!
I'm having the same problem, but I want to add those caps to a videoconvert element using Python.

What I'm doing is: I'm sourcing the video from the camera in NV12 format at 1280x720.
I want to convert it to RGB 640x480 before feeding it into appsink. Do I need to add a videoconvert and change its caps? I tried that, but it says the videoconvert element has no 'caps' property.

Add a capsfilter element like in Help converting gst-launch command to Python - #14 by zoolyka
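
For example, a minimal sketch (videotestsrc stands in for your camera here, and videoscale is added because the resolution changes as well as the format):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new('convert-demo')

source = Gst.ElementFactory.make('videotestsrc', 'source')  # stand-in for the camera
source_filter = Gst.ElementFactory.make('capsfilter', 'source_filter')
source_filter.set_property('caps', Gst.Caps.from_string(
    'video/x-raw, format=(string)NV12, width=(int)1280, height=(int)720'))

convert = Gst.ElementFactory.make('videoconvert', 'convert')  # pixel format
scale = Gst.ElementFactory.make('videoscale', 'scale')        # resolution
sink_filter = Gst.ElementFactory.make('capsfilter', 'sink_filter')
sink_filter.set_property('caps', Gst.Caps.from_string(
    'video/x-raw, format=(string)RGB, width=(int)640, height=(int)480'))
appsink = Gst.ElementFactory.make('appsink', 'sink')

for element in (source, source_filter, convert, scale, sink_filter, appsink):
    pipeline.add(element)

source.link(source_filter)
source_filter.link(convert)
convert.link(scale)
scale.link(sink_filter)
sink_filter.link(appsink)

pipeline.set_state(Gst.State.PLAYING)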