convert gstreamer pipeline to opencv in python

Hi all,

I have created a network stream with the following GStreamer commands:

sender (PC):

gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=20/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=X.X.X.X port=5000

receiver (TX1):

gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink

This works just fine. I now want to consume the stream on the receiver side in a Python script, where I want to do some video processing with OpenCV.

Does anyone know how to convert the described pipeline so that it can be used with OpenCV?

This is my script so far:

import numpy as np
import cv2
import math


# Capture Video and set resolution
cap = cv2.VideoCapture("udpsrc port=5000 ! application/x-rtp,media=video,payload=96,clock-rate=90000,encoding-name=H264, ! rtph264depay ! decodebin ! videoconvert ! appsink ")




while(True):

    # Capture frame-by-frame
    ret, frame = cap.read()
    
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    cv2.imshow('frame', frame) 
    #cv2.imshow('frame2', frame2)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

I get the following error:

OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/nvidia/build-opencv/opencv/modules/highgui/src/window.cpp, line 331
Traceback (most recent call last):
  File "launchstream_ip.py", line 13, in <module>
    cv2.imshow('frame', frame)
cv2.error: /home/nvidia/build-opencv/opencv/modules/highgui/src/window.cpp:331: error: (-215) size.width>0 && size.height>0 in function imshow

Thanks!

It seems the script failed to read a frame. You should first check that the capture opened successfully:

print(cap.isOpened())

Note that your OpenCV needs to have been built with GStreamer support, which was not the case for the OpenCV versions provided in JetPack (I haven't checked the latest JetPack versions, however).
You may add at the beginning of your code:

print(cv2.getBuildInformation())
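Since the relevant part of the build information is just text, the check can also be done programmatically. A minimal sketch (the helper name `has_gstreamer_support` is my own, not an OpenCV API):

```python
def has_gstreamer_support(build_info: str) -> bool:
    """Return True if OpenCV's build summary reports GStreamer as enabled.

    Handles both layouts: newer builds print "GStreamer: YES (...)" on one
    line; older builds (as shown above) list base/video/app on the lines
    that follow the "GStreamer:" header.
    """
    lines = build_info.splitlines()
    for i, line in enumerate(lines):
        if "GStreamer" in line:
            window = " ".join(lines[i:i + 4])  # header plus a few sub-entries
            return "YES" in window
    return False

# Usage (on the target machine):
#   import cv2
#   print(has_gstreamer_support(cv2.getBuildInformation()))
```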

and check output for such line:

Video I/O:
...
    GStreamer:
      base:                      YES (ver 1.8.3)
      video:                     YES (ver 1.8.3)
      app:                       YES (ver 1.8.3)
...

If you have GStreamer support and the capture still fails to open, the last thing to try is adding caps to make sure videoconvert outputs in BGR format:

cap = cv2.VideoCapture("udpsrc port=5000 ! application/x-rtp,media=video,payload=96,clock-rate=90000,encoding-name=H264 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw, format=BGR ! appsink", cv2.CAP_GSTREAMER)
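Putting it together, a minimal receiver sketch could look like the following. This assumes OpenCV built with GStreamer support; the `build_rx_pipeline` helper and the explicit `cv2.CAP_GSTREAMER` flag are my additions for clarity, and checking `ret` avoids the empty-frame assertion in `imshow`:

```python
def build_rx_pipeline(port: int = 5000) -> str:
    """Assemble the GStreamer receive pipeline string for cv2.VideoCapture."""
    return (
        f"udpsrc port={port} ! "
        "application/x-rtp,media=video,payload=96,clock-rate=90000,encoding-name=H264 ! "
        "rtph264depay ! decodebin ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

def main() -> None:
    import cv2  # imported here so the pipeline helper stays dependency-free

    cap = cv2.VideoCapture(build_rx_pipeline(), cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("VideoCapture failed to open -- check GStreamer support")

    while True:
        ret, frame = cap.read()
        if not ret:  # skip empty frames instead of crashing in imshow
            continue
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```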

I want to do the sending part in Python but I can’t get it right.

This is my pipeline, which works perfectly from the terminal:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! x264enc ! h264parse ! queue ! flvmux ! rtmpsink location="rtmp://localhost:1935/show/cam1"

and the same pipeline, which I want to use with OpenCV, is

"appsrc ! videoconvert ! x264enc ! h264parse ! queue ! flvmux ! rtmpsink location='rtmp://localhost:1935/show/cam1'"

But it is not working. Your help would be appreciated. Thanks

You may tell us what exactly is not working… Please add some context, such as a code snippet and the error message, for better advice.
You may have a look at this example.
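For the sending side, the usual pattern is `cv2.VideoWriter` opened with an appsrc pipeline and the `cv2.CAP_GSTREAMER` API preference. A minimal sketch, assuming OpenCV 3.x or later with GStreamer support (`build_tx_pipeline` is a hypothetical helper; fps and frame size are read from the camera):

```python
def build_tx_pipeline(location: str = "rtmp://localhost:1935/show/cam1") -> str:
    """Assemble the appsrc-based send pipeline string for cv2.VideoWriter."""
    return (
        "appsrc ! videoconvert ! x264enc ! h264parse ! queue ! "
        f"flvmux ! rtmpsink location={location}"
    )

def main() -> None:
    import cv2  # imported here so the pipeline helper stays dependency-free

    cap = cv2.VideoCapture(0)  # /dev/video0
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # fourcc=0 lets GStreamer negotiate the format; frames must be BGR and
    # exactly (w, h) in size, or write() silently drops them.
    out = cv2.VideoWriter(build_tx_pipeline(), cv2.CAP_GSTREAMER, 0, fps, (w, h))
    if not out.isOpened():
        raise RuntimeError("VideoWriter failed to open -- check GStreamer support")

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        out.write(frame)

    cap.release()
    out.release()

if __name__ == "__main__":
    main()
```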