DeepStream now supports Python!

Based on the feedback we received from our developer community, we are happy to announce support for Python in DeepStream. We are releasing an alpha version of the Python bindings along with 4 sample apps. As this is an alpha release, some functionality is limited, but performance is on par with the native C++ apps. The GitHub page includes a short how-to guide to get started, and the sample apps can be used to quickly prototype your solutions.

The DeepStream team is constantly looking to improve the product, so try it out and tell us what you think. We value your feedback.

DS python apps on GitHub: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Python bindings downloads: https://developer.nvidia.com/deepstream-download
How-To guide: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/HOWTO.md
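
For a feel of what the bindings expose, here is a minimal sketch of the metadata-access pattern used in the sample apps: a buffer probe on the OSD sink pad that walks the batch metadata via pyds. Names and the print statement are illustrative; the sample apps and HOWTO guide are the authoritative reference.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds  # DeepStream Python bindings

# Buffer probe attached to the OSD element's sink pad: walk per-frame metadata.
def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Retrieve the batch metadata attached upstream (nvstreammux/nvinfer).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("Frame %d: %d objects" % (frame_meta.frame_num, frame_meta.num_obj_meta))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK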


I’ve extracted ds_pybind to the deepstream folder.

This is the error:

$ python deepstream_test_1.py MOT16-11.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

 Unable to create NvStreamMux
 Unable to create pgie
 Unable to create nvosd
Creating EGLSink

It seems the NVIDIA components are None. How can I debug this?

Hi,

Thanks for this. How do we run this on a webcam stream?

Hi, will this also support YOLO models?

My system: Ubuntu 18.04, Python 3.7.3.

$ sudo apt-get install python-gi-dev
$ export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
$ export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
$ git clone https://github.com/GStreamer/gst-python.git
$ cd gst-python
$ git checkout 1a8f48a
$ ./autogen.sh PYTHON=python3

...
configure: error: Python libs not found. Windows requires Python modules to be explicitly linked to libpython.
  configure failed

./configure at line number 14,509:
if ac_fn_c_try_link "$LINENO";
else
"Python libs not found. Windows requires Python modules to be explicitly linked to libpython." "$LINENO" 5

The complete output of ./autogen.sh PYTHON=python3 is as follows:

+ passing argument PYTHON=python3 to configure
+ options passed to configure:  PYTHON=python3
+ check for build tools
  checking for autoconf >= 2.60 ... found 2.69, ok.
  checking for automake >= 1.10 ... found 1.15.1, ok.
  checking for libtoolize >= 1.5.0 ... found 2.4.6, ok.
  checking for pkg-config >= 0.8.0 ... found 0.29.1, ok.
+ running libtoolize --copy --force...
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac,
libtoolize: and rerunning libtoolize and aclocal.
+ running aclocal -I m4 -I common/m4 ...
+ running autoheader ...
+ running autoconf ...
+ running automake -a -c -Wno-portability...
configure.ac:47: installing './compile'
configure.ac:13: installing './missing'
gi/overrides/Makefile.am: installing './depcomp'
plugin/Makefile.am:3: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
+ running configure ... 
  ./configure default flags: --enable-maintainer-mode
  ./configure external flags:  PYTHON=python3

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether UID '1000' is supported by ustar format... yes
checking whether GID '1000' is supported by ustar format... yes
checking how to create a ustar tar archive... gnutar
checking nano version... 0 (release)
checking whether to enable maintainer-specific portions of Makefiles... yes
checking whether make supports nested variables... (cached) yes
checking how to print strings... printf
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for a working dd... /bin/dd
checking how to truncate binary pipes... /bin/dd bs=4096 count=1
checking for mt... mt
checking if mt is a manifest tool... no
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for shl_load... no
checking for shl_load in -ldld... no
checking for dlopen... no
checking for dlopen in -ldl... yes
checking whether a program can dlopen itself... yes
checking whether a statically linked program can dlopen itself... no
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether gcc understands -c and -o together... (cached) yes
checking dependency style of gcc... (cached) gcc3
checking for gcc option to accept ISO C99... none needed
checking for gcc option to accept ISO Standard C... (cached) none needed
checking for python version... 3.7
checking for python platform... linux
checking for python script directory... ${prefix}/lib/python3.7/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python3.7/site-packages
checking for python >= 2.7... checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for GST... yes
checking for PYGOBJECT... yes
okay
checking for headers required to compile python extensions... found
checking for pygobject overrides directory... ${exec_prefix}/lib/python3.7/site-packages/gi/overrides
checking for GST... yes
configure: Using /usr/local/lib/gstreamer-1.0 as the plugin install location
checking for PYGOBJECT... yes
checking for libraries required to embed python... no
configure: error: Python libs not found. Windows requires Python modules to be explicitly linked to libpython.
  configure failed

Please help me solve this problem.
Thank you very much in advance.

Suryadi

Hi All, it’s best to put questions and issues in their own topics so we can have a dedicated thread for each one.

@rilut:
deepstream_test_1.py only supports H264 elementary streams, due to the H264 parser used (https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/43861859fa8aca1f17cf752a208c12b0c8b7d287/apps/deepstream-test1/deepstream_test_1.py#L150). For mp4 support, please see deepstream_test_3.py. We’ll add an FAQ entry for this.
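
For container inputs like mp4, a rough sketch of the test3-style approach (simplified, with illustrative names; see deepstream_test_3.py for the complete source-bin handling) is to replace the filesrc -> h264parse -> decoder chain with uridecodebin and link its decoded video pad to the streammux request pad from the pad-added callback:

# Simplified sketch; uridecodebin demuxes/decodes container formats automatically.
def cb_newpad(decodebin, decoder_src_pad, streammux):
    caps = decoder_src_pad.get_current_caps()
    # Link only the decoded video pad (audio pads are ignored in this sketch)
    if caps.get_structure(0).get_name().startswith("video"):
        sinkpad = streammux.get_request_pad("sink_0")
        decoder_src_pad.link(sinkpad)

source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
source.set_property("uri", "file:///path/to/MOT16-11.mp4")  # illustrative URI
source.connect("pad-added", cb_newpad, streammux)
pipeline.add(source)
# The streammux -> pgie -> nvvidconv -> nvosd -> sink portion stays as in test1.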

@robigregorio

  • For a webcam sample (based on the test1 app), the gist is to replace the filesrc->h264parse->nvv4l2decoder elements with v4l2src->videoconvert elements. The sample code for this has not been ported to Python yet. We will look into it and update. This is a bit involved, so opening a separate topic would be best.
  • The yolo model still needs to be tested. In theory it should work once the yolo sample lib and config are properly prepared per the sample’s README, and the Python sample app’s dstest1_pgie_config.txt is replaced with the yolo config file (see the illustrative config sketch after this list).
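
For illustration only, a YOLO nvinfer config would follow the objectDetector_Yolo sample's config_infer_primary_yoloV3.txt; the keys and paths below are assumptions and must be adjusted to match your own YOLO setup and the sample's README:

[property]
net-scale-factor=0.0039215697906911373
# Illustrative paths; point these at your YOLO network config, weights and labels
custom-network-config=yolov3.cfg
model-file=yolov3.weights
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so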

@suryadi:
Have you installed python3-dev and libpython3-dev? If not please try installing them.
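
For example, on Ubuntu 18.04 that would be something like:

$ sudo apt-get install python3-dev libpython3-dev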

Dear Zhliunycm2,

I am now trying to create a Python 3.6 environment using anaconda-navigator.

$ sudo apt-get install -f python3-dev
python3-dev is already the newest version (3.6.7-1~18.04).

$ sudo apt-get install -f libpython3-dev
libpython3-dev is already the newest version (3.6.7-1~18.04).
libpython3-dev set to manually installed.

$ apt policy python3 python3-dev libpython3-dev
python3:
  Installed: 3.6.7-1~18.04
  Candidate: 3.6.7-1~18.04
  Version table:
 *** 3.6.7-1~18.04 500
        500 http://id.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     3.6.5-3 500
        500 http://id.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
python3-dev:
  Installed: 3.6.7-1~18.04
  Candidate: 3.6.7-1~18.04
  Version table:
 *** 3.6.7-1~18.04 500
        500 http://id.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     3.6.5-3 500
        500 http://id.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
libpython3-dev:
  Installed: 3.6.7-1~18.04
  Candidate: 3.6.7-1~18.04
  Version table:
 *** 3.6.7-1~18.04 500
        500 http://id.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     3.6.5-3 500
        500 http://id.archive.ubuntu.com/ubuntu bionic/main amd64 Packages

$ python3
Python 3.6.7 | packaged by conda-forge | (default, Nov  6 2019, 16:19:42) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

I am still stuck at:

checking for libraries required to embed python... no
configure: error: Python libs not found. Windows requires Python modules to be explicitly linked to libpython.
  configure failed

Thank you very much in advance.

Warmest regards,
Suryadi

@robigregorio
As @zhliunycm2 mentioned in comment #6, to use a webcam (USB camera), one of the DeepStream test sample apps could be modified according to your end use-case to read from the USB device using GStreamer plugin “v4l2src”.

The modified “deepstream-test1” sample pipeline would be:
v4l2src → videoconvert → nvvideoconvert → mux → nvinfer → nvvideoconvert → nvosd → video-renderer

To achieve this, you can merge the following code snippet into def main in apps/deepstream-test1/deepstream_test_1.py.
A diff against apps/deepstream-test1/deepstream_test_1.py will show the additional GStreamer plugins introduced into the pipeline and linked to support reading from UVC devices in general.

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <v4l2-device-path>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
    if not caps_v4l2src:
        sys.stderr.write(" Unable to create v4l2src capsfilter \n")

print("Creating Video Converter \n")

    # Adding videoconvert -> nvvideoconvert, as not all
    # raw formats are supported by nvvideoconvert;
    # e.g. YUYV, the common raw format for many Logitech
    # USB cams, is unsupported.
    # If the camera's raw format is supported by
    # nvvideoconvert, the GStreamer plugins' capability negotiation
    # should be intelligent enough to reduce compute by
    # having videoconvert do passthrough (TODO: we need to confirm this)

    # videoconvert to make sure a superset of raw formats are supported
    vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
    if not vidconvsrc:
        sys.stderr.write(" Unable to create videoconvert \n")

    # nvvideoconvert to convert incoming raw buffers to NVMM Mem (NvBufSurface API)
    nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
    if not nvvidconvsrc:
        sys.stderr.write(" Unable to create Nvvideoconvert \n")

    caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
    if not caps_vidconvsrc:
        sys.stderr.write(" Unable to create capsfilter \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on camera's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing cam %s " %args[1])
    caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
    caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
    source.set_property('device', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
    # Set sync = false to avoid late frame drops at the display-sink
    sink.set_property('sync', False)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(caps_v4l2src)
    pipeline.add(vidconvsrc)
    pipeline.add(nvvidconvsrc)
    pipeline.add(caps_vidconvsrc)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # We link the elements together:
    # v4l2src -> capsfilter -> videoconvert -> nvvideoconvert -> capsfilter ->
    # mux -> nvinfer -> nvvideoconvert -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(caps_v4l2src)
    caps_v4l2src.link(vidconvsrc)
    vidconvsrc.link(nvvidconvsrc)
    nvvidconvsrc.link(caps_vidconvsrc)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = caps_vidconvsrc.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of caps_vidconvsrc \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed GStreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Let's add a probe to get informed of the metadata generated. We add the probe
    # to the sink pad of the osd element, since by that time the buffer will have
    # all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start playback and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

Once merged, you could run (assuming a UVC device at /dev/video0):
python3 deepstream_test_1.py /dev/video0

I'm trying to get python3 deepstream_test_1.py to run with the TLT resnet10_detector.trt.

I have modified dstest1_pgie_config.txt like this:

net-scale-factor=0.0039215697906911373
int8-calib-file=../../../../samples/models/Primary_Detector_Nano_tlt/calibration.bin
labelfile-path=../../../../samples/models/Primary_Detector_Nano_tlt/labels.txt
trt-model-file=../../../../samples/models/Primary_Detector_Nano_tlt/resnet10_detector.etlt
tlt-model-key=ajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6YWM4MjllNmYtZWE5Ny00NzI3LTlmNzItNGY1M2VlZWYxOTFk
input-dims=3;384;1248;0 
uff-input-blob=input_1
batch-size=4
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
is-classifier=0
[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

and like this

net-scale-factor=0.0039215697906911373
model-engine-file=../../models/Primary_Detector_Nano_tlt/resnet10_detector.trt
labelfile-path=../../models/Primary_Detector_Nano_tlt/labels.txt
batch-size=4
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1
gie-unique-id=1
is-classifier=0

but I get this error when it runs:

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Any ideas?

Hi,

Has anyone tried the test2 app with YOLO? Is there any code I need to modify to run it with YOLO, or do I only need to modify the config files?

Hello unnik,

I tried this main function code on a TX2:
python3 deepstream_test_1.py /dev/video0

This is the error message I got:
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source:
streaming stopped, reason not-negotiated (-4)

Does this code support the TX2 onboard camera (/dev/video0)?

Thank you.

Hi,

When will there be support for an equivalent of the gst-dsexample plugin in Python?

@All:
Please use separate topics for questions and issues instead of posting them in this thread. The discussions and solutions will be much clearer that way. Thank you in advance for your cooperation. This thread will be locked to avoid further confusion.

@suryadi: Missing python libs seems like a setup issue. We have not been able to duplicate it for investigation. If this is still a problem, please open a ticket under a separate topic. Thank you.

@adventuredaisy: Please move the TLT model question to a separate ticket. Thank you.

@tzuchun: The TX2 onboard camera is CSI, not USB. The pipeline will be different. Please open a separate ticket. Thank you.
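
For reference only (the details belong in that separate topic), the CSI path on Jetson typically starts from nvarguscamerasrc instead of v4l2src; a rough sketch of the source portion, with illustrative caps:

source = Gst.ElementFactory.make("nvarguscamerasrc", "csi-cam-source")
caps_csi = Gst.ElementFactory.make("capsfilter", "csi_caps")
caps_csi.set_property('caps', Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12"))
# nvarguscamerasrc already outputs NVMM buffers, so the videoconvert ->
# nvvideoconvert stage from the USB example is not needed; link the capsfilter
# src pad to the streammux sink_0 request pad as in the USB example above.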

@ibondokji: gst-dsexample plugin support is under consideration. If you can share some more details on your use case in a separate ticket, that would be greatly appreciated. Thank you.