How to use DeepStream with aravissrc (as a GStreamer command)?

Hi everyone,

I am using a camera that is not supported by v4l2src. In order to get the camera working on my Jetson Nano, I installed aravissrc. Now I can successfully run my camera with GStreamer and get the video stream by running the following command:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="U3V-00D24866386" !  nveglglessink

Running the above command gives the following warning in the terminal:

Setting pipeline to PAUSED ...

Using winsys: x11 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock

(gst-launch-1.0:21433): GLib-GObject-WARNING **: 11:05:30.920: g_object_set_is_valid_property: object class 'ArvUvStream' has no property named 'packet-resend'

Then a window appears and successfully shows the video stream from my camera.

Now I would like to change the above command a little bit to connect it to my YOLOv2-tiny classifier. To do so, I opened a terminal in the directory where my YOLO model is installed (i.e. /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo) and ran the following command:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="U3V-00D24866386"  !  nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=1 unique-id=1 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=16 unique-id=2 infer-on-gie-id=1 ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

This successfully creates the TensorRT engine, but at the end it pops up the following error:

$  gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="U3V-00D24866386"  !  nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=1 unique-id=1 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=16 unique-id=2 infer-on-gie-id=1 ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Setting pipeline to PAUSED ...

Using winsys: x11 
Creating LL OSD context new
0:00:06.117645315 21802   0x556ba15e10 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]:checkEngineParams(): Requested Max Batch Size is less than engine batch size
0:00:06.118706839 21802   0x556ba15e10 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
Loading pre-trained weights...
Loading complete!
Total Number of weights read : 549187
      layer               inp_size            out_size       weightPtr
(1)   conv-bn-leaky     3 x 224 x 224      16 x 224 x 224    448   
(2)   maxpool          16 x 224 x 224      16 x 112 x 112    448   
(3)   conv-bn-leaky    16 x 112 x 112      32 x 112 x 112    5088  
(4)   maxpool          32 x 112 x 112      32 x  56 x  56    5088  
(5)   conv-bn-leaky    32 x  56 x  56      64 x  56 x  56    23584 
(6)   maxpool          64 x  56 x  56      64 x  28 x  28    23584 
(7)   conv-bn-leaky    64 x  28 x  28     128 x  28 x  28    97440 
(8)   maxpool         128 x  28 x  28     128 x  14 x  14    97440 
(9)   conv-bn-leaky   128 x  14 x  14     128 x  14 x  14    245024
(10)  maxpool         128 x  14 x  14     128 x   7 x   7    245024
(11)  conv-bn-leaky   128 x   7 x   7     256 x   7 x   7    540192
(12)  conv-linear     256 x   7 x   7      35 x   7 x   7    549187
(13)  region           35 x   7 x   7      35 x   7 x   7    549187
Anchors are being converted to network input resolution i.e. Anchors x 32 (stride)
Output blob names :
region_13
Total number of layers: 21
Total number of layers on DLA: 0
Building the TensorRT Engine...
Building complete!
0:01:28.498629092 21802   0x556ba15e10 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<nvinfer1> NvDsInferContext[UID 2]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/model_b16_fp16.engine
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstAravis:aravis0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstAravis:aravis0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

My question, precisely, is: “How can I successfully run my YOLOv2-tiny classifier using aravissrc?”

Hi,
We don’t have experience with ‘aravissrc’. It is uncertain whether it can be integrated into deepstream-app. Please share the output of executing ‘$ gst-launch-1.0 aravissrc’ for reference.

The command

$ gst-launch-1.0 aravissrc

cannot run; it gives the following error:

ERROR: pipeline could not be constructed: no element "aravissrc".

This is because it cannot find the aravissrc plugin; therefore, you need to explicitly specify the plugin path in the command.

Running the following command

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc

(which is the simplest command to use with “aravissrc”) gives the following output, which keeps running in the terminal:

nano@nano-desktop:~$ gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

(gst-launch-1.0:10078): GLib-GObject-WARNING **: 14:23:09.758: g_object_set_is_valid_property: object class 'ArvUvStream' has no property named 'packet-resend'
ERROR: from element /GstPipeline:pipeline0/GstAravis:aravis0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstAravis:aravis0:
streaming stopped, reason not-linked (-1)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
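
A side note: if you later want to use the plugin from a Python script (as discussed further down in this thread), the same directory can be made discoverable through the GST_PLUGIN_PATH environment variable rather than the --gst-plugin-path option. A minimal sketch, assuming PyGObject is installed (this is not code from the original post):

# Hypothetical check: point GStreamer at the directory that holds libgstaravis
# and confirm the aravissrc element can be found from Python.
import os
os.environ["GST_PLUGIN_PATH"] = "/usr/local/lib/"   # same path as --gst-plugin-path above

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)  # the plugin registry is scanned here, so GST_PLUGIN_PATH must be set first

factory = Gst.ElementFactory.find("aravissrc")
print("aravissrc found" if factory else "aravissrc NOT found - check GST_PLUGIN_PATH")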

Hi,
My bad. I made a typo. It should be ‘$ gst-inspect-1.0 aravissrc’

I ran this command:

gst-inspect-1.0 --gst-plugin-path=/usr/local/lib/ aravissrc
Factory Details:
  Rank                     none (0)
  Long-name                Aravis Video Source
  Klass                    Source/Video
  Description              Aravis based source
  Author                   Emmanuel Pacaud <emmanuel@gnome.org>

Plugin Details:
  Name                     aravis
  Description              Aravis Video Source
  Filename                 /usr/local/lib/gstreamer-1.0/libgstaravis.0.6.so
  Version                  0.6.4
  License                  LGPL
  Source module            aravis
  Binary package           aravis
  Origin URL               http://blogs.gnome.org/emmanuel

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseSrc
                         +----GstPushSrc
                               +----GstAravis

Pad Templates:
  SRC template: 'src'
    Availability: Always
    Capabilities:
      ANY

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "aravis0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  blocksize           : Size in bytes to read per buffer (-1 = default)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 4096 
  typefind            : Run typefind before negotiating (deprecated, non-functional)
                        flags: readable, writable, deprecated
                        Boolean. Default: false
  do-timestamp        : Apply current stream time to buffers
                        flags: readable, writable
                        Boolean. Default: false
  camera-name         : Name of the camera
                        flags: readable, writable
                        String. Default: null
  camera              : Camera instance to retrieve additional information
                        flags: readable
                        Object of type "ArvCamera"
  gain                : Gain (dB)
                        flags: readable, writable
                        Double. Range:              -1 -             500 Default:              -1 
  gain-auto           : Auto Gain Mode
                        flags: readable, writable
                        Boolean. Default: false
  exposure            : Exposure time (µs)
                        flags: readable, writable
                        Double. Range:              -1 -           1e+08 Default:              -1 
  exposure-auto       : Auto Exposure Mode
                        flags: readable, writable
                        Boolean. Default: false
  h-binning           : CCD horizontal binning
                        flags: readable, writable
                        Integer. Range: 1 - 2147483647 Default: -1 
  v-binning           : CCD vertical binning
                        flags: readable, writable
                        Integer. Range: 1 - 2147483647 Default: -1 
  offset-x            : Offset in x direction
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0 
  offset-y            : Offset in y direction
                        flags: readable, writable
                        Integer. Range: 0 - 2147483647 Default: 0 
  packet-resend       : Request dropped packets to be reissued by the camera
                        flags: readable, writable
                        Boolean. Default: true
  num-buffers         : Number of video buffers to allocate for video frames
                        flags: readable, writable
                        Integer. Range: 1 - 2147483647 Default: 50

Hi,
It is strange that it does not show any supported format on the source pad. Please check if it can output video/x-raw,format=I420. The pipeline would be like:

$ gst-launch-1.0 aravissrc ! video/x-raw,format=I420,width=_SOURCE_W_,height=_SOURCE_H_ ! nvvideoconvert ! nvoverlaysink

_SOURCE_W_ and _SOURCE_H_ have to be replaced with a resolution supported by your source.
If it can be run, it should be possible to customize deepstream-app for your use case.
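
A hedged addition that may help with this check (not part of the original reply): a short PyGObject script can print both the caps aravissrc reports and the caps it actually negotiates against a fakesink, which shows whether I420 (or anything video/x-raw) is on offer. The camera name and plugin path below are taken from the earlier posts:

# Sketch only: negotiate aravissrc against fakesink and print its caps.
import os
import time

os.environ["GST_PLUGIN_PATH"] = "/usr/local/lib/"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'aravissrc name=src camera-name="U3V-00D24866386" ! fakesink'
)
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(5 * Gst.SECOND)   # wait for the state change to finish
time.sleep(2)                        # give the live source time to negotiate

pad = pipeline.get_by_name("src").get_static_pad("src")
print("caps the element reports  :", pad.query_caps(None).to_string())
caps = pad.get_current_caps()
print("caps actually negotiated  :", caps.to_string() if caps else "none yet")

pipeline.set_state(Gst.State.NULL)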

Dear DaneLLL,

Running the following command

gst-launch-1.0 aravissrc -e --gst-plugin-path=/usr/local/lib/ ! video/x-raw,format=I420,width=640,height=480 ! nvvideoconvert ! nvoverlaysink

gives me the following error:

:~$ gst-launch-1.0 aravissrc -e --gst-plugin-path=/usr/local/lib/ ! video/x-raw,format=I420,width=640,height=480 ! nvvideoconvert ! nvoverlaysink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstAravis:aravis0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstAravis:aravis0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Running the following command opens up a frozen display window:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc ! video/x-raw  ! nvvideoconvert  ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=1 unique-id=1 ! nvegltransform  ! nveglglessink

and it shows the following in the terminal:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc ! video/x-raw  ! nvvideoconvert  ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=1 unique-id=1 ! nvegltransform  ! nveglglessink
Setting pipeline to PAUSED ...

Using winsys: x11 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock

(gst-launch-1.0:14235): GLib-GObject-WARNING **: 09:13:25.563: g_object_set_is_valid_property: object class 'ArvUvStream' has no property named 'packet-resend'
0:00:04.474795840 14235   0x55eb43d4a0 WARN                 nvinfer gstnvinfer.cpp:1201:gst_nvinfer_process_full_frame:<nvinfer0> error: NvDsBatchMeta not found for input buffer.
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: NvDsBatchMeta not found for input buffer.
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1201): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstAravis:aravis0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstAravis:aravis0:
streaming stopped, reason error (-5)

That’s it.

I finally solved it. The following command works well for me. If anyone else has a camera that is not UVC-compliant (GenICam in my case), they can install aravissrc and use it instead of v4l2src. By opening a terminal in the YOLO directory and entering the following command, the whole setup works:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="U3V-00D24866386"  ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=(int)1280,height=(int)720,format=NV12" ! m.sink_0 nvstreammux  name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt batch-size=1  unique-id=1 infer-on-class-ids=1 !  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-4.0/sources/yololight_car_test_jelo_panjereh_plate_both_backup/config_infer_primary_yoloV2_tiny.txt  batch-size=1 unique-id=2 infer-on-gie-id=1 !    nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform  ! nveglglessink sync=false

Hi,
Thanks for sharing.

@barzanhayati
Hi!
If you don’t mind me asking: I am trying to achieve a similar thing, since I am using a FLIR Blackfly camera + aravissrc as well.

I am running through the jetson-inference example, and I wanted to see if you tried to change the source from jetson.utils.gstCamera. I am able to run the aravissrc command in the terminal, but I would like to be able to call it from my Python script.

I would appreciate your help.

Thanks!

Did you ever get this to work?

If you were able to run aravissrc via the terminal and gst-launch-1.0, then you should be able to run it from a Python script as well. Just define the pipeline you ran in the terminal in your Python script and use appsink.

Once you can run a sample pipeline in the terminal, you can also run that pipeline in a Python script.

I don’t know jetson.utils.gstCamera.
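
To make that suggestion concrete, here is a hedged sketch (not code from this thread) that runs an aravissrc pipeline from Python and pulls raw frames through appsink. The camera name and plugin path come from the earlier posts; depending on your camera's native pixel format you may need bayer2rgb or nvvideoconvert instead of plain videoconvert, so treat the caps below as placeholders.

import os
os.environ["GST_PLUGIN_PATH"] = "/usr/local/lib/"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# appsink hands each frame to Python instead of rendering it on screen.
PIPELINE = (
    'aravissrc camera-name="U3V-00D24866386" ! '
    "videoconvert ! video/x-raw,format=BGR ! "   # adjust for your camera's native format
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)

def on_new_sample(sink):
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    caps = sample.get_caps().get_structure(0)
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        # info.data holds the raw frame bytes; wrap it with NumPy/OpenCV here.
        print("frame: %dx%d, %d bytes" % (
            caps.get_value("width"), caps.get_value("height"), info.size))
        buf.unmap(info)
    return Gst.FlowReturn.OK

pipeline = Gst.parse_launch(PIPELINE)
pipeline.get_by_name("sink").connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)

try:
    # Block until an error or end-of-stream message arrives on the bus.
    pipeline.get_bus().timed_pop_filtered(
        Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
finally:
    pipeline.set_state(Gst.State.NULL)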