Visionworks Framesource slowdown

I’ve created a new class for grabbing frames from an MJPEG V4L2 camera. I create a GStreamer pipeline v4l2src → nvjpegdec → nvvidconv → nvvideosink. I couldn’t get it working until I realised that nvvideosink needs its “outcaps” property set, and that it should be NV12 or I420 despite the element claiming RGBA output! The source for my class is included below.

Now the code works and I can create a framesource and load frames in the VisionWorks tracking example. For the first 10-20 seconds the video is fine, but then it suddenly becomes choppy and doesn’t recover until I restart. CPU and memory usage seem fine. Using the same camera through the standard V4L2 framesource works without issue. The algorithm and display timers report high fps despite the choppiness. A similar pipeline run through gst-launch to nvoverlaysink works fine with no slowdown (a reconstruction of that command is below).
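For comparison, this is roughly the gst-launch pipeline I used with nvoverlaysink (the device path and caps values here are assumptions matching the defaults in the code below):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! nvjpegdec ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink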

#include "GStreamerMJPEGFrameSourceImpl.hpp"

#include <NVX/FrameSource/GStreamer/GStreamerEGLStreamSinkFrameSourceImpl.hpp>

#include <sstream>

namespace nvidiaio
{

GStreamerMJPEGFrameSourceImpl::GStreamerMJPEGFrameSourceImpl(uint cameraIdx_) :
    GStreamerEGLStreamSinkFrameSourceImpl(nvxio::FrameSource::CAMERA_SOURCE, "GStreamerMJPEGFrameSource", false),
    cameraIdx(cameraIdx_)
{
}

bool GStreamerMJPEGFrameSourceImpl::setConfiguration(const FrameSource::Parameters& params)
{
    NVXIO_ASSERT(end);

    configuration.frameHeight = params.frameHeight;
    configuration.frameWidth = params.frameWidth;
    configuration.fps = params.fps;

    NVXIO_ASSERT((params.format == NVXCU_DF_IMAGE_NV12) ||
                 (params.format == NVXCU_DF_IMAGE_U8) ||
                 (params.format == NVXCU_DF_IMAGE_RGB) ||
                 (params.format == NVXCU_DF_IMAGE_RGBX) ||
                 (params.format == NVXCU_DF_IMAGE_NONE));

    configuration.format = params.format;

    return true;
}

bool GStreamerMJPEGFrameSourceImpl::InitializeGstPipeLine()
{
    // Set defaults
    if (configuration.frameWidth == (vx_uint32)-1)
        configuration.frameWidth = 1920;
    if (configuration.frameHeight == (vx_uint32)-1)
        configuration.frameHeight = 1080;
    if (configuration.fps == (vx_uint32)-1)
        configuration.fps = 60;

    GstStateChangeReturn status;
    end = true;

    pipeline = GST_PIPELINE(gst_pipeline_new(nullptr));
    if (!pipeline)
    {
        NVXIO_PRINT("Cannot create Gstreamer pipeline");
        return false;
    }

    bus = gst_pipeline_get_bus(GST_PIPELINE (pipeline));

    // create v4l2src
    GstElement * v4l2src = gst_element_factory_make("v4l2src", nullptr);
    if (!v4l2src)
    {
        NVXIO_PRINT("Cannot create v4l2src");

        FinalizeGstPipeLine();

        return false;
    }

    std::ostringstream cameraDev;
    cameraDev << "/dev/video" << cameraIdx;
    g_object_set(G_OBJECT(v4l2src), "device", cameraDev.str().c_str(), nullptr);

    gst_bin_add(GST_BIN(pipeline), v4l2src);

    // create nvjpegdec
    GstElement * nvjpegdec = gst_element_factory_make("nvjpegdec", nullptr);
    if (!nvjpegdec)
    {
        NVXIO_PRINT("Cannot create nvjpegdec element");
        FinalizeGstPipeLine();

        return false;
    }

    // -1 lets the decoder tolerate any number of decode errors;
    // idct-method 2 selects libjpeg's float IDCT. Plain ints are passed
    // here since both are int-sized properties.
    g_object_set(G_OBJECT(nvjpegdec), "max-errors", -1, "idct-method", 2, nullptr);

    gst_bin_add(GST_BIN(pipeline), nvjpegdec);

    // create nvvidconv
    GstElement * nvvidconv = gst_element_factory_make("nvvidconv", nullptr);
    if (!nvvidconv)
    {
        NVXIO_PRINT("Cannot create nvvidconv");
        FinalizeGstPipeLine();

        return false;
    }

    gst_bin_add(GST_BIN(pipeline), nvvidconv);

    // create nvvideosink element
    GstElement * nvvideosink = gst_element_factory_make("nvvideosink", nullptr);
    if (!nvvideosink)
    {
        NVXIO_PRINT("Cannot create nvvideosink element");
        FinalizeGstPipeLine();

        return false;
    }

    std::ostringstream stream;

    stream << "video/x-raw(memory:NVMM), width=" << configuration.frameWidth << ", height=" << configuration.frameHeight << ", format=(string)NV12, framerate=" << configuration.fps << "/1;";

    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_out(gst_caps_from_string(stream.str().c_str()));

    g_object_set(G_OBJECT(nvvideosink),
                 "display", context.display,
                 "stream", context.stream,
                 "fifo", fifoMode,
                 "max-lateness", G_GINT64_CONSTANT(-1),
                 "throttle-time", G_GUINT64_CONSTANT(0),
                 "render-delay", G_GUINT64_CONSTANT(0),
                 "qos", FALSE,
                 "sync", FALSE,
                 "async", TRUE,
                 nullptr);

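    // Note: nvvideosink advertises RGBA on its pads, but (as discussed above)
    // the EGLStream consumer only behaved once "outcaps" was forced to
    // NVMM NV12 (or I420).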
    g_object_set(G_OBJECT(nvvideosink), "outcaps", caps_out.get(), nullptr);

    gst_bin_add(GST_BIN(pipeline), nvvideosink);

    // Create caps for the v4l2src -> nvjpegdec link
    stream.str(std::string());
    stream << "image/jpeg, width=[1," << configuration.frameWidth << "], height=[1," << configuration.frameHeight << "], framerate=" << configuration.fps << "/1;";

    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_v42lsrc(gst_caps_from_string(stream.str().c_str()));

    if (!caps_v42lsrc)
    {
        NVXIO_PRINT("Failed to create caps v4lsrc");
        FinalizeGstPipeLine();

        return false;
    }

    // link elements
    if (!gst_element_link_filtered(v4l2src, nvjpegdec, caps_v42lsrc.get()))
    {
        NVXIO_PRINT("GStreamer: cannot link v4l2src -> nvjpegdec using caps");
        FinalizeGstPipeLine();

        return false;
    }

    // Create caps for the nvjpegdec -> nvvidconv link
    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_nvjpegdec(gst_caps_from_string("video/x-raw, format=(string)I420"));

    if (!caps_nvjpegdec)
    {
        NVXIO_PRINT("Failed to create caps nvjpegdec");
        FinalizeGstPipeLine();

        return false;
    }

    // link elements
    if (!gst_element_link_filtered(nvjpegdec, nvvidconv, caps_nvjpegdec.get()))
    {
        NVXIO_PRINT("GStreamer: cannot link nvjpegdec -> nvvidconv using caps");
        FinalizeGstPipeLine();

        return false;
    }

    // Create caps for the nvvidconv -> nvvideosink link
    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_nvvidconv(gst_caps_from_string("video/x-raw(memory:NVMM), format=(string)NV12"));

    if (!caps_nvvidconv)
    {
        NVXIO_PRINT("Failed to create caps nvvidconv");
        FinalizeGstPipeLine();

        return false;
    }

    // link elements
    if (!gst_element_link_filtered(nvvidconv, nvvideosink, caps_nvvidconv.get()))
    {
        NVXIO_PRINT("GStreamer: cannot link nvvidconv -> nvvideosink using caps");
        FinalizeGstPipeLine();

        return false;
    }

    // Force pipeline to play video as fast as possible, ignoring system clock
    gst_pipeline_use_clock(pipeline, nullptr);

    status = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
    handleGStreamerMessages();
    
    if (status == GST_STATE_CHANGE_ASYNC)
    {
        status = gst_element_get_state(GST_ELEMENT(pipeline), nullptr, nullptr, GST_CLOCK_TIME_NONE);
    }
    

    if (status == GST_STATE_CHANGE_FAILURE)
    {
        NVXIO_PRINT("GStreamer: unable to start playback");
        FinalizeGstPipeLine();

        return false;
    }

    if (!updateConfiguration(nvvideosink, v4l2src, configuration))
    {
        FinalizeGstPipeLine();
        return false;
    }

    end = false;

    return true;
}

} // namespace nvidiaio

Hi Matt,
Have you tried maxing the clocks by running jetson_clocks.sh?

Yep, I’ve run nvpmodel -m 0 and jetson_clocks.sh to try to get maximum performance. The fan runs all the time and the clock speeds are high. The reported framerates claim hundreds of fps for the tracking and 30 fps for the display.
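For reference, these are the commands I run before each test (the jetson_clocks.sh location is assumed from a standard r27.1 flash):

sudo nvpmodel -m 0
sudo ~/jetson_clocks.sh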

I tried creating a different pipeline, v4l2src (I420) → nvvidconv → nvvideosink, and it has a similar problem. Using the standard V4L2 camera framesource has no slowdown either. That makes me think the problem is with nvvideosink, as running the same pipelines through gst-launch with nvoverlaysink gives no problem (see the command after this paragraph). Enabling fifo mode rather than mailbox mode doesn’t seem to help either; it just causes a backlog of frames.
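The nvoverlaysink comparison command for this I420 variant would be something like the following (again, the device path and caps values are assumptions based on the defaults in my code):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=I420,width=640,height=480,framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink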

Hi Matt, are you able to share your implementation so that we can reproduce it? [v4l2src (I420) → nvvidconv → nvvideosink] should be good for us to run with arbitrary USB cameras.

That’d be great, thanks! I’ve attached it below. I’ve set the default to 640x480 at 60 Hz here. It seems to take longer for the slowdown to occur at lower resolutions. On my camera I noticed the first frame I grab is all green, which doesn’t occur with the MJPEG implementation above.

#include "GStreamerYUVFrameSourceImpl.hpp"

#include <NVX/FrameSource/GStreamer/GStreamerEGLStreamSinkFrameSourceImpl.hpp>

#include <sstream>

namespace nvidiaio
{

GStreamerYUVFrameSourceImpl::GStreamerYUVFrameSourceImpl(uint cameraIdx_) :
    GStreamerEGLStreamSinkFrameSourceImpl(nvxio::FrameSource::CAMERA_SOURCE, "GStreamerYUVFrameSource", false),
    cameraIdx(cameraIdx_)
{
}

bool GStreamerYUVFrameSourceImpl::setConfiguration(const FrameSource::Parameters& params)
{
    NVXIO_ASSERT(end);

    configuration.frameHeight = params.frameHeight;
    configuration.frameWidth = params.frameWidth;
    configuration.fps = params.fps;

    NVXIO_ASSERT((params.format == NVXCU_DF_IMAGE_NV12) ||
                 (params.format == NVXCU_DF_IMAGE_U8) ||
                 (params.format == NVXCU_DF_IMAGE_RGB) ||
                 (params.format == NVXCU_DF_IMAGE_RGBX) ||
                 (params.format == NVXCU_DF_IMAGE_NONE));

    configuration.format = params.format;

    return true;
}

bool GStreamerYUVFrameSourceImpl::InitializeGstPipeLine()
{
    // Set defaults
    if (configuration.frameWidth == (vx_uint32)-1)
        configuration.frameWidth = 640;
    if (configuration.frameHeight == (vx_uint32)-1)
        configuration.frameHeight = 480;
    if (configuration.fps == (vx_uint32)-1)
        configuration.fps = 60;

    GstStateChangeReturn status;
    end = true;

    pipeline = GST_PIPELINE(gst_pipeline_new(nullptr));
    if (!pipeline)
    {
        NVXIO_PRINT("Cannot create Gstreamer pipeline");
        return false;
    }

    bus = gst_pipeline_get_bus(GST_PIPELINE (pipeline));

    // create v4l2src
    GstElement * v4l2src = gst_element_factory_make("v4l2src", nullptr);
    if (!v4l2src)
    {
        NVXIO_PRINT("Cannot create v4l2src");

        FinalizeGstPipeLine();

        return false;
    }

    std::ostringstream cameraDev;
    cameraDev << "/dev/video" << cameraIdx;
    g_object_set(G_OBJECT(v4l2src), "device", cameraDev.str().c_str(), nullptr);

    gst_bin_add(GST_BIN(pipeline), v4l2src);

    // create nvvidconv
    GstElement * nvvidconv = gst_element_factory_make("nvvidconv", nullptr);
    if (!nvvidconv)
    {
        NVXIO_PRINT("Cannot create nvvidconv");
        FinalizeGstPipeLine();

        return false;
    }

    gst_bin_add(GST_BIN(pipeline), nvvidconv);

    // create nvvideosink element
    GstElement * nvvideosink = gst_element_factory_make("nvvideosink", nullptr);
    if (!nvvideosink)
    {
        NVXIO_PRINT("Cannot create nvvideosink element");
        FinalizeGstPipeLine();

        return false;
    }

    std::ostringstream stream;

    stream << "video/x-raw(memory:NVMM), width=" << configuration.frameWidth << ", height=" << configuration.frameHeight << ", format=(string)NV12, framerate=" << configuration.fps << "/1;";

    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_out(gst_caps_from_string(stream.str().c_str()));

    g_object_set(G_OBJECT(nvvideosink),
                 "display", context.display,
                 "stream", context.stream,
                 "fifo", fifoMode,
                 "max-lateness", G_GINT64_CONSTANT(-1),
                 "throttle-time", G_GUINT64_CONSTANT(0),
                 "render-delay", G_GUINT64_CONSTANT(0),
                 "qos", FALSE,
                 "sync", FALSE,
                 "async", TRUE,
                 nullptr);

    g_object_set(G_OBJECT(nvvideosink), "outcaps", caps_out.get(), nullptr);

    gst_bin_add(GST_BIN(pipeline), nvvideosink);

    // Create caps for the v4l2src -> nvvidconv link
    stream.str(std::string());
    stream << "video/x-raw, format=(string){I420}, width=[1," << configuration.frameWidth <<
              "], height=[1," << configuration.frameHeight << "], framerate=" << configuration.fps << "/1;";

    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_v42lsrc(gst_caps_from_string(stream.str().c_str()));

    if (!caps_v42lsrc)
    {
        NVXIO_PRINT("Failed to create caps v4lsrc");
        FinalizeGstPipeLine();

        return false;
    }

    // link elements
    if (!gst_element_link_filtered(v4l2src, nvvidconv, caps_v42lsrc.get()))
    {
        NVXIO_PRINT("GStreamer: cannot link v4l2src -> nvvidconv using caps");
        FinalizeGstPipeLine();

        return false;
    }

    // Create caps for the nvvidconv -> nvvideosink link
    std::unique_ptr<GstCaps, GStreamerObjectDeleter> caps_nvvidconv(gst_caps_from_string("video/x-raw(memory:NVMM), format=(string)NV12"));

    if (!caps_nvvidconv)
    {
        NVXIO_PRINT("Failed to create caps nvvidconv");
        FinalizeGstPipeLine();

        return false;
    }

    // link elements
    if (!gst_element_link_filtered(nvvidconv, nvvideosink, caps_nvvidconv.get()))
    {
        NVXIO_PRINT("GStreamer: cannot link nvvidconv -> nvvideosink using caps");
        FinalizeGstPipeLine();

        return false;
    }

    // Force pipeline to play video as fast as possible, ignoring system clock
    gst_pipeline_use_clock(pipeline, nullptr);

    status = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
    handleGStreamerMessages();
    
    if (status == GST_STATE_CHANGE_ASYNC)
    {
        status = gst_element_get_state(GST_ELEMENT(pipeline), nullptr, nullptr, GST_CLOCK_TIME_NONE);
    }

    if (status == GST_STATE_CHANGE_FAILURE)
    {
        NVXIO_PRINT("GStreamer: unable to start playback");
        FinalizeGstPipeLine();

        return false;
    }

    if (!updateConfiguration(nvvideosink, v4l2src, configuration))
    {
        FinalizeGstPipeLine();
        return false;
    }

    end = false;

    return true;
}

} // namespace nvidiaio

Hi Matt, which app do you use? nvgstcamera_capture?

I’ve actually been using the VisionWorks-Tracking-0.88-Sample app nvx_sample_object_tracker

I decided I should test NvCameraSourceImpl as well, and made that work by setting the outcaps property of nvvideosink. I still have the same slowdown problem after a few seconds of use.

Since there is no existing sample that reproduces this, it would be great if you could share a binary that works on r27.1 + VisionWorks 1.6 (installed via JetPack 3.0).

I’ve created a binary that starts a V4L2 camera framesource with 640x480, I420, 60 Hz by default. If you want another resolution/framerate I can change it or make it a parameter. So the GStreamer pipeline is v4l2src → nvvidconv → nvvideosink. With this setup it can take a little longer before the slowdown starts to occur.

[Dropbox link - broken]

Are you able to attach it here? Somehow we cannot download it from Dropbox.

There doesn’t seem to be an option to attach, but I think I’ve fixed the Dropbox link, sorry.

[Dropbox link - file since deleted]

edit: Seems you can attach after posting but not during!
YUVExample.zip (235 KB)

Hi MattTS,

We ran your YUVExample and hit a “Can’t open source URI YUV” error.
Could you build it again to match the command below:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,width=1280,height=720,framerate=30/1' ! nvvidconv ! nvoverlaysink

We can run this pipeline successfully. Thanks!

I think I must have built it opening /dev/video1. I’ll build it again once I get to work, sorry.

I’ve built it to open /dev/video0 with 1280x720 at 30 Hz now. Thanks again.
yuv720p.zip (235 KB)

Hi MattTS,

After I run “nvx_sample_object_tracker”, I need to press the “S” key to skip the current frame.
Could you share how to reproduce the choppy issue?

It’s based on the tracking example in VisionWorks, so you can use the mouse to draw a bounding box and then it’ll track that. If it loses the object then it’ll need a new bounding box drawn on. It’s best to put the bounding box around something with well-defined corners. After the webcam feed has been running a little while the video becomes choppy, but the printed frame rates for the tracking algorithm and display both remain high.

I’ll try to make a video when I get into the office to show this.

Thanks MattTS.

I can see “Object were lost”, but after pressing the “S” key the display goes back to normal.
Are you hitting the same issue? Could you also attach a video for our reference?

Hi MattTS,
Please share a video showing the issue. Do you observe it with several USB cameras? We ran a Logitech c930e and can see it freeze when “Object were lost” appears, but get a good scene back after pressing the “S” key.

We only have the one USB camera here, but I managed to replicate it using the nvcamera framesource with
nvcamerasrc → nvvideosink.

Below is a video showing the fault. For the first 20 seconds it’s fine, then the choppiness starts. This is with nvpmodel -m 0 and jetson_clocks.sh applied.

You might need to download it to see it in full quality as the embedded preview is low quality.

[Dropbox link - file since deleted]