Error encoding with gstreamer and omxh264enc

My workflow is a C++ application utilizing OpenCV and other libraries that will:

  1. Decode several rtsp streams
  2. Perform inference and other processing
  3. Draw metadata on frames
  4. Encode each stream to h264 using OpenCV + gstreamer with omxh264enc

Here is the gstreamer pipeline I am using:

appsrc ! video/x-raw, format=(string)BGR ! videoconvert
! video/x-raw, format=(string)I420, framerate=(fraction)15/1
! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24
! video/x-h264,level=(string)5.2,stream-format=byte-stream
! h264parse ! qtmux ! filesink location=file.mp4

I’m not sure if the qp-range is optimal, but it works and seems to produce sufficient quality vs size. I’m not familiar with h264 quantization parameters.

To reproduce the error, I isolated the encoding portion of my processing pipeline so that it will:

  1. Load a 30-second video
  2. Create 4 or 8 encoder objects
  3. Push frames into each encoder to create 30-second recordings in a loop

After some time, usually a few minutes, one of the encoders throws the following error:

Framerate set to : 15 at NvxVideoEncoderSetParameterH264: Profile = 100, Level = 52
NvMMLiteVideoEncDoWork: Surface resolution (1 x 3145728) smaller than encode resolution (1920 x 1080)
VENC: NvMMLiteVideoEncDoWork: 4207: BlockSide error 0x4
Event_BlockError from 330BlockAvcEnc : Error code - 4
Sending error event from 330BlockAvcEnc

This call also appears to block the calling thread indefinitely. If I attempt to cause a core dump using kill -SIGSEGV, the stack is corrupted.
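One way I could at least detect the hang in my application is to run the blocking call through a watchdog. This is a hypothetical helper (not a GStreamer or OpenCV API): it runs a possibly-blocking call on a detached worker thread and reports whether it finished within a timeout. It can only detect the hang, not cancel it; a truly stuck worker thread is leaked.

```cpp
#include <chrono>
#include <future>
#include <memory>
#include <thread>

// Returns true if fn() completed within `timeout`, false if it is still blocked.
template <typename Fn>
bool runWithTimeout(Fn fn, std::chrono::milliseconds timeout)
{
    auto done = std::make_shared<std::promise<void>>();
    auto finished = done->get_future();
    std::thread([fn, done]() mutable {
        fn();                 // e.g. the push-buffer call that sometimes blocks
        done->set_value();
    }).detach();
    return finished.wait_for(timeout) == std::future_status::ready;
}
```

A promise/future pair is used instead of std::async because the future returned by std::async blocks in its destructor, which would hang the caller again.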

When running with UndefinedBehaviorSanitizer, I sometimes get the following stack trace when this error occurs:

UndefinedBehaviorSanitizer:DEADLYSIGNAL
==3147==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x007f37f34e98 bp 0x007f313d0140 sp 0x007f313cffa0 T3172)
==3147==The signal is caused by a READ memory access.
==3147==Hint: address points to the zero page.
#0 0x7f37f34e94 in gst_query_find_allocation_meta (/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so+0xae94)
#1 0x7f37f36a00 in gst_query_find_allocation_meta (/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so+0xca00)
#2 0x7f37f36b20 in gst_query_find_allocation_meta (/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so+0xcb20)
#3 0x7f37f4d3b0 (/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so+0x233b0)
#4 0x7f7507fd8c in gst_element_change_state (/usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0+0x62d8c)

I have tried using nvv4l2h264enc as well with the following pipeline:

appsrc ! video/x-raw, format=(string)BGR ! videoconvert ! video/x-raw, format=(string)I420, framerate=(fraction)10/1
! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420
! nvv4l2h264enc profile=4 preset-level=3 insert-sps-pps=1 iframeinterval=10
! video/x-h264,stream-format=byte-stream ! h264parse ! qtmux ! filesink location=file.mp4

This pipeline appears to cause a memory leak on the dma buffers. Monitoring /sys/kernel/debug/nvmap/iovmm/allocations shows an increase of roughly 1.5MB every time I write a video.
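For reference, a small helper I could use to track the total while looping recordings. The two-column "total ... <size>K" layout is assumed from typical /sys/kernel/debug/nvmap/iovmm/clients output and may differ between L4T releases:

```cpp
#include <sstream>
#include <string>

// Returns the "total" size in KB from the nvmap clients text, or -1 if not found.
long parseNvmapTotalKb(const std::string& clientsText)
{
    std::istringstream in(clientsText);
    std::string line;
    while (std::getline(in, line))
    {
        if (line.rfind("total", 0) == 0)  // line starts with "total"
        {
            std::istringstream fields(line);
            std::string label, size;
            fields >> label >> size;
            if (!size.empty() && size.back() == 'K')
            {
                size.pop_back();
                return std::stol(size);
            }
        }
    }
    return -1;
}
```

Reading the file once per recording and logging the delta would make the ~1.5MB-per-video growth show up as roughly 1536K steps.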

Does anyone know what is going on here?

Hi,
Please try videotestsrc:

videotestsrc ! video/x-raw, format=(string)BGR, width=1920,height=1080 ! videoconvert
! video/x-raw, format=(string)I420, framerate=(fraction)15/1
! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 
! video/x-h264,level=(string)5.2,stream-format=byte-stream 
! h264parse ! qtmux ! filesink location=file.mp4

We want to clarify that it is not an issue in the appsrc code.

I have written a bash script to loop the pipeline that you provided with gst-launch-1.0. It creates a gst-launch process 8 times every 10 seconds and then kills all of them. There does not appear to be an issue in this configuration. However, this is a fairly different workflow from my application.

My application creates the pipeline in the same process space 8 times every n seconds while pushing frames at 15fps, whereas with gst-launch-1.0 a separate process is created for each “encoder”. I am using OpenCV’s implementation of gstreamer support, which is fairly simple. The implementation is shown at [url]https://github.com/opencv/opencv/blob/master/modules/videoio/src/cap_gstreamer.cpp[/url]. I have looked through their implementation and there does not appear to be anything that could cause this issue. It simply uses gst_parse_launch and pushes frames to gstreamer using gst_app_src_push_buffer. I have isolated my application code to purely creating the pipeline and pushing frames through OpenCV’s gstreamer implementation.

If you have any other suggestions for debugging the issue I can try them.

Note: the issue seems to happen during the pipeline creation.

Hi,
Not sure, but it may be an issue in appsrc. For 1920x1080 BGR, the frame size is 1920x1080x3 bytes. Possibly some frames are copied out of range, leading to memory corruption.
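A minimal sketch of the size check implied above: an appsrc buffer for a W x H BGR frame must be exactly W*H*3 bytes, and copying from a non-continuous (row-padded) cv::Mat with a single memcpy would read out of range.

```cpp
#include <cstddef>

// Expected byte count of a tightly packed 8-bit BGR frame (3 bytes per pixel).
std::size_t bgrFrameBytes(std::size_t width, std::size_t height)
{
    return width * height * 3;
}

// Before the memcpy in the push loop one could assert, using standard OpenCV
// cv::Mat APIs:
//   m_frames[i].isContinuous()
//   m_frames[i].total() * m_frames[i].elemSize() == bgrFrameBytes(width, height)
```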

If it still looks to be an issue in omxh264enc, please modify the test code below for your case and share it with us:
[url]https://devtalk.nvidia.com/default/topic/1026106/jetson-tx1/usage-of-nvbuffer-apis/post/5219225/#5219225[/url]
so that we can reproduce and debug it.

I have been able to reproduce the issue with the code below. There is a problem that results in the video not being playable, but it is still encoding the frames and writing them to file. Maybe you can spot what is missing that results in an unplayable video. Let me know if you have any questions about the code. You will have to change the hard-coded paths, and possibly the frame rate and number of recorders.

#include <gst/app/gstappsrc.h>
#include <gst/gst.h>
#include <gst/gstinfo.h>

#include <opencv2/core/mat.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>

#include <chrono>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <signal.h>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using namespace std::chrono;


bool running{true};

void signalHandler(sigset_t* set)
{
    int sig;
    sigwait(set, &sig);

    running = false;
}

/**
 * Get frames from a video, cv::VideoCapture used for simplicity
 * @param videoPath
 * @return
 */
std::vector<cv::Mat> getFramesFromVideo(const std::string& videoPath)
{
    cv::VideoCapture cap(videoPath);

    std::vector<cv::Mat> frames;

    cv::Mat image;

    while (cap.read(image))
    {
        frames.emplace_back(image);
        if(!running)
        {
            break;
        }
    }

    return frames;
}

class Recorder
{
public:
    Recorder(int frameRate, const std::string& name, std::vector<cv::Mat> frames)
    {
        m_frameRate = frameRate;
        m_name = name;
        m_frames = frames;
        m_running = true;
    }

    void startRecorder()
    {
        m_recordThread = std::thread(&Recorder::record, this);
    }

    void stopRecorder()
    {
        m_running = false;

        if(m_recordThread.joinable())
        {
            m_recordThread.join();
        }
    }

private:
    void record()
    {
        std::ostringstream oss;

        int width = m_frames[0].cols;
        int height = m_frames[0].rows;

        oss << "appsrc name=mysource ! video/x-raw,width=" << width << ",height=" << height << ",format=(string)BGR"
            << " ! videoconvert ! video/x-raw, format=(string)I420, framerate=(fraction)15/1"
            << " ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24"
            << " ! video/x-h264,level=(string)5.2,stream-format=byte-stream"
            << " ! h264parse ! qtmux ! filesink location=";

        // initialize gstreamer
        int argc = 0;
        if (!gst_init_check(&argc, nullptr, nullptr))
        {
            std::cout << m_name << " failed to initialize gstreamer library with gst_init()" << std::endl;
            return;
        }

        GMainLoop *main_loop;
        main_loop = g_main_loop_new(nullptr, FALSE);

        uint32_t microSecPerFrame = static_cast<uint32_t>(1000000.0 / m_frameRate);

        while(m_running)
        {
            // append the file path to the pipeline
            std::string launchString = oss.str();
            launchString += "testvideos/" + getNameFromTime(system_clock::now());

            GError *error = nullptr;
            GstPipeline *gst_pipeline = (GstPipeline *)gst_parse_launch(launchString.c_str(), &error);

            GstElement *appsrc_ = gst_bin_get_by_name(GST_BIN(gst_pipeline), "mysource");
            gst_app_src_set_stream_type(GST_APP_SRC(appsrc_), GST_APP_STREAM_TYPE_STREAM);

            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_PLAYING);
            
            GstClockTime duration = ((double)1 / m_frameRate) * GST_SECOND;
            for(int i = 0; i < m_frames.size(); i++)
            {
                if(!m_running)
                {
                    break;
                }

                GstBuffer *buffer;
                guint size = width * height * 3;
                GstMapInfo map = {0};
                GstFlowReturn ret;

                buffer = gst_buffer_new_allocate(nullptr, size, nullptr);

                gst_buffer_map(buffer, &map, GST_MAP_WRITE);
                // copy cv::Mat data into map
                memcpy(map.data, m_frames[i].data, size);
                gst_buffer_unmap(buffer, &map);

                // set buffer properties before pushing; the buffer must not be
                // touched after gst_buffer_unref()
                GstClockTime timestamp = duration * i;
                GST_BUFFER_DURATION(buffer) = duration;
                GST_BUFFER_OFFSET(buffer) = i;
                GST_BUFFER_PTS(buffer) = timestamp;
                GST_BUFFER_DTS(buffer) = timestamp;

                g_signal_emit_by_name(appsrc_, "push-buffer", buffer, &ret);
                gst_buffer_unref(buffer);

                // sleep for roughly one frame time to act as if we are receiving decoded frames
                std::this_thread::sleep_for(std::chrono::microseconds{microSecPerFrame});
            }

            // close pipeline
            gst_object_unref(appsrc_);  // gst_bin_get_by_name() returned a new reference
            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_NULL);
            gst_object_unref(GST_OBJECT(gst_pipeline));
        }
        g_main_loop_unref(main_loop);
    }

    /**
     * Convert chrono time into a date time string with mp4 appended
     * @param timePoint
     * @return
     */
    std::string getNameFromTime(const time_point<system_clock> &timePoint)
    {
        time_t now = std::chrono::system_clock::to_time_t(timePoint);
        struct tm tstruct{};
        char buf[80];
        tstruct = *localtime(&now);
        strftime(buf, sizeof(buf), "%Y-%m-%d_%H-%M-%S", &tstruct);
        return m_name + "_" + std::string(buf) + ".mp4";
    }

    std::vector<cv::Mat> m_frames;

    int m_frameRate;
    std::thread m_recordThread;
    std::string m_name;
    bool m_running;
};

int main(int argc, char** argv)
{
    std::thread signalThread;
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, SIGQUIT);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);

    pthread_sigmask(SIG_BLOCK, &set, nullptr);

    signalThread = std::thread(&signalHandler, &set);

    auto frames = getFramesFromVideo("video.mp4");
    
    int numRecorders = 8;

    std::vector<Recorder> recorders;

    // create 8 recorders each with the same cv::Mats
    for(int i = 0; i < numRecorders; i++)
    {
        recorders.emplace_back(15, "recorder" + std::to_string(i), frames);
    }

    for(auto &recorder : recorders)
    {
        recorder.startRecorder();
    }

    signalThread.join();

    for(auto &recorder : recorders)
    {
        recorder.stopRecorder();
    }

    return 0;
}

Hi,
Not sure, but it looks like OpenCV has to be re-installed. Could you share the full steps to set up the environment and build/run the application?

I am not sure how the code I attached would be an OpenCV issue. I am using OpenCV to decode the video only. After that I implemented a gstreamer recorder to write the video frames in a continuous loop.

I fixed one issue in the code, new code is below:

#include <gst/app/gstappsrc.h>
#include <gst/gst.h>
#include <gst/gstinfo.h>

#include <opencv2/core/mat.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>

#include <chrono>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <signal.h>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using namespace std::chrono;

bool running{true};

void signalHandler(sigset_t* set)
{
    int sig;
    sigwait(set, &sig);

    running = false;
}

/**
 * Get frames from a video, cv::VideoCapture used for simplicity
 * @param videoPath
 * @return
 */
std::vector<cv::Mat> getFramesFromVideo(const std::string& videoPath)
{
    cv::VideoCapture cap(videoPath);

    std::vector<cv::Mat> frames;

    cv::Mat image;

    while (cap.read(image))
    {
        frames.emplace_back(image.clone());
        if(!running)
        {
            break;
        }
    }

    return frames;
}

class Recorder
{
public:
    Recorder(int frameRate, const std::string& name, std::vector<cv::Mat> frames)
    {
        m_frameRate = frameRate;
        m_name = name;
        m_frames = frames;
        m_running = true;
    }

    void startRecorder()
    {
        m_recordThread = std::thread(&Recorder::record, this);
    }

    void stopRecorder()
    {
        m_running = false;

        if(m_recordThread.joinable())
        {
            m_recordThread.join();
        }
    }

private:
    void record()
    {
        std::ostringstream oss;

        int width = m_frames[0].cols;
        int height = m_frames[0].rows;

        oss << "appsrc name=mysource ! video/x-raw,width=" << width << ",height=" << height << ",format=(string)BGR"
            << " ! videoconvert ! video/x-raw, format=I420, framerate=15/1"
            << " ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24"
            << " ! video/x-h264,level=(string)5.2,stream-format=byte-stream"
            << " ! h264parse ! qtmux ! filesink location=";

        // initialize gstreamer
        int argc = 0;
        if (!gst_init_check(&argc, nullptr, nullptr))
        {
            std::cout << m_name << " failed to initialize gstreamer library with gst_init()" << std::endl;
            return;
        }

        GMainLoop *main_loop;
        main_loop = g_main_loop_new(nullptr, FALSE);

        uint32_t microSecPerFrame = static_cast<uint32_t>(1000000.0 / m_frameRate);

        while(m_running)
        {
            // append the file path to the pipeline
            std::string launchString = oss.str();
            launchString += "testvideos/" + getNameFromTime(system_clock::now());

            GError *error = nullptr;
            GstPipeline *gst_pipeline = (GstPipeline *)gst_parse_launch(launchString.c_str(), &error);

            GstElement *appsrc_ = gst_bin_get_by_name(GST_BIN(gst_pipeline), "mysource");
            gst_app_src_set_stream_type(GST_APP_SRC(appsrc_), GST_APP_STREAM_TYPE_STREAM);

            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_PLAYING);
            
            GstClockTime duration = ((double)1 / m_frameRate) * GST_SECOND;
            for(int i = 0; i < m_frames.size(); i++)
            {
                if(!m_running)
                {
                    break;
                }

                GstBuffer *buffer;
                guint size = width * height * 3;
                GstMapInfo map = {0};
                GstFlowReturn ret;

                buffer = gst_buffer_new_allocate(nullptr, size, nullptr);

                gst_buffer_map(buffer, &map, GST_MAP_WRITE);
                // copy cv::Mat data into map
                memcpy(map.data, m_frames[i].data, size);
                gst_buffer_unmap(buffer, &map);

                // set buffer properties before pushing; the buffer must not be
                // touched after gst_buffer_unref()
                GstClockTime timestamp = duration * i;
                GST_BUFFER_DURATION(buffer) = duration;
                GST_BUFFER_OFFSET(buffer) = i;
                GST_BUFFER_PTS(buffer) = timestamp;
                GST_BUFFER_DTS(buffer) = timestamp;

                g_signal_emit_by_name(appsrc_, "push-buffer", buffer, &ret);
                gst_buffer_unref(buffer);

                // sleep for roughly one frame time to act as if we are receiving decoded frames
                std::this_thread::sleep_for(std::chrono::microseconds{microSecPerFrame});
            }

            // close pipeline
            gst_object_unref(appsrc_);  // gst_bin_get_by_name() returned a new reference
            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_NULL);
            gst_object_unref(GST_OBJECT(gst_pipeline));
        }
        g_main_loop_unref(main_loop);
    }

    /**
     * Convert chrono time into a date time string with mp4 appended
     * @param timePoint
     * @return
     */
    std::string getNameFromTime(const time_point<system_clock> &timePoint)
    {
        time_t now = std::chrono::system_clock::to_time_t(timePoint);
        struct tm tstruct{};
        char buf[80];
        tstruct = *localtime(&now);
        strftime(buf, sizeof(buf), "%Y-%m-%d_%H-%M-%S", &tstruct);
        return m_name + "_" + std::string(buf) + ".mp4";
    }

    std::vector<cv::Mat> m_frames;

    int m_frameRate;
    std::thread m_recordThread;
    std::string m_name;
    bool m_running;
};

int main(int argc, char** argv)
{
    std::thread signalThread;
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, SIGQUIT);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);

    pthread_sigmask(SIG_BLOCK, &set, nullptr);

    signalThread = std::thread(&signalHandler, &set);

    auto frames = getFramesFromVideo("video.mp4");
    
    int numRecorders = 8;

    std::vector<Recorder> recorders;

    // create 8 recorders each with the same cv::Mats
    for(int i = 0; i < numRecorders; i++)
    {
        recorders.emplace_back(15, "recorder" + std::to_string(i), frames);
    }

    for(auto &recorder : recorders)
    {
        recorder.startRecorder();
    }

    signalThread.join();

    for(auto &recorder : recorders)
    {
        recorder.stopRecorder();
    }

    return 0;
}

My CMakeLists.txt for environment:

cmake_minimum_required(VERSION 3.10)

project(recordTest VERSION 1.0
                   DESCRIPTION ""
                   LANGUAGES CXX)

set(CMAKE_CXX_STANDARD  11)
set(CMAKE_CXX_STANDARD_REQUIRED YES)
set(CMAKE_CXX_EXTENSIONS NO)
set(CMAKE_POSITION_INDEPENDENT_CODE TRUE)

find_package(OpenCV REQUIRED)
find_package(PkgConfig)

pkg_check_modules(GST REQUIRED gstreamer-1.0>=1.4
                               gstreamer-sdp-1.0>=1.4
                               gstreamer-video-1.0>=1.4
                               gstreamer-app-1.0>=1.4)

add_executable(recorderTest recorderMain.cpp)

set (COMMON_OPTIONS -Wall -Wextra -Werror=format-security -pedantic -pipe -fstack-protector -D_GLIBCXX_ASSERTIONS)
set (DEBUG_OPTIONS -O0 -ggdb -DDEBUG -fno-omit-frame-pointer -fno-optimize-sibling-calls ${COMMON_OPTIONS})
set (RELEASE_OPTIONS -O2 -D_FORTIFY_SOURCE=2 ${COMMON_OPTIONS})

target_include_directories(recorderTest
    PRIVATE
        ${OpenCV_INCLUDE_DIRS}
        ${GST_INCLUDE_DIRS}
)

target_link_libraries(recorderTest
    PRIVATE
        ${OpenCV_LIBS}
        ${GST_LIBRARIES}
        pthread
)

target_compile_options(recorderTest PRIVATE "$<$<CONFIG:DEBUG>:${DEBUG_OPTIONS}>")
target_compile_options(recorderTest PRIVATE "$<$<CONFIG:RELEASE>:${RELEASE_OPTIONS}>")

target_link_options(recorderTest PRIVATE "-Wl,--disable-new-dtags")

Compiled with gcc 7.4.0

To build:

  1. mkdir build
  2. cd build
  3. cmake ..
  4. make

To run:

  1. Navigate to root of project directory
  2. mkdir testvideos
  3. Copy test video to current directory
  4. ./build/recorderTest

The issue mentioned originally should occur within a few minutes. I reproduced it with this standalone project using the OpenCV and GStreamer versions that were shipped with Jetpack 4.2.

Hi,
Video decoding through OpenCV APIs does not utilize the hardware decoder on Xavier. Are you able to use omxh264dec/omxh265dec in gstreamer or NvVideoDecoder in tegra_multimedia_api?

The example above is not supposed to use the decoder. However, OpenCV does support hardware-accelerated gstreamer pipelines if you pass in a manual pipeline using appsink and appsrc. The decoding is not the important part of the code I provided above. It shows a video recorder implemented using gstreamer with an omxh264enc. The intent was to provide a simple, minimal sample that can reproduce the encoder problem I am running into. Using just gstreamer to record frames, I am able to reproduce the bug very quickly.

Hi,
OpenCV 3.3.1 in Jetpack 4.2 is not built with ‘-D WITH_GSTREAMER=ON’, so it may not work well with gstreamer. Please try to re-install OpenCV with the scripts at JEP/script at master · AastaNV/JEP · GitHub

It still looks to be an issue in OpenCV because if it were a general issue, it should also be reproduced with the videotestsrc plugin and the sample application at
[url]https://devtalk.nvidia.com/default/topic/1026106/jetson-tx1/usage-of-nvbuffer-apis/post/5219225/#5219225[/url]

@DaneLLL The sample application will not reproduce the issue, because it doesn’t create eight separate encoders in the same process.

I agree with @nr94, the OpenCV usage seems to be beside the point (apart from the fact that he uses argument copying of the entire arrays of frames, which tickles my “gee that’s a lot of memcpy()” reflex, but probably isn’t a problem, either.)

He could probably write a program that uses videotestsrc to generate the set of frames (or, for that matter, just generate random noise into the video frames instead) and still get the same problem.

@nr94: I suggest you do that! Just memset() the frames to a fixed value, 0x01 for the first frame, 0x02 for the second frame, …
You don’t even need to copy the frames to do that; just memset() directly into a buffer when it’s time to generate a frame.

Hi,
We can run four encoding branches in a single process successfully through gst-launch-1.0:

$ gst-launch-1.0 videotestsrc num-buffers=9000 is-live=true ! video/x-raw,width=1920,height=1080 ! omxh264enc ! filesink location=a.h264 videotestsrc num-buffers=9000 is-live=true pattern=1 ! video/x-raw,width=1920,height=1080 ! omxh264enc ! filesink location=b.h264 videotestsrc num-buffers=9000 is-live=true pattern=2 ! video/x-raw,width=1920,height=1080 ! omxh264enc ! filesink location=c.h264 videotestsrc num-buffers=9000 is-live=true pattern=3 ! video/x-raw,width=1920,height=1080 ! omxh264enc ! filesink location=d.h264

Each source is 1080p30. It is 5 minutes for 9000 frames. The memory usage looks stable:

root@nvidia-desktop:/home/nvidia# cat /sys/kernel/debug/nvmap/iovmm/clients
CLIENT                        PROCESS      PID        SIZE
user                   gst-launch-1.0    18763     245612K
user                           compiz     8133      70528K
user                             Xorg     7689     149056K
user                      gnome-shell     6307      70592K
user                             Xorg     6050      42816K
user                   nvargus-daemon     5754         76K
total                                              578680K

@snarky
With OpenCV, the frames should never be copied unless they are explicitly cloned. Passing a cv::Mat around is similar to using a shared_ptr: it just increments a reference counter. I could probably use videotestsrc in the code instead of loading from a video.
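As an analogy for that cv::Mat behaviour, here is a small sketch using std::shared_ptr: copying the handle only bumps a reference count, the pixel data itself is shared (cv::Mat::clone() is what forces a deep copy).

```cpp
#include <memory>
#include <vector>

using PixelBuffer = std::shared_ptr<std::vector<unsigned char>>;

// Make `copies` shallow copies of one buffer and report the reference count.
long shallowCopyCount(int copies)
{
    PixelBuffer data = std::make_shared<std::vector<unsigned char>>(1920 * 1080 * 3);
    std::vector<PixelBuffer> handles(copies, data);  // no pixel data duplicated
    return data.use_count();                         // 1 owner + `copies` handles
}
```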

@DaneLLL
On the note of OpenCV, I typically use version 4.x of OpenCV that is compiled with full gstreamer and cuda support.

Back to the main issue, I would expect that we would be able to run multiple encoders in the same process just fine. However, the issue seems to occur when looping short videos and creating new encoders many times in the same process. That is what I was trying to show in the code I provided.

I will try to create a gst-launch pipeline that does this.

Hello, I have removed everything related to OpenCV and decoding. I have been able to reproduce the issue using the pipeline in the code below, which uses videotestsrc. The error happens within a few minutes.

#include <gst/app/gstappsrc.h>
#include <gst/gst.h>
#include <gst/gstinfo.h>

#include <chrono>
#include <cstdint>
#include <iostream>
#include <signal.h>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using namespace std::chrono;


bool running{true};

void signalHandler(sigset_t* set)
{
    int sig;
    sigwait(set, &sig);

    running = false;
}

class Recorder
{
public:
    Recorder(int frameRate, const std::string& name)
    {
        m_frameRate = frameRate;
        m_name = name;
        m_running = true;
    }

    void startRecorder()
    {
        m_recordThread = std::thread(&Recorder::record, this);
    }

    void stopRecorder()
    {
        m_running = false;

        if(m_recordThread.joinable())
        {
            m_recordThread.join();
        }
    }

private:
    void record()
    {
        std::ostringstream oss;

        oss << "videotestsrc num-buffers=150 is-live=true pattern=2"
            << " ! video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1"
            << " ! videoconvert ! video/x-raw, format=(string)I420"
            << " ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24"
            << " ! video/x-h264,level=(string)5.2,stream-format=byte-stream"
            << " ! h264parse ! qtmux ! filesink location=";

        // initialize gstreamer
        int argc = 0;
        if (!gst_init_check(&argc, nullptr, nullptr))
        {
            std::cout << m_name << " failed to initialize gstreamer library with gst_init()" << std::endl;
            return;
        }

        GMainLoop *main_loop;
        main_loop = g_main_loop_new(nullptr, FALSE);

        uint32_t microSecPerFrame = static_cast<uint32_t>(1000000.0 / m_frameRate);

        while(m_running)
        {
            // append the file path to the pipeline
            std::string launchString = oss.str();
            launchString += "testvideos/" + getNameFromTime(system_clock::now());

            GError *error = nullptr;
            GstPipeline *gst_pipeline = (GstPipeline *)gst_parse_launch(launchString.c_str(), &error);

            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_PLAYING);
            
            for(int i = 0; i < 150; i++)
            {
                if(!m_running)
                {
                    break;
                }

                // sleep for roughly one frame time to act as if we are receiving decoded frames
                std::this_thread::sleep_for(std::chrono::microseconds{microSecPerFrame});
            }

            // close pipeline
            gst_element_set_state((GstElement *)gst_pipeline, GST_STATE_NULL);
            gst_object_unref(GST_OBJECT(gst_pipeline));
        }
        g_main_loop_unref(main_loop);
    }

    /**
     * Convert chrono time into a date time string with mp4 appended
     * @param timePoint
     * @return
     */
    std::string getNameFromTime(const time_point<system_clock> &timePoint)
    {
        time_t now = std::chrono::system_clock::to_time_t(timePoint);
        struct tm tstruct{};
        char buf[80];
        tstruct = *localtime(&now);
        strftime(buf, sizeof(buf), "%Y-%m-%d_%H-%M-%S", &tstruct);
        return m_name + "_" + std::string(buf) + ".mp4";
    }

    int m_frameRate;
    std::thread m_recordThread;
    std::string m_name;
    bool m_running;
};

int main(int argc, char** argv)
{
    std::thread signalThread;
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, SIGQUIT);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);

    pthread_sigmask(SIG_BLOCK, &set, nullptr);

    signalThread = std::thread(&signalHandler, &set);

    int numRecorders = 8;

    std::vector<Recorder> recorders;

    // create the recorders
    for(int i = 0; i < numRecorders; i++)
    {
        recorders.emplace_back(15, "recorder" + std::to_string(i));
    }

    for(auto &recorder : recorders)
    {
        recorder.startRecorder();
    }

    signalThread.join();

    for(auto &recorder : recorders)
    {
        recorder.stopRecorder();
    }

    return 0;
}

Hi nr94,
We can reproduce the issue in running the test code. It hits the error and a segmentation fault on r32.1/Xavier:

Framerate set to : 15 at NvxVideoEncoderSetParameterH264: Profile = 100, Level = 52 
NvMMLiteVideoEncDoWork: Surface resolution (1 x 3145728) smaller than encode resolution (1920 x 1080)
VENC: NvMMLiteVideoEncDoWork: 4207: BlockSide error 0x4
Event_BlockError from 330BlockAvcEnc : Error code - 4
Sending error event from 330BlockAvcEnc

Certain issues are also hit in running nvv4l2h264enc. Since we are deprecating the omx plugins, we will focus on the v4l2 plugins. We will update once we make progress. Thanks for the test code.

DaneLLL,

It is good to know that the issue is reproducible on your side. Do you have any suggestions for workarounds in the short term? I am supposed to demonstrate my application with the Xavier near the end of the month, but I am left without a path to encode video clips. The h264/h265 omx plugins have the issue mentioned above, and the nvv4l2 plugins have memory leaks.

Is there anything I can try short of making a new process for every single video I record?
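One thing I may experiment with myself, since the failure seems to happen during pipeline creation: serializing pipeline creation/teardown across all recorder threads with one process-wide mutex, in case the race is in concurrent encoder setup. This is an untested idea; the instrumented stand-in below just demonstrates the exclusion, and in record() the gst_parse_launch()/gst_element_set_state() calls would sit inside the lock while frame pushing stays outside it.

```cpp
#include <atomic>
#include <mutex>

std::mutex g_pipelineLifecycleMutex;   // shared by every Recorder thread (hypothetical)
std::atomic<int> g_insideSetup{0};     // instrumentation only

// Stand-in for "create pipeline + set PLAYING"; returns how many threads were
// inside the critical section at once (always 1 if serialization holds).
int createPipelineSerialized()
{
    std::lock_guard<std::mutex> lock(g_pipelineLifecycleMutex);
    int concurrent = ++g_insideSetup;
    // ... gst_parse_launch(...) and gst_element_set_state(..., GST_STATE_PLAYING) here ...
    --g_insideSetup;
    return concurrent;
}
```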

Hi nr94,
We will debug this usecase.

Is there a consistent way to reproduce it when running omxh264enc? We can reproduce it, but it sometimes takes 5-10 minutes and sometimes 30+ minutes. It looks to be a race condition and happens randomly. If you have any tips to reproduce it quickly, please share them with us.

The way that I have been able to increase the frequency of the error is to either increase the total number of recorders or to reduce the length of each video clip. To start, I would recommend increasing numRecorders in the example code, so maybe instead of

int numRecorders = 8;

try

int numRecorders = 16;

To reduce the length of each recording, just change the number of buffers in the pipeline and in the for loop that sleeps. Maybe instead of 150, try 75 buffers. To start with, I would just try increasing the number of recorders.

Hi nr94,
Please note it is still an issue on r32.2.
For multiple encoding, we have verified running one encoding pipeline per process, as in the script below:

#!/bin/bash

i=1
while [ "$i" != "700" ]
do
    echo "loop" $i
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_1_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_2_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_3_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_4_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_5_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_6_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_7_$i.mp4 &
    gst-launch-1.0 videotestsrc num-buffers=150 is-live=true pattern=2 ! 'video/x-raw,width=1920,height=1080,format=(string)BGR,framerate=(fraction)15/1' ! videoconvert ! 'video/x-raw, format=(string)NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc preset-level=3 profile=8 insert-sps-pps=1 qp-range=8,24:8,24:8,24 ! 'video/x-h264,level=(string)5.2,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testvideos/test_8_$i.mp4 &
    sleep 20
    i=$(($i+1))
done

Running multiple encoding threads in a single process is not a verified case. We are checking how to support it, but it may take a while. For stability on r32.1 and r32.2, we would suggest running one encoding thread in one process.
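If the application itself needs to drive this, a hedged C++ sketch of the one-pipeline-per-process approach: spawn a child process per recording and wait for it. runPipelineProcess() is illustrative, not an existing API; in practice argv would be gst-launch-1.0 plus the pipeline description.

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Fork/exec `path` with `argv`; return the child's exit status, or -1 on error.
int runPipelineProcess(const char* path, char* const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;              // fork failed
    if (pid == 0)
    {
        execv(path, argv);      // child becomes the encoder process
        _exit(127);             // reached only if execv failed
    }
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)
        return -1;              // parent: wait for the recording to finish
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

This keeps each encoder's OMX state isolated in its own address space, which matches the gst-launch-1.0 configuration that did not show the error.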

Hi,
Please try attachment in
https://devtalk.nvidia.com/default/topic/1064435/jetson-agx-xavier/using-nvv4l2h264enc-encoder-memory-increases-when-releasing-pipeline-and-setting-up-a-new-one/post/5412793/#5412793