How to draw lanes on the video

Is there a plugin that supports drawing lanes on the video?
Or how can I write a plugin to draw lanes?

Hi,
Do you mean detecting lanes and then drawing them? Or do you already have a model to detect the lanes and want to access the video frames to draw the result?

Hi DaneLLL. I already have a plugin to detect the lanes; I just want to access the video frames to draw the result.

Hi,
Please refer to [Implementing a Custom Plugin] in the documentation. You can enable the sample element by putting the following into the config file:

[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
unique-id=15

Video frames can be accessed via the NvBuffer APIs. The sample function is in /home/nvidia/gst-dsexample_sources/gstdsexample.cpp:

static GstFlowReturn
get_converted_mat (GstDsExample * dsexample, int in_dmabuf_fd,
    NvOSD_RectParams * crop_rect_params, cv::Mat & out_mat, gdouble & ratio);
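A rough sketch (an illustration, not code from the sample) of calling it from the plugin's transform function, assuming in_dmabuf_fd has already been extracted from the input GstBuffer. Note that get_converted_mat() fills out_mat with a CPU-side copy of the surface, so OpenCV drawing on it modifies the copy, not the original NVMM buffer:

NvOSD_RectParams rect_params = {0};
rect_params.left = 0;
rect_params.top = 0;
rect_params.width = dsexample->processing_width;
rect_params.height = dsexample->processing_height;

cv::Mat frame;
gdouble ratio = 1.0;
if (get_converted_mat (dsexample, in_dmabuf_fd, &rect_params,
        frame, ratio) == GST_FLOW_OK) {
    // draws on the CPU copy only
    cv::line (frame, cv::Point (0, 100), cv::Point (200, 100),
        cv::Scalar (0, 0, 255), 3);
}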

For processing via CUDA, you can check

tegra_multimedia_api/samples/common/algorithm/cuda

For processing via CPU, you can check

tegra_multimedia_api/samples/10_camera_recording

The tegra_multimedia_api samples can be installed via JetPack, and the documentation is here.

Hi.
I want to write a customized nvosd plugin to draw lane lines, but none of the changes I make in gst-dsexample_sources, such as drawing lines or text, show up in the video displayed by nveglglessink.

Hi,
Please check the demonstration in 10_camera_recording:

if (DO_CPU_PROCESS) {
    NvBufferParams par;
    NvBufferGetParams (fd, &par);
    void *ptr_y;
    uint8_t *ptr_cur;
    int i, j, a, b;
    NvBufferMemMap(fd, Y_INDEX, NvBufferMem_Write, &ptr_y);
    NvBufferMemSyncForCpu(fd, Y_INDEX, &ptr_y);
    ptr_cur = (uint8_t *)ptr_y + par.pitch[Y_INDEX]*START_POS + START_POS;

    // overwrite some pixels to put an 'N' on each Y plane
    // scan array_n to decide which pixel should be overwritten
    for (i = 0; i < FONT_SIZE; i++) {
        for (j = 0; j < FONT_SIZE; j++) {
            a = i >> SHIFT_BITS;
            b = j >> SHIFT_BITS;
            if (array_n[a][b])
                (*ptr_cur) = 0xff; // white color
            ptr_cur++;
        }
        // reposition to the start of the next glyph row
        ptr_cur = (uint8_t *)ptr_y + par.pitch[Y_INDEX]*(START_POS + i + 1) + START_POS;
    }
    NvBufferMemSyncForDevice (fd, Y_INDEX, &ptr_y);
    NvBufferMemUnMap(fd, Y_INDEX, &ptr_y);
}

The source is YUV420 and the code modifies the Y plane. In ds-example the surfaces are RGBA.
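A minimal sketch of the RGBA analogue (my illustration, not from the sample; START_POS and LINE_LEN are placeholder values, and the R/G/B/A byte order is an assumption about the surface format):

void *ptr = NULL;
NvBufferParams par;
NvBufferGetParams (fd, &par);
NvBufferMemMap (fd, 0, NvBufferMem_Write, &ptr);
NvBufferMemSyncForCpu (fd, 0, &ptr);

// draw a one-pixel-high horizontal red line, 4 bytes per RGBA pixel
uint8_t *row = (uint8_t *)ptr + par.pitch[0] * START_POS;
for (int x = 0; x < LINE_LEN; x++) {
    uint8_t *px = row + 4 * (START_POS + x);
    px[0] = 0xff; // R
    px[1] = 0x00; // G
    px[2] = 0x00; // B
    px[3] = 0xff; // A: keep opaque, otherwise the pixel may render black
}

NvBufferMemSyncForDevice (fd, 0, &ptr);
NvBufferMemUnMap (fd, 0, &ptr);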

How do I draw colorful lines? My code is below. If I save in_mat with imwrite, the line color in the picture is what I wanted, but on the video the line is always black.

int in_dmabuf_fd = 0;
NvBufferParams buf_params;
gpointer mapped_ptr = NULL;
GstMapInfo in_map_info;
memset(&in_map_info, 0, sizeof(in_map_info));
gst_buffer_map(inbuf, &in_map_info, GST_MAP_WRITE);
ExtractFdFromNvBuffer(in_map_info.data, &in_dmabuf_fd);
NvBufferGetParams(in_dmabuf_fd, &buf_params);
NvBufferMemMap(in_dmabuf_fd, 0, NvBufferMem_Write, &mapped_ptr);
NvBufferMemSyncForCpu(in_dmabuf_fd, 0, &mapped_ptr);
cv::Mat in_mat = cv::Mat(video_info.height,video_info.width, CV_8UC4, mapped_ptr, buf_params.pitch[0]);
cv::Point point1;
point1.x = 0;
point1.y = 100;
cv::Point point2;
point2.x = 100;
point2.y = 100;
cv::line(in_mat, point1, point2, cv::Scalar(blue, green, red), 3);
NvBufferMemUnMap(in_dmabuf_fd, 0, &mapped_ptr);
NvReleaseFd(in_dmabuf_fd);
gst_buffer_unmap(inbuf, &in_map_info);

Hi,
Can you try calling NvBufferMemSyncForDevice() before unmapping the buffer? It is missing from your code.
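For reference, the tail of the snippet above with the missing call added would look like this:

// sync CPU writes back to the device before unmapping
NvBufferMemSyncForDevice(in_dmabuf_fd, 0, &mapped_ptr);
NvBufferMemUnMap(in_dmabuf_fd, 0, &mapped_ptr);
NvReleaseFd(in_dmabuf_fd);
gst_buffer_unmap(inbuf, &in_map_info);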

I'm developing on the Jetson TX2 board.
I'm taking video from an IP camera, drawing some stuff on the frames, and then sending it out as a video stream.
The above posts helped me a lot! The code draws lines in different colors on the local HDMI display, but when the video is streamed out, the lines are always black, no matter what I change the Scalar values to. Could someone please help?

Below is a snippet of my code

int main() {
    ostringstream launch_stream;
    GstAppSinkCallbacks callbacks = {osdsink_eos, NULL, new_frame_cb};

    launch_stream
    << "rtspsrc location=rtsp://admin:admin@192.168.217.246 ! decodebin ! "
    << "nvvidconv ! "
    << "video/x-raw(memory:NVMM), format=RGBA, width="<< w <<", height="<< h <<" ! tee name=t "
    << "t. ! appsink name=osdsink " 
    << "t. ! nvvidconv ! video/x-raw(memory:NVMM), format=I420, width="<< w <<", height="<< h << " ! omxh264enc ! tee name=udp_sink "
    << "udp_sink. ! video/x-h264,stream-format=byte-stream ! h264parse ! rtph264pay pt=96 ! udpsink host=192.168.217.7 port=7000 sync=false async=false "
    << "udp_sink. ! video/x-h264,stream-format=byte-stream ! h264parse ! rtph264pay pt=96 ! udpsink host=192.168.217.7 port=8000 sync=false async=false ";
    launch_string = launch_stream.str();
...
}

static GstFlowReturn new_frame_cb(GstAppSink *osdsink, gpointer user_data) {
    GstSample *sample = NULL;
    NvBufferParams parm;
    int ret = -1;

    g_signal_emit_by_name (osdsink, "pull-sample", &sample);

    if (sample)
    {
        GstBuffer *buffer = NULL;
        GstCaps   *caps   = NULL;
        GstMapInfo map    = {0};
        int dmabuf_fd = 0;

        caps = gst_sample_get_caps (sample);
        if (!caps)
        {
            printf("could not get snapshot format\n");
        }
        gst_caps_get_structure (caps, 0);
        buffer = gst_sample_get_buffer (sample);
        gst_buffer_map (buffer, &map, GST_MAP_READ);

        ExtractFdFromNvBuffer((void *)map.data, &dmabuf_fd);

        ret = NvBufferGetParams(dmabuf_fd, &parm);
        if (ret != 0) {
            printf ("**** error NvBufferGetParams()\n");
        }

        void *psrc_data = NULL;
        unsigned int plane = 0;
        ret = NvBufferMemMap(dmabuf_fd, plane, NvBufferMem_Read_Write, &psrc_data);

        if (ret == 0) {
            NvBufferMemSyncForCpu(dmabuf_fd, plane, &psrc_data);

            // wrap the mapped plane in a cv::Mat; cv::Mat takes (rows, cols)
            cv::Mat in_mat = cv::Mat(1440, 960, CV_8UC4, psrc_data, parm.pitch[0]);
            cv::Point point1;
            point1.x = 0;
            point1.y = 100;
            cv::Point point2;
            point2.x = 200;
            point2.y = 300;
            cv::line(in_mat, point1, point2, cv::Scalar(0, 255, 255), 3);
            cv::Point point3;
            point3.x = 300;
            point3.y = 100;
            cv::line(in_mat, point2, point3, cv::Scalar(0, 0, 255), 3);
            cv::Point point4;
            point4.x = 2800;
            point4.y = 100;
            cv::line(in_mat, point3, point4, cv::Scalar(255, 0, 0), 3);
        }

        NvReleaseFd(dmabuf_fd);
        frame_count++;

        gst_buffer_unmap(buffer, &map);
        gst_sample_unref (sample);
    }

    return GST_FLOW_OK;
}

Edit: I have discovered that setting the alpha to 255 fixes my problem. :-) Hope this helps someone else in the future.
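Concretely, that means passing a four-component Scalar so the alpha byte of the RGBA surface is written as opaque, e.g.:

cv::line(in_mat, point1, point2, cv::Scalar(0, 255, 255, 255), 3); // 4th value is alpha

With a three-component Scalar the fourth channel defaults to 0, so the drawn pixels are fully transparent and can come out black after conversion or encoding.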

Hi mxy,
You should also call NvBufferMemSyncForDevice() and NvBufferMemUnMap() before releasing the fd:

+ NvBufferMemSyncForDevice(dmabuf_fd, plane, &psrc_data);
+ NvBufferMemUnMap(dmabuf_fd, plane, &psrc_data);
NvReleaseFd(dmabuf_fd);
frame_count++;

I have added the lines that DaneLLL suggested.

Now, I want to put a graphic on top of my video.

I have tried adding this snippet into the new_frame_cb() function I posted above.
I expect this to open another window and display my heart bmp on top of the video.

Mat src1, src2;
src1 = Mat(1440, 960, CV_8UC4, psrc_data, parm.pitch[0]);
//src1 = imread("output028.jpg", 1);
src2 = imread("heart.bmp", 1);
if (!src1.data)
{ cout << "Error loading src1" << endl; }
if (!src2.data)
{ cout << "Error loading src2" << endl; }
if (src1.size <= src2.size)
{ cout << "Error First Image should be larger than Second Image" << endl; }
src2.copyTo(src1(cv::Rect(10, 10, src2.cols, src2.rows)));

namedWindow("Display window", WINDOW_AUTOSIZE); // create a window for display
imshow("Display window", src1);                 // show our image inside it
waitKey(0);                                     // wait for a keystroke in the window

If I set src1 to a static image (output028.jpg) in the folder, then I see the heart displayed in the corner of the image (in a separate window).

If I set src1 to be the virtual memory pointed to by the dmabuf, I get an error message:
[INFO] (NvEglRenderer.cpp:109) Setting Screen width 1440 height 960
NvMMLiteOpen : Block : BlockType = 261
TVMR: NvMMLiteTVMRDecBlockOpen: 7647: NvMMLiteBlockOpen
NvMMLiteBlockCreate : Block : BlockType = 261
TVMR: cbBeginSequence: 1179: BeginSequence 1920x1088, bVPR = 0
TVMR: LowCorner Frequency = 180000
TVMR: cbBeginSequence: 1529: DecodeBuffers = 5, pnvsi->eCodec = 4, codec = 0
TVMR: cbBeginSequence: 1600: Display Resolution : (1920x1080)
TVMR: cbBeginSequence: 1601: Display Aspect Ratio : (1920x1080)
TVMR: cbBeginSequence: 1669: ColorFormat : 5
TVMR: cbBeginSequence:1683 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1809: SurfaceLayout = 3
TVMR: cbBeginSequence: 1902: NumOfSurfaces = 12, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 1904: BeginSequence ColorPrimaries = 2, TransferCharacteristics = 2, MatrixCoefficients = 2
Allocating new output: 1920x1088 (x 12), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 1920, nFrameHeight = 1088
Framerate set to : 0 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
0:00:02.959404512 8355 0x66e450 WARN omxvideoenc gstomxvideoenc.c:1860:gst_omx_video_enc_set_format: Error setting temporal_tradeoff 0 : Vendor specific error (0x00000001)
Error First Image should be larger than Second Image
OpenCV Error: Assertion failed (channels() == ((((dtype) & ((512 - 1) << 3)) >> 3) + 1)) in copyTo, file /home/nvidia/build-opencv/opencv/modules/core/src/copy.cpp, line 260
terminate called after throwing an instance of ‘cv::Exception’
what(): /home/nvidia/build-opencv/opencv/modules/core/src/copy.cpp:260: error: (-215) channels() == ((((dtype) & ((512 - 1) << 3)) >> 3) + 1) in function copyTo


Could someone please advise me why this could be happening? Thanks!

Not sure at all, but doesn't this line look suspect?

Framerate set to : 0 at NvxVideoEncoderSetParameter

Do you have a framerate set?

[EDIT: The main problem may be that you have a 4-channel src1 (RGBA?) and a 3-channel src2 (RGB?), or different sizes, …]

@Honey_Patouceul,

Good point! I checked some of my error outputs, and sure enough, my src2.size is larger than src1.size!

I don't know how the small heart.bmp could be 'larger' than a 1440x960 frame, though. It must have something to do with the CV_8UC4 parameter. Could anyone tell me, or point me to a document that explains, which parameter I SHOULD be using?

Thanks!

For questions about OpenCV, you may go to http://answers.opencv.org/questions/

So I figured out the problem. It was just as Honey had said: the image I'm trying to load must be encoded with an alpha channel. I was able to fix it with this line:

src2=imread("heart.bmp",CV_LOAD_IMAGE_UNCHANGED);

Another thing to note is that most graphics will be saved as RGB, but OpenCV reads files in as BGR, so the graphic will have its colors flipped.
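If the colors come out flipped, one way to handle it (a sketch, not from the thread; it assumes the target surface is RGBA) is to convert the loaded image before copying it onto the frame:

// load with alpha preserved, then match the surface's channel order
cv::Mat heart = cv::imread("heart.bmp", CV_LOAD_IMAGE_UNCHANGED);
cv::Mat heart_rgba;
if (heart.channels() == 3)
    cv::cvtColor(heart, heart_rgba, CV_BGR2RGBA);  // also adds an opaque alpha channel
else
    cv::cvtColor(heart, heart_rgba, CV_BGRA2RGBA); // just swaps R and B
heart_rgba.copyTo(src1(cv::Rect(10, 10, heart_rgba.cols, heart_rgba.rows)));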

OpenCV questions can also be addressed to the OpenCV IRC chat and Slack channel.
References:
https://join.slack.com/t/open-cv/shared_invite/enQtNDEzMDU2MjIzNDYxLTNhNzQzNWFjMjYwOWRmZmVmNDJiYjBjZjRhOGE3N2Y4OTI2OTFmY2Y1MGVmMzM5MDU3YmEzYjVkOGU0OGMzMWU
http://webchat.freenode.net/ #opencv