questions regarding test application 4

Hey,

I am trying to modify the program test application 4 so it fits my use case. I have some questions while modifying it:

  • Is there a way I can add the whole frame data (or part of it, such as the data within a bounding box) as part of the metadata and send it to the analytics server? I found that when the NvDsEventMsgMeta is generated, the frame information is already in metadata form.
  • Currently, I am viewing all of the metadata sent to the analytics server through Kibana. I am trying to run a secondary classifier in the analytics server. How should I structure the plugins so that I can grab the frame information from the metadata and apply the secondary detection algorithm on the x86 DeepStream platform?
  • Thank you very much for your assistance on my problem!

    You can refer to this, from the README:
    Generating custom metadata for different type of objects:
    In addition to common fields provided in NvDsEventMsgMeta structure, user can
    also create custom objects and attach to buffer as NVDS_META_EVENT_MSG metadata.
    To do that NvDsEventMsgMeta provides "extMsg" and "extMsgSize" fields. User can
    create custom structure, fill that structure and assign the pointer of that
    structure as "extMsg" and set the "extMsgSize" accordingly.
    If custom object contains fields that can't be simply mem copied then user should
    also provide function to copy and free those objects.

    Refer generate_event_msg_meta() to know how to use "extMsg" and "extMsgSize"
    fields for custom objects and how to provide copy/free function and attach that
    object to buffer as metadata.

    Thanks for your reply! Sorry for not making my question very clear. The part that I get stuck on is how I can get access to the frame data.

    Inside the function "osd_sink_pad_buffer_probe", the NvDsEventMsgMeta is instantiated and then sent to the analytics server through the Kafka broker with the msgconverter plugin.

    However, if you take a closer look inside the function "osd_sink_pad_buffer_probe", the double for loop iterates through all of the frames and the objects in each frame. The NvDsBatchMeta contains only the metadata of the frames; it doesn't contain any frame data. Where can I get the frame data and pass it to the function?

    It would also be great if you could take a look at my second problem. If I get the frame in the analytics server, is there a way I can feed it into DeepStream and run a secondary classifier there?

    Hi,
    I am not sure whether the frame data you mentioned is pixel data. Refer to these documents; hope they're helpful.
    https://docs.nvidia.com/metropolis/deepstream/4.0/migration-guide/index.html#page/DeepStream_Migration_Guide/DeepStream_GStreamer_migration.html#wwpID0E0QB0HA
    https://docs.nvidia.com/metropolis/deepstream/4.0/migration-guide/index.html#page/DeepStream_Migration_Guide/DeepStream_metadata_migration.html#wwpID0E0BB0HA

    Yes, pixel data. Is that doable?

    osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
    {
    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsFrameMeta *frame_meta = NULL;

    This buf is the batched buffer (an NvBufSurface).
    You can use the gst_buffer_map API to get a pointer to the NvBufSurface,
    and inside it, surfaceList has pointers to the individual buffers.

    Thank you very much amycao! I will for sure look into that more!

    Currently, the osd_sink_pad_buffer_probe function sends a broker message for the first object every 30 frames. I want to modify it so that a broker message is sent when a new object is detected in the video. I enabled the tracker module and got the unique ID for each of the objects. I also implemented a HashSet to record the unique IDs. Is there a way I can pass the HashSet to the callback function osd_sink_pad_buffer_probe as an argument?

    I would rather not use a global pointer, so I have to figure out a way to instantiate the hash set inside the main function and pass its pointer into the callback function osd_sink_pad_buffer_probe.

    Also, what are your thoughts on the second question that I posted earlier? Is there a way I can run a DeepStream secondary classifier (SGIE) on the analytics server using the pixel data (of a frame) received in the metadata?

    Thank you again for your help!

    Hi Guys,

    I am trying to send frame data along with metadata to a web server. Kindly let me know how to extract the frame data and send it to the server along with the metadata. I am using libcurl to achieve this.

    Thanks.

    For your second question: we do not support this for now.

    Hi Guys,

    I am trying to access the frame buffer exactly the way suggested above. To start with, I am just trying to access the Y channel. However, I get a segmentation fault. The input resolution of the frame is 1280*720. I am using the deepstream-app sample and trying to access the frame after tracking is done. Please find my code below:

    /* Walk the Y plane row by row, honoring the row stride (pitch). */
    void parse_ychannel_row_major(uint32_t width, uint32_t height, const uint8_t *Y, uint32_t Y_stride)
    {
    	int x, y;
    	for (y = 0; y < height; ++y)
    	{
    		const uint8_t *y_row_ptr = Y + y * Y_stride;

    		for (x = 0; x < width; ++x)
    		{
    			uint8_t data = *(y_row_ptr + x);
    			printf("Y Pixel Value : %d\n", data);
    			printf("x : %d , y : %d \n", x, y);
    		}
    	}
    }
    
    static GstPadProbeReturn
    tracking_done_buf_prob (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
    {
      NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
      guint index = bin->index;
      AppCtx *appCtx = bin->appCtx;
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta) {
        NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
        return GST_PAD_PROBE_OK;
      }
    
      GstMapInfo in_map_info;
      NvBufSurface *surface = NULL;
    
      memset (&in_map_info, 0, sizeof (in_map_info));
      if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
      }
    
    #if 1
    
      surface = (NvBufSurface *) in_map_info.data;  
    
      int batch_size= surface->batchSize;
    
      printf("Batch Size : %d",batch_size );
    
    
      for(int i=0; i<batch_size; ++i)
      {
    	uint32_t data_size =  surface->surfaceList[i].dataSize;
    	uint32_t pitch =  surface->surfaceList[i].pitch;
    	uint32_t width =  surface->surfaceList[i].width;
    	uint32_t height =  surface->surfaceList[i].height;
    	NvBufSurfaceLayout layout = surface->surfaceList[i].layout;
    	NvBufSurfacePlaneParams plane_params = surface->surfaceList[i].planeParams;
     	void *dataPtr = surface->surfaceList[i].dataPtr;
    	uint8_t *data = dataPtr;
    	
     	//printf("\nData at first index buffer %d : %d",i,*data);
    
    	uint32_t num_planes=plane_params.num_planes;
    	
    	printf("\nNumber of Planes : %d\n\n", num_planes);
    
    	for(int i=0 ; i < num_planes; ++i)
    	{
    	   uint32_t width=plane_params.width[i];
    	   uint32_t height=plane_params.height[i];
    	   uint32_t pitch=plane_params.pitch[i]; 
               uint32_t offset=plane_params.offset[i];
               uint32_t psize=plane_params.psize[i];
               uint32_t bytes_per_pix=plane_params.bytesPerPix[i];
    		
    	   printf("width of the plane %d : %d\n\n",i,width);
    	   printf("height of the plane %d : %d\n\n",i,height);
    	   printf("pitch of the plane  %d : %d\n\n",i,pitch);
    	   printf("offset of the plane  %d : %d\n\n",i,offset);
    	   printf("psize of the plane  %d : %d\n\n",i,psize);
    	   printf("bytes_per_pix of the plane  %d : %d\n\n\n\n",i,bytes_per_pix);	   
    	}
    
    	printf("Size of the frame buffer : %d\n\n",data_size);
    	printf("Pitch of the frame buffer : %d\n\n",pitch);
    	printf("width of the frame buffer : %d\n\n",width);
    	printf("height of the frame buffer : %d\n\n",height);
    
    	NvBufSurfaceColorFormat color_format= surface->surfaceList[i].colorFormat;
    
            if (color_format == NVBUF_COLOR_FORMAT_NV12)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12 \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_ER)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_ER \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_709)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_709 \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_709_ER)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_709_ER \n");
      }
    
    
       uint32_t frame_width=surface->surfaceList[0].planeParams.width[0];
       uint32_t frame_height=surface->surfaceList[0].planeParams.height[0];
       uint32_t Y_stride=surface->surfaceList[0].planeParams.pitch[0];
       uint32_t UV_stride=surface->surfaceList[0].planeParams.pitch[1];
    
       printf("\n\nframe_width : %d\n\n", frame_width);
       printf("\n\nframe_height : %d\n\n", frame_height);
       printf("\n\nY_stride : %d\n\n", Y_stride);
       printf("\n\nUV_stride : %d\n\n", UV_stride);
    
    
       int offset_calc=frame_width*frame_height;
    
       //Method 1
       const uint8_t *Y=surface->surfaceList[0].dataPtr;
    
       // Method 2 
       //const uint8_t *Y=surface->surfaceList[0].dataPtr + (surface->surfaceList[0].planeParams.psize[0] - offset_calc);
       
       parse_ychannel_row_major(frame_width, frame_height,Y,Y_stride);
    
      exit(0); 
    
    #endif
      /*
       * Output KITTI labels with tracking ID if configured to do so.
       */
      write_kitti_track_output(appCtx, batch_meta);
    
      if (appCtx->primary_bbox_generated_cb)
        appCtx->primary_bbox_generated_cb (appCtx, buf, batch_meta, index);
      return GST_PAD_PROBE_OK;
    }
    

    The output of the parameters of the stream can be found below:

    width of the plane 0 : 1280
    height of the plane 0 : 720
    pitch of the plane  0 : 1280
    offset of the plane  0 : 0
    psize of the plane  0 : 1048576
    bytes_per_pix of the plane  0 : 1
    
    width of the plane 1 : 640
    height of the plane 1 : 360
    pitch of the plane  1 : 1280
    offset of the plane  1 : 1048576
    psize of the plane  1 : 524288
    bytes_per_pix of the plane  1 : 2
    
    Size of the frame buffer : 1572864
    Pitch of the frame buffer : 1280
    width of the frame buffer : 1280
    height of the frame buffer : 720
    
    color_format: NVBUF_COLOR_FORMAT_NV12 
    
    frame_width : 1280
    frame_height : 720
    Y_stride : 1280
    UV_stride : 1280
    

    I get a segmentation fault around the 287th row of the buffer. Kindly point out where I might be going wrong. Is there a possibility of a memory conflict? Since the size of the frame buffer exceeds the actual size of the Y channel, are there any headers or metadata which need to be taken care of?

    Please help me out.

    Thanks.

    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
    }

    Can you add the lines below after g_print ("Error: Failed to map gst buffer\n");
    and try again?

    gst_buffer_unmap (buf, &in_map_info);
    return GST_FLOW_ERROR;

    Hi amycao,

    I still get the segmentation fault. However, I was able to figure out that I need to use NvBufSurfaceMap before I access the buffers on the CPU. Kindly correct me if I am wrong. I do so using the following code:

    int write_jpeg_file( char *filename, unsigned char* rgb_image , int width, int height, int bytes_per_pixel, J_COLOR_SPACE color_space )
    {
    
    	struct jpeg_compress_struct cinfo;
    	struct jpeg_error_mgr jerr;
    	
    	JSAMPROW row_pointer[1];
    	FILE *outfile = fopen( filename, "wb" );
    	
    	if ( !outfile )
    	{
    		printf("Error opening output jpeg file %s\n!", filename );
    		return -1;
    	}
    	cinfo.err = jpeg_std_error( &jerr );
    	jpeg_create_compress(&cinfo);
    	jpeg_stdio_dest(&cinfo, outfile);
    
    
    	cinfo.image_width = width;	
    	cinfo.image_height = height;
    	cinfo.input_components = bytes_per_pixel;
    	cinfo.in_color_space = color_space; //JCS_RGB
    
    	jpeg_set_defaults( &cinfo );
    
    	jpeg_start_compress( &cinfo, TRUE );
    
    	while( cinfo.next_scanline < cinfo.image_height )
    	{
    		row_pointer[0] = &rgb_image[ cinfo.next_scanline * cinfo.image_width *  cinfo.input_components];
    		jpeg_write_scanlines( &cinfo, row_pointer, 1 );
    	}
    
    	jpeg_finish_compress( &cinfo );
    	jpeg_destroy_compress( &cinfo );
    	fclose( outfile );
    
    	return 1;
    }
    
    
    /**
     * Buffer probe function after tracker.
     */
    static GstPadProbeReturn
    tracking_done_buf_prob (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
    {
      NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
      guint index = bin->index;
      AppCtx *appCtx = bin->appCtx;
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta) {
        NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
        return GST_PAD_PROBE_OK;
      }
    
      GstMapInfo in_map_info;
      NvBufSurface *surface = NULL;
    
      memset (&in_map_info, 0, sizeof (in_map_info));
      if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
      }
    
    #if 1
    
      surface = (NvBufSurface *) in_map_info.data;  
    
      int batch_size= surface->batchSize;
    
      printf("Batch Size : %d",batch_size );
    
      int map_result = NvBufSurfaceMap(surface,-1,-1,NVBUF_MAP_READ);	   
      printf("\nMap Result : %d\n", map_result);
    
      int sync_result = NvBufSurfaceSyncForCpu(surface,-1,-1);
      printf("\nSync Result : %d\n", sync_result);
    
      for(int i=0; i<batch_size; ++i)
      {
    	uint32_t data_size =  surface->surfaceList[i].dataSize;
    	uint32_t pitch =  surface->surfaceList[i].pitch;
    	uint32_t width =  surface->surfaceList[i].width;
    	uint32_t height =  surface->surfaceList[i].height;
    	NvBufSurfaceLayout layout = surface->surfaceList[i].layout;
    	NvBufSurfacePlaneParams plane_params = surface->surfaceList[i].planeParams;
     	void *dataPtr = surface->surfaceList[i].dataPtr;
    	uint8_t *data = (uint8_t *) dataPtr;
    	printf("\nSize of the first location : %zu\n", sizeof(data[0]));
    	
    	printf("\n\nData pointer : %p\n",data);
    
    	uint32_t num_planes=plane_params.num_planes;
    	
    	printf("\nNumber of Planes : %d\n\n", num_planes);
    
    	for(int i=0 ; i < num_planes; ++i)
    	{
    	   uint32_t width=plane_params.width[i];
    	   uint32_t height=plane_params.height[i];
    	   uint32_t pitch=plane_params.pitch[i]; 
               uint32_t offset=plane_params.offset[i];
               uint32_t psize=plane_params.psize[i];
               uint32_t bytes_per_pix=plane_params.bytesPerPix[i];
    		
    	   printf("width of the plane %d : %d\n\n",i,width);
    	   printf("height of the plane %d : %d\n\n",i,height);
    	   printf("pitch of the plane  %d : %d\n\n",i,pitch);
    	   printf("offset of the plane  %d : %d\n\n",i,offset);
    	   printf("psize of the plane  %d : %d\n\n",i,psize);
    	   printf("bytes_per_pix of the plane  %d : %d\n\n\n\n",i,bytes_per_pix);	   
    	}
    
    	printf("Size of the frame buffer : %d\n\n",data_size);
    	printf("Pitch of the frame buffer : %d\n\n",pitch);
    	printf("width of the frame buffer : %d\n\n",width);
    	printf("height of the frame buffer : %d\n\n",height);
    
    	NvBufSurfaceColorFormat color_format= surface->surfaceList[i].colorFormat;
    
            if (color_format == NVBUF_COLOR_FORMAT_NV12)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12 \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_ER)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_ER \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_709)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_709 \n");
            else if (color_format == NVBUF_COLOR_FORMAT_NV12_709_ER)
               printf("color_format: NVBUF_COLOR_FORMAT_NV12_709_ER \n");
    
    
    
       uint32_t frame_width=surface->surfaceList[i].planeParams.width[0];
       uint32_t frame_height=surface->surfaceList[i].planeParams.height[0];
       uint32_t Y_stride=surface->surfaceList[i].planeParams.pitch[0];
       uint32_t UV_stride=surface->surfaceList[i].planeParams.pitch[1];
    
       printf("\n\nframe_width : %d\n\n", frame_width);
       printf("\n\nframe_height : %d\n\n", frame_height);
       printf("\n\nY_stride : %d\n\n", Y_stride);
       printf("\n\nUV_stride : %d\n\n", UV_stride);
    
       uint8_t *RGB= malloc(3*frame_width*frame_height);
       uint32_t RGB_stride=frame_width*3;
       YCbCrType yuv_type=YCBCR_709;
     
       printf("\n\nData at first index of RGB : %d\n\n", *RGB);
       printf("\n\nRGB_stride : %d\n\n", RGB_stride);
    
       int offset_calc=frame_width*frame_height;
    
       printf("\n\nCalculated offset : %d\n\n", offset_calc);
       int offset_proc= (surface->surfaceList[0].planeParams.psize[0] - offset_calc);
    
       const uint8_t *Y_mapped=surface->surfaceList[i].mappedAddr.addr[0]; 
    
       if(Y_mapped != NULL)
       {
    	char filename[200];
    	sprintf(filename,"y_%dx%d.jpg",frame_width,frame_height);
    	write_jpeg_file( filename, (unsigned char *) Y_mapped , frame_width, frame_height, 1, JCS_GRAYSCALE );
       }
       else
       {
    	printf("\nY == NULL or UV != NULL\n");
       }
    
      }
     
      int unmap_result= NvBufSurfaceUnMap(surface,-1,-1);
    
      exit(0); 
    
    #endif
      /*
       * Output KITTI labels with tracking ID if configured to do so.
       */
      write_kitti_track_output(appCtx, batch_meta);
    
      //send_kitti_track_output(appCtx, batch_meta);
    
      if (appCtx->primary_bbox_generated_cb)
        appCtx->primary_bbox_generated_cb (appCtx, buf, batch_meta, index);
      return GST_PAD_PROBE_OK;
    }
    

    I still do not get the desired output. Kindly let me know where I could be going wrong and if this approach is the right approach.

    Thanks.

    int map_result = NvBufSurfaceMap(surface,-1,-1,NVBUF_MAP_READ);

    The usage is:

    int NvBufSurfaceMap (NvBufSurface *surf, int index, int plane, NvBufSurfaceMemMapFlags type)

    Are the pixel data of each frame stored in the dataPtr within the NvBufSurfaceParams structure?
    I am accessing the pixel information with the following code:

    osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
    {
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta) {
        NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
        return GST_PAD_PROBE_OK;
      }
    
      GstMapInfo in_map_info;
      NvBufSurface *surface = NULL;
    
      if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
        gst_buffer_unmap (buf, &in_map_info);
        return GST_FLOW_ERROR;
    
      }
    
      surface = (NvBufSurface *) in_map_info.data;  
      int batch_size= surface->batchSize;	   
    
    for(int i=0; i<batch_size; ++i)
      {
            uint32_t dataSize = surface->surfaceList[i].dataSize;
     	uint8_t *dataPtr = (uint8_t *)surface->surfaceList[i].dataPtr;
            for (int j= 0; j<dataSize;j++){ 
                printf("%d\n", dataPtr[j]);
            }
            printf("\n\n\n\n");
    
       }
    
    ....
    }
    

    However, this is not quite working. Several weird things happen that make no sense to me (below is using the 48-second video from the DeepStream example):

  • I found the color format is RGBA, and each frame is 1080*1920 pixels, so the buffer size should ideally be 1080*1920*4. However, the size is 8388608. Why is this happening?
  • Secondly, the A channel should be a fraction between 0 and 1, but I assigned it to an 8-bit unsigned integer. Originally it is a void pointer; how can I parse it to get the channels right? Is the first 1080*1920 of the data in the pointer the R channel, and the next 1080*1920 the G channel?
    Hello neophyte1,

    Did you figure out how to grab the pixel data from the frame? I also have the segmentation fault problem when grabbing the data.

    Thanks!

    Hi qsu,

    Yes. After spending a lot of time on this problem, I was able to figure out the solution using help from NVIDIA folks.

    Please refer to the following thread. It has the solution:

    https://devtalk.nvidia.com/default/topic/1061205/deepstream-sdk/rtsp-camera-access-frame-issue/2

    Cheers!