How to get the NvBuffer from NvBufferGetParams

Hi.

I need to work with each pixel of the image captured by the Argus library.
I tried to create an NvBuffer with IImageNativeBuffer::createNvBuffer.
After that, I guessed that the nv_buffer member of struct _NvBufferParams would be a pointer to the NvBuffer.

int dmabuf_fd = iImageNativeBuffer->createNvBuffer(Size {1920, 1080}, NvBufferColorFormat_YUV420, NvBufferLayout_Pitch, &status);
if (status != STATUS_OK)
    printf("Failed to create a native buffer\n");

NvBufferParams params;
int ret = NvBufferGetParams(dmabuf_fd, &params);
if (ret < 0)
    printf("Failed to get the native buffer parameters\n");

NvBuffer* nvBuffer = (NvBuffer*)params.nv_buffer;

printf("%p\n", (void*)nvBuffer->planes[0].data);

But the above code doesn't print out anything.
I think I'm not using these functions and structures as they are intended,
but unfortunately, I cannot find any clear API documentation describing how to access each pixel.

I’d really appreciate some help.
Thanks.

Hi
The purpose of NvBuffer is to package the frame data and send it to a HW component such as the VIC.
For your case, you can refer to the yuvJpeg sample code to get the frame data.

        // Print out image details, and map the buffers to read out some data.
        Image *image = iFrame->getImage();
        IImage *iImage = interface_cast<IImage>(image);
        IImage2D *iImage2D = interface_cast<IImage2D>(image);
        for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
        {
            const uint8_t *d = static_cast<const uint8_t*>(iImage->mapBuffer(i));
            if (!d)
                ORIGINATE_ERROR("\tFailed to map buffer\n");

            Size size = iImage2D->getSize(i);
            CONSUMER_PRINT("\tIImage(2D): "
                           "buffer %u (%ux%u, %u stride), "
                           "%02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n",
                           i, size.width, size.height, iImage2D->getStride(i),
                           d[0], d[1], d[2], d[3], d[4], d[5],
                           d[6], d[7], d[8], d[9], d[10], d[11]);
        }

        // Write a JPEG to disk.
        IImageJPEG *iJPEG = interface_cast<IImageJPEG>(image);
        if (iJPEG)
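
For example, once mapBuffer() succeeds you can walk the Y plane pixel by pixel. The following is only a minimal read-only sketch: it assumes buffer 0 is the Y plane and that IImage2D::getStride(0) returns its pitch in bytes.

const uint8_t *y = static_cast<const uint8_t*>(iImage->mapBuffer(0));
if (!y)
    ORIGINATE_ERROR("Failed to map the Y plane\n");

Size ySize = iImage2D->getSize(0);
uint32_t yStride = iImage2D->getStride(0);

uint64_t sum = 0;
for (uint32_t row = 0; row < ySize.height; row++)
{
    const uint8_t *line = y + row * yStride;   // advance by the pitch, not by the width
    for (uint32_t col = 0; col < ySize.width; col++)
        sum += line[col];                      // read one luma sample
}
CONSUMER_PRINT("Average luma: %u\n", (uint32_t)(sum / (ySize.width * ySize.height)));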

Hello,

I also tried to copy/map images captured by the Argus library to CPU memory.
The “yuvJpeg” example uses

iImage->mapBuffer(i)

to map 3 buffers into CPU memory.
I assumed that the first plane contains the Y component of a YUV 4:2:0 image, the second one the Cb component and the third the Cr component; am I right?
However, the resulting images look really strange. Is there anything I missed?

Using IImageJPEG leads to correct images.

Thank you and best regards

Yes, they are the Y and UV planes. First confirm that the IImageJPEG sample code runs without problems; if it does, you will have to figure out the plane layout on your side.
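
One quick way to check what you actually get from mapBuffer() is to dump each mapped plane to a raw file and open it in a YUV viewer. This is just a sketch: the plane-%u.raw file name is only an example, and it writes size.width bytes per row, which may need adjusting for an interleaved chroma plane.

// Dump every mapped plane so it can be inspected offline.
for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
{
    const uint8_t *d = static_cast<const uint8_t*>(iImage->mapBuffer(i));
    if (!d)
        continue;

    Size planeSize = iImage2D->getSize(i);
    uint32_t stride = iImage2D->getStride(i);

    char name[32];
    snprintf(name, sizeof(name), "plane-%u.raw", i);
    FILE *f = fopen(name, "wb");
    if (!f)
        continue;

    // Copy row by row so the pitch padding is stripped from the output.
    for (uint32_t row = 0; row < planeSize.height; row++)
        fwrite(d + row * stride, 1, planeSize.width, f);
    fclose(f);
}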

Thanks for the reply.
I've also tried to use IImage::mapBuffer(), but when I try to change the contents of the buffer as follows, the program gives a segmentation fault.

IImage *iImage = interface_cast<IImage>(image);
IImage2D *iImage2D = interface_cast<IImage2D>(image);

uint8_t* d = const_cast<uint8_t*>(static_cast<const uint8_t*>(iImage->mapBuffer(0)));
if (!d)
    printf("Failed to map buffer\n");

Size size = iImage2D->getSize();

for (uint32_t j = 0; j < size.height; j++)
    for (uint32_t k = 0; k < size.width; k++) {
        uint8_t* p = d + j * iImage2D->getStride() + k;
        *p = 0;                                         /* segmentation fault */
    }

Actually, I've realized that I have to mmap() the buffer referenced by dmabuf_fd into memory.
The following code seems to meet my needs: reading each pixel's value and changing each pixel's value.

/* needs <sys/mman.h> for mmap(), plus <errno.h> and <string.h> for strerror() */
int dmabuf_fd = iImageNativeBuffer->createNvBuffer(Size {1920, 1080}, NvBufferColorFormat_YUV420, NvBufferLayout_Pitch, &status);

uint8_t* data_mem;
int fsize = 3538944;    /* total size of the pitch-linear YUV420 allocation, hard-coded for this 1920x1080 buffer */

data_mem = (uint8_t*)mmap(0, fsize, PROT_READ | PROT_WRITE, MAP_SHARED, dmabuf_fd, 0);
if (data_mem == MAP_FAILED)
    printf("mmap failed : %s\n", strerror(errno));

This way, I can change the contents of data_mem.
I got this idea from the code here: https://devtalk.nvidia.com/default/topic/966938/jetson-tx1/how-to-achieve-the-h-264-encoding-performance-4k-3-840x2-160-30fps-with-openmax-il-api-l4t-r24-1/post/4990099/#4990099.

So am I right that this way I'll have no limitations when manipulating each pixel?
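
For completeness, instead of hard-coding fsize, the plane layout can be read back from NvBufferGetParams() and used to index the mapped memory. This is only a sketch, under the assumption that the nv_buffer_size, width[], height[], pitch[] and offset[] fields of NvBufferParams (as declared in nvbuf_utils.h) describe the pitch-linear layout:

NvBufferParams params;
if (NvBufferGetParams(dmabuf_fd, &params) < 0)
    printf("Failed to get the native buffer parameters\n");

/* Map the whole allocation instead of a hard-coded size. */
uint8_t* base = (uint8_t*)mmap(0, params.nv_buffer_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, dmabuf_fd, 0);
if (base == MAP_FAILED)
    printf("mmap failed : %s\n", strerror(errno));

/* Example: zero the Y plane (plane 0), stepping rows by its pitch. */
uint8_t* y = base + params.offset[0];
for (uint32_t row = 0; row < params.height[0]; row++)
    memset(y + row * params.pitch[0], 0, params.width[0]);

munmap(base, params.nv_buffer_size);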

Yes, you can refer to the sample code at the link you found on the forum.

https://devtalk.nvidia.com/default/topic/966938/jetson-tx1/how-to-achieve-the-h-264-encoding-performance-4k-3-840x2-160-30fps-with-openmax-il-api-l4t-r24-1/post/4990099/#4990099