Tegra MMAPI to encode image from an array
Hello,

Recently I explored a few of the Tegra MMAPI samples. My current project, which relies heavily on https://github.com/dusty-nv/jetson-inference/blob/master/detectnet-camera/detectnet-camera.cpp, needs to save cropped images based on the bounding-box information.

For this, I wanted to leverage the NvJPEGEncoder as shown in sample 05_jpeg_encode.

I noticed that I can do something like this:

/* snippet from ~/tegra_multimedia_api/samples/05_jpeg_encode/jpeg_encode_main.cpp */
if (!ctx.use_fd)
{
    unsigned long out_buf_size = ctx.in_width * ctx.in_height * 3 / 2;
    unsigned char *out_buf = new unsigned char[out_buf_size];

    NvBuffer buffer(V4L2_PIX_FMT_YUV420M, ctx.in_width,
                    ctx.in_height, 0);

    buffer.allocateMemory();

    ret = read_video_frame(ctx.in_file, buffer);
    TEST_ERROR(ret < 0, "Could not read a complete frame from file",
               cleanup);

    ret = ctx.jpegenc->encodeFromBuffer(buffer, JCS_YCbCr, &out_buf,
                                        out_buf_size);
    TEST_ERROR(ret < 0, "Error while encoding from buffer", cleanup);

    ctx.out_file->write((char *) out_buf, out_buf_size);
    delete[] out_buf;

    goto cleanup;
}


However, instead of
ret = read_video_frame(ctx.in_file, buffer);

I will have to set up the NvBuffer from imgCPU as defined in: https://github.com/dusty-nv/jetson-inference/blob/e12e6e64365fed83e255800382e593bf7e1b1b1a/detectnet-camera/detectnet-camera.cpp#L178

My question is: how should I go about populating the NvBuffer from this data pointer? I couldn't find a way to do this in the MMAPI reference docs.

The imgCPU pointer holds I420-format data coming from the GStreamer pipeline fed by an RTSP camera.

If anyone has experience or hints, please let me know.

Thanks!

#1
Posted 02/14/2018 08:39 AM   
Answer Accepted by Original Poster
I was able to find the source code for read_video_frame() at ~/tegra_multimedia_api/samples/common/classes and replicated its functionality using memcpy.

I wasn't aware of where the implementation classes behind the MMAPI headers are located.
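For reference, here is a minimal sketch of what that replication can look like, assuming imgCPU points to a packed, contiguous I420 frame. The helper name fill_buffer_from_i420 and its parameters are placeholders of mine, not part of the samples:

#include <cstring>    // memcpy
#include "NvBuffer.h" // Tegra MMAPI buffer class

/* Sketch: fill an NvBuffer (V4L2_PIX_FMT_YUV420M) from a packed,
 * contiguous I420 pointer such as imgCPU, mirroring read_video_frame()
 * from ~/tegra_multimedia_api/samples/common/classes but using memcpy. */
static void fill_buffer_from_i420(NvBuffer &buffer, const unsigned char *src)
{
    for (uint32_t i = 0; i < buffer.n_planes; i++)
    {
        NvBuffer::NvBufferPlane &plane = buffer.planes[i];
        uint32_t row_bytes = plane.fmt.bytesperpixel * plane.fmt.width;
        unsigned char *dst = plane.data;

        /* Copy row by row: the NvBuffer plane may be padded
         * (stride >= row_bytes), while the I420 source is packed. */
        for (uint32_t j = 0; j < plane.fmt.height; j++)
        {
            memcpy(dst, src, row_bytes);
            dst += plane.fmt.stride;
            src += row_bytes;
        }
        plane.bytesused = plane.fmt.stride * plane.fmt.height;
    }
}

After buffer.allocateMemory(), calling fill_buffer_from_i420(buffer, imgCPU) in place of read_video_frame() lets the rest of the 05_jpeg_encode flow (encodeFromBuffer() and the file write) stay unchanged.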

#2
Posted 02/14/2018 07:23 PM   