tegra_multimedia_API: dqBuffer from encoder output_plane does not complete
Hi Nvidia:
I use tegra_multimedia_API to dequeue a buffer from the encoder output_plane, but the call never completes.
Detailed description:
JetPack: 3.2 (release date: 2018/03/08)
Base code example: 12_camera_v4l2_cuda (tegra_multimedia_API/samples)
Function: start_capture(context_t * ctx)
NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd, ctx->render_dmabuf_fd,
&transParams) is used to convert YUV422 data in g_buff[v4l2_buf.index].dmabuff_fd to YUV420 data and save the result data to ctx->render_dmabuf_fd.
I am trying to convert the YUV422 data to YUV420 and then encode the YUV420 data to H.264, so I added an encoder to my code. The initialization code is below:
static bool encoder_initialize(context_t * ctx)
{
int ret = 0;

ctx->enc = NvVideoEncoder::createVideoEncoder("enc0");
if(!ctx->enc)
ERROR_RETURN("Could not create encoder");

// It is necessary that the capture plane format be set before the output plane format.
// Set encoder capture plane format. It is necessary to set width and height on the capture plane as well
// Set encoder output plane format
ret = ctx->enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, 640,480, 4 * 640 * 480);
if(ret < 0)
ERROR_RETURN("Could not set Capture plane format");

ret = ctx->enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, 640,480);
if(ret < 0)
ERROR_RETURN("Could not set output plane format");

ret = ctx->enc->setBitrate(ctx->enc_bitrate);
if(ret < 0)
ERROR_RETURN("Could not set bitrate");

ret = ctx->enc->setProfile(V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
if(ret < 0)
ERROR_RETURN("Could not set encoder profile");

ret = ctx->enc->setLevel(V4L2_MPEG_VIDEO_H264_LEVEL_5_0);
if(ret < 0)
ERROR_RETURN("Could not set encoder level");

ret = ctx->enc->setFrameRate(ctx->enc_fps_n, ctx->enc_fps_d);
if(ret < 0)
ERROR_RETURN("Could not set framerate");

// Query, Export and Map the output plane buffers so that we can read raw data into the buffers
ret = ctx->enc->output_plane.setupPlane(V4L2_MEMORY_DMABUF, V4L2_BUFFERS_NUM, true, false);
if(ret < 0)
ERROR_RETURN("Could not setup output plane");

// Query, Export and Map the capture plane buffers so that we can write encoded data from the buffers
ret = ctx->enc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, V4L2_BUFFERS_NUM, true, false);
if(ret < 0)
ERROR_RETURN("Could not setup capture plane");

return true;
}

static bool enqueue_encoder_buff(context_t *ctx)
{
// Enqueue empty buffer into encoder output plane
for (unsigned int index = 0; index < ctx->enc->output_plane.getNumBuffers(); index++)
{
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];

memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

v4l2_buf.index = index;
v4l2_buf.m.planes = planes;

if (ctx->enc->output_plane.qBuffer(v4l2_buf, NULL) < 0)
INFO("Failed to enqueue empty buffer into encoder output plane");
}

// Enqueue empty buffer into encoder capture plane
for (unsigned int index = 0; index < ctx->enc->capture_plane.getNumBuffers(); index++)
{
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];

memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

v4l2_buf.index = index;
v4l2_buf.m.planes = planes;

if (ctx->enc->capture_plane.qBuffer(v4l2_buf, NULL) < 0)
INFO("Failed to enqueue empty buffer into encoder capture plane");
}
return true;
}

And start_capture() was changed in my program as below:
static bool start_capture(context_t * ctx)
{
struct sigaction sig_action;
struct pollfd fds[1];
NvBufferTransformParams transParams;

// Ensure a clean shutdown if user types <ctrl+c>
sig_action.sa_handler = signal_handle;
sigemptyset(&sig_action.sa_mask);
sig_action.sa_flags = 0;
sigaction(SIGINT, &sig_action, NULL);

ctx->enc->capture_plane.setDQThreadCallback(encoder_capture_plane_dq_callback);
ctx->enc->capture_plane.startDQThread(&ctx);

// Init the NvBufferTransformParams
memset(&transParams, 0, sizeof(transParams));
transParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transParams.transform_filter = NvBufferTransform_Filter_Smart;

// Enable render profiling information
//ctx->renderer->enableProfiling();

fds[0].fd = ctx->cam_fd;
fds[0].events = POLLIN;
while (poll(fds, 1, 5000) > 0 && !quit)
{
if (fds[0].revents & POLLIN) {
struct v4l2_buffer v4l2_buf;
// Dequeue camera buff
memset(&v4l2_buf, 0, sizeof(v4l2_buf));
v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
v4l2_buf.memory = V4L2_MEMORY_DMABUF;
if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0)
ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
strerror(errno), errno);

ctx->frame++;

if (ctx->frame == ctx->save_n_frame)
save_frame_to_file(ctx, &v4l2_buf);

// Cache sync for VIC operation
NvBufferMemSyncForDevice(ctx->g_buff[v4l2_buf.index].dmabuff_fd, 0,
(void**)&ctx->g_buff[v4l2_buf.index].start);

struct v4l2_buffer v4l2_buf1;
struct v4l2_plane planes[MAX_PLANES];
NvBuffer *buffer;

memset(&v4l2_buf1, 0, sizeof(v4l2_buf1));
memset(planes, 0, sizeof(planes));
v4l2_buf1.m.planes = planes;
printf("123\n");
if(ctx->enc->output_plane.dqBuffer(v4l2_buf1, &buffer, NULL, V4L2_BUFFERS_NUM) < 0)
printf("ERROR while DQing buffer at output plane\n");
printf("234\n");
// Convert the camera buffer from YUV422 to YUV420P
if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd, buffer->planes[0].fd, &transParams))
ERROR_RETURN("Failed to convert the buffer");
if (ctx->enc->output_plane.qBuffer(v4l2_buf, NULL) < 0)
printf("Error while Qing buffer at output plane");
/*if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd, ctx->render_dmabuf_fd,
&transParams))
ERROR_RETURN("Failed to convert the buffer");*/

//cuda_postprocess(ctx, ctx->render_dmabuf_fd);

//ctx->renderer->render(ctx->render_dmabuf_fd);

// Enqueue camera buff
if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
strerror(errno), errno);
}
}

// Print profiling information when streaming stops.
//ctx->renderer->printProfilingStats();

return true;
}

When I run my program, “123” is printed once and nothing else is printed. Is there a problem in my program?

Looking forward to your reply! Thanks a lot!

#1
Posted 04/09/2018 06:04 AM   
Hi feng,
We have tegra_multimedia_api\samples\01_video_encode demonstrating video encoding. What is different between this sample and yours?

#2
Posted 04/09/2018 09:34 AM   
Hi DaneLLL
Thanks for your reply!
The difference is the input format: your input is a file, but my input is real-time USB camera data. The camera data is YUV422, so it must be converted to YUV420 before encoding. That is why I based my code on 12_camera_v4l2_cuda.
Looking forward to your reply again! Thanks!

#3
Posted 04/09/2018 09:53 AM   
Hi feng.baoying,

You said:
Run my program, “123” printed once, any other not printed.

printf("123\n");
if(ctx->enc->output_plane.dqBuffer(v4l2_buf1, &buffer, NULL, V4L2_BUFFERS_NUM) < 0)
printf("ERROR while DQing buffer at output plane\n");
printf("234\n");


It seems that there is no empty buffer in the output plane of the encoder, and your code is blocked at dqBuffer().
Do you call enqueue_encoder_buff() before start_capture()?

------

Hi DaneLLL,

I'm using cudaMemcpy2D(,,,,,cudaMemcpyHostToHost) and NvVideoConverter to go from a USB camera (V4L2) to NvVideoEncoder.
Can NvBufferTransform() convert between a Block Linear buffer and a Pitch Linear buffer?

Best Regards,

#4
Posted 04/10/2018 02:51 AM   
Hi mynaemi,
For your case, the following pseudo code should work:
NvBufferCreate(&fd_BL, block_linear);
NvBufferCreate(&fd_PL, pitch_linear);
NvBufferTransform(fd_BL, fd_PL);

#5
Posted 04/10/2018 07:53 AM   
Hi mynaemi:
I am sure enqueue_encoder_buff() is called before start_capture(). Is there any other reason for this problem?
Also, what is the meaning of DaneLLL's reply? Is it about my problem? How can I use it to solve my problem?
Looking forward to your reply soon! Thank you!

#6
Posted 04/10/2018 08:25 AM   
Hi mynaemi:
After enqueue_encoder_buff() is executed, is there still no buffer in the encoder output plane?
I changed enqueue_encoder_buff() as below:
static bool enqueue_encoder_buff(context_t *ctx)
{
// Enqueue empty buffer into encoder capture plane
for (unsigned int index = 0; index < ctx->enc->capture_plane.getNumBuffers(); index++)
{
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];

memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

v4l2_buf.index = index;
v4l2_buf.m.planes = planes;

if (ctx->enc->capture_plane.qBuffer(v4l2_buf, NULL) < 0)
INFO("Failed to enqueue empty buffer into encoder capture plane");
}
return true;
}

start_capture() was changed as below:
start_capture(context_t * ctx)
{
struct sigaction sig_action;
struct pollfd fds[1];
NvBufferTransformParams transParams;

// Ensure a clean shutdown if user types <ctrl+c>
sig_action.sa_handler = signal_handle;
sigemptyset(&sig_action.sa_mask);
sig_action.sa_flags = 0;
sigaction(SIGINT, &sig_action, NULL);

ctx->enc->capture_plane.setDQThreadCallback(encoder_capture_plane_dq_callback);
ctx->enc->capture_plane.startDQThread(&ctx);

// Init the NvBufferTransformParams
memset(&transParams, 0, sizeof(transParams));
transParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transParams.transform_filter = NvBufferTransform_Filter_Smart;

// Enable render profiling information
//ctx->renderer->enableProfiling();

fds[0].fd = ctx->cam_fd;
fds[0].events = POLLIN;
int count = 0;
while (poll(fds, 1, 5000) > 0 && !quit)
{
if (fds[0].revents & POLLIN) {
struct v4l2_buffer v4l2_buf;
// Dequeue camera buff
memset(&v4l2_buf, 0, sizeof(v4l2_buf));
v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
v4l2_buf.memory = V4L2_MEMORY_DMABUF;
if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0)
ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
strerror(errno), errno);

ctx->frame++;

if (ctx->frame == ctx->save_n_frame)
save_frame_to_file(ctx, &v4l2_buf);

// Cache sync for VIC operation
NvBufferMemSyncForDevice(ctx->g_buff[v4l2_buf.index].dmabuff_fd, 0,
(void**)&ctx->g_buff[v4l2_buf.index].start);

if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd, ctx->render_dmabuf_fd,
&transParams))
ERROR_RETURN("Failed to convert the buffer");
if( count < ctx->enc->output_plane.getNumBuffers() )
{
struct v4l2_buffer v4l2_buf1;
struct v4l2_plane planes[MAX_PLANES];
NvBuffer *buffer = ctx->enc->output_plane.getNthBuffer(count);

memset(&v4l2_buf1, 0, sizeof(v4l2_buf1));
memset(planes, 0, sizeof(planes));
v4l2_buf1.index = count;
printf("abc\n");
v4l2_buf1.m.plane[0].m.fd = ctx->render_dmabuf_fd;
printf("bcd\n");
v4l2_buf1.m.plane[0].bytesused = 1;
if(ctx->enc->output_plane.qBuffer(v4l2_buf1, NULL))
printf("ERROR while Qing buffer at output plane\n");
count++;
}
else
{
struct v4l2_buffer v4l2_buf1;
struct v4l2_plane planes[MAX_PLANES];
NvBuffer *buffer;

memset(&v4l2_buf1, 0, sizeof(v4l2_buf1));
memset(planes, 0, sizeof(planes));

if(ctx->enc->output_plane.dqBuffer(v4l2_buf1, &buffer, NULL, V4L2_BUFFERS_NUM) < 0)
printf("ERROR while DQing buffer at output plane\n");

if(ctx->enc->output_plane.qBuffer(v4l2_buf1, NULL))
printf("ERROR while Qing buffer at output plane\n");
}


// Enqueue camera buff
if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
strerror(errno), errno);
}
}

// Print profiling information when streaming stops.
//ctx->renderer->printProfilingStats();

return true;
}
When I run this code, “abc” is printed once, then the program core dumps.
Is v4l2_buf1.m.plane[0].m.fd = ctx->render_dmabuf_fd; the cause of the core dump?
Looking forward to your reply soon! Thank you!

#7
Posted 04/10/2018 09:32 AM   
Hi feng.baoying,

For preparation, I don't use output_plane.qBuffer() directly; I use a separate queue for empty buffers.
I enqueue some empty buffers into this queue before encoding.
When an image is captured from the camera, I take an empty buffer from this queue, fill it, and call qBuffer().
After encoding, I take the returned (now empty) buffer from the output_plane in the dq callback and store it back into this queue.

--------

It seems that size=0 buffers are queued to the output_plane in your code:
* Using memset()

static bool enqueue_encoder_buff(context_t *ctx)
{ // Enqueue empty buffer into encoder output plane
for (unsigned int index = 0; index < ctx->enc->output_plane.getNumBuffers(); index++)
{
memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

if (ctx->enc->output_plane.qBuffer(v4l2_buf, NULL) < 0)
INFO("Failed to enqueue empty buffer into encoder output plane");
    }
}
In MMAPI R28.1, output_plane.dqBuffer() would return such a size-0 buffer,
but in MMAPI R28.2 (JetPack 3.2) it does not return.
This is a behavior change in libtegrav4l2.so and dqBuffer() in R28.2.
(I don't know the details.)

printf("123\n");
if(ctx->enc->output_plane.dqBuffer(v4l2_buf1, &buffer, NULL, V4L2_BUFFERS_NUM) < 0)
printf("ERROR while DQing buffer at output plane\n");


Refer to my another topic:
[MMAPI R28.2] deinitPlane() of NvVideoEncoder
https://devtalk.nvidia.com/default/topic/1031348/jetson-tx2/-mmapi-r28-2-deinitplane-of-nvvideoencoder/?offset=8#5250631

--------

Sorry, DaneLLL's comment is for my question, not yours.

#8
Posted 04/10/2018 09:43 AM   
Hi feng.baoying,

Sorry, I cannot advise you about the core dump.
If you use gdb, you may get useful information about it.

I don't know how to use DMABUF in both objects,
and I don't know where the buffer entity is.

For MMAP or USERPTR, memory is allocated for the buffers, but for DMABUF there is only a pointer, I think.
In my code, the previous object allocates its capture buffers as MMAP,
and the following object sets up its output buffers as DMABUF.

Best Regards,

#9
Posted 04/10/2018 10:00 AM   
Hi mynaemi:
Thanks for your reply!
Could you paste your code or email your reference code to me?
The problem may be that only empty buffers were queued into the encoder output plane, so I changed the code again (as above) to enqueue a filled buffer to the output plane at the start:
v4l2_buf1.m.plane[0].m.fd = ctx->render_dmabuf_fd;
v4l2_buf1.m.plane[0].bytesused = 1;
ctx->enc->output_plane.qBuffer(v4l2_buf1, NULL);

but this statement causes a core dump, so I would like to know how you implemented it. Please paste your code or email your reference code to me.
Thanks a lot!

#10
Posted 04/10/2018 10:07 AM   
Hi feng.baoying,

I don't understand why you use render_dmabuf_fd for the output_plane buffer of NvVideoEncoder, and why you don't use cam_fd.
The buffer of EglRenderer is the input to the renderer, and the output_plane buffer is likewise the input to the encoder.

Of course, cam_fd has invalid data in its buffer before camera capturing.

Best Regards,

#11
Posted 04/10/2018 10:15 AM   
Hi feng.baoying,

I cannot deliver the whole code, sorry.
And I don't use NvBufferTransform(), but use memcpy() and NvVideoConverter.

Common part:

typedef struct {
    int index;
    uint8_t *data;
    uint8_t *share;
    int length;
    uint64_t timestamp_us;
    void *nvBuf;
    void *v4Buf;
    int fd;
    planeData_t plane[3];
} qData_t;

class BufferPool
{
private:
    std::queue<void*> m_emptyQueue;
    std::mutex m_emptyQueueMutex;
    std::condition_variable m_emptyCondition;
    std::queue<void*> m_filledQueue;
    std::mutex m_filledQueueMutex;
    std::condition_variable m_filledCondition;
public:
    int pushEmptyBuffer(void *buffer)
    {
        {
            std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
            m_emptyQueue.push(buffer);
            m_emptyCondition.notify_one();
        }
        return 0;
    }
    void *getEmptyBuffer(int timeout_ms = 5)
    {
        std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
        void *res = nullptr;
        if (m_emptyQueue.empty() && (timeout_ms > 0)) {
            m_emptyCondition.wait_for(lock, std::chrono::milliseconds(timeout_ms),
                [this] { return !m_emptyQueue.empty(); });
        }
        if (!m_emptyQueue.empty()) {
            res = m_emptyQueue.front();
            m_emptyQueue.pop();
        }
        return res;
    }
    int getEmptyBufferSize()
    {
        std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
        return m_emptyQueue.size();
    }
    int pushFilledBuffer(void *buffer)
    {
        {
            std::unique_lock<std::mutex> lock(m_filledQueueMutex);
            m_filledQueue.push(buffer);
            m_filledCondition.notify_one();
        }
        return 0;
    }
    void *getFilledBuffer(int timeout_ms = 5)
    {
        void *res = nullptr;
        if (m_filledQueue.empty()) {
            std::this_thread::yield();
        }
        {
            std::unique_lock<std::mutex> lock(m_filledQueueMutex);
            if (m_filledQueue.empty() && (timeout_ms > 0)) {
                std::this_thread::yield();
                m_filledCondition.wait_for(lock, std::chrono::milliseconds(timeout_ms),
                    [this] { return !m_filledQueue.empty(); });
            }
            if (!m_filledQueue.empty()) {
                res = m_filledQueue.front();
                m_filledQueue.pop();
            }
        }
        return res;
    }
    int getFilledBufferSize()
    {
        std::unique_lock<std::mutex> lock(m_filledQueueMutex);
        return m_filledQueue.size();
    }
    BufferPool() { }
    virtual ~BufferPool()
    {
        void *res = nullptr;
        while (!m_filledQueue.empty()) {
            res = m_filledQueue.front();
            m_filledQueue.pop();
        }
        while (!m_emptyQueue.empty()) {
            res = m_emptyQueue.front();
            m_emptyQueue.pop();
        }
    }
};

class FilterBase
{
protected:
    BufferPool *inBufferPool = nullptr;
    BufferPool outBufferPool;
public:
    FilterBase() { inBufferPool = nullptr; }
    ~FilterBase() { inBufferPool = nullptr; }
    virtual BufferPool *getOutputBuffer() { return &outBufferPool; }
    virtual int connectFilter(FilterBase *filter)
    {
        if (filter) {
            inBufferPool = filter->getOutputBuffer();
        }
        return 0;
    }
};

Summary of capture from camera:

void capV4L2::init_buffers(void)
{
    enum v4l2_buf_type type;
    switch (m_io) {
    case IO_METHOD_MMAP:
        for (i = 0; i < n_buffers; ++i) {
            struct v4l2_buffer buf;
            CLEAR(buf);
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index = i;
            if (ER_FD == xioctl(m_fd, VIDIOC_QBUF, &buf)) { };
        }
        break;
    }
    return;
}

int capV4L2::grabFrame(void)
{
    struct v4l2_buffer buf;
    uint64_t ts;
    fd_set fds;
    struct timeval tv;
    FD_ZERO(&fds);
    FD_SET(m_fd, &fds);
    tv.tv_sec = 0;
    tv.tv_usec = 100000; // 100 msec
    r = select(m_fd + 1, &fds, NULL, NULL, &tv);
    switch (m_io) {
    case IO_METHOD_MMAP:
        CLEAR(buf);
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ER_FD == xioctl(m_fd, VIDIOC_DQBUF, &buf)) { }
        ts = (uint64_t) buf.timestamp.tv_sec * 1000000 + buf.timestamp.tv_usec;
        if (ts != 0) {
            process_image(m_buffers[buf.index].start, m_buffers[0].length, ts);
        }
        if (ER_FD == xioctl(m_fd, VIDIOC_QBUF, &buf))
            errno_exit("VIDIOC_QBUF");
        break;
    }
}

void capV4L2::process_image(void *p, int length, uint64_t ts)
{
    unsigned char *pDataOut = (unsigned char *) p;
    uint64_t timestamp;
    qData_t *outBuf;
    outBuf = (qData_t*) outBufferPool.getEmptyBuffer(100);
    if (outBuf) {
        outBuf->timestamp_us = timestamp;
        for (auto i = 0; i < 3; i++) {
            int datasize = outBuf->plane[i].bytesperpixel * outBuf->plane[i].width;
            uint8_t *data = (uint8_t *) outBuf->plane[i].data;
            outBuf->plane[i].bytesused = 0;
            if (outBuf->plane[i].stride == datasize) {
                // Block Linear
                memcpy(data, pDataOut, datasize * outBuf->plane[i].height);
                pDataOut += datasize * outBuf->plane[i].height;
            } else {
                // Pitch Linear
#if 0
                for (auto j = 0; j < outBuf->plane[i].height; j++) {
                    memcpy(data, pDataOut, datasize);
                    data += outBuf->plane[i].stride;
                    pDataOut += datasize;
                }
#else
                cudaMemcpy2D(data, outBuf->plane[i].stride, pDataOut, datasize,
                             datasize, outBuf->plane[i].height, cudaMemcpyHostToHost);
                pDataOut += datasize * outBuf->plane[i].height;
#endif
            }
            outBuf->plane[i].bytesused = outBuf->plane[i].stride * outBuf->plane[i].height;
        }
        outBufferPool.pushFilledBuffer(outBuf);
    }
}

Summary of input to NvVideoConverter --- same as your NvVideoEncoder:

void VideoConverter::ThreadFunc(void)
{
    mConv0 = NvVideoConverter::createVideoConverter("conv0");
    ret = mConv0->setOutputPlaneFormat(ctx.in_pixfmt, ctx.in_width, ctx.in_height, ctx.in_buffmt);
    ret = mConv0->setCapturePlaneFormat(ctx.in_pixfmt, ctx.out_width, ctx.out_height, ctx.out_buffmt);
    ret = mConv0->output_plane.setupPlane(V4L2_MEMORY_DMABUF, NUM_OF_OUTPUT_BUFFER_CONV, false, false);
    ret = mConv0->capture_plane.setupPlane(V4L2_MEMORY_MMAP, NUM_OF_CAPTURE_BUFFER_CONV, false, false);
    ret = mConv0->output_plane.setStreamStatus(true);
    ret = mConv0->capture_plane.setStreamStatus(true);
    mConv0->capture_plane.setDQThreadCallback(conv0_capture_dqbuf_thread_callback);
    mConv0->output_plane.setDQThreadCallback(conv0_output_dqbuf_thread_callback);
    mConv0->output_plane.startDQThread(this);
    mConv0->capture_plane.startDQThread(this);
    for (uint32_t i = 0; i < mConv0->capture_plane.getNumBuffers(); i++) {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];
        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));
        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;
        ret = mConv0->capture_plane.qBuffer(v4l2_buf, NULL);
    }
    for (auto i = 0; i < mConv0->output_plane.getNumBuffers(); i++) {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];
        NvBuffer *buffer = mConv0->output_plane.getNthBuffer(i);
        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;
        if (inBufferPool) {
            qData_t *inBuf = new qData_t;
            memset(inBuf, 0x00, sizeof(qData_t));
            inBuf->nvBuf = (void*) buffer;
            inBuf->index = buffer->index;
            for (auto i = 0; i < 3; i++) {
                inBuf->plane[i].data = (uint8_t*) buffer->planes[i].data;
                inBuf->plane[i].width = buffer->planes[i].fmt.width;
                inBuf->plane[i].height = buffer->planes[i].fmt.height;
                inBuf->plane[i].stride = buffer->planes[i].fmt.stride;
                inBuf->plane[i].bytesperpixel = buffer->planes[i].fmt.bytesperpixel;
                inBuf->plane[i].bytesused = 0;
            }
            inBufferPool->pushEmptyBuffer(inBuf);
        }
    }
    while (!got_error && !mConv0->isInError() && !eos && !myExitFlag) {
        ret = inBuf_to_conv0();
    }
}

int VideoConverter::inBuf_to_conv0()
{
    int ret;
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];
    NvBuffer *buffer;
    qData_t *inBuf;
    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));
    v4l2_buf.m.planes = planes;
    if ((inBuf = (qData_t*) inBufferPool->getFilledBuffer(40)) == NULL) {
        return -1;
    }
    v4l2_buf.index = inBuf->index;
    v4l2_buf.timestamp.tv_sec = inBuf->timestamp_us / 1000000;
    v4l2_buf.timestamp.tv_usec = inBuf->timestamp_us % 1000000;
    buffer = (NvBuffer *) inBuf->nvBuf;
    for (auto i = 0; i < buffer->n_planes; i++) {
        NvBuffer::NvBufferPlane &plane = buffer->planes[i];
        plane.bytesused = inBuf->plane[i].bytesused;
    }
    delete inBuf;
    ret = mConv0->output_plane.qBuffer(v4l2_buf, NULL);
    if (ret < 0) { }
    return 1;
}
Hi feng.baoying,

I cannot share the whole code, sorry.
I don't use NvBufferTransform(); I use memcpy() and NvVideoConverter instead.

Common part:
typedef struct {
int index;
uint8_t *data;
uint8_t *share;
int length;
uint64_t timestamp_us;
void *nvBuf;
void *v4Buf;
int fd;
planeData_t plane[3];
} qData_t;


class BufferPool {
private:
std::queue<void*> m_emptyQueue;
std::mutex m_emptyQueueMutex;
std::condition_variable m_emptyCondition;

std::queue<void*> m_filledQueue;
std::mutex m_filledQueueMutex;
std::condition_variable m_filledCondition;

public:
int pushEmptyBuffer(void *buffer) {
{
std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
m_emptyQueue.push(buffer);
m_emptyCondition.notify_one();
}
return 0;
}

void *getEmptyBuffer(int timeout_ms = 5) {
std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
void *res = nullptr;

if (m_emptyQueue.empty() && (timeout_ms > 0)) {
m_emptyCondition.wait_for(lock,
std::chrono::milliseconds(timeout_ms),
[this] {return !m_emptyQueue.empty();});
}

if (!m_emptyQueue.empty()) {
res = m_emptyQueue.front();
m_emptyQueue.pop();
}

return res;
}

int getEmptyBufferSize() {
std::unique_lock<std::mutex> lock(m_emptyQueueMutex);
return m_emptyQueue.size();
}


int pushFilledBuffer(void *buffer) {
{
std::unique_lock<std::mutex> lock(m_filledQueueMutex);
m_filledQueue.push(buffer);
m_filledCondition.notify_one();
}

return 0;
}

void *getFilledBuffer(int timeout_ms = 5) {
void *res = nullptr;

if (m_filledQueue.empty()) {
std::this_thread::yield();
}
{
std::unique_lock<std::mutex> lock(m_filledQueueMutex);
if (m_filledQueue.empty() && (timeout_ms > 0)) {
std::this_thread::yield();
m_filledCondition.wait_for(lock,
std::chrono::milliseconds(timeout_ms),
[this] {return !m_filledQueue.empty();});
}
if (!m_filledQueue.empty()) {
res = m_filledQueue.front();
m_filledQueue.pop();
}
}
return res;
}

int getFilledBufferSize() {
std::unique_lock<std::mutex> lock(m_filledQueueMutex);
return m_filledQueue.size();
}

BufferPool(){
}

virtual ~BufferPool() {
while (!m_filledQueue.empty()) {
m_filledQueue.pop();
}
while (!m_emptyQueue.empty()) {
m_emptyQueue.pop();
}
}
};


class FilterBase {
protected:
BufferPool *inBufferPool = nullptr;
BufferPool outBufferPool;

public:
FilterBase() {
inBufferPool = nullptr;
}

virtual ~FilterBase() {
inBufferPool = nullptr;
}

virtual BufferPool *getOutputBuffer() {
return &outBufferPool;
}
virtual int connectFilter(FilterBase *filter){
if(filter){
inBufferPool = filter->getOutputBuffer();
}
return 0;
}
};
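The timed-wait logic in getEmptyBuffer()/getFilledBuffer() can be exercised in isolation. This is a minimal, self-contained sketch of the same pattern (TimedQueue is my name for the sketch, not part of the real code above):

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal stand-in for the BufferPool wait logic: block up to timeout_ms
// for an item, return nullptr on timeout.
template <typename T>
class TimedQueue {
    std::queue<T*> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
public:
    void push(T *item) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(item);
        }
        m_cond.notify_one();
    }
    T *pop(int timeout_ms) {
        std::unique_lock<std::mutex> lock(m_mutex);
        // The predicate form of wait_for also handles spurious wakeups.
        m_cond.wait_for(lock, std::chrono::milliseconds(timeout_ms),
                        [this] { return !m_queue.empty(); });
        if (m_queue.empty())
            return nullptr;
        T *res = m_queue.front();
        m_queue.pop();
        return res;
    }
};
```

If nothing arrives within the timeout, pop() returns nullptr, which is why the callers above always check the result before using it.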



Summary of Capture from camera
void capV4L2::init_buffers(void) {
enum v4l2_buf_type type;

switch (m_io) {

case IO_METHOD_MMAP:
for (unsigned int i = 0; i < n_buffers; ++i) {
struct v4l2_buffer buf;
CLEAR(buf);
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
if (ER_FD == xioctl(m_fd, VIDIOC_QBUF, &buf)) { };
}
break;
}
return;
}


int capV4L2::grabFrame(void) {
struct v4l2_buffer buf;
uint64_t ts;
fd_set fds;
struct timeval tv;

FD_ZERO(&fds);
FD_SET(m_fd, &fds);
tv.tv_sec = 0;
tv.tv_usec = 100000; // 100msec
int r = select(m_fd + 1, &fds, NULL, NULL, &tv);
if (r <= 0)
return -1; // select() timed out or failed
switch (m_io) {

case IO_METHOD_MMAP:
CLEAR(buf);
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
if (ER_FD == xioctl(m_fd, VIDIOC_DQBUF, &buf)) {

}
ts = (uint64_t) buf.timestamp.tv_sec * 1000000 + buf.timestamp.tv_usec;
if (ts != 0) {
process_image(m_buffers[buf.index].start, m_buffers[0].length, ts);
}
if (ER_FD == xioctl(m_fd, VIDIOC_QBUF, &buf))
errno_exit("VIDIOC_QBUF");
break;
}
return 0;
}

void capV4L2::process_image(void *p, int length, uint64_t ts) {
unsigned char *pDataOut = (unsigned char *) p;
qData_t *outBuf;

outBuf = (qData_t*) outBufferPool.getEmptyBuffer(100);
if (outBuf) {
outBuf->timestamp_us = ts; // use the timestamp passed in from grabFrame()
for (auto i = 0; i < 3; i++) {
int datasize = outBuf->plane[i].bytesperpixel
* outBuf->plane[i].width;
uint8_t *data = (uint8_t *) outBuf->plane[i].data;
outBuf->plane[i].bytesused = 0;

if (outBuf->plane[i].stride == datasize) { // Block linear
memcpy(data, pDataOut, datasize * outBuf->plane[i].height);
pDataOut += datasize * outBuf->plane[i].height;
} else { // Pitch linear
#if 0
for (auto j = 0; j < outBuf->plane[i].height; j++) {
memcpy(data, pDataOut, datasize);
data += outBuf->plane[i].stride;
pDataOut += datasize;
}
#else
cudaMemcpy2D(data, outBuf->plane[i].stride, pDataOut, datasize, datasize,
outBuf->plane[i].height, cudaMemcpyHostToHost);
pDataOut += datasize * outBuf->plane[i].height;
#endif
}
outBuf->plane[i].bytesused = outBuf->plane[i].stride * outBuf->plane[i].height;
}
outBufferPool.pushFilledBuffer(outBuf);
}
}
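For reference, the pitch-linear branch above is equivalent to a plain row-by-row memcpy that skips the stride padding in the destination. A self-contained sketch with made-up sizes (copy_pitch_linear is my name, not an MMAPI function):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Same copy logic as process_image(), with plain memcpy instead of
// cudaMemcpy2D: rows of width_bytes are packed in src, while dst rows
// are padded out to 'stride' bytes (stride >= width_bytes).
void copy_pitch_linear(uint8_t *dst, int stride,
                       const uint8_t *src, int width_bytes, int height) {
    if (stride == width_bytes) {
        // Rows are contiguous: one bulk copy is enough.
        memcpy(dst, src, (size_t)width_bytes * height);
    } else {
        // Copy row by row, leaving the padding bytes in dst untouched.
        for (int row = 0; row < height; ++row) {
            memcpy(dst + (size_t)row * stride,
                   src + (size_t)row * width_bytes,
                   (size_t)width_bytes);
        }
    }
}
```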



Summary of input to NvVideoConverter --- the same approach applies to your NvVideoEncoder
void VideoConverter::ThreadFunc(void) {
mConv0 = NvVideoConverter::createVideoConverter("conv0");
ret = mConv0->setOutputPlaneFormat(ctx.in_pixfmt, ctx.in_width, ctx.in_height, ctx.in_buffmt);
ret = mConv0->setCapturePlaneFormat(ctx.in_pixfmt, ctx.out_width, ctx.out_height, ctx.out_buffmt);
ret = mConv0->output_plane.setupPlane(V4L2_MEMORY_DMABUF, NUM_OF_OUTPUT_BUFFER_CONV, false, false);
ret = mConv0->capture_plane.setupPlane(V4L2_MEMORY_MMAP, NUM_OF_CAPTURE_BUFFER_CONV, false, false);

ret = mConv0->output_plane.setStreamStatus(true);
ret = mConv0->capture_plane.setStreamStatus(true);
mConv0->capture_plane.setDQThreadCallback(conv0_capture_dqbuf_thread_callback);
mConv0->output_plane.setDQThreadCallback(conv0_output_dqbuf_thread_callback);
mConv0->output_plane.startDQThread(this);
mConv0->capture_plane.startDQThread(this);

for (uint32_t i = 0; i < mConv0->capture_plane.getNumBuffers(); i++) {
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];

memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));
v4l2_buf.index = i;
v4l2_buf.m.planes = planes;
ret = mConv0->capture_plane.qBuffer(v4l2_buf, NULL);
}

for (auto i = 0; i < mConv0->output_plane.getNumBuffers(); i++) {
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];
NvBuffer *buffer = mConv0->output_plane.getNthBuffer(i);

v4l2_buf.index = i;
v4l2_buf.m.planes = planes;

if (inBufferPool)
{
qData_t *inBuf = new qData_t;
memset(inBuf, 0x00, sizeof(qData_t));
inBuf->nvBuf = (void*) buffer;
inBuf->index = buffer->index;
for (auto i = 0; i < 3; i++) {
inBuf->plane[i].data = (uint8_t*) buffer->planes[i].data;
inBuf->plane[i].width = buffer->planes[i].fmt.width;
inBuf->plane[i].height = buffer->planes[i].fmt.height;
inBuf->plane[i].stride = buffer->planes[i].fmt.stride;
inBuf->plane[i].bytesperpixel =
buffer->planes[i].fmt.bytesperpixel;
inBuf->plane[i].bytesused = 0;
}
inBufferPool->pushEmptyBuffer(inBuf);
}
}

while (!got_error && !mConv0->isInError() && !eos && !myExitFlag) {
ret = inBuf_to_conv0();
}
}


int VideoConverter::inBuf_to_conv0() {
int ret;
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];
NvBuffer *buffer;
qData_t *inBuf;

memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, sizeof(planes));

v4l2_buf.m.planes = planes;

if ((inBuf = (qData_t*)inBufferPool->getFilledBuffer(40)) == NULL) {
return -1;
}
v4l2_buf.index = inBuf->index;
v4l2_buf.timestamp.tv_sec = inBuf->timestamp_us / 1000000;
v4l2_buf.timestamp.tv_usec = inBuf->timestamp_us % 1000000;
buffer = (NvBuffer *) inBuf->nvBuf;
for (auto i = 0; i < buffer->n_planes; i++) {
NvBuffer::NvBufferPlane &plane = buffer->planes[i];
plane.bytesused = inBuf->plane[i].bytesused;
}
delete inBuf;

ret = mConv0->output_plane.qBuffer(v4l2_buf, NULL);
if (ret < 0) {

}
return 1;
}
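The microsecond-to-timeval split used when filling v4l2_buf.timestamp can be checked in isolation; a tiny sketch (TimeParts/split_us are hypothetical helper names, not from the code above):

```cpp
#include <cassert>
#include <cstdint>

struct TimeParts { long sec; long usec; };

// Split a microsecond timestamp the same way inBuf_to_conv0() fills
// v4l2_buf.timestamp.tv_sec / tv_usec.
TimeParts split_us(uint64_t timestamp_us) {
    TimeParts t;
    t.sec  = (long)(timestamp_us / 1000000);
    t.usec = (long)(timestamp_us % 1000000);
    return t;
}
```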

#12
Posted 04/10/2018 10:59 AM   
Hi mynaemi:
Thank you for your reply!
Regarding "I cannot understand that you use render_dmabuf_fd for output_plane buffer of NvVideoEncoder, and you don't use cam_fd": the reason is that NvVideoEncoder requires YUV420 input, the data in cam_fd is YUV422, and the data in render_dmabuf_fd is already YUV420.
In MMAPI R28.1, based on the example 12_camera_v4l2_cuda, I added an encoder and achieved the function I wanted.
Since the 12_camera_v4l2_cuda example in MMAPI R28.2 changed to use NvBufferTransform() instead of the converter, and the program seems simpler, I wanted to follow that method to implement my function again, with the result described in this topic. Now I want to know whether my method is right. Can it be realized this way?
Expecting your reply soon! Thank you!

#13
Posted 04/11/2018 01:07 AM   
Hi feng, Please also refer to [url]https://devtalk.nvidia.com/default/topic/999493/jetson-tx1/nvidia-multimedia-apis-with-uyvy-sensor/post/5117049/#5117049[/url]
Hi DaneLLL:
Thanks for your reply!
The code you mentioned above is the same as the code I used in MMAPI R28.1, which converts YUV422 data to YUV420 with NvVideoConverter. It really does run correctly.
In MMAPI R28.2, which converts YUV422 data to YUV420 with NvBufferTransform(), my problem is: if I follow this method, how do I hand the YUV420 data in render_dmabuf_fd to the encoder output plane buffer? Can it be done?
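To make the question concrete, this is roughly what I imagine, assuming the encoder output_plane is set up with V4L2_MEMORY_DMABUF (a sketch only, not verified; index, ret, ctx, and MAX_PLANES are assumed to be in scope as in the samples):

```cpp
// Sketch: hand the converted dmabuf fd to the encoder output plane.
struct v4l2_buffer v4l2_buf;
struct v4l2_plane planes[MAX_PLANES];
memset(&v4l2_buf, 0, sizeof(v4l2_buf));
memset(planes, 0, sizeof(planes));
v4l2_buf.index = index;
v4l2_buf.m.planes = planes;
// Point plane 0 at the fd that NvBufferTransform() wrote YUV420 data into
v4l2_buf.m.planes[0].m.fd = ctx->render_dmabuf_fd;
v4l2_buf.m.planes[0].bytesused = 1; // non-zero, so it is not treated as EOS
ret = ctx->enc->output_plane.qBuffer(v4l2_buf, NULL);
```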
Expecting your reply soon! Thanks!

#15
Posted 04/11/2018 09:00 AM   