I am looking for an example of, or consultancy on, how to receive a CSI camera frame in a unified memory buffer. I would prefer to use Argus for reading the camera frame, and would like to pre-allocate a unified memory buffer using cudaMallocManaged(), then pass the buffer address returned by cudaMallocManaged() to the Argus API so the frame is read directly into that buffer.
When I looked around for an example to follow, I found /home/ubuntu/tegra_multimedia_api/samples/v4l2cuda/capture.cpp. However, it does not seem to work with a CSI camera.
Is there a way to pass a unified memory buffer pointer to iFrameConsumer->acquireFrame()?
My motivation for all this is to avoid the frame buffer copies I would otherwise have to do. We process each frame partly on the GPU and partly on the CPU, and in this scenario a unified memory buffer would help.
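For reference, the kind of zero-copy CPU/GPU sharing I have in mind looks like this in plain CUDA (a minimal sketch with a dummy frame; the open question is how to get Argus to write a captured frame into such a buffer):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial GPU stage of the pipeline: invert each pixel.
__global__ void invert(unsigned char *pixels, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] = 255 - pixels[i];
}

int main()
{
    const int n = 1920 * 1080;   // one 8-bit plane of a 1080p frame
    unsigned char *frame = nullptr;

    // One allocation visible to both CPU and GPU -- no explicit copies.
    cudaMallocManaged(&frame, n);

    // CPU stage: stand-in for the captured frame.
    for (int i = 0; i < n; ++i)
        frame[i] = 100;

    // GPU stage, operating on the same pointer.
    invert<<<(n + 255) / 256, 256>>>(frame, n);
    cudaDeviceSynchronize();

    // Back on the CPU: same pointer, no cudaMemcpy.
    printf("%d\n", frame[0]);    // prints 155
    cudaFree(frame);
    return 0;
}
```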
Sorry, I am not able to connect my question with the pointer/patch you gave earlier.
I am looking to allocate a buffer in unified memory so that I can process the frame interchangeably on the CPU and GPU. I was able to build and run the patch you posted on 6/7/17, but are the frame buffers allocated in that patch residing in unified memory? I am not sure.
Also, when you acquire the frame in your patch:
// Acquire a frame.
UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
IFrame *iFrame = interface_cast<IFrame>(frame);
if (!iFrame)
    break;

// Get the IImageNativeBuffer extension interface and create the fd.
NV::IImageNativeBuffer *iNativeBuffer =
    interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
if (!iNativeBuffer)
    ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");
fd = iNativeBuffer->createNvBuffer(Size(ctx.width, ctx.height),
                                   NvBufferColorFormat_YUV420,
                                   NvBufferLayout_BlockLinear);
How can we ensure that the pixels land in a unified memory buffer?
Hi dumbogeorge,
We don’t support unified memory in this case.
We have APIs to do CPU/GPU processing on memory from Argus. You can check tegra_multimedia_api/include/nvbuf_utils.h
For GPU processing:
EGLImageKHR NvEGLImageFromFd (EGLDisplay display, int dmabuf_fd);
int NvDestroyEGLImage (EGLDisplay display, EGLImageKHR eglImage);
For CPU processing:
int NvBufferMemMap (int dmabuf_fd, unsigned int plane, NvBufferMemFlags memflag, void **pVirtAddr);
int NvBufferMemSyncForCpu (int dmabuf_fd, unsigned int plane, void **pVirtAddr);
int NvBufferMemUnMap (int dmabuf_fd, unsigned int plane, void **pVirtAddr);
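Putting those CPU-side calls together, the access pattern might look roughly like this (a sketch only, assuming `dmabuf_fd` came from `iNativeBuffer->createNvBuffer()`; the plane index and layout depend on the NvBufferColorFormat chosen, here plane 0 is the Y plane of YUV420). This is Jetson-specific code and only builds against tegra_multimedia_api:

```cpp
#include "nvbuf_utils.h"   // Jetson tegra_multimedia_api header

// Sketch: map one plane of the dmabuf for CPU reads.
void process_plane_on_cpu(int dmabuf_fd)
{
    void *y_plane = NULL;

    // Map plane 0 into the process address space.
    if (NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read, &y_plane) != 0)
        return;

    // Make CPU caches coherent with what the hardware wrote.
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &y_plane);

    // ... CPU-side processing of y_plane goes here ...

    NvBufferMemUnMap(dmabuf_fd, 0, &y_plane);
}
```

For the GPU side, the same `dmabuf_fd` would go through NvEGLImageFromFd() and CUDA's EGL interop instead of being mapped for the CPU.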
Is there any plan to support unified memory for the purpose of camera frame capture? Would you create any new examples?
Are there examples for the CPU processing functions below?
int NvBufferMemMap (int dmabuf_fd, unsigned int plane, NvBufferMemFlags memflag, void **pVirtAddr);
int NvBufferMemSyncForCpu (int dmabuf_fd, unsigned int plane, void **pVirtAddr);
int NvBufferMemUnMap (int dmabuf_fd, unsigned int plane, void **pVirtAddr);
When I searched around, I found a few examples on the NVIDIA developer forum, but they do not seem to use unified memory.
Hi DaneLLL,
Thanks. In our case we process frames on both the CPU and the GPU. Is there an example I can follow for processing a frame on both? I need to access the frame first on the GPU, and then the same input frame needs to be accessed on the CPU.