setPixelFormat PIXEL_FMT_YCbCr_444_888 issue

Hi Guys,

I am trying to set the pixel format for the output camera stream using the setPixelFormat method. When I set the pixel format to PIXEL_FMT_YCbCr_420_888 I do not face any issue. However, when I set it to PIXEL_FMT_YCbCr_444_888 I get the following error:

execute:353 Failed to create left stream

Please find below the code snippet that produces the error:

bool execute()
{
    using namespace Argus;

    // Initialize the preview window and EGL display.
    Window &window = Window::getInstance();
    PROPAGATE_ERROR(g_display.initialize(window.getEGLNativeDisplay()));

    // Initialize the Argus camera provider.
    UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());

    // Get the ICameraProvider interface from the global CameraProvider.
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to get ICameraProvider interface");

    // Get the camera devices.
    std::vector<CameraDevice*> cameraDevices;
    iCameraProvider->getCameraDevices(&cameraDevices);

    if (cameraDevices.size() < 2)
        ORIGINATE_ERROR("Must have at least 2 sensors available");

    CameraDevice *cameraDevice = cameraDevices[0];
    ICameraProperties *iCameraProperties = interface_cast<ICameraProperties>(cameraDevice);
    if (!iCameraProperties)
        ORIGINATE_ERROR("Failed to get ICameraProperties interface");

    // Create the capture session. AutoControl will be based on what the 1st device sees.
    UniqueObj<CaptureSession> captureSessionCamera0(iCameraProvider->createCaptureSession(cameraDevices[0]));
    ICaptureSession *iCaptureSessionCamera0 = interface_cast<ICaptureSession>(captureSessionCamera0);
    if (!iCaptureSessionCamera0)
        ORIGINATE_ERROR("Failed to get capture session interface");

    // Get the sensor modes to determine the video output stream resolution.
    std::vector<Argus::SensorMode*> sensorModes;
    iCameraProperties->getSensorModes(&sensorModes);
    if (sensorModes.size() == 0)
        ORIGINATE_ERROR("Failed to get sensor modes");

    ISensorMode *iSensorMode = interface_cast<ISensorMode>(sensorModes[0]);
    if (!iSensorMode)
        ORIGINATE_ERROR("Failed to get sensor mode interface");

    // Create stream settings object and set settings common to both streams.
    UniqueObj<OutputStreamSettings> streamSettingsCamera0(iCaptureSessionCamera0->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettingsCamera0 = interface_cast<IOutputStreamSettings>(streamSettingsCamera0);
    if (!iStreamSettingsCamera0)
        ORIGINATE_ERROR("Failed to create OutputStreamSettings");
 
    // PIXEL_FMT_YCbCr_420_888 works here; PIXEL_FMT_YCbCr_444_888 makes stream creation fail below.
    // iStreamSettingsCamera0->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettingsCamera0->setPixelFormat(PIXEL_FMT_YCbCr_444_888);
    iStreamSettingsCamera0->setResolution(Argus::Size(1920, 1080));

    // Create egl streams
    iStreamSettingsCamera0->setCameraDevice(cameraDevices[0]);
    UniqueObj<OutputStream> streamLeft(iCaptureSessionCamera0->createOutputStream(streamSettingsCamera0.get()));
    IStream *iStreamLeft = interface_cast<IStream>(streamLeft);
    if (!iStreamLeft)
        ORIGINATE_ERROR("Failed to create left stream");

    UniqueObj<OutputStream> videoStreamCamera0(iCaptureSessionCamera0->createOutputStream(streamSettingsCamera0.get()));
    IStream *iVideoStreamCamera0 = interface_cast<IStream>(videoStreamCamera0);
    if (!iVideoStreamCamera0)
        ORIGINATE_ERROR("Failed to create video stream");

    // Create a request.
    UniqueObj<Request> requestCamera0(iCaptureSessionCamera0->createRequest());
    IRequest *iRequestCamera0 = interface_cast<IRequest>(requestCamera0);
    if (!iRequestCamera0)
        ORIGINATE_ERROR("Failed to create Request");

    // Enable the left stream in the request and set the frame rate.
    iRequestCamera0->enableOutputStream(streamLeft.get());
    ISourceSettings *iSourceSettingsCamera0 = interface_cast<ISourceSettings>(iRequestCamera0->getSourceSettings());
    if (!iSourceSettingsCamera0)
        ORIGINATE_ERROR("Failed to get ISourceSettings interface");
    iSourceSettingsCamera0->setFrameDurationRange(Argus::Range<uint64_t>(1e9/DEFAULT_FPS));

    // Start the repeating capture request.
    if (iCaptureSessionCamera0->repeat(requestCamera0.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request for preview");

    // Stop the capture requests and wait until they are complete.
    iCaptureSessionCamera0->stopRepeat();
    iCaptureSessionCamera0->waitForIdle();
    iStreamLeft->disconnect();

    // Shut down Argus.
    cameraProvider.reset();

    // Clean up the EGL display.
    PROPAGATE_ERROR(g_display.cleanup());

    return true;
}

} // namespace ArgusSamples

int main(int argc, char **argv)
{
    if (!ArgusSamples::execute())
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}

Hi lamegeorge,

In the current Argus release, only PIXEL_FMT_YCbCr_420_888 is supported. May I ask what your purpose is in using non-subsampled images?

Hi WayneWWW,

  1. We need higher-resolution U/V planes for image processing. Our algorithm requires detailed information in the luma as well as the chroma components.

  2. If it is not possible to meet this requirement, we would like to bring the luma and chroma components to the same scale. This could mean scaling up chroma or scaling down luma; please suggest which method would suit our need. (A naive sketch of the chroma-upscaling direction follows below.)

We are targeting real-time performance, so we would like to know the most optimized way of achieving this.
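
To make option 2 concrete, here is a naive CPU sketch of what we mean by scaling up chroma (nearest-neighbour duplication of each U/V sample into a 2x2 block). The buffers and strides are hypothetical, and a CPU loop like this is far too slow for our real-time target; it only illustrates the operation:

#include <cstdint>

// Illustration only: expand one chroma plane of a planar YUV420 frame
// (dimensions width/2 x height/2) to full resolution (width x height),
// turning the frame into planar YUV444. The Y plane is left untouched.
void upsampleChromaNN(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        const uint8_t *srcRow = src + (y / 2) * (width / 2);
        uint8_t *dstRow = dst + y * width;
        for (int x = 0; x < width; ++x)
            dstRow[x] = srcRow[x / 2]; // replicate each sample into a 2x2 block
    }
}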

Thanks

Hi WayneWWW,

Could you please let me know the best way to leverage the TX1 platform to scale down only the Y plane when we get the YUV420P format from the camera? In that case we could set the input resolution to 1920x1080 and scale down only the Y plane to 960x540, which would bring all the planes to the same resolution.
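
For reference, this is the operation we have in mind, as a naive CPU sketch (hypothetical buffers; in practice we would want the hardware scaler to do this):

#include <cstdint>

// Reference only: 2x2 box-filter downscale of the Y plane
// (e.g. 1920x1080 -> 960x540) so it matches the U/V planes of YUV420.
void downscaleLuma2x(const uint8_t *srcY, int srcW, int srcH, uint8_t *dstY)
{
    const int dstW = srcW / 2;
    for (int y = 0; y < srcH / 2; ++y) {
        const uint8_t *r0 = srcY + (2 * y) * srcW; // two source rows feed
        const uint8_t *r1 = r0 + srcW;             // one destination row
        uint8_t *out = dstY + y * dstW;
        for (int x = 0; x < dstW; ++x) {
            int sum = r0[2 * x] + r0[2 * x + 1] + r1[2 * x] + r1[2 * x + 1];
            out[x] = static_cast<uint8_t>((sum + 2) / 4); // rounded average
        }
    }
}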

Thanks

Hi lamegeorge,

Please refer to the MMAPI sample 07_video_convert for downscaling.

Hi WayneWWW,

Thanks for your response. I referred to the example. I believe you are pointing me to NvVideoConverter. Am I right? Actually, I have already run into a problem using the v4l2 buffers/classes, described at the following link:

https://devtalk.nvidia.com/default/topic/1022627/12_camera_v4l2_cuda-not-working/

I understand that the APIs to be used rely on v4l2 structures/objects/pipelines, which might not work with the camera (Sony IMX274, Bayer) that I am using. Please correct me if I am wrong.

Is there any alternative way to scale down using hardware scaler?

Thanks

Hi lamegeorge,

Sorry for the late reply; we have an Argus sample in MMAPI sample 10.

I do not know what your concern is. Do you mean your sensor does not use our ISP, so you are handling the raw format?

Hi WayneWWW,

Our application requires the YUV444 pixel format to be output from the camera. From previous responses, I understand that Argus does not support this format yet, although there is an option for it in the documentation.

I was referred to a few samples, but they make use of v4l2, which I believe does not work with our sensor. Hence I would like your advice on how to get the Y/U/V channels to the same resolution, for example 960x540.

Thanks

Hi dumbogeorge,

This is my understanding of your use case: transform your video input (YUV420) → YUV444.

I just checked, and our video converter is able to produce YUV444 output. Please try it with the YUV444 format.

Hi WayneWWW,

Thanks for your response.

  1. Are you referring to ‘nvvidconv’?

  2. Are you referring to the Argus code mentioned above?

     iStreamSettingsCamera0->setPixelFormat(PIXEL_FMT_YCbCr_444_888);

  3. Are you referring to a sample?

Thanks.

I was referring to the MMAPI sample.

If you look carefully, you can find that there is a format “V4L2_PIX_FMT_YUV444M”.

I guess you can use something like “ctx->conv->setOutputPlaneFormat” in the MMAPI sample to convert your YUV420 buffer into YUV444.
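
Roughly like this; a minimal sketch assuming the NvVideoConverter API from the MMAPI samples (the helper function name and error handling are mine; buffer setup and the queue/dequeue loop are omitted, see 07_video_convert for the full pipeline):

#include "NvVideoConverter.h" // from the tegra_multimedia_api samples

// Sketch: configure the converter to take pitch-linear YUV420 on its input
// (the "output plane" in V4L2 M2M terms) and produce YUV444 on its output
// (the "capture plane"). Buffer allocation and queueing are not shown.
bool setupYuv420ToYuv444(NvVideoConverter *&conv, int width, int height)
{
    conv = NvVideoConverter::createVideoConverter("conv0");
    if (!conv)
        return false;

    if (conv->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, width, height,
                                   V4L2_NV_BUFFER_LAYOUT_PITCH) < 0)
        return false;

    if (conv->setCapturePlaneFormat(V4L2_PIX_FMT_YUV444M, width, height,
                                    V4L2_NV_BUFFER_LAYOUT_PITCH) < 0)
        return false;

    return true;
}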

Hi WayneWWW,

Thanks for your response. As I posted earlier in this thread, I reckon v4l2 does not work with the sensors we are using:

"I believe you are pointing me to NvVideoConverter. Am I right? Actually, I have already run into a problem using the v4l2 buffers/classes, described at the following link:

https://devtalk.nvidia.com/default/topic/1022627/12_camera_v4l2_cuda-not-working/"

Is there any other way to accomplish this task?

Thanks

Hi,
PIXEL_FMT_YCbCr_444_888 is not supported as an Argus output format, so you need to convert the Argus YUV420 output into YUV444 via NvVideoConverter.

12_camera_v4l2_cuda is for YUV sensors/USB cameras that output YUV422 (UYVY, YUYV, …), not for Bayer sensors.