Saving CSI nvcamera raw frames to file using gstreamer.

In order to save the raw frame data, I’m trying:

$ gst-launch-1.0 nvcamerasrc num-buffers=10 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! filesink location=video.raw

Unfortunately, the generated file only contains 480 bytes per frame, instead of the expected 1920*1080*1.5 bytes for a full I420 frame.

In fact, if I use gstreamer programmatically and set up an appsink, the new_sample callback gets called 30 times per second, but the size of the buffer is invariably 480 bytes. I'd post the code, but getting the simple pipeline above to work should be the first step.

Interestingly, the following works for h264:

$ gst-launch-1.0 nvcamerasrc num-buffers=100 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc ! h264parse ! qtmux ! filesink location=video.mp4

Breaking up the pipeline:
nvcamerasrc - data source is CSI camera
omxh264enc - encodes to h.264
h264parse - not sure what it actually does, but it is needed
qtmux - puts the video into a quicktime container
filesink - saves the quicktime file

If I then:
$ vlc video.mp4

It plays just fine.

So somehow omxh264enc can get the real frame data, and is not stuck with the measly 480 bytes that I get when trying to save to file.

How to get the raw frame data?

To answer my own question: you need to add nvvidconv with the same caps, minus the memory specification. Now the frames appear to be complete (1.5*1920*1080 = 3110400 bytes).

e.g.
gst-launch-1.0 nvcamerasrc num-buffers=1 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420' ! filesink location=video.raw

What I really need now is the actual Bayer pattern (or GRAY8) from the sensor, not the I420 that the hardware produces. But that is another question.
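
In case it helps anyone doing this programmatically, here is a rough sketch (untested) of the appsink setup with the same nvvidconv fix, written in Python with the PyGObject GStreamer 1.0 bindings, which I am assuming here. Without nvvidconv the callback sees the 480-byte NVMM handles; with it the buffers are full frames:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Same caps as the command line above, with nvvidconv copying the
# frames out of NVMM memory before they reach the appsink.
pipeline = Gst.parse_launch(
    'nvcamerasrc fpsRange="30.0 30.0" ! '
    'video/x-raw(memory:NVMM), width=1920, height=1080, '
    'format=I420, framerate=30/1 ! nvvidconv ! '
    'video/x-raw, width=1920, height=1080, format=I420 ! '
    'appsink name=sink emit-signals=true max-buffers=2 drop=true')

def on_new_sample(sink):
    sample = sink.emit('pull-sample')
    buf = sample.get_buffer()
    # Should print 3110400 (1.5 * 1920 * 1080), not 480.
    print('buffer size:', buf.get_size())
    return Gst.FlowReturn.OK

sink = pipeline.get_by_name('sink')
sink.connect('new-sample', on_new_sample)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()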

How did you know you needed to add nvvidconv? Where can I get some references about nvcamerasrc or nvvidconv?

@fwmao I have not found any real documentation on them. The downloadable "L4T Media User Guide" is more of a cookbook than a real guide; it doesn't explain much. Regarding nvvidconv, it doesn't say much beyond it being 'proprietary' and able to do scaling (it now turns out it can also convert from NVMM to main memory), and I haven't found any source code.

The way I thought of the solution was simply this: I found the "memory:NVMM" caps a little suspicious (NVMM doesn't seem to be defined anywhere; I only found some vague glossary entry), and the fact that I always got 480 bytes regardless of video resolution made me think it is some kind of struct with information about where the real frame data is. Then I saw "nvvidconv ! video/x-raw, …" in one of those Media User Guide recipes, so I gave it a shot.

Regarding nvcamerasrc, '$ gst-inspect-1.0 nvcamerasrc' will give you some information, including the binary file location:

/usr/lib/arm-linux-gnueabihf/gstreamer-1.0/libgstnvcamera.so

Unfortunately I have not found its source code, which would be really useful for anyone trying to support any other sensor on this hardware.

@apalopohapa Did you find a way to get the raw frame data?
The other question: using "gst-launch-1.0 nvcamerasrc …" we can get I420 format. Do you know whether the raw → RGB → I420 conversion in this command is done by the GPU or by the ARM cores?

OK, maybe "video/x-raw(memory:NVMM)" means the conversion is done by the GPU?

Hello,
If you want to get YUV data from the camera source, try the following pipeline:

gst-launch-1.0 nvcamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)I420, framerate=(fraction)30/1' ! filesink location=camera.yuv

br
ChenJIan

@fwmao I did not find a way to get the raw data using gstreamer, but the v4l2 interface provided in R23.2 will do it (confirmed). Not by default, though: you have to enable some .config options and disable others, then recompile the kernel. The .config options to change are documented in the Tegra_Linux_Driver_Package_Documents_R23.2, in the "Video for Linux User Guide" section.

On a side note, for some reason I was only getting 45 fps in 1280x720 mode (instead of 60 or 120). I don't know why yet, but it is something to keep in mind. You can check this with the yavta example ($ ./yavta /dev/video0 -c600 -n8 -s1280x720 -fSRGGB10), even with a large number of v4l2 frame buffers (e.g. the -n32 option) and without writing to disk. Btw, yavta defaults to mmapped memory.

Can you paste the command line you are using?

br
Chenjian

I wanted to update this thread and confirm that, with the register configuration settings in the provided ov5693 driver for 720p mode, the sensor only outputs 45 fps. It is not that the hardware can't handle it or that frames are being lost; the sensor really is just outputting 45 fps.

Hello,

How can the .yuv file be opened and displayed again after it has been saved as above?

regards,
lhampel

You can convert it to a normal image file (e.g. png) using a python script, and then open it with your favorite image viewer. Install Python 2.7, then the Python Imaging Library (PIL) http://www.pythonware.com/products/pil/. The script simply reads the .yuv binary file, calculates the RGB components, creates the new image pixel by pixel, and then saves it in whatever image format you wish.

To interpret the .yuv file: it is stored in I420 planes, so for example the .yuv of an 8-pixel image would be YYYYYYYYUUVV.
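
Something along these lines (a rough sketch, untested; it assumes a single 1920x1080 I420 frame saved as camera.yuv, as in the pipeline above):

from PIL import Image

W, H = 1920, 1080

f = open('camera.yuv', 'rb')
# I420 layout: full-size Y plane, then quarter-size U and V planes.
y = bytearray(f.read(W * H))
u = bytearray(f.read(W * H // 4))
v = bytearray(f.read(W * H // 4))
f.close()

img = Image.new('RGB', (W, H))
pix = img.load()
for row in range(H):
    for col in range(W):
        Y = y[row * W + col]
        # U and V are subsampled 2x2.
        U = u[(row // 2) * (W // 2) + col // 2] - 128
        V = v[(row // 2) * (W // 2) + col // 2] - 128
        # BT.601 YUV -> RGB, clamped to [0, 255].
        r = max(0, min(255, int(Y + 1.402 * V)))
        g = max(0, min(255, int(Y - 0.344 * U - 0.714 * V)))
        b = max(0, min(255, int(Y + 1.772 * U)))
        pix[col, row] = (r, g, b)

img.save('camera.png')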

I'm trying to use the opencv VideoCapture class to capture these camera images.
Does anyone know what argument to give this constructor for the CSI camera on the Jetson?
0 or -1 doesn't work.
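
One approach that may work (an untested sketch; it requires OpenCV built with GStreamer support) is to pass a full GStreamer pipeline string, ending in appsink, instead of a device index:

import cv2

# nvvidconv copies the frames out of NVMM, videoconvert produces the
# BGR layout that OpenCV expects.
pipeline = ('nvcamerasrc ! video/x-raw(memory:NVMM), width=1920, '
            'height=1080, format=I420, framerate=30/1 ! nvvidconv ! '
            'video/x-raw, format=I420 ! videoconvert ! '
            'video/x-raw, format=BGR ! appsink')

cap = cv2.VideoCapture(pipeline)
if not cap.isOpened():
    raise RuntimeError('pipeline failed; was OpenCV built with GStreamer?')

ok, frame = cap.read()
if ok:
    print(frame.shape)  # expect (1080, 1920, 3)
cap.release()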

Hello guys,

I am trying to use nvcamerasrc (the CSI camera) from Python on the NVIDIA board to save MP4 video files using gstreamer 1.0.

The problem is that my current pipeline is not working:

pipeline = Gst.parse_launch("nvcamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420 ! omxh264enc ! qtmux ! filesink location=test.mp4")

I have tried all the combinations, but the result is either a 0-byte file or a raw, unencoded file.
The problem seems to be that the pipeline is not being closed successfully (sending a new EOS event), or maybe it is the encoding elements, etc.

For AVI it is working fine.

Has anyone created a Python script that uses nvcamerasrc on the NVIDIA TX2 to save multiple MP4 video files with gstreamer, without OpenCV?
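
In case it is the muxer: qtmux only finalizes the MP4 when it receives end-of-stream, so a 0-byte file usually means the EOS never reached the filesink. A minimal sketch (untested) of the shutdown sequence, also adding the h264parse element from the working command line earlier in the thread:

import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'nvcamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, '
    'framerate=30/1, format=I420 ! omxh264enc ! h264parse ! qtmux ! '
    'filesink location=test.mp4')

pipeline.set_state(Gst.State.PLAYING)
time.sleep(10)  # record for ~10 seconds

# Send EOS and wait for it to travel through the muxer before tearing
# the pipeline down; otherwise the MP4 headers are never written.
pipeline.send_event(Gst.Event.new_eos())
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)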