Hi,
I am seeing much higher CPU usage with omxh264dec than with nv_omx_h264dec.
The following post (by another user) describes something similar:
https://devtalk.nvidia.com/default/topic/965272/jetson-tk1/omxh264dec-vs-nv_omx_h264dec/
In that post, Shane clarified that the CPU usage difference must be due to the different GStreamer framework versions, but ideally the newer version should perform better than the older one.
When using omxh264dec, tegrastats shows 0% for GR3D.
[PS: I have enabled the performance governor for the CPU and set the GPU to its highest clock rate. I am also running tegrastats as root.]
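For reference, these are roughly the commands I use to pin the clocks. The cpufreq path is the standard Linux interface; the GPU override path and the 852 MHz value are from memory for my Jetson TK1 / L4T setup and may differ on other releases:
# set every CPU core to the performance governor
for c in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do echo performance | sudo tee $c; done
# force the TK1 GPU (gbus) clock to its maximum via debugfs (assumed path)
sudo sh -c 'echo 852000000 > /sys/kernel/debug/clock/override.gbus/rate'
sudo sh -c 'echo 1 > /sys/kernel/debug/clock/override.gbus/state'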
When using gstreamer-1.0, I am getting 50 FPS @ 96% CPU [h264 (Main) → NV12]
Command: gst-launch-1.0 filesrc location= ! avidemux ! queue ! h264parse ! omxh264dec ! fakesink -e
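In case my FPS measurement is off, the throughput can also be cross-checked with fpsdisplaysink wrapped around fakesink (a sketch of the same pipeline; the file path is omitted as above, and -v prints the rendered/dropped/current/average FPS messages):
gst-launch-1.0 -v filesrc location= ! avidemux ! queue ! h264parse ! omxh264dec ! fpsdisplaysink video-sink=fakesink text-overlay=false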
But when I was using gstreamer-0.10, the CPU usage was much lower. So, what is the difference between gstreamer-0.10 and gstreamer-1.0? Ideally, the newer version should improve/optimize things.
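For reference, the gstreamer-0.10 pipeline I am comparing against is essentially the same, just with the NVIDIA 0.10 decoder element swapped in (a sketch, with the file path again omitted):
gst-launch-0.10 filesrc location= ! avidemux ! queue ! h264parse ! nv_omx_h264dec ! fakesink -e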
It is also possible that my interpretation of results/stats is wrong. Please correct me if this is the case.
My last doubt is whether the GPU does the H.264 video decoding or whether there is other dedicated hardware for it, because tegrastats shows 0% for GR3D in both cases, while the EMC, AVP, and VDE stats do change.