Enable PureVideo under Linux (MPEG-4 / H.264 XvMC)

This is slightly off-topic, but it is kind of related and I could not find another developer forum where it belongs, so here goes:

Will NVIDIA ever make its PureVideo™ technology for hardware-accelerated video decoding available under Linux (as it has done with CUDA)?

@NVIDIA® Corporation, please let users and developers use your PureVideo API to accelerate high-definition H.264, WMV, and MPEG-2 video decoding through XvMC (or another library if XvMC is not an option; XvMC is the Linux platform's open source answer to DxVA, Microsoft's DirectX Video Acceleration API, so it would be the natural choice).

Best for the open source community would be if NVIDIA opened up the API as much as it has opened up its CUDA API, with good documentation, specifications, sample code, and reference libraries, enabling developers to add PureVideo support to software running under Linux on a computer with a DirectX 9 (or 10) NVIDIA GPU that supports the PureVideo technology. Take VIA Technologies as a good example of a company that has gone open source and helped implement hardware acceleration of MPEG-4 ASP (H.263) video on its Unichrome (S3) graphics processors via XvMC additions.

It is not only me; this is a broadly wanted feature. Popular open source multimedia players on Linux, such as those below, need GPU hardware acceleration to assist with decoding MPEG-4 AVC (H.264) at native 1080p resolution:
http://www.mplayerhq.hu
http://www.videolan.org
http://www.xinehq.de
http://www.mythtv.org
http://www.linuxmce.com
http://www.xboxmediacenter.com (yes, XBMC is being ported to Linux as I write this)

The processes that could possibly be accelerated by an NVIDIA graphics processor are listed below (a short C sketch of the iDCT stage follows the list):

  • Motion compensation (mo comp)
  • Inverse Discrete Cosine Transform (iDCT)
      ◦ Inverse telecine 3:2 and 2:2 pull-down correction
  • Bitstream processing (CAVLC/CABAC)
  • In-loop deblocking
  • Inverse quantization (IQ)
  • Variable-Length Decoding (VLD)
  • Deinterlacing (spatial-temporal de-interlacing)
      ◦ Plus automatic interlace/progressive source detection
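
To give a feel for what the iDCT stage actually computes (and why it is worth offloading), here is a minimal, unoptimized C sketch of the separable 8×8 inverse DCT used by MPEG-2-class codecs. Real decoders use fast integer approximations and clamp the result to the pixel range; this is only meant to illustrate the per-block arithmetic.

[code]
/* Minimal, unoptimized 8x8 inverse DCT, written separably:
 * a 1-D IDCT over each row, then over each column.
 * Illustration only: real decoders use fast integer approximations
 * and clamp the output to the pixel range. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static void idct_1d(const double in[8], double out[8])
{
    for (int x = 0; x < 8; x++) {
        double sum = 0.0;
        for (int u = 0; u < 8; u++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            sum += 0.5 * cu * in[u] * cos((2 * x + 1) * u * M_PI / 16.0);
        }
        out[x] = sum;
    }
}

void idct_8x8(double block[8][8])
{
    double tmp[8][8], col_in[8], col_out[8];

    for (int r = 0; r < 8; r++)            /* rows first ...   */
        idct_1d(block[r], tmp[r]);

    for (int c = 0; c < 8; c++) {          /* ... then columns */
        for (int r = 0; r < 8; r++)
            col_in[r] = tmp[r][c];
        idct_1d(col_in, col_out);
        for (int r = 0; r < 8; r++)
            block[r][c] = col_out[r];
    }
}
[/code]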

PS: Most (if not all) open source media software for Linux uses the FFmpeg open source codec library to decode the most popular video (and audio) formats (FFmpeg can even decode WMV9 and VC-1 video). So it would be great if FFmpeg could be used as a reference design for NVIDIA PureVideo utilization under Linux. Note that bitstream processing (CAVLC/CABAC) and in-loop deblocking are probably the two most important stages to have running on the GPU in order to play back MPEG-4 AVC (H.264) at native 1080p resolution, but motion compensation (mo comp) and iDCT are probably the simplest ones to implement.

I am not a video expert, but when I install the NVIDIA driver, it also installs a libXvMCNVIDIA.so and .a.
It works nicely with MPlayer; see the documentation on how to use it.

Is this what you are looking for, or is it incomplete? What's the difference from VIA's lib?

Peter

The Linux drivers do have MPEG-2 acceleration support via XvMC.
It's easy to use with xine: xine -V xxmc dvd://
But nothing else.

@prkipfer, as Mr_Maniac already pointed out, the libXvMCNVIDIA.so that comes with the official closed source NVIDIA binary device driver for Linux only supports hardware acceleration of mo comp (Motion Compensation) and iDCT (Inverse Discrete Cosine Transform) for MPEG-2.

VIA's open source Unichrome device drivers support the same for MPEG-2, but they also support hardware acceleration of mo comp and iDCT for MPEG-4 ASP (H.263), the standard used in the DivX and Xvid video that is so popular with internet file-sharers.
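
To make concrete what "XvMC support" means from the application side, here is a rough C sketch that asks the driver which surface types, and thus which codecs and acceleration levels, it exposes. It assumes you already have a Display and an XvPortID from XOpenDisplay()/XvQueryAdaptors(), and it skips all error handling. With NVIDIA's current libXvMCNVIDIA you would only expect MPEG-2 entries back; with VIA's driver you would also see H.263-class entries.

[code]
/* Rough sketch: ask an XvMC driver which surface types it exposes.
 * Assumes dpy and port were obtained via XOpenDisplay()/XvQueryAdaptors();
 * link with -lX11 -lXv -lXvMC.  Error handling omitted. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>
#include <X11/extensions/XvMClib.h>

void list_xvmc_surfaces(Display *dpy, XvPortID port)
{
    int num = 0;
    XvMCSurfaceInfo *info = XvMCListSurfaceTypes(dpy, port, &num);

    for (int i = 0; i < num; i++) {
        /* mc_type combines a codec id (XVMC_MPEG_2, XVMC_H263, ...) with
         * an acceleration level (XVMC_MOCOMP or XVMC_IDCT). */
        int codec = info[i].mc_type & ~XVMC_IDCT;
        printf("surface 0x%x: %s, %s, max %ux%u\n",
               (unsigned)info[i].surface_type_id,
               codec == XVMC_MPEG_2 ? "MPEG-2" :
               codec == XVMC_H263   ? "H.263 / MPEG-4 ASP" : "other codec",
               (info[i].mc_type & XVMC_IDCT) ? "iDCT + mo comp" : "mo comp only",
               (unsigned)info[i].max_width, (unsigned)info[i].max_height);
    }
    if (info)
        XFree(info);
}
[/code]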

NVIDIA's DirectX 9-based GPU hardware (and later), in addition to MPEG-2, also supports hardware acceleration of MPEG-4 AVC (H.264) and WMV9 HD (which is basically the same as VC-1), the video codec formats used in HD DVD and Blu-ray movies. However, NVIDIA's device drivers currently only expose that hardware acceleration API, which it calls "PureVideo Technology", under Microsoft Windows operating systems, not on Linux.

So what I am asking NVIDIA for is to either release official Linux device drivers that support XvMC hardware acceleration of every video codec format the hardware supports (acceleration that is currently only available under Windows), or alternatively to release the specifications and documentation of the API, as it has done for CUDA, so developers can themselves add support for it to the open source XvMC library. Even better would be if NVIDIA open-sourced the API and the libraries needed to use it, but I assume that is too much to ask; baby steps and all.

All in all, all we want is to be able to use the same hardware acceleration under Linux that is already available under Windows; is that too much to ask?

No, that is not too much to ask. :)

I was not aware that there is H.264 support in hardware (or at least for some parts of it).

Out of interest: is there actually a low level NVIDIA API for that on Win, or do you need to talk to the card through some D3D9 extensions?

Peter

Yes, both NVIDIA and ATI support H.264 decoding in hardware, and they have had it for a while on the Windows platform. NVIDIA calls its technology "PureVideo", and ATI calls its technology "Avivo". Exactly which decoding processes they accelerate is not clear from the information available, except for the statement that they do MPEG-2, H.264, VC-1 and WMV9 decode acceleration (at native high-definition resolutions), plus spatial-temporal de-interlacing, noise reduction and edge enhancement as post-processing.

http://www.nvidia.com/page/purevideo.html

http://www.nvidia.com/page/pvhd_fb.html

http://ati.amd.com/technology/Avivo/index.html

http://ati.amd.com/technology/avivo/h264.html

Low-level access to the API is not publicly documented by NVIDIA as far as I know, so currently you have to use the high-level DXVA interface from the Microsoft DirectX 9 framework. However, only having high-level access is not necessarily a bad thing; it is meant to make development easier, but it requires that the manufacturer (NVIDIA) provide the device drivers and documentation required to access it, …which, as said, it currently only does for Microsoft Windows.

There should be an API for it, and NVIDIA could make that API accessible under Linux via the XvMC extension if it wanted to.

By the way, sorry for cross-posting, but how do you get NVIDIA's attention in here to get this done?:
[url=“http://forums.nvidia.com/index.php?showtopic=35695”]http://forums.nvidia.com/index.php?showtopic=35695[/url]

:P

FYI, according to this article the (huge) difference between NVIDIA's PureVideo technology and ATI's Avivo technology is that ATI Avivo uses pixel shaders to assist in decoding the video, while NVIDIA PureVideo is a true discrete programmable processing core inside the NVIDIA GPU. The NVIDIA PureVideo technology is a combination of a hardware video processor and video decode software, meaning it only offloads parts of the video decoding to the GPU (but since those are the 'heavy', processor-intensive parts, it shows a huge difference in CPU usage when using PureVideo vs. not using PureVideo).
[url=“http://www.bit-tech.net/news/2006/01/07/nvidia_decode_h264/”]http://www.bit-tech.net/news/2006/01/07/nvidia_decode_h264/[/url]

PS: You might also notice that the article is over one year old, which is how long PureVideo has been supported on Windows, since the ForceWare version 85 device driver came out. The first generation of NVIDIA's PureVideo technology is supported by all GeForce 6 and 7 series GPU hardware, which all have the PureVideo video engine.

Cyberace, thanks for the enlightenment.

So the point is that no one (except MS and NV) knows exactly what is actually being done on the hardware, as everything below the high-level API is opaque.

Writing a DXVA clone for Linux is probably a lot of work. From an exposed low-level API one could probably easily tell apart what is marketing and what is actual hardware processing. And MS would also not want to see a competing media lib shape up, given how protective they are about the DX source.

So here is a practical idea: if there really is dedicated video hardware on the chip, NV could expose access to it in CUDA, like the calls to the texture engines. You would upload some data buffer, set some config registers and have it process the data. That should not be too hard to incorporate into CUDA, would spare NV from writing a complete high-level API, and would give us the power to write nice video apps.
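
Just to sketch what that could look like from the application side: the buffer handling below uses the real CUDA driver API, but cuVideoIDCT() is a made-up placeholder for the kind of fixed-function entry point NV would have to expose; no such call exists today.

[code]
/* Sketch of the "upload a buffer and let the video engine chew on it" idea.
 * cuMemAlloc/cuMemcpyHtoD/... are the real CUDA driver API (a context is
 * assumed to exist already); cuVideoIDCT() is a HYPOTHETICAL entry point
 * that does not exist in CUDA today -- it is what we are asking for. */
#include <cuda.h>

/* HYPOTHETICAL: hand a buffer of coefficients to the dedicated video
 * processor, roughly the way the texture engines are exposed today. */
CUresult cuVideoIDCT(CUdeviceptr dst, CUdeviceptr src, unsigned int nbytes);

void idct_on_video_engine(const short *coeffs, short *pixels, unsigned int nbytes)
{
    CUdeviceptr d_in, d_out;

    cuMemAlloc(&d_in, nbytes);               /* device buffers           */
    cuMemAlloc(&d_out, nbytes);
    cuMemcpyHtoD(d_in, coeffs, nbytes);      /* upload coefficients      */

    cuVideoIDCT(d_out, d_in, nbytes);        /* HYPOTHETICAL engine call */

    cuMemcpyDtoH(pixels, d_out, nbytes);     /* read back the result     */
    cuMemFree(d_in);
    cuMemFree(d_out);
}
[/code]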

What do you think?

Peter

Good idea, Peter.

Before reading this topic, I had assumed PureVideo was done using mostly CUDA-visible hardware (maybe a peculiar texture mode), so thank you Cyberace for broadening my education.

There are a bunch of alternative video codecs that people may like to use, and giving access to nVidia's low-level hardware via CUDA may be a low-cost way for nVidia to tap into some of the open source community's interests. It wouldn't need to be perfect, just usable (but that may be a tall order too).

I have a friend and ex-colleague who may be very interested if it offered HD video-speed support, and I may be too in a few months (so it gives nVidia a couple of months to work on it :D )

Seriously, A Great Suggestion, and interesting education.

Good idea, thinking outside the box. Note, though, that CUDA is only available on NVIDIA GeForce 8 series hardware and newer, while PureVideo is supported on all NVIDIA GeForce 6 and GeForce 7 series hardware, so the API should be available for all of that hardware. Nevertheless, I think the high-level API should be XvMC, which is already the standard; no point in reinventing the wheel.

PS: Even though using the CUDA API and tools would be a possibility, I very much doubt that it would be complicated or hard for NVIDIA engineers and device driver developers to add support for the PureVideo API in their Linux device drivers, put together some documentation of the PureVideo API, and present it separately from CUDA. It is only a matter of man-hours (and of who will be willing to pay for those man-hours).

Yeah, for broader applicability, hooking it into XvMC would definitely be preferable.

Actually, I must admit that I was a bit selfish when suggesting the CUDA integration, as having functions like IDCT accelerated will also benefit other scientific algorithms… ;)

Peter

Even if iDCT, mo comp, etc. are only available via XvMC, there is nothing stopping you from using those processes for something other than video decoding (if it is possible to use iDCT, mo comp, etc. for something other than video decoding, that is) ;) …sure, for a developer it would be nice to only have to learn one API, but you have got to be flexible if you want broader access.

I too have a personal reason for wanting hardware video acceleration, but I'm not sure I would call that reason selfish when so many more people than me would also enjoy the benefits. You see, I'm one of the project managers of XBMC (Xbox Media Center), the free and open source multimedia entertainment center for the Xbox. Our developers are currently in the process of porting XBMC to Linux. We are doing this mainly because the Xbox hardware is not powerful enough to decode H.264 video in high definition (720p and 1080i/1080p), and we are looking into using the Apple TV as the next cheap hardware platform. It only has a 1 GHz Pentium M processor and an NVIDIA GeForce 7300 GPU, and that processor alone is not enough to decode H.264 video at 720p without some quality-sacrificing tweaks (and you can forget about 1080i/1080p). With GPU hardware acceleration the Apple TV should be able to decode H.264 at 720p without problems (and maybe, with some of those quality-sacrificing tweaks, also play back 1080i/1080p at an acceptable frame rate).

Sure, we might support a more expensive hardware platform in the future (like the PS3 or the next-generation Mac Mini), but most XBMC users only want a cheap networked video player, which is why the Xbox has served us so well and will continue to serve for standard-definition video. Many users want high definition, though, so we have to evolve and move with the times. The reason for not simply choosing a computer motherboard and building the hardware platform on that is that the life cycle of a specific motherboard does not even come close to that of a game console or the Apple TV, and having locked-down hardware makes developing and debugging perfect for our developers: every user runs the same hardware. Wow, what a rant; anyway, I hope to see this in the NVIDIA Linux drivers soon.

I was being selfish too. I'm interested in image and video processing, so access to the hardware without an intervening (proprietary) algorithm would suit me better.

Thanks again Peter and Cyberace

Ah, I see. Nice project! (Yes, I have it running on my Xbox :) )

Good luck with the Linux port.

The reason why I would prefer CUDA integration over XvMC calls is tighter coupling into the simulation. For sharing CUDA data with OpenGL, for example, you need a context switch (NV is currently working on making that fast) and explicit copy/bind/map calls; that will be similar for Xv surfaces. Binding in libXvMC complicates things further, and as CUDA runs without an X server, it might not be possible at all. Third, the hardware might support configuring the iDCT processor, for example, in a way that is not needed for video but is useful for simulation. Such a mode would therefore probably not be exposed in XvMC.
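
For illustration, the explicit register/map steps with the current CUDA runtime's OpenGL interop calls look roughly like this (a sketch only, CUDA 1.x names, no error handling); this is exactly the kind of overhead I would like to avoid:

[code]
/* The explicit register/map steps needed to let a CUDA kernel see an
 * OpenGL pixel buffer object, using the CUDA 1.x runtime GL interop calls.
 * Sketch only: error handling and the actual kernel launch are omitted. */
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void process_gl_buffer(GLuint pbo)
{
    void *dev_ptr = NULL;

    cudaGLRegisterBufferObject(pbo);         /* make the PBO known to CUDA     */
    cudaGLMapBufferObject(&dev_ptr, pbo);    /* map it into CUDA address space */

    /* ... launch CUDA kernels on dev_ptr here ... */

    cudaGLUnmapBufferObject(pbo);            /* hand the buffer back to OpenGL */
    cudaGLUnregisterBufferObject(pbo);
}
[/code]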

See, I am selfish …

Peter

Open-Source Nvidia Drivers Petition (over 2,500 people have signed it so far):

http://www.petitiononline.com/nvfoss/

I think this is relevant because open source drivers from NVIDIA would mean that others could try to implement full XvMC support for PureVideo in them if NVIDIA does not want to do it itself.

NVIDIA Corp developers can get help if they want…

Linux Kernel Devs Offer Free Driver Development:
[url=“Linux Kernel Devs Offer Free Driver Development - Slashdot”]http://linux.slashdot.org/linux/07/01/30/0...3.shtml?tid=162[/url]
Follow-up on the above:
[url=“http://www.linuxworld.com.au/index.php/id;58590129;fp;16;fpid;0”]http://www.linuxworld.com.au/index.php/id;...29;fp;16;fpid;0[/url]
Official website: http://www.linuxdriverproject.org

Also see: [url=“nouveau”]http://nouveau.freedesktop.org/wiki/[/url]
(Nouveau - Open Source 3D acceleration for nVidia cards)

I still suspect that with 8xxx GPUs the video acceleration part is implemented using something like CUDA, as including more (similar) circuitry for video decompression for a few codecs doesn’t sound like a good reason to increase the transistor count of an already extremely complicated processor.

I'm no expert on the specifics of MPEG-2/4 codecs, but I'm quite sure that CUDA would allow implementing motion compensation and probably a much bigger part of the compression/decompression process. CUDA introduced integer and bit operations, which also come in very handy when encoding and decoding bit streams.
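
As a small illustration of the bit twiddling involved: H.264 syntax elements are full of exponential-Golomb codes, and a naive reader for the unsigned variant, ue(v), looks roughly like this (a sketch only; no bounds checks, and a real parser also has to strip emulation-prevention bytes first).

[code]
/* Naive MSB-first bit reader plus unsigned exp-Golomb (ue(v)) decode,
 * as used all over H.264 header parsing.  Sketch only: no bounds checks,
 * and a real parser must strip emulation-prevention bytes first. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *buf;
    size_t bitpos;                    /* absolute bit position in buf */
} bitreader;

static unsigned read_bit(bitreader *br)
{
    unsigned bit = (br->buf[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1;
    br->bitpos++;
    return bit;
}

static unsigned read_ue(bitreader *br)
{
    int zeros = 0;
    while (read_bit(br) == 0)         /* count leading zero bits       */
        zeros++;

    unsigned info = 0;
    for (int i = 0; i < zeros; i++)   /* then read that many info bits */
        info = (info << 1) | read_bit(br);

    return (1u << zeros) - 1 + info;  /* codeNum = 2^zeros - 1 + info  */
}
[/code]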

Of course, you cannot expect NVidia to release the source code of their codecs, but it’d be interesting if you could implement various parts of decompression yourself using CUDA, maybe even beat their performance :)

Okay wumpus, it’s late, I’ll bite.

Why can’t we expect nvidia to release source for codecs?

I can think of plausible reasons for nvidia to increase its appeal to consumers and especially developers, not least because nvidia is likely to come under increasing commercial pressure from AMD+ATI and from Intel's GPU efforts. Source code can be a powerful magnet for techies.

Open source codecs already exist from others (FFmpeg); the need is for an open API and possibly open source drivers:

Simplest for the NVIDIA Corporation would be to do something similar to what Intel did and hire/fund a professional company like Tungsten Graphics to help program open source device drivers for the Nouveau project. Tungsten Graphics is a company whose Development Services department specializes in, among other things, programming 2D and 3D display drivers for Linux, FreeBSD and embedded operating systems. The Nouveau project is a community project made up of developers who are determined to write open source device drivers for all NVIDIA graphics adapters, drivers that fully support 3D hardware acceleration and all the features that NVIDIA's closed source device drivers support (today they are trying to do this via reverse engineering, which is not an effective method when NVIDIA does not even provide detailed technical documentation/specifications sufficient for independent developers to write open source drivers).

http://en.wikipedia.org/wiki/Graphics_hardware_and_FOSS

http://www.linuxsymposium.org/2006/linuxsymposium_procv1.pdf

What does it take to get the NVIDIA Corporation to take notice of the OSS community?

http://www.tungstengraphics.com/tech.htm