Enable PureVideo under Linux (MPEG-4 / H.264 XvMC)
This is slightly off-topic, but it is kind of related and I could not find another developer forum where it belongs, so here goes:


Will NVIDIA ever make its [url="http://www.nvidia.com/page/purevideo.html"]PureVideo™ Technology[/url] for hardware-accelerated video decoding available under Linux (as you have done with CUDA)?

@NVIDIA® Corporation, please let users and developers use your PureVideo API to accelerate high-definition H.264, WMV, and MPEG-2 video decoding using [url="http://www.mythtv.org/wiki/index.php/XvMC"]XvMC[/url] (or another library if [url="http://www.mythtv.org/wiki/index.php/XvMC"]XvMC[/url] is not an option; [url="http://www.mythtv.org/wiki/index.php/XvMC"]XvMC[/url] is the Linux platform's open source answer to DXVA, the Microsoft DirectX Video Acceleration API, so it would be the natural choice).

The best thing for the open source community would be if NVIDIA opened up the API as much as it has opened up its CUDA API, with good documentation, specifications, sample code and reference libraries, enabling developers to activate support for PureVideo in software running under Linux on a computer with a DirectX 9 (or 10) NVIDIA GPU that supports the PureVideo technology. Take VIA Technologies as a good example of a company that has open sourced and helped implement hardware acceleration of MPEG-4 ASP (H.263) video on its Unichrome (S3) graphics processors via [url="http://www.mythtv.org/wiki/index.php/XvMC"]XvMC[/url] additions.

It is not only me; this is a broadly wanted feature. Popular open source multimedia players on Linux such as those below need GPU hardware acceleration to assist with decoding MPEG-4 AVC (H.264) at native 1080p resolution:
[url="http://www.mplayerhq.hu"]http://www.mplayerhq.hu[/url]
[url="http://www.videolan.org"]http://www.videolan.org[/url]
[url="http://www.xinehq.de"]http://www.xinehq.de[/url]
[url="http://www.mythtv.org"]http://www.mythtv.org[/url]
[url="http://www.linuxmce.com"]http://www.linuxmce.com[/url]
[url="http://www.xboxmediacenter.com"]http://www.xboxmediacenter.com[/url] (yes XBMC is being ported to Linux as I write this)

The processes that could possibly be accelerated by an NVIDIA graphics processor are:
* Motion compensation (mo comp)
* Inverse Discrete Cosine Transform (iDCT)
** Inverse telecine 3:2 and 2:2 pull-down correction
* Bitstream processing (CAVLC/CABAC)
* In-loop deblocking
* Inverse quantization (IQ)
* Variable-Length Decoding (VLD)
* Deinterlacing (spatial-temporal de-interlacing)
** Plus automatic interlaced/progressive source detection

PS! Most (if not all) open source software for Linux uses the [url="http://ffmpeg.mplayerhq.hu"]FFmpeg[/url] open source codec suite library to decode the most popular video (and audio) formats (FFmpeg can even decode WMV9 and VC-1 video), so it would be great if [url="http://ffmpeg.mplayerhq.hu"]FFmpeg[/url] could be used as a reference design for NVIDIA PureVideo utilization under Linux. Note that bitstream processing (CAVLC/CABAC) and in-loop deblocking are probably the two most important stages to have running on the GPU in order to play back MPEG-4 AVC (H.264) at native 1080p resolution, but motion compensation (mo comp) and iDCT are probably the simplest ones to implement.
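
To make that last point concrete, here is a naive reference 8x8 iDCT in plain C. It is only an illustration of the per-block arithmetic that "iDCT offload" moves from the CPU to the GPU; real decoders use fast fixed-point transforms instead of this direct form.

[code]
/* Naive reference 8x8 inverse DCT (illustration only; real decoders use
 * fast fixed-point transforms).  Build with:  cc idct.c -o idct -lm */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

static void idct_8x8(const double in[N][N], double out[N][N])
{
    int x, y, u, v;

    for (x = 0; x < N; x++) {
        for (y = 0; y < N; y++) {
            double sum = 0.0;
            for (u = 0; u < N; u++) {
                for (v = 0; v < N; v++) {
                    double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    sum += cu * cv * in[u][v]
                         * cos((2 * x + 1) * u * M_PI / (2.0 * N))
                         * cos((2 * y + 1) * v * M_PI / (2.0 * N));
                }
            }
            out[x][y] = sum / 4.0;   /* overall scale 2/N = 1/4 for N = 8 */
        }
    }
}

int main(void)
{
    double coeffs[N][N] = { { 0.0 } }, pixels[N][N];

    coeffs[0][0] = 1024.0;               /* a DC-only block...            */
    idct_8x8(coeffs, pixels);
    printf("flat block value: %.1f\n", pixels[0][0]);  /* ...comes out as 128.0 */
    return 0;
}
[/code]

A decoder has to run that (or its fast equivalent) on every 8x8 block of every frame, which is exactly the kind of repetitive per-block work a GPU is good at.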

#1
Posted 05/14/2007 02:56 PM   
I am not a video expert, but when I install the NVIDIA driver, it also installs a libXvMCNVIDIA.so and .a
It works nicely with MPlayer; see the documentation on how to use it.
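(If I remember correctly, the MPlayer invocation is something like mplayer -vo xvmc -vc ffmpeg12mc dvd://1, i.e. the xvmc video output driver together with the XvMC-aware MPEG-1/2 decoder.)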

Is this what you are looking for, or is it incomplete? What's the difference from VIA's lib?

Peter

#2
Posted 05/14/2007 03:43 PM   
The Linux drivers do have MPEG-2 acceleration support via XvMC.
It's easy to use with xine: xine -V xxmc dvd://
But nothing else.

CPU: Intel Core2Quad Q9450
Board: MSI NEO2-FR
RAM: 4 GB DDR2-800
Graphics: Palit GeForce GTX 460 Sonic
Sound: XFi Fatal1ty pro gamer
OS: Windows 7 x64, Gentoo Linux (always with the recent stable Kernel)

#3
Posted 05/14/2007 04:16 PM   
@prkipfer, as Mr_Maniac already pointed out, the libXvMCNVIDIA.so that comes with the official closed-source NVIDIA binary device driver for Linux only supports hardware acceleration of mo comp (motion compensation) and iDCT (inverse discrete cosine transform) for MPEG-2.
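
For anyone who wants to check what their driver actually exposes, here is a small, untested C sketch that walks the Xv adaptors and prints the XvMC surface types the installed XvMC library reports (build against libX11, libXv and libXvMC):

[code]
/* List the XvMC surface types a driver exposes (untested sketch).
 * Build with:  cc xvmcprobe.c -o xvmcprobe -lX11 -lXv -lXvMC */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>
#include <X11/extensions/XvMClib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XvAdaptorInfo *adaptors;
    unsigned int num_adaptors, i;
    int ev, err;

    if (!dpy || !XvMCQueryExtension(dpy, &ev, &err)) {
        fprintf(stderr, "XvMC extension not available\n");
        return 1;
    }
    if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy),
                        &num_adaptors, &adaptors) != Success)
        return 1;

    for (i = 0; i < num_adaptors; i++) {
        int num_surfaces, j;
        XvMCSurfaceInfo *s = XvMCListSurfaceTypes(dpy, adaptors[i].base_id,
                                                  &num_surfaces);
        for (j = 0; s != NULL && j < num_surfaces; j++) {
            /* mc_type tells you the codec and whether the driver offers
             * only motion compensation or the iDCT level as well. */
            printf("port 0x%lx: surface 0x%x, max %dx%d, %s level\n",
                   (unsigned long)adaptors[i].base_id, s[j].surface_type_id,
                   s[j].max_width, s[j].max_height,
                   (s[j].mc_type & XVMC_IDCT) ? "iDCT" : "mo comp");
        }
        if (s)
            XFree(s);
    }
    XvFreeAdaptorInfo(adaptors);
    XCloseDisplay(dpy);
    return 0;
}
[/code]

On the current NVIDIA driver you should only see MPEG-2 surface types; an H.264-capable driver would advertise additional ones.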

VIA's open source Unichrome device drivers support the same for MPEG-2, but they also support hardware acceleration of mo comp and iDCT for MPEG-4 ASP (H.263), you know, the standard used in the DivX and Xvid video that is so popular with internet file-sharers.

NVIDIA's DirectX 9-based GPU hardware (and later) supports, in addition to MPEG-2, hardware acceleration of MPEG-4 AVC (H.264) and WMV9 HD (which is basically the same as VC-1), the video codec formats used in HD DVD and Blu-ray movies. However, NVIDIA's device drivers currently only support that hardware acceleration API, which it calls "PureVideo Technology", under Microsoft Windows operating systems, not under Linux.

So what I am asking NVIDIA for is to either release official Linux device drivers that support XvMC hardware acceleration of every video codec format the hardware supports (acceleration that is currently only available under Windows), or alternatively release the specifications and documentation of the API, as it has done for CUDA, so developers can add support for it to the XvMC open source library themselves. Even better would be if NVIDIA open sourced the API and the libraries needed to use it, but I assume that is too much to ask; baby steps and all.

All in all, all we want is to be able to use the same hardware acceleration under Linux that is already available under Windows. Is that too much to ask?

#4
Posted 05/14/2007 09:28 PM   
[quote name='Cyberace' date='May 14 2007, 10:28 PM']All and all, all we want is to be able to use the same hardware accleration under Linux that is already available under Windows, is that too much to ask?
[/quote]

No, that is not too much to ask. :)

I was not aware that there is H.264 support in hardware (or at least for some parts of it).

Out of interest: is there actually a low level NVIDIA API for that on Win, or do you need to talk to the card through some D3D9 extensions?

Peter

#5
Posted 05/15/2007 03:20 PM   
[quote name='prkipfer' date='May 15 2007, 05:20 PM']I was not aware that there is H.264 support in hardware (or at least for some parts of it)[/quote]Yes, both NVIDIA and ATI support H.264 decoding in hardware, and they have had it for a while on the Windows platform. NVIDIA calls its technology "PureVideo" and ATI calls its technology "Avivo". Exactly which decoding processes they accelerate is not clear from the information available, except for the statement that they do MPEG-2, H.264, VC-1 and WMV9 decode acceleration (at native high-definition resolutions), plus spatial-temporal de-interlacing, noise reduction and edge enhancement as post-processing.
[url="http://www.nvidia.com/page/purevideo.html"]http://www.nvidia.com/page/purevideo.html[/url]
[url="http://www.nvidia.com/page/pvhd_fb.html"]http://www.nvidia.com/page/pvhd_fb.html[/url]
[url="http://ati.amd.com/technology/Avivo/index.html"]http://ati.amd.com/technology/Avivo/index.html[/url]
[url="http://ati.amd.com/technology/avivo/h264.html"]http://ati.amd.com/technology/avivo/h264.html[/url]

[quote name='prkipfer' date='May 15 2007, 05:20 PM']Out of interest: is there actually a low level NVIDIA API for that on Win, or do you need to talk to the card through some D3D9 extensions?[/quote]Low-level access to the API is not publicly documented by NVIDIA as far as I know, so currently you have to use the high-level DXVA interface from the Microsoft DirectX 9 framework. Only having high-level access is not necessarily a bad thing; it is meant to make development easier, but it requires that the manufacturer (NVIDIA) provides the device drivers and documentation needed to access it, ...which, as said, it currently only does for Microsoft Windows.

There must be an API for it underneath, and NVIDIA could make that API accessible under Linux via the XvMC extension if it wanted to.

#6
Posted 05/18/2007 02:06 PM   
By the way, sorry for cross-posting, but how do you get NVIDIA's attention in here to get this done?
[url="http://forums.nvidia.com/index.php?showtopic=35695"]http://forums.nvidia.com/index.php?showtopic=35695[/url]

:P

#7
Posted 05/18/2007 02:09 PM   
FYI, according to the article below, the (huge) difference between NVIDIA's PureVideo technology and ATI's Avivo technology is that ATI Avivo uses pixel shaders to assist in decoding the video, while NVIDIA PureVideo is a true discrete programmable processing core inside the NVIDIA GPU. The NVIDIA PureVideo technology is a combination of a hardware video processor and video decode software, meaning it only offloads parts of the video decoding to the GPU (but since those are the 'heavy', processor-intensive parts, it shows a huge difference in CPU usage when using PureVideo vs. not using it).
[url="http://www.bit-tech.net/news/2006/01/07/nvidia_decode_h264/"]http://www.bit-tech.net/news/2006/01/07/nvidia_decode_h264/[/url]

PS! You might also notice that the article is over a year old, which is how long PureVideo has been supported on Windows, ever since the ForceWare version 85 device driver came out. The first generation of NVIDIA's PureVideo technology is supported by all GeForce 6 and 7 series GPU hardware, which all have the PureVideo video engine.

#8
Posted 05/18/2007 02:32 PM   
[quote name='Cyberace' date='May 18 2007, 03:06 PM']Any low-level access to the API is not publicly documented by NVIDIA as far as I know, so currently you have to use high-level API interface in DXVA from Microsoft DirectX 9 framework.
[/quote]

Cyberace, thanks for the enlightenment.

So the point is that no one (except MS and NV) knows exactly what is actually being done on the hardware, as everything below the high-level API is opaque.

Writing a DXVA clone for Linux is probably a lot of work. With an exposed low-level API one could probably easily tell apart what is marketing and what is actual hardware processing. And MS would also not want to see a competing media lib shape up, given how protective they are about the DX source.

So here is a practical idea: if there really is dedicated video hardware on the chip, NV could expose access to it in CUDA, like the calls to the tex engines. You would upload some data buffer, set some config registers and make it process the data. That should not be too hard to incorporate into CUDA, it would spare NV from writing a complete high-level API, and it would give us the power to write nice video apps :magic:
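
Just to illustrate the idea (pure speculation on my part: everything named nvvp_* below is invented and does not exist in any NVIDIA API; only the cudaMalloc/cudaMemcpy/cudaMemset/cudaFree calls are real CUDA runtime functions), something along these lines:

[code]
/* Speculative sketch of what a CUDA-exposed video engine *could* look like.
 * The nvvp_* names are invented for illustration only.
 * Build with:  nvcc purevideo_sketch.c -o sketch */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Invented descriptor for the dedicated video processor. */
typedef struct {
    int codec;                /* e.g. 264 for H.264 */
    int width, height;
} nvvp_config;

/* Invented entry point, stubbed out here so the sketch compiles.
 * A real driver would kick the hardware video processor instead. */
static int nvvp_decode_frame(const nvvp_config *cfg, const void *dev_bitstream,
                             size_t nbytes, void *dev_frame, size_t frame_bytes)
{
    (void)cfg; (void)dev_bitstream; (void)nbytes;
    return cudaMemset(dev_frame, 0, frame_bytes) == cudaSuccess ? 0 : -1;
}

int main(void)
{
    const size_t nbytes = 64 * 1024;                 /* fake compressed data  */
    const size_t frame_bytes = 1920 * 1080 * 3 / 2;  /* one 1080p NV12 frame  */
    unsigned char *bitstream = (unsigned char *)calloc(1, nbytes);
    unsigned char *frame = (unsigned char *)malloc(frame_bytes);
    void *d_in = NULL, *d_out = NULL;
    nvvp_config cfg = { 264, 1920, 1080 };

    /* Upload the compressed data, exactly as one would for a CUDA kernel. */
    cudaMalloc(&d_in, nbytes);
    cudaMalloc(&d_out, frame_bytes);
    cudaMemcpy(d_in, bitstream, nbytes, cudaMemcpyHostToDevice);

    /* "Set some config registers and make it process the data." */
    if (nvvp_decode_frame(&cfg, d_in, nbytes, d_out, frame_bytes) == 0) {
        /* Read the decoded frame back (or map it straight into OpenGL/Xv). */
        cudaMemcpy(frame, d_out, frame_bytes, cudaMemcpyDeviceToHost);
        printf("decoded %dx%d frame\n", cfg.width, cfg.height);
    }

    cudaFree(d_in);
    cudaFree(d_out);
    free(bitstream);
    free(frame);
    return 0;
}
[/code]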

What do you think?

Peter

#9
Posted 05/18/2007 03:01 PM   
[quote name='prkipfer' date='May 18 2007, 04:01 PM']...
So here is a practical idea: if there is really dedicated video hardware on the chip, NV could expose access to it in CUDA, like the calls to the tex engines. You would upload some data buffer, set some config registers and make it process the data. That should not be too hard to incorporate into CUDA, would spare NV to write a complete high level API and would give us the power to write nice video apps  :magic:
...
[right][snapback]198539[/snapback][/right]
[/quote]
Good idea, Peter.

Before reading this topic, I had assumed PureVideo was done using mostly CUDA-visible hardware (maybe a peculiar texture mode), so thank you Cyberace for broadening my education.

There are a bunch of alternative video codecs that people may like to use, and giving access to nVidia's low-level hardware via CUDA may be a low-cost way for nVidia to tap into some of the open source community's interests. It wouldn't need to be perfect, just usable (but that may be a tall order too).

I have a friend and ex-colleague who may be very interested if it offered HD video-speed support, and I may be too in a few months (so it gives nVidia a couple of months to work on it :D )

Seriously, A Great Suggestion, and interesting education.

#10
Posted 05/18/2007 03:20 PM   
Good idea, thinking outside the box. Note though that CUDA is only available on NVIDIA GeForce 8 series hardware and newer, while PureVideo is supported on all NVIDIA GeForce 6 and GeForce 7 series hardware, so the API should be made available for all of that hardware too. Nevertheless, I think the high-level API should be XvMC, which is already the standard; no point in reinventing the wheel.
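
For reference, here is roughly what the XvMC object lifecycle looks like from the application side. This is a simplified, untested sketch: the port and surface_type_id are assumed to come from an XvQueryAdaptors/XvMCListSurfaceTypes probe, and the actual macroblock submission via XvMCRenderSurface is left out.

[code]
/* Simplified, untested XvMC lifecycle sketch.  Link against libXvMC plus a
 * hardware-specific implementation (e.g. -lXvMCNVIDIA). */
#include <X11/Xlib.h>
#include <X11/extensions/XvMClib.h>

int play_with_xvmc(Display *dpy, Window win, XvPortID port,
                   int surface_type_id, int width, int height)
{
    XvMCContext ctx;
    XvMCSurface surf;

    if (XvMCCreateContext(dpy, port, surface_type_id,
                          width, height, XVMC_DIRECT, &ctx) != Success)
        return -1;
    if (XvMCCreateSurface(dpy, &ctx, &surf) != Success) {
        XvMCDestroyContext(dpy, &ctx);
        return -1;
    }

    /* ... decode loop: fill the macroblock/block arrays (or, for a future
     * H.264-capable driver, the slice data) and call XvMCRenderSurface ... */

    /* Display the decoded surface and wait for the hardware to finish. */
    XvMCPutSurface(dpy, &surf, win, 0, 0, width, height,
                   0, 0, width, height, XVMC_FRAME_PICTURE);
    XvMCSyncSurface(dpy, &surf);

    XvMCDestroySurface(dpy, &surf);
    XvMCDestroyContext(dpy, &ctx);
    return 0;
}
[/code]

The client-side plumbing already exists; what is missing is a surface type for H.264 (and VC-1) and a driver behind it.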

PS! Even though using the CUDA API tools would be a possibility, I very much doubt it would be complicated or hard for NVIDIA's engineers and device driver developers to add support for the PureVideo API in their Linux device drivers, put together some documentation of the PureVideo API, and present it separately from CUDA. It is only a matter of man-hours (and of who is willing to pay for those man-hours).

#11
Posted 05/18/2007 03:56 PM   
Yeah, for broader applicability hooking it into XvMC would be definitely preferable.

Actually, I must admit that I was a bit selfish when suggesting the CUDA integration, as having functions like IDCT accelerated will also benefit other scientific algorithms... ;)

Peter

#12
Posted 05/18/2007 04:06 PM   
Even if iDCT, mo comp, etc. are only available via XvMC, there is nothing stopping you from using those processes for something other than video decoding (if it is possible to use iDCT, mo comp, etc. for anything other than video decoding, that is) ;) ...sure, for a developer it would be nice to only have to learn one API, but you have got to be flexible if you want broader access.

I too have a personal reason for wanting hardware video acceleration, but I'm not sure I would call that reason selfish when so many more people than me would also enjoy the benefits. You see, I'm one of the project managers of [url="http://en.wikipedia.org/wiki/XBMC"]XBMC (Xbox Media Center)[/url], the free and open source multimedia entertainment center for the Xbox. Our developers are currently in the process of porting XBMC to Linux, mainly because the Xbox hardware is not powerful enough to decode H.264 video in high definition (720p and 1080i/1080p), and we are looking into using the Apple TV as the next cheap hardware platform. It only has a 1 GHz Pentium M processor and an NVIDIA GeForce 7300 GPU, and that processor alone is not enough to decode H.264 video at 720p without some quality-sacrificing tweaks (and you can forget about 1080i/1080p). With GPU hardware acceleration, the Apple TV should be able to decode H.264 at 720p without problems (and maybe, with some of those quality-sacrificing tweaks, also play back 1080i/1080p at an acceptable frame rate).

Sure, we might support a more expensive hardware platform in the future (like the PS3 or the next-generation Mac Mini), but most XBMC users only want a cheap networked video player, which is why the Xbox has served us so well and will continue to serve for standard-definition video; many users want high definition, though, so we have to evolve and move with the times. The reason for not simply choosing a computer motherboard and building the hardware platform on that is that the life cycle of a specific motherboard does not even come close to that of a game console or the Apple TV, and having locked-down hardware makes developing and debugging on it perfect for our developers: every user runs the same hardware. Wow, what a rant. Anyway, I hope to see this in the NVIDIA Linux drivers soon.

#13
Posted 05/18/2007 05:25 PM   
[quote name='prkipfer' date='May 18 2007, 05:06 PM']Actually, I must admit that I was a bit selfish when suggesting the CUDA integration, as having functions like IDCT accelerated will also benefit other scientific algorithms... ;)
[right][snapback]198571[/snapback][/right]
[/quote]
I was being selfish too. I'm interested in image and video processing, so access to the hardware without an intervening (proprietary) algorithm would suit me better.

Thanks again Peter and Cyberace

#14
Posted 05/18/2007 05:27 PM   
[quote name='Cyberace' date='May 18 2007, 06:25 PM']You see I'm one of the project managers of [url="http://en.wikipedia.org/wiki/XBMC"]XBMC (Xbox Media Center)[/url], the free and open source multimedia entertainment center for the Xbox.
[/quote]

Ah, I see. Nice project! (Yes, I have it running on my Xbox :) )
Good luck with the Linux port.

The reason why I would prefer the CUDA integration over XvMC calls is tighter coupling into the simulation. For sharing CUDA data with OpenGL, for example, you need a context switch (NV is currently working on making that fast) and explicit copy/bind/map calls. It will be similar for Xv surfaces. Binding in libXvMC complicates things further, and as CUDA runs without an X server, it might not be possible at all. Third, the hardware might support configuring, for example, the iDCT processor in a way that is not needed for video but is useful for simulation. Such a mode would therefore probably not be exposed in XvMC.
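
To show what I mean by the explicit copy/bind/map calls, this is roughly the register/map/unmap dance for sharing a GL pixel buffer with CUDA: an untested sketch using the cuda_gl_interop.h calls from the toolkit; it assumes a current GL context, which is created here with GLUT just for the demo.

[code]
/* Untested sketch: map an OpenGL pixel buffer object into CUDA.
 * Build with something like:  nvcc interop.c -o interop -lglut -lGL */
#define GL_GLEXT_PROTOTYPES 1
#include <stdio.h>
#include <GL/glut.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

int main(int argc, char **argv)
{
    GLuint pbo;
    void *dev_ptr = NULL;
    const size_t nbytes = 1920 * 1080 * 4;   /* one RGBA 1080p frame */

    /* A GL context must be current before any of the interop calls. */
    glutInit(&argc, argv);
    glutCreateWindow("cuda-gl-interop");

    /* Create a pixel buffer object that will hold the frame. */
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, nbytes, NULL, GL_DYNAMIC_DRAW);

    /* Register + map: afterwards dev_ptr is a device pointer that a CUDA
     * kernel (or, hypothetically, a video engine) could write into. */
    cudaGLRegisterBufferObject(pbo);
    cudaGLMapBufferObject(&dev_ptr, pbo);

    printf("buffer mapped into CUDA at %p\n", dev_ptr);

    /* Unmap before GL touches the buffer again (e.g. glTexSubImage2D). */
    cudaGLUnmapBufferObject(pbo);
    cudaGLUnregisterBufferObject(pbo);

    glDeleteBuffers(1, &pbo);
    return 0;
}
[/code]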

See, I am selfish ... :whistling:

Peter

#15
Posted 05/18/2007 05:38 PM   