Serious security issue with CUDA on Linux
Hello,

We have recently found a serious security vulnerability in the CUDA Linux drivers.
The problem is related to the cudaHostAlloc/cuMemHostAlloc API calls. In brief, the
driver maps pinned memory into user space but does not initialize it to zero.
As an example, our simplest proof-of-concept program was able to read large
fragments of files being written or read by other users.
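
For illustration, a minimal sketch of this kind of check (simplified, not the actual proof-of-concept code; the 64 MB buffer size is arbitrary):

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Allocate pinned host memory and scan it for non-zero bytes *before*
     * writing anything to it. On an unpatched driver, the buffer can still
     * hold stale data from other processes or the page cache. */
    int main(void)
    {
        const size_t size = 64 * 1024 * 1024;   /* 64 MB, arbitrary */
        unsigned char *buf = NULL;
        size_t i, stale = 0;

        if (cudaHostAlloc((void **)&buf, size, cudaHostAllocDefault)
                != cudaSuccess) {
            fprintf(stderr, "cudaHostAlloc failed\n");
            return 1;
        }

        for (i = 0; i < size; i++)
            if (buf[i] != 0)
                stale++;
        printf("%zu of %zu bytes are non-zero (possible leaked data)\n",
               stale, size);

        cudaFreeHost(buf);
        return 0;
    }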

More information on this bug is available here:

http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7675-1380-00.htm

http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7676-1022+00.htm

http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7677-1391+00.htm

http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7681-487+00.htm

Kind regards,

Alex Granovsky
Firefly Project
http://classic.chem.msu.su/gran/firefly/
#1
Posted 01/11/2011 05:20 AM   
I find it surprising that Linux does not automatically clear new memory pages...

Besides, how many people might assume the memory returned from cudaHostAlloc is zeroed? When you memcpy a newly allocated device buffer to host memory, you can see that device memory always comes back zeroed...
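
For what it's worth, that observation is easy to test yourself (a quick sketch; the size is arbitrary, and note that the CUDA documentation does not actually guarantee that fresh device allocations read back as zero):

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Copy a freshly allocated device buffer to the host and count non-zero
     * bytes. Reading back all zeros is an implementation detail, not a
     * documented guarantee. */
    int main(void)
    {
        const size_t size = 16 * 1024 * 1024;   /* 16 MB, arbitrary */
        unsigned char *dev = NULL;
        unsigned char *host = malloc(size);
        size_t i, nonzero = 0;

        if (host == NULL || cudaMalloc((void **)&dev, size) != cudaSuccess) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        cudaMemcpy(host, dev, size, cudaMemcpyDeviceToHost);

        for (i = 0; i < size; i++)
            if (host[i] != 0)
                nonzero++;
        printf("%zu non-zero bytes in a fresh device buffer\n", nonzero);

        cudaFree(dev);
        free(host);
        return 0;
    }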

Considering that people regard the Linux kernel as more secure than Windows, this shows that in some cases the opposite is true.
#2
Posted 01/12/2011 10:24 AM   
Quick update on this...

We have a fix for this issue and will release updated drivers that contain the fix as well as a patch kit for previous drivers early next week.

When the new drivers and patch kit are available, we’ll post the links here.
#3
Posted 01/20/2011 11:13 PM   
Linux gives you the option to choose between zeroed memory (get_zeroed_page) and non-zeroed memory (__get_free_pages).
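
A minimal sketch of the two allocation paths (illustrative only, not NVIDIA's actual driver code):

    #include <linux/gfp.h>

    static unsigned long alloc_page_for_user_mapping(void)
    {
        unsigned long dirty, clean;

        /* Unsafe for memory that will be mapped to user space:
         * the page still holds whatever data it had before. */
        dirty = __get_free_pages(GFP_KERNEL, 0);
        free_pages(dirty, 0);

        /* Safe: the page is cleared before it is returned.
         * __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0) is equivalent. */
        clean = get_zeroed_page(GFP_KERNEL);

        return clean;   /* caller frees with free_page() */
    }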

Seems like an easy enough fix at least ;)

Just allocate memory at initialization (like you should be doing anyway) and the overhead from zeroing will be fine.
#4
Posted 01/23/2011 08:55 PM   
The new driver is out with the bug fix for both the 260 stable and 270 beta branches.
#5
Posted 01/24/2011 08:43 PM   
For convenience, direct links to the R260 drivers and patch kit below:

http://www.nvidia.com/object/linux-display-ia32-260.19.36-driver.html
http://www.nvidia.com/object/linux-display-amd64-260.19.36-driver.html

http://developer.download.nvidia.com/misc/patches/sysmem_clear_on_allocation/sysmem_clear_on_allocation.zip
#6
Posted 02/01/2011 06:04 PM   