Building nvidia driver on kernel 3.9.0
I'm still working on it. Open and read are fairly trivial using the seq_file API; it's writes that are more complicated, which affects only the registry entry. I have it compiling, but I'm fairly sure the write code is not yet correct. You also need at least 3.10-rc2 to try it on 3.10, as 3.10-rc1 broke some workqueue APIs for non-GPL modules.
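For what it's worth, here is a minimal sketch of the seq_file pattern for a read-only /proc file, roughly what the open/read side ends up looking like; the names are illustrative, not the actual nv-procfs.c code:

[code]
/* Minimal seq_file sketch for a read-only /proc entry (hypothetical names). */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int nv_demo_show(struct seq_file *m, void *v)
{
    seq_printf(m, "example contents\n");   /* whatever the file reports */
    return 0;
}

static int nv_demo_open(struct inode *inode, struct file *file)
{
    /* single_open() supplies the read/llseek/release plumbing */
    return single_open(file, nv_demo_show, NULL);
}

static const struct file_operations nv_demo_fops = {
    .owner   = THIS_MODULE,
    .open    = nv_demo_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

/* registered with: proc_create("driver/nv-demo", S_IRUGO, NULL, &nv_demo_fops); */
[/code]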

#16
Posted 05/22/2013 03:20 AM   
Compiles on 3.9.x, 3.10-rc2+, tested on 3.10-rc2. Didn't test write case for the registry /proc files but I *think* they're correct.

http://pastie.org/7942599
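For the registry write case, here is a hedged sketch of what a proc write handler along those lines could look like - illustrative names and buffer size, not the patch's actual code:

[code]
/* Hypothetical write handler copying user data into a fixed kernel buffer. */
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/uaccess.h>

#define NV_DEMO_KEYS_LEN 512
static char nv_demo_registry_keys[NV_DEMO_KEYS_LEN];

static ssize_t nv_demo_registry_write(struct file *file,
                                      const char __user *buf,
                                      size_t count, loff_t *ppos)
{
    size_t n = min(count, (size_t)(NV_DEMO_KEYS_LEN - 1));

    if (copy_from_user(nv_demo_registry_keys, buf, n))
        return -EFAULT;                 /* bad user pointer */

    nv_demo_registry_keys[n] = '\0';    /* keep the buffer NUL-terminated */
    *ppos += count;                     /* consume the whole write */
    return count;
}
[/code]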

#17
Posted 05/22/2013 09:20 AM   
How do I patch the .run file with this? I did: sh NVIDIA-Linux-x86_64-319.17.run --apply-patch nvidia-3.10-rc2.diff
but I'm getting asked what file to patch. :S

EDIT: I just extracted the .run package and patched it with the "patch -p1" command.

Thanks @Unhelpful for the patch, it is working great. :)

#18
Posted 05/22/2013 10:59 AM   
I compiled it now, but even a simple optirun glxgears is giving me some errors, so I don't think the patch is enough, or I just really fucked something up :D

#19
Posted 05/22/2013 12:19 PM   
Interesting, I have GL compositing, glxgears, and CUDA all working, and poking around in /proc shows what appear to be sane values. I have no idea what to write to the registry files, so writing to them is the only thing I didn't test. I don't have an Optimus system, so I can't test that… what sort of errors did you see?

#20
Posted 05/22/2013 12:52 PM   
[nekroman@nekroman ~]$ optirun glxgears
Error: nConfigOptions (16) does not match the actual number of options in
__driConfigOptions (10).
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 72 (X_PutImage)
Serial number of failed request: 33
Current serial number in output stream: 35


//edit:
OK, I've got it! It was a wrong bumblebee configuration. If you have the same problem, just change the paths in /etc/bumblebee/bumblebee.conf from /usr/lib/nvidia to /usr/lib/nvidia-bumblebee.

#21
Posted 05/22/2013 03:32 PM   
Thanks @Unhelpful, your patch works like a charm.
towo:Defiant> inxi -SG
System: Host: Defiant Kernel: 3.10-rc2.towo.1 x86_64 (64 bit)
Desktop: Xfce 4.10.2 Distro: siduction 12.2.0 Riders on the Storm - xfce - (201212092126)
Graphics: Card: NVIDIA GK106 [GeForce GTX 650 Ti] X.Org: 1.14.1 driver: nvidia Resolution: 1920x1080@60.0hz
GLX Renderer: GeForce GTX 650 Ti/PCIe/SSE2 GLX Version: 4.3.0 NVIDIA 319.17

#22
Posted 05/22/2013 04:45 PM   
It can make quite a difference which driver you start with (and whether the machine is 32-bit or 64-bit):

here kernel 3.9 is running on an Intel Core i7 (Sandy Bridge) with [pearOS 7.0 (Ubuntu 12.10)] and a GeForce 540M (CUDA/Optimus):

the driver package
NVIDIA_CUDA_SDK_1.1_Beta_Linux.run
is working properly here.

I just don't know where to get SDK 2.2 for Linux, or whether that package would be better than the one I have now. Or is there a newer CUDA SDK package for Linux?

#23
Posted 05/25/2013 10:03 PM   
Sorry, the posting above is a bit "newbish" of me - I have now seen on nvidia.com
that SDK 1.1 is the newest CUDA package for Linux at present,
and SDK 2.2 is the newest CUDA package for Apple at present.

#24
Posted 05/25/2013 10:20 PM   
Works with kernel release 3.10.0-0.rc3.git0.1.fc20.x86_64:

--- nv-procfs.c.orig 2013-05-27 18:51:04.361832467 -0430
+++ nv-procfs.c 2013-05-27 18:52:30.267642126 -0430
@@ -34,6 +34,27 @@
"provides a short text file with a short description of itself\n"
"in this directory.\n";

+struct proc_dir_entry {
+ unsigned int low_ino;
+ umode_t mode;
+ nlink_t nlink;
+ kuid_t uid;
+ kgid_t gid;
+ loff_t size;
+ const struct inode_operations *proc_iops;
+ const struct file_operations *proc_fops;
+ struct proc_dir_entry *next, *parent, *subdir;
+ void *data;
+ atomic_t count; /* use count */
+ atomic_t in_use; /* number of callers into module in progress; */
+ /* negative -> it's going away RSN */
+ struct completion *pde_unload_completion;
+ struct list_head pde_openers; /* who did ->open, but not ->release */
+ spinlock_t pde_unload_lock; /* proc_fops checks and pde_users bumps */
+ u8 namelen;
+ char name[];
+ };
+
static struct proc_dir_entry *proc_nvidia;
static struct proc_dir_entry *proc_nvidia_warnings;
static struct proc_dir_entry *proc_nvidia_patches;
@@ -45,48 +66,33 @@

static char nv_registry_keys[NV_MAX_REGISTRY_KEYS_LENGTH];

-#if defined(NV_PROC_DIR_ENTRY_HAS_OWNER)
-#define NV_SET_PROC_ENTRY_OWNER(entry) ((entry)->owner = THIS_MODULE)
-#else
-#define NV_SET_PROC_ENTRY_OWNER(entry)
-#endif
-
-#define NV_CREATE_PROC_ENTRY(name,mode,parent) \
- ({ \
- struct proc_dir_entry *__entry; \
- __entry = create_proc_entry(name, mode, parent); \
- if (__entry != NULL) \
- NV_SET_PROC_ENTRY_OWNER(__entry); \
- __entry; \
- })
+

#define NV_CREATE_PROC_FILE(name,parent,__read_proc, \
__write_proc,__fops,__data) \
({ \
- struct proc_dir_entry *__entry; \
- int __mode = (S_IFREG | S_IRUGO); \
+ int __mode; \
+ static struct file_operations fops = { \
+ .owner = THIS_MODULE, \
+ }; \
+ \
+ if((NvUPtr)(__fops) != 0 ) \
+ fops = *(struct file_operations * )__fops; \
+ fops.read = (ssize_t (*) (struct file *, char __user *, size_t, loff_t *))__read_proc; \
+ fops.write = (ssize_t (*) (struct file *, const char __user *, size_t, loff_t *))__write_proc; \
+ \
+ __mode = (S_IFREG | S_IRUGO); \
if ((NvUPtr)(__write_proc) != 0) \
__mode |= S_IWUSR; \
- __entry = NV_CREATE_PROC_ENTRY(name, __mode, parent); \
- if (__entry != NULL) \
- { \
- if ((NvUPtr)(__read_proc) != 0) \
- __entry->read_proc = (__read_proc); \
- if ((NvUPtr)(__write_proc) != 0) \
- { \
- __entry->write_proc = (__write_proc); \
- __entry->proc_fops = (__fops); \
- } \
- __entry->data = (__data); \
- } \
- __entry; \
+ \
+ proc_create_data(name, __mode, parent, &fops, __data); \
})

#define NV_CREATE_PROC_DIR(name,parent) \
({ \
struct proc_dir_entry *__entry; \
int __mode = (S_IFDIR | S_IRUGO | S_IXUGO); \
- __entry = NV_CREATE_PROC_ENTRY(name, __mode, parent); \
+ __entry = proc_create("gpus",__mode, parent,NULL); \
__entry; \
})

#25
Posted 05/27/2013 11:34 PM   
The patch compiles fine, but it's not working so well.
It produces some messages in dmesg, and on an Optimus system it won't turn off the card's power.

#26
Posted 05/28/2013 03:26 PM   
andresu, one of the calls to NV_CREATE_PROC_FILE is wrapped in nv_procfs_add_text_file. I don't see how you don't end up with a single static struct file_operations that every caller for this shares, or how a simple cast is supposed to do anything about the different call signatures of the struct file_operations members and the deprecated struct proc_dir_entry members. Did you test the files under /proc with this patch?
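To illustrate what a non-shared setup could look like: one common approach on 3.10 is to register each entry with proc_create_data() and recover a per-file descriptor in open() via PDE_DATA(), so the read path keeps proper seq_file signatures instead of casting the old read_proc callbacks. This is only a sketch with made-up names, not a claim about what the patch or the driver actually does:

[code]
/* Per-file dispatch through proc_create_data()/PDE_DATA() (3.10+ helper). */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

struct nv_demo_proc_desc {
    int (*show)(struct seq_file *m, void *v);  /* per-file show callback */
    void *data;                                /* private data for show() */
};

static int nv_demo_proc_open(struct inode *inode, struct file *file)
{
    struct nv_demo_proc_desc *desc = PDE_DATA(inode);
    return single_open(file, desc->show, desc->data);
}

static const struct file_operations nv_demo_proc_fops = {
    .owner   = THIS_MODULE,
    .open    = nv_demo_proc_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

/* per entry: proc_create_data(name, S_IFREG | S_IRUGO, parent,
 *                             &nv_demo_proc_fops, desc); */
[/code]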

#27
Posted 05/30/2013 07:05 AM   
Unhelpful said: Compiles on 3.9.x, 3.10-rc2+, tested on 3.10-rc2. Didn't test write case for the registry /proc files but I *think* they're correct.

http://pastie.org/7942599



How do you apply this patch?

Thanks

mahashakti89

#28
Posted 05/31/2013 05:32 PM   
mahashakti89 said: How do you apply this patch?

Thanks

mahashakti89


Run the NVIDIA-XXXXX.run with --extract-only, cd into the directory and run
patch -p1 <$PATCHFILE

#29
Posted 06/02/2013 01:43 AM   
Thanks Unhelpful, but I don't know what to do with the patched and extracted files... I can't run the binary file nvidia-installer inside the extracted directory, and --apply-patch complains that I don't know what I'm doing (true). Do I re-archive the .run? How does that happen? Thanks for any help -- I'm so close!

My super awesome 3.10 kernel with bcache is stuck in a tty... sad.

#30
Posted 06/02/2013 04:10 PM   