Excessive amount of RAM used by GLSL programs
Hi,

I am here to report a potential bug:
It seems like NVIDIA's driver can commit up to several megabytes of RAM (CPU RAM, not VRAM) for each linked GLSL shader program. As one would suspect, the amount of consumed RAM is roughly proportional to the complexity of the shader program. However, it still seems much higher than it needs to be; for example, some of our more complex shader programs easily exceed 2 MB. When dealing with a high quantity of shaders, this becomes a huge problem.

Longer description:
In our application we generate shaders dynamically and they often end up being quite complex (example vertex shader: https://pastebin.com/K9PyBuQa, fragment shader: https://pastebin.com/K62caHaJ). Furthermore, we deal with a large number of shaders, in the range of 5k to 20k. The problem we are facing is that the graphics driver allocates up to 15 GB of RAM just for compiled shaders. The question is: is this intended behavior or a bug? We already double- and triple-checked to make sure this is not a mistake on our end.

I wrote a test application to demonstrate the issue. Source is available at http://cemu.info/uploads/ShaderCompileTest.zip (VS2015). It links one set of vertex + fragment shaders 1000 times and then prints the amount of RAM committed by the application. The application itself does not allocate any extra memory. Additionally, the .zip comes with multiple sets of example shaders taken from our application to show the difference in RAM usage. For more details see main.cpp.
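The core of the test boils down to something like the following sketch (this is not the actual ShaderCompileTest source, just an illustration of the measurement; it assumes a current OpenGL context, an extension loader such as GLEW, and linking against psapi.lib):

// Minimal sketch of the measurement loop (assumed structure, not the
// actual ShaderCompileTest code). Requires a current OpenGL context,
// an extension loader such as GLEW, and psapi.lib for the memory query.
#include <GL/glew.h>
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <vector>

static GLuint CompileShader(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    return shader;
}

static void RunLinkTest(const char* vsSrc, const char* fsSrc, int count)
{
    std::vector<GLuint> programs;
    for (int i = 0; i < count; ++i)
    {
        GLuint vs = CompileShader(GL_VERTEX_SHADER, vsSrc);
        GLuint fs = CompileShader(GL_FRAGMENT_SHADER, fsSrc);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        programs.push_back(prog); // keep the program alive so its driver-side allocation stays committed
        glDeleteShader(vs);       // only flags the shaders for deletion; they stay alive while attached
        glDeleteShader(fs);
    }

    // Query how much private memory the process has committed after linking.
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    GetProcessMemoryInfo(GetCurrentProcess(),
                         reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                         sizeof(pmc));
    std::printf("Committed after %d links: %.1f MB\n",
                count, pmc.PrivateUsage / (1024.0 * 1024.0));
}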

Some other observations I made:
  • Occurs on all driver versions and all Windows versions
  • RAM usage is proportional to the complexity of the shader (no surprise here)
  • Conditionals (if clauses and '?' operator) seem to massively increase RAM usage and compile times
  • The size of uniform buffer arrays only slightly affects RAM usage
  • Detaching and deleting shaders (glDetachShader + glDeleteShader) after glLinkProgram helps only a little (see the sketch after this list)
  • Calling glDeleteProgram() correctly releases all memory, indicating there is no leak
  • The same problem occurs when the shader programs are loaded via glProgramBinary (also covered in the sketch below)
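
To make the last two observations concrete, this is roughly what both variants look like (a sketch with my own function names, not taken from the test application):

// Rough illustration of the two variants mentioned above (my own naming,
// not taken from the test application).

// Variant 1: detach and delete the shader objects once the program is linked.
// In practice this frees only a small part of the committed memory; the bulk
// appears to belong to the linked program itself.
GLuint LinkAndCleanup(GLuint vs, GLuint fs)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glDetachShader(prog, vs);
    glDetachShader(prog, fs);
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
}

// Variant 2: skip compilation entirely and load a previously retrieved binary
// via glProgramBinary (GL 4.1 / ARB_get_program_binary). The driver still
// commits a comparable amount of RAM per program.
GLuint LoadFromBinary(const void* blob, GLsizei length, GLenum format)
{
    GLuint prog = glCreateProgram();
    glProgramBinary(prog, format, blob, length);
    return prog;
}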

Thanks in advance!

Edit: We ran the test application on a different set of GPUs from all three vendors (NVIDIA, AMD and Intel). Results are available at https://pastebin.com/dTmpuZPs.

#1
Posted 03/27/2017 06:53 PM   
I have the same issue.

I use Linux and a GTX 970, and the issue has been present since driver version 378.09.

As a workaround, I added 32 GB of RAM to my PC.

#2
Posted 03/30/2017 02:44 PM   
This should be fixed. I think the usage is excessive, and the workaround posted above is useless unless you want to waste money to work around a driver issue.

#3
Posted 04/07/2017 11:28 AM   
Will we ever see this addressed by NVIDIA?

#4
Posted 04/11/2017 07:55 PM   
This is tracked as a bug internally and being worked on.

#5
Posted 04/18/2017 11:41 AM   
Any update on this? It's been a while.

#6
Posted 05/25/2017 07:56 PM   
It's likely that what is needed here is better optimization or compression, rather than dealing with this only if it is technically a bug. According to the OP's tests, Intel uses more than double the cache NVIDIA itself does. It stands to reason that NVIDIA may need to find ways to compress the data or omit redundant data, rather than treating this only if it deems it a critical bug, because doing nothing would be very detrimental to its users: very few people nowadays can justify more than 8 to 16 GB of RAM.

#7
Posted 06/07/2017 10:46 AM   
Any update on this? 3 months later and no fix. I almost want to buy an AMD card for my next build because of this.

#8
Posted 07/25/2017 08:39 PM   
Something that should also be mentioned is that there are reports saying the issue isn't present in earlier drivers. It seems that the driver from http://www.nvidia.com/download/driverResults.aspx/114351/en-us (January 23rd, 2017) has usage close to that of other vendors' OpenGL drivers. Hope it gets fixed! I'm trying to make a benchmark, but I'm basically capped by my 16 GB of RAM!

#9
Posted 07/26/2017 10:03 AM   
I ran the test on a 1050 Ti with driver version 382.53; here is the result: http://i.imgur.com/ZYX5y13.png

#10
Posted 07/26/2017 06:01 PM   
EDIT: My bad, I didn't realize that the values were the same. Disregard this comment

#11
Posted 07/27/2017 01:54 AM   
What do you mean? It's in line with the OP's results: https://pastebin.com/dTmpuZPs

#12
Posted 07/27/2017 04:09 AM   
[quote="OP"][.]Conditionals (if clauses and '?' operator) seem to massively increase RAM usage and compile times[/.][/quote] Due to this reason, it might be considered normal due to aggressive optimization, but even in that case there could be an optional setting.
OP said:
  • Conditionals (if clauses and '?' operator) seem to massively increase RAM usage and compile times

Because of this, it might be considered normal behavior caused by aggressive optimization, but even in that case there could be an optional setting.

#13
Posted 08/16/2017 08:15 AM
Any word on a fix for this issue?

#14
Posted 08/16/2017 09:20 PM
I have the same problem. My PC has 8 GB, I cannot add more RAM, and I need this to work. Please fix it already, NVIDIA. Thanks.

#15
Posted 08/31/2017 10:49 AM