NvOptimusEnablement is not working in our OpenGL application

I work on a C++ & OpenGL-based visualization application.
Despite having defined the NvOptimusEnablement variable in our application and verified that the executable exports it, several testers report that the application defaults to using the integrated graphics card.

I have also tried hard linking to several of the recommended libraries mentioned in the white paper that should also trigger the dedicated GPU. This also doesn’t work and would be less desirable than the NvOptimusEnablement option anyway.

What’s more, at least on Windows 8/8.1/10, the application’s driver profile is deleted with every new driver installation, so users who previously set up a profile to force the dedicated GPU suddenly run into problems again.

For reference, we have defined the variable as:
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
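
One detail that is easy to overlook: the export only has an effect when it is compiled into the executable itself; exporting it from one of the application’s DLLs is not enough. A minimal placement sketch (file name and surrounding code are illustrative, not our actual project):

// main.cpp of the .exe itself -- not of a DLL the .exe loads
#include <windows.h>

extern "C" {
    // Read by the NVIDIA driver when the process starts; 0x00000001 requests
    // the high-performance (discrete) GPU on Optimus systems.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

int main()
{
    // ... create the window, pixel format and OpenGL context as usual ...
    return 0;
}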

Using Dependency Walker I have verified that the variable is exported by the executable and is not being optimized out.
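
As an additional sanity check (a sketch of mine, not something from our build), the export can also be looked up at runtime in the running process, which rules out inspecting the wrong binary:

#include <windows.h>
#include <cstdio>

// Sketch: confirm at runtime that the symbol really sits in the .exe's export
// table. GetModuleHandle(NULL) returns the handle of the executable itself.
void CheckOptimusExport()
{
    FARPROC sym = GetProcAddress(GetModuleHandle(NULL), "NvOptimusEnablement");
    if (sym == NULL)
    {
        OutputDebugStringA("NvOptimusEnablement is NOT exported from the .exe\n");
        return;
    }
    // It is a data export, so the returned address points at the DWORD itself.
    DWORD value = *reinterpret_cast<DWORD*>(sym);
    char msg[64];
    sprintf_s(msg, "NvOptimusEnablement exported, value = 0x%08lX\n", value);
    OutputDebugStringA(msg);
}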

Anyone able to offer some insight on this?

Did you ever solve this? It doesn’t seem to work for me either:

// Turn on high-performance graphics for NVIDIA GPUs on Optimus laptops (integrated graphics + NVIDIA card)
extern "C"
{
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

then much later in a constructor:

RenderPipe::RenderPipe(HDC dc)
{
GLenum err;
m_XMin = 0.0;
m_XMax = 0.0;
m_YMin = 0.0;
m_YMax = 0.0;
m_ZMin = 0.0;
m_ZMax = 0.0;

m_hrc = NULL;
m_hdc = dc;
m_pipe = this;
/*static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_BITMAP |
        PFD_SUPPORT_GDI |
        PFD_SUPPORT_OPENGL,               // support OpenGL
        PFD_TYPE_RGBA,                     // RGBA type
        32,                                // 32-bit color depth
        0, 0, 0, 0, 0, 0,                  // color bits ignored
        8,                                 // alpha buffer bit depth
        0,                                 // shift bit ignored
        0,                                 // no accumulation buffer
        0, 0, 0, 0,                        // accum bits ignored
        32,                                // 32-bit z-buffer
        0,                                 // no stencil buffer
        0,                                 // no auxiliary buffer
        PFD_MAIN_PLANE,                    // main layer
        0,                                 // reserved
        0, 0, 0                            // layer masks ignored
 };*/

static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW |            // support window
    PFD_SUPPORT_OPENGL |            // support OpenGL
    PFD_DOUBLEBUFFER,               // double buffered
    PFD_TYPE_RGBA,                  // RGBA type
    32,                             // 32-bit color depth
    0, 0, 0, 0, 0, 0,               // color bits ignored
    0,                              // no alpha buffer
    0,                              // shift bit ignored
    0,                              // no accumulation buffer
    0, 0, 0, 0,                     // accum bits ignored
    24,                        // 24-bit z-buffer
    0,                              // no stencil buffer
    0,                              // no auxiliary buffer
    PFD_MAIN_PLANE,                 // main layer
    0,                              // reserved
    0, 0, 0                         // layer masks ignored
};


    // Get device context only once.

// Pixel format.
int pixelFormat = ChoosePixelFormat(m_hdc, &pfd);
if (0 == pixelFormat)
{
    throw GfxEngineError("Error: Could not find a pixel format that works on your computer");
}
SetPixelFormat(m_hdc, pixelFormat, &pfd);

NvAPI_Status retVal = NvAPI_Initialize();
ASSERT(retVal == NVAPI_OK);
HMODULE advancedGraphicsDll;
advancedGraphicsDll = LoadLibrary(L"nvapi.dll");



NvOptimusEnablement = 0x00000001; // note: this runtime assignment is redundant -- the variable is already 1, and the driver samples the exported value when the process starts, not here



HGLRC temp_hrc = wglCreateContext(m_hdc);//make a 1.0 context so we can find the pointer to make the 4.3 context. Windows is stupid.
wglMakeCurrent(m_hdc, temp_hrc);


const GLubyte* vendor_string = glGetString(GL_VENDOR);
TRACE("%s\n",vendor_string);

const GLint attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    0 };

PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;
wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

ASSERT(wglCreateContextAttribsARB);
// Create the OpenGL Rendering Context.
m_hrc = wglCreateContextAttribsARB(m_hdc,
                                   0,//don't share data between contexts
                                   attribs);//demand at least an openGL 4.3 window



//create glew context
m_GlewContext = new GLEWContext();
if (m_GlewContext == NULL)
{
    throw GfxEngineError("Error: Could not create GLEW Context!");
}
wglMakeCurrent(NULL, NULL);
wglDeleteContext(temp_hrc);
MakeGLContextCurrent();//use 4.3 context

vendor_string = glGetString(GL_VENDOR);
TRACE("%s", vendor_string);
vendor_string = glGetString(GL_RENDERER);
TRACE("%s", vendor_string);

GL_VENDOR is always reported as Intel unless I go into the NVIDIA Control Panel and force high-performance graphics. The LoadLibrary call for the DLL returns a non-null value. I’m baffled. (Yes… this is an MFC program. Please… it’s painful enough, don’t tease me about it.)
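
Since NvAPI_Initialize() already succeeds in that constructor, one extra diagnostic worth trying (my sketch, not part of the original code) is to ask the NVIDIA driver which physical GPUs it can actually see. If the discrete GPU shows up here while GL_VENDOR still says Intel, the driver knows about the card but is simply not routing the OpenGL context to it:

#include <nvapi.h>   // NVAPI SDK header; link against the matching nvapi library

// Sketch: enumerate the physical GPUs known to the NVIDIA driver and TRACE
// their names (TRACE as in the MFC code above).
void DumpNvidiaGpus()
{
    NvPhysicalGpuHandle handles[NVAPI_MAX_PHYSICAL_GPUS] = {};
    NvU32 count = 0;
    if (NvAPI_EnumPhysicalGPUs(handles, &count) != NVAPI_OK)
    {
        TRACE("NvAPI_EnumPhysicalGPUs failed\n");
        return;
    }
    for (NvU32 i = 0; i < count; ++i)
    {
        NvAPI_ShortString name = {};
        if (NvAPI_GPU_GetFullName(handles[i], name) == NVAPI_OK)
        {
            TRACE("NVIDIA GPU %u: %s\n", (unsigned)i, name);
        }
    }
}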

Hello,

In order to force NVIDIA cards to be used when users run the software I develop, I have used
"
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
"
for a long time.
It worked well until my last release (a few days ago).

Since it is exactly the same problem as yours (nvdortmont and larry_e_ramey), can you tell me if you finally solved it?

Thank you,
Cedric


I’ve got a hack-tastic workaround. I found someone on a gaming forum who had figured this out:

[i]After a deep scan, I used the Sysinternals tools to understand when Optimus is being invoked, and why it fails to activate the GPU chosen in the NVIDIA Control Panel. It is, as I thought, a really simple problem in the registry. Each time an application starts, NVIDIA needs to be triggered to start correctly. This is done by a registry entry, AppInit_DLLs, where you can tell Windows what needs to be loaded when an app starts.

32-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows
64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows

There you find the reference to NVINIT.DLL or NVINITX.DLL, which lets NVIDIA select the right GPU.
…alright…

But the registry value RequireSignedAppInit_DLLs tells the system to only load signed DLLs, and its value is 1 (true). After setting this value to 0 (false), my system runs perfectly!!
Solution:
set the following values to 0
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\RequireSignedAppInit_DLLs
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\RequireSignedAppInit_DLLs

You do not need to restart the system; you can test it directly! It seems that NVIDIA forgot to sign these DLLs… that’s all.

Best regards,
BoboFox[/i]
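
If you end up shipping that workaround, it may be worth detecting the situation first so the installer or the application can warn the user. A sketch of mine, assuming the value names quoted above, and keeping in mind that setting RequireSignedAppInit_DLLs to 0 relaxes a system security setting:

#include <windows.h>
#pragma comment(lib, "advapi32.lib")

// Sketch: read RequireSignedAppInit_DLLs so the user can be warned instead of
// the application silently running on the integrated GPU.
DWORD ReadRequireSignedAppInit(const wchar_t* subKey)
{
    DWORD value = 0;
    DWORD size = sizeof(value);
    LONG rc = RegGetValueW(HKEY_LOCAL_MACHINE, subKey,
                           L"RequireSignedAppInit_DLLs",
                           RRF_RT_REG_DWORD, NULL, &value, &size);
    return (rc == ERROR_SUCCESS) ? value : 1;  // assume "required" if unreadable
}

// Usage:
//   DWORD native = ReadRequireSignedAppInit(
//       L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Windows");
//   DWORD wow64  = ReadRequireSignedAppInit(
//       L"SOFTWARE\\Wow6432Node\\Microsoft\\Windows NT\\CurrentVersion\\Windows");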

Thank you for the fast answer!
I’m going to check this workaround with the users who ran into the problem.
Regards,
Cedric

Well… Nvidia isn’t going to help you!

Dude… I feel your pain. This crap is responsible for canning my entire rewrite of our graphics code from OpenGL 1.0 to 4.3. The powers that be’s heads exploded when I explained that the laptops we had been telling users to upgrade to refused to turn their graphics cards on unless we hacked the users’ registry to bypass the AppInit DLL-signing check during install.

Or you can go to Windows 10. (I suspect it would work in 8 too)