OptiX GL Interop Not Working on Linux Mint 18

Hi all,

I’ve run into issues using the OptiX-GL interop features, both in my own applications and when running the SDK samples.

Each interop sample, such as isgShadows, crashes with the error “Not a valid OpenGL texture”. In my own application, I hit a problem when mapping an OptiX buffer bound to a PBO: OptiX reports the PBO size as zero, even though I have verified everywhere that it is not.
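
For reference, the interop path in question looks roughly like this (a simplified sketch rather than my exact code; context, width, and height stand in for the real objects, and I’m using the OptiX C++ wrapper with GLEW):

#include <optixu/optixpp_namespace.h>
#include <GL/glew.h>

// Create a PBO with non-zero storage, then hand it to OptiX.
GLuint createInteropPBO(optix::Context context, RTsize width, RTsize height,
                        optix::Buffer& outBuffer)
{
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    // Allocate storage before registering with OptiX; an unallocated PBO
    // is one way to end up with a zero-sized buffer on the OptiX side.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * sizeof(float) * 4,
                 0, GL_STREAM_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    outBuffer = context->createBufferFromGLBO(RT_BUFFER_OUTPUT, pbo);
    outBuffer->setFormat(RT_FORMAT_FLOAT4);
    outBuffer->setSize(width, height);
    return pbo;
}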

Useful info:


OptiX 3.9.1
Number of Devices = 1

Device 0: Quadro K1100M
Compute Support: 3 0
Total Memory: 2146762752 bytes
Clock Rate: 705500 kilohertz
Max. Threads per Block: 1024
SM Count: 2
Execution Timeout Enabled: 1
Max. HW Texture Count: 128
TCC driver enabled: 0
CUDA Device Ordinal: 0

Constructing a context…
Created with 1 device(s)
Supports 2147483647 simultaneous textures
Free memory:
Device 0: 1958088704 bytes


Graphics: Card: NVIDIA GK107GLM [Quadro K1100M] bus-ID: 01:00.0
Display Server: X.Org 1.18.3 drivers: nvidia (unloaded: fbdev,vesa) Resolution: 3840x2160@60.00hz
GLX Renderer: Quadro K1100M/PCIe/SSE2 GLX Version: 4.5.0 NVIDIA 361.42 Direct Rendering: Yes


CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: “Quadro K1100M”
CUDA Driver Version / Runtime Version 8.0 / 7.5
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2047 MBytes (2146762752 bytes)
( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 706 MHz (0.71 GHz)
Memory Clock rate: 1400 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = Quadro K1100M
Result = PASS


OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Quadro K1100M/PCIe/SSE2
OpenGL core profile version string: 4.5.0 NVIDIA 361.42
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.5.0 NVIDIA 361.42
OpenGL shading language version string: 4.50 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 361.42
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:


If you need any more info, please let me know.

I should follow up by saying that within my application, when I query GL, it appears to be using the NVIDIA implementation, but I can’t be certain.
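
For context, the query I mean is just the standard glGetString calls, along these lines (a sketch; it needs a current GL context):

#include <GL/gl.h>
#include <cstdio>

// Print which GL implementation the current context resolves to.
void printGLStrings()
{
    std::printf("GL_VENDOR:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
}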

The same is happening here. I ran the same example and got:

terminate called after throwing an instance of ‘optix::Exception’
what(): Invalid value (Details: Function “RTresult _rtTextureSamplerCreateFromGLImage(RTcontext, unsigned int, RTgltarget, RTtexturesampler_api**)” detected error: Not a valid OpenGL texture)

I built myself a short program that sets up OpenGL interop with a 2D and a 3D texture, using OptiX 4.0.2 and 4.1.0, and it also fails.
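
The setup is essentially just the standard interop call, roughly as below (a sketch with placeholder names; context is a valid OptiX context and texId a fully allocated GL texture):

#include <optixu/optixpp_namespace.h>
#include <GL/glew.h>

// Wrap an existing GL texture in an OptiX texture sampler.
optix::TextureSampler createInteropSampler(optix::Context context, GLuint texId)
{
    // The texture must be fully allocated (e.g. via glTexImage2D) and unbound
    // at this point; this is the call that throws "Not a valid OpenGL texture".
    optix::TextureSampler sampler =
        context->createTextureSamplerFromGLImage(texId, RT_TARGET_GL_TEXTURE_2D);
    sampler->setIndexingMode(RT_TEXTURE_INDEX_NORMALIZED_COORDINATES);
    sampler->setReadMode(RT_TEXTURE_READ_ELEMENT_TYPE);
    return sampler;
}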

As a third test, I ran the gInteractiveOptix sample from the NVIDIA GVDB library with OptiX 4.0.2, and it fails with the same error.

Using:

Ubuntu 16
Titan Z (Kepler)
CUDA 8
Nvidia Driver 375

Regards,

Hi Benjamin. Are you running your OptiX app remotely, e.g., over ssh -X? Or do you get this problem locally?

Hello Dylan,

All tests were run locally.

By the way, I tried just the isgShadows example on another system locally and got the same error. In that case the system configuration was:

Ubuntu 16
Titan X (Pascal)
CUDA 8
Nvidia Driver 375.26

Regards,

Can you tar this up and send it to me directly, or to optix-help? I tried a similar test here and it seemed fine, so it’s worth trying your exact test next.

I can try the gvdb sample too – I’ve run it before, but not lately. I don’t really trust isgShadows as a debugging tool ;)

Hey Eric, the issue we found on Benjamin’s machine was that two different versions of libGL.so were being loaded at runtime – one by libOptix and one by the sample/application layer. This could happen if you set a specific libGL.so path for the sample using CMake (the “OPENGL_gl_LIBRARY” variable). Internally OptiX calls dlopen() to find libGL.so, regardless of what path you set for your application.

You can check for this problem using the “strace” utility on Linux and grepping for successful calls to open() on libGL.so. All the paths should be the same (up to symlinks); if they’re not the same, then that’s the problem.
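
If you’d rather check from inside the process, a small glibc-specific sketch like the following (a hypothetical helper, not something OptiX provides) lists every loaded shared object with “libGL” in its path, so duplicate copies show up directly; call it after both your GL context and the OptiX context exist:

#include <link.h>
#include <cstdio>
#include <cstring>

// Callback for dl_iterate_phdr: print any loaded object whose path mentions libGL.
static int printIfLibGL(struct dl_phdr_info* info, size_t /*size*/, void* /*data*/)
{
    if (info->dlpi_name && std::strstr(info->dlpi_name, "libGL"))
        std::printf("loaded: %s\n", info->dlpi_name);
    return 0; // keep iterating over all loaded objects
}

void reportLoadedLibGL()
{
    dl_iterate_phdr(printIfLibGL, 0);
}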

You can use LD_LIBRARY_PATH to control which version is loaded by OptiX, or alternatively, change the version linked by your app.

Does this fix your problem?