[Request] Add NVCC compatibility with glibc 2.26

According to FFmpeg developers, the NVCC compiler is currently not compatible with the newly released glibc 2.26.
Discussion here: https://trac.ffmpeg.org/ticket/6648.

I’m currently using CUDA/NVCC 8.0.61 and it produces errors with glibc 2.26.
When trying to compile FFmpeg or Caffe2 with CUDA support I’m getting these errors:

/usr/include/bits/floatn.h(61): error: invalid argument to attribute "__mode__"

/usr/include/bits/floatn.h(73): error: identifier "__float128" is undefined
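For context, the declarations nvcc trips over look roughly like the following (paraphrased from the glibc 2.26 bits/floatn.h sources; exact wording and line numbers may differ). nvcc's host front end does not know GCC 7's __float128 type or the __mode__ (__TC__) attribute in this context when driven by an older host compiler:

/* Paraphrased from glibc 2.26 <bits/floatn.h>; not an exact copy. */
#if !__GNUC_PREREQ (7, 0) || defined __cplusplus
/* around line 61: complex float declared via the 128-bit (TC) mode attribute */
typedef _Complex float __cfloat128 __attribute__ ((__mode__ (__TC__)));
/* around line 73: _Float128 spelled via the GCC builtin __float128 type */
typedef __float128 _Float128;
#endif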

Can you please add glibc 2.26 support to CUDA/NVCC 8?
Thank you.

CUDA 8 is not likely to be updated at this point, as the release of CUDA 9 is “soon”.

You might want to test with the CUDA 9 RC (it may not work properly there either; glibc 2.26 appears to be quite recent).

Also, requests of this type should be filed at developer.nvidia.com using the bug report system, not here on the forum, if you want the best possibility for a future change.

Taking a look at the CUDA 9 RC linux install guide, I see:

Table 1. Native Linux Distribution Support in CUDA 9.0 (x86_64; the ICC, PGI, XLC and CLANG columns of the guide are omitted here)

Distribution         Kernel    GCC      GLIBC
RHEL 7.x             3.10      4.8.5    2.17
RHEL 6.x             2.6.32    4.4.7    2.12
CentOS 7.x           3.10      4.8.5    2.17
CentOS 6.x           2.6.32    4.4.7    2.12
Fedora 25            4.8.8     6.2.1    2.24-3
OpenSUSE Leap 42.2   4.4.27    4.8      2.22
SLES 12 SP2          4.4.21    4.8.5    2.22
Ubuntu 17.04         4.9.0     6.3.0    2.24-3
Ubuntu 16.04         4.4       5.3.1    2.23

So the GNU 7.x toolchain is not officially supported (as far as I can tell), and glibc versions beyond about 2.23 or 2.24 are not officially supported either.
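If you are unsure which glibc a given system ships, checking it against the table is quick (standard commands, nothing CUDA-specific):

# Print the installed glibc version; either form works on most distributions
ldd --version | head -n 1
getconf GNU_LIBC_VERSION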

So is there any solution?

Is simply not upgrading to glibc 2.26 a feasible way to avoid the issue? I assume Linux hasn’t become Windows 10 yet, where users may be forced to upgrade.

Alternatively, you could file a bug with the glibc maintainers, pointing out that whatever interface changes they made broke existing applications (a big no-no in my book). I somewhat doubt this would do any good, as the Linux world often doesn’t give a rodent’s behind about backward compatibility.

Yes, but we have already upgraded the system.

Did you install glibc 2.26 because your application requires a particular feature that is new to this version of the library? If not, you should be able to downgrade it.

Out of curiosity, did the release notes for glibc 2.26 point out that it contains interface changes that may break existing applications?

Sorry, but I don’t know. I am using openSUSE Tumbleweed and I cannot downgrade from 2.26. Upgrading the system was a mistake.

Sorry to hear that. I had never heard of Tumbleweed. I took a look at https://en.opensuse.org/Portal:Tumbleweed and haven’t been able to figure out why any regular developer (i.e. not directly involved with the development of Linux itself) would want to use it. In software, newer does not always mean better, and there is rarely a good reason to be on the bleeding edge.

Tumbleweed has never been among the supported platforms for CUDA.
Having said that, I have successfully used CUDA on Tumbleweed for a couple of years. That involved checking updates and installing matching compiler versions on my own. Admittedly though, I’ve never used OpenGL interop because that would have involved downgrading half of the system, or separately installing older versions of about everything graphics related.

You can normally roll back a problematic update using btrfs snapshots, provided you had allocated enough space on the root filesystem. (YaST2 automatically creates snapshots before and after updates, as well as many other administration tasks.)
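For what it’s worth, a minimal sketch of such a rollback, assuming snapper is managing the root btrfs snapshots (the snapshot number below is just an example):

# List existing snapshots; yast2/zypper create pre/post pairs automatically
sudo snapper list
# Roll the root filesystem back to a snapshot taken before the update, then reboot
sudo snapper rollback 42
sudo reboot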

CUDA 9.0RC
After not finding a conventional solution for Tumbleweed I did an evil thing and edited the file

/usr/include/bits/floatn.h

and short-circuited __HAVE_FLOAT128:

/* Defined to 1 if the current compiler invocation provides a
   floating-point type with the IEEE 754 binary128 format, and this
   glibc includes corresponding *f128 interfaces for it.  The required
   libgcc support was added some time after the basic compiler
   support, for x86_64 and x86.  */
#if (defined __x86_64__ \
     ? __GNUC_PREREQ (4, 3) \
     : (defined __GNU__ ? __GNUC_PREREQ (4, 5) : __GNUC_PREREQ (4, 4)))
# define __HAVE_FLOAT128 1
#else
# define __HAVE_FLOAT128 0
#endif

// Short circuit the float128 feature
#undef  __HAVE_FLOAT128
#define __HAVE_FLOAT128 0

It would have been nice if NVCC had a command-line option that turned off this __float128/__HAVE_FLOAT128 handling.

Anyway, short-circuiting __HAVE_FLOAT128 should not cause any harm in the big picture of things.
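A slightly less invasive variant of the same hack (my own sketch, not the exact edit above) is to guard the override so it only kicks in when nvcc is the one including the header, leaving ordinary host builds with __float128 intact:

/* Appended at the very end of /usr/include/bits/floatn.h */
#if defined(__CUDACC__)        /* defined only when compiling through nvcc */
# undef  __HAVE_FLOAT128
# define __HAVE_FLOAT128 0     /* hide the *f128 interfaces from nvcc */
#endif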

By doing the following on the command line I was able to successfully build all the CUDA samples and test them:

export HOST_COMPILER=/usr/bin/g++-6
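For reference, the samples’ makefiles pass HOST_COMPILER to nvcc via -ccbin (as far as I can tell), so a full samples build looks something like this (the directory name depends on the CUDA version installed):

# Build the bundled samples with the older host compiler
export HOST_COMPILER=/usr/bin/g++-6
cd ~/NVIDIA_CUDA-9.0_Samples
make -j"$(nproc)"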

The only odd thing is that I have to run all my CUDA programs as root. If I run any of the samples as a normal user I get an error similar to:

./1_Utilities/deviceQuery/deviceQuery
./1_Utilities/deviceQuery/deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 38
→ no CUDA-capable device is detected
Result = FAIL

I don’t get the error if I run as the root user. Can anyone recommend a fix?
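One common cause of CUDA working only as root (just a guess here, not confirmed in this thread) is that the /dev/nvidia* device nodes are missing or root-only for non-root sessions; a check along these lines usually narrows it down:

# Check that the device nodes exist and are readable/writable by everyone
ls -l /dev/nvidia*
# If nodes are missing or root-only, recreating them with nvidia-modprobe
# (shipped with the driver) often helps: -c=0 creates /dev/nvidia0,
# -u creates the unified-memory node /dev/nvidia-uvm
sudo nvidia-modprobe -c=0 -u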


This is a “good” bad thing. I will try it tomorrow. I hope this solution is OK with CUDA 8 too.

It works for both CUDA 8.0 and CUDA 9.0RC

For CUDA 8, if you are going to build all the samples from the command line, remember to do this:
export HOST_COMPILER=/usr/bin/g++-5

Also, if you are using CMake to build your application, you might want to pass the following:

-D CMAKE_C_COMPILER=/usr/bin/gcc-5 -D CMAKE_CXX_COMPILER=/usr/bin/g++-5
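Put together, a configure line for an out-of-tree build might look like this (CUDA_HOST_COMPILER is the FindCUDA cache variable that ends up as nvcc’s -ccbin; adding it is my assumption, so drop it if your project sets the host compiler another way):

# Configure against the older toolchain from a separate build directory
cmake -D CMAKE_C_COMPILER=/usr/bin/gcc-5 \
      -D CMAKE_CXX_COMPILER=/usr/bin/g++-5 \
      -D CUDA_HOST_COMPILER=/usr/bin/g++-5 \
      ..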


Yes, I know. Thanks.

Problem solved. Thanks

  1. Here is a patch to /usr/include/bits/floatn.h that disables the __float128 interfaces only when compiling via NVCC:
    https://www.reddit.com/r/archlinux/comments/6zrmn1/torch_on_arch/

  2. Here is how to use a different GCC when compiling via NVCC (see the sketch after this list):
    https://liaa.dc.uba.ar/node/12
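As I understand it, the second link comes down to pointing nvcc at a specific host compiler with the -ccbin flag, or making that choice permanent with symlinks; the paths and compiler version below are only examples:

# One-off: choose the host compiler explicitly for a single compile
nvcc -ccbin /usr/bin/g++-5 -o deviceQuery deviceQuery.cpp

# Persistent: put gcc/g++ symlinks where the CUDA toolchain finds them first
# (assumes /usr/local/cuda/bin is at the front of PATH)
sudo ln -s /usr/bin/gcc-5 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-5 /usr/local/cuda/bin/g++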

CUDA 9 also has the same problem.
Can you tell me a fix to get nvcc working?

I recently upgraded to CUDA 9 along with upgrading to Fedora 27.
I have gcc-4.9, gcc-5.3, and gcc-6.3 as well as the default gcc 7.2.

I tried with all three non-default gcc-c++ versions.

Please advise a fix.

I’m using openSUSE 12.2 and got the same problem:
gcc 4.8.5
g++ 4.8.5
CUDA 9
glibc 2.26

Can somebody give me some advice?