Bug in Mac driver / compiler? Works in Linux, fails in OS X

Hello,

I wrote a very simple test program to test the mapping of the texture signed-integer-to-float conversion when using cudaReadModeNormalizedFloat. Specifically, I wanted to know how the two's complement range, [-2^(b-1), 2^(b-1)-1] where b is the number of bits, is mapped to floating-point numbers, as the Programming Guide is ambiguous on this.

(For those of you who are interested, the answer is that it maps [-(2^(b-1)-1), 2^(b-1)-1] uniformly onto [-1.0, 1.0], and additionally maps -2^(b-1) to -1.0. So for 8-bit values, -128 → -1.0, and [-127, 127] → [-1.0, 1.0].)
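In other words, the conversion behaves like the host-side expression below. This is my own reconstruction from the test output, not something lifted from the documentation; the divisor 127 is specific to the 8-bit signed case:

```
// Observed signed-char mapping (b = 8): divide by 2^(b-1) - 1 = 127,
// then clamp the single out-of-range value -128 to -1.0.
float normalized = fmaxf((float)x / 127.0f, -1.0f);   // -128 -> -1.0, 127 -> 1.0
```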

On my Linux machine with a Fermi card everything is as it should be. The problem is that on my Mac laptop (CUDA 4.0, 8600 GT mobile) my test program gives the wrong answer for signed chars, while uchar, short, and ushort all work fine. On the Mac, chars apparently map [-128, -1] → [5.019608e-01, 1.0] and [0, 127] → [0.0, 4.980392e-01]. It looks as though the sign is being ignored and the values are mapped as if they were uchar, yet the code appears to be correct.

I have no idea why this could be. Code is attached; just compile with "nvcc texture_conversion.cu -o texture_conversion". It fills arrays of char, uchar, short, and ushort with all of their possible values, and the kernel loads these values through the texture cache and writes out the results. The results are then stored in 4 separate files, each recording the integer value and its mapped floating-point value.
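For reference, the signed-char part of the test boils down to something like the sketch below. This is a simplified stand-in for the attached file, not the file itself, using the texture reference API from that CUDA version; the names are illustrative:

```
#include <cstdio>
#include <cuda_runtime.h>

// Texture reference that asks the hardware to convert signed 8-bit values
// to normalized floats on fetch.
texture<signed char, 1, cudaReadModeNormalizedFloat> texChar;

__global__ void convert(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texChar, i);   // hardware does the int -> float mapping
}

int main()
{
    const int n = 256;
    signed char h_in[n];
    for (int i = 0; i < n; ++i)
        h_in[i] = (signed char)(i - 128);  // all values -128 .. 127

    signed char *d_in;
    float *d_out;
    cudaMalloc(&d_in, n * sizeof(signed char));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(signed char), cudaMemcpyHostToDevice);
    cudaBindTexture(0, texChar, d_in, n * sizeof(signed char));

    convert<<<(n + 127) / 128, 128>>>(d_out, n);

    float h_out[n];
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("%4d -> %e\n", (int)h_in[i], h_out[i]);

    cudaUnbindTexture(texChar);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```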

I can only imagine that it's a bug in the compiler / driver. I spent the morning trying to debug this before realizing that it works fine on Linux; it only fails on my laptop…
texture_conversion.cu (4.24 KB)

As I recall, the signedness of plain "char" is not specified by C/C++; it is implementation-defined, so it can be either signed or unsigned depending on the platform. Try replacing all instances of "char" with either "signed char" or "unsigned char" in this program.
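Something along these lines in the texture declaration and in the buffers it reads from (illustrative names, not the actual ones in texture_conversion.cu):

```
// Be explicit about signedness rather than relying on plain "char":
texture<signed char, 1, cudaReadModeNormalizedFloat> texSChar;  // was texture<char, ...>
signed char *h_data, *d_data;                                   // was char *h_data, *d_data
```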