A more accurate and faster implementation of powf()

To first order, pow(x,y) = exp(y*log(x)), but pow() is actually one of the harder-to-implement functions in a standard math library. First, there are a ton of special cases to consider (enumerated in the ISO C99 standard, for example). Second, in order to compute pow() accurately, the logarithm needs to be computed with more than native precision to compensate for the error magnification properties of the exponential function. For a single-precision implementation, one would ideally want the maximum relative error in the logarithm to be <= 2**(-31) ~ 4.6e-10.
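
To see where this requirement comes from, suppose the logarithm is computed with a small relative error eps, i.e. as log(x)*(1+eps). Then (a back-of-the-envelope calculation, not a rigorous bound):

exp (y*log(x)*(1+eps)) = x**y * exp (y*log(x)*eps) ~= x**y * (1 + y*log(x)*eps)

so the error in the logarithm is magnified by a factor of |y*log(x)|, which for results that neither overflow nor completely underflow can be around 104 (compare EXP_OVFL_UNFL_F in the code below). With eps <= 2**(-31), the logarithm's contribution to the error of the final result thus stays below one ulp.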

This would be trivial to achieve by the use of double-precision arithmetic for the highest-order terms of the core approximation. Alas, on most NVIDIA GPUs this approach would not be performance competitive due to the huge disparity in throughput between single-precision and double-precision operations. A workable alternative is the use of double-float techniques for these terms, making the best possible use of FMA and eliminating the normalization step typically used at the end of every double-float operation.
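
A minimal sketch of the basic building block, the FMA-based two-product, which represents a product a*b exactly as an unevaluated sum hi + lo (the functions below inline this pattern rather than calling a helper):

/* Minimal sketch of the FMA-based two-product at the heart of the double-float
   arithmetic: hi + lo == a * b exactly (barring overflow and subnormal inputs).
   The code below inlines this pattern instead of calling a helper. */
__device__ void two_prod (float a, float b, float *hi, float *lo)
{
    float p = a * b;
    *lo = fmaf (a, b, -p); // rounding error of a * b, captured exactly
    *hi = p;
}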

[Edit 4/13/2022: Text and code below superseded by my post dated 4/13/2022 below]

The code below implements two alternatives in the extended-precision computation of the logarithm, depending on whether preference is given to accuracy or speed. Two-argument functions are difficult to test exhaustively; the data below is from extensive tests using about 2**42 test vectors.

CUDA 8.0 built-in powf()    max. ulp err. = 8.87789 @ (1.39190876, -259.933838)
my_powf(), PREFER_SPEED     max. ulp err. = 5.71140 @ (0.71992141,  259.478912)
my_powf(), PREFER_ACCURACY  max. ulp err. = 2.09680 @ (0.70102614, -243.910400)

Performance gains are modest. There are probably ways to squeeze a few more percent of performance out of the code. Currently, using the CUDA 8.0 tool chain, I measure the following throughput on a Quadro K2200 (sm_50):
CUDA 8.0 built-in powf()   5.716e9 function calls / second
my_powf(), PREFER_SPEED    6.338e9 function calls / second (+11%)
my_powf(), PREFER_ACCURACY 6.010e9 function calls / second (+5%)

The code below has been tested using nvcc's default compilation mode. Adjustments may need to be made when compiling in other modes, or when porting to non-CUDA platforms. powf() is a fairly complicated function and my test framework isn't fully industrial-strength, so some corner cases may have slipped through testing. I will try to address functional deficiencies brought to my attention.
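
For reference, the core of such an ulp-error measurement against a higher-precision reference might look like the following rough sketch (host code; a hypothetical helper, not my actual framework):

#include <math.h>

/* Rough sketch of a per-result ulp-error computation against a reference of
   higher precision (hypothetical helper, not my actual framework; it glosses
   over subnormals, NaNs, and other edge cases) */
double ulp_error (float res, double ref)
{
    int expo;
    frexp (ref, &expo);                  // ref = m * 2**expo, 0.5 <= |m| < 1
    double ulp = ldexp (1.0, expo - 24); // one single-precision ulp at |ref|
    return fabs ((double)res - ref) / ulp;
}
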
/*
  Copyright (c) 2016-2017, Norbert Juffa
  All rights reserved.

  Redistribution and use in source and binary forms, with or without 
  modification, are permitted provided that the following conditions
  are met:

  1. Redistributions of source code must retain the above copyright 
     notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 
  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 
  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 
  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

#define PREFER_SPEED     0
#define PREFER_ACCURACY  1

#define LOG_VARIANT (PREFER_SPEED)

/* Compute natural logarithm of the argument with extended precision, returning
   the result as a normalized double-float loghi:loglo. If LOG_VARIANT is set 
   to PREFER_SPEED, maximum relative error of the result is 3.64709837e-9. If
   LOG_VARIANT is set to PREFER_ACCURACY, maximum relative error of the result
   is 8.56450852e-10.
*/
__device__ void my_logf_ext (float a, float *loghi, float *loglo)
{
    const float LOG2_HI =  6.93147182e-1f; //  0x1.62e430p-1f
    const float LOG2_LO = -1.90465421e-9f; // -0x1.05c610p-29

    float m, r, i, s, t, p, qhi, qlo;
    int e;

    /* Reduce argument to m in [181/256, 362/256] */
    i = 0.0f;
    if (a < 1.175494351e-38f){ // 0x1.0p-126
        a = a * 8388608.0f; // 0x1.0p+23
        i = -23.0f;
    }
    e = (__float_as_int (a) - __float_as_int (0.70703125f)) & 0xff800000;
    m = __int_as_float (__float_as_int (a) - e);
    i = fmaf ((float)e, 1.19209290e-7f, i); // 0x1.0p-23

    /* Compute q = (m-1)/(m+1) as a double-float qhi:qlo */
    p = m + 1.0f;
    m = m - 1.0f;
    asm ("rcp.approx.ftz.f32 %0, %1;\n\t" : "=f"(r) : "f"(p)); // r = 1.0f / p
    qhi = m * r;
    t = fmaf (qhi, -2.0f, m);
    s = fmaf (qhi, -m, t);
    qlo = r * s;

    /* Approximate atanh(q), q in [-75/437, 53/309] */ 
    s = qhi * qhi;
    r =             1.29455566e-1f;  // 0x1.092000p-3
    r = fmaf (r, s, 1.41983002e-1f); // 0x1.22c7fcp-3
    r = fmaf (r, s, 2.00014994e-1f); // 0x1.99a176p-3
    r = fmaf (r, s, 3.33333254e-1f); // 0x1.555550p-2
#if LOG_VARIANT == PREFER_SPEED
    r = r * s;
    s = fmaf (r, qhi, fmaf (r, qlo, qlo)); // r*(qhi:qlo) + qlo
#elif LOG_VARIANT == PREFER_ACCURACY
    t = fmaf (qhi, qlo + qlo, fmaf (qhi, qhi, -s)); // s:t = (qhi:qlo)**2
    p = s * qhi;
    t = fmaf (s, qlo, fmaf (t, qhi, fmaf (s, qhi, -p))); // p:t = (qhi:qlo)**3
    s = fmaf (r, p, fmaf (r, t, qlo));
#else
#error unsupported LOG_VARIANT
#endif    

    /* log(a) = 2 * atanh(q) + i * log(2) */
    t = fmaf ( LOG2_HI * 0.5f, i, qhi);
    p = fmaf (-LOG2_HI * 0.5f, i, t);
    s = (qhi - p) + s;                        
    s = fmaf ( LOG2_LO * 0.5f, i, s);
    r = t + t;
    *loghi = t = fmaf (2.0f, s, r);
    *loglo = fmaf (2.0f, s, r - t);
}

/* Compute exp(a). No check for overflow or underflow is performed.
   Maximum error = 0.86565 ulp
*/
__device__ float my_expf_unchecked (float a)
{
    float f, r, s, t, j;
    int i, ia;

    // exp(a) = 2**i * exp(f); i = rintf (a / log(2))
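    // adding and subtracting 1.5 * 2**23 forces rounding of a * (1/log(2)) to
    // the nearest integer (valid in round-to-nearest mode for |j| < 2**22)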
    j = fmaf (1.442695f, a, 12582912.f) - 12582912.f; // 0x1.715476p0, 0x1.8p23
    f = fmaf (j, -6.93145752e-1f, a); // -0x1.62e400p-1  // log_2_hi 
    f = fmaf (j, -1.42860677e-6f, f); // -0x1.7f7d1cp-20 // log_2_lo 
    i = (int)j;
    // approximate r = exp(f) on interval [-log(2)/2, +log(2)/2]
    r =             1.38187408e-3f;  // 0x1.6a4000p-10
    r = fmaf (r, f, 8.37522652e-3f); // 0x1.12707ep-7
    r = fmaf (r, f, 4.16693948e-2f); // 0x1.555b0ep-5
    r = fmaf (r, f, 1.66664496e-1f); // 0x1.555432p-3
    r = fmaf (r, f, 4.99999821e-1f); // 0x1.fffff4p-2
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    // exp(a) = 2**i * r;
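    // scale by 2**i in two steps, with s * t == 2**i, so that each factor
    // stays representable even when the result is subnormal or near overflow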
    ia = (i > 0) ?  0 : 0x83000000u;
    s = __int_as_float (0x7f000000u + ia);
    t = __int_as_float (((unsigned int)i << 23) - ia);
    r = r * s;
    r = r * t;
    return r;
}

/* a**b = exp (b * log (a)), where a > 0, and log(a) is computed with extended 
   precision as a double-float. 
*/
__device__ float my_powf_core (float a, float b)
{
    const float MAX_IEEE754_FLT = __int_as_float (0x7f7fffff);
    const float EXP_OVFL_UNFL_F = 104.0f;
    const float MY_INF_F = __int_as_float (0x7f800000);
    float lhi, llo, thi, tlo, phi, plo, r;

    /* compute lhi:llo = log(a) */
    my_logf_ext (a, &lhi, &llo);
    /* compute phi:plo = b * log(a) */
    thi = lhi * b;
    tlo = fmaf (lhi, b, -thi);
    tlo = fmaf (llo, b, +tlo);
    /* normalize intermediate result thi:tlo, giving final result phi:plo */
    phi = __fadd_rz (thi, tlo);// avoid premature ovfl in exp() computation
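    // (round toward zero: rounding to nearest could push phi past the exp()
    // overflow cutoff, yielding INF even though the true result is finite)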
    plo = (thi - phi) + tlo;
    /* exp'(x) = exp(x); exp(x+y) = exp(x) + exp(x) * y, for |y| << |x| */
    r = my_expf_unchecked (phi);
    /* prevent generation of NaN during interpolation due to r = INF */
    if (fabsf (r) <= MAX_IEEE754_FLT) {
        r = fmaf (plo, r, r);
    }
    /* severe overflow / underflow in exp() */
    if (fabsf (thi) > EXP_OVFL_UNFL_F) {
        r = (thi < 0.0f) ? 0.0f : MY_INF_F;
    } 
    return r;
}

/* Compute a**b. If LOG_VARIANT is PREFER_SPEED, the maximum error observed in
   very extensive testing is 5.71140 ulp. If LOG_VARIANT is PREFER_ACCURACY, 
   the maximum error observed in very extensive testing is 2.09680 ulp. Because
   testing is not exhaustive, the error bounds cannot be guaranteed.
*/
__device__ float my_powf (float a, float b)
{
    const float MY_NAN_F = __int_as_float (0x7fffffff);
    const float MY_INF_F = __int_as_float (0x7f800000);
    int expo_odd_int;
    float r;

    r = my_powf_core (fabsf (a), b);
    /* special case handling per ISO C specification */
    expo_odd_int = fmaf (-2.0f, floorf (0.5f * b), b) == 1.0f;
    if ((a < 0.0f) && expo_odd_int) {
        r = -r;
    }
    if ((a == 1.0f) || (b == 0.0f)) {
        r = 1.0f;
    } else if (isnan (a) || isnan (b)) {
        r = a + b;  // convert SNaN to QNaN or trigger exception
    } else if (isinf (b)) {
        r = ((fabsf (a) < 1.0f) != (b < 0.0f)) ? 0.0f :  MY_INF_F;
        if (a == -1.0f) r = 1.0f;
    } else if (isinf (a)) {
        r = (b < 0.0f) ? 0.0f : MY_INF_F;
        if ((a < 0.0f) && expo_odd_int) r = -r;
    } else if (a == 0.0f) {
        r = (expo_odd_int) ? (a + a) : 0.0f;
        if (b < 0.0f) r = copysignf (MY_INF_F, r);
    } else if ((a < 0.0f) && (b != floorf (b))) {
        r = MY_NAN_F;
    }
    return r;
}

Thanks for posting!

How does the ‘PREFER_SPEED’ implementation compare to the intrinsic ‘__powf()’ in terms of performance?

__powf (x, y) is literally __expf (y * __logf (x)), where most of the heavy lifting is done by SFU instructions. So the expected performance difference is roughly an order of magnitude in favor of __powf().
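
Expressed in code (helper name mine):

__device__ float fast_powf (float x, float y)
{
    return __expf (y * __logf (x)); // what __powf (x, y) boils down to
}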

My simple test framework (on the Quadro K2200) pegs __powf() at around 50e9 function calls per second in default compilation mode, and around 75e9 function calls per second with -ftz=true. Due to the unsophisticated nature of my performance measurement framework, this likely underestimates the performance of __powf() by a non-trivial amount, as the impact of the loads and stores in the code becomes exposed.
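
To give an idea of what is being measured, a minimal throughput kernel might look like the sketch below (hypothetical; my actual framework differs):

// Minimal sketch of a throughput-measurement kernel (hypothetical; my actual
// framework differs): launch over n elements, time the kernel, and divide n
// by the elapsed seconds. The two loads and one store per function call are
// what skews the measurement for a function this fast.
__global__ void powf_tput (const float *a, const float *b, float *r, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) r[i] = __powf (a[i], b[i]);
}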

Generally speaking, “regular” math function implementations cannot compete with CUDA’s device intrinsics for performance; the race is not even close.

A recent publication, Vincenzo Innocente and Paul Zimmermann, “Accuracy of Mathematical Functions in Single, Double, Extended Double and Quadruple Precision”, Preprint hal-03141101v2, 2022, examines the accuracy of standard math functions across various software platforms including CUDA 11. Using a sophisticated framework, they found a worst-case error for CUDA’s built-in powf() of 10.3 ulp, with function arguments 0x1.714882p-1, -0x1.0f68b4p+8. I am able to confirm this case.

Even for a high-performance library, this amount of error seems excessively high; an error bound of 4 ulps or better seems advisable. Below I am showing an updated and refined version of my previously posted my_powf() code that provides a choice between three different ACCURACY variants (GOOD, BETTER, and BEST). I used my own test harness for observing worst-case errors. Because exhaustive testing is not feasible, this heuristic search cannot guarantee that the largest errors found represent the actual worst case overall.

CUDA 11.2 built-in powf()    max. ulp err. = 9.24834 @ (1.39084017, -262.628998)
my_powf(), ACCURACY==GOOD    max. ulp err. = 6.08392 @ (0.70439535, -253.150070)
my_powf(), ACCURACY==BETTER  max. ulp err. = 2.06023 @ (0.70819575, -248.978500)
my_powf(), ACCURACY==BEST    max. ulp err. = 1.72737 @ (1.30725360,  304.301270)

The performance of my_powf() as shown below is competitive with the built-in powf() from CUDA 11.2. Taking the performance of the built-in function as 100%, I observe the following relative performance on a Quadro RTX 4000 (Turing, sm_75), for non-exceptional cases:

CUDA 11.2 built-in powf()   100%
my_powf(), ACCURACY==GOOD   102%
my_powf(), ACCURACY==BETTER  99%
my_powf(), ACCURACY==BEST    98%

Obviously, the relative performance can vary with GPU architecture and CUDA version, so this is just one snapshot.

/*
  Copyright (c) 2016-2022, Norbert Juffa
  All rights reserved.

  Redistribution and use in source and binary forms, with or without 
  modification, are permitted provided that the following conditions
  are met:

  1. Redistributions of source code must retain the above copyright 
     notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 
  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 
  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 
  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

#define GOOD     (0)
#define BETTER   (1)
#define BEST     (2)
#define ACCURACY (GOOD)

/* Compute natural logarithm of the argument with extended precision, returning
   the result as a normalized double-float loghi:loglo. With ACCURACY == GOOD,
   maximum relative error is 3.8324e-09, with ACCURACY == BETTER, maximum rel.
   error is 8.5626e-10, and with ACCURACY == BEST, maximum relative error is
   4.7523e-10.
*/
__device__ void my_logf_ext (float a, float *loghi, float *loglo)
{
    const float LOG2_HI =  6.93147182e-1f; //  0x1.62e430p-1
    const float LOG2_LO = -1.90465421e-9f; // -0x1.05c610p-29
    float m, r, i, s, t, p, qhi, qlo;
    int e;

    /* Reduce argument to m in [sqrt(0.5), sqrt(2.0)] */
    i = 0.0f;
    if (a < 1.175494351e-38f){ // 0x1.0p-126
        a = a * 8388608.0f; // 0x1.0p+23
        i = -23.0f;
    }
    e = (__float_as_int (a) - __float_as_int (0.70710678f)) & 0xff800000;
    m = __int_as_float (__float_as_int (a) - e);
    i = fmaf ((float)e, 1.19209290e-7f, i); // 0x1.0p-23

    /* Compute q = (m-1)/(m+1) as a double-float qhi:qlo */
    p = m + 1.0f;
    m = m - 1.0f;
    asm ("rcp.approx.ftz.f32 %0, %1;\n\t" : "=f"(r) : "f"(p)); // r = 1.0f / p
    qhi = m * r;
    qlo = r * fmaf (qhi, -m, fmaf (qhi, -2.0f, m));
    /* Approximate atanh(q), q in [sqrt(0.5)-1, sqrt(2)-1] */
    s = qhi * qhi;
#if ACCURACY == GOOD
    r =             0.129211426f;  // 0x1.08a000p-3
    r = fmaf (r, s, 0.142010465f); // 0x1.22d662p-3
    r = fmaf (r, s, 0.200014427f); // 0x1.99a12ap-3
    r = fmaf (r, s, 0.333333254f); // 0x1.555550p-2
    r = r * s;
    s = fmaf (r, qhi, fmaf (r, qlo, qlo)); // (q**2*r)*(qhi:qlo) + qlo
    r = 2 * qhi;
#elif ACCURACY == BETTER
    r =             0.129211426f;  // 0x1.08a000p-3
    r = fmaf (r, s, 0.141997904f); // 0x1.22cfccp-3
    r = fmaf (r, s, 0.200014785f); // 0x1.99a15ap-3
    r = fmaf (r, s, 0.333333254f); // 0x1.555550p-2
    t = fmaf (qhi, qlo + qlo, fmaf (qhi, qhi, -s)); // s:t = (qhi:qlo)**2
    p = s * qhi;
    t = fmaf (s, qlo, fmaf (t, qhi, fmaf (s, qhi, -p))); // p:t = (qhi:qlo)**3
    s = fmaf (r, p, fmaf (r, t, qlo));
    r = 2 * qhi;
#elif ACCURACY == BEST
    r =             0.129394531f;  // 0x1.090000p-3
    r = fmaf (r, s, 0.141957462f); // 0x1.22ba98p-3
    r = fmaf (r, s, 0.200015724f); // 0x1.99a1d8p-3
    r = fmaf (r, s, 0.333333254f); // 0x1.555550p-2
    t = fmaf (qhi, qlo + qlo, fmaf (qhi, qhi, -s)); // s:t = q**2
    p = s * qhi;
    t = fmaf (t, qhi, fmaf (s, qlo, fmaf (s, qhi, -p))); // p:t = q**3
    s = p * r;
    t = fmaf (t, r, fmaf (p, r, -s)); // s:t = r*q**3
    r = qhi + s;
    s = (((qhi - r) + s) + t) + qlo;  //r:s = q+r*q**3
    r = 2 * r;
#else
#error unsupported ACCURACY setting
#endif    
    /* log(a) = 2 * atanh(q) + i * log(2) */
    t = fmaf ( LOG2_HI, i, r);
    p = fmaf (-LOG2_HI, i, t);
    s = fmaf ( LOG2_LO, i, fmaf (2.f, s, r - p));
    *loghi = p = t + s;    // normalize double-float result
    *loglo = (t - p) + s;
}

/* Compute exponential base e. Maximum ulp error = 0.86565 */
__device__ float my_expf_unchecked (float a)
{
    float f, r, s, t, j;
    int i, ia;

    // exp(a) = 2**i * exp(f); i = rintf (a / log(2))
    j = fmaf (1.442695f, a, 12582912.f) - 12582912.f; // 0x1.715476p0, 0x1.8p23
    f = fmaf (j, -6.93145752e-1f, a); // -0x1.62e400p-1  // log_2_hi 
    f = fmaf (j, -1.42860677e-6f, f); // -0x1.7f7d1cp-20 // log_2_lo 
    i = (int)j;
    // approximate r = exp(f) on interval [-log(2)/2, +log(2)/2]
    r =             1.37805939e-3f;  // 0x1.694000p-10
    r = fmaf (r, f, 8.37312452e-3f); // 0x1.125edcp-7
    r = fmaf (r, f, 4.16695364e-2f); // 0x1.555b5ap-5
    r = fmaf (r, f, 1.66664720e-1f); // 0x1.555450p-3
    r = fmaf (r, f, 4.99999851e-1f); // 0x1.fffff6p-2
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    // exp(a) = 2**i * r;
    ia = (i > 0) ?  0 : 0x83000000u;
    s = __int_as_float (0x7f000000u + ia);
    t = __int_as_float (((unsigned int)i << 23) - ia);
    r = r * s;
    r = r * t;
    return r;
}

/* a**b = exp (b * log (a)), where a > 0, and log(a) is computed with extended 
   precision as a double-float.
*/
__device__ float my_powf_core (float a, float b)
{
    const float MAX_IEEE754_FLT = __int_as_float (0x7f7fffff);
    const float EXP_OVFL_UNFL_F = 104.0f;
    const float MY_INF_F = __int_as_float (0x7f800000);
    float lhi, llo, thi, tlo, phi, plo, r;

    /* compute lhi:llo = log(a) */
    my_logf_ext (a, &lhi, &llo);
    /* compute phi:plo = b * log(a) */
    thi = lhi * b;
    tlo = fmaf (lhi, b, -thi);
    tlo = fmaf (llo, b, +tlo);
    /* normalize intermediate result thi:tlo, giving final result phi:plo */
    phi = __fadd_rz (thi, tlo);// avoid premature ovfl in exp() computation
    plo = (thi - phi) + tlo;
    /* exp'(x) = exp(x); exp(x+y) = exp(x) + exp(x) * y, for |y| << |x| */
    r = my_expf_unchecked (phi);
    /* prevent generation of NaN during interpolation due to r = INF */
    if (fabsf (r) <= MAX_IEEE754_FLT) {
        r = fmaf (plo, r, r);
    }
    /* severe overflow / underflow in exp() */
    if (fabsf (thi) > EXP_OVFL_UNFL_F) {
        r = (thi < 0.0f) ? 0.0f : MY_INF_F;
    }
    return r;
}

/* Compute a**b. With ACCURACY == GOOD, maximum observed error is 6.08392 
   ulp. With ACCURACY == BETTER, maximum observed error is 2.06023 ulp. With 
   ACCURACY == BEST, maximum observed error is 1.72737 ulp. Because testing
   is not exhaustive, these error bounds cannot be guaranteed to represent
   the actual worst case.
*/
__device__ float my_powf (float a, float b)
{
    const float MY_NAN_F = __int_as_float (0x7fffffff);
    const float MY_INF_F = __int_as_float (0x7f800000);
    int expo_odd_int;
    float r;

    r = my_powf_core (fabsf (a), b);
    /* special case handling per ISO C specification */
    expo_odd_int = fmaf (-2.0f, floorf (0.5f * b), b) == 1.0f;
    if ((a < 0.0f) && expo_odd_int) {
        r = -r;
    }
    if ((a == 1.0f) || (b == 0.0f)) {
        r = 1.0f;
    } else if (isnan (a) || isnan (b)) {
        r = a + b;  // convert SNaN to QNaN or trigger exception
    } else if (isinf (b)) {
        r = ((fabsf (a) < 1.0f) != (b < 0.0f)) ? 0.0f :  MY_INF_F;
        if (a == -1.0f) r = 1.0f;
    } else if (isinf (a)) {
        r = (b < 0.0f) ? 0.0f : MY_INF_F;
        if ((a < 0.0f) && expo_odd_int) r = -r;
    } else if (a == 0.0f) {
        r = (expo_odd_int) ? (a + a) : 0.0f;
        if (b < 0.0f) r = copysignf (MY_INF_F, r);
    } else if ((a < 0.0f) && (b != floorf (b))) {
        r = MY_NAN_F;
    }
    return r;
}