Use of C99 extra long double math functions after r236148

Bruce Evans brde at optusnet.com.au
Sun Aug 12 23:11:08 UTC 2012


On Fri, 20 Jul 2012, Stephen Montgomery-Smith wrote:

> On 07/20/2012 04:19 AM, Bruce Evans wrote:
>> %             x0 = ldexp(floor(ldexp(x, 24)), -24);
>> %             x1 = x - x0;
>> %             y0 = ldexp(floor(ldexp(y, 24)), -24);
>> %             y1 = y - y0;
>> 
>> This has a chance of working iff the bound away from 1 is something like
>> 2**-24.  Otherwise, multiplying by 2**24 and flooring a positive value
>> will just produce 0.  2**-24 seems much too small a bound.  My test
>> coverage is not wide enough to hit many bad cases.
>
> This is meant to cover a situation where x = cos(t) and y = sin(t) for some t 
> not a multiple of PI/2.
>
> Now, hypot(x,y) will be 1, but only to within machine precision, i.e. an 
> error of about 1e-17.

Actually more like DBL_MIN = 2e-308 for the real part (2e-321 for the
smallest denormal).  If you are sloppy and have an error of 1e-17,
then the relative error is a factor of 2e304 :-).  Measured in ulps, I
only saw errors below 1e20.

> So log(hypot(x,y)) will be about 1e-17.  The true answer being 0, the ULP 
> will be infinite.

The scaling for a correctly rounded result of 0 is unclear.  But when
the correctly rounded result is the smallest denormal, the scaling is
clear: a result of 0 is off by 1 ulp, and a result of 1e-17 is off by
2e304 ulps :-).  Both figures have a factor-of-2 fuzziness from the
limited precision of the result.
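
For concreteness, here is one way to pin that scaling down (a
hypothetical error_in_ulps() sketch, not the actual test harness):
divide the absolute error by the spacing of doubles at the correctly
rounded result, so that at the smallest denormal the spacing is the
smallest denormal itself and a returned 0 counts as 1 ulp.

#include <math.h>

/*
 * Sketch only, not the real test code: error of `computed' measured
 * in ulps of the correctly rounded result `r'.  At r equal to the
 * smallest denormal the spacing is the smallest denormal, so a
 * computed 0 is 1 ulp off.  For r == 0 this merely picks one
 * convention for the unclear case above.
 */
static double
error_in_ulps(double computed, double r)
{
        double spacing;

        spacing = nextafter(fabs(r), INFINITY) - fabs(r);
        return (fabs(computed - r) / spacing);
}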

> BUT (and this goes with Goldberg's paper when he considers the quadratic 
> formula when the quadratic equation has nearly equal roots), suppose
>
> x = (double)(cos(t))
>
> that is, x is not exactly cos(t), but it is the number "cos(t) written in 
> IEEE double precision".  Similarly for y.  That is, even though the formulas 
> that produce x and y aren't exact, let's pretend that x and y are exact.
>
> Again log(hypot(x,y)) will be about 1e-17.  But the true answer will also be 
> about 1e-17.  But they won't be the same, and the ULP will be about 1e17.
>
> What my formula does is deal with the second case, but reduce the ULP to 
> about 1e8!  That is, if x and y are exact numbers, and it so happens that 
> hypot(x,y) is very close to 1, my method will get you about 8 extra digits of 
> accuracy.

It was still giving errors of 1e17 (maybe 1e304) ulps for me.

> Now you have special formulas that handle the cases when z is close to 1, -1, 
> I and -I.

That was my old version.  Now I use your formula in most cases.  It seems
to give 300 or so extra digits after debugging it.  I still need the
special formula for the smallest denormal result.  That may be the only
case (when y*y/2 == 0 instead of the smallest denormal).
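
To make that corner concrete (the y below is just an illustrative
value, and strict double evaluation, FLT_EVAL_METHOD == 0, is
assumed): choose y so that the true y*y/2 is a bit more than half of
the smallest denormal.  The correctly rounded real part is then the
smallest denormal, but y*y rounds to the smallest denormal and halving
that ties to even, giving 0:

#include <math.h>
#include <stdio.h>

int
main(void)
{
        /* y*y is about 1.27 times the smallest denormal 0x1p-1074. */
        double y = 0x1.2p-537;

        printf("y*y/2      = %a\n", y * y / 2);        /* 0x0p+0 */
        printf("log(hypot) = %a\n", log(hypot(1, y))); /* 0x0p+0 */
        printf("correct    = %a\n", 0x1p-1074);
        return (0);
}

Anything that funnels through y*y/2 (or log1p(y*y)/2) in plain double
arithmetic drops to 0 here, 1 ulp below the correctly rounded result,
which is why that case still wants its own formula.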

> Earlier this morning I sent you a formula, which I think might be 
> slightly more accurate than yours, for when z is close to 1.  I think similar 
> formulas can be produced for when z is close to -1, I and -I.

I think this only gives a small error relative to |log(z)|, so it gives
a huge error relative to real(log(z)) when that is nearly 0.  Indeed,
the error increased from 1 ulp to 2e16 ulps.  It can't handle the case
of the smallest denormal: z = 1 + I*y where y*y/2 ~= smallest denormal.
Then |z - 1| is |y|, which is about 1e-162, and log(|z|) is about y*y/2
= smallest denormal.  Denormals only go a few powers of 10 lower here.
This magic 2e16 is essentially 2**(DBL_MANT_DIG + 1): a result with the
right exponent but no correct bits is off by about 2**53 of its own
ulps.  Errors in ulps are never of the order of 1e304 or 1e162.

> To get a ULP of about 1 when x and y are exact, and it happens that hypot(x,y) 
> is close to 1 but z is not close to 1, -1, I or -I, would require, I think, 
> computing hypot(x,y)-1 in double-double precision (i.e. a mantissa of 
> 108 bits) and then feeding this into log1p.

It does need about that, but this is routine for hi-lo decompositions,
and you claimed to already have it :-) :

@   x0 and y0 are good approximations to x and y, but have their bits trimmed
@   so that double precision floating point is capable of calculating
@   x0*x0 + y0*y0 - 1 exactly. */
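
For reference, a rough sketch of the shape of that hi-lo evaluation
(not the code in either of our patches; it assumes |x| <= 1, |y| <= 1
and x*x + y*y close to 1, and it still mishandles the tiny/denormal
result discussed above):

#include <math.h>

/*
 * Sketch only: log(|x + I*y|) when x*x + y*y is near 1.  x0 and y0
 * are x and y with their low bits trimmed so that x0*x0 + y0*y0 - 1
 * is exact; the terms involving x1 and y1 are small corrections.
 */
static double
log_hypot_near_1(double x, double y)
{
        double x0, x1, y0, y1, val;

        x0 = ldexp(floor(ldexp(x, 24)), -24);
        x1 = x - x0;
        y0 = ldexp(floor(ldexp(y, 24)), -24);
        y1 = y - y0;
        val = (x0 * x0 + y0 * y0 - 1) +
            (2 * x0 * x1 + x1 * x1) +
            (2 * y0 * y1 + y1 * y1);
        return (log1p(val) / 2);
}

The point is that the excess x*x + y*y - 1 is formed before anything
can round to 1, so log1p() still sees the information that a bare
hypot() would have rounded away.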

Bruce

