svn commit: r216142 - head/share/man/man3

Bruce Evans brde at optusnet.com.au
Fri Dec 3 13:08:10 UTC 2010


On Fri, 3 Dec 2010, David Schultz wrote:

> Log:
>  Explain some of the reasons that fpsetprec() is unlikely to work as
>  one might expect.  (These functions have already been deprecated for
>  many years.)

Um, only the functions other than fpsetprec() are deprecated (it is a
bug in the fenv family of functions that it provides no replacement for
fpsetprec()).

> Modified: head/share/man/man3/fpgetround.3
> ==============================================================================
> --- head/share/man/man3/fpgetround.3	Fri Dec  3 04:39:48 2010	(r216141)
> +++ head/share/man/man3/fpgetround.3	Fri Dec  3 07:01:07 2010	(r216142)
> @@ -32,7 +32,7 @@
> .\"     @(#)fpgetround.3	1.0 (Berkeley) 9/23/93
> .\" $FreeBSD$
> .\"
> -.Dd August 23, 1993
> +.Dd December 3, 2010
> .Dt FPGETROUND 3
> .Os
> .Sh NAME
> @@ -164,6 +164,10 @@ and
> .Fn fpsetprec
> functions provide functionality unavailable on many platforms.
> At present, they are implemented only on the i386 and amd64 platforms.
> +Changing precision isn't a supported feature:
> +it may be ineffective when code is compiled to take advantage of SSE,
> +and many library functions and compiler optimizations depend upon the
> +default precision for correct behavior.

But it is much more needed for correct behaviour now than when it was
"deprecated".  There are now several long double library functions
that depend on it having been used to change the rounding precision
across calls to them; otherwise they produce garbage or are useless.
Not to mention user code that uses long doubles, which must use it if
it wants long doubles to actually be useful.  Of course, it is so
difficult to use correctly that long doubles are almost useless anyway.
For the library functions, you have to know which ones can benefit
from it, and then you probably have to keep the extended precision
across more than just the function call, to actually use the results.
The use in the PR seems to be an un-useful use that can only cause
problems: keeping extended precision while calling a non-long-double
libm function.
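
For concreteness, here is a minimal sketch of that usage pattern,
assuming FreeBSD's <ieeefp.h> interface on i386/amd64 (fpsetprec() and
FP_PE are real; the surrounding program is hypothetical).  The point
is that extended precision is kept across both the calls and the use
of their results:

#include <ieeefp.h>    /* fpsetprec(), i386/amd64 only */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    /* Switch the x87 rounding precision to 64 bits (extended). */
    fp_prec_t oprec = fpsetprec(FP_PE);
    long double x = 0.1L;

    /* Both the sinl()/cosl() calls and the arithmetic combining
     * their results run in extended precision. */
    long double y = sinl(x) * sinl(x) + cosl(x) * cosl(x);

    fpsetprec(oprec);    /* restore the caller's precision */
    printf("%.21Lg\n", y);
    return (0);
}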

Analysis of safety and correctness of most long double precision functions
in libm:

Ones that work when called with any rounding precision, with no known major
bugs (some known minor bugs for unsupported formats):
     ceill()
     copysignl()
     fabsl()
     fdiml()
     floorl()
     fmal()
     fmaxl()
     fminl()
     fmodl()
     frexpl()
     fpclassify() and other classification functions
     ilogbl()
     ldexpl()
     llrintl()
     llroundl()
     lrintl()
     logbl()
     lroundl()
     modfl()
     nearbyintl()
     next*()
     rintl()
     roundl()
     scalblnl()
     scalbnl()
     signbitl()
     sqrtl() (the C version needs extended precision, but i386 uses
	     the asm version)
     truncl()
These can work since they mostly only use integer operations.  ceill(),
floorl(), lrintl() and a couple of others actually use the FPU dangerously
on i386, so I'm not sure if they work.
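
To illustrate why these can be precision-independent, here is a
hypothetical fabsl() done purely with integer bit operations, assuming
the i386/amd64 80-bit format (64-bit significand followed by a 16-bit
sign+exponent word); nothing goes through the FPU, so the rounding
precision is irrelevant:

#include <stdint.h>
#include <stdio.h>

static long double
bits_fabsl(long double x)
{
    union {
        long double e;
        struct {
            uint64_t mant;          /* 64-bit significand */
            uint16_t sign_exp;      /* sign bit + 15-bit exponent */
        } bits;
    } u;

    u.e = x;
    u.bits.sign_exp &= 0x7fff;      /* clear the sign bit */
    return (u.e);
}

int
main(void)
{
    printf("%Lg\n", bits_fabsl(-3.5L));    /* prints 3.5 */
    return (0);
}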

Need extended precision, and believed to work with it (see the
demonstration after this list):
     acosl()
     asinl()
     atan2l()
     atanl()
     hypotl()
     remainderl() (not sure if this needs it)
     remquol()    (not sure if this needs it)
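
The demonstration referred to above, of what needing extended
precision means in practice: with the x87 set to 53-bit precision,
long double arithmetic is silently rounded to 53 bits, so the extra
significand bits these implementations depend on are lost:

#include <ieeefp.h>
#include <stdio.h>

int
main(void)
{
    volatile long double one = 1.0L;
    volatile long double tiny = 0x1p-60L;   /* fits in 64 bits, not 53 */

    fpsetprec(FP_PD);                /* 53-bit (double) precision */
    long double lo = one + tiny;     /* the 2^-60 bit is rounded away */

    fpsetprec(FP_PE);                /* 64-bit (extended) precision */
    long double hi = one + tiny;     /* the 2^-60 bit survives */

    printf("53-bit: %d\n", lo == 1.0L);    /* prints 1 */
    printf("64-bit: %d\n", hi == 1.0L);    /* prints 0 */
    return (0);
}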

Need extended precision, but known to be broken even with it:
     cosl()
     sinl()
     tanl()
These fail because arg reduction doesn't work in extended precision.
This bug used to affect float precision, but only for large args, and
was fixed long ago, first by using the STRICT_ASSIGN() macro and then
by always doing the reduction in double precision, which works provided
assignments to double precision variables actually round, which doesn't happen with
gcc in extended precision.  For long doubles, it doesn't help that the
general case of arg reduction is used for almost all args, so that arg
reduction is very slow and the bug is active for almost all args.
The arg reduction is one of the few places that takes precautions against
this.  It uses STRICT_ASSIGN(), but STRICT_ASSIGN() intentionally doesn't
work with extended precision because that would pessimize the usual case
of non-extended precision.  The compiler bugs in this area exist to
avoid the same pessimization, except that if extended precision is the
default then the buggy case is the usual case.  FreeBSD makes the buggy case the unusual
case by defaulting to double precision.
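
For readers who haven't seen it, the core of the idiom is a store
through a volatile temporary, which forces the compiler to round the
value to the destination format instead of keeping it in an 80-bit
register.  This is only a sketch of the technique; the real
STRICT_ASSIGN() in libm's math_private.h is conditionalized (on the
compiler's evaluation method) so that the volatile store and its cost
are avoided where plain assignments already round, which is the
pessimization trade-off described above:

#include <stdio.h>

/* Sketch of the volatile-store idiom behind STRICT_ASSIGN(). */
#define STRICT_ASSIGN_SKETCH(type, lval, rval) do {    \
    volatile type __v = (rval);                        \
    (lval) = __v;                                      \
} while (0)

int
main(void)
{
    double x = 1.0, y = 0x1p-60;
    double sum;

    /* Without the forced store, gcc on i386 may keep x + y in an
     * 80-bit register, where it is not yet rounded to double. */
    STRICT_ASSIGN_SKETCH(double, sum, x + y);
    printf("%d\n", sum == 1.0);    /* 1: the 2^-60 bit rounded away */
    return (0);
}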

Special:
     exp2l()
This falls back to using exp() if the precision is not already extended,
and is the only libm function with such a fallback.  It, and all other
functions that need extended precision, should do the same test as
this one to see if the precision is already extended, and if not then
switch to it and back later.  This would be slower, but only slightly
slower in
what should be the usual case of the caller already having switched
unconditionally.
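
A minimal sketch of that test-and-switch pattern (the function and its
body are hypothetical; the real exp2l() fallback differs in detail):

#include <ieeefp.h>

long double
needs_extended(long double x)
{
    fp_prec_t oprec = fpgetprec();
    long double r;

    if (oprec != FP_PE)
        fpsetprec(FP_PE);    /* slow path: caller didn't switch */
    r = x;                   /* placeholder for the real computation */
    if (oprec != FP_PE)
        fpsetprec(oprec);    /* restore only if we changed it */
    return (r);
}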

exp2l() has been more tested than the other long double transcendental
functions, and is known to have some inaccuracies, probably due to
a buggy polynomial.  According to saved data, the worst case that I
could find was:

x = 0.003906249999999999805394000000
   = 0.003906249999999999805394180368  when rounded to 64 bits

exp2l(x) = 1.002711275050202485151000000 (according to libm)
          = 1.002711275050202485150871445 when rounded to 64 bits
          = 0x8058d7d2d5e5f6ae.0p-63

exp2l(x rounded) =
            1.002711275050202485295489388 (according to pari; 28 decimal digits)
          = 1.002711275050202485259291663 when rounded to 64 bits
          = 0x8058d7d2d5e5f6af55.0p-71

The error is about 1 + 0x55/0x100 ulps, i.e., about 1.33 ulps: aligning
the two hex values to a common exponent gives 0x...af55 - 0x...ae00 =
0x155 in units of 2^-71, and one ulp (the 64th mantissa bit) is 0x100
such units.  The claimed accuracy, however, is < 0.511 ulps.

The value of 0.003906... is related to the endpoints of the intervals
used by exp2l().  There are large errors for many values close to the
endpoint x = 1/256.0.  This value less 333 ulps is another such case.
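
To reproduce the libm side of the comparison (on i386/amd64, where
long double has the 80-bit format; the decimal literal rounds to the
64-bit value quoted above, and later libm versions may of course give
a different result):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    long double x = 0.003906249999999999805394180368L;

    printf("exp2l(x) = %.28Lg\n", exp2l(x));
    return (0);
}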

Bruce

