[PATCH] Fixes for asin[fl].

Bruce Evans brde at optusnet.com.au
Mon Apr 17 09:24:58 UTC 2017


On Sun, 16 Apr 2017, Steve Kargl wrote:

> On Mon, Apr 17, 2017 at 02:07:57PM +1000, Bruce Evans wrote:
>> On Sun, 16 Apr 2017, Steve Kargl wrote:
>>
>>> Please commit.
>>>
>>> * libm/msun/src/e_asin.c:
>>>  . Use integer literal constants where possible.  This reduces diffs
>>>    between e_asin[fl].c.
>>>  . Remove a spurious tab in front of a single line comment.
>>>  . Sort declaration.
>>>  . Remove initialization of 't' in declaration statement.  It's not needed.
>>
>> This is mostly backwards.  Don't make style changes away from fdlibm.
>> fdlibm only has e_asin.c, and cygnus' e_asinf.c might have gratuitous
>> differences, and our e_asinl.c is too different due to its hiding of
>> the polynomial coefficients and evaluation in invtrig.[ch] (there are
>> many style bugs and small errors in the coefficients hidden in the
>> latter).
>
> fdlibm will never be updated.  So whatever fdlibm does/did
> is irrelevant.  For e_asinf.c, like most cygnus attributed
> code, the conversion from double to float is, ahem, rather
> questionable.  As for e_asinl.c, I have doubts to its
> suitability to ld128.

Predicting the future is not easy.  Consider my version as the
reference if not fdlibm.  Cygnus did a not-so-bad job in the
apparently short time available.  They didn't increase the
maintenance problem with cosmetic changes, perhaps for the same
portability reasons.  I haven't tested e_asinl.c much.  e_atanl.c is
more of a problem: it fails some accuracy tests even with ld80.
Otherwise, the inverse trig (non-hyperbolic) functions seem to be
easier to evaluate accurately than most functions -- errors for
float and double precision are somehow less than 1 ulp.

>>> Index: src/e_asin.c
>>> ===================================================================
>>> --- src/e_asin.c	(revision 1904)
>>> +++ src/e_asin.c	(working copy)
>>> @@ -50,12 +50,13 @@ __FBSDID("$FreeBSD: head/lib/msun/src/e_
>>> #include "math_private.h"
>>>
>>> static const double
>>> -one =  1.00000000000000000000e+00, /* 0x3FF00000, 0x00000000 */
>>
>> Spelling 1 as "one" is standard style in fdlibm.  If fdlibm supported
>> float precision, then it would do this even more, since this allows
>> spelling all the float constants without an F, as needed to avoid them
>> being double constants.
>
> Most compilers for the last 20 years know how to convert 1 to
> float, double, and long double.  This is in line with getting
> rid of (float)0.5 type code.

I think that is my idea :-).  I only use it in new code.
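
A tiny sketch of the point (illustrative only, not part of Steve's
patch; the function name is made up):

#include <math.h>

/*
 * The integer constant 1 is converted to the type of the other operand
 * by the usual arithmetic conversions, so the same expression compiles
 * to a pure-float operation here and to a double or long double one in
 * the other files, with no 1.0F/1.0/1.0L spelling differences.
 */
static float
asinf_frag(float x)
{

        return (1 - fabsf(x));
}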

>>> 	/* 1> |x|>= 0.5 */
>>> -	w = one-fabs(x);
>>> -	t = w*0.5;
>>> +	w = 1-fabs(x);
>>> +	t = w/2;
>>
>> It was bad to spell 1 as one, but 2 as 2.
>
> ?

'one' was used for some technical or stylistic reason.  Perhaps just
to ensure that the compiler doesn't mistranslate it.  Here 2 was used
inverted as 0.5.  There are even more reasons to spell this 0.5 as
'half' as is done in some places.

It takes knowing a lot about the C standard and compilers to know that
the compiler will first promote 2 to (type)2.0 with the correct type,
and then strength-reduce the division to multiplication iff that is
good.  I think the unambiguous promotion rule was new in C90.  Too
new for fdlibm, written only a couple of years later.  The strength
reduction is only correct if either the floating point is fuzzy, or
if it is as in Annex F or some other extension and either the result
doesn't depend on the method in any rounding mode or a pragma allows
fuzziness.  I think the result doesn't depend on the method in any
rounding mode in IEEE754 arithmetic, since 0.5 is a power of 2 and
dividing by 2 gives exactly the same correctly rounded result as
multiplying by 0.5.

We use this simplification in new code though it isn't portable.  Anyone
wanting portable code would have to change back to multiplying by 0.5.
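
Spelled out as code, the two spellings under discussion (made-up
function names, not from msun):

/*
 * Both spellings give bit-identical IEEE754 results in every rounding
 * mode: dividing by 2 and multiplying by 0.5 are the same exact scaling
 * by a power of 2 (with identical rounding if the result is subnormal),
 * so the compiler is free to turn the division into the cheaper
 * multiplication.
 */
static double
half_new(double w)
{

        return (w / 2);         /* relies on 2 being promoted to 2.0 */
}

static double
half_portable(double w)
{

        return (w * 0.5);       /* the old fdlibm spelling */
}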

>>> 	p = t*(pS0+t*(pS1+t*(pS2+t*(pS3+t*(pS4+t*pS5)))));
>>> -	q = one+t*(qS1+t*(qS2+t*(qS3+t*qS4)));
>>> +	q = 1+t*(qS1+t*(qS2+t*(qS3+t*qS4)));
>>
>> This still uses a slow rational function approximation and evaluates it
>> slowly using Horner's method.  I have only fixed this in my float version.
>> The polynomial has terms out to S11, and would need too many terms for
>> long doubles.  But so does the rational function.
>
> I'll get to that problem later.  See my sinpi[fl] implementations.
> One does not need a rational approximation (at least with sinpi).

sin() is fundamentally different.  Its power series of course converges
everywhere, and deeper magic lets it converge fairly fast out to pi/4.
This also allows good polynomial approximations (no worse than some
leading terms of the power series).  Other functions benefit more from
using rational functions.  tan() does, and is actually implemented as
a disguised rational function.
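
As an illustration only (this is not msun's kernel, which uses minimax
coefficients and more terms), the leading terms of sin's power series
in Horner form show the polynomial-only structure, with no division:

/*
 * Leading Taylor terms of sin(x), Horner form in z = x*x.  A real
 * kernel uses minimax coefficients and more terms, but the shape is
 * the same, and unlike a rational approximation it needs no division.
 */
static double
sin_series(double x)
{
        double z = x * x;

        return (x * (1.0 + z * (-1.0/6 + z * (1.0/120 + z * (-1.0/5040)))));
}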

>>> 	s = sqrt(t);
>>> 	if(ix>=0x3FEF3333) { 	/* if |x| > 0.975 */
>>> 	    w = p/q;
>>> -	    t = pio2_hi-(2.0*(s+s*w)-pio2_lo);
>>> +	    t = pio2_hi-(2*(s+s*w)-pio2_lo);
>>> 	} else {
>>> 	    w  = s;
>>> 	    SET_LOW_WORD(w,0);
>>> 	    c  = (t-w*w)/(s+w);
>>> 	    r  = p/q;
>>
>> The rational function is not all that slow, since there is a sqrt() as
>> well as a division in some cases, but the division limits accuracy.  So
>> does the sqrt().  The float version cheats by using double for sqrt().
>
> If you look at my patch, you'll see that the sqrt() in the
> float version has been replaced with sqrtf().  The float version
> cheats by using double in a few places.  I think I know how to
> fix that issue.

It's difficult without losing accuracy or efficiency.  That is why we
switched to using sqrt().
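
A hedged sketch of the cheat in question (not the committed asinf; the
name is made up).  For 0.5 <= |x| < 1 the reduction uses
asin(|x|) = pi/2 - 2*asin(sqrt((1 - |x|)/2)), and doing the sqrt step
in double sidesteps the hi/lo splitting a pure-float version needs:

#include <math.h>

/*
 * Computing the sqrt in double gives it with error far below float
 * precision, so no explicit hi/lo decomposition of s is needed; a
 * float-only version would use sqrtf() and have to split s by hand.
 */
static double
asinf_reduction_sqrt(float x)
{
        double t;

        t = (1 - fabs((double)x)) / 2;
        return (sqrt(t));               /* the double-precision cheat */
}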

My accuracy tests:

float precision:
acos:  max_er = 0x1cbc92d0 0.8980, avg_er = 0.170, #>=1:0.5 = 0:5422147 (old)
acos:  max_er = 0x1cc99656 0.8996, avg_er = 0.170, #>=1:0.5 = 0:5584360
asin:  max_er = 0x1d9cc0b9 0.9254, avg_er = 0.013, #>=1:0.5 = 0:2357938 (old)
asin:  max_er = 0x1619e27c 0.6907, avg_er = 0.012, #>=1:0.5 = 0:3315992
atan:  max_er = 0x1b44773c 0.8521, avg_er = 0.186, #>=1:0.5 = 0:6406812 (old)
atan:  max_er = 0x1b3b6262 0.8510, avg_er = 0.186, #>=1:0.5 = 0:6759430

double precision:
acos:  max_er =      0x708 0.8789, avg_er = 0.137, #>=1:0.5 = 0:766994
asin:  max_er =      0x6ca 0.8486, avg_er = 0.003, #>=1:0.5 = 0:354026
atan:  max_er =      0x618 0.7617, avg_er = 0.140, #>=1:0.5 = 0:1142804

Here '(old)' is the committed version and the other float precision one
is the development version (5-10 years old).  The development version
doesn't use rational functions (except possibly disguised).  It is
slightly less accurate, except for asinf(), where it is much more
accurate.  asinf() still uses sqrt(), but acosf() always used sqrt().

Note that the double precision versions are accurate enough despite not
using sqrtl().  I think they sacrifice efficiency to be so accurate.
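
For context, a rough sketch of how such max/avg errors in ulps can be
measured (this is not the harness that produced the numbers above;
asinl() is taken as the reference):

#include <math.h>
#include <stdio.h>

int
main(void)
{
        double max_er = 0, sum_er = 0, er, ulp;
        long double r;
        long n = 0;
        float x, y;

        for (x = -1.0F; x < 1.0F; x += 1e-6F) {
                y = asinf(x);
                r = asinl((long double)x);
                /* ulp of a float with the exponent of the reference */
                ulp = (r == 0) ? 0x1p-149 : ldexpl(1.0L, ilogbl(r) - 23);
                er = fabsl((long double)y - r) / ulp;
                if (er > max_er)
                        max_er = er;
                sum_er += er;
                n++;
        }
        printf("asinf: max_er = %.4f, avg_er = %.3f ulps over %ld args\n",
            max_er, sum_er / n, n);
        return (0);
}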

> I also think that sqrt[fl] can be replaced by an in-line specialized
> sqrt to avoid the overhead of sqrt[fl].

No way.  sqrt[fl] is already inlined by the compiler at -O1 on most or
all arches.  It is also in libm as a function, but the function is
rarely called.  The C versions are too slow to be usable.  On Haswell
with gcc-3.3:

fdlsqrtf: cycles per call: 101-168
fdlsqrt: cycles per call: 596-680
fdlsqrtl: cycles per call: 1119-1121

Most of the time is for perfect rounding, which is not needed.  Take that
out and the software sqrtl might be only 10 times slower than the hardware
version instead of 50 times slower.  It would be about as slow as cbrtl:

fdlcbrtf: cycles per call: 38-38
fdlcbrt: cycles per call: 166-166
fdlcbrtl: cycles per call: 77-81

This is with my optimized cbrtl (also cbrtf?) and committed cbrt.
Haswell handles standard optimizations especially well but does badly
with the old method in cbrt.  Someone in comp.arch optimized cbrtl
much more using methods that are too unportable for me.  It is possible
to get cbrt down to 20-30 cycles on Ivy Bridge.  I can only do this for
cbrtf using more portable methods and I still need SSE1.
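
For what it's worth, a rough sketch of a cycles-per-call measurement on
x86 (not the harness used for the numbers above; it assumes __rdtsc()
from <x86intrin.h> and measures throughput plus loop overhead, not
serialized latency):

#include <math.h>
#include <stdio.h>
#include <x86intrin.h>

int
main(void)
{
        volatile double sink;
        double x = 1.5;
        unsigned long long t0, t1;
        int i, n = 1000000;

        t0 = __rdtsc();
        for (i = 0; i < n; i++) {
                sink = sqrt(x);         /* usually inlined to sqrtsd at -O1 */
                x += 1e-9;              /* stop the call from being hoisted */
        }
        t1 = __rdtsc();
        printf("sqrt: ~%.1f cycles per call\n", (double)(t1 - t0) / n);
        return (0);
}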

Bruce

