svn commit: r336299 - in head: include lib/msun lib/msun/ld128 lib/msun/ld80 lib/msun/man lib/msun/src

Bruce Evans brde at optusnet.com.au
Thu Sep 20 18:54:18 UTC 2018


On Thu, 20 Sep 2018, John Baldwin wrote:

> On 9/20/18 8:54 AM, Mark Johnston wrote:
>> On Sun, Jul 15, 2018 at 12:23:11AM +0000, Matt Macy wrote:
>>> Author: mmacy
>>> Date: Sun Jul 15 00:23:10 2018
>>> New Revision: 336299
>>> URL: https://svnweb.freebsd.org/changeset/base/336299
>>>
>>> Log:
>>>   msun: add ld80/ld128 powl, cpow, cpowf, cpowl from openbsd
>>>
>>>   This corresponds to the latest status (hasn't changed in 9+
>>>   years) from openbsd of ld80/ld128 powl, and source cpowf, cpow,
>>>   cpowl (the complex power functions for float complex, double
>>>   complex, and long double complex) which are required for C99
>>>   compliance and were missing from FreeBSD. Also required for
>>>   some numerical codes using complex numbered Hamiltonians.
>>>
>>>   Thanks to jhb for tracking down the issue with making
>>>   weak_reference compile on powerpc.
>>>
>>>   When asked to review, bde said "I don't like it" - but
>>>   provided no actionable feedback or superior implementations.
>>>
>>>   Discussed with: jhb
>>>   Submitted by: jmd
>>>   Differential Revision: https://reviews.freebsd.org/D15919
>>
>> This seems to have broken the gcc build:
>> https://ci.freebsd.org/job/FreeBSD-head-amd64-gcc/
>>
>> /workspace/src/lib/msun/ld80/e_powl.c:275:1: error: floating constant exceeds range of 'long double' [-Werror=overflow]
>>   if( y >= LDBL_MAX )
>>   ^~
>
> Which architecture?  i386 doesn't get built with i386-xtoolchain-gcc pending
> some patches I haven't yet posted for review related to the weirdness we do
> with floating point on i386.

It can only be i386, since only amd64 and i386 use this file (or anything in
ld80), and amd64 doesn't have the problem.

The powl() is completely broken on i386 (even worse than the previous
imprecise misimplementation).  It depends on bugs in clang to work.  i386
still uses a default rounding precision of 53 bits.  gcc supports this and
rounds long double constants to 53 bits.  This breaks most of the constants
in this file, since they are only correct when rounded to 64 bits.  This
loses even more than 11 bits of accuracy since the bits are lost at a more
critical stage than in the imprecise version.  clang is too incompatible to
do correct default rounding.

gcc's technically correct rounding is inconvenient, but all other files in
ld80 are carefully written to use either 53-bit (double) constants or spell
the long double constants in binary (best done using the LD80C() macro) so
that they work with either gcc or clang.  53-bit constants are also faster.
Not all other files in ld80 are careful to switch the rounding
precision at runtime (best done using the ENTERI() macro), so callers
must do the switch in general.  Even for the functions that switch
internally, this gives a null internal switch which is faster, and the
external switch should be around multiple FP operations both for efficiency
and so that the external operations are actually in 64-bit precision too.
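
For concreteness, the convention looks something like this sketch (it
assumes living in lib/msun/ld80 so that the msun-internal LD80C(),
ENTERI() and RETURNI() macros are available; the constant, the function
and the arithmetic are made up):

    #include <float.h>
    #ifdef __i386__
    #include <ieeefp.h>
    #endif

    #include "fpmath.h"
    #include "math.h"
    #include "math_private.h"

    /*
     * Spell long double constants by their bits via LD80C() so that
     * neither 53-bit nor 64-bit compile-time rounding can change them.
     * 1.5 is used here only because it is exact; real coefficients
     * need all 64 bits.
     */
    static const union IEEEl2bits
    Cu = LD80C(0xc000000000000000, 0, 1.5L);
    #define C       (Cu.e)

    long double
    dummyl(long double x)
    {
            long double r;

            ENTERI();               /* switch the i387 to 64-bit precision */
            r = C * x * x;          /* made-up evaluation in 64-bit precision */
            RETURNI(r);             /* restore the precision and return r */
    }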

LDBL_MAX has always been broken on i386.  Before 2002, it was DBL_MAX.
Then its precision was correct (53 bits) but its exponent range was
wrong.  Now its precision is wrong (64 bits) but its exponent range is
correct.  Thus it is unusable in libm (except in ld128 where there are
no i386 parts).  gcc detects its brokenness, and this already caused problems
in catrigl.c.  I tested catrigl.c a lot, but apparently without enough
warning flags, so catrigl.c was committed before this was fixed.

LDBL_MAX is not ifdefed in x86/float.h.  If it had the correctly rounded
value, then clang would detect this problem too.  Correct rounding for
clang would require it to be a hex constant, since it is too hard to
represent binary precision 53 in decimal precision when the compiler will
round to binary precision 64.  The correctly rounded LDBL_MAX would be
inconsistent with incorrectly rounded other constants.
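
For reference, the correctly rounded value would have to be spelled
roughly like this (sketch only; LDBL_MAX_53 is a made-up name, not a
proposed change to float.h):

    /*
     * The largest finite long double with a 53-bit significand and the
     * ld80 exponent range.  Being exactly representable in both 53-bit
     * and 64-bit precision, neither compiler's compile-time rounding
     * can change it, unlike the current x86 LDBL_MAX, whose 64-bit
     * significand gcc rounds up and out of range.
     */
    #define LDBL_MAX_53     0x1.fffffffffffffp+16383L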

clang has the following partly related bugs near x86/float.h:
- the declarations of float_t and double_t (these are in math.h) are not
   ifdefed, but clang does the dubious optimization of using SSE[2] for
   floats and [doubles] if -march is used at compile time to indicate
   that SSE[2] is usable at runtime.  float_t and double_t don't vary,
   so are only correct when the i387 is used for all precisions.  The
   misdeclarations as long double are only efficiency bugs.  Optimal
   code should use float_t and double_t a lot, but this guarantees that
   the dubious optimizations are negative when float_t and double_t are
   wider than necessary (negative optimizations then result from extra
   conversions).  A sketch of declarations keyed to FLT_EVAL_METHOD is
   given after this list.
- the predefined value of FLT_EVAL_METHOD is wrong.  This varies with the
   SSE optimization, but is always wrong since clang doesn't understand that
   the default long double precision is only 53 bits.  Only a value of -1
   (indeterminate) would be correct, since the evaluation method is very
   context-dependent.
   Since indeterminate can be almost anything, it may even allow incorrect
   rounding at compile time, and it certainly allows the differences given
   by the SSE optimizations.
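
The sketch referred to above: declarations keyed to FLT_EVAL_METHOD as
C99 intends them (this is not the actual <math.h>, which hard-codes
long double for both types on i386):

    #include <float.h>

    /*
     * float_t and double_t keyed to the evaluation method actually in
     * use (C99 7.12).  Hard-coded long double declarations match the
     * i387-everywhere model but not clang's SSE code generation, and
     * cost extra conversions when they are wider than the evaluation
     * width.
     */
    #if FLT_EVAL_METHOD == 0
    typedef float           float_t;    /* arithmetic done in the nominal types */
    typedef double          double_t;
    #elif FLT_EVAL_METHOD == 1
    typedef double          float_t;    /* float expressions evaluated as double */
    typedef double          double_t;
    #elif FLT_EVAL_METHOD == 2
    typedef long double     float_t;    /* everything evaluated as long double */
    typedef long double     double_t;
    #else
    /* -1 (indeterminate): no single declaration is right; this is a guess. */
    typedef long double     float_t;
    typedef long double     double_t;
    #endif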

Hopefully you fixed all these problems in gcc ports.  I never used a gcc
port except 4.8 when it was installed for a few months on freefall.  It
seemed to get everything wrong:
- float.h wasn't even a "fixed" copy of the system one.  IIRC, it was
   claimed to be MI, so didn't need to start with the system one.  But
   it used 64-bit precision LDBL constants and FLT_EVAL_METHOD = 2 (which
   requires a default rounding precision of 64 bits).
- since float.h didn't understand that the default rounding precision was
   53 bits, it seems unlikely that the compile-time rounding precision was
   53 bits.  gcc should still have the FreeBSD gcc-compile-time option to
   configure this.  It was implemented by a gcc maintainer and didn't
   require special porting for gcc-4.  Rather the reverse -- it is
   entwined with the i386 code so it is hard to avoid.

There should be a gcc-runtime option to select the precision.  The default
can be changed to 64 bits without breaking much except testing of MFCs and
code outside of /usr/src.  FreeBSD barely uses floating point outside of
libm, and we fixed all of the large extra-precision bugs in libm.  I test
most double functions in libm for this, but not by default (float functions
were fixed first and get more regression testing since they are easier to
test).  I also only test most long double functions in 64-bit precision
and haven't done much testing of them to see how bad they are in 53-bit
precision.  Switching the precision to 64 bits internally defeats this
test.
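
In the meantime the switch can at least be done per-process by the test
program itself; a minimal sketch using the <ieeefp.h> interface on
FreeBSD/i386 (the test value is arbitrary):

    #include <ieeefp.h>
    #include <math.h>
    #include <stdio.h>

    /*
     * Evaluate a long double expression once in the default 53-bit
     * precision and once in 64-bit precision, using fpgetprec() and
     * fpsetprec() from <ieeefp.h>.
     */
    int
    main(void)
    {
            fp_prec_t oprec;
            volatile long double x = 1.0000000001L;  /* volatile defeats folding */

            oprec = fpgetprec();
            printf("default precision: %La\n", sqrtl(x));

            fpsetprec(FP_PE);       /* 64-bit (extended) precision */
            printf("64-bit precision:  %La\n", sqrtl(x));

            fpsetprec(oprec);       /* restore the default */
            return (0);
    }

This is the same switch that ENTERI() does internally in the ld80
functions that bother to do it.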

Bruce

