svn commit: r336299 - in head: include lib/msun lib/msun/ld128 lib/msun/ld80 lib/msun/man lib/msun/src

Bruce Evans brde at optusnet.com.au
Fri Sep 21 12:51:29 UTC 2018


On Thu, 20 Sep 2018, John Baldwin wrote:

> On 9/20/18 2:43 PM, Li-Wen Hsu wrote:
>> ...
>> I suspect this.  Each build is in a fresh created jail with the latest
>> branch of packages from pkg.freebsd.org.
>>
>> At the beginning of (warning: 56MB file)
>> https://ci.freebsd.org/job/FreeBSD-head-amd64-gcc/7262/consoleText
>>
>> There is:
>>
>> New packages to be INSTALLED:
>>         amd64-xtoolchain-gcc: 0.4_1
>>         amd64-gcc: 6.4.0_2
>>         mpfr: 4.0.1
>>         gmp: 6.1.2
>>         mpc: 1.1.0_1
>>         amd64-binutils: 2.30_5,1
>>
>> Number of packages to be installed: 6
>>
>> Or is there a newer version of devel/amd64-gcc I am not aware?
>
> That has the change Mark Millard is thinking of:
>
> https://svnweb.freebsd.org/ports?view=revision&revision=475290
>
> However, I suspect this is due to a different issue.  I still have some
> patches that I need to get an i386 world to build with external GCC that
> I'm not sure of and haven't posted for review yet.  I bet these also matter
> for the -m32 build:

This is more broken than before.

> Index: lib/libc/tests/stdio/printfloat_test.c
> ===================================================================
> --- lib/libc/tests/stdio/printfloat_test.c	(revision 338373)
> +++ lib/libc/tests/stdio/printfloat_test.c	(working copy)
> @@ -315,7 +315,7 @@
> 	testfmt("0x1p-1074", "%a", 0x1p-1074);
> 	testfmt("0x1.2345p-1024", "%a", 0x1.2345p-1024);
>
> -#if (LDBL_MANT_DIG == 64)
> +#if (LDBL_MANT_DIG == 64) && !defined(__i386__)
> 	testfmt("0x1.921fb54442d18468p+1", "%La", 0x3.243f6a8885a308dp0L);
> 	testfmt("0x1p-16445", "%La", 0x1p-16445L);
> 	testfmt("0x1.30ecap-16381", "%La", 0x9.8765p-16384L);

This is further loss of test coverage for i386.  I don't know why this
wasn't already turned off.  Perhaps because the test is broken, but clang
is compatibly broken.  On i386, the default rounding precision is supposed
to be 53 bits, so 64-bit hex constants can only work if their 11 lowest bits
are all zero, but the first one in this ifdef has only its 2 lowest digits
zero.

I don't like the method used in these tests and haven't looked at them much.
I would never write ld80 long doubles in the hex format used above.  It
makes them unreadable by putting 1 bit to the left of the binary point and
63 to the right.  They have to be shifted right by 1 to decrypt the low
bits.  Shifting 8468 gives 4234.  The low 12 bits are 234, but correct
rounding would give 000 (800 for some other values).  This is an easy case
since there are no carries.
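
Something like this shows the decoding (a sketch; the variable names are
mine, not from the tests):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Fraction bits after the point in 0x1.921fb54442d18468p+1. */
	uint64_t frac = 0x921fb54442d18468ULL;
	/* Shift right by 1 and restore the leading 1 bit to recover the
	   full 64-bit ld80 mantissa as a plain integer. */
	uint64_t mant = (frac >> 1) | (UINT64_C(1) << 63);

	printf("0x%016" PRIx64 "\n", mant);	/* 0xc90fdaa22168c234 */
	return (0);
}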

> Index: sys/x86/include/float.h
> ===================================================================
> --- sys/x86/include/float.h	(revision 338373)
> +++ sys/x86/include/float.h	(working copy)
> @@ -86,10 +86,18 @@
> #define LDBL_EPSILON	1.0842021724855044340E-19L
> #define LDBL_DIG	18
> #define LDBL_MIN_EXP	(-16381)
> +#if defined(__i386__)
> +#define LDBL_MIN	33621031431120935
> +#else
> #define LDBL_MIN	33621031431120935063
> +#endif
> #define LDBL_MIN_10_EXP	(-4931)
> #define LDBL_MAX_EXP	16384
> +#if defined(__i386__)
> +#define	LDBL_MAX	1.1897314953572316e+4932L
> +#else
> #define LDBL_MAX	1.1897314953572317650E+4932L
> +#endif
> #define LDBL_MAX_10_EXP	4932
> #if __ISO_C_VISIBLE >= 2011
> #define	LDBL_TRUE_MIN	3.6451995318824746025E-4951L

I already pointed out that it is difficult to write these values in
decimal, since clang will round to 64 bits.

Note that these values must be written in decimal to support C90
(hexadecimal floating constants only exist in C99 and later).

Testing shows that clang produces the following wrong values from the
new limits:

For LDBL_MAX, with 64-bit precision the lower 3 nybbles would be FFF;
rounding to 53 bits should change these nybbles to 800, but with the new
value above clang gives 600.
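
A quick way to see what a given decimal constant actually becomes is to
print it back in hex (a sketch, assuming an ld80 host where the compiler
parses long double constants at full 64-bit precision; the names are mine):

#include <stdio.h>

int
main(void)
{
	long double newval = 1.1897314953572316e+4932L;	/* patched i386 value */
	long double oldval = 1.1897314953572317650E+4932L;	/* old value */

	/* Print both back in hex to inspect the low nybbles of the mantissa. */
	printf("new: %LA\n", newval);
	printf("old: %LA\n", oldval);
	return (0);
}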

Actually, it is not very hard to write the correct value in decimal.
Simply print 0x0.fffffffffffff800p16384 in decimal with enough digits.
"enough" is DECIMAL_DIG = 21.  The main error in the above is that it
misrounds by discarding 4 decimal digits (from DECIMAL_DIG = 21 digits
down to DBL_DECIMAL_DIG = 17).  Since clang will round to 64 binary
digits, 21 decimal digits are still needed.
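
A minimal sketch of that (assuming an amd64 or other ld80 host where the
hex constant is parsed at full 64-bit precision):

#include <float.h>
#include <stdio.h>

int
main(void)
{
	/* LDBL_MAX for 53-bit (i386 default) precision, written in hex. */
	long double max53 = 0x0.fffffffffffff800p16384L;

	/* Print it with DECIMAL_DIG = 21 significant decimal digits. */
	printf("%.*LE\n", DECIMAL_DIG - 1, max53);
	return (0);
}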

Correct value from this:

#define LDBL_MAX	1.1897314953572316330E+4932L

(This uses only 20 decimal digits; 21 is rarely needed, is never needed in
float.h, and float.h currently uses 20.)

For LDBL_MIN, the result is 16 ulps below the largest denormal, which is
already 1 ulp too small (on a different ulp scale).

No ifdef is needed for LDBL_MIN, since all bits except the highest bit
in its mantissa are 0, so rounding to any nonzero number of bits doesn't
change anything.
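
A small check along those lines (the names are mine):

#include <stdio.h>

int
main(void)
{
	/* The decimal LDBL_MIN from float.h and the power of 2 it encodes. */
	long double dec = 3.3621031431120935063E-4932L;
	long double pow2 = 0x1p-16381L;

	/*
	 * A power of 2 has only the highest mantissa bit set, so rounding
	 * the constant to 53 or 64 bits cannot change it.
	 */
	printf("%d %La\n", dec == pow2, dec);
	return (0);
}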

C11 added LDBL_TRUE_MIN and friends (for the denormal limit).  No ifdef
was added for it, and none is needed, as above.

With the old limits, the values produced are:

For LDBL_MIN and LDBL_TRUE_MIN: correct in all cases.

For LDBL_MAX: gigo (garbage in, garbage out) in both cases: +Inf with gcc,
and LDBL_MAX for the non-default 64-bit precision with clang.

The clang bug breaks sanity tests and is easy to test for.  E.g., when
x == LDBL_MAX, then x + 0 should be x, but with clang it is +Inf unless
the program has switched the runtime precision to 64 bits.
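
A sketch of such a test (it assumes the program has left the i386 runtime
precision at its default):

#include <float.h>
#include <stdio.h>

int
main(void)
{
	/* volatile keeps the addition from being folded at compile time. */
	volatile long double x = LDBL_MAX;

	if (x + 0.0L != x)
		printf("broken: LDBL_MAX + 0 = %Le\n", x + 0.0L);
	else
		printf("ok\n");
	return (0);
}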

Bruce

