cvs commit: src/sys/i386/include _types.h

Bruce Evans brde at optusnet.com.au
Fri Mar 7 01:21:52 UTC 2008


On Wed, 5 Mar 2008, Colin Percival wrote:

> Mike Silbersack wrote:
>> On Wed, 5 Mar 2008, Bruce Evans wrote:
>>>  Change float_t and double_t to long double on i386.  All floating point
>>
>> 1)  Does this really change every double to a long double in anything
>> compiled?
>
> No, it changes double_t (which is not the same as double).
>
>> 2)  How does this affect ABI compatibility?
>
> This breaks ABI compatibility (when people use float_t or double_t).

Only if float_t or double_t is actually used in an ABI.  Such use is
dubious, since these types are like the int_fastN_t types: they are the
most efficient types that are at least as wide as float, double and
intN_t, respectively.  They aren't very suitable for anything except
intermediate values in registers.  I know of one useful use for them
in ABIs: to work around the following bug in the C standard (C99 at
least).  A function returning double is permitted to return extra
precision, and this is sometimes useful, but C perversely requires the
extra precision to be lost on return precisely when the function is
implemented using double_t so as to ensure that extra precision is not
lost internally.
E.g.:

     double xp1(double x) { return x + 1.0; }

This is permitted to evaluate x + 1.0 in extra precision and then is
required (?) (at least permitted) to return the extra precision.

     double xp1(double x) { double_t tmp1 = x, tmp2 = 1.0; return tmp1 + tmp2; }

If double_t has more precision than double, then
(1) this evaluates x + 1.0 in extra precision.  double_t is only used to
     emphasize this.  In a more practical example, there would be many more
     temporary intermediate variables, and we should use double_t for them
     all to prevent loss of precision on assignment.
(2) This is required to lose the extra precision on return.
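As a sketch of that more practical case (the coefficients are made up
for illustration, not taken from any real libm routine), keeping every
intermediate in double_t means only the final return conversion narrows
the result on machines where double_t is wider than double:

```c
#include <math.h>	/* double_t (C99) */

/*
 * Horner evaluation of x*x + 2*x + 3 with intermediates held in
 * double_t.  On i386 with double_t == long double, the assignments to
 * t may keep extra precision; only the conversion back to double on
 * return is required to discard it.
 */
static double
poly(double x)
{
	double_t t;

	t = 1.0;
	t = t * x + 2.0;
	t = t * x + 3.0;
	return ((double)t);
}
```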

To avoid the loss of precision on return, the simplest method is to
change the ABI to return double_t instead of double.  Since we really
want to return the extra precision, this is not wrong.  However, it
may be prevented by API and ABI considerations.  Note that on i386,
all functions that are declared to return float or double actually
return a long double, so declaring them as actually returning a long
double is only an API change, but it will significantly affect the
callers due to this alone (callers will actually know that they are
getting a long double and may generate extra code to either discard
or keep the extra precision which may or may not be present).

Why did no one complain when I fixed float_t on amd64 and some other
non-i386 arches? :-)  This changed the ABI as a side effect.

>> 3)  How does this change the operation of various programs in the ports
>> tree that use floating point, such as mplayer, mpg123, etc.  Will this
>> cause different behavior when these apps are used on FreeBSD vs other
>> operating systems?
>
> Most code in the ports tree will be using double instead of double_t, and
> will consequently not be affected.  Code which uses double_t will be slower
> due to the increased cost of loads and stores, and may produce different
> output if it changes the i387 precision flags.

I hope nothing actually uses float_t or double_t.  They are esoteric
and are still too broken in FreeBSD to trust.  I'm changing them mainly
because I want to be able to trust them for use in libm, without using
extensive ifdefs which would amount to replacing them by non-broken
versions for libm's internal use only.

> At the moment, FreeBSD behaves the same way as Microsoft Windows and C99,
> and differently from Linux: Linux sets the i387 by default to extended
> precision,

Do Windows and its compilers still default to non-extended precision and
thus less than null support for long doubles?  20 years ago, this seemed
to be normal for DOS compilers (though I never tested it on a Microsoft
one), and I was happy to keep it in 386BSD in 1992 and FreeBSD later.
In 386BSD, it was apparently inherited from ibcs2.  Anyway, gcc didn't
really support long doubles until 1993 or 1994.  IIRC, Linux started in
1991 with the default of extended precision, and I was responsible for
getting this changed to non-extended precision.  Then when gcc started
supporting long doubles, Linux changed the default back to extended
precision, as is necessary but not sufficient for long doubles to work.
Microsoft would have been more constrained by backwards compatibility
but should have changed by now.  Maybe it is a compiler option
(default = old in Windows?).

> which has the result of decreasing the average rounding error
> while increasing the maximum rounding error (due to double rounding when
> values which were internally rounded to extended precision are rounded
> to double precision later) and sometimes breaking code entirely (when
> the properties of floating-point rounding are used deliberately, e.g.,
> to rapidly round a floating-point value to the nearest integer).

I've only found double rounding to be a minor problem.  It gives an
error of at most 0.5 ulps, which isn't usually a problem.  Breakage
of the properties of floating point is mostly due to compiler bugs
(assignments and casts don't work).  However, fixing assignment requires
large pessimizations.  They used to be not so large, but now that gcc's
optimizer is reasonably good, they are very large.  So I now think that
it is a bug in the C standard to require assignments and casts to
discard extra precision.  This should be implementation-defined, with
some other standard method for discarding extra precision.

One of the first things I need a non-broken float_t for is rapid rounding
to the nearest [fraction of] an integer, without doing extra work to
discard extra precision.  E.g.: to round a non-huge float to the nearest
integer, the fastest method is often:

 	float f, result;
 	result = f + 0x1.8pN - 0x1.8pN;

where N is:
 	FLT_MANT_DIG - 1 if float_t is float
 	DBL_MANT_DIG - 1 if float_t is double
 	LDBL_MANT_DIG - 1 if float_t is long double

This is painful to configure if float_t is unknown or wrong.  (It is still
moderately painful with float_t being only a type, since it is hard to
ifdef on a type.  I would like to be able to write the N as FLT_T_MANT_DIG-1
and not have to use magic to build a literal out of it.)

This depends on float_t being float, or on FLT_MANT_DIG being much smaller
than the precision of float_t (the FLT_T_MANT_DIG above), else double
rounding may be a problem.

This method is already used, with double instead of float, for arg reduction
in trig functions.  There double rounding can occur but isn't a problem.
It currently has N hard-coded as DBL_MANT_DIG-1.  This assumes that
double_t is never really long double (it still works with my change to
declare double_t as long double on i386, since there long double isn't
really long double -- N = DBL_MANT_DIG-1 still works, but
N = LDBL_MANT_DIG-1 wouldn't work).
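For reference, the rounding step of that double variant looks like the
following (a sketch of the shifting step only, not the actual libm
argument-reduction code), with N = DBL_MANT_DIG - 1 = 52:

```c
/*
 * Round a non-huge double to the nearest integer with the same
 * shifting trick.  Works when intermediates are evaluated in double
 * precision, and also on i386 with the i387 at its default 53-bit
 * precision, where evaluation rounds like double.  The volatile store
 * blocks compilers from folding (x + C) - C back to x.
 */
static double
round_fast_d(double x)
{
	volatile double tmp;

	tmp = x + 0x1.8p52;	/* 1.5 * 2^52; N = DBL_MANT_DIG - 1 */
	return (tmp - 0x1.8p52);
}
```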

> I've answered queries from some mathematicians and scientists who were
> confused as to why they were seeing higher rounding errors on FreeBSD and
> Windows than they were on Linux; but when I've explained the behavioural
> differences to numerical analysts, the universal reaction I've received
> has been "dear god, Linux does WHAT?" -- oddly enough, mathematicians like
> to think that proving that their code is correct will be enough to ensure
> that it will produce correct output.

Most code isn't proved to be correct.  It's impossible for numeric
code written and proved by non-numerical-analysts and impractical for
large code written by anyone.  Extended precision is supposed to reduce
the risks from this, and it seems to help in most cases.  Unfortunately,
extended precision isn't implemented in SSE, so even Linux on i386's
now doesn't have it when the i386's are in amd64 mode, so extended
precision no longer occurs automatically, and using it (by using
long doubles explicitly) costs more than ever since it lives in a
different register set.

Bruce

