gcc strangeness

David Schultz das at FreeBSD.ORG
Sun Jul 11 14:01:07 PDT 2004


On Sun, Jul 11, 2004, Dmitry Morozovsky wrote:
> one of my friends has raised a very strange issue regarding gcc rounding:
[...]
> marck at woozle:/tmp/tsostik> cat x.c
> #include <stdio.h>
> int main ()
> {
>         float a;
>         for(a=0.01;a<=0.1; a+=0.01)
>           printf("%f %.3f %d\n", a*100, a*100, (int)(a*100));
> return 0;
> }

0.01 is not exactly representable in IEEE 754 floating-point, so
when you use the float type, you start out with a representation
error of ~2.2e-10.  Each trip through the loop adds that error
again, and the addition itself rounds the running sum back to
single precision, losing up to another half ulp (about 3.7e-9 near
0.1) per iteration.  After 10 iterations the accumulated error is
on the order of 6e-9, and multiplying by 100 scales it up to ~6e-7.
That is, on the last loop iteration, a has a value that is roughly
0.09999999404.  That's why you should always use at least double
precision; it's at least as fast as single precision on most
architectures anyway.
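
To make that concrete, here is a small standalone program (mine,
not from the original report) that accumulates 0.01 ten times in
both float and double and prints the stored sums with enough
digits to show the error:

#include <stdio.h>

int
main(void)
{
	float f = 0.0;
	double d = 0.0;
	int i;

	/* Add 0.01 ten times, as the loop in x.c effectively does. */
	for (i = 0; i < 10; i++) {
		f += 0.01;
		d += 0.01;
	}

	/* %.17g shows the stored sums with enough digits to see the error. */
	printf("float  sum: %.17g\n", f);
	printf("double sum: %.17g\n", d);
	return (0);
}

On a typical IEEE 754 machine the float sum comes out around
0.099999994, while the double sum differs from 0.1 only around the
17th significant digit.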

> marck at woozle:/tmp/tsostik> cc x.c
> marck at woozle:/tmp/tsostik> ./a.out
> 1.000000 1.000 0
> 2.000000 2.000 1
> 3.000000 3.000 2
> 4.000000 4.000 3
> 5.000000 5.000 5
> 6.000000 6.000 6
> 7.000000 7.000 7
> 8.000000 8.000 7
> 9.000000 9.000 8
> 9.999999 10.000 9
> 
> Any comments?

Both printf() and gcc got everything right here.  As I mentioned,
a*100 is approximately 9.999999404 due to rounding error.  The
closest decimal with six digits after the point (what %f prints)
is 9.999999, and the closest with three digits after the point
(what %.3f prints) is 10.000.  However, 9.999999404 < 10, and
conversions from floating point to integer types are required to
truncate toward zero in C, i.e. to round down for positive values,
which is why the third column has the values it does.
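
If it helps, here is another small sketch (again mine, not from
the thread) showing one such value going through %f, %.3f, a cast,
and a simple round-to-nearest:

#include <stdio.h>

int
main(void)
{
	/* Roughly the value of a on the last loop iteration above. */
	float a = 0.09999999404F;
	double x = (double)a * 100;	/* 9.9999994039535522..., below 10 */

	printf("%%f   prints %f\n", x);		/* rounded to 6 decimals: 9.999999 */
	printf("%%.3f prints %.3f\n", x);	/* rounded to 3 decimals: 10.000 */
	printf("(int)x         = %d\n", (int)x);	   /* truncated toward zero: 9 */
	printf("(int)(x + 0.5) = %d\n", (int)(x + 0.5));   /* nearest, for positive x: 10 */
	return (0);
}

The two printf conversions round to the requested number of
digits, while the cast simply drops the fractional part.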

