increased machine precision for FreeBSD apps?

David Schultz das at FreeBSD.ORG
Thu Jul 15 08:39:50 PDT 2004


On Sun, Jul 11, 2004, Jay Sern Liew wrote:
> 	This is probably more of an organizational issue than an architectural
> one. In the event of running applications with heavy floating point
> arithmetic (e.g. scientific computation apps, encryption
> algorithms, compression, etc.), would it be a good idea to decrease
> rounding errors by increasing the data type size? 

Most programmers expect a float to be an IEEE 754 single-precision
number, and they expect a double to be a double-precision number.
On most modern FPUs, however, floats, doubles, and sometimes even
long doubles take about the same number of clock cycles.  The only
good reason to use float is for things like genomics, where you're
doing calculations on gigabytes of data, and the goal is to fit as
much of it in memory as possible.

So here's how you handle the problem.  If you want to use the
largest type that's at least as big as a float, use C99's float_t
instead.  If you want the largest type that's at least as big as a
double, use double_t.  If you want the largest type possible, at
the expense of having to emulate it on architectures such as
PowerPC, use long double.  If you want to run one of those
scientific apps that phk says don't exist, use an
arbitrary-precision arithmetic package such as GNU MP, Paul
Rouse's public domain clone thereof, QLIB, or Mathematica.


More information about the freebsd-arch mailing list