Complex arg-trig functions

Stephen Montgomery-Smith stephen at missouri.edu
Thu Sep 6 16:32:23 UTC 2012


Life is busy, so it will be a few days before I can process most of what 
you have said.  But let me respond to a few of the points.

On 09/06/2012 08:50 AM, Bruce Evans wrote:

> This code in catrig.c seems to be related:
>
> @ /*
> @  * sum_squares(x,y) = x*x + y*y.
> @  * Assumes x*x and y*y will not overflow.
> @  * Assumes x and y are finite.
> @  * Assumes fabs(x) >= DBL_EPSILON.
> @  */
> @ inline static double
> @ sum_squares(double x, double y)
> @ {
> @     /*
> @      * The following code seems to do better than
> @      * return (hypot(x, y) * hypot(x, y));
> @      */
> @     if (fabs(y) < MIN_4TH_ROOT)
> @         if ((int)y==0) /* raise inexact */
> @             return (x*x);
> @     return (x*x + y*y);
> @ }
>
> but it isn't really -- it is more like the case of a large difference
> of exponents, so x*x+y*y reduces to x*x (at a smaller exponent difference
> than for hypot(x, y) reducing to |x|).  hypot() sets inexact by returning
> |x|+|y| in that case, but the above avoids using y*y since it would give
> spurious underflow.
>
> Hmm, MIN_4TH_ROOT seems to be logically wrong here, and physically
> wrong for float precision.  In double precision, MIN_4TH_ROOT/DBL_EPSILON
> is ~0.5e-59, but in float precision, MIN_4TH_ROOT/FLT_EPSILON is only
> ~1e-2, so when x = FLT_EPSILON, y = MIN_4TH_ROOT is very far from giving
> a discardable y*y.
>
> The logically correct threshold is something like y/x < 2**(-MANT_DIG/2),
> or y < 2**(-MANT_DIG/2) * EPSILON to avoid the division.  clog*() uses
> essentially y/x < 2**-MANT_DIG for some reason (it actually uses a fuzzy
> check based on exponent differences: kx - ky > MANT_DIG; MANT_DIG/2
> would be a little too small here, but doubling it seems wasteful).
> hypot() uses essentially y/x < 2**-60, where the magic 60 is DBL_MANT_DIG
> plus a safety margin.  hypotf() uses a magic 30.  The non-magic integer
> exponent thresholds work very well in clog*(), but depend on having the
> exponents handy for efficiency, which may be possible via optimization
> for an inline function like the above but takes more source code.
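
For anyone following along, the fuzzy exponent-difference check
described above can be sketched as follows.  This is not the actual
clog*() source, which extracts the exponents directly from the bit
representation for efficiency; ilogb() stands in for that here, and
the function assumes x and y are finite and nonzero:

#include <float.h>
#include <math.h>

/*
 * Sketch only: y*y is discardable next to x*x once the binary
 * exponents of x and y differ by more than DBL_MANT_DIG.
 */
static int
discard_yy(double x, double y)
{
        int kx = ilogb(x), ky = ilogb(y);

        return (kx - ky > DBL_MANT_DIG);
}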

The code is different in catrigf.c, for precisely the reason you 
stated.  This is what I was talking about a while back when I said 
that the float code was a little harder than the double and long 
double code.
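
To make the difference concrete, here is roughly what a float
sum_squares() would look like with the threshold you suggest.  This
is a sketch rather than the actual catrigf.c code; the constant is
the 2**(-MANT_DIG/2) * EPSILON form, written as a hex float:

#include <float.h>
#include <math.h>

/*
 * Sketch of a float sum_squares().  The caller guarantees
 * fabsf(x) >= FLT_EPSILON, so testing
 * fabsf(y) < 2**(-FLT_MANT_DIG/2) * FLT_EPSILON makes y*y
 * negligible relative to x*x without computing y/x.
 */
static inline float
sum_squaresf(float x, float y)
{
        if (fabsf(y) < 0x1p-12F * FLT_EPSILON)  /* 2**(-24/2) * eps */
                if ((int)y == 0)        /* raise inexact */
                        return (x * x);
        return (x * x + y * y);
}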

In the days or weeks to come, I might go over all these kinds of 
conditions again.  The paper by Hull et al., and the code used by 
boost, are written in such a way that the code for float is the same 
as the code for double.  My code was designed to do just enough to 
work, whereas they put more thought into it.
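
One way to achieve that is to parameterize everything on the type, so
a single source compiles for either precision.  Something like the
following, where REAL, REAL_EPSILON and REAL_MIN are names I am making
up for illustration:

#include <float.h>

#ifdef FLOAT_VERSION
typedef float REAL;
#define REAL_EPSILON    FLT_EPSILON
#define REAL_MIN        FLT_MIN
#else
typedef double REAL;
#define REAL_EPSILON    DBL_EPSILON
#define REAL_MIN        DBL_MIN
#endif

/*
 * Thresholds are then precision-independent; e.g., given
 * fabs(x) >= REAL_EPSILON, y*y is negligible next to x*x whenever
 * fabs(y) < REAL_EPSILON * REAL_EPSILON.
 */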

> I don't see how the above avoids cancelation problems like the ones in
> clog().  You never subtract 1 from sum_squares(), but there are open-coded
> expressions like '(1-ax)*(1+ax) - ay*ay' to reduce the cancelation error.
> I found that for clog() it was worth using extra precision out to about
> |z| = 4 (or |z| < 1/4) to reduce the cancelation error.  But somehow my
> tests didn't show large errors.  Perhaps because I'm happy with < 4 ulps
> instead of < 1.

Because for clog, when x^2+y^2 is close to 1, the real part of clog, 
namely log(x^2+y^2)/2, is very close to zero.

Whereas when x^2+y^2 is close to 1, the imaginary part of catanh, 
namely atan2(2*y, 1-x^2-y^2)/2, is close to Pi/4, because the second 
argument of atan2 is close to zero while the first is not.

So even though the absolute error in computing Im(catanh) might be 
much higher than the absolute error in computing Re(clog), the 
relative errors are comparable.
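
A quick standalone illustration of the point, on a system that
already has catanh (the values, not the implementation, are what
matter here):

#include <complex.h>
#include <stdio.h>

int
main(void)
{
        double complex z = 0.6 + 0.8 * I;       /* |z| ~= 1 */

        printf("Re(clog(z))   = %g\n", creal(clog(z)));     /* ~0 */
        printf("Im(catanh(z)) = %g\n", cimag(catanh(z)));   /* ~Pi/4 */
        return (0);
}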




