j0 (and y0) in the range 2 <= x < (p/2)*log(2)

Steve Kargl sgk at troutmask.apl.washington.edu
Sat Sep 8 16:12:22 UTC 2018


On Sat, Sep 08, 2018 at 10:09:36PM +1000, Bruce Evans wrote:
> On Fri, 7 Sep 2018, Steve Kargl wrote:
> > On Sat, Sep 08, 2018 at 12:07:26AM +1000, Bruce Evans wrote:

(a lot of detail removed)

> > interval [2,2.857] we accept whatever the truncated asymptotic
> > series gives for u and adjust v to compensate.  In the next
> > interval, [2.857,4.547], we flip the roles of u and v based on
> > the behavior of cc and ss.  Eventually, we get to the interval
> > [8,inf).  The divergent asymptotic series in 8 <= x < p/2*log(2)
> > give u and v, which yield poor values for j0(x).  There are
> > 4 zeros of j0(x) in this range, and cc and ss have 4 and 3 zeros,
> > respectively.
> 
> I didn't notice at first that you were stating my idea in different
> words here.  I think you did see.

Maybe.  Even after 15 or so years of exchanging emails about
libm issues, it still takes me a bit to unravel your emails.

> > There are some subtleties in your approach that I don't
> > understand, yet!
> 
> The only subtlety seems to be that we didn't understand the full
> awfulness of using the asymptotic formula where it doesn't apply.
> The subtleties mostly go away if we use the simple method of a
> different poly approx on each subinterval.
> 
> I don't like rational function approximations (with a nontrivial
> denominator), and don't fully understand why they are often more
> accurate.  Now I suspect that their main advantage is reducing the
> number of zeros in the approximating function, and that this advantage
> is especially small here since we use small intervals to avoid zeros
> in the function being approximated.  Higher degree polys have a higher
> number of zeros, and these zeros must all be outside of the interval
> except ones matching the zeros of the function being approximated.
> Using a rational function reduces the degree, by using poles instead
> of zeros.  The poles must also be outside of the interval, but this is
> apparently easier to arrange.

You're not the only one who prefers polynomial approximations
over rational approximations!  I went looking for information about
when one method should be used instead of the other.  I've looked
in standard books on numerical analysis (e.g., Ralston and
Rabinowitz, Hildebrand, etc.), but never found a simple rule.  I've
come to this conclusion: try to find a minimax polynomial first,
and fall back to a rational approximation if the polynomial is bad.

Just found https://dlmf.nist.gov/3.11

You might find the Example interesting.
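To make the trade-off concrete, here's a minimal sketch in C of what
the two forms cost at evaluation time.  The coefficient arrays are
placeholders, not values from any real fit: the polynomial is a
single Horner chain of multiply-adds, while the rational form buys
a lower degree at the price of a second chain plus a division.

static double
poly_eval(double x, const double c[5])
{
	/* Horner's scheme: one multiply-add chain, no division. */
	return (c[0] + x * (c[1] + x * (c[2] + x * (c[3] + x * c[4]))));
}

static double
rat_eval(double x, const double p[3], const double q[3])
{
	/* Lower degree top and bottom, but the final division adds
	 * latency and its own rounding error. */
	double num = p[0] + x * (p[1] + x * p[2]);
	double den = q[0] + x * (q[1] + x * q[2]);

	return (num / den);
}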

(discussion of j(x) and j(x+e) removed for brevity).

> >> I noticed that this method cannot work for jn, since u and v depend
> >> on n, so there is no way to generate them before runtime.  jn actually uses
> >> a different method and I suspect that it is much more inaccurate.
> >> My tests don't cover jn since its n arg is an integer and I only
> >> have coverage for 2 real args or 1 complex arg.
> >
> > jn with n > 1 only uses the large argument asymptotic expansion for
> > the case of very large x where p0(x) = 1 and q0(x) = 0.  This then
> > gives (from the comment in e_jn.c)
> >
> >     *     Jn(x) = cos(x-(2n+1)*pi/4)*sqrt(2/(x*pi))
> >
> > The work is in computing cos().  Elsewhere jn is computed via
> > upward or downward recursion.
> 
> I fear that the recursion is very inaccurate.  It seems to do about n
> multiplications, so must have an inaccuracy of >= n/2 ulps.

For the regions where I've used recursion (n  and x < 20000), it
works well.  I have a paper by Olver that discusses the stability
of forward and backward recursion.  You can also find info at

https://dlmf.nist.gov/10.74#iv
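As a rough sketch (not the e_jn.c code itself), the two regimes look
like this.  jn_huge() and jn_upward() are hypothetical names, and per
DLMF 10.74(iv) the forward recurrence is only stable while the order
stays below the argument:

#include <math.h>

/* x >> n*n: the one-term asymptotic form quoted above. */
static double
jn_huge(int n, double x)
{
	return (cos(x - (2 * n + 1) * M_PI / 4) * sqrt(2 / (M_PI * x)));
}

/*
 * Forward recurrence J_{k+1}(x) = (2*k/x)*J_k(x) - J_{k-1}(x), seeded
 * with j0() and j1().  Stable only while k < x; for n > x the same
 * recurrence is run downward from a large starting order instead.
 */
static double
jn_upward(int n, double x)
{
	double jkm1 = j0(x), jk = j1(x), jkp1;
	int k;

	if (n == 0)
		return (jkm1);
	for (k = 1; k < n; k++) {
		jkp1 = (2.0 * k / x) * jk - jkm1;
		jkm1 = jk;
		jk = jkp1;
	}
	return (jk);
}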

> >>> I'm beginning to like my use of the product formula.  It
> >>> allows one to remove the zero from an interval.  I did some
> >>
> >> I haven't seen that.  Is it for the final evaluation or for
> >> determining the approximation?  I don't know of any better way
> >> determining approximations near zeros except to multiply by a
> >> linear factor to remove the zero.
> >>
> >> It is becoming clearer why a separate interval is needed for each zero.
> >> Polynomials can only give good approximations for one zero at a time
> >> except in special cases.
> >
> > I haven't sent the code to you as it's a work in progress.  I did
> > send you some of my testing results for 2 <= x < 4 a couple weeks
> > ago.

Here's an exhaustive test of j0f(x) on 2 <= x < 4:

mobile:kargl[230] ./testf -m 2 -M 4 -E double -D
Interval tested: [2.0000e+00,4.0000e+00]
        ulp <= 0.50:  38.034% ( 3190526) |  38.034% ( 3190526)
0.50 <  ulp <  0.55:   3.435% (  288112) |  41.469% ( 3478638)
0.55 <= ulp <  0.60:   3.343% (  280402) |  44.811% ( 3759040)
0.60 <= ulp <  0.70:   6.358% (  533378) |  51.170% ( 4292418)
0.70 <= ulp <  0.80:   5.903% (  495183) |  57.073% ( 4787601)
0.80 <= ulp <  0.90:   5.460% (  458035) |  62.533% ( 5245636)
0.90 <= ulp <  1.00:   4.960% (  416092) |  67.493% ( 5661728)
1.00 <= ulp <  2.00:  26.941% ( 2259993) |  94.434% ( 7921721)
2.00 <= ulp <  3.00:   5.057% (  424212) |  99.491% ( 8345933)
3.00 <= ulp        :   0.509% (   42675) | 100.000% ( 8388608)
Max ulp: 5.246863 at 3.98081970e+00
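(The 8388608 total is exactly the 2^23 floats in [2,4).)  The kind of
measurement behind the table can be sketched roughly as below; this is
only the idea, not testf itself, using j0() in double as the reference:

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double err, maxulp = 0, ulp, want;
	float x = 2, xmax = 0;

	while (x < 4) {
		want = j0((double)x);		/* reference value */
		err = fabs((double)j0f(x) - want);
		/* one ulp of a float with the reference's magnitude */
		ulp = ldexp(1.0, ilogb(want) - 23);
		if (err / ulp > maxulp) {
			maxulp = err / ulp;
			xmax = x;
		}
		x = nextafterf(x, 4);
	}
	printf("Max ulp: %f at %e\n", maxulp, xmax);
	return (0);
}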

> >  For j0(x), the infinite product is (https://dlmf.nist.gov/10.21)
> >
> > j0(x) = prod(1 - (x/zj)**2), j = 0, 1, 2, ...
> >
> > with z0 the first zero of j0(x), etc.  This can be rewritten and
> > approximated by
> >
> > j0(x) / ((x-z0)*(x+z0)) = R(x) / S(x)
> >
> > While developing the rational approximation, on the LHS we use
> > z0 = z0hi + z0lo.  Both z0hi and z0lo have p-bits of precision.
> > The final result is
> >
> > j0(x) = (x - z0hi - z0lo) * (x + z0) * R(x) / S(x)
> 
> I see.  I think this can only work well near one zero, and reduces to
> my idea for [2, 2.857].

I tried to do 2 <= x < 8, where we have z0 and z1.  I did not
like the results.
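For the single-zero interval, a minimal sketch of the final
evaluation looks like the following.  z0hi is a decimal approximation
to the first zero of J0; z0lo and the R/S coefficients are made-up
placeholders for the fitted constants, and j0_near_z0() is a
hypothetical name.

static const double z0hi = 2.4048255576957728;	/* ~ first zero of J0 */
static const double z0lo = -6.0e-17;		/* placeholder low word */

static double
j0_near_z0(double x)
{
	double u = (x - z0hi) - z0lo;	/* (x - z0) in extra precision */
	double v = x + z0hi;		/* (x + z0); z0lo negligible here */
	double r = -0.16 + 0.005 * x;	/* placeholder for minimax R(x) */
	double s = 1.00 + 0.010 * x;	/* placeholder for minimax S(x) */

	return (u * v * r / s);
}

Note that for x in [2, 2.857], Sterbenz's lemma makes x - z0hi exact,
so the only rounding in forming (x - z0) comes from the z0lo
subtraction.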

>  For simplicity, I used j0(x) = P(x) and omitted
> the details for handling the zero.  Your formula gives a good way to handle
> the zero.  I doubt that the rational function works much better than the
> polynomial.  It is difficult to evaluate the multiplications accurately
> and the division gives further inaccuracies.  The expression should be
> written as:
> 
>      lower terms + (x - z0hi - z0lo) * (x + z0) * P0.
> 
> This still has an inaccuracy of about 1.5 ulps.  To do better, everything
> in this expression would have to be done in extra precision, starting with
> P0 = P0hi + P0lo, but this is too hard.
> 
> It obviously can't work on an infinite interval, since there are an infinite
> number of zeros there, but only a finite number of zeros in R(x) or P(x), and
> poles in 1/S(x).  It's surprising that the asymptotic expansion seems to
> find all the zeros of j0f with errors of less than 4 million ulps (according
> to consistency checks between j0f and j0).

The problem with using an approximation for the infinite product of
j0(x) is that there doesn't seem to be a similar product for y0(x).
I haven't contemplated what to do about y0(x).

-- 
Steve

