standards/142803: j0 Bessel function inaccurate near zeros of the function
Steven G. Kargl
kargl at troutmask.apl.washington.edu
Fri Jan 15 01:53:11 UTC 2010
Bruce Evans wrote:
> On Thu, 14 Jan 2010, Steven G. Kargl wrote:
> > Bruce Evans wrote:
>
> >> Anyway, if you can get anywhere near < 10 ulp error near all zeros using
> >> only an asymptotic method, then that would be good. Then the asymptotic
> >> method would also be capable of locating the zeros very accurately. But
> >> I would be very surprised if this worked. I know of nothing similar for
> >> reducing mod Pi for trigonometric functions, which seems a simpler problem.
> >> I would expect it to at best involve thousands of binary digits in the
> >> tables for the asymptotic method, and corresponding thousands of digits
> >> of precision in the calculation (4000 as for mpfr enough for the 2**100th
> >> zero?).
> >
> > The 4000-bit setting for mpfr was a hold over from testing mpfr_j0
> > against my ascending series implementation of j0 with mpfr
> > primitives. As few as 128-bits is sufficient to achieve the
> > following:
> >
> >>> x my j0f(x) libm j0f(x) MPFR j0 my err libm err
> > 2.404825 5.6434398E-08 5.9634296E-08 5.6434400E-08 0.05 152824.59
> > 5.520078 2.4476657E-08 2.4153294E-08 2.4476659E-08 0.10 18878.52
> > 8.653728 1.0355303E-07 1.0359805E-07 1.0355306E-07 0.86 1694.47
> > 11.791534 -3.5291243E-09 -3.5193941E-09 -3.5301714E-09 75.93 781.53
>
> Wonder why this jumps.
Below x=10, I use the ascending series. Above x=10, I use an asymptotic
approximation (AA). x = 11.79... is the first zero I hit with the AA.
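(For reference, the ascending series is the Taylor expansion of J0 about zero. The sketch below is my own illustration in Python, not the libm implementation under discussion:)

```python
import math

def j0_series(x, max_terms=50):
    """Ascending (Taylor) series for J0:
        J0(x) = sum_{k>=0} (-1)^k (x/2)^(2k) / (k!)^2
    Converges for all x, but cancellation between the alternating
    terms destroys accuracy as x grows."""
    term = 1.0            # k = 0 term
    total = 1.0
    q = (x / 2.0) ** 2
    for k in range(1, max_terms):
        term *= -q / (k * k)                  # ratio of consecutive terms
        total += term
        if abs(term) < 1e-17 * abs(total):    # converged to double precision
            break
    return total
```

The series converges for every x, but near a zero of the function the partial sums are much larger than the tiny result, so the relative error there can be huge -- which is why switching to the AA above x=10 makes sense.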
> > 14.930918 -6.4815082E-09 -6.3911618E-09 -6.4815052E-09 0.23 6722.88
> > 18.071064 5.1532352E-09 5.3149818E-09 5.1532318E-09 0.23 10910.50
> > 21.211637 -1.5023349E-07 -1.5002509E-07 -1.5023348E-07 2.70 56347.01
> > ...
> >
> > As I suspected by adding additional terms to the asymptotic
> > approximation and performing all computations with double
> > precision, reduces 'my err' (5th column). The value at
> > x=11.7... is the best I can get. The asymptotic approximations
> > contain divergent series and additional terms do not help.
>
> The extra precision is almost certainly necessary. Whether double
> precision is nearly enough is unclear, but the error near 11.7 suggests
> that it is nearly enough except there. The large error might be caused
> by that zero alone (among small zeros) being very close to a representable
> value.
The AA is j0(x) ~ sqrt(2/(pi*x)) * (P(x)*cos(x-pi/4) - Q(x)*sin(x-pi/4)),
where P(x) and Q(x) are divergent asymptotic series. I use the first 11
terms of P(x) and Q(x) to achieve the 'my err' = 75.9. Additional terms
increase 'my err', as do fewer terms. This is probably the limit of
double precision.
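(This is Hankel's expansion; a sketch of it in Python, mine rather than the actual code, with the coefficients generated by their standard recurrence:)

```python
import math

def j0_asym(x, nterms=11):
    """Hankel's asymptotic expansion for J0 (DLMF 10.17):
        J0(x) ~ sqrt(2/(pi*x)) * (P(x)*cos(x - pi/4) - Q(x)*sin(x - pi/4))
        P(x) = sum_m (-1)^m a_{2m}   / x^{2m}
        Q(x) = sum_m (-1)^m a_{2m+1} / x^{2m+1}
        a_0 = 1,  a_{k+1} = a_k * -(2k+1)^2 / (8*(k+1))
    Both series are divergent: past the optimal truncation point the
    terms grow again and accuracy degrades."""
    a = 1.0                      # a_k, starting at a_0
    P = Q = 0.0
    for k in range(2 * nterms):  # nterms terms each for P and Q
        m = k // 2
        term = (-1.0) ** m * a / x ** k
        if k % 2 == 0:
            P += term            # even k contributes to P
        else:
            Q += term            # odd k contributes to Q
        a *= -(2 * k + 1) ** 2 / (8.0 * (k + 1))
    w = x - math.pi / 4.0
    return math.sqrt(2.0 / (math.pi * x)) * (P * math.cos(w) - Q * math.sin(w))
```

Truncating at 11 terms is consistent with the behavior described above: near x = 10..12 the smallest term of the series is roughly 1e-10, so that is about the best absolute accuracy available there, and terms beyond the optimal truncation point only make it worse.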
I haven't investigated the intervals around the values I listed, so
there may be larger errors yet to be found.
BTW, the MPFR website has a document that describes all their algorithms.
They claim that the AA can be used for |x| > p*log(2)/2 where p is
the precision of x; however, in the mpfr code the criterion is |x| > p/2.
http://www.mpfr.org/algo.html
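(To put numbers on the two criteria, for the p = 128 bits used in the test above:)

```python
import math

p = 128                      # working precision in bits, as in the test above
print(p * math.log(2) / 2)   # documented criterion: AA usable for |x| > ~44.4
print(p / 2)                 # criterion in the mpfr source: |x| > 64
```

So the bound in the code is simply the more conservative of the two.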
> I forgot to mention that the error table in my previous mail is on amd64,
> comparing float precision functions with double precision ones, assuming
> that the latter are correct, which they aren't, but they are hopefully
> correct enough for this comparison. The errors on i386 are much larger,
> due to i386 still using the i387 hardware trigonometric instructions, which
> are extremely inaccurate near zeros, starting at the first zero. Here are
> both tables:
My values are also computed on amd64. I seldom use i386 for numerical
work. A quick change to my test code to use the double precision j0()
suggests that it has sufficient precision for the comparison you've
done.
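(For concreteness, the error columns in the tables above are presumably measured in ulps of the single-precision result against the high-precision reference; a sketch of one way such a harness helper might look -- my assumption, not the actual test code:)

```python
import math
import struct

def to_f32(x):
    """Round a Python double to the nearest IEEE single (what j0f returns)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def ulp32(x):
    """Size of one single-precision ulp at |x| (normal range only)."""
    _, e = math.frexp(abs(x))        # x = m * 2**e with 0.5 <= m < 1
    return math.ldexp(1.0, e - 24)   # 2**(e-24) = one ulp of float32 at x

def ulp_err(got, ref):
    """Error of a float32 result 'got' against reference 'ref', in ulps."""
    return abs(got - ref) / ulp32(ref)
```

For example, ulp_err(to_f32(1.0 + 2**-23), 1.0) is exactly 1.0, one single-precision ulp at 1.0.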
--
Steve
http://troutmask.apl.washington.edu/~kargl/
More information about the freebsd-standards mailing list