Use of C99 extra long double math functions after r236148

Steve Kargl sgk at troutmask.apl.washington.edu
Tue Jul 10 20:53:12 UTC 2012


On Tue, Jul 10, 2012 at 01:11:38PM -0500, Stephen Montgomery-Smith wrote:
> On 07/10/2012 11:50 AM, Steve Kargl wrote:
> >On Tue, Jul 10, 2012 at 11:39:59AM -0500, Stephen Montgomery-Smith wrote:
> >>On 07/09/2012 12:02 AM, Steve Kargl wrote:
> >>
> >>>Yep.  Another example is the use of upward recursion to compute
> >>>Bessel functions where the argument is larger than the order.
> >>>The algorithm is known to be unstable.
> >>
> >>By upward recursion, do you mean equation (1) in
> >>http://mathworld.wolfram.com/BesselFunction.html?
> >
> >Yes.
> >
> >>So what do people use?  Maybe something like
> >>http://en.wikipedia.org/wiki/Bessel_function#Asymptotic_forms (second
> >>set of equations), but finding some asymptotics with a few extra terms
> >>in them?
> >
> >They use downward recursion, which is known to be stable.
> >NIST has revised Abramowitz and Stegun, and it is available
> >on line.  For Bessel function computations, look at
> >http://dlmf.nist.gov/10.74
> >and more importantly example 1 under the following link
> >http://dlmf.nist.gov/3.6#v
> 
> These algorithms are going to be subject to the same problems as using 
> Taylor's series to evaluate exp(x) for x>0.  The computation will 
> require several floating point operations.  Even if the method is not 
> unstable, I would think getting a ULP of about 1 is next to impossible, 
> that is, unless one performs all the computations in a higher precision, 
> and then rounds at the end.

Downward recursion is used for n > 1.  j0(x) and j1(x)
are computed using two different schemes.  If x <= 2, 
then the ascending series can be summed (this only
takes somewhere around 10 terms for double).  Alternatively,
one can use a minimax polynomial for x in [0:2].  For
x > 2, the Hankel expansions are used (see 10.17.3 at
http://dlmf.nist.gov/10.17).  The summations are approximated
by minimax polynomials.  See msun/src/s_j0.c.

I've tested j0f() and know that one can get less than 1
ULP over various ranges of x.  I also know that when x is
close to a zero of j0(x), the error can be on the order
of 1000000 ULP: the absolute error stays roughly constant
while j0(x) itself approaches zero, so the relative error
blows up.

> Whereas, as a scientist, having a Bessel function computed to within 10 
> ULP would be just fine.

As someone who has spent a portion of my career computing 
Bessel functions, I am disinclined to agree with your
statement.

> I am looking at the man page of j0 for FreeBSD-8.3.  It says in conforms 
> to IEEE Std 1003.1-2001.  Does that specify a certain ULP?

I doubt that it would specify ULP for any function other than
perhaps sqrt().  I don't have IEEE Std 1003.1-2001 handy,
but I would guess that it simply states that the math functions
return a floating point approximation to the true value.

> I am 
> searching around in this document, and I am finding nothing.  Whereas 
> the IEEE-754 document seems rather rigid, but on the other hand it 
> doesn't specifically talk about math functions other than sqrt.

-- 
Steve


More information about the freebsd-current mailing list