# j0 (and y0) in the range 2 <= x < (p/2)*log(2)

Bruce Evans brde at optusnet.com.au
Fri Sep 7 14:07:32 UTC 2018

```
On Thu, 6 Sep 2018, Steve Kargl wrote:

> On Fri, Sep 07, 2018 at 02:02:20AM +1000, Bruce Evans wrote:
>> ...
>> I now think that no single fudge factor works for both P0 and Q0.  P0
>> and Q0 have very different characteristics.  Merely choosing the
>> ...
>> To find different fudge factors, I would try starting with a common one
>> and then iteratively adjust to different ones.
>>
>> j0 has a zero in the middle of at least the first subinterval, and the
>> relative error should be minimized for this.  I think the choice of
>> subintervals is related to this.  With only one zero per subinterval,
>> the relative error near it can be minimized without messing up the
>> relative error near other zeros.  The adjustment for this is absolutely
>> tiny, so it is easier to arrange that it doesn't mess up the absolute
>> error too much.

In fact, this must more or less work, and the difficulty of making it work
is not specific to j0, except that j0 gives an example of a nontrivial formula.
cc and ss are held fixed.  The best choice of u is not known, but if we
can get fairly close to it (perhaps even off by a factor of 2), then we can
throw away any previous approximation for v and solve for v (in high
precision), then approximate that.  There must be no singularities for this
to work in high precision.  We expect to avoid the singularities by some
combination of starting with u close enough, keeping the interval small,
and special handling of zeros.

Here the formula for v is something divided by ss, so we want to avoid
zeros of ss.  ss is sin(x) - cos(x), which stays close to sqrt(2) over the
entire interval [2, 2.857], so there is no problem with pole singularities.
In the next interval, ss has a zero in the middle of the interval, so we
should switch the roles of u and v so as to divide by cc instead (it is
not so close to -sqrt(2)).

Some magic is needed to make the resulting v easy to approximate.
Hopefully not much more than a good u and a small interval.

Then there is the known zero in the middle of the interval.  v should be
chosen so that it at least gives the correct sign for the FP values on each
side of the zero.

I noticed that this method cannot work for jn, since u and v depend on n,
so there is no way to generate them before runtime.  jn actually uses a different
method and I suspect that it is much more inaccurate.  My tests don't
cover jn since its n arg is an integer and I only have coverage for 2
real args or 1 complex arg.

> I'm beginning to like my use of the product formula.  It
> allows one to remove the zero from an interval.  I did some

I haven't seen that.  Is it for the final evaluation or for determining
the approximation?  I don't know of any better way of determining approximations
near zeros except to multiply by a linear factor to remove the zero.

It is becoming clearer why a separate interval is needed for each zero.
Polynomials can only give good approximations for one zero at a time
except in special cases.

> code spelunking and found
>
> #ifndef lint
> static char sccsid[] = "@(#)j0.c 4.2 (Berkeley) 8/21/85";
> #endif not lint
>
> Found at http://www.retro11.de/ouxr/43bsd/usr/src/usr.lib/libm/j0.c.html
>
> The algorithm is completely different than what we have in libm.

The one in Net/2 (1992) is the same as now, all the way down to style bugs
like spelling qzero as pzero in 1 place in the comment about qzero.

Bruce
```