4

In an answer to the question Fastest way to calculate $e^x$ upto arbitrary number of decimals? there is a description of a method by which the number of terms needed to calculate $e^x$ to a given precision can be halved:

Although the exponential function is not even (or odd), it satisfies a relation: $$e^{-x} = 1/e^x$$ that implies it is a simple rational transformation of an even function:

$$e^x = \frac{1 + x\,f(x^2)}{1 - x\,f(x^2)}$$ which may be solved for a power series or continued fraction approximation of $f(z)$. This is the idea of symmetry as it applies to the exponential function. Not only does it reduce the evaluation of the exponential function on the whole real line to just the positive (or the negative) half, it also "contracts" the number of terms needed for a given accuracy by half (retaining only $x^2$ terms in the expansion).

Unfortunately, I am not able to understand how to "solve for the power series approximation of $f(z)$".

The context is that I am trying to improve the performance of code that calculates $e^x$ to arbitrary precision.

MikeM

2 Answers

4

Let's write:

$$ e^x = \frac{1+g(x)}{1-g(x)} $$

We can then solve for $g(x)$ as follows:

$$ e^x(1 - g(x)) = 1 + g(x) $$

$$ e^x - 1 = (e^x + 1)g(x) $$

$$ g(x) = \frac{e^x - 1}{e^x + 1} = \tanh(\frac{x}{2}) $$

Now $g(x)$ is an odd function, analytic in a neighborhood of zero, and so $g(x) = x f(x^2)$ where $f(x)$ has a Taylor series expansion:

$$ g(x) = \sum_{n=1}^\infty \frac{2(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!} $$

$$ f(x) = \sum_{n=1}^\infty \frac{2(2^{2n}-1)B_{2n}x^{n-1}}{(2n)!} $$

where the $B_n$ are the Bernoulli numbers.

Since the Taylor series of $g(x)$ contains only odd powers of $x$, whereas the Taylor series of $e^x$ has nonzero coefficients for all powers of $x$, we "economize" in evaluating the series for $g(x)$: roughly half as many terms are needed to reach the same degree of approximation locally at $x=0$.

There are two caveats. A minor point is the cost of the extra divide (and add/subtract) to get $e^x$ once $g(x)$ has been (approximately) computed. Of greater importance is the finite radius of convergence of the Taylor series for $g(x)$ versus the infinite radius for $e^x$. The above series for $g(x)$ converges when $|x| \lt \pi$, so if we combine the odd series evaluation for $g(x)$ with the range reduction described in answering the previous Question, any precision desired may be obtained.

Evaluation of truncated Taylor series (polynomials) is of course attractive for the small amount of code required. However, for future reference, consider the generalized continued fraction expansion of $e^x$, in which every partial numerator below the top one depends on $x^2$ rather than $x$. It turns out there is a forward recurrence relation for evaluating generalized continued fractions that allows us to incrementally include more nested terms until a desired accuracy is achieved.
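As an illustrative double-precision sketch of this route (the name `exp_cf` is mine), one standard such expansion is $e^x = 1 + \cfrac{2x}{2 - x + \cfrac{x^2}{6 + \cfrac{x^2}{10 + \cfrac{x^2}{14 + \ddots}}}}$, evaluated here by the forward recurrence for convergents:

```python
import math

def exp_cf(x, max_terms=40, tol=1e-15):
    """e^x = 1 + 2x/(2 - x + x^2/(6 + x^2/(10 + x^2/(14 + ...)))).
    Convergents p_n/q_n are built by the forward recurrence
        p_n = b_n p_{n-1} + a_n p_{n-2},  q_n = b_n q_{n-1} + a_n q_{n-2},
    adding terms until successive convergents agree to within tol."""
    # b_0 = 1; (a_1, b_1) = (2x, 2 - x); (a_n, b_n) = (x^2, 4n - 2), n >= 2
    p_prev, p = 1.0, 1.0          # p_{-1}, p_0
    q_prev, q = 0.0, 1.0          # q_{-1}, q_0
    value = p / q
    a, b = 2.0 * x, 2.0 - x
    for n in range(1, max_terms + 1):
        p, p_prev = b * p + a * p_prev, p
        q, q_prev = b * q + a * q_prev, q
        new_value = p / q
        if abs(new_value - value) <= tol * abs(new_value):
            return new_value
        value = new_value
        a, b = x * x, 4.0 * (n + 1) - 2.0
    return value
```

The first convergent is $(2+x)/(2-x)$, and each further term appends one more $x^2$ level; note that below the top level only $x^2$ appears, in keeping with the symmetry idea.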


Added:

The Comments brought out the issue of computing the even-index Bernoulli numbers $B_{2n}$ for the coefficients of the Taylor series expansion, and the fact that they are more difficult to produce than the reciprocal factorials of the usual exponential power series. So for an arbitrary precision routine, in which all the coefficients must be obtained "on the fly", it might be objected that this is "one step forward, two steps back".

The basic recursion that produces $B_n$ in terms of all the preceding entries:

$$ B_n = -n!\sum_{k=0}^{n-1} \frac{B_k}{k!\,(n-k+1)!} $$

needs $O(n)$ arithmetic steps to get $B_n$ from $B_0,B_1,\ldots,B_{n-1}$, so $O(n^2)$ steps to get all $B_0,\ldots,B_n$ (although we are helped by the fact that odd-indexed $B_k = 0$ when $k \gt 1$). In terms of bit-operations this winds up being $O(n^{3+o(1)})$ complexity, so effectively a "cubic" order algorithm.
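A direct transcription of this recursion in exact rational arithmetic (the helper name `bernoulli_list` is mine):

```python
from fractions import Fraction
from math import factorial

def bernoulli_list(m):
    """B_0 .. B_m by the basic recursion
        B_n = -n! * sum_{k=0}^{n-1} B_k / (k! (n-k+1)!),
    using exact Fractions: O(n) operations per B_n, O(n^2) overall."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        acc = Fraction(0)
        for k in range(n):
            if k > 1 and k % 2 == 1:
                continue          # odd-index Bernoulli numbers vanish (k > 1)
            acc += B[k] / (factorial(k) * factorial(n - k + 1))
        B[n] = -factorial(n) * acc
    return B
```

The Fraction denominators grow quickly, which is where the cubic bit-complexity comes from.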

Brent and Harvey (2011) showed ("Fast computation of Bernoulli, Tangent and Secant numbers") that we can do better, giving algorithms for the first $n$ such numbers with $O(n^2\cdot(\log n)^{2+o(1)})$ bit-operations. Particularly attractive is the method for computing the Tangent numbers, which are all positive integers:

$$ T_n = \frac{(-1)^{n-1}2^{2n}(2^{2n}-1)}{2n} B_{2n} $$

since this nicely incorporates much of the expression for coefficients we required above.

The specific details are described in a 2012 talk by David Harvey. One drawback is that the faster speed of this algorithm depends on knowing in advance the extent of terms $n$ that will be needed.
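For reference, here is my Python transcription of the in-place integer recurrence for tangent numbers from the Brent–Harvey paper; as noted, the count $n$ must be fixed up front:

```python
def tangent_numbers(n):
    """T_1 .. T_n by the O(n^2) in-place recurrence of Brent & Harvey,
    entirely in exact (arbitrary-size) integer arithmetic."""
    T = [0] * (n + 1)             # T[0] is unused
    T[1] = 1
    for k in range(2, n + 1):     # initialize T[k] = (k-1)!
        T[k] = (k - 1) * T[k - 1]
    for k in range(2, n + 1):     # after sweep k, T[k] holds its final value
        for j in range(k, n + 1):
            T[j] = (j - k) * T[j - 1] + (j - k + 2) * T[j]
    return T[1:]
```

The first values are $1, 2, 16, 272, 7936, \ldots$, and the coefficient of $x^{2n-1}$ in $g(x)$ works out to $(-1)^{n-1} T_n / (2^{2n-1}(2n-1)!)$, so these integers plug directly into the series above.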

In principle one could use those values to go further by falling back on the basic recursion to obtain a few additional terms. In this connection note the earlier Question, A recursion similar to the one for Bernoulli numbers. See also the Wikipedia article on Alternating permutations, whose counting is directly connected to the Tangent numbers (and the Secant numbers).

hardmath
  • Thanks. From the above I get the first five terms of $g(x)$ as $x/2 - x^3/24 + x^5/240 - 17x^7/40320 + 31x^9/725760$. A quick check shows it seems to work, but as well as the extra division, addition and subtraction, each term looks like it would be more than twice as expensive to calculate than those of $e^x = \sum_{n = 0}^{\infty} x^n/n!$, unless I had them already simplified as above, which is not possible. – MikeM Oct 03 '13 at 14:51
  • @MikeM: The Bernoulli numbers are not as easy to compute as the respective coefficients in the exponential Taylor series, but hardly anything could be as simple as that. I take it you want to compute the power series coefficients "on the fly". Perhaps it is time to ask more about what "arbitrary precision" means in your application. At some point design has to take into account whether you will be computing 100 digits of precision, or one million! – hardmath Oct 03 '13 at 18:17
  • Arbitrary-precision here means to any precision, but realistically, as it's JavaScript, to perhaps 1000 digits. I'm using it for exponentiation. After the argument reduction, which your previous answer helped me with, I'm reasonably happy with the performance as it is. There is also the $e^x = \sinh(x) + \sqrt{1 + \sinh^2(x)}$ to be considered, as that also gets rid of half the terms. Then there's the continued fractions you advocate... I'll need to test each. – MikeM Oct 03 '13 at 18:37
2

What I've used in the past is Padé acceleration. There are different orders you can use, i.e., how many terms to compute. For example:

$$ e^z \sim \frac{1 + \frac{1}{2} z + \frac{1}{12}z^2}{1 - \frac{1}{2}z + \frac{1}{12} z^2} $$

is just one of many Padé approximants you can select. Another option is the Shanks transformation. Information on both can be found online.
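For illustration, the (2,2) approximant above in code (double precision; it is only accurate near $z = 0$, so in practice it would be combined with argument reduction; the name `exp_pade22` is mine):

```python
import math

def exp_pade22(z):
    """(2,2) Pade approximant of e^z: matches the Taylor series of e^z
    through the z^4 term, with leading error about z^5/720 near z = 0."""
    num = 1 + z / 2 + z * z / 12
    den = 1 - z / 2 + z * z / 12
    return num / den
```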

Paul Safier

hejseb