What are other, faster methods of calculating $e^x$ up to any number of decimals, besides using the Taylor series formula?
-
I'm told the series converges rapidly and is well-conditioned. Have you tried that? – Zhen Lin Jan 21 '11 at 17:33
-
Why not use the standard series expansion $\sum_i x^i/i!$? I'm not sure, but it may pay off to first convert to a value in $[0,1)$, e.g. to calculate $e^{10}$, calculate $(e^{1/2})^{20}$. – Myself Jan 21 '11 at 17:34
-
For what values of $x$ are you trying to calculate $e^x$? The exponential series itself converges pretty fast for small values of $x$. Are you looking for something faster than that? Is $x$ likely to be a large number? – svenkatr Jan 21 '11 at 17:36
-
I believe Haskell supports it. And many programming languages have standard libraries which support it, e.g. Java has java.math.BigDecimal. – Peter Taylor Jan 21 '11 at 18:10
-
@Peter: Haskell (and Python) have arbitrary precision integers, but not arbitrary precision floating point, so one still has a little bit of plumbing to do. – Hans Lundmark Jan 21 '11 at 19:10
-
I don't know about BigDecimal in Java, but I am familiar with Python's long, which is similar: it can do addition and multiplication of large integers. But when it comes to decimal numbers with a long fractional part, it often rounds them, and I'm pretty sure this is the case with Java too... So the conclusion is that you cannot use this series method for finding e^x up to a large number of decimals, because you have to redefine all the elementary operations again, and this formula thus, believe me, takes a very large amount of time. – Pushpak Dagade Jan 22 '11 at 09:50
-
I used the generalized continued fraction method (http://en.wikipedia.org/wiki/Generalized_continued_fraction) to find e up to a large number of decimals, and it is way faster than this series method... The only problem is that I don't know how deep I should go to find e^x up to a particular number of decimal places... I usually have to find it for large values... So if someone can help me with this it would be great!! – Pushpak Dagade Jan 22 '11 at 09:55
-
To evaluate $e^x$ for large values of $x$, use the idea of range reduction explained in my answer. You'll need to evaluate the constant $\ln 2$ to an accuracy corresponding to what your computation requires, but since it's a constant, you may only need to compute and store it once. I'll add a note on how to get $\ln 2$ to my answer. – hardmath Jan 22 '11 at 14:39
-
The GNU library for bignum deals with rational numbers, and it should be usable from lots of languages. – Mariano Suárez-Álvarez Feb 02 '11 at 04:02
6 Answers
If I wanted to implement an arbitrary precision approximation to $e^x$ from "scratch" (building on an arbitrary precision numeric library such as GMP) with faster convergence than Taylor series/polynomial approximations, I would design around continued fraction/rational function approximations.
Just as the precision given by the well-known Taylor series can be extended as required by using additional terms, so too will the continued fraction expansion give any required precision by using additional convergents.
At first it may seem that evaluating the continued fraction approximation entails a significant disadvantage in comparison with the power series: additional terms can be added to the partial sums of the power series to obtain left-to-right sequential approximations. But the same effect can be achieved with continued fractions, namely by computing the successive convergents from the partial numerators and denominators through a recurrence relation.
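Here is a minimal sketch of that recurrence (my own illustration, in Python with exact rationals; the demo terms are the standard simple continued fraction of $e$, but the function itself is generic):

```python
from fractions import Fraction

def convergents(a, b):
    """Yield convergents h_n/k_n of b0 + a1/(b1 + a2/(b2 + ...)) via the
    fundamental recurrence h_n = b_n*h_{n-1} + a_n*h_{n-2} and
    k_n = b_n*k_{n-1} + a_n*k_{n-2}, so approximations come out
    left-to-right, just like partial sums of a power series."""
    h_prev, h, k_prev, k = 1, b[0], 0, 1
    yield Fraction(h, k)
    for a_n, b_n in zip(a[1:], b[1:]):
        h, h_prev = b_n * h + a_n * h_prev, h
        k, k_prev = b_n * k + a_n * k_prev, k
        yield Fraction(h, k)

# Demo: the simple continued fraction of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
b = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]
a = [0] + [1] * (len(b) - 1)
for c in convergents(a, b):
    print(c, float(c))   # 2, 3, 8/3, 11/4, 19/7, 87/32, ... -> 2.71828...
```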
There are a couple of important ideas worth considering even if another method for evaluating the exponential function were to be used on a bounded interval such as $[0,1]$, as for example power series or even lookup tables(!). One is symmetry of the exponential function, and the other is a method of range reduction.
Although the exponential function is not even (or odd), it satisfies a relation:
$$e^{-x} = 1/e^x$$
that implies it is a simple rational transformation of an even function:
$$e^x = \frac{1 + x\,f(x^2)}{1 - x\,f(x^2)}$$
which may be solved for a power series or continued fraction approximation of $f(z)$. This is the idea of symmetry as it applies to the exponential function. Not only does it reduce the evaluation of the exponential function on the whole real line to just the positive (or the negative) half, it also "contracts" the number of terms needed for a given accuracy by half (retaining only $x^2$ terms in the expansion).
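Making the identification explicit (a small step not spelled out above): solving the displayed relation for $x\,f(x^2)$ gives
$$x\,f(x^2) = \frac{e^x - 1}{e^x + 1} = \tanh(x/2),$$
so $f$ is the even function $\tanh(x/2)/x$, and any power series or continued fraction for $\tanh$ (such as Lambert's) yields the desired expansion.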
Although a continued fraction or Taylor series approximation to the exponential function may converge for arbitrary real (or complex) values, the rate of convergence is not uniform. The Taylor series converges faster the closer the argument is to zero. Thus a range reduction is important for expressing the exponential function at an arbitrary real value in terms of a value in some interval like $[0,1]$. For example, if the floating point arithmetic is binary (radix 2), it can be especially convenient to use the familiar law of exponents with multiples of $\ln 2$:
$$e^x = 2^k \, e^r \quad \text{where} \quad x = r + k \ln 2$$
which allows $e^x$ to be evaluated, up to a change in binary exponent, using an approximation that converges rapidly over $[0,\ln 2]$.
Combining these two ideas (symmetry, range reduction), the speed of convergence only matters on the interval $[0, \ln 2/2]$. Limiting the interval of evaluation may allow you to determine in advance how many terms of the continued fraction or Taylor series expansion have to be retained to obtain a desired accuracy. When this is known, the evaluation can be done more efficiently (backward recursion for continued fractions, or Horner's method for truncated Taylor series/polynomials) than if we were forced to continually introduce further terms until the desired accuracy is reached.
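A minimal sketch of the combination follows (my own, in Python with exact rationals). The 30-digit constant `LN2` and Lambert's continued fraction $\tanh z = z/(1 + z^2/(3 + z^2/(5 + \cdots)))$ are choices made for illustration, not prescribed by the answer:

```python
from fractions import Fraction

LN2 = Fraction(693147180559945309417232121458, 10**30)  # ln 2 to 30 digits

def exp_reduced(x, terms=25):
    """e^x = 2^k * e^r with x = r + k*ln2, r in [0, ln2); e^r is then
    evaluated through the symmetry e^r = (1+t)/(1-t), t = tanh(r/2),
    using a fixed-depth backward recursion of Lambert's continued
    fraction (only even powers of r appear).  Assumes x >= 0; overall
    accuracy is capped by the precision of the LN2 constant."""
    x = Fraction(x)
    k = int(x / LN2)                  # range reduction: integer part
    r = x - k * LN2                   # r in [0, ln2)
    z = r / 2
    d = Fraction(2 * terms - 1)       # backward recursion, fixed depth
    for n in range(terms - 1, 0, -1):
        d = (2 * n - 1) + z * z / d
    t = z / d                         # t = tanh(r/2) to high accuracy
    return 2**k * (1 + t) / (1 - t)   # exact rational approximation

print(float(exp_reduced(10)))         # ~22026.4657948067
```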
Added:
The "faster" continued fraction I had in mind is formula (11.1.2) here:
[Handbook of continued fractions for special functions (Cuyt et al, 2008)]
http://books.google.com/books?id=DQtpJaEs4NIC&dq=exponential+function+continued+fraction+convergence
Their reference is to this modern classic:
The application of continued fractions and their generalizations to problems in approximation theory
A.N. Khovanskii, 1963 (P.Noordhoff)
A neat webMathematica-based site by Andreas Lauschke presents some ideas for accelerating convergence of continued fractions by "contractions". The IP address changes from time to time and cannot be used as a link within StackExchange, but you can Google for it:
[Convergence Acceleration with Canonical Contractions of Continued Fractions: Exponential function -- Andreas Lauschke Consulting]
I have some notes on his formulas (derived by contraction from the one noted above) if that would be helpful. Some related material was contributed to Wolfram Demonstrations.
Computing the constant $\ln 2$
Generations of math students have been introduced to the concept of conditional versus absolute convergence by the example of the alternating harmonic series:
$$\ln 2 = 1 - 1/2 + 1/3 - 1/4 + ...$$
Of course this series, derived from a power series expansion of $\ln x$ around $x = 1$, has such slow convergence that even if we combine consecutive pairs of terms:
$$\ln 2 = 1/2 + 1/12 + 1/30 + ...$$
the resulting absolutely convergent series is useless for obtaining arbitrary precision values of $\ln 2$. For convenience, the first seven partial sums are:
0.50000000...
0.58333333...
0.61666666...
0.63452381...
0.64563492...
0.65321068...
0.65870518...
Since $\ln 2$ is 0.69314718..., we have an error of about a third of a unit in the first decimal place. In other words, barely one correct decimal place.
It therefore makes a striking contrast with the nice convergence of a continued fraction expansion of the same value:
$$\ln 2 = \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{4}{4 + \cfrac{4}{5 + \cfrac{9}{6 + \cfrac{9}{7 + ...}}}}}}}$$
The first seven convergents are:
1.00000000...
0.66666666...
0.70000000...
0.69230769...
0.69333333...
0.69312169...
0.69315245...
The error here is about half a unit in the fifth decimal place.
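To reproduce these convergents, here is a small sketch (my own, in Python) that evaluates the continued fraction above by backward recursion; the pattern of partial numerators $1, 1, 1, 4, 4, 9, 9, 16, 16, \ldots$ and partial denominators $1, 2, 3, \ldots$ is read off the display:

```python
from fractions import Fraction

def ln2_cf(depth):
    """Backward recursion for ln 2 = 1/(1 + 1/(2 + 1/(3 + 4/(4 + ...)))).
    Partial denominators are 1, 2, 3, ...; partial numerators are
    a_1 = 1 and a_n = floor(n/2)**2 for n >= 2."""
    tail = Fraction(0)
    for n in range(depth, 0, -1):
        a_n = (n // 2) ** 2 if n >= 2 else 1
        tail = Fraction(a_n) / (n + tail)
    return tail

for d in range(1, 8):
    print(d, float(ln2_cf(d)))   # 1.0, 0.666..., 0.7, ..., 0.6931524...
# ln 2 = 0.6931471805599453...
```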

- 37,015
-
Please explain further the "idea of symmetry" that allows the number of terms needed to be halved. I don't understand it. – MikeM Oct 01 '13 at 17:22
-
@MikeM: Briefly the function $e^x$ can be expressed as $(1+g(x))/(1-g(x))$ where $g(x)$ is an odd function of $x$. Try replacing $g(x)$ with $g(-x)$ and you will see it implies the "symmetry" $e^{-x} = 1/e^x$. Now a power series expansion of $g(x)$ will have only odd powers, or $g(x) = x f(x^2)$. Thus a series expansion of $g(x)$ has only half as many nonzero terms as one for $e^x$. If you want a detailed comparison of op counts for equal accuracy, please post that as a Question so it can be addressed at length. – hardmath Oct 02 '13 at 12:56
-
@MikeM: Note that this "economy" of using only even powers of $x$ can be realized by generalized continued fraction as well as power series expansions of $e^x$. – hardmath Oct 02 '13 at 13:28
-
To compute the range reduction don't you also need to compute log2(e)? Most algorithms I see compute k = floor(log2(e) * x) and r = x - k*log(2) for exp(x). – Eric Lagergren Jan 10 '18 at 02:38
-
@EricLagergren: Yes, we do need such a constant. The section I devote above to arbitrary precision approximation of $\ln 2$ corresponds to getting $\log_2(e) = 1/\ln 2$. – hardmath Jan 10 '18 at 03:38
-
Thanks for the response. Not sure how I missed that :( That saves a lot of extra complexity! – Eric Lagergren Jan 10 '18 at 03:45
-
@hardmath I should mention: in my own tests for my arbitrary precision library I found that using $$\exp(x) = \left(e^{r}\right)^{10^k} \quad\text{where}\quad x = r \cdot 10^k$$ provides much better performance. If $r$ is reduced to $[0, 1)$ and $k \ge 0$, you can skip generating the constant, which required either a lot of overhead or a huge constant. – Eric Lagergren Jan 10 '18 at 05:34
-
@hardmath Sorry, I edited the notation. Forgot I could use it in comments until I had already posted it. – Eric Lagergren Jan 10 '18 at 05:43
-
With a large enough constant, your original reduction algorithm (sorry if I use words improperly, I'm not a mathematician) was around 12% faster than the method I just described. However, once the required precision exceeded the precision of the constant, the improvement vanished—the overhead of generating a new log(2) was too much. – Eric Lagergren Jan 10 '18 at 05:57
-
Would you take a different approach for $x^\alpha$ or would you settle for $e^{\alpha\cdot \ln x}$? I'm aware of a similar symmetrical expansion of $\ln x$. – Feb 12 '18 at 20:41
-
@Robert: It's less clear. With $e^x$ there is only one variable/argument, while for $x^\alpha$ we conceivably have two arguments that can vary. The context matters. If you needed a "one size fits all" function, then calling $e^{\alpha \ln x}$ would be attractive, but if $\alpha$ is fixed, I'd be inclined to consider a tailored approach. – hardmath Feb 12 '18 at 21:20
-
@hardmath I had in mind a fixed $\alpha$. Hopefully instead of creating a new question/answer pair, is there something specific I should look at? Newton's method creates the chicken-and-the-egg problem. – Feb 12 '18 at 21:48
-
@Robert: Our goal here is an arbitrary precision computation. For fixed $\alpha$, an arbitrary precision would require an exact value for $\alpha$. This suggests a rational value (or better, an integer?) for $\alpha$, so that $x^\alpha$ is algebraic (and Newton's method is feasible). If you have a chicken-and-egg problem related to that, I'd love to see it created as a new Question. – hardmath Feb 13 '18 at 01:39
-
Coming back to this several years later (from another question on approximating logarithms!) it may be worth noting that the alternating series for $\ln 2$ is a fine candidate for the Euler transformation of alternating series, and that gives convergence much closer to the continued fraction representation. – Steven Stadnicki May 04 '22 at 15:21
The theoretically fastest way appears to be to use Newton iteration to reduce the problem to computing the logarithm function, and then to compute the logarithm using an algorithm based on the arithmetic-geometric mean. See this Wikipedia page: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
In practice the Taylor series should work fine, given a good implementation. The webpage http://numbers.computation.free.fr/Constants/constants.html has an example implementation that uses the Taylor series to compute $e$. They claim it took 0.02 seconds to compute $e$ to a thousand decimals on a 450 MHz Pentium II processor.
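As a rough sketch of the Newton route (my own illustration; it leans on mpmath, whose arbitrary-precision log uses AGM-based methods at high precision): solve $\ln y = x$ for $y$ with the iteration $y_{n+1} = y_n(1 + x - \ln y_n)$, roughly doubling the working precision at each step since the convergence is quadratic.

```python
from mpmath import mp, mpf

def exp_newton(x, digits=1000):
    """Compute e^x to ~digits decimals by Newton iteration on
    f(y) = log(y) - x, reducing exp to log."""
    prec = 30
    mp.dps = prec
    y = mp.exp(mpf(x))                    # cheap low-precision seed
    while prec < digits:
        prec = min(2 * prec, digits + 10) # quadratic convergence lets the
        mp.dps = prec                     # precision double each step
        y = y * (1 + mpf(x) - mp.log(y))  # Newton step
    mp.dps = digits
    return +y                             # round to requested precision

print(exp_newton("1", 50))                # 2.7182818284590452...
```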

- 2,239
A very reasonable way to do this is to use
$e^x = 1 + x/1! + x^2/2! + x^3/3! + \cdots$
as it converges really fast.
A "trick" I read in some library for calculating $e^1$ is to calculate the series for $x=1/2^k$ where $k$ is the root of the amount of required bits, then you have to square the result k times to get your final number, it avoids that big/small numbers make the result too inaccurate.
Edit: this is also the exact same topic as Iterative refinement algorithm for computing exp(x) with arbitrary precision
Maybe it should be merged / closed?
-
Well, this is a slow method; I know other methods which are faster than this... I just want some other method – Pushpak Dagade Jan 21 '11 at 17:51
-
I think the other question is significantly different as it asks for methods to increase the precision of computed values without starting over "from scratch". Here the emphasis seems to be on the speed with which a given accuracy can be achieved. – hardmath Jan 21 '11 at 18:20
-
The fastest way is to use Newton's method for the square root, $x_{n+1}=\tfrac{1}{2}(x_n + a/x_n)$, applied to $e$ itself: this gives you $$\exp(0.1_2),$$ where the subscript $2$ denotes a binary number.
Then you calculate the square root of the square root, giving you $$\exp(0.01_2)$$
and so on. This will allow you to create tables of logarithms and exponents. It is the fastest way to do the calculation by hand, and it was used historically.
Padé approximations are good, but they do not converge as fast as Newton's method, which effectively doubles the number of accurate digits in a single iteration.
Then you multiply together the required "roots", effectively stacking up any binary number you need to exponentiate, and you can also reuse your roots to build an "acceleration table", where you keep exponents for often-used numbers like $\exp(0.101_2)$.
Note that you can shift between different exponentiation digits by "rooting" your number, so if you take a square root you obtain $$\sqrt{\exp(0.101_2)}=\exp(0.0101_2)$$
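A sketch of the table-driven evaluation this describes (my own, using Python's decimal module; `e_const`, the value of $e$ to full precision, is assumed supplied, e.g. from a series computation):

```python
from decimal import Decimal, getcontext

def exp_from_roots(x, digits, e_const):
    """Evaluate e^x for x in [0, 1) from a table of repeated square
    roots of e: roots[i] = e^(2^-(i+1)).  One multiplication per set
    bit in the binary expansion of x.  Guard digits absorb rounding;
    the error grows with the number of roots taken."""
    getcontext().prec = digits + 10
    roots, r = [], Decimal(e_const)
    for _ in range(int(digits * 3.33) + 1):   # one root per binary digit
        r = r.sqrt()                          # Newton-based square root
        roots.append(r)
    result, frac = Decimal(1), Decimal(x)
    for root in roots:
        frac *= 2
        if frac >= 1:                         # binary digit of x is 1
            result *= root
            frac -= 1
    return +result
```

In a real calculator the `roots` table would be computed once and reused across calls; it is rebuilt each time here only for simplicity.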
It is a very cost-efficient method, but it requires the calculator to keep tables of square roots and "often used" exponents.
Especially if you are allowed to have a large enough exponentiation table with separate roots of high precision, you can achieve very strong results with this. Modern computers don't do it because they use some variation of the Taylor series, "saving" memory on each call. They can also use generalized continued fractions like this: https://en.wikipedia.org/wiki/Exponential_function#Continued_fractions_for_ex, but in my experience GCFs are more often a compromise between the "table" approach and the "series" approach. Something in between; not super fast.
PS. You can further accelerate it with a good search algorithm which finds patterns in your binary number and reuses the table of exponents for those patterns. Typical patterns are $11_2$, $101_2$, $111_2$, $1111_2$, $11011_2$ and so on. You can also accelerate it by using stronger approximations as the starting iterate for your Newton root-finder, something based on GCFs like $$\sqrt{z} = \sqrt{x^2+y} = x+\cfrac{2x \cdot y} {2(2z - y)-y-\cfrac{y^2} {2(2z - y)-\cfrac{y^2} {2(2z - y)-\ddots}}}$$
This is very efficient computation-wise if you prepare tables of squares and reuse those every time for your initial GCF guess.
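A quick numeric check of that contraction (my own sketch; backward recursion at a fixed depth, seeding from a nearby square $x^2 \le z$):

```python
from fractions import Fraction

def sqrt_gcf(x, y, depth=20):
    """sqrt(z) for z = x*x + y via the contracted GCF above:
    sqrt(z) = x + 2xy / (2(2z-y) - y - y^2/(2(2z-y) - y^2/(...)))."""
    z = x * x + y
    c = Fraction(2 * (2 * z - y))
    tail = c
    for _ in range(depth):
        tail = c - Fraction(y * y) / tail
    return x + Fraction(2 * x * y) / (tail - y)

print(float(sqrt_gcf(3, 1)))   # 3.1622776601683795 ~ sqrt(10)
```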
PPS. Note that the binary search algorithm could be enhanced to look not only for additive patterns, but also for negating patterns. You can rewrite a binary sequence like $0.1011110_2 = 1_2 - 0.0100010_2$, reducing the number of roots required from 5 to 2.

- 253
I don't know that this is necessarily the fastest way, but it's fun. If you want to know how it works, Google "spigot algorithm".
```java
package com.akshor.pjt33.math;

import com.akshor.pjt33.math.symbalg.Rational;
import java.math.BigInteger;

public abstract class Spigot {
    /** The relevant components of the prefix matrix. */
    Rational h00, h01 = Rational.ZERO, h11 = Rational.ONE;
    BigInteger i = BigInteger.ZERO;
    boolean verify = true;
    final int base;
    private StringBuilder sb = new StringBuilder();

    public Spigot(BigInteger multiplier) {
        this(multiplier, 10);
    }

    public Spigot(BigInteger multiplier, int base) {
        h00 = new Rational(multiplier, BigInteger.ONE);
        this.base = base;
    }

    /**
     * Advances this spigot algorithm, producing as many digits as each
     * consumption permits.
     * @param beta An upper bound (positive) on the magnitude of all
     *             subsequent term ratios.
     * @return The number of digits output.
     */
    public int advance(Rational beta) {
        // (The two checks below are reconstructed; this stretch of the
        // original answer was garbled in transcription.)
        if (verify && beta.signum() <= 0)
            throw new IllegalArgumentException("beta must be positive");
        Rational Et_t = termRatio();
        if (verify && Et_t.abs().compareTo(beta) > 0)
            throw new IllegalArgumentException(Et_t + " exceeds bound " + beta);
        h00 = h00.mul(Et_t);
        i = i.add(BigInteger.ONE);
        // Produce (if we can: without a converging bound we can't).
        int rv = 0;
        if (beta.compareTo(Rational.ONE) < 0) {
            // The digit-extraction loop was lost in transcription: it
            // emitted base-`base` digits to sb and counted them in rv.
            // ...
        }
        return rv;
    }

    public int output(int numdigits, Rational beta) {
        int i = 0;
        for (; numdigits > 0; i++) numdigits -= advance(beta);
        return i;
    }

    protected abstract Rational termRatio();

    public String toString() {
        return sb.toString();
    }

    /**
     * A spigot implementation which evaluates any hypergeometric sum for
     * which you can provide a term bound.
     * E.g. PI = 3 F(1/2, 1, 1, 8/5 ; 3/5, 4/3, 5/3 | 2/27) with term limit 2/27.
     */
    public static class Hypergeometric extends Spigot {
        private final Rational[] a, b;
        protected final Rational z;

        public Hypergeometric(Rational[] a, Rational[] b, Rational z) {
            this(a, b, z, BigInteger.ONE);
        }

        public Hypergeometric(Rational[] a, Rational[] b, Rational z, BigInteger mul) {
            super(mul);
            this.a = a.clone();
            this.b = new Rational[b.length + 1];
            this.b[0] = new Rational(1, 1);
            System.arraycopy(b, 0, this.b, 1, b.length);
            this.z = z;
        }

        protected Rational termRatio() {
            Rational rv = z;
            for (Rational num : a) rv = rv.mul(num.add(i));
            for (Rational num : b) rv = rv.div(num.add(i));
            return rv;
        }
    }

    public static class Exp extends Hypergeometric {
        public Exp(Rational z) {
            super(new Rational[0], new Rational[0], z);
        }

        // We rely on the fact that the term ratio for Exp always decreases.
        public int output(int numdigits) {
            int i = 0;
            for (; numdigits > 0; i++) numdigits -= advance(termRatio().abs());
            return i;
        }
    }

    public static void main(String[] args) {
        Exp e = new Exp(Rational.ONE);
        int count = e.output(100);
        System.out.println(e);
        System.out.println("Iterations required: " + (count + 1));

        Exp e2 = new Exp(new Rational(2, 1));
        count = e2.output(100);
        System.out.println(e2);
        System.out.println("Iterations required: " + (count + 1));

        Exp em2 = new Exp(new Rational(-2, 1));
        count = em2.output(100);
        System.out.println(em2);
        System.out.println("Iterations required: " + (count + 1));
    }
}
```
Implementing the Rational class is left as an exercise. And yes, this code is an argument for limited operator overloading.

- 13,425
-
The spigot algorithm for $e$ is interesting, but it does not allow one to compute $e^x$ without considerable additional work. – hardmath Jan 21 '11 at 19:34
-
Oops, I haven't actually included the implementation of e^x. Most of the additional work is done, though. – Peter Taylor Jan 21 '11 at 20:08
My webMathematica site hardmath is referring to is indeed down at the moment, as I have problems getting webMathematica to work with the latest Tomcat.
A few comments:

- The OP's question was about the "fastest way", and that should be made a bit more precise. Usually people mean fast convergence rates by that, but "fast" in its original sense really means running time. Contractions of continued fractions are a good example of dramatically increasing the convergence rate at the expense of more and more complicated terms: you need far fewer terms to attain a given precision, but every term costs more to compute than the simpler ones, of which you need more. Sometimes many cheap steps are faster, and sometimes a few expensive terms are faster. It depends on how the computation is done: high-level language? A VM? Machine code? Assembler?
- hardmath, regarding your third paragraph: every power series, not just a Taylor series, can be converted into an EQUIVALENT continued fraction expansion. The converse is not generally true, especially for delta fractions. The continued fraction expansion is (almost?) always more stable than a power series solution, as the terms don't grow as fast. In a power series you are adding many terms whose numerators and denominators grow at very fast rates, incurring a lot of numerical problems. The c/f expansions are (usually) much more stable; in fact, the parabola theorem states that Stieltjes, C-, and several other c/f expansions are even self-correcting if the backward recursion is used. And many power series are outright trash if the function to be approximated has poles, which isn't the case here with exp(x), so I won't go into that. (Just consider that the p/s for tan(x) involves Bernoulli numbers, converges slowly, and converges only in a small interval between poles, whereas the c/f expansion converges rapidly with very simple terms in the whole complex plane EXCEPT at the poles -- which form a null set -- and is very stable with small terms in the convergents.)
- There are also a few papers on the web that describe how sin, cos, exp, and log are computed by the chip manufacturers. The fastest way to compute these functions is to burn them onto a chip and let the semiconductors do it for you, rather than writing software. The math they use for this is truly mind-boggling: a combination of numerical math, on-chip implementation, and sometimes GPU-based parallelization that is beyond fascinating.
Feel free to contact me through my website, www.lauschkeconsulting.com/contact.html, if you need more information. I can provide you with my contraction formulas for the expansions hardmath is referring to (in the Cuyt et al book). The convergence rates are enormous, but the terms also get pretty complicated very fast, which makes the numerical computation a bit slower again, as more complicated terms have to be evaluated than if you didn't contract the c/f.
Added on 20110201: My webMathematica site is live again, I solved the tomcat problem.