8

Using $$ \sinh x = x + \tfrac{x^3}{3!}+ \tfrac{x^5}{5!} + \tfrac{x^7}{7!}+ \cdots$$ as the standard power series. This series takes a very long time to compute. Can it be written without raising $x$ to large powers and dividing by huge factorials? The example functions in *Is there a way to get trig functions without a calculator?* using the "Tailored Taylor" series representation for sine and cosine are very fast and give the same answers. I want to use this within my calculator program.

Thank you very much.

MPW
  • 43,638

3 Answers

13

Note that $$\sinh x=\frac{e^x-e^{-x}}2$$ So all you need is a fast way to calculate the exponential $e^x$. You could use the regular Taylor series, but that's slow. Instead, use the definition $$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$ For calculation purposes, take $n$ to be a power of $2$, $n=2^k$. First calculate $y=1+\frac x{2^k}$, then repeat the $y=y\cdot y$ operation $k$ times. I got the idea for calculating the exponential quickly from this article.
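A minimal sketch of this in Python (the function names, the default $k$, and the use of $1/e^x$ for $e^{-x}$ are my own choices, not from the answer):

```python
def exp_approx(x, k=20):
    """Approximate e^x as (1 + x/2**k)**(2**k):
    form y = 1 + x/2**k once, then square y k times."""
    y = 1.0 + x / 2.0**k
    for _ in range(k):
        y = y * y
    return y

def sinh_approx(x, k=20):
    """sinh x = (e^x - e^(-x)) / 2, reusing e^x to get e^(-x) as 1/e^x."""
    e = exp_approx(x, k)
    return (e - 1.0 / e) / 2.0
```

Note that in fixed-width floating point $k$ cannot be made arbitrarily large: once $x/2^k$ falls near the rounding granularity of $1$, the initial addition loses the very digits that the squarings then amplify, so a moderate $k$ works best.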

Andrei
  • 37,370
5

Let me consider the problem from a computing point of view, assuming that you do not know how to compute $e^x$.

The infinite series is $$\sinh(x)=\sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}$$ If you compute each term independently of the others, it is certainly expensive, since you have to compute each power of $x$ as well as each factorial.

But suppose that you write instead $$\sinh(x)=\sum_{n=0}^\infty T_n \qquad \text{where} \qquad T_n=\frac{x^{2n+1}}{(2n+1)!}\qquad \text{and} \qquad T_0=x$$ then $$T_{n+1}= \frac {t\,\, T_n}{m(m+1)}\qquad \text{where} \qquad t=x^2\qquad \text{and} \qquad m=2n+2$$ This is much less expensive, both in the kind of basic operations required and in their number.
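A minimal sketch of this recurrence in Python (the function name, the tolerance, and the relative-error stopping test are my additions; the comments below this answer discuss how one might instead fix the number of terms in advance):

```python
def sinh_series(x, tol=1e-15):
    """Sum sinh's series via T_{n+1} = x^2 * T_n / (m*(m+1)), m = 2n+2,
    so each new term costs one multiplication and one division."""
    t = x * x        # t = x^2, computed once
    term = x         # T_0 = x
    total = x
    m = 2
    while abs(term) > tol * abs(total):  # simple relative-error stop
        term = term * t / (m * (m + 1))
        total += term
        m += 2
    return total
```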

You could use the same trick for most functions expressed as infinite series.

  • This approach also has the advantage that you know when to stop (as a function of $x$). One can estimate an upper limit for the error. +1 – Andrei Mar 06 '19 at 19:40
  • @Andrei. Could you elaborate on the upper limit of the error? In fact, I wanted to show that we can skip all IF tests, "knowing" in advance the number of terms to be added for an accuracy of $10^{-k}$. Thanks. – Claude Leibovici Mar 07 '19 at 02:25
  • see something like https://math.dartmouth.edu/~m8w19/ErrorEstimates.pdf. You calculate the difference between $\sinh x$ and the order $n$ expansion. The difference will be less than $\sinh \xi \frac{x^{2n+2}}{(2n+2)!}$, where $0\le \xi\le x$. This is a generalization of mean value theorem. – Andrei Mar 07 '19 at 04:13
  • @Andrei. This one, I know! The problem is precisely $\sinh(\xi)$, which makes us enter an infinite loop. Any other idea? Thanks and cheers. – Claude Leibovici Mar 07 '19 at 04:16
  • Use the current approximation for an estimate. For $x>0$, $\sinh$ is an increasing function, so $\sinh\xi<\sinh x\approx\sum_n T_n$ – Andrei Mar 07 '19 at 04:55
  • @Andrei. I agree with you, but I don't think that we can define in advance the number of terms, since we need to know $\sum_n T_n$ to compute the difference between $\sinh(x)$ and the order-$n$ expansion. What I would have liked is, say, $x$ being given as well as $k$, then $n \approx \text{something}$. If you have any idea, it would be great! Thanks again. – Claude Leibovici Mar 07 '19 at 05:29
  • @Andrei: It's strange that you say that Claude's approach tells you when to stop, when technically my answer gives much more detail on that. You can't deal only with the error due to the Taylor remainder; you also have to deal with the error due to the finite precision of arithmetic operations. Moreover, Claude's approach would be very inefficient, as should be clear from my answer, since it essentially makes a bad choice for $c$. – user21820 Mar 07 '19 at 16:31
2

A much better option than Andrei's answer is to use the identity $\exp(x) = \exp(x·2^{-k})^{2^k}$ for a judicious choice of $k$, and use the Taylor series to approximate $\exp(x·2^{-k})$ to sufficient precision.

Suppose we want to compute $\exp(x)$ using the above identity to $p$-bit precision, meaning a relative error of $2^{-p}$. We shall perform each arithmetic operation with $q$-bit precision, where $q=p+k+m$, and $k,m$ are positive integers that we will determine later. To compute $\exp(x·2^{-k})^{2^k}$ with relative error $2^{-p}$ we need to compute $\exp(x·2^{-k})$ with relative error at most about $r_0 := 2^{-p-k-1}$, because on each squaring the error $r_n$ changes to about $r_{n+1} ≤ 2r_n+2^{-q}$, giving $r_k+2^{-q} ≤ (r_0+2^{-q})·2^k$ and hence $r_k ≤ 2^{-p}$.

Therefore we need to use enough terms of the Taylor expansion for $\exp(x·2^{-k})$ so that our error is at most $|\exp(x·2^{-k})|·2^{-p-k-1}$. If $k = \max(0,\log_2 |x|) + c$ for some positive integer $c$, then $|x·2^{-k}|<2^{-c}$ and so $|\exp(x·2^{-k})| > \exp(-1/2) > 1/2$, and thus it suffices to have our error less than $2^{-p-k-1}/2$. We allocate this error margin to two halves, one half for the Taylor remainder and one half for error in our arithmetic operations. Letting $z := x·2^{-k}$, we have $\sum_{i=n}^∞ |z^i/i!| ≤ |z|^n/n! · \sum_{i=0}^∞ (|z|/(n+1))^i ≤ 2·|z|^n/n! ≤ 2^{-c·n+1}$ for any $n ≥ 1$, so we only need to compute $\sum_{i=0}^{n-1} z^i/i!$ where $n ≥ 1$ and $2^{-c·n+1} < 2^{-p-k-1}/4$, both of which hold if $c·n ≥ p+k+5$. Each term requires one addition, one multiplication and one division, via the trivial identity $z^{n+1}/(n+1)! = z^n/n! · z/(n+1)$, and so if we start with $z$ at $q$-bit precision then the cumulative relative error is at most about $2n·2^{-q}$ since each "$· z/(n+1)$" introduces an extra error factor of about $(1+2^{-q})^2$. Since we want $2n·2^{-q} < 2^{-p-k-1}/4$, equivalently $n < 2^{m-4}$, it is enough to have $m = \lfloor \log_2 n \rfloor + 5$.

Even if we use schoolbook multiplication, namely that multiplying at $q$-bit precision takes $O(q^2)$ time, the above method yields a relatively efficient algorithm by choosing $k$ appropriately. The Taylor phase takes $O(n·q^2)$ time, and the exponentiation phase takes $O(k·q^2)$ time. If we choose $c = \sqrt{p}$ we can choose $n = \sqrt{p}+k$, which will give $k,n ∈ O( \sqrt{p} + \log |x| )$ and $q ∈ O( p + \log |x| )$. Thus for $x$ within a bounded domain, the whole algorithm takes $O(p^{2.5})$ time.
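As a rough illustration, here is a sketch of the whole scheme in Python, using the `decimal` module to supply the $q$-bit working precision (the function name, the bits-to-digits conversion, and the exact constants are my own choices, illustrative rather than tuned):

```python
from decimal import Decimal, getcontext
import math

def exp_hp(x, p):
    """Sketch: approximate exp(x) to about p bits of relative precision
    by Taylor-expanding exp(x * 2**-k) and then squaring k times."""
    c = max(2, math.isqrt(p))                      # c ~ sqrt(p)
    k = (max(0, math.ceil(math.log2(abs(x)))) if x else 0) + c
    n = -(-(p + k + 5) // c)                       # ceiling: ensures c*n >= p+k+5
    m = n.bit_length() + 4                         # m = floor(log2 n) + 5
    q = p + k + m                                  # working precision in bits
    getcontext().prec = math.ceil(q * math.log10(2)) + 2   # bits -> decimal digits

    z = Decimal(x) / (Decimal(2) ** k)             # reduced argument, |z| < 2^-c
    term, total = Decimal(1), Decimal(1)
    for i in range(1, n):                          # sum_{i=0}^{n-1} z^i / i!
        term = term * z / i                        # z^i/i! = z^(i-1)/(i-1)! * z/i
        total += term
    for _ in range(k):                             # undo the argument reduction
        total = total * total
    return total
```

With these choices, for instance, `exp_hp(1, 200)` agrees with $e$ to well over $50$ decimal digits, and $\sinh$ then follows from $(e^x - e^{-x})/2$ at the same precision.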


The above is based on purely elementary techniques. A more careful analysis using the same techniques yields an even better algorithm (see Efficient Multiple-Precision Evaluation of Elementary Functions).

There are ways to do much, much better, basically coming down to using an AGM (arithmetic-geometric mean) iteration to compute $\ln$, and then using Newton-Raphson inversion to obtain $\exp$ (see The Arithmetic-Geometric Mean and Fast Computation of Elementary Functions).

user21820
  • 57,693
  • 9
  • 98
  • 256