There are essentially three questions to consider:
- For which values of $x$ is the subtraction $d = e^x - e^{-x}$ subject to subtractive cancellation?
- How do we approximate $d$ with a small relative error?
- How accurately can we evaluate the approximation?
We begin by considering the more general problem of subtracting two real numbers, i.e.
$d = a - b$. This problem is ill conditioned when
$a \approx b$. In particular, if
$\hat{a}$ and
$\hat{b}$ are approximations of
$a$ and
$b$ respectively, i.e.
$$ \hat{a} = a(1+\Theta_a), \quad \hat{b} = b(1 + \Theta_b)$$
and
$\hat{d} = \hat{a} - \hat{b}$, then
$$ \frac{d - \hat{d}}{d} = \frac{b\Theta_b - a \Theta_a}{a-b}.$$
It follows that the relative error is bounded by
$$ \left| \frac{d - \hat{d}}{d} \right| \leq \frac{|a|+|b|}{|a-b|}\max\{|\Theta_a|, |\Theta_b| \}.$$
If
$a \approx b$, then the right hand side can be large and there is no guarantee that
$d$ is computed with a small relative error. In general,
$d$ will be computed with a very large relative error. This is the phenomenon known as subtractive cancellation or catastrophic cancellation.
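A small numerical experiment illustrates the effect. The sketch below is in Python (an illustration, not part of the analysis above); the standard `decimal` module supplies a high-precision reference value.

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of guard digits for the reference value

def naive_d(x: float) -> float:
    # Straightforward evaluation: near x = 0 both terms are close to 1,
    # so the leading digits cancel in the subtraction.
    return math.exp(x) - math.exp(-x)

def reference_d(x: float) -> Decimal:
    # The same quantity computed with 50 significant digits.
    xd = Decimal(x)
    return xd.exp() - (-xd).exp()

x = 1e-9
approx = naive_d(x)
rel_err = abs((Decimal(approx) - reference_d(x)) / reference_d(x))
# The relative error is many orders of magnitude larger than the unit
# roundoff u = 2^-53, even though both exponentials are computed to
# nearly full precision.
print(f"relative error: {float(rel_err):.2e}")
```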
On the other hand, if $|a| \ge 2|b|$ or if $|b| \ge 2|a|$, then
$$ \frac{|a|+|b|}{|a-b|} \leq 3.$$
This follows from the reverse triangle inequality: if, say, $|a| \ge 2|b|$, then $|a-b| \ge |a| - |b| \ge |a|/2$, while $|a| + |b| \le \frac{3}{2}|a|$. The bound is sharp, as $a = 2$, $b = 1$ shows. It follows that any subtraction $d = a - b$ amplifies the relative error by at most a factor of three if one operand is at least twice as large as the other.
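The bound can be checked numerically. The sketch below (Python, used here purely for illustration) samples pairs satisfying $|a| \ge 2|b|$ and records the worst amplification factor, which approaches three from below as $b$ approaches $a/2$.

```python
import random

random.seed(0)  # reproducible sampling

def amplification(a: float, b: float) -> float:
    # The factor multiplying max{|Theta_a|, |Theta_b|} in the error bound.
    return (abs(a) + abs(b)) / abs(a - b)

worst = 0.0
for _ in range(100_000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-1.0, 1.0) * abs(a) / 2.0  # enforce |b| <= |a|/2
    if a != b:
        worst = max(worst, amplification(a, b))
# The observed worst case approaches 3 from below.
print(f"worst amplification observed: {worst:.5f}")
```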
In particular, if $a = e^x$ and $b = e^{-x}$, then the subtraction is safe when $e^x > 2e^{-x}$, or equivalently, when $x > \log{2}/2$. Similarly, the subtraction is safe when $x < -\log{2}/2$.
Now, let $\alpha = \frac{\log 2}{2} \approx 0.3466$ (four significant figures) and let $I = [-\alpha, \alpha]$. For $x \in I$, subtractive cancellation is an issue, and the severity of the problem increases as we approach $0$. For $x \not\in I$, cancellation is a non-issue and we can compute $d$ using the formula
$$ y \gets e^{x}, \quad d \gets y - \frac{1}{y}.$$
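Expressed in code (a Python sketch, with `math.exp` standing in for the library exponential), the branch for $x \not\in I$ reads:

```python
import math

ALPHA = math.log(2.0) / 2.0  # the threshold log(2)/2

def d_outside(x: float) -> float:
    # Safe branch for |x| > ALPHA: one exponential and one division;
    # the subtraction cannot cancel badly here.
    y = math.exp(x)
    return y - 1.0 / y
```

For $|x| > \alpha$ this agrees with $2\sinh(x)$ to within a few units of roundoff.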
Computing the exponential function accurately is a problem for another day. Here I will assume that a library implementation is available.
Next we consider the second problem of approximating $d$ accurately for $x \in I$. By exploiting the symmetry $d(-x) = -d(x)$, it suffices to consider $x \in (0,\alpha]$. I will use Taylor series. The main problem is to determine the smallest number of terms that yields a given relative error.
We have the familiar series
$$ e^x = 1 + x + \frac{1}{2} x^2 + \frac{1}{3!} x^3 + \dotsc = \sum_{j=0}^\infty \frac{x^j}{j!} , $$
which is valid for all $x \in \mathbb{R}$. It follows that
$$ e^{-x} = 1 - x + \frac{1}{2} x^2 - \frac{1}{3!} x^3 + \dotsc = \sum_{j=0}^\infty \frac{(-1)^j x^j}{j!}. $$
Moreover, it is clear that
$$ d(x) = e^x - e^{-x} = 2(x + \frac{1}{3!} x^3 + \dotsc) = 2 \sum_{j=0}^\infty \frac{x^{2j+1}}{(2j+1)!}. $$
Now, let $p_m$ denote the $m$th order Taylor polynomial of $d$ about the point $x_0 = 0$. Let $x \in (0,\alpha]$. Then by Taylor's formula, there exists $\xi \in [0,x]$ such that
$$ d(x) - p_m(x) = \frac{d^{(m+1)}(\xi)}{(m+1)!} x^{m+1}.$$
When designing library software that will be executed using floating point arithmetic, we care little about the absolute error; the primary goal is to control the relative error. In this case, there is a small advantage to choosing $m+1 = 2n$, because $d^{(2n)}(x) = d(x)$ and $d$ is monotone increasing, which makes for an easy estimate of the relative error. Since $\xi \in [0,x]$, we have $d(\xi) \le d(x)$, and we find that
$$ \left| \frac{d(x) - p_m(x)}{d(x)} \right| \leq \frac{d(\xi)}{d(x)} \frac{1}{(m+1)!} x^{m+1} \leq \frac{\alpha^{2n}}{(2n)!}.$$
A brute force approach shows that $n = 7$ is the smallest integer such that the right hand side is smaller than the unit roundoff in double precision, $u = 2^{-53}$. It follows that we can approximate $d(x)$ using $p_{13}(x)$ for all $x \in (0,\alpha]$ with a relative error which is less than $u$.
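The brute force search is short (a Python sketch; the exact integer factorials come from the standard library):

```python
from math import factorial, log

ALPHA = log(2.0) / 2.0
U = 2.0 ** -53  # unit roundoff in double precision

def smallest_n(alpha: float = ALPHA, u: float = U) -> int:
    # Smallest n such that alpha^(2n)/(2n)! < u, found by direct search.
    n = 1
    while alpha ** (2 * n) / factorial(2 * n) >= u:
        n += 1
    return n

n = smallest_n()
print(n, "-> Taylor polynomial of degree m = 2n - 1 =", 2 * n - 1)  # n = 7, m = 13
```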
Finally, we consider the problem of actually computing
$p_m(x)$ for
$m=13$. Here Horner's method is useful. We have
$$ p_{13}(x) = 2x\left(1 + \frac{1}{3!} x^2 + \frac{1}{5!} x^4 + \frac{1}{7!} x^6 + \frac{1}{9!} x^8 + \frac{1}{11!} x^{10} + \frac{1}{13!} x^{12}\right),$$
which suggests that we should evaluate
$p_{13}(x)$ using the formula
$$ p_{13}(x) = 2 x q(x^2),$$
where
$q$ is a polynomial of degree
$6$. Horner's method contributes a relative backward error of about $12u$. There is another
$u$ from the squaring of
$x$, another
$u$ from the last multiplication, and no more than
$13u$ from the (brute force) construction of the reciprocal factorials. All in all,
$p_{13}$ can be evaluated with a relative backward error which is less than
$27u$.
Moreover, since the coefficients are all positive, this estimate carries over to the relative forward error, i.e. $p_{13}(x)$ can be evaluated with a relative error of at most $27u$ for every floating point number $x$ in $(0,\alpha]$.
If we want to be fanatical about it, we should also consider the conditioning of $d$ itself. Fortunately, $d$ is exceedingly well conditioned on $I$, with a condition number less than $2$. All in all, we are looking at an overall relative error of about $30u$ using elementary methods.
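Putting the pieces together, the whole scheme might be sketched as follows (Python; the function name `d` and the coefficient table are my own choices for the illustration, and since $d(x) = 2\sinh(x)$, in practice one would simply call a library `sinh`):

```python
import math

ALPHA = math.log(2.0) / 2.0

# Reciprocal factorials 1/(2j+1)! for j = 0, ..., 6: the coefficients of q.
_COEFFS = [1.0 / math.factorial(2 * j + 1) for j in range(7)]

def d(x: float) -> float:
    # Evaluate d(x) = exp(x) - exp(-x) with a small relative error:
    # the Taylor polynomial p_13 via Horner's method on [-ALPHA, ALPHA],
    # the direct formula outside, where the subtraction is safe.
    if abs(x) <= ALPHA:
        z = x * x
        q = _COEFFS[6]          # Horner evaluation of q(z)
        for c in reversed(_COEFFS[:6]):
            q = q * z + c
        return 2.0 * x * q      # p_13(x) = 2 x q(x^2)
    y = math.exp(x)
    return y - 1.0 / y
```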