Let's write:
$$ e^x = \frac{1+g(x)}{1-g(x)} $$
We can then solve for $g(x)$ as follows:
$$ e^x(1 - g(x)) = 1 + g(x) $$
$$ e^x - 1 = (e^x + 1)g(x) $$
$$ g(x) = \frac{e^x - 1}{e^x + 1} = \tanh(\frac{x}{2}) $$
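The identity is easy to check numerically (a minimal sketch; `exp_via_tanh` is just an illustrative name):

```python
import math

def exp_via_tanh(x):
    """Recover e^x from g(x) = tanh(x/2) via e^x = (1 + g)/(1 - g)."""
    g = math.tanh(x / 2.0)
    return (1.0 + g) / (1.0 - g)

# Agrees with math.exp to roundoff at modest arguments
for x in (-1.0, 0.1, 2.0):
    assert abs(exp_via_tanh(x) - math.exp(x)) < 1e-12 * math.exp(x)
```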
Now $g(x)$ is an odd function, analytic in a neighborhood of zero, and so $g(x) = x f(x^2)$ where $f(x)$ has a Taylor series expansion:
$$ g(x) = \sum_{n=1}^\infty \frac{2(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!} $$
$$ f(x) = \sum_{n=1}^\infty \frac{2(2^{2n}-1)B_{2n}x^{n-1}}{(2n)!} $$
where the $B_n$ are the Bernoulli numbers.
Since the Taylor series of $g(x)$ contains only odd powers of $x$, whereas the Taylor series of $e^x$ has nonzero coefficients for all powers of $x$, evaluating the series for $g(x)$ lets us "economize": roughly half as many terms are needed to reach the same degree of approximation locally at $x=0$.
There are two caveats. A minor point is the cost of the extra divide (and add/subtract) to get $e^x$ once $g(x)$ has been (approximately) computed. Of greater importance is the finite radius of convergence of the Taylor series for $g(x)$ versus the infinite radius for $e^x$. The above series for $g(x)$ converges when $|x| \lt \pi$, so if we combine the odd series evaluation for $g(x)$ with the range reduction described in answering the previous Question, any precision desired may be obtained.
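As an illustration, the truncated odd series can be evaluated in Horner form on $x^2$. The coefficients below are the first five Taylor coefficients $2(2^{2n}-1)B_{2n}/(2n)!$ of $\tanh(x/2)$ written out explicitly, and the halve-and-square reduction shown is just one simple choice, not necessarily the reduction of the previous Question:

```python
import math

# First five Taylor coefficients of g(x) = tanh(x/2), i.e.
# 2(2^{2n}-1) B_{2n} / (2n)! for n = 1..5, written as exact ratios.
G_COEFFS = [1/2, -1/24, 1/240, -17/40320, 31/725760]

def exp_via_series(x, terms=5):
    """Approximate e^x near 0 from the odd series g(x) = x * f(x^2)."""
    x2 = x * x
    f = 0.0
    for c in reversed(G_COEFFS[:terms]):   # Horner evaluation of f(x^2)
        f = f * x2 + c
    g = x * f
    return (1.0 + g) / (1.0 - g)

def exp_reduced(x, terms=5):
    """One possible range reduction: halve x until small, then square back."""
    k = 0
    while abs(x) > 0.5:
        x /= 2.0
        k += 1
    y = exp_via_series(x, terms)
    for _ in range(k):
        y *= y
    return y
```

With five odd terms the truncated series is accurate to roughly the $x^{11}$ term, so the reduced argument controls the attainable precision.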
Evaluation of truncated Taylor series (polynomials) is of course attractive for the small amount of code required. However, for future reference, consider the generalized continued fraction expansion of $e^x$, whose partial numerators below the top one depend on $x^2$ rather than $x$. It turns out there is a forward recurrence relation for evaluating generalized continued fractions that allows us to incrementally include more nested terms until a desired accuracy is achieved.
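A minimal sketch of that approach, using the standard expansion $e^x = 1 + \cfrac{2x}{2 - x + \cfrac{x^2}{6 + \cfrac{x^2}{10 + \cfrac{x^2}{14 + \ddots}}}}$ and the three-term forward recurrence $h_n = b_n h_{n-1} + a_n h_{n-2}$, $k_n = b_n k_{n-1} + a_n k_{n-2}$ for the convergents $h_n/k_n$ (the function name is illustrative):

```python
import math

def exp_cf(x, tol=1e-15, max_terms=60):
    """e^x via its generalized continued fraction
       1 + 2x/(2 - x + x^2/(6 + x^2/(10 + x^2/(14 + ...)))),
       evaluated with the forward recurrence for convergents h_n/k_n."""
    x2 = x * x
    h_prev, h = 1.0, 1.0        # h_{-1}, h_0  (b_0 = 1)
    k_prev, k = 0.0, 1.0        # k_{-1}, k_0
    a, b = 2.0 * x, 2.0 - x     # first partial numerator/denominator
    cur = prev = h / k
    for n in range(1, max_terms):
        h, h_prev = b * h + a * h_prev, h
        k, k_prev = b * k + a * k_prev, k
        if k != 0.0:            # an intermediate k_n can vanish (e.g. x = 2)
            cur = h / k
            if abs(cur - prev) <= tol * abs(cur):
                return cur
            prev = cur
        a, b = x2, 4.0 * n + 2.0   # next terms: a = x^2, b_{n+1} = 4(n+1) - 2
    return cur
```

Each loop iteration adds one more nested term, so the stopping test realizes the "incremental accuracy" idea directly.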
Added:
The Comments brought out the issue of computing the even-index Bernoulli numbers $B_{2n}$ for the coefficients of the Taylor series expansion, and the fact that they are more difficult to produce than the reciprocal factorials of the usual exponential power series. So for an arbitrary precision routine, in which all the coefficients must be obtained "on the fly", it might be objected that this is "one step forward, two steps back".
The basic recursion that produces $B_n$ in terms of all the preceding entries:
$$ B_n = -n!\sum_{k=0}^{n-1} \frac{B_k}{k!\,(n-k+1)!} $$
needs $O(n)$ arithmetic steps to get $B_n$ from $B_0,B_1,\ldots,B_{n-1}$, so $O(n^2)$ steps to get all of $B_0,\ldots,B_n$ (although we are helped by the fact that the odd-indexed $B_k$ vanish for $k \gt 1$). In terms of bit-operations this winds up being of $O(n^{3+o(1)})$ complexity, so effectively a "cubic" order algorithm.
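The recursion (which follows from multiplying the series for $x/(e^x-1)$ by that of $(e^x-1)/x$) can be carried out with exact rational arithmetic; a minimal sketch, with `bernoulli_list` an illustrative name:

```python
from fractions import Fraction
from math import factorial

def bernoulli_list(n):
    """Return [B_0, ..., B_n] as exact rationals via the basic recursion
       B_m = -m! * sum_{k<m} B_k / (k! (m-k+1)!); O(n^2) arithmetic steps."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(B[k] / (factorial(k) * factorial(m - k + 1)) for k in range(m))
        B.append(-factorial(m) * s)
    return B
```

The growing size of the rationals involved is what drives the cost from $O(n^2)$ arithmetic steps up to roughly cubic bit-complexity.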
Brent and Harvey ("Fast computation of Bernoulli, Tangent and Secant numbers", 2011) showed that we can do better, giving algorithms that compute the first $n$ such numbers with $O(n^2\cdot(\log n)^{2+o(1)})$ bit-operations. Particularly attractive is the method for computing the Tangent numbers, which are all positive integers:
$$ T_n = \frac{(-1)^{n-1}2^{2n}(2^{2n}-1)}{2n} B_{2n} $$
since this nicely incorporates much of the expression for coefficients we required above.
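The Tangent-number computation can be sketched as follows, using the in-place integer recurrence from the Brent–Harvey paper, together with the relation above inverted to recover $B_{2n}$ (function names are illustrative):

```python
from fractions import Fraction

def tangent_numbers(n):
    """Tangent numbers T_1..T_n by the in-place integer algorithm of
       Brent & Harvey: O(n^2) integer operations, no divisions."""
    T = [0] * (n + 1)                  # 1-indexed; T[0] unused
    T[1] = 1
    for k in range(2, n + 1):
        T[k] = (k - 1) * T[k - 1]
    for k in range(2, n + 1):
        for j in range(k, n + 1):
            T[j] = (j - k) * T[j - 1] + (j - k + 2) * T[j]
    return T[1:]

def even_bernoullis(n):
    """B_2, B_4, ..., B_{2n} from the Tangent numbers, inverting
       T_n = (-1)^{n-1} 2^{2n} (2^{2n} - 1) B_{2n} / (2n)."""
    result = []
    for m, t in enumerate(tangent_numbers(n), start=1):
        p = 1 << (2 * m)               # 2^{2m}
        result.append(Fraction((-1) ** (m - 1) * 2 * m * t, p * (p - 1)))
    return result
```

Note that the inner double loop works entirely in (exactly representable) integers; the rationals only appear at the final conversion step.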
The specific details are described in a 2012 talk by David Harvey. One drawback is that the faster speed of this algorithm depends on knowing in advance the number of terms $n$ that will be needed.
In principle one could use those values to go further by falling back on the basic recursion to obtain a few additional terms. In this connection note the earlier Question, A recursion similar to the one for Bernoulli numbers. See also the Wikipedia article on Alternating permutations, whose counting is directly connected to the Tangent numbers (and the Secant numbers).