The computation of the exponential function with arbitrary accuracy has been discussed previously. The problem posed here is not so much the choice of an approximation as it is the choice of evaluation schemes with good numerical conditioning.
The conditioning of polynomial approximations on a bounded interval such as $[0,\ln 2]$ is quite good. You are summing a finite series of positive terms, and summing them from smallest to largest keeps rounding errors from accumulating.
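A minimal sketch in C of that smallest-to-largest summation (the function name and the fixed term cap are my own illustrative choices, not anyone's library routine):

```
/* Illustrative sketch: truncated Taylor series for exp on [0, ln 2],
 * summed from the smallest term to the largest.  On this interval the
 * terms x^k/k! decrease monotonically in k, so summing backwards adds
 * the tiny high-order terms before they can be swallowed by the leading 1. */
static double exp_taylor(double x, int nterms)
{
    double term[64];
    if (nterms > 64) nterms = 64;

    term[0] = 1.0;
    for (int k = 1; k < nterms; ++k)
        term[k] = term[k - 1] * x / k;      /* term[k] = x^k / k! */

    double sum = 0.0;
    for (int k = nterms - 1; k >= 0; --k)   /* smallest term first */
        sum += term[k];
    return sum;
}
```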
If you want to do better than $1$ ulp accuracy in a double precision "computation", you necessarily need higher-than-double precision in the intermediate results and care in rounding to a final double precision result. This is already how correctly rounded basic arithmetic operations are achieved, and it is especially true for the exponential function, where essentially the only argument whose exponential is exactly representable is $x=0$, with $\exp(0) = 1$.
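A minimal sketch of that idea, assuming an x86-style `long double` with a 64-bit significand: do the work in the wider format and round once at the end. This narrows, but does not provably close, the gap to correct rounding.

```
#include <math.h>

/* Sketch only: compute in extended precision, then round once to double.
 * With roughly 11 extra significand bits the rounded double is usually the
 * correctly rounded value, but the hardest-to-round arguments (the "worst
 * cases" in the references below) need far more intermediate precision. */
static double exp_wide(double x)
{
    return (double) expl((long double) x);
}
```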
An alternative is to avoid "computation" and resort to a gigantic lookup table. Effectively one precomputes the correctly rounded results and stores them for retrieval at runtime. A mixed strategy of using short computations over a moderately large number of short subintervals has been explored by several authors.
A key idea that has been proposed is that we can do better than exhaustive search when locating the difficult-to-round cases.
References
In The Table Maker's Dilemma (1998), Lefèvre, Muller, and Tisserand wrote:
The Table Maker's Dilemma is the problem of always getting correctly rounded results when computing the elementary functions. After a brief presentation of this problem, we present new developments that have helped us to solve this problem for the double-precision exponential function in a small domain.
Some more recent exposition by two of these same authors is offered as Introduction to the Table Maker's Dilemma, last modified in 2003, which focuses on the search for "worst cases", i.e. arguments $x$ where $f(x)$ falls extremely close to a rounding breakpoint (a machine-representable value or the midpoint between two of them), so that deciding the correctly rounded result requires very high intermediate precision. A superset of these authors contributed the chapter Solving the Table Maker's Dilemma to the Handbook of Floating-Point Arithmetic (2010).
Tips and Tricks
Note that an absolute rounding error already present in the input is amplified by the derivative of the exponential function, which equals the function value itself; as shown below, an absolute input error therefore reappears as a relative output error of the same size. For simplicity we will henceforth treat the input value $x$ of $\exp(x)$ as if it were an exact IEEE 754 double-precision value.
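Concretely, if the stored input is $x+\delta$ rather than $x$, then
$$ \exp(x+\delta) = \exp(x)\,e^{\delta} \approx \exp(x)\,(1+\delta) \quad\text{for small } |\delta|, $$
so the absolute input error $\delta$ becomes a relative output error of about $\delta$.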
With care you can approach correct rounding, i.e. $1/2$ ulp accuracy (in a double precision answer), but why bother? The OP responded:
Why am I interested in this problem? Firstly, I wrote a library whose correctness depends on high accuracy of log and exp (symmetric level-index representation of doubles), and I don't have time to check its correctness for all possible values of doubles in the considered region. Secondly, I found a library with lower error and higher efficiency than the VS C++ std implementation, but I don't understand why it works. And I can't find any error analysis of the exp function.
As broached in my responding Comment, implementing a mathematical library for a specific CPU architecture (such as x86) allows one to optimize code at the assembly-language level for best accuracy ("lower error") and speed ("higher efficiency"). Pursuit of this topic is outside the scope of Math.SE, but as a starting point I refer you to the Intel Floating Point Instruction Set and F2XM1 in particular.
What is within the scope of Math.SE is a discussion of algorithms and, in generic terms, of the trade-offs between speed and accuracy. High-level languages such as C allow a variety of algorithms, and it makes sense to consider "error analysis" and operation counts as a proxy for real-world performance (accuracy and speed).
The CORDIC algorithm for the exponential function is attractive because it reduces the intermediate operations to additions and shifts, whose round-off errors are easily modeled. By carrying extra "guard bits", the result can be pushed as close to correct rounding as we wish. However the number of operations (the complexity) grows roughly linearly with the number of bits (including guard bits) required.
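A minimal sketch of the shift-and-add idea in plain double arithmetic; in a real implementation the constants $\ln(1+2^{-k})$ would be precomputed to extended precision and the accumulator would carry the guard bits mentioned above (the function name here is mine):

```
#include <math.h>

/* Shift-and-add ("CORDIC-style") exponential on [0, ln 2], sketch only.
 * Greedily subtract the constants a_k = ln(1 + 2^-k) from the argument;
 * each accepted digit multiplies the result by (1 + 2^-k), which is just
 * an add and a shift of the accumulator. */
static double exp_shift_add(double x, int nbits)
{
    double y = 1.0;                          /* accumulates exp(x - residual) */
    for (int k = 1; k <= nbits; ++k) {
        double ak = log1p(ldexp(1.0, -k));   /* a_k = ln(1 + 2^-k); precompute in practice */
        if (x >= ak) {
            x -= ak;                         /* consume the digit ...            */
            y += ldexp(y, -k);               /* ... y *= (1 + 2^-k): add + shift */
        }
    }
    return y * (1.0 + x);                    /* residual x < 2^-nbits: first-order fix-up */
}
```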
A classical analysis of optimal approximation of the exponential function would begin with a polynomial approximation on a bounded interval. The truncated Taylor series is most accurate at (a small neighborhood of) $x=0$. For an extended interval such as $[0,\ln 2]$ one asks instead for the best uniform accuracy (minimax), obtained by applying the Remez algorithm.
That said, I'd recommend using best rational approximations rather than polynomial approximations. The rational approximations can cheaply impose the symmetry of the exponential function:
$$ \exp(-x) = \frac{1}{\exp(x)} $$
Details (which involve Bernoulli numbers) can be found here.
The Padé approximant of appropriate degree (mentioned by J. M. is not a mathematician) is often used as a starting value for the rational version of the Remez algorithm.
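For example, the $[2/2]$ Padé approximant of $\exp$ at $0$ already has the reciprocal-symmetric form $P(x)/P(-x)$:
$$ \exp(x) \approx \frac{12 + 6x + x^2}{12 - 6x + x^2}, $$
so $R(x)\,R(-x) = 1$ holds identically, and the rational Remez iteration can be started from coefficients of this shape.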
Regardless of whether the approximating function is polynomial or rational, a reduction in operation count is possible by subdividing the interval of approximation $[0,\ln 2]$ into suitably small pieces. In the limit one imagines a lookup table (see this previous Question)! Short of that, one exerts effort to obtain precomputed values of the exponential function which are especially close to their floating-point representations. Using these as endpoints of the subintervals allows economical computations of known accuracy (fewer operations make the error analysis both easier and more conservative).
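A sketch of that table-plus-short-polynomial strategy follows; the segment count, table contents, and polynomial degree are illustrative choices rather than tuned constants, and a production routine would store extra-precise table entries chosen as described above:

```
#include <math.h>

#ifndef M_LN2
#define M_LN2 0.6931471805599453    /* ln 2 */
#endif

#define NSEG 64                     /* illustrative number of subintervals of [0, ln 2] */

static double seg_exp[NSEG + 1];    /* exp at the subinterval breakpoints j*(ln 2)/NSEG */

static void init_seg_table(void)
{
    for (int j = 0; j <= NSEG; ++j)
        seg_exp[j] = exp(j * (M_LN2 / NSEG));   /* a real library stores these to extra precision */
}

/* exp on [0, ln 2]: pick the nearest breakpoint j*h, so the remainder t is
 * at most h/2 ~ 0.0054 in magnitude, and a degree-5 Taylor polynomial in t
 * already has truncation error below 2^-53 (rounding errors come on top). */
static double exp_table(double x)
{
    const double h = M_LN2 / NSEG;
    int j = (int)(x / h + 0.5);
    double t = x - j * h;
    double p = 1.0 + t * (1.0 + t * (1.0 / 2 + t * (1.0 / 6 + t * (1.0 / 24 + t * (1.0 / 120)))));
    return seg_exp[j] * p;
}
```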