2

Let $r_n$ be the unique real root of the Taylor polynomial of $e^x$ of degree $2n+1$: $P_{2n+1}(x)=\sum_{k=0}^{2n+1}\frac{x^k}{k!}$.

Can we find an asymptotic expansion of the sequence $r_n$?


I have shown thus far that $-(2n+3)<r_n<-(2Cn+C)$ where $C=W(1/e).$

The upper bound is interesting because $2C=0.5568$, which is suspiciously close to the coefficient found by computer in the post below.

Edit: I have now in fact proved that $r_n$ is asymptotic to $-2W(1/e)n$, by extending the method used to derive the upper bound. As for the remaining terms in the expansion, I don't know...
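As a numerical sanity check of this claim (a sketch, not part of the proof; the helper names are my own), one can locate $r_n$ by bisection in exact rational arithmetic, which sidesteps the catastrophic cancellation that floating-point evaluation of $P_{2n+1}$ suffers at large negative arguments, and compare $r_n/n$ against $-2W(1/e)$:

```python
from fractions import Fraction
import math

def taylor_root(n, iters=50):
    """Real root r_n of P_{2n+1}(x) = sum_{k=0}^{2n+1} x^k/k!,
    located by bisection in exact rational arithmetic."""
    d = 2 * n + 1
    fact = [1]
    for k in range(1, d + 1):
        fact.append(fact[-1] * k)
    # Horner coefficients, highest degree first: 1/d!, ..., 1/1!, 1/0!
    coeffs = [Fraction(1, fact[k]) for k in range(d, -1, -1)]

    def p(x):
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc

    lo, hi = Fraction(-(2 * n + 3)), Fraction(0)  # bracket from the bound above
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid) < 0:   # P_{2n+1} is negative to the left of its unique real root
            lo = mid
        else:
            hi = mid
    return float((lo + hi) / 2)

def lambert_w_at_1_over_e():
    """Compute C = W(1/e) by solving w*e^w = 1/e with Newton's method."""
    w = 0.3
    for _ in range(50):
        w -= (w * math.exp(w) - math.exp(-1)) / ((1 + w) * math.exp(w))
    return w
```

For instance `taylor_root(10)` is about $-6.743$, and the ratios `taylor_root(n)/n` drift toward $-2W(1/e)\approx -0.55693$, though slowly, which is consistent with a lower-order correction term.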

math_lover
  • It may (or may not) help to observe $$P_{2n+1}(x) = e^x \frac{\Gamma(2n+2, x)}{\Gamma(2n+2)}$$where $\Gamma(s,z)$ is the incomplete gamma function. –  Jan 13 '17 at 08:23
  • It seems to be very close to linearity. – Claude Leibovici Jan 13 '17 at 08:27
  • Well actually it's easy to verify that the growth is at most linear. First show that $r_n$ is strictly decreasing and $r_n$ tends to minus infinity. This implies that $r_n>-(n+2)$. After that some more work is required... – math_lover Jan 13 '17 at 11:11
  • Sorry I meant you get that $r_n>-(2n+3)$. – math_lover Jan 13 '17 at 14:27
  • I can also show quite easily that $r_n$ must grow at least linearly. We can get a lower bound by using Taylor's theorem. – math_lover Jan 13 '17 at 14:39
  • Related: http://math.stackexchange.com/q/51586/5531 – Antonio Vargas Jan 13 '17 at 16:32
  • @AntonioVargas : the question of course is where does formula (1) in that link come from. Clearly Stirling's approximation comes into play, which is what I used to calculate the upper bound on $r_n$ (which somehow ends up giving the right answer)... – math_lover Jan 13 '17 at 19:35
  • @JoshuaBenabou, I could give an overview on how that's derived in an answer if you'd like. The basic idea is that you start with a nice integral representation for the partial sums (there are two that come to mind that work well) and apply the saddle point method to approximate it. – Antonio Vargas Jan 13 '17 at 20:30
  • Yeah that would be helpful. I only know how to derive the first term of the asymptotic expansion, but after that I think it's a bit tricky without using more advanced tools. – math_lover Jan 13 '17 at 20:34

2 Answers

3

One fruitful way to approximate the root(s) of the Taylor polynomials is to first approximate the polynomials themselves then approximate the roots of the approximation. If you're careful you can get good error bounds between the approximate zeros and the actual zeros.

A simple way to get an integral representation for the Taylor polynomials is to use the integral form for the Taylor remainder:

$$ e^x = s_n(x) + \frac{1}{n!} \int_0^x e^t (x-t)^n\,dt, \tag{1} $$

where

$$ s_n(x) := \sum_{k=0}^{n} \frac{x^k}{k!}. $$

Making the substitution $t = xs$ in the integral then yields

$$ e^x = s_n(x) + \frac{x^{n+1}}{n!} \int_0^1 e^{xs}(1-s)^n\,ds. \tag{2} $$
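Equation $(2)$ is easy to verify numerically; in the sketch below (my own helper names) the integral is computed with a composite Simpson rule and the identity is checked against `math.exp`:

```python
import math

def taylor_sum(x, n):
    """s_n(x) = sum_{k=0}^n x^k / k!, accumulated term by term."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

def remainder_term(x, n, m=2000):
    """(x^{n+1}/n!) * integral_0^1 e^{xs} (1-s)^n ds, the remainder in
    equation (2), via composite Simpson's rule with m (even) subintervals."""
    f = lambda s: math.exp(x * s) * (1.0 - s) ** n
    h = 1.0 / m
    acc = f(0.0) + f(1.0)
    for i in range(1, m):
        acc += (4 if i % 2 else 2) * f(i * h)
    return x ** (n + 1) / math.factorial(n) * (acc * h / 3.0)
```

With these, `taylor_sum(x, n) + remainder_term(x, n)` reproduces `math.exp(x)` to near machine precision for moderate `x` of either sign.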

Inside the integral, if we were to send $n \to \infty$, we might notice that interesting things could happen when $x$ is proportional to $n$. Then the factors $e^{xs}$ and $(1-s)^n$ will both have an "$n$" in the exponent (and hence balance in a sense), and we can expect that the integral will tend to different values depending on the constant of proportionality. This suggests that we replace $x$ by $nx$ in the equation and thus consider

$$ \begin{align} e^{nx} &= s_n(nx) + \frac{(nx)^{n+1}}{n!} \int_0^1 e^{nxs}(1-s)^n\,ds \\ &= s_n(nx) + \frac{(nx)^{n+1}}{n!} \int_0^1 [e^{xs}(1-s)]^n\,ds \\ &= s_n(nx) + \frac{(nx)^{n+1}}{n!} \int_0^1 \exp\{n[xs + \log(1-s)]\}\,ds. \tag{3} \end{align} $$

For $x < 1$ the exponent

$$ \varphi(s) = xs + \log(1-s) $$

is strictly decreasing on the interval $(0,1)$ and near its maximum at $s=0$ we have

$$ \varphi(s) = (x-1)s + O\!\left(s^2\right). $$

Following the usual steps of the Laplace method we can therefore deduce that

$$ \int_0^1 \exp\{n[xs + \log(1-s)]\}\,ds = \int_0^\infty e^{n[(x-1)s]}\,ds + O\!\left(\frac{1}{n^2}\right), \tag{4} $$

where the constant in the error term is uniform as long as $x$ remains bounded away from $1$. Of course we can calculate the remaining integral exactly:

$$ \int_0^\infty e^{n[(x-1)s]}\,ds = \frac{1}{n} \frac{1}{1-x}, $$

and substituting these in $(3)$ yields

$$ e^{nx} = s_n(nx) + \frac{n^n}{n!} \frac{x^{n+1}}{1-x} \left[1 + O\!\left(\frac{1}{n}\right)\right]. \tag{5} $$

A little help from Stirling's formula

$$ \frac{n^n}{n!} = \frac{e^n}{\sqrt{2\pi n}} \left[ 1 + O\!\left(\frac{1}{n}\right)\right] $$

allows us to simplify this a bit and we end up with

$$ e^{nx} = s_n(nx) + \frac{e^n x^{n+1}}{\sqrt{2\pi n}(1-x)} \left[1 + O\!\left(\frac{1}{n}\right)\right]. \tag{6} $$
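One can also check $(6)$ numerically for a fixed $x<1$. In the sketch below (my own helper names; the prediction is computed in log space so that $e^n$ does not overflow), the ratio of the true remainder $e^{nx}-s_n(nx)$ to the predicted main term should approach $1$ at rate $O(1/n)$:

```python
import math

def taylor_sum(x, n):
    # s_n(x) = sum_{k=0}^n x^k / k!
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

def remainder_ratio(x, n):
    """(e^{nx} - s_n(nx)) divided by the main term
    e^n x^{n+1} / (sqrt(2 pi n) (1 - x)) predicted by equation (6)."""
    actual = math.exp(n * x) - taylor_sum(n * x, n)
    log_abs = (n + (n + 1) * math.log(abs(x))
               - 0.5 * math.log(2 * math.pi * n) - math.log(1 - x))
    sign = 1.0 if x > 0 or (n + 1) % 2 == 0 else -1.0  # sign of x^{n+1}
    return actual / (sign * math.exp(log_abs))
```

For example, with $x=-1/2$ the ratio is already within a few percent of $1$ at $n=41$, and closer still at $n=81$.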

Finally, if $x$ is a zero of $s_n(nx)$ then we may conclude that it satisfies

$$ e^{nx} = \frac{e^n x^{n+1}}{\sqrt{2\pi n}(1-x)} \left[1 + O\!\left(\frac{1}{n}\right)\right] $$

or, after a bit of rearranging,

$$ \left(x e^{1-x}\right)^{-n} = \frac{x}{\sqrt{2\pi n}(1-x)} \left[1 + O\!\left(\frac{1}{n}\right)\right]. \tag{7} $$

This is essentially equation $(1)$ in my answer here and can be used as a starting point to approximate the real zero $x$ (which tends to $-W(1/e)$ as $n \to \infty$ with $n$ odd) using the methods indicated there.

If I recall correctly you should be able to use $(7)$ to calculate the terms in the asymptotic series for the root up to order $O(n^{-2})$ (which is the approximation I gave in my linked answer). To calculate more terms in the series you can be more careful with the application of the Laplace method in equation $(4)$ above; if you calculate the full asymptotic expansion for the integral in $(4)$ you should, in principle, be able to calculate any term in the asymptotic expansion of the root $x$.
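To illustrate how $(7)$ pins down the root (a sketch with my own naming, applied to the question's $P_{2n+1}$, so $N=2n+1$ plays the role of $n$ here): writing $x=-u$ with $0<u<1$ and taking logarithms of the absolute values of both sides turns $(7)$, with the $O(1/n)$ bracket dropped, into a scalar equation that bisection solves easily:

```python
import math

def root_from_eq7(N, iters=100):
    """Approximate the real zero N*x of s_N(N x), N odd, by solving eq. (7)
    with the O(1/n) factor dropped: writing x = -u, 0 < u < 1, and taking
    logs of absolute values gives
        -N (ln u + 1 + u) = ln u - ln( sqrt(2 pi N) (1 + u) )."""
    def h(u):
        return (-N * (math.log(u) + 1.0 + u)
                - math.log(u)
                + 0.5 * math.log(2.0 * math.pi * N)
                + math.log(1.0 + u))
    lo, hi = 1e-9, 1.0          # h(lo) > 0 and h(hi) < 0, so a root lies between
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return -N * 0.5 * (lo + hi)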

I am not aware of a closed form for the coefficients in the asymptotic expansion of the root though I suppose one could be obtained using something like the Lagrange inversion formula.

  • And, in conclusion, $r_n$ is the sum of a linear term, a logarithmic term, a constant term, and an $o(1)$ error. –  Jan 14 '17 at 02:35
1

Just to provide a few numbers.

The table below gives the values of $r_n$ $$\left( \begin{array}{cc} n & r_n \\ 10 & -6.74309 \\ 20 & -12.3733 \\ 30 & -17.9809 \\ 40 & -23.5783 \\ 50 & -29.1698 \\ 60 & -34.7574 \\ 70 & -40.3424 \\ 80 & -45.9254 \\ 90 & -51.5068 \\ 100 & -57.0870 \\ 110 & -62.6662 \\ 120 & -68.2446 \\ 130 & -73.8223 \\ 140 & -79.3993 \\ 150 & -84.9759 \\ 160 & -90.5519 \\ 170 & -96.1276 \\ 180 & -101.703 \\ 190 & -107.278 \\ 200 & -112.853 \end{array} \right)$$

If you plot them, you should notice an almost linear trend. A very simple linear regression gives $$r_n=-1.24153-0.55822 \,n$$ with very highly significant coefficients. $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & -1.24153 & 0.01338 & \{-1.26976,-1.21330\} \\ b & -0.55822 & 0.00011 & \{-0.55845,-0.55798\} \\ \end{array}$$ The values in the above table were generated using Newton's method, the initial estimates being extrapolated from the previous values $$r_n^{(0)}=2r_{n-1}-r_{n-2},$$ which allows very fast convergence.

For example, $r_{48}=-28.05181722$, $r_{49}=-28.61081015$ give $r_{50}^{(0)}=-29.16980308$ leading to $r_{50}=-29.16976403$ basically obtained after a single iteration.
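The scheme just described can be sketched in a few lines (my own naming; plain double precision, which is only trustworthy for moderate $n$ because evaluating $P_{2n+1}$ at large negative arguments suffers heavy cancellation, so the table above presumably needed higher-precision arithmetic). Note that $P_{2n+1}'=P_{2n}$, which is positive for all real $x$, so the Newton step is always well defined:

```python
import math

def p_and_dp(x, n):
    """Evaluate P_{2n+1}(x) = sum_{k=0}^{2n+1} x^k/k! and its derivative
    P_{2n+1}'(x) = P_{2n}(x), accumulating both sums in one pass."""
    term, p, dp = 1.0, 1.0, 1.0
    for k in range(1, 2 * n + 2):
        term *= x / k
        p += term
        if k < 2 * n + 1:   # the derivative sum stops one term earlier
            dp += term
    return p, dp

def newton_roots(n_max):
    """r_n for n = 1..n_max via Newton's method, seeding each search with
    the linear extrapolation r_n ~ 2 r_{n-1} - r_{n-2}."""
    roots = {}
    for n in range(1, n_max + 1):
        if n < 3:
            x = -2.0 * n          # crude seed for the first two roots
        else:
            x = 2 * roots[n - 1] - roots[n - 2]
        for _ in range(50):
            p, dp = p_and_dp(x, n)
            step = p / dp
            x -= step
            if abs(step) < 1e-12:
                break
        roots[n] = x
    return roots
```

With the extrapolated seeds, only a handful of Newton iterations per $n$ are needed, matching the behavior described above.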

Edit

Taking into account Antonio Vargas's elegant answer as well as Hurkyl's comment, a much better fit is obtained
$$r_n=-0.95857-0.557040 \,n-0.09199 \log (n)$$ $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & -0.95857 & 0.00263 & \{-0.96414,-0.95299\} \\ b & -0.55704 & 0.00001 & \{-0.55707,-0.55702\} \\ c & -0.09199 & 0.00084 & \{-0.09375,-0.09021\} \\ \end{array}$$

which makes a significant improvement (for the first model, the sum of squared errors is $1.41 \times 10^{-2}$ while, for the second, it is reduced to $1.88 \times 10^{-5}$ that is to say $750$ times smaller).