I understand how to prove that a power series diverges, but that seems to contradict simple logic to me. The idea behind constructing a Taylor series is that it is a polynomial with the same $n$th derivative at a given center for every positive integer $n$. And what is it that governs a function's behavior at a point? Its derivative at that point! If every order of derivative matches, it should make sense that the infinite sum exactly matches the function everywhere, because each point can be "traced back" as a result of a derivative at a point. Clearly this is wrong, but I can't seem to connect these diverging (pun intended) lines of logic.
- Possible duplicate: https://math.stackexchange.com/questions/694697/infinitely-differentiable-function-with-divergent-taylor-series – avs Apr 17 '19 at 22:59
- The Taylor series diverges if you are not close to the desired point (this means that you cannot simply trace it back). However, if your series is centered around the desired point, it can be traced back and give an optimal value. – Sina Babaei Zadeh Apr 17 '19 at 23:03
- But if it is matching up to any order of derivative, my whole point is that it should be possible to trace it back, no matter how far away from the center you are. – Apr 17 '19 at 23:05
- This may help: https://math.stackexchange.com/questions/1308992/why-doesnt-a-taylor-series-converge-always – I0_0I Apr 17 '19 at 23:06
- As Bernard's answer below shows, for smooth real functions, knowing all the derivatives at a point is not sufficient to determine the behaviour near that point. – copper.hat Apr 17 '19 at 23:13
- Local behavior doesn't determine global behavior. $\frac 1{1-x}=\sum x^n$ if $x$ is near $0$ but not at, say, $x=2$, even though $\frac 1{1-x}$ is well defined at $2$. – lulu Apr 17 '19 at 23:49
3 Answers
Saying a function is the sum of its Taylor series means the function is analytic. On $\mathbf R$, these notions are not equivalent. For instance, the function $$f(x)=\begin{cases}\mathrm e^{-\frac1{x^2}}&\text{ if }x\ne 0,\\ 0&\text{ if }x=0,\end{cases}$$ has derivatives of all orders, which are $0$ at $0$, hence its Taylor series near $0$ is $0$, yet $f(x)\ne 0$ if $x\ne 0$.
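To see this concretely, here is a minimal Python/sympy sketch (an illustration, not part of the original argument) that checks the first few derivatives of $f$ at $0$ and evaluates $f$ away from the origin:

```python
# Check that the first few derivatives of f(x) = exp(-1/x^2) vanish at 0,
# so the Taylor series of f at 0 is identically zero -- yet f is not.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for k in range(5):
    # For x != 0 the k-th derivative is an elementary expression; its limit
    # as x -> 0 exists and equals f^(k)(0) (this is what makes the extension
    # by f(0) = 0 smooth at the origin).
    print(f"f^({k})(0) =", sp.limit(sp.diff(f, x, k), x, 0))  # 0 every time

print("f(1) =", float(f.subs(x, 1)))  # ~0.3679, so f is nonzero for x != 0
```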

Every Taylor series $$T_{f,x_0}(x) = \sum_{k=0}^{\infty} \frac{(x-x_0)^k f^{(k)}(x_0)}{k!} $$ has a point of development $x_0 \in \mathbb{R}$. Every (literally every) Taylor series agrees exactly with the value of the function at its point of development:
$$T_{f,x_0}(x_0) = f(x_0).$$
In particular, a Taylor series cannot diverge at its point of development $x_0$. If you look at the formula for $T_{f,x_0}(x)$, you can see that it only contains information about the derivatives of $f$ at the point $x_0$. Therefore this point is "safe", and the Taylor series always converges to $f(x_0)$ for $x=x_0$. But if you move away from the point of development, crazier things can happen.
In particular, there are Taylor series which diverge everywhere except at their point of development $x_0$. This can happen because the area of "good" approximation of a finite-order Taylor polynomial $$T_{f,x_0,n}(x) = \sum_{k=0}^{n} \frac{(x-x_0)^k f^{(k)}(x_0)}{k!} $$ can shrink around $x_0$ as $n \rightarrow \infty$. So if you increase $n$, the approximation of $f$ gets better, but the area around $x_0$ in which the approximation is good gets smaller at the same time.
To truly understand this, I encourage you to study the following canonical example:
$$h : \mathbb{R} \rightarrow \mathbb{R}, \quad h(x) = e^{-1/x^2} \text{ for } x \neq 0, \quad h(0) := 0.$$
Compute and plot the finite-order Taylor polynomials of $h$ at $x_0$ for $n = 0,1,2,3,4,5,\dots$ I guarantee that if you plot these, it will become crystal clear to you how a Taylor series can diverge. In the case of $h$, the phenomenon I described occurs: the approximation around $x_0$ gets better with increasing $n$, but, somewhat paradoxically, the size of the area where this approximation is actually useful shrinks at the same time.
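Here is a minimal Python/matplotlib sketch of this exercise, assuming the center $x_0 = 0$ (at which every derivative of $h$ vanishes, so every Taylor polynomial is the zero polynomial):

```python
# Plot h against its Taylor polynomials at x0 = 0: since every derivative
# of h vanishes at 0, T_{h,0,n} is identically zero for every n, and the
# "series" is the flat line y = 0, which matches h only at the origin.
import numpy as np
import matplotlib.pyplot as plt

def h(x):
    # h(x) = exp(-1/x^2) for x != 0 and h(0) = 0; the inner np.where
    # avoids a division-by-zero warning at x = 0
    safe = np.where(x == 0, 1.0, x)
    return np.where(x == 0, 0.0, np.exp(-1.0 / safe**2))

xs = np.linspace(-2.0, 2.0, 1001)
plt.plot(xs, h(xs), label=r"$h(x) = e^{-1/x^2}$")
plt.plot(xs, np.zeros_like(xs), "--", label=r"$T_{h,0,n}(x) = 0$ for all $n$")
plt.legend()
plt.xlabel("$x$")
plt.show()
```

For this particular $h$ and $x_0 = 0$, the Taylor series in fact converges at every $x$, but to the zero function: it recovers $h$ only at the center itself, which is the same failure of "tracing back" the question asks about.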

- Excuse me, I have a question: did you mean the "error" from Taylor's theorem when you said "area of good approximation"? If you didn't, then what exactly does "area of good approximation" mean? Is there a formal term for it? Thank you. – Orange Cat May 26 '22 at 02:25
Let $f: I \to \mathbb R$ be a $C^\infty$ function and $a \in I$, where $I$ is an open interval. Let $P_n(x)=\sum_{k=0}^n f^{(k)}(a) \frac{(x-a)^k}{k!}$ be the $n$-th Taylor polynomial of $f$ at $x=a$.
What the various Taylor theorems say (read the precise statements carefully, and don't be satisfied with vague sentences like "you can trace back using the derivatives") is essentially this: $|P_n(x) - f(x)| \leq C_n |x-a|^{n+1}$ for some constant $C_n$ depending on $n$.
However, it may very well happen that the constant $C_n$ tends to infinity as $n$ increases. In that case, the approximation by $P_n$ still gets better, but only on intervals $I_n$ centered at $a$, with $I_n$ getting smaller and smaller as $n$ goes to infinity. It is then entirely possible that the Taylor series either does not converge, or converges to a function different from $f$. The other answers and the comments give examples of this.
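To see the effect of the growing constants numerically, here is a minimal Python sketch borrowing lulu's example from the comments, $f(x) = \frac{1}{1-x}$ with $a = 0$ (this choice of function is an illustration; the answer itself names none):

```python
# Lulu's example from the comments: f(x) = 1/(1-x), whose n-th Taylor
# polynomial at a = 0 is P_n(x) = 1 + x + ... + x^n.  Inside the radius
# of convergence (|x| < 1) the error shrinks as n grows; at x = 2, where
# f(2) = -1 is perfectly well defined, the error blows up instead.
def f(x):
    return 1.0 / (1.0 - x)

def taylor_poly(n, x):
    # n-th Taylor polynomial of f at 0, evaluated at x
    return sum(x**k for k in range(n + 1))

for n in (2, 5, 10, 20):
    err_in = abs(taylor_poly(n, 0.5) - f(0.5))   # -> 0 as n grows
    err_out = abs(taylor_poly(n, 2.0) - f(2.0))  # -> infinity as n grows
    print(f"n = {n:2d}:  |P_n - f| at x=0.5: {err_in:.2e},  at x=2: {err_out:.2e}")
```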
