
The following is a problem from Spivak's Calculus

Chapter 20, Problem 18: Deduce Theorem $1$ as a corollary of Taylor's Theorem, with any form of the remainder. (The catch is that it will be necessary to assume one more derivative than in the hypotheses for Theorem 1).

Here is Theorem $1$

Theorem 1 Suppose that $f$ is a function for which

$$f'(a),...,f^{(n)}(a)$$

all exist. Let

$$a_k=\frac{f^{(k)}(a)}{k!}, 0\leq k\leq n$$

and define

$$P_{n,a}(x)=a_0+a_1(x-a)+...+a_n(x-a)^n$$

Then

$$\lim\limits_{x\to a} \frac{f(x)-P_{n,a}(x)}{(x-a)^n}=0$$
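As a quick numeric sanity check of this limit (a sketch of my own, not from Spivak), take $f=\exp$, $a=0$, $n=3$, so $P_{3,0}(x)=1+x+x^2/2+x^3/6$; the ratio $(f(x)-P_{3,0}(x))/x^3$ visibly shrinks as $x\to 0$:

```python
import math

# Numeric sketch (my own illustration): f = exp, a = 0, n = 3.
# Watch (f(x) - P_{3,0}(x)) / x^3 shrink as x -> 0.
def f(x):
    return math.exp(x)

def P(x):
    return 1 + x + x**2 / 2 + x**3 / 6  # degree-3 Taylor polynomial at 0

for x in [0.1, 0.01, 0.001]:
    print(x, (f(x) - P(x)) / x**3)
```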

and here is Taylor's Theorem

Theorem 4 (Taylor's Theorem) Suppose that $f',...,f^{(n+1)}$ are defined on $[a,x]$, and that $R_{n,a}(x)$ is defined by

$$f(x)=f(a)+f'(a)(x-a)+...+\frac{f^{(n)}(a)}{n!}(x-a)^n+R_{n,a}(x)$$

Then

$$R_{n,a}(x)=\frac{f^{(n+1)}(t)}{(n+1)!}(x-a)^{n+1},\text{ for some } t \text{ in } (a,x)$$

(Lagrange form of the remainder).
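As a hedged numeric illustration of the Lagrange form (the choice of function and the bisection helper are my own, not from Spivak): take $f=\sin$, $a=0$, $n=2$, $x=1$. Then $P_{2,0}(x)=x$ and the theorem promises some $t\in(0,1)$ with $R_{2,0}(1)=f'''(t)/3!=-\cos(t)/6$, which we can locate numerically:

```python
import math

# Find the t promised by the Lagrange remainder for f = sin, a = 0, x = 1, n = 2:
#   R_{2,0}(1) = -cos(t)/6 for some t in (0, 1).
x = 1.0
R = math.sin(x) - x                    # P_{2,0}(x) = x for f = sin, a = 0
g = lambda t: -math.cos(t) / 6 - R     # the promised t is a root of g

# g(0) < 0 < g(1), so bisection converges to a root in (0, 1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
t = (lo + hi) / 2
print(t, 0 < t < 1)
```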

My primary question is: why can or can't we use the following proof

Assume we know Taylor's theorem is true and assume the first $n+1$ derivatives of $f$ exist on $[a,x]$.

Taylor's theorem tells us that

$$f(x)-P_{n,a}(x)=R_{n,a}(x)=\frac{f^{(n+1)}(t)}{(n+1)!}(x-a)^{n+1}, \text{ for some } t\in (a,x)$$

Then,

$$\lim\limits_{x\to a} \frac{f(x)-P_{n,a}(x)}{(x-a)^n}=\lim\limits_{x\to a} \frac{f^{(n+1)}(t)}{(n+1)!}(x-a)=0$$

In addition, let me show the solution manual's solution, followed by my own understanding of it in more steps.

Solution Manual

Suppose $|f^{(n+1)}|$ is bounded, by some $M$, on some interval around $a$. Then for $x$ in this interval we have

$$|R_{n,a}(x)|=\frac{|f^{(n+1)}(t)|}{n!}|x-a|^{n+1}\tag{1}$$

so

$$\frac{|R_{n,a}(x)|}{|x-a|^n}\leq M|x-a|\tag{2}$$

so

$$\lim\limits_{x\to a} \frac{R_{n,a}(x)}{(x-a)^n}=0\tag{3}$$

A similar proof works for the integral form of the remainder and for the Cauchy form.
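To make the manual's bound concrete (a numeric sketch under my own choice of example): for $f=\sin$, $a=0$, $n=2$, we may take $M=1$ since $|f'''|=|\cos|\leq 1$ everywhere, and inequality $(2)$ checks out numerically, with the ratio tending to $0$:

```python
import math

# Check the manual's bound |R_{2,0}(x)| / |x|^2 <= M |x| for f = sin, a = 0,
# where |f'''| = |cos| <= M = 1 on all of R.
M = 1.0
n = 2
for x in [0.5, 0.05, 0.005]:
    R = math.sin(x) - x                  # remainder past P_{2,0}(x) = x
    ratio = abs(R) / abs(x)**n
    assert ratio <= M * abs(x)           # inequality (2)
    print(x, ratio)
```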

Here is my understanding of this proof in more steps

Suppose we know Taylor's theorem is true, and assume the assumptions of that theorem are true on some interval $[a,x]$.

One of those assumptions is that $f',...,f^{(n+1)}$ are defined on $[a,x]$. Thus, $f^{(n+1)}$ is continuous on $[a,x]$, and hence bounded on this interval.

Thus, there exists some $M>0$, such that for any $x_1\in[a,x]$ we have $|f^{(n+1)}(x_1)|\leq M$.

In addition, Taylor's theorem tells us that there is some $t\in (a,x)$ such that

$$R_{n,a}(x)=\frac{f^{(n+1)}(t)}{(n+1)!}(x-a)^{n+1}\tag{4}$$

Second question: why does $(1)$ have the denominator as $n!$ and not $(n+1)!$ as in $(4)$?

Now we take the absolute value of both sides of $(4)$

$$|R_{n,a}(x)|=\frac{|f^{(n+1)}(t)|}{(n+1)!}|(x-a)|^{n+1}$$

$$0\leq \frac{|R_{n,a}(x)|}{|x-a|^{n}}=\frac{|f^{(n+1)}(t)|}{(n+1)!}|x-a|$$

$$\leq \frac{M}{(n+1)!}|x-a|\leq M|x-a|$$

Hence

$$\lim\limits_{x\to a} \frac{|R_{n,a}(x)|}{|x-a|^{n}}=\lim\limits_{x\to a} \frac{|f(x)-P_{n,a}(x)|}{|x-a|^{n}} =0$$

$$\implies \lim\limits_{x\to a} \frac{f(x)-P_{n,a}(x)}{(x-a)^{n}} =0$$

which is the result of Theorem 1.

xoux

2 Answers


For your second question: yes, I think it is a typo, and it should be $(n+1)!$.

For your first question: note that $t$ depends on $x$, so the $f^{(n+1)}(t)$ term will vary as $x \to a$ (you cannot treat it as a constant in $x$), and you cannot immediately conclude that $\lim_{x \to a} \frac{f^{(n+1)}(t)}{(n+1)!} (x-a) = 0$. As the solution manual shows, this can be fixed as long as you have a condition that ensures the existence of some $M$ such that $|f^{(n+1)}(t)| \le M$ for all $x$ near $a$.

angryavian

Firstly, regarding the paragraph where you express your understanding: just because $f^{(n+1)}$ is defined on a closed interval DOES NOT mean that $f^{(n+1)}$ is continuous on that interval. The classic counterexample is $f(x)=x^2\sin(\frac{1}{x})$ for $x\neq 0$, with $f(0)=0$: $f'$ is defined on $[-c,c]$ but is NOT continuous on $[-c,c]$, because $\lim_{x \to 0}f'(x)$ does not exist (which means $f'$ is not continuous at $x=0$, by definition).
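A numeric sketch of this counterexample (the code and sampling points are my own): away from $0$ the derivative is $f'(x)=2x\sin(\frac{1}{x})-\cos(\frac{1}{x})$, and along one sequence tending to $0$ its values approach $-1$ while along another they approach $+1$, so $\lim_{x\to 0}f'(x)$ cannot exist:

```python
import math

# f(x) = x^2 sin(1/x) with f(0) = 0; for x != 0,
#   f'(x) = 2x sin(1/x) - cos(1/x).
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# two sequences -> 0 along which cos(1/x) is +1 and -1 respectively
xs_to_minus1 = [1 / (2 * math.pi * k) for k in (10, 100, 1000)]       # f' -> -1
xs_to_plus1 = [1 / (math.pi * (2 * k + 1)) for k in (10, 100, 1000)]  # f' -> +1
print([fprime(x) for x in xs_to_minus1])
print([fprime(x) for x in xs_to_plus1])
```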

Secondly, just to add some detail to concretize angryavian's response, consider the function $f(x)=x^2\sin(\frac{1}{x^2})$ with $f(0)=0$. Its derivative is defined on $[0,c]$ but is not bounded near $0$. This is why Spivak assumes boundedness in his proof.
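A numeric sketch of that unboundedness (code is my own): for $x\neq 0$ the derivative is $f'(x)=2x\sin(\frac{1}{x^2})-\frac{2}{x}\cos(\frac{1}{x^2})$, and along $x_k=1/\sqrt{2\pi k}$ it equals $-2/x_k$, which blows up as $k$ grows:

```python
import math

# f(x) = x^2 sin(1/x^2) with f(0) = 0; for x != 0,
#   f'(x) = 2x sin(1/x^2) - (2/x) cos(1/x^2).
def fprime(x):
    return 2 * x * math.sin(1 / x**2) - (2 / x) * math.cos(1 / x**2)

# at x_k = 1/sqrt(2*pi*k): sin term vanishes, cos term is 1, so f'(x_k) = -2/x_k
for k in (1, 100, 10000):
    x = 1 / math.sqrt(2 * math.pi * k)
    print(x, fprime(x))
```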

S.C.