
Here's a proof (with highlighted parts) from Steven Krantz's textbook «Real Analysis and Foundations»: part1 part2

$\color{red}{(1)}$ (from the image) feels extremely arbitrary. What could the author's reasoning have been for trying such a substitution? I see that it indeed does the job, yet I have no idea what motivated it.

$\color{orange}{(2)}$ I think most probably the author simply meant $\lim\limits_{k\to\infty} \max\limits_{l \ge k}|a_l| = 0$, right?

$\color{orange}{(3)}$ If the answer to the previous question is «yes», then isn't that expression meaningless? I'm not entirely sure that we're guaranteed that the infinite set $ \{ |a_k|, |a_{k+1}|, \ldots \} $ always has a largest element (even though $\sup$ apparently exists in this case).

$\color{green}{(4)}$ And here I'm completely lost.

  • First, why $\color{orange}{(3)} \implies \limsup\limits_{n\to\infty}{|\rho_n|} \le \epsilon \cdot A$?
  • And more importantly, why does the lim sup inequality imply $(|\rho_n|)\to 0$?

P.S. I'm sorry for including images (and especially links to the images), but as a new user I cannot embed them into the post directly «just yet» $\, \require{HTML} \style{display: inline-block; transform: rotate(90deg)}{:(}$

P.P.S. Please let me know if there's a better way to show this (preferably referencing a textbook).

1 Answer


(1) The idea is the following: we have $B_n \xrightarrow{n \to \infty} \beta$, so if we set $\lambda_n = B_n - \beta$, then we are "measuring" how far $B_n$ is from its limit $\beta$. You can think of $\lambda_n$ as an "error term", telling you how far away you are from the limit. Convergence means that we can make this error as small as we want, which, as you can see, turns out to be useful in the proof.
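Since the proof itself is only in the linked images, here is a sketch of how such a substitution typically enters the standard argument. This assumes the usual setup (which may differ slightly from Krantz's notation): $A_n$, $B_n$, $C_n$ are the partial sums of the two series and of their Cauchy product, with $A_n \to \alpha$ and $B_n \to \beta$. Then

$$ C_n \;=\; \sum_{k=0}^{n} a_k B_{n-k} \;=\; \sum_{k=0}^{n} a_k \,(\beta + \lambda_{n-k}) \;=\; \beta A_n \;+\; \underbrace{\sum_{k=0}^{n} a_k \lambda_{n-k}}_{=:\ \rho_n}. $$

Since $\beta A_n \to \alpha\beta$, proving $C_n \to \alpha\beta$ reduces to proving $\rho_n \to 0$, and that is exactly where the error terms $\lambda_j$ become useful: they are bounded and eventually small.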

(2) Yes.

(3) It is in fact a maximum, which follows from the fact that $a_n \xrightarrow{n \to \infty} 0$. You could use the supremum as well; the same proof still works. It is a good exercise to show that it is indeed a maximum.
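In case a hint helps (this is just one possible route, not necessarily the intended one): let $s = \sup_{l \ge k} |a_l|$. If $s = 0$, every term is $0$ and the supremum is trivially attained. If $s > 0$, then since $a_l \to 0$ only finitely many indices $l \ge k$ satisfy $|a_l| > s/2$, and this finite set is nonempty (there are terms arbitrarily close to $s$, hence above $s/2$). The supremum of the whole set therefore equals the maximum over this finite set, so it is attained.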

(4) The computation shows that

$$ |\rho_{N+k}| \leq (N+1)\max_{l \geq k} |a_l| \cdot \max_{0 \leq j \leq N}|\lambda_j| + \varepsilon\cdot A. $$ Now we can "take the $\limsup$ on both sides" (which preserves the inequality). The factor $(N+1)\max_{0 \leq j \leq N}|\lambda_j|$ is a fixed constant (it does not depend on $k$), while $\max_{l \geq k} |a_l| \to 0$ as $k \to \infty$ by $\color{orange}{(3)}$, so the first term on the right vanishes in the limit and we obtain $$ \limsup_{k \to \infty} |\rho_k| \leq \varepsilon \cdot A. $$ Think of taking the limit on both sides; we just have to be careful because the limit may not exist a priori, so we take the $\limsup$ to be safe. Also, whether $|\rho_k|$ or $|\rho_{N+k}|$ appears inside the $\limsup$ is irrelevant, since $N$ is a fixed number.
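Written out as a single chain, using nothing but the estimate above:

$$ \limsup_{k \to \infty} |\rho_{N+k}| \;\le\; \underbrace{(N+1)\max_{0 \leq j \leq N}|\lambda_j|}_{\text{fixed, independent of }k} \cdot \underbrace{\lim_{k \to \infty}\max_{l \geq k} |a_l|}_{=\,0} \;+\; \varepsilon \cdot A \;=\; \varepsilon \cdot A, $$

and $\limsup_{k \to \infty} |\rho_{N+k}| = \limsup_{k \to \infty} |\rho_{k}|$ because dropping (or shifting by) the first $N$ terms does not change the $\limsup$.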

Finally, the above estimate holds for any $\varepsilon > 0$. In other words, we can choose any positive value we want and see that $\limsup_{k \to \infty} |\rho_k|$ must be at most $\varepsilon \cdot A$; since $A$ is fixed, this is only possible if $\limsup_{k \to \infty} |\rho_k|$ is in fact equal to $0$. Since $|\rho_k|\geq 0$ holds for all $k \in \mathbb{N}$ as well, it follows that $\liminf_{k \to \infty}|\rho_k|\geq 0$, and we deduce that $\lim_{k \to \infty} |\rho_k| = 0$.
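In symbols, the squeeze looks like this:

$$ 0 \;\le\; \liminf_{k \to \infty} |\rho_k| \;\le\; \limsup_{k \to \infty} |\rho_k| \;\le\; \varepsilon \cdot A \quad \text{for every } \varepsilon > 0, $$

so $\liminf_{k \to \infty} |\rho_k| = \limsup_{k \to \infty} |\rho_k| = 0$, which is exactly $\lim_{k \to \infty} |\rho_k| = 0$, i.e. $\rho_k \to 0$.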

If you are interested in different proofs of this fact, you may want to search for "Mertens' theorem", which is the name (that I know of) for the result above. (Actually, Mertens' theorem only assumes that one of the two series converges absolutely, while the other may converge conditionally. Maybe this is why your textbook does not refer to it as Mertens' theorem.)

ADDED: In this thread you can find a different proof that also assumes both series converge absolutely. There is a nice visual interpretation that may help.

noam.szyfer