12

We say $a_n$ converges slower than $b_n$ if there exists an $x$ such that $a_m>b_m$ for all $m>x$, and both $\sum a_n$ and $\sum b_n$ converge.

Ignoring constant factors, which type of function converges the slowest?

TROLLHUNTER
  • 8,728
  • See also Qiaochu Yuan's answer to this related question: http://math.stackexchange.com/questions/9350/smallest-function-whose-inverse-converges/9353#9353 – Jonas Meyer Feb 04 '11 at 13:05

1 Answer

20

There is no such function! Walter Rudin, in his "Principles of Mathematical Analysis", has a series of exercises designed to show that there is no function at the "boundary" between convergence and divergence.

There was an interesting series of answers about this very issue at MathOverflow (MO): https://mathoverflow.net/questions/49415/nonexistence-of-boundary-between-convergent-and-divergent-series

A quick way of seeing that there is no "slowest" convergent series is Rudin's exercise 12.b, mentioned at the link above: if $\sum a_n$ converges and the $a_n$ are positive, then $\sum a_n/\sqrt{r_n}$ converges, where $r_n$ is the tail $\sum_{i\ge n}a_i$. Note that $r_n\to 0$, so $a_n/\sqrt{r_n}>a_n$ for $n$ large enough.
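A minimal numerical sketch of this trick in Python, using $a_n = 1/n^2$ as an assumed example series (the tail $r_n$ is approximated by truncating at a large $N$, so the values are approximate):

```python
import math

# Rudin's exercise 12.b, numerically: if sum(a_n) converges with a_n > 0,
# then b_n = a_n / sqrt(r_n) also converges, where r_n = sum_{i>=n} a_i,
# yet b_n > a_n once r_n < 1 -- so sum(b_n) converges "slower".
N = 200_000
a = [1.0 / (n * n) for n in range(1, N + 1)]  # convergent series, sum = pi^2/6

# Tail sums r[n] ~ sum_{i>=n} a_i, computed backwards from the truncation.
r = [0.0] * (N + 1)
for n in range(N - 1, -1, -1):
    r[n] = r[n + 1] + a[n]

b = [a[n] / math.sqrt(r[n]) for n in range(N)]  # the "slower" convergent series

print(sum(a))           # close to pi^2/6 ~ 1.6449
print(sum(b))           # still finite (bounded by 2*sqrt(r[0]) by the telescoping proof)
print(b[999] > a[999])  # True: the new terms dominate once r_n < 1
```

Here $r_n \approx 1/n$, so $b_n \approx 1/n^{3/2}$: larger than $a_n$ term by term, yet still summable.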

I recommend that you take a look at the answers at MO and at the references they suggest, for more subtle examples.

  • 2
    What happens if you apply that trick recursively: $b_n = a_n/\sqrt{r_n}$, $c_n = b_n/\sqrt{r_n(b)}$, and so on? Then the sum converges for $a, b, c, d, e, \ldots$; if we had infinitely many letters and took the limit, should it also converge? What function does it converge to? – TROLLHUNTER Feb 04 '11 at 11:08
  • 1
    @solomoan: If you have such a countable sequence of series, you can always diagonalize to get a larger one (carefully, to ensure convergence), and re-start the process. If you know about ordinals, you see that this will lead us to a "sequence" of length at least $\omega_1$. I'm pretty sure how far one can go quickly leads to independence questions in set theory. – Andrés E. Caicedo Feb 04 '11 at 14:53
  • 6
    Andres: I seem to remember something about the 'Gowers series' mentioned in that answer being the 'primitive recursive' boundary: if we define $f(n) = n\log(n)\cdots\log^{(k)}(n)$, iterated to the last positive term, then the series $\sum \frac{1}{f(n)}$ diverges, but for any unbounded primitive recursive function $g(n)$, the series $\sum \frac{1}{f(n)g(n)}$ converges. Maybe this was in A Companion to Concrete Mathematics somewhere? I don't have my copy to hand... – Steven Stadnicki Feb 04 '11 at 18:25
  • @StevenStadnicki I know this is from nearly six years ago, but is there any chance you ever found where this is from or are now willing to look for it? I'm very interested in this 'primitive recursive boundary', but the only google result for it is this comment. – CodeLabMaster Dec 23 '16 at 11:20
  • @CodeLabMaster One can prove that $\int_a^\infty \frac{dx}{f_k(x)\log^{(k)}(x)} = \frac{1}{\log^{(k)}(a)}$, so at least for integrals this shows the result for any function that grows faster than some amount of iterated logs. – Thomas Ahle Sep 22 '19 at 15:29
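The term $f(n)$ from Steven Stadnicki's comment can be sketched in a few lines of Python; this is an assumed reading of "iterated to the last positive term" (multiply in each iterated log while it is still positive), not a definitive formulation:

```python
import math

def f(n: int) -> float:
    """Assumed form of f(n) = n * log(n) * log(log(n)) * ...,
    keeping factors while the iterated log stays positive."""
    value, x = float(n), float(n)
    while x > 1.0:       # log(x) > 0 exactly when x > 1
        x = math.log(x)
        value *= x
    return value

# Partial sums of 1/f(n) grow without bound, but agonizingly slowly,
# illustrating how close this series sits to the convergence boundary.
s = sum(1.0 / f(n) for n in range(2, 100_000))
print(s)  # still small after 100,000 terms, yet the full series diverges
```

Dividing each term by any unbounded primitive recursive $g(n)$ would, per the comment, tip this over into convergence.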