Solving $\lim\limits_{n \to \infty}\left(\sqrt{n^2 + n} - n\right)$ is a classic question from Rudin, Chapter 3. Its standard solution (on this site and in solutions manuals) is to multiply and divide by the conjugate. Below I present a novel approach, which to me is clearer and more direct, and for which I request verification and critique.
Note that by placing this in Chapter 3, Rudin is requesting a solution without L'Hopital's rule.
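For reference, the conjugate manipulation I mean is (my own one-line summary, not quoted from any particular source):

$$\sqrt{n^2+n} - n = \frac{\left(\sqrt{n^2+n} - n\right)\left(\sqrt{n^2+n} + n\right)}{\sqrt{n^2+n} + n} = \frac{n}{\sqrt{n^2+n} + n} = \frac{1}{\sqrt{1 + \frac{1}{n}} + 1} \to \frac{1}{2},$$

where the final step is the continuity-of-$\sqrt{\cdot}$ issue raised in the Bergman passage quoted below.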
Solution
Observe that $(n + \frac{1}{2})^2 = n^2 + n + \frac{1}{4}$, so $\sqrt{n^2 + n} = n + \frac{1}{2} + O(\frac{1}{n})$, from which the solution $\frac{1}{2}$ follows immediately.
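One way to make the $O(\frac{1}{n})$ step explicit, sketched here only as a possible supplement to the line above (the correction term $\frac{1}{8n}$ is just a convenient choice), is to square candidate bounds. For $n \ge 1$,

$$\left(n + \tfrac{1}{2} - \tfrac{1}{8n}\right)^2 = n^2 + n - \tfrac{1}{8n} + \tfrac{1}{64n^2} \le n^2 + n \le n^2 + n + \tfrac{1}{4} = \left(n + \tfrac{1}{2}\right)^2,$$

so $\frac{1}{2} - \frac{1}{8n} \le \sqrt{n^2+n} - n \le \frac{1}{2}$, and the limit $\frac{1}{2}$ follows by squeezing. The next paragraph gives an alternative way of making the same step precise.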
Or, more explicitly, define $g : \mathbb{N} \to (-1,0)$, a function of $n$, such that $2ng + g + g^2 = -\frac{1}{4}$ and $n + \frac{1}{2} + g > 0$. Dividing both sides by $n$ gives $2g + \frac{g}{n} + \frac{g^2}{n} = -\frac{1}{4n}$, and since $|g| < 1$, $\lim_{n \to \infty} g = 0$. Thus $(n + \frac{1}{2} + g)^2 = n^2 + n$ and $\sqrt{n^2 + n} - n = \frac{1}{2} + g \to \frac{1}{2}$.
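To spell out the last step (a small supplement, using only the assumed bound $|g| < 1$): the divided equation gives

$$2|g| = \left|\tfrac{1}{4n} + \tfrac{g}{n} + \tfrac{g^2}{n}\right| \le \frac{\tfrac{1}{4} + |g| + g^2}{n} \le \frac{9}{4n},$$

so $|g| \le \frac{9}{8n} \to 0$.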
Discussion
I prefer this solution over the conjugate solution for several reasons:
1. It makes clear why the result is true: $\sqrt{n^2 + n} \approx n + \frac{1}{2}$. The absolute error in the square is a fixed $\frac{1}{4}$, so the relative error vanishes for large $n$.
2. The solution uses no tricks; instead, it is developed by estimating $\sqrt{n^2 + n}$, a technique at the heart of analysis. An initial estimate is $n$, but its error is $O(1)$, which is too large, so we refine it to $n + \frac{1}{2}$, suggested by $(a + b)^2 = a^2 + 2ab + b^2$; a short computation making this explicit appears after this list.
Contrast this with the conjugate solution, which uses a "trick" seemingly pulled out of a hat. See e.g. Bergman (p.25):
In this problem you can use the “trick” for simplifying such limits from first-year calculus; ask a friend if you didn’t learn such a trick. Unfortunately, after the first simplification, the “obvious” next step is really an application of continuity of the square root function, and we can’t talk about continuity until Chapter 4. So instead...
3. The calculation avoids messy algebraic manipulations.
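To make the estimation idea in point 2 concrete (a sketch only; the constant $\frac{1}{4}$ below is just one convenient witness): for $n \ge 1$,

$$\left(n + \tfrac{1}{4}\right)^2 = n^2 + \tfrac{n}{2} + \tfrac{1}{16} \le n^2 + n,$$

so $\sqrt{n^2+n} - n \ge \frac{1}{4}$ for every $n$; the error of the first estimate $n$ stays bounded away from $0$. Writing the next estimate as $n + c$ and squaring, $(n + c)^2 = n^2 + 2cn + c^2$, so matching the linear term of $n^2 + n$ forces $2c = 1$, i.e. $c = \frac{1}{2}$.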
Questions
- Is my proof correct?
- Is it rigorous? My first line simply assumes that, to compensate for a constant error, the offset will be $O(\frac{1}{n})$, since it is multiplied by $n$. I find this convincing but not rigorous. However, I believe my second paragraph, which introduces an explicit $g$ and bounds it, is indeed rigorous.
- Can the writing be improved?
- Do you agree with the advantages I've argued?
Conclusions
Great responses! The main conclusions so far are:
- L.F. and Charles Hudgins showed that the solution above assumes, without proof, that $g$ always lies in $(-1, 0)$
- Multiple fixes were suggested; by far the best, IMO, is Will Jagy's idea to add a lower bound
- I simplified this approach, giving a two-sentence proof which I feel is clear, rigorous, and pedagogical, and for which I again request verification and critique.