I'm currently working my way through David M. Bressoud's "Factorization and Primality Testing", and I'm struggling with an exercise (exercise 5.7) that asks the reader to prove that the following algorithm produces the greatest integer less than or equal to the square root of $n$:
\begin{align} \text{INITIALIZE:} \quad &\text{READ} \; n \\ &a \leftarrow n \\ &b \leftarrow \lfloor (n+1)/2 \rfloor \\\\ \text{MYSTERY\_LOOP:} \quad &\text{WHILE} \; b < a \; \text{DO} \\ &\quad a \leftarrow b \\ &\quad b \leftarrow \lfloor (a \times a + n) / (2 \times a) \rfloor \\\\ \text{TERMINATE:} \quad &\text{WRITE} \; a \end{align}
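To make sure I'm reading the pseudocode correctly, here is my own transcription into Python (`isqrt_bressoud` is just a name I picked, not anything from the book):

```python
import math

def isqrt_bressoud(n):
    """My reading of the book's algorithm: should return floor(sqrt(n)) for n >= 1."""
    a = n
    b = (n + 1) // 2                # INITIALIZE: b = floor((n+1)/2)
    while b < a:                    # MYSTERY_LOOP
        a = b
        b = (a * a + n) // (2 * a)  # b = floor((a*a + n)/(2*a))
    return a                        # TERMINATE: write a

# Quick sanity check against Python's built-in integer square root (Python 3.8+).
for n in range(1, 10_000):
    assert isqrt_bressoud(n) == math.isqrt(n)
```

The check passes for every $n$ I've tried, so I'm fairly confident the transcription above matches the book's intent; the question is how to prove it in general.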
I know from real analysis that the sequence $x_{m+1} = \frac{x_m^2 + n}{2 x_m}$ converges to the square root of $n$ (for any starting value $x_0 > 0$), so morally, I can believe that the above algorithm converges to the integer square root of $n$.
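To make this concrete, tracing the algorithm by hand for $n = 30$: the successive values of $b$ are $15, 8, 5, 5$, so the loop exits once $b$ stops decreasing and writes $a = 5 = \lfloor\sqrt{30}\rfloor$, as expected.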
How do we formally prove that the above algorithm works, i.e. that it terminates after finitely many steps with $a^2 \leq n$ and $(a+1)^2 > n$? Moreover, why start the iteration with $b = \lfloor\frac{n+1}{2} \rfloor$? Is $\lfloor\frac{n+1}{2} \rfloor$ just a very crude first approximation, or does it have some deeper significance?
I have tried writing the floor as the real value minus its fractional part (i.e. $\lfloor x \rfloor = x - \{x\}$ with $0 \leq \{x\} < 1$), but I haven't been able to get very far with this.
In addition, do we have the same quadratic convergence that we get when we compute the (ordinary) square root using Newton's method?
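To get an empirical feel for this, I also counted how many times MYSTERY_LOOP executes for various $n$ using my transcription above (so take the numbers with a grain of salt, since they only reflect my reading of the pseudocode):

```python
def mystery_loop_count(n):
    """Number of times MYSTERY_LOOP executes in my transcription of the algorithm."""
    a = n
    b = (n + 1) // 2
    steps = 0
    while b < a:
        a = b
        b = (a * a + n) // (2 * a)
        steps += 1
    return steps

# Eyeball how the iteration count grows as n grows.
for k in range(1, 13):
    print(f"n = 10^{k:<2}: {mystery_loop_count(10**k)} iterations")
```

I'd like to understand how this observed iteration count relates to the quadratic convergence of the real-valued Newton iteration.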