In Intro To Real Analysis by Bartle & Sherbert there is a discussion/argument that $\lim_{x \rightarrow c} x^2 = c^2$. As a whole I understand the argument, but I am getting stuck on two seemingly small details.
To show the limit exists, we want to find, for each $\epsilon > 0$, a $\delta > 0$ such that $|x-c|<\delta \rightarrow |x^2-c^2| < \epsilon$.
To do so we note that $|x^2-c^2| = |x-c||x+c|$. (1) Suppose $|x-c|<1$. Then with a little algebra we find that $|x+c|<2|c|+1$, which implies that $|x^2-c^2| < |x-c|(2|c|+1)$, so if we let (2) $\delta = \inf\left\{1, \frac{\epsilon}{2|c|+1}\right\}$ we end up with our result that $|x-c|<\delta \rightarrow |x^2-c^2| < \epsilon$.
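(To spell out the "little algebra" I'm paraphrasing, as I understand it: if $|x-c|<1$, then by the triangle inequality
$$|x+c| = |(x-c)+2c| \le |x-c| + 2|c| < 1 + 2|c|,$$
and then, assuming also $|x-c| < \frac{\epsilon}{2|c|+1}$,
$$|x^2-c^2| = |x-c|\,|x+c| < \frac{\epsilon}{2|c|+1}\,(2|c|+1) = \epsilon.)$$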
So my questions are: 1) Why are we allowed to suppose that $|x-c|<1$? I understand that in evaluating a limit we are trying to have $x$ 'approach' a point, but what guarantees that we start so close to our $c$? 2) What's the purpose of the $1$ in the definition of $\delta$? I'm guessing it has to do with our assumption that $|x-c|<1$, but I can't see where it comes into play. It seems that the latter choice of $\delta$ satisfies the implication even if a large $\epsilon$ is given.
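(To make question 2 concrete with some arbitrary numbers: if, say, $c = 1$ and $\epsilon = 100$, then $\frac{\epsilon}{2|c|+1} = \frac{100}{3} \approx 33.3$, so without the $1$ we would take $\delta \approx 33.3$, and I don't see which step of the argument above would break down for that choice.)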
Thanks for the help!