Suppose $f:\mathbb{R}\to\mathbb{R}$ is everywhere differentiable, and suppose that $\lim_{x\to\infty}f(x)$ and $\lim_{x\to\infty}f^{\prime}(x)$ both exist. I am trying to prove that the latter limit is necessarily $0$. I have the following argument, but I'm not sure if it's completely sound.
Since $f$ is differentiable everywhere, we can apply the Mean Value Theorem to $f$ on $[x,x+1]$ for every $x$. This guarantees an $\alpha_{x}\in(x,x+1)$ such that $$f^{\prime}(\alpha_{x}) = \frac{f(x+1)-f(x)}{x+1-x} = f(x+1)-f(x).$$ Now, the limit as $x\to\infty$ of the right-hand side of this expression must be $0$, since $\lim_{x\to\infty}f(x)$ exists by assumption (and must equal $\lim_{x\to\infty}f(x+1)$). On the left-hand side, we notice that $\alpha_{x}\to\infty$ as $x\to\infty$, since $\alpha_{x}>x$ always, so that: \begin{eqnarray*} 0 & = & \lim_{x\to\infty}[f(x+1)-f(x)]\\ & = & \lim_{x\to\infty}f^{\prime}(\alpha_{x})\\ & = & \lim_{y\to\infty}f^{\prime}(y), \end{eqnarray*} proving the result.
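As a sanity check (not part of the proof), here is a small numerical sketch of the argument for a concrete choice $f=\arctan$, where $f^{\prime}$ is strictly decreasing, so the $\alpha_{x}$ promised by the Mean Value Theorem is unique and can be found by bisection. The function names below are my own illustration, not anything from the argument itself.

```python
import math

def f(x):
    return math.atan(x)

def fprime(x):
    return 1.0 / (1.0 + x * x)

def alpha(x, tol=1e-12):
    """Locate alpha_x in (x, x+1) with f'(alpha_x) = f(x+1) - f(x).

    For f = arctan, f' is strictly decreasing, so g(a) = f'(a) - (f(x+1) - f(x))
    changes sign exactly once on (x, x+1) and plain bisection finds the root.
    """
    target = f(x + 1) - f(x)
    lo, hi = x, x + 1  # f'(lo) > target > f'(hi) since f' is decreasing
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fprime(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Both f'(alpha_x) and f(x+1) - f(x) shrink together as x grows.
for x in [1.0, 10.0, 100.0]:
    a = alpha(x)
    print(f"x={x:>6}: alpha_x={a:.6f}, f'(alpha_x)={fprime(a):.3e}, "
          f"f(x+1)-f(x)={f(x + 1) - f(x):.3e}")
```

Of course, the numerics only illustrate the mechanism; the proof itself never needs to compute $\alpha_{x}$, only to know that it exists and satisfies $x<\alpha_{x}<x+1$.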
I took inspiration for this argument from other sources which use the same trick of "use the Mean Value Theorem to introduce a quantity $\alpha_{x}$ which we have some bounds on, then take limits". However, this style of argument seems dodgy to me: we haven't actually defined a function $\alpha$ to take the limit of as $x\to\infty$, and it's not clear to me that defining such a function is always possible. For example, we can't just say "take the least such value and call it $\alpha_{x}$", because we haven't shown that there will always be a least such value.
Here are my questions:
In the above, where have we used the fact that $\lim_{x\to\infty}f^{\prime}(x)$ exists? This is an essential assumption: consider for example the function $x\mapsto\sin{(x^{2})}/x$, which tends to $0$ even though its derivative has no limit at infinity. My guess is that the existence assumption is used in the last line, where we pass from $\lim_{x\to\infty}f^{\prime}(\alpha_{x})$ to $\lim_{y\to\infty}f^{\prime}(y)$ by composing limits, but I'd like confirmation of this.
Does the "$\alpha_{x}$ trick" require something like the Axiom of Choice in general? In particular, the thing which makes me slightly anxious about just saying "choose an $\alpha_{x}$ for every $x$" is that we have to make (uncountably) infinitely many "choices", and we have no prescribed method of doing this. EDIT: It turns out this has been answered in other questions on this site, see link in the comments below.
EDIT: Note that the first question is different to others on related topics because here I am asking very specifically about this argument and why it works.