Yes, a rigorous proof looks quite different, and it starts from the simple definition of a convex function $f : \mathbb R \to \mathbb R$ (suitably extended to convex subsets of vector spaces): $f$ is convex if, for all $s,t \in \mathbb R$ and $\lambda \in (0,1)$, we have $\lambda f(s) + (1-\lambda) f(t) \geq f(\lambda s + (1-\lambda)t)$. This is the translation of the geometric definition "the line segment joining two points on the graph always stays on or above the graph between its endpoints".
We do not assume differentiability: only that $f$ satisfies the above inequality.
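For concreteness, here is a small numerical sanity check of this defining inequality. It is only a sketch: the choice $f = \exp$, the sampling range, and the tolerance are my own, not part of the question.

```python
import math
import random

f = math.exp  # a sample convex function; any convex function would do

# Check the defining inequality
#   lambda*f(s) + (1 - lambda)*f(t) >= f(lambda*s + (1 - lambda)*t)
# at randomly sampled s, t and lambda in (0, 1).
for _ in range(10_000):
    s, t = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.uniform(0, 1)
    chord = lam * f(s) + (1 - lam) * f(t)       # point on the line segment
    graph = f(lam * s + (1 - lam) * t)          # point on the graph
    assert chord >= graph - 1e-9                # small tolerance for rounding error
print("convexity inequality holds at all sampled points")
```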
But even without differentiability, convexity gives us enough information to work almost as if the function were differentiable!
Namely, one has the following claim for convex functions on an open interval:
Suppose that $f$ is a convex function on an open interval $U$. Then, for all $x \in U$, the left and right derivatives $Df^-(x)$ and $Df^+(x)$ exist. Furthermore, for all $x<y \in U$, we have $$
Df^-(x) \leq Df^+(x) \leq Df^-(y) \leq Df^+(y)
$$
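As a quick illustration of the claim (the choice $f(x) = |x|$ and the point $x = 0$ are mine, not from the question), the one-sided difference quotients of $|x|$ at $0$ settle immediately to $Df^-(0) = -1 \leq Df^+(0) = 1$, even though $f$ is not differentiable there:

```python
f = abs  # convex but not differentiable at 0

# One-sided difference quotients of f at x = 0.
for h in (1.0, 0.1, 0.01, 0.001):
    right = (f(0 + h) - f(0)) / h      # tends to Df^+(0) = 1
    left = (f(0) - f(0 - h)) / h       # tends to Df^-(0) = -1
    print(f"h = {h:6.3f}   right quotient = {right:+.3f}   left quotient = {left:+.3f}")
```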
The proof of this claim follows from the so-called "three-slopes property" of convex functions, which I'll sketch a proof of. Take, say, $s<t<u \in U$. Then we can write $t = \lambda u + (1-\lambda)s$, where $\lambda \in (0,1)$ can easily be found in terms of $s,t,u$ by solving this equation: I leave it to the reader to check that $\lambda = \frac{t-s}{u-s}$.
Now, we know that $f(t)\leq \lambda f(u) + (1-\lambda)f(s)$ by convexity.
First, write $(1-\lambda)f(s) = f(s) - \lambda f(s)$ and take $f(s)$ to the other side. Then, substituting $\lambda = \frac{t-s}{u-s}$ and dividing by $t-s>0$,$$
f(t)-f(s) \leq \lambda(f(u)-f(s)) \implies \frac{f(t)-f(s)}{t-s} \leq \frac{f(u)-f(s)}{u-s}
$$
On the other hand, begin with the convexity statement again and write $\lambda f(u) = f(u) - (1-\lambda)f(u)$. Now take $f(u)$ to the other side and use $1-\lambda = \frac{u-t}{u-s}$:$$
f(t)-f(u) \leq (1-\lambda)(f(s)-f(u)) \implies \frac{f(u)-f(t)}{u-t} \geq \frac{f(u)-f(s)}{u-s}
$$
Combining these two statements, we have proven the three-slopes property.
If $f : U \to \mathbb R$ is convex where $U$ is an open interval and $s<t<u \in U$, then $$
\frac{f(t)-f(s)}{t-s} \leq \frac{f(u)-f(s)}{u-s} \leq \frac{f(u)-f(t)}{u-t}
$$
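If you want to see the three-slopes property in action before reading on, here is a small numerical check (again with the illustrative choice $f = \exp$ and a helper `S` of my own):

```python
import math
import random

f = math.exp  # illustrative convex function

def S(a, b):
    """Slope of the chord joining (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

for _ in range(10_000):
    s, t, u = sorted(random.uniform(-5, 5) for _ in range(3))
    if s < t < u:  # ignore the unlikely case of ties
        assert S(s, t) <= S(s, u) + 1e-9 and S(s, u) <= S(t, u) + 1e-9
print("three-slopes inequality holds at all sampled triples")
```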
Remarkably, this result alone is enough to prove the result in your question, but in any case I'll finish the proof of the claim completely to show you the power of the three-slopes property.
Let us use some lighter notation now: given two points $u_1<u_2 \in U$, we let $S(u_1,u_2) = \frac{f(u_2)-f(u_1)}{u_2-u_1}$ be the slope of the line joining the points $(u_1,f(u_1))$ and $(u_2,f(u_2))$. The three-slopes property tells us that if $s<t<u \in U$ then $S(s,t) \leq S(s,u) \leq S(t,u)$.
Let's see how we can prove the existence of the right derivative at every point: the left derivative is handled by a similar argument. Let $x \in U$ and fix a point $y \in U$ with $y < x$.
Suppose that $h>h'>0$ are such that $x+h \in U$. Then, by the three-slopes property applied to $x<x+h'<x+h$, $S(x,x+h') \leq S(x,x+h)$. In particular, $S(x,x+h)$ decreases as $h$ decreases to $0$, so we only need to prove that it is bounded below for it to have a limit as $h \to 0^+$.
To that end, fix $h>0$. By the three-slopes property applied to $y<x<x+h$, $S(y,x) \leq S(x,x+h)$. Therefore, $S(x,x+h)$ is bounded below by a quantity independent of $h$. Hence, $\lim_{h \to 0^+} S(x,x+h) = Df^+(x)$ exists. In a similar way, $Df^-(x)$ also exists.
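Numerically, this monotone-and-bounded behaviour is easy to observe. In the sketch below (the choices $f = \exp$, $x = 0$, $y = -1$ are mine), the quotients $S(x,x+h)$ decrease as $h$ shrinks and stay above the fixed lower bound $S(y,x)$; their limit is $Df^+(0) = 1$:

```python
import math

f = math.exp          # illustrative convex function
x, y = 0.0, -1.0      # y < x, both arbitrary choices

def S(a, b):
    """Slope of the chord joining (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

lower_bound = S(y, x)  # S(y, x) <= S(x, x+h) for every h > 0
for h in (1.0, 0.5, 0.1, 0.01, 0.001):
    print(f"h = {h:6.3f}   S(x, x+h) = {S(x, x + h):.6f}   lower bound = {lower_bound:.6f}")
```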
Next, suppose that $x$ is fixed and $h' < 0 < h$. By the three-slopes property applied to $x+h' < x < x+h$, we get $S(x+h',x) \leq S(x,x+h)$. Letting $h' \to 0^-$ and $h \to 0^+$, it follows that $Df^-(x) \leq Df^+(x)$.
Finally, suppose that $x < y$. Then, let $h > 0$ be such that $x < x+h<y$. By the three-slopes property applied to $x<x+h<y$, $S(x,x+h) \leq S(x+h,y)$. However, we have already seen that $S(x,x+h) \geq Df^+(x)$, since $Df^+(x)$ is the limit of the quantities $S(x,x+h)$, which decrease as $h \to 0^+$. Similarly, $S(x+h,y) \leq Df^-(y)$ (this is clear from the analogous argument for the left derivative, where the slopes increase as the increment shrinks). Therefore, $Df^+(x) \leq Df^-(y)$ and we are done with the proof.
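For a concrete instance of the full chain of inequalities in the claim (the function and the points are my own choice), take $f(x)=|x|$ with $x=0$ and $y=1$:
$$
Df^-(0) = -1 \leq Df^+(0) = 1 \leq Df^-(1) = 1 \leq Df^+(1) = 1.
$$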
Using this property, a lot of regularity of convex functions is easily obtained.
For example, in this case, we know that $f$ is strictly increasing. In particular, there are two points $x>y$ with $f(x)>f(y)$, so $S(y,x) > 0$ is some fixed positive constant. Note how this is a much weaker statement than the function being strictly increasing.
Indeed, this is all you need, because if $z>x$, then applying the three-slopes property to $y<x<z$, we get $$
S(x,z) \geq S(y,x) \implies f(z) - f(x) \geq (z-x)S(y,x) \implies f(z) \geq f(x) + (z-x)S(y,x)
$$
It is now clear that $f(z) \to +\infty$ as $z \to \infty$; in fact, $f(z)$ grows at least linearly.
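A quick numerical illustration of this linear lower bound (the choices $f = \exp$, $y = -1$, $x = 0$ are again mine):

```python
import math

f = math.exp
y, x = -1.0, 0.0
S_yx = (f(x) - f(y)) / (x - y)   # the fixed positive slope S(y, x)

for z in (1.0, 5.0, 10.0, 20.0):
    bound = f(x) + (z - x) * S_yx
    assert f(z) >= bound
    print(f"z = {z:5.1f}   f(z) = {f(z):14.3f}   linear lower bound = {bound:8.3f}")
```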
Note that we actually proved something stronger than the statement in your question, since we only needed a much weaker hypothesis.
If $f: \mathbb R \to \mathbb R$ is convex and there are two points $x>y$ such that $f(x)>f(y)$, then $\lim_{z \to \infty} f(z) = +\infty$.
Furthermore, the claim we proved isn't the end of the story. One can go further with either of the following claims, the second of which is fairly hard to prove but still worth knowing: it is a special case of Alexandrov's theorem.
If $f : \mathbb R \to \mathbb R$ is a convex function, then $f$ is differentiable except at countably many points.
If $f : \mathbb R \to \mathbb R$ is a convex function, then $f$ is twice differentiable except on a set of "measure zero" (not elaborating on this, but it's a very small set).
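As a toy illustration of the first claim (a sketch with arbitrarily chosen affine pieces, not anything from the question): the maximum of finitely many affine functions is convex, and it fails to be differentiable only at the finitely many breakpoints where the active line changes.

```python
# Arbitrary (slope, intercept) pairs; g is convex as a max of affine functions.
lines = [(-2.0, 0.0), (-0.5, 0.5), (1.0, -1.0)]

def g(x):
    return max(a * x + b for a, b in lines)

def one_sided_slopes(x, h=1e-6):
    """Approximate (left, right) difference quotients of g at x."""
    return (g(x) - g(x - h)) / h, (g(x + h) - g(x)) / h

for x in (-1.0, 0.0, 1.0):   # x = 1.0 is a breakpoint of the last two lines
    left, right = one_sided_slopes(x)
    print(f"x = {x:+.1f}   left slope ~ {left:+.3f}   right slope ~ {right:+.3f}")
```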
These results allow us to work with convex functions nearly as if they were once or twice differentiable. One big example of this: if $g$ is convex, then one can often assume that $g$ is differentiable for the purposes of integration by parts or maxima/minima calculations and heuristics, and then revert to results like the three-slopes property or the two claims above for a rigorous proof.