What (if anything) is wrong with this proof?
Yes, there is something missing from this proof, as mentioned in the comments. In general, it is not possible to replace two limits with a single limit:
$$\lim_{h\to0}f_h(\lim_{t\to0}x_t) \neq \lim_{s\to0}f_s(x_s)$$
However, there is a general situation in which this is possible. For that reason, the OP's argument is not wrong per se, but incomplete.
I will describe the general setup, show how it fails in the case of non-uniform convergence, and then explain why, in this situation, we do have uniform convergence and the computation is valid.
This may seem like overkill for computing the derivative of $e^x$, but I think the question is really about the validity of the technique of combining limits, so I hope this gives a guiding principle for the general case.
Problem. Let $f_h \to f_0$ be a family of functions converging pointwise as $h \to 0$, and let $x_t \to x_0$ be a family of real numbers approaching the limit point $x_0$ as $t \to 0$. How can we compute the value $f_0(x_0)$ of the limit function at the limit point in terms of $f_h$ and $x_t$?
We can try to make the following computations:
$$
\begin{aligned}
f_0(x_0)
&= \lim_{h\to0}f_h(\lim_{t\to0} x_t)\\
&\overset{!}= \lim_{h\to0} \lim_{t\to0} f_h(x_t)\\
&\overset{!!}= \lim_{t\to0} \lim_{h\to0} f_h(x_t)
\end{aligned}
$$
The (!) equality holds if each $f_h$ is continuous at $x_0$. The (!!) equality, with the order of limits switched, is even less likely to hold: since $\lim_{h\to0} f_h(x_t) = f_0(x_t)$ by pointwise convergence, it is equivalent to $\lim_{t\to0} f_0(x_t) = f_0(x_0)$, i.e. to $f_0$ being continuous at $x_0$, which need not be true even if every $f_h$ is continuous.
A slight generalization of the OP's question is to consider monotonic functions $\alpha, \beta$ (such that $\alpha(s),\beta(s) \to 0$ as $s \to 0$) and take the single limit $$f_0(x_0) \overset{?}=\lim_{s\to0} f_{\alpha(s)}(x_{\beta(s)})$$
with the OP's question being the special case $\alpha(s) = \beta(s) = s$.
Example. Take $f_h(x) = x^{1/h}$, which converges pointwise to $f_0(x) = \lfloor x \rfloor$ on $[0,1]$ as $h \to 0^+$, and $x_t = 1 - t$, which converges to $x_0 = 1$ from below. For a linear $\alpha(s) = \lambda s$ we get
$$f_{\lambda s}(x_s) = (1-s)^{1/(\lambda s)} \to 1/\sqrt[\lambda]{e} = e^{-1/\lambda}$$
So this fails to produce the correct limit $f_0(x_0) = 1$. Taking $\lambda \to \infty$ fixes it, but that is equivalent to not combining the limits in the first place.
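A quick numerical check makes the failure concrete (a minimal Python sketch; the variable names are mine):

```python
import math

# f_h(x) = x**(1/h) with x_t = 1 - t; combine the limits along h = lam*s, t = s.
for lam in (1, 2):
    for s in (1e-2, 1e-4, 1e-6):
        combined = (1 - s) ** (1 / (lam * s))
        print(f"lam={lam}, s={s:.0e}: f_(lam*s)(x_s) = {combined:.6f}")
    # The combined limit lands on e**(-1/lam) rather than the true f_0(x_0) = 1.
    print(f"predicted e**(-1/{lam}) = {math.exp(-1 / lam):.6f}")
```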
The above is a well-known example of non-uniform convergence. As long as we have uniform convergence, there is no issue:
Solution to Problem. If $f_h \to f_0$ converges uniformly in some neighborhood of $x_0$, then the limit $f_0(x_0)$ can be computed in either order, or via $\alpha,\beta$ as above.
This is not too hard to see; it comes down to the estimate
$$|f_h(x_t) - f_0(x_0)| \leq |f_h(x_t) - f_0(x_t)| + |f_0(x_t) - f_0(x_0)|$$
Uniform convergence locally at $x_0$ lets us make the first term small for all $x_t$ near $x_0$, uniformly in $h$. By the uniform limit theorem, it also implies that $f_0$ is continuous at $x_0$, which controls the second term as well.
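Spelled out, for completeness: given $\varepsilon > 0$, local uniform convergence provides $h_0, \delta_1 > 0$ with $|f_h(x) - f_0(x)| < \varepsilon/2$ whenever $0 < h < h_0$ and $|x - x_0| < \delta_1$, and continuity of $f_0$ provides $\delta_2 > 0$ with $|f_0(x) - f_0(x_0)| < \varepsilon/2$ whenever $|x - x_0| < \delta_2$. Taking $s$ small enough that $\alpha(s) < h_0$ and $|x_{\beta(s)} - x_0| < \min(\delta_1, \delta_2)$, the estimate gives $|f_{\alpha(s)}(x_{\beta(s)}) - f_0(x_0)| < \varepsilon$.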
Now back to the OP's problem.
We have $f_h(x) = (x^h - 1)/h \to \log(x)$ and $x_t = (1+t)^{1/t} \to e$. To combine the limits, we want to show that $f_h \to f_0$ converges uniformly locally at $x_0 = e$.
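Before the proof, a sanity check (a small Python sketch; `f` is just my name for the difference quotient). Note that with $\alpha(s) = \beta(s) = s$ the combined expression collapses algebraically, $f_s(x_s) = ((1+s) - 1)/s = 1$ exactly, which is the value the OP's computation produces; the uniform convergence argument is what justifies equating this with $f_0(x_0) = \log e = 1$.

```python
import math

def f(h, x):
    # f_h(x) = (x**h - 1) / h, converging to log(x) as h -> 0+
    return (x ** h - 1) / h

for s in (1e-1, 1e-3, 1e-5):
    x_s = (1 + s) ** (1 / s)  # converges to e as s -> 0+
    # f_s(x_s) is exactly 1 (up to rounding), while f_s(e) merely tends to 1.
    print(f"s={s:.0e}: f_s(x_s) = {f(s, x_s):.10f}, f_s(e) = {f(s, math.e):.10f}")
print(f"target: f_0(x_0) = log(e) = {math.log(math.e):.1f}")
```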
It is helpful that we already know the limit function is continuous (it is the natural logarithm), as there is a partial converse to the uniform limit theorem, called Dini's theorem. It states that if continuous functions $f_h$ converge monotonically (pointwise) to a continuous function $f_0$ on a compact set, then the convergence is uniform; applying it on a compact neighborhood of $x_0$ gives exactly the local uniform convergence we need. So we just need to check that the convergence is monotonic.
(For an example of how this can fail without monotonicity, see the pictures at Does pointwise convergence against a continuous function imply uniform convergence?)
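Numerically, the local uniform convergence near $x_0 = e$ is already plausible (a sketch; $[2,3]$ is just an arbitrary compact neighborhood of $e$):

```python
import math

# Estimate the sup-distance between f_h(x) = (x**h - 1)/h and log on [2, 3].
xs = [2 + 0.01 * k for k in range(101)]
for h in (1e-1, 1e-2, 1e-3):
    sup = max(abs((x ** h - 1) / h - math.log(x)) for x in xs)
    print(f"h={h:.0e}: sup |f_h - log| on [2,3] ~= {sup:.6f}")
```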
So we want to show that for $0 < s < t$ we have $f_s(x) < f_t(x)$ for, say, $x > 1$ (any neighborhood of $x=e$ would do). Since each $f_s$ is monotonically increasing in $x$, this is equivalent to the reversed inequality for the inverse functions, $f_t^{-1}(x) < f_s^{-1}(x)$ for $x > 0$. (Apply the increasing map $f_s^{-1}$ to the original inequality, then substitute $x \leftarrow f_t^{-1}(x)$; the new lower bound is $f_t(1) = 0$.)
The inverse is $f_h^{-1}(x) = (1+hx)^{1/h}$, and the inequality
$$(1+tx)^{1/t} < (1+sx)^{1/s}$$
is just the fact that compound interest grows more quickly at higher compounding frequency. There are a variety of proofs at How to prove $(1+1/x)^x$ is increasing when $x>0$?, including an elementary one not using calculus.
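For a quick empirical spot-check of that inequality (an illustration, not a proof):

```python
# Check (1 + t*x)**(1/t) < (1 + s*x)**(1/s) for a few 0 < s < t and x > 0.
for x in (0.5, 1.0, 3.0):
    for s, t in ((0.1, 0.2), (0.01, 0.5)):
        lhs = (1 + t * x) ** (1 / t)
        rhs = (1 + s * x) ** (1 / s)
        assert lhs < rhs
        print(f"x={x}, s={s}, t={t}: {lhs:.4f} < {rhs:.4f}")
```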
To summarize: this kind of computation is valid as long as you have uniform convergence locally at the limit point. By Dini's theorem, it is sufficient to check that:
- the functions $f_h$ are continuous,
- the limit function is continuous, and
- the convergence to the limit function is monotonic.