
I'm in physics and am not super adept at developing completely rigorous proofs. I was curious to prove that $\frac{d}{dx}e^x=e^x$ for myself and I came up with the following proof, but I can't find anyone else doing the proof in precisely this way, which makes me suspect that there is some subtle issue with it. I'd like to understand what that issue is or if it is, in fact, a valid proof.

I begin with the limit definition of $e$

$$e\equiv\lim_{h\to0}(1+h)^{\frac{1}{h}}$$

The definition of the derivative (obviously):

$$\frac{df}{dx}\equiv\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$

Then evaluate these as most people do:

$$\frac{d}{dx}e^x=\lim_{h\to0}\frac{e^{x+h}-e^x}{h}$$ $$=\lim_{h\to0}\frac{e^{x}e^{h}-e^{x}}{h}$$ $$=e^x\lim_{h\to0}\frac{e^h-1}{h}$$

Plugging in the definition of $e$:

$$=e^x\lim_{h\to0}\frac{((1+h)^{\frac{1}{h}})^h-1}{h}$$ $$=e^x\lim_{h\to0}\frac{(1+h)-1}{h}=e^x\lim_{h\to0}(1)=e^x(1)=e^x$$

The only thing I can think that may be wrong with this proof is if it is not valid to fold the limit that comes from the derivative and the limit that comes from the definition of $e$ into the same limit index $h$, but if that is the case, I don't see why.

Ben
  • There is an indexing variable $h$ from your use of the derivative definition. That would be independent from the indexing variable from the definition of $e$. So your next-to-last line should be like $e^x\lim\limits_{h\to0}\frac{\lim\limits_{k\to0}\left((1+k)^{1/k}\right)^h-1}{h}$. – 2'5 9'2 Jun 25 '22 at 03:31
  • If the indexing variables are both "going to the same place" why can't I just fold them into each other? Aren't they in some sense dummy variables? – Cody Payne Jun 25 '22 at 03:33
  • The limit might not agree (or even exist) along all possible "paths" to $(0,0)$. – user3716267 Jun 25 '22 at 03:34
  • @CodyPayne If you compare the two lines before and after "plugging in" you see that all you did was replace $\,e^h\,$ with $\,\left(\left(1+h\right)^{1/h}\right)^h=1+h\,$. That's wrong, because $\,e^h \ne 1+h\,$. – dxiv Jun 25 '22 at 03:36
  • In the limit it is though, right? – Cody Payne Jun 25 '22 at 03:37
  • I do see now why the folding in of the indices of the limits is not a valid operation, which answers my question. – Cody Payne Jun 25 '22 at 03:38
  • @CodyPayne You wrote: $\;\displaystyle e^x\lim_{h\to0}\frac{\color{red}{e^h}-1}{h} = e^x\lim_{h\to0}\frac{\color{red}{((1+h)^{\frac{1}{h}})^h}-1}{h}\,$. There is no limit taken yet at the point of that substitution. – dxiv Jun 25 '22 at 03:40
  • Why is it invalid to do that substitution? Since they are approaching the same limit, I don't see why that is not a valid substitution. – alienare 4422 Jun 25 '22 at 04:01
  • You could use this same technique to prove $\lim\limits_{x\to 0} \left(\lim\limits_{y \to 0} y\right)^x = \lim\limits_{x \to 0} x^x$, which isn't true. – TomKern Jun 25 '22 at 04:41
  • @TomKern Well, neither of the 2 limits exists. I am not saying that the approach of converting nested limits to single limits is true, but if you want to prove it's not, provide a counterexample where both limits exist and are unequal, so it's at least clear to the OP. – Jun 25 '22 at 08:21
  • Here's an approach that doesn't require a double limit: since $\lim_{p\to0}(1+p)^{1/p}=e$, taking base-$e$ logarithms gives $\lim_{p\to0}\frac1p\log_e(1+p)=1$. Define $h:=\log_e(1+p)$ so $\lim_{h\to0}\frac{h}{e^h-1}=1$. – J.G. Jun 25 '22 at 08:31
  • both limits in my example exist: the left hand side is 0 and the right hand side is 1. – TomKern Jun 27 '22 at 03:56
  • @TomKern I think you want one-sided limits, viz. $\lim_{x\to0^+}0^x=0,\ \lim_{x\to0^+}x^x=1$. – J.G. Jun 27 '22 at 07:49
  • I stand corrected – TomKern Jun 27 '22 at 13:36
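TomKern's counterexample from the comments is easy to probe numerically. A quick Python sketch (the sample values of $x$ are arbitrary illustrations):

```python
# TomKern's counterexample: folding the two limit variables into one
# changes the answer.  With f(x, y) = y**x near (0, 0):
#   nested: lim_{x->0+} (lim_{y->0+} y)**x = lim_{x->0+} 0**x = 0
#   folded: lim_{x->0+} x**x = 1
for x in [1e-2, 1e-4, 1e-6]:
    nested = 0.0 ** x   # inner limit already taken: equals 0 for every x > 0
    folded = x ** x     # both limit variables folded into the same x
    print(x, nested, folded)
```

The nested column stays at $0$ while the folded column approaches $1$, so the two ways of taking the limit genuinely disagree.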

4 Answers


What (if anything) is wrong with this proof

Yes, there is something missing from this proof, as mentioned in the comments. In general, it is not valid to replace two limits with a single limit: $$\lim_{h\to0}f_h(\lim_{t\to0}x_t) \neq \lim_{s\to0}f_s(x_s)$$ However, there are general conditions under which this is possible. For that reason, the OP's argument is not wrong per se, but incomplete.

I will describe the general setup, how this fails in case of non-uniform convergence, and then why in this situation we have uniform convergence and the limit is valid.

This may seem like overkill for computing the derivative of $e^x$, but I think the question is more about the validity of the technique of combining limits, so I hope this can give a guiding principle for a more general case.


Problem. Let $f_h \to f_0$ be a family of functions converging pointwise, and $x_t \to x_0$ be a family of real values approaching the limit point $x_0$. How can we compute the value of the limit function $f_0(x_0)$ at the limit point in terms of $f_h$ and $x_t$?

We can try to make the following computations: $$ \begin{aligned} f_0(x_0) &= \lim_{h\to0}f_h(\lim_{t\to0} x_t)\\ &\overset{!}= \lim_{h\to0} \lim_{t\to0} f_h(x_t)\\ &\overset{!!}= \lim_{t\to0} \lim_{h\to0} f_h(x_t) \end{aligned} $$ The (!) equality holds if the $f_h$ are continuous at $x_0$. The (!!) equality, with order of limits switched, is even less likely to hold. It is equivalent to $f_0$ being continuous at $x_0$, which need not be true even if the $f_h$ are continuous.

A slight generalization of the OP's question is to consider some monotonic functions $\alpha, \beta$ (such that $\alpha(s),\beta(s) \to 0$ as $s \to 0$) and take a single limit $$f_0(x_0) \overset{?}=\lim_{s\to0} f_{\alpha(s)}(x_{\beta(s)})$$ with the special case of $\alpha(s) = \beta(s)$ in the OP.

Example. Take $f_h(x) = x^{1/h}$ converging to $f_0(x) = \lfloor x \rfloor$ on $[0,1]$, and $x_t = 1 - t$ converging to $x_0 = 1$ from below. For a linear $\alpha(s) = \lambda s$ we get $$f_{\lambda s}(x_s) = (1-s)^{1/\lambda s} \to 1/\sqrt[\lambda]{e}$$

So we see that this failed to produce the correct limit $f_0(x_0) = 1$. Taking the limit $\lambda \to \infty$ fixes it, but that is equivalent to not combining the limits in the first place.
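This failure is easy to see numerically. A small Python sketch of the example (the values of $\lambda$ and the step size are arbitrary choices):

```python
import math

# Non-uniform convergence: f_h(x) = x**(1/h) -> floor(x) on [0, 1],
# x_t = 1 - t -> 1 from below.  Folding the limits along h = lam*s gives
# (1 - s)**(1/(lam*s)) -> exp(-1/lam), not the true value f_0(1) = 1.
s = 1e-8
for lam in [1.0, 2.0, 10.0]:
    folded = (1 - s) ** (1 / (lam * s))
    print(lam, folded, math.exp(-1 / lam))
```

Each folded value matches $e^{-1/\lambda}$, which only approaches the correct answer $1$ as $\lambda \to \infty$.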

The above is a well known example of non-uniform convergence. As long as we have uniform convergence, there is no issue:

Solution to Problem. If $f_h \to f_0$ converge uniformly locally at $x_0$ then the limit $f_0(x_0)$ can be computed in either order or via $\alpha,\beta$ as above.

This is not too hard to see; it comes from the estimate

$$|f_h(x_t) - f_0(x_0)| \leq |f_h(x_t) - f_0(x_t)| + |f_0(x_t) - f_0(x_0)|$$

Uniform convergence locally at $x_0$ allows us to control the first term for all $x_t$ near $x_0$. By the uniform limit theorem, it also implies $f_0$ is continuous at $x_0$ which controls the second term as well.


Now back to the OP's problem.

We have $f_h(x) = (x^h - 1)/h \to \log(x)$ and $x_t = (1+t)^{1/t} \to e$. To combine limits, we should try to see that $f_h \to f_0$ is converging uniformly locally at $x_0 = e$.

It is helpful that we already know the limit function is continuous (it is the natural logarithm), as there is a partial converse to the uniform limit theorem, called Dini's theorem. This states that if $f_h \to f_0$ is converging monotonically to a continuous function $f_0$, then the convergence is locally uniform. So we just need to check the convergence is monotonic.

(For an example about how this can fail without monotonicity, see pictures at Does pointwise convergence against a continuous function imply uniform convergence?)

So we want to show that for $0 < s < t$ we have $f_s(x) < f_t(x)$ for, say, $x \geq 0$ (any neighborhood of $x=e$ would work). Since the $f_s(x)$ are monotonically increasing, this is equivalent to the reverse inequality for the inverse functions, $f_t^{-1}(x) < f_s^{-1}(x)$ for $x \geq 1$. (Apply $f_s^{-1}$ to the original inequality, then substitute $x \leftarrow f_t^{-1}(x)$. The new lower bound is $f_t^{-1}(0) = 1$.)

The inverse is $f_h^{-1}(x) = (1+hx)^{1/h}$, and the inequality $$(1+tx)^{1/t} < (1+sx)^{1/s}$$ is just the fact that compound interest grows more quickly at higher compounding frequency. There are a variety of proofs at How to prove $(1+1/x)^x$ is increasing when $x>0$?, including an elementary one not using calculus.


To summarize, this kind of computation is valid as long as you have uniform convergence locally at the limit point. It is sufficient to check that:

  • the limit function is continuous, and
  • the convergence to the limit function is monotonic
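These two sufficient conditions can be probed numerically. A rough Python sketch (grid and step sizes are arbitrary choices) checking that the sup-error of $f_h(x)=(x^h-1)/h$ against $\log x$ near $x_0=e$ shrinks, and does so monotonically:

```python
import math

# f_h(x) = (x**h - 1)/h should converge to log(x) uniformly near x0 = e,
# and (by monotonicity in h) the sup-error should only decrease as h shrinks.
xs = [math.e + 0.01 * k for k in range(-100, 101)]   # grid on [e-1, e+1]
prev_sup = float("inf")
for h in [0.1, 0.01, 0.001]:
    sup = max(abs((x ** h - 1) / h - math.log(x)) for x in xs)
    assert sup < prev_sup    # error shrinks monotonically with h
    prev_sup = sup
    print(h, sup)
```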
Ben

Finding the limit:

$$\lim_{h \rightarrow 0} \frac{e^h-1}{h}$$

Performing a substitution: Let

$ u = e^h-1 $

$ h = \ln{(1+u)} $

$h \rightarrow 0 , u\rightarrow 0$

$$=\lim_{u\to 0}\frac{u}{\ln{(1+u)}} $$

Multiply both the numerator and the denominator by $\frac{1}{u}$:

$$=\lim_{u\to 0}\frac{\frac{1}{u}\cdot u}{\frac{1}{u} \ln{(1+u)}}$$

Apply logarithm rule:

$$=\lim_{u\to 0}\frac{1}{\ln{[(1+u)^{\frac{1}{u}}]}}$$

Limit of a quotient:

$$=\frac{\lim\limits_{u\to 0} (1)}{\lim\limits_{u\to 0}{\ln{\left[\left(1+u\right)^{\frac{1}{u}}\right]}}} $$

The numerator is $1$

$$=\frac{1}{\lim\limits_{u\to 0}{\ln{\left[\left(1+u\right)^{\frac{1}{u}}\right]}}}$$

$\ln(x)$ is continuous on $(0,\infty)$

$$=\frac{1}{\ln{\lim\limits_{u\to 0}[{\left(1+u\right)^{\frac{1}{u}}}}]} $$

Use your definition now:

$$=\frac{1}{\ln{(e)}}$$

$$\therefore\hspace{4pt}=1$$

(Not necessary here but anyways:)

Additionally, you can let $n=\frac{1}{u}$ and get the preferred definition of $e$, which is more common in textbooks.

$$\lim\limits_{n\to\infty}\left[{\left(1+\frac{1}{n}\right)^n}\right]=e$$
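The substitution can be sanity-checked numerically. A brief Python sketch (the sample step sizes are arbitrary):

```python
import math

# Check that (e**h - 1)/h and its substituted form u/log(1+u), with
# u = e**h - 1, both approach 1 as h -> 0.
for h in [1e-1, 1e-3, 1e-5]:
    u = math.exp(h) - 1
    print(h, (math.exp(h) - 1) / h, u / math.log1p(u))   # log1p(u) = log(1+u)
```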


Here's a more rigorous proof starting from basic definitions:

Define

$$\ln(x)=\int_{1}^x \frac{1}{t}dt$$

$1)$ For any $r\in\mathbb{R}$ we have

$$\frac{d}{dx}\ln(x^r)=\frac{du}{dx}\frac{d}{du}\ln(u)$$

where $u=x^r$. Then

$$=rx^{r-1}\frac{1}{u}=\frac{rx^{r-1}}{x^r}=\frac{r}{x}$$

In a similar manner, we have

$$\frac{d}{dx} r\ln(x)=\frac{r}{x}$$

Since the derivatives are equal, we may conclude

$$\ln(x^r)=r\ln(x)+C$$

for some constant $C$. To find this, note that at $x=1$ we have

$$\ln(1)=\int_1^1\frac{1}{t}dt=0$$

Thus

$$0=\ln(1)=\ln(1^r)=r\ln(1)+C=C$$

We conclude that for all $r\in\mathbb{R}$ we have $\ln(x^r)=r\ln(x)$.
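The identity $\ln(x^r)=r\ln(x)$ can also be checked directly against the integral definition. A small Python sketch using a midpoint-rule quadrature (the grid size `n` is an arbitrary choice):

```python
import math

def ln_integral(x, n=200_000):
    """Midpoint-rule approximation of the defining integral of 1/t on [1, x]."""
    dt = (x - 1) / n
    return sum(dt / (1 + (i + 0.5) * dt) for i in range(n))

x, r = 2.0, 3.0
print(ln_integral(x ** r))      # approximates ln(8)
print(r * ln_integral(x))       # approximates 3*ln(2); should agree
print(math.log(x ** r))         # library value for comparison
```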

$2)$ Consider

$$\lim_{x\to 0}\frac{\ln(1+x)}{x}$$

Using L'Hopital's rule this is

$$=\lim_{x\to 0}\frac{\frac{1}{1+x}}{1}=\lim_{x\to 0}\frac{1}{1+x}=1$$

$3)$ Define

$$e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

We have

$$\ln(e)=\ln\left(\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\right)$$

Since $e\geq 1$ and $\ln(x)$ is continuous for all positive real numbers, this becomes

$$=\lim_{n\to\infty}\ln\left(\left(1+\frac{1}{n}\right)^n\right)=\lim_{n\to\infty}n\ln\left(1+\frac{1}{n}\right)$$

Writing $n\ln\left(1+\frac{1}{n}\right)=\frac{\ln\left(1+\frac{1}{n}\right)}{1/n}$ and using the limit proved in $2)$ (with $x=\frac{1}{n}\to 0$), this becomes

$$=\lim_{n\to\infty}\frac{\ln\left(1+\frac{1}{n}\right)}{1/n}=1$$

$4)$ This then implies

$$\ln(e^x)=x\ln(e)=x\cdot 1=x$$

Since both $\ln(x)$ and $e^x$ are increasing functions on their domains, this is enough to conclude that these are inverse functions.

$5)$ We will now state the inverse function theorem:

Suppose that $f(x)$ and $g(x)$ are inverse functions. Then

$$\frac{d}{dx}f(x)=\frac{1}{g'(f(x))}$$

Using this with $\ln(x)$ and $e^x$ gives us

$$\frac{d}{dx}e^x=\frac{1}{\left.\frac{d}{dx}\ln(x)\right|_{e^x}}=e^x$$

as desired.
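Step $5)$ can be sanity-checked with finite differences. A quick Python sketch (the step size and test point are arbitrary choices):

```python
import math

def deriv(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
lhs = deriv(math.exp, x)                 # direct derivative of e**x
rhs = 1 / deriv(math.log, math.exp(x))   # via the inverse function theorem
print(lhs, rhs, math.exp(x))             # all three should agree closely
```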

QC_QAOA

As people have mentioned in the comments, in order for this to work, we have to prove:
\begin{eqnarray}
1&=&\lim_{(k,h)\to(0^-,0^+)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}\\
&=&\lim_{(k,h)\to(0^+,0^-)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}\\
&=&\lim_{(k,h)\to(0^+,0^+)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}\\
&=&\lim_{(k,h)\to(0^-,0^-)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}
\end{eqnarray}
It's easy enough to see:
\begin{eqnarray}
\lim_{(k,h)\to(0^-,0^+)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}&=&\lim_{h\to0^+}\dfrac{(1+h)^{\frac{-h}{h}}-1}{-h}\\
&=&\lim_{h\to 0^+}\dfrac{1-(1+h)}{-h(1+h)}\\
&=&\lim_{h\to 0^+}\dfrac{1}{1+h}\\
&=&1\\
\lim_{(k,h)\to (0^+,0^-)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}&=&\lim_{h\to 0^-}\dfrac{(1+h)^{-1}-1}{-h}\\
&=&1\\
\lim_{(k,h)\to (0^+,0^+)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}&=&\lim_{h\to 0^+}\dfrac{1+h-1}{h}\\
&=&1\\
\lim_{(k,h)\to (0^-,0^-)}\dfrac{(1+h)^{\frac{k}{h}}-1}{k}&=&\lim_{h\to 0^-}\dfrac{1+h-1}{h}\\
&=&1
\end{eqnarray}
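These four path limits can also be checked numerically. A short Python sketch (the step size is arbitrary, and this only probes the paths $k=\pm h$ used above, not the full two-variable limit):

```python
def g(k, h):
    # The combined-limit expression ((1+h)**(k/h) - 1)/k from above.
    return ((1 + h) ** (k / h) - 1) / k

eps = 1e-6
for k, h in [(eps, eps), (eps, -eps), (-eps, eps), (-eps, -eps)]:
    print(k, h, g(k, h))   # all four sign combinations should be close to 1
```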