36

I am aware that $e$, the base of natural logarithms, can be defined as:

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Recently, I found out that

$$\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^n = e^{-1}$$

How does that work? Surely the minus sign makes no difference, as when $n$ is large, $\frac{1}{n}$ is very small?

I'm not asking for just any rigorous method of proving this. I've been told one: as $n$ goes to infinity, $\left(1+\frac{1}{n}\right)^n\left(1-\frac{1}{n}\right)^n = 1$, so the latter limit must be the reciprocal of $e$. However, I still don't understand why changing such a tiny component of the limit changes the output so drastically. Does anyone have a remotely intuitive explanation of this concept?
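For concreteness, the two sequences are easy to evaluate numerically, e.g. with a few lines of Python (a quick check, not a proof):

```python
for n in (10, 1_000, 100_000):
    plus = (1 + 1/n) ** n    # tends to e ~ 2.71828
    minus = (1 - 1/n) ** n   # tends to 1/e ~ 0.36788
    print(n, plus, minus, plus * minus)   # the product tends to 1
```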

Bluefire
  • 1,668
  • 4
    Compare with the limits $\lim_{n\to\infty}(\frac1n\times n)=1$ while $\lim_{n\to\infty}((-\frac1n)\times n)=-1$, and that even though $\lim_{n\to\infty}\frac1n=0=\lim_{n\to\infty}-\frac1n$. – Marc van Leeuwen Apr 05 '16 at 13:22
  • 2
    I think your comparison is fundamentally different. Here, changing the plus sign into a minus subtracts a small quantity, whereas in your example changing the sign changes the sign of the whole expression. – Bluefire Apr 05 '16 at 13:25
  • 9
    No, it is not fundamentally different, but my example uses multiplication where your example uses exponentiation. These operations correspond via the logarithm: taking the $n$-th power of a number multiplies its logarithm by $n$. Now the logarithm of $1+\frac1n$ is close to $\frac1n$ while the logarithm of $1-\frac1n$ is close to $-\frac1n$. So under the logarithm, the difference between adding or subtracting $\frac1n$ results in a sign change (and the magnitude of the change then gets multiplied by $n$). Your example is really very close to mine. – Marc van Leeuwen Apr 05 '16 at 14:54
  • 2
    It's because 0.999 to a large power gets very small, while 1.001 to a large power gets very large. – BlueRaja - Danny Pflughoeft Apr 06 '16 at 15:29
  • Maybe you should look for a rigorous way of proving it, considering that your intuition is wrong. – anomaly Apr 06 '16 at 20:58

17 Answers

48

The point is that $1-\frac{1}{n}$ is less than $1$, so raising it to a large power pushes it even further below $1$. On the other hand, $1+\frac{1}{n}$ is bigger than $1$, so raising it to a large power pushes it even further above $1$.


There's been some brouhaha in the comments about this answer. I should probably add that $(1-\epsilon(n))^n$ could go to any value less than or equal to $1$, and in particular it could go to $1$, as $n$ increases. It so happens that in this example, it goes to something less than $1$. The reason it goes to something less than $1$ is because we end up raising something sufficiently less than $1$ to a sufficiently high power.
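A quick numerical sketch of this point, for anyone who wants to see the two powers move apart:

```python
# The same small offset 1/n pushes the n-th power in opposite directions:
# a base below 1 shrinks under powering, a base above 1 grows.
for n in (10, 100, 10_000):
    below = (1 - 1/n) ** n   # stays below 1, approaches 1/e
    above = (1 + 1/n) ** n   # stays above 1, approaches e
    print(f"n={n:>6}  (1-1/n)^n = {below:.6f}   (1+1/n)^n = {above:.6f}")
```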

  • 7
    @xyz But the question wasn't why $e \neq 1$, it was how one limit can go to $e$ while the other goes to $e^{-1}$. – fgp Apr 06 '16 at 14:21
  • 2
    This argument is wrong. It will not work with $(1+\frac1{n^2})^n$. –  Apr 06 '16 at 17:34
  • 3
    @YvesDaoust It's not supposed to be general. The question asked for intuition; I provided it. No-one expects intuition to hold generally. – Patrick Stevens Apr 06 '16 at 17:49
  • @YvesDaoust - The argument given holds for that expression, too - it's only the limit that doesn't maintain the difference, and it doesn't need to. What Patrick's argument shows is that the limit in the positive case cannot be smaller than 1, and the negative case cannot be larger than 1. – Glen O Apr 06 '16 at 18:26
  • 1
    @GlenO: if I recall right, $1=1^{-1}$. But okay, the argument is not wrong, it is just dangerous. –  Apr 06 '16 at 18:48
  • 3
    @Yves Daoust is making an excellent point here. You can't just say that because it's less than one/greater than one, that's why the limit remains like that. In Yves' example, that limit is 1: http://www.wolframalpha.com/input/?i=limit+as+n+approaches+infinity+of+(1%2B1%2Fn%5E2)%5En. – Paul Raff Apr 07 '16 at 04:06
  • 1
    @PaulRaff I'm entirely aware of this. But we already know the value of the two limits, and the question was "How can they be different when their expressions are so similar?". I answered the question by pointing out how they could be different, and it turns out they actually are different. I'll amend my answer's wording from "will" to "could". – Patrick Stevens Apr 07 '16 at 07:27
40

Perhaps think about the binomial expansions of $\left(1 + \frac{1}{n}\right)^n$ and $\left(1 - \frac{1}{n}\right)^n$. The first two terms sum to $1 + n \cdot \frac{1}{n} = 2$ and $1 - n \cdot \frac{1}{n} = 0$ respectively. After that, the terms in $\left(1 + \frac{1}{n}\right)^n$ are all positive, whereas the terms in $\left(1 - \frac{1}{n}\right)^n$ alternate in sign. So the difference between the two limits is going to be at least $2$.
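Spelling out the first few terms explicitly (a routine expansion, included here for concreteness):

$$\left(1+\frac1n\right)^n = 1 + \binom{n}{1}\frac1n + \binom{n}{2}\frac1{n^2} + \cdots = 1 + 1 + \frac{n-1}{2n} + \cdots$$

$$\left(1-\frac1n\right)^n = 1 - \binom{n}{1}\frac1n + \binom{n}{2}\frac1{n^2} - \cdots = 1 - 1 + \frac{n-1}{2n} - \cdots$$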

Therkel
  • 1,332
  • While intuitive, how does this actually justify $(1-\frac1n)^n=e^{-1}$? It only seems to approximate. – Simply Beautiful Art Apr 06 '16 at 21:02
  • 2
    I was trying to respond to the final paragraph of the question. The OP says "I still don't understand why changing such a tiny component of the limit changes the output so drastically," so I wanted to focus on that. – Aidan Sims Apr 06 '16 at 22:22
  • Ok, I guess, but Patrick Stevens's answer just seemed much more intuitive and easier to understand in my opinion. But of course, that's only my opinion. (And try "@simpleart" next time.) – Simply Beautiful Art Apr 06 '16 at 22:31
  • 1
    @simpleart Yes, his is a good answer. Perhaps putting the two of them together is better again, since applying the binomial-expansion idea also indicates why $(1-\frac{1}{n^2})^n$ behaves differently (in response to Yves Daoust's comment below). In particular, with a little work it shows why, as the OP says, $(1 - \frac{1}{n})^n(1 + \frac{1}{n})^n = (1 - \frac{1}{n^2})^n \to 1$, answering your original comment. – Aidan Sims Apr 06 '16 at 22:43
21

The true issue is not why changing the sign has such an impact; it is why adding such a small quantity as $\dfrac1n$ drastically changes the result.

$$1^n\to1\text{ vs. }\left(1+\frac1n\right)^n\to e$$

(and very similarly $\left(1-\frac1n\right)^n\to e^{-1}$.)

The reason is that the tiny quantity gets multiplied over and over so that it becomes a finite quantity,

$$\left(1+\frac1n\right)\left(1+\frac1n\right)\left(1+\frac1n\right)\cdots=1+\frac1n+\frac1n+\frac1n+\cdots>2$$ as there are $n$ terms equal to $\dfrac1n$ (plus other positive terms). The "tininess" of the terms is compensated by their number.

Also notice that the "asymmetry" shown by $e-1\ne 1-e^{-1}$ is just due to the non-linearity of the exponential.
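In logarithmic terms, the compensation is easy to check numerically: each factor contributes roughly $\pm\frac1n$ to the logarithm, and $n$ such contributions add up to roughly $\pm1$. A small Python sketch:

```python
import math

for n in (10, 1_000, 100_000):
    # n copies of log(1 ± 1/n) accumulate to about ±1, not to 0
    print(n, n * math.log(1 + 1/n), n * math.log(1 - 1/n))
```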

17

Actually you have the stronger true statement that $$ \lim_{x\to0}(1+x)^{1/x}=e, $$ of which the initial limit you stated is a special case, approaching $0$ through the sequence of values $x=\frac1n$ for $n\in\Bbb N_{>0}$. But if you approach $0$ through the sequence of values $x=-\frac1n$ for $n\in\Bbb N_{>1}$, the same limit gives you $$ \lim_{n\to\infty}\left(1-\frac1n\right)^{-n}=e. $$ Now it is a simple matter to see that the sequence of inverses $\left(1-\frac1n\right)^n$ tends to the inverse value $e^{-1}$.

It should be noted that while the first limit above is more general than the limits for $n\to\infty$, it is also less elementary to define, since it involves powers of positive real numbers with arbitrary real exponents. Introducing such powers requires studying exponential functions in the first place, which is why the limit statement with integer exponents is often preferred. But the more general limit statement is true, and can serve to give intuition for the relation between the two limits in your question.
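A quick numerical look at that two-sided limit (simply evaluating $(1+x)^{1/x}$ on both sides of $0$):

```python
for x in (0.1, 0.001, -0.001, -0.1):
    # approaches e ~ 2.718282 from below for x > 0 and from above for x < 0
    print(f"x = {x:+.3f}   (1+x)^(1/x) = {(1 + x) ** (1 / x):.6f}")
```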

7

Here's a useful generalization of the limit definition of $e$ from the OP:

Given

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Raise both sides to the power of $x$:

$$e^x = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}$$

This is trivially true when $x = 0$, as both sides evaluate to $1$.

Assume $x \ne 0$ and let $m = nx$, i.e., $n = \frac{m}{x}$

As $n\to\infty, \, m\to\infty$

$$e^x = \lim_{m\to\infty}\left(1+\frac{x}{m}\right)^{m}$$

[Note the similarity between this and the first limit in Marc van Leeuwen's answer].

In particular, for $x = -1$

$$e^{-1} = \lim_{m\to\infty}\left(1+\frac{-1}{m}\right)^{m}$$

or

$$e^{-1} = \lim_{m\to\infty}\left(1-\frac{1}{m}\right)^{m}$$


As mathmandan notes in the comments, my derivation is flawed when $x < 0$, since then $n\to\infty \implies m\to -\infty$ :oops:

I'll try to justify my result for negative $x$ without relying on the fact that $e^x$ is an entire function and that there is only a single infinity in the (extended) complex plane.

For any finite $u, v \ge 0$, we have

$$e^u = \lim_{n\to\infty}\left(1+\frac{u}{n}\right)^{n}$$

and

$$e^v = \lim_{n\to\infty}\left(1+\frac{v}{n}\right)^{n}$$

Therefore,

$$e^{u-v} = \lim_{n\to\infty}\left(\frac{1+\frac{u}{n}}{1+\frac{v}{n}}\right)^{n}$$

Let $m = n + v$. For any (finite) $v$, as $n\to\infty$, $m\to\infty$.

$$\begin{align}\\ \frac{1+\frac{u}{n}}{1+\frac{v}{n}} & = \frac{n + u}{n + v}\\ & = \frac{m + u - v}{m}\\ & = 1 + \frac{u - v}{m}\\ \end{align}$$

Thus $$\begin{align}\\ e^{u-v} & = \lim_{n\to\infty}\left(1+\frac{u - v}{m}\right)^{n}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{m-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m\\ \end{align}$$

since

$$\lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v} = 1$$

In other words,

$$e^{u-v} = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m$$

is valid for any finite $u, v \ge 0$. And since we can write any finite $x$ as $u-v$ with $u, v \ge 0$, we have shown that

$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}$$

is valid for any finite $x$, so

$$e^{-x} = \lim_{n\to\infty}\left(1+\frac{-x}{n}\right)^{n}$$
And hence $$e^{-x} = \lim_{n\to\infty}\left(1-\frac{x}{n}\right)^{n}$$
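As a sanity check, the final formula can be evaluated numerically, e.g. at $x=-1$ (a quick Python sketch, not a proof):

```python
import math

x = -1.0
for n in (10, 1_000, 100_000):
    approx = (1 + x / n) ** n
    print(f"n={n:>6}   (1+x/n)^n = {approx:.8f}   e^x = {math.exp(x):.8f}")
```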

PM 2Ring
  • 4,844
  • Wait...if $x = -1$, then as $n \to \infty$, we'll have $m \to -\infty$, surely? – mathmandan Apr 06 '16 at 14:07
  • @mathmandan: :oops: Good point! Let me think about that for a minute or two... :) The short answer is that my expression for $e^x$ is actually valid for the whole complex plane, and if we map the complex plane to the Riemann sphere there's only a single point at infinity, and the distinction between $+\infty$ and $-\infty$ from the real number line evaporates. But that's not elementary, and I haven't actually proved that. – PM 2Ring Apr 06 '16 at 14:11
  • @mathmandan We can see when we actually try to do the limit, we run into $\lim_{n\to\infty}\ln(1+\frac1n)$, which, when replaced with $-\infty$, the limit stays the same. In fact, just as PM 2Ring noted, it works for $\lim_{|n|\to\infty}$, where all infinities converge in the complex plane – Simply Beautiful Art Apr 06 '16 at 22:35
  • @mathmandan: I've added some new material to my answer which I believe addresses your concerns. – PM 2Ring Apr 07 '16 at 09:05
  • This looks good, thanks! Two suggestions on your edit: First, I'd replace both instances of "positive" with "nonnegative" since you're setting $u=0$. Second: you end up with $\lim_{m\to\infty}\left(1-\frac{x}{m}\right)^n = \lim_{m\to \infty}\left(1-\frac{x}{m}\right)^{m-x}$, and you want $\lim_{m\to \infty}\left(1 - \frac{x}{m}\right)^m$. These will end up being equal since $x$ is fixed while $m\to \infty$, and you can factor out $\lim_{m\to \infty}\left(1 - \frac{x}{m}\right)^{-x} = \left(\lim_{m\to \infty}(1-\frac{x}{m})\right)^{-x}=1^{-x}=1$. But it might be nice to state this explicitly. – mathmandan Apr 08 '16 at 11:05
  • @mathmandan: Thanks again for your valuable input! Somehow that $m-x$ exponent slipped under my radar. :) – PM 2Ring Apr 08 '16 at 12:33
  • There is a fundamental and non-trivial problem when you switch from variable $n$ to $m$. In the original definition of $e$ with which you start, the variable $n$ is a positive integer, but $m$ is not an integer. Note that if $x$ is a real variable and $f(x) \to L$ as $x \to \infty$ then $f(n) \to L$ as $n \to \infty$ where $n$ is integer variable. The converse of this does not hold in general (check with $f(x) = \sin \pi x$) and you seem to be using some sort of the converse. – Paramanand Singh Apr 09 '16 at 11:09
  • Another problem is about raising to the power of $x$. When $x$ is irrational this is again a non-trivial issue and some explanation should be provided. – Paramanand Singh Apr 09 '16 at 11:10
  • @ParamanandSingh: Ok, but I'm unsure why $e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$ requires $n$ to be an integer. OTOH, I do understand that a rigorous treatment of $a^x$ for irrational $x$ does require us to define it as a limit on $a^x$ for a sequence of rational $\frac{p}{q}$ that converges on $x$. Or (equivalently) to define general exponentiation by a real power in terms of logarithms, which in this case leads to a somewhat circular argument. :) – PM 2Ring Apr 09 '16 at 12:35
  • @ParamanandSingh (cont) But please feel free to add info to the end of my answer, but if you do so, please mark your material clearly as your contribution, and separate it from my text with a <hr> tag. – PM 2Ring Apr 09 '16 at 12:36
  • 1
    It is not necessary to keep $n$ as integer but then you need to define general power $a^{x}$ and then the definition for $e$ will not be simple. Keeping $n$ as integer makes the definition very simple and yet it is powerful enough to show that $(1+(x/n))^{n}$ tends to $e^{x}$ when $x$ is rational. For irrational $x$ it is possible to show that limit exists and can be taken as definition of $e^{x}$. – Paramanand Singh Apr 09 '16 at 16:53
5

Intuitively,

$$1-\frac1n\approx\frac1{1+\dfrac1n}.$$

For example,

$$0.99999=\frac1{1.000010000100001\cdots}\approx \frac1{1.00001}$$ so that

$$0.99999^{100000}=0.36787760177\dots=\frac1{2.7182954100\cdots}\\ \approx \frac1{1.00001^{100000}}=\frac1{2.7182682371\cdots}$$


More rigorously,

$$\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\left(1-\frac1{n^2}\right)^n=\sqrt[n]{\left(1-\frac1{n^2}\right)^{n^2}}.$$

As the expression under the radical tends to a finite nonzero value (namely $e^{-1}$), its $n$th root tends to $1$.

You can also use the binomial formula,

$$\left(1-\frac1{n^2}\right)^n=1-\frac n{n^2}+\frac{(n)_2}{2n^4}-\frac{(n)_3}{3!n^6}\cdots\to1$$ ($(n)_k$ is the falling factorial).
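The numbers above are easy to reproduce (a quick check in Python):

```python
a = 0.99999 ** 100_000         # ~ 0.3678776
b = 1 / 1.00001 ** 100_000     # ~ 0.3678812
print(a, b, 1 / 2.718281828)   # both close to 1/e ~ 0.3678794
```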

  • I've already established that :) I was asking for a more intuitive proof... – Bluefire Apr 05 '16 at 11:06
  • @Bluefire: that's right. My first formula gives the best intuition. But for a full understanding, you need to show that the truncation (terms neglected in the approximation of the inverse) makes no difference. I have added a numerical example to please you. –  Apr 05 '16 at 11:23
2

Logarithms were invented (discovered?) by John Napier before there was calculus and before a generalized theory of exponents. It was found that you can compute approximate logs to a base very close to $1$, for example by repeated squaring and other short-cuts. For example, if $b=1.000001$ then $b^x$ is about $2$ when $x=693147$, so $\log_{1.000001}2$ is about $693147$. The motivation for logs was calculation: replacing $\times$ with $+$ by using tables of logs and anti-logs.

Logs to base $1+1/n$ could be "normalized" by dividing them by $n$. (So the normalized $\log 2$ is always about $0.693147$.) The number $e=2.71828\ldots$ kept showing up as the approximate "normalized" anti-log of $1$ in base $1+1/n$ for any large $n$, which is because $2.71828\ldots=\lim_{n\to \infty}(1+1/n)^n.$

It was found that if $f(x)=\int_1^x (1/t)\;dt$ for $x>0,$ then $f(a b)=f(a)+f(b)$, that is, $f$ is a logarithm. Its base $b$, which satisfies $1=\log_b b=\int_1^b (1/t)\;dt$, is that same number, so we could take the definition of $e$ to be the solution $x$ of $f(x)=1.$

We can take any other equation or formula that has $e$ as its unique solution as the definition of $e$.

(But defining it as the unique $x>1$ such that $\int_{-\infty}^{\infty} x^{-t^2}\;dt=\sqrt \pi$ is not advisable even though it's a true equation.)
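A small numeric sketch of that normalization, using natural logs to compute a log in base $1+\frac1n$ (via $\log_b 2 = \ln 2/\ln b$):

```python
import math

# log of 2 in base 1 + 1/n, divided by n ("normalized"), settles near ln 2.
for n in (10**3, 10**6, 10**9):
    log2_in_base = math.log(2) / math.log(1 + 1/n)
    print(n, log2_in_base / n)    # -> 0.693147... = ln 2
```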

  • 1
    Thanks for adding the historical perspective. Of course, once we have calculus and the product rule, it's easy to see the logarithmic nature of $\int_1^x (1/t)\,dt$. The product rule says $d(uv) = udv + vdu$. Dividing through by $uv$ and taking integrals yields $\int\frac{d(uv)}{uv} = \int\frac{dv}{v} + \int\frac{du}{u}$ – PM 2Ring Apr 06 '16 at 11:27
  • @PM 2Ring . Nice way to show it. – DanielWainfleet Apr 06 '16 at 12:00
2

If you take $(1-1/n)^n$, the result is obviously less than $1$ for every $n$, so the limit clearly cannot be greater than $1$.

If you take $(1+1/n)^n$, your argument "$1/n$ gets smaller and smaller" still applies. So if that sequence has a limit of $e \approx 2.718$, which is well above $1$, then it is a priori unreasonable to argue "$-1/n$ gets smaller and smaller" as evidence that this sequence cannot have a limit significantly less than $1$.

gnasher729
  • 10,113
2

Consider that $e \times \frac{1}{e} = 1$. In the product below, the $\frac{1}{n^2}$ term is too small to affect the limit.

$$ \lim_{n \to \infty} \left( 1 - \frac{1}{n} \right)^n \cdot \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n = \lim_{n \to \infty} \left( 1 - \frac{1}{n^2} \right)^n = 1$$

cactus314
  • 24,438
  • 4
    You should give an argument for the last. With Bernoulli's inequality, we have $$1 - n\cdot \frac{1}{n^2} < \biggl( 1 - \frac{1}{n^2}\biggr)^n < 1$$ for $n > 1$. – Daniel Fischer Apr 06 '16 at 19:02
2

If you know that $$\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{n} = e\tag{1}$$ (and some books / authors prefer to define symbol $e$ via above equation) then it is a matter of simple algebra of limits to show that $$\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n} = \frac{1}{e}\tag{2}$$ Clearly we have \begin{align} L &= \lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\left(\frac{n - 1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(\dfrac{n}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n - 1}\cdot\dfrac{n}{n - 1}}\notag\\ &= \frac{1}{e\cdot 1}\notag\\ &= \frac{1}{e} \end{align} Using similar algebraic simplification it is possible to prove that $$\lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = e^{x}\tag{3}$$ where $x$ is a rational number. For irrational/complex values of $x$ the relation $(3)$ holds, but it is not possible to establish it just using algebra of limits and equation $(1)$.

Regarding the intuition about "changing a tiny component in a limit expression changes the output", I think it is better to visualize this simple example. We have $$\lim_{n \to \infty}n^{2}\cdot\frac{1}{n^{2}} = 1$$ and if we replace the second factor $1/n^{2}$ with $(1/n^{2} + 1/n)$ then we have $$\lim_{n \to \infty}n^{2}\left(\frac{1}{n^{2}} + \frac{1}{n}\right) = \lim_{n \to \infty} (1 + n) = \infty$$ The reason is very simple. The change of $1/n$ which you see here is small, but because it gets multiplied by the other factor $n^{2}$, its impact is magnified significantly, resulting in an infinite limit. You always calculate the limit of the full expression (and only when you are lucky can you evaluate the limit of a complicated expression in terms of the limits of its sub-expressions via the algebra of limits), and any change in a sub-expression may or may not affect the whole expression significantly, depending upon the other parts of the expression.
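The magnification in that last example is plain to see numerically (a trivial check):

```python
for n in (10, 100, 1_000):
    unchanged = n**2 * (1 / n**2)            # always 1
    perturbed = n**2 * (1 / n**2 + 1 / n)    # equals 1 + n, which blows up
    print(n, unchanged, perturbed)
```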

  • If you define real exponentiation with positive base by squeezing using rational exponent, then to get (3) for real $x$ you just have to squeeze using rational $x$. Though personally I prefer going straight to complex exponentiation via the Taylor series from the beginning. =) – user21820 Apr 27 '16 at 06:42
1

Let me try. Consider $$A=\left(1+\frac{a}{n}\right)^n$$ Take logarithms: $$\log(A)=n\log\left(1+\frac a n\right)$$ Now, when $x$ is small, by Taylor, $$\log(1+x)=x-\frac{x^2}{2}+O\left(x^3\right)$$ Replace $x$ by $\frac{a}{n}$. This makes $$\log(A)=n \Big(\frac{a}{n}-\frac{a^2}{2 n^2}+O\left(\frac{1}{n^3}\right)\Big)=a-\frac{a^2}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, $$A=e^{\log(A)}=e^a-\frac{a^2 e^a}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, play with $a$.
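A quick numerical check of that expansion, with the (arbitrary) choices $a=2$ and $n=1000$:

```python
import math

a, n = 2.0, 1_000
exact = (1 + a / n) ** n
approx = math.exp(a) - a**2 * math.exp(a) / (2 * n)   # e^a - a^2 e^a / (2n)
print(exact, approx, exact - approx)                  # difference is O(1/n^2)
```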

Hoping that this makes things clearer to you.

1

Let me offer you a synopsis of a proof (you can find the full proof in Apostol):

First of all you need to show that for any $a \in \mathbb{R}$, the sequence $\left(1+\frac{a}{n}\right)^n$ converges to a number, say $G(a)$.

Next you show that the function $G(a)$ is of the form $p^a$ where $p$ is some fixed number.

To find $p$ all you have to do is find $G(1)$ which is the limit of the sequence $(1+\frac{1}{n})^n$.
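A rough numerical sketch of the first two steps, using a single large $n$ as a stand-in for the limit (just an illustration that $G(a)\approx G(1)^a$, not a proof):

```python
def G(a, n=10**7):
    """Finite-n stand-in for the limit of (1 + a/n)^n."""
    return (1 + a / n) ** n

print(G(1.0))                  # ~ e
print(G(2.0), G(1.0) ** 2)     # agree closely, suggesting G(a) = G(1)^a
print(G(-1.0), 1 / G(1.0))     # ~ 1/e
```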

Miz
  • 2,739
0

Going off of cactus314's answer,

$$\lim_{n\to\infty}\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=1$$

So we really only have to prove that the right-hand limit equals $1$:

$$\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=\lim_{n\to\infty}\left(\left(1-\frac1{n^2}\right)^{n^2}\right)^{1/n}$$

$$=\lim_{n\to\infty}e^{-1/n}$$

$$=e^0=1$$

0

$$(1-\frac{1}{n})^n=(\frac{n-1}{n})^n=(\frac{n}{n-1})^{-n}=(\frac{n-1+1}{n-1})^{-n}$$ $$=(1+\frac{1}{n-1})^{-n}=\frac{1}{(1+\frac{1}{n-1})^{n}}=\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}$$

Now take the limit

$$\lim_{n\to \infty}(1-\frac{1}{n})^n=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}}\cdot\lim_{n\to \infty}\frac{1}{1+\frac{1}{n-1}}=\frac{1}{e}$$

MrYouMath
  • 15,833
0

Actually, a variant of this question was answered long ago in Chrystal's famous text on Algebra, using an intuitive argument as follows: expand $\left(1-\frac{1}{n}\right)^n$ as a binomial series for a positive integer $n$ and then let $n$ tend to infinity, to give the power series for $e^{-1}$. Chrystal himself expanded $\left(1+\frac{1}{n}\right)^n$ for a positive integer $n$ to obtain $e$. So this answers your question nicely.

0
  1. For any $n$, $$f_n:[0,1]→ [0,1], \quad f_n(x) = x^n$$ always takes the values $0$ at $x=0$ and $1$ at $x=1$. This is illustrated for $n=1,2,…,30$ in the below picture.
  2. By continuity of each $f_n$, we can, by the Intermediate Value Theorem, always find an $x_n∈ [0,1]$ so that $f_n(x_n)$ is any $y_n∈[0,1]$ we pick. In particular, we can create a sequence $x_n$ so that $f_n(x_n)$ converges to any value in $[0,1]$.

These observations make it less surprising that this particular choice gets you the limit $1/e$. Here you can see the points $\left(x_n, x_n^n\right)$ with $x_n := 1-\frac{1}{n}$, each lying on the graph of $y=x^n$, getting closer to the line $y=1/e$.

[Figure: the curves $y=x^n$ for $n=1,\dots,30$ on $[0,1]$, with the points $\left(1-\frac1n,\left(1-\frac1n\right)^n\right)$ approaching the horizontal line $y=1/e$.]
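For anyone who wants to reproduce a picture like this, here is a short matplotlib sketch (plotting the curves $y=x^n$, the points described above, and the line $y=1/e$):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 400)
for n in range(1, 31):
    plt.plot(x, x**n, color="lightgray", linewidth=0.8)   # curves y = x^n

ns = np.arange(1, 31)
xs = 1 - 1 / ns
plt.plot(xs, xs**ns, "o", markersize=3, label=r"$(1-1/n, (1-1/n)^n)$")
plt.axhline(np.exp(-1), linestyle="--", color="red", label=r"$y = 1/e$")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```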

Calvin Khor
  • 34,903
-1

The first definition of $e$ is $$ \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^{n} = e^1 $$ which answers the question of what happens when you pass from discrete compounded growth at a rate of 100% to continuous growth. Note that $e>2$: the limit of compounding more and more finely approaches a value greater than the $2$ we would have arrived at with a single step of growth at the initial rate. This means that even though each step of compounding uses a smaller rate of growth, the aggregate effect is more growth. Also, I just want to note that $e$ is the universal constant of continuous growth at a given rate, meaning that computing $e^{rt}$ gives the effect of continuously growing at a rate $r$ for $t$ units of time.

If we decide instead to see what happens when we take the limit of discrete compounded decay instead of growth, $$ \lim_{n \to \infty} \left( 1- \frac{1}{n} \right)^n = e^{-1} $$ we see that the opposite happens. Going from discrete compounded decay to continuous decay lessens the amount we lose in each step of compounding, and the result asymptotically approaches $1/e$, which is greater than the $0$ we would have been left with after a single step of decay at a rate of 100%.

Don't know if this helps at all.
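A small compounding sketch along these lines, splitting a 100% rate of growth (or decay) into $n$ equal steps:

```python
import math

for n in (1, 12, 365, 10_000):
    growth = (1 + 1/n) ** n   # grow by 100%/n, n times
    decay = (1 - 1/n) ** n    # shrink by 100%/n, n times
    print(f"n={n:>6}   growth = {growth:.5f}   decay = {decay:.5f}")

print("e =", math.exp(1), "  1/e =", math.exp(-1))
```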

D. W.
  • 866