
I have been looking at the proof of the existence of $e^x$ and its properties, and I understand the induction argument which yields the Taylor series expansion around $x=0$. For example,

$E_1(x)=1 + x$, $E_{n+1}(x)=1 + \int_0^x E_n(t)\,dt$, and so on.

However, I wonder how this argument was developed informally before the proof. For example, how was $E_1(x)$, etc. chosen?

Arturo Magidin
analysisj
    I guess you could have started with $E_0(x) = 1$ as well. – Sasha Aug 31 '11 at 20:29
  • I haven't much wisdom on the history, but Hardy's Pure Mathematics does a pretty good job on the existence and properties of exponential and logarithmic functions. In fact I've not seen his treatment of the logarithm as a limit anywhere more recent (the modern treatment seems to be as integral of 1/x). – Mark Bennet Aug 31 '11 at 20:58
  • There is also the fact that $e^x$ is the inverse of logx. – gary Aug 31 '11 at 21:08
  • Should this be tagged [math-history]? –  Aug 31 '11 at 22:29

3 Answers

6

One way of solving a differential equation of the form $$ \begin{align} &\frac{dy}{dx}=F(y(x)),\\ &y(0)=a, \end{align} $$ is to rewrite it in integral form $$ y(x)=a+\int_0^xF(y(u))\,du. $$ Here, we are solving for functions $y\colon\mathbb{R}^+\to\mathbb{R}$, and $F\colon\mathbb{R}\to\mathbb{R}$ is given.

The integral form can be solved iteratively. First choose any (continuous) initial guess $y_0\colon\mathbb{R}^+\to\mathbb{R}$, then iteratively define $y_{n+1}$ in terms of $y_n$: $$ y_{n+1}(x)=a+\int_0^xF(y_n(u))\,du. $$ This method is known as Picard iteration, and it is guaranteed to converge to the unique solution of the differential equation for a large class of functions $F$. For example, it always converges if $F$ is Lipschitz continuous.
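Picard iteration is also easy to try numerically. The sketch below (my own illustration, not part of the answer) discretizes the integral form on a uniform grid, approximating the integral with the cumulative trapezoid rule, for an arbitrary given $F$ and initial value $a$:

```python
import math

def picard(F, a, x_max=1.0, n_grid=1000, n_iter=30):
    """Picard iteration for dy/dx = F(y(x)), y(0) = a, on a uniform
    grid over [0, x_max], using the cumulative trapezoid rule for
    the integral y_{n+1}(x) = a + integral_0^x F(y_n(u)) du."""
    dx = x_max / n_grid
    y = [a] * (n_grid + 1)              # initial guess y_0(x) = a
    for _ in range(n_iter):
        g = [F(v) for v in y]
        new, acc = [a], 0.0
        for i in range(n_grid):
            acc += 0.5 * (g[i] + g[i + 1]) * dx   # running trapezoid sum
            new.append(a + acc)
        y = new
    return y

# F(y) = y with a = 1 should reproduce exp(x) on [0, 1]
y = picard(lambda v: v, 1.0)
```

With $F(y)=y$ and $a=1$ the returned grid values agree with $e^x$ on $[0,1]$ up to the discretization error of the trapezoid rule.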

The exponential function $y(x)=\exp(x)$ satisfies $\frac{dy}{dx}=y$ and $y(0)=1$. This differential equation can be solved by Picard iteration: taking $F(y)=y$ and starting from $y_0=0$ or $y_0=1$ gives the iteration described in the question.
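Carried out exactly over polynomials, this iteration produces the Taylor partial sums of $e^x$. A small sketch of my own, representing a polynomial by its list of coefficients:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One step E_{n+1}(x) = 1 + integral_0^x E_n(t) dt, with a
    polynomial stored as its coefficient list [a_0, a_1, ...]."""
    # integrating term-by-term shifts each coefficient up one degree
    out = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(coeffs)]
    out[0] = Fraction(1)  # the leading "1 +"
    return out

E = [Fraction(1)]  # start from E_0(x) = 1
for _ in range(4):
    E = picard_step(E)

# E is now the degree-4 Taylor polynomial of exp:
# 1 + x + x^2/2 + x^3/6 + x^4/24
```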

  • Thank you. +1. How then, from the Taylor series, was it determined that it was the exponential function, and that it specifically was of the form $e^x$? Was $e$ already known, or did this help in its discovery? – analysisj Aug 31 '11 at 21:29
  • @analysisjb: Sorry, I don't know in what order the various arguments were developed historically. This answer is just really saying that the sequence you state is a natural way of finding exp(x) if you start from the differential equation. No doubt Picard iteration for general differential equations came later. Showing that it converges to exp(x) rather depends on how you define exp. It is guaranteed to converge to the unique solution of $dy(x)/dx=y(x)$. The fact that $y(x_1+x_2)=y(x_1)y(x_2)$ follows from linearity. Solutions for different initial conditions are obtained by scaling. – George Lowther Aug 31 '11 at 21:34
  • But e has been known for a long time so, almost certainly, it was already known. For a very short history, see Wikipedia (http://en.wikipedia.org/wiki/E_%28mathematical_constant%29#History). e has its roots in logarithm tables. – George Lowther Aug 31 '11 at 21:37
  • But, as mentioned in the Wikipedia link, e was first used explicitly by Leibniz. As he developed differential/integral calculus, I suppose that he could have used an argument like this (non-rigorously). – George Lowther Aug 31 '11 at 21:41
0

I like starting with the functional equation $f(x+y)=f(x)f(y)$. From this, assuming differentiability, we can show that $f'(x) = f'(0)f(x)$: since $f(x+h)-f(x) = f(x)f(h)-f(x) = f(x)(f(h)-1)$, we have $(f(x+h)-f(x))/h = f(x)(f(h)-1)/h$; now let $h \rightarrow 0$, noting that $(f(h)-1)/h \to f'(0)$ because $f(0)=1$ (which follows from the functional equation unless $f$ vanishes identically).

This also works for the log, inverse tan, and other functions.

Once you have the differential equation, proceed as usual.
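The difference-quotient step is easy to sanity-check numerically; here is a sketch of mine using the concrete solution $f(x)=2^x$ of the functional equation:

```python
def f(x, a=2.0):
    return a ** x  # f(x) = 2^x satisfies f(x + y) = f(x) f(y)

h = 1e-6
def deriv(x):
    # forward difference quotient (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

# per the derivation, f'(x) should agree with f'(0) * f(x)
checks = [abs(deriv(x) - deriv(0.0) * f(x)) for x in (0.5, 1.0, 2.0)]
```

Each entry of `checks` is small, confirming $f'(x)=f'(0)f(x)$ up to the error of the difference quotient.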

Of course, I claim no originality for this - I just like it.

marty cohen
0

For example, how was $E_1(x)$, etc. chosen?

As long as $E_1(x)$ has the correct value at $x=0$, the iteration will converge to the same target.
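To illustrate (my own sketch, not part of the answer): start the iteration from a "wrong" guess such as $E_0(x)=1+5x$, which still has the correct value at $x=0$. Each step pushes the discrepancy into a higher-order term, so the coefficients still converge to those of $e^x$:

```python
from fractions import Fraction
from math import factorial

def picard_step(coeffs):
    # E_{n+1}(x) = 1 + integral_0^x E_n(t) dt, on coefficient lists
    out = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(coeffs)]
    out[0] = Fraction(1)
    return out

E = [Fraction(1), Fraction(5)]  # "wrong" start E_0(x) = 1 + 5x, but E_0(0) = 1
agree = []
for n in range(1, 6):
    E = picard_step(E)
    # after n steps, coefficients 0..n already match exp's 1/k!
    agree.append(E[:n + 1] == [Fraction(1, factorial(k)) for k in range(n + 1)])
```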

zyx