15

Given distinct $\left\{ a_{i}\right\} _{i=0}^{n}\subset\mathbb{R}$, show that $\left\{ e^{a_{i}t}\right\} \subset C^{0}\left(\mathbb{R},\mathbb{R}\right)$ forms a linearly independent set of functions.

Any tips on how to go about this proof? I tried working from the definition of the exponential and combining sums, but that didn't seem to get me anywhere. I saw a tip on the internet that said to write it in the form

$\mu_{1}e^{a_{1}t}+\dots+\mu_{n}e^{a_{n}t}=0$ and try to show $\mu_{1}=\dots=\mu_{n}=0$, considering that each term on the left-hand side must be positive. But I can't get my head around that: while I understand that $e^{x}>0$ for all $x\in\mathbb{R}$, I cannot see why the $\mu_{i}$ must be positive in any case. I have thought about differentiating, but that doesn't seem to help. The question did originally ask for a "rigorous" proof, but I'll take any hints right now; the provided solution of 'it is obvious' is most unhelpful to me.

Any input would be fantastic. Thank you.

Arturo Magidin
  • 398,050
Tim Green
  • 315
  • Differentiating does in fact help a lot; you can compute the Wronskian and show that it's nonzero. Alternatively, divide by the largest term and let $t$ tend to infinity. – Qiaochu Yuan Feb 21 '11 at 20:45
  • Hint: take the $\max_i a_i$ and factor out the corresponding exponential, assuming its coefficient is non-zero (if it's zero, take the next biggest $a_i$). Reason about the remaining linear combination. – Raskolnikov Feb 21 '11 at 20:46

4 Answers

9

If you put $t=0$, the sum of the $\mu_i$ is zero, so at least one of them is negative (unless they are all zero).

Intuitively, assume $a_n$ is greater than all of the other $a$'s. Divide the relation by $e^{a_nt}$: as $t$ gets very large that term dominates all the others, so every other term tends to $0$ and $\mu_n$ must be zero. Then argue the same about the next largest.
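As a quick sanity check of this dominance argument, here is a minimal sympy sketch; the exponents $1,2,5$ and the names `mu1`, `mu2`, `mu3` are made up for illustration, not part of the answer.

```python
import sympy as sp

t = sp.symbols('t', real=True)
mu = sp.symbols('mu1 mu2 mu3')
a = [1, 2, 5]  # sample distinct exponents, with a[-1] the largest

combo = sum(m * sp.exp(ai * t) for m, ai in zip(mu, a))

# Divide by the dominant exponential; every other term decays to 0,
# so the limit as t -> oo isolates the last coefficient.
print(sp.limit(combo / sp.exp(a[-1] * t), t, sp.oo))  # prints mu3
```

If the combination is identically zero, so is this limit, forcing `mu3 = 0`; repeating with the next largest exponent peels off the remaining coefficients.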

Ross Millikan
  • 374,822
8

HINT $\rm\ e^{a_0\:t} = c_1\ e^{a_1\:t}\:+\:\cdots\:+ c_n\ e^{a_n\:t}\ \Rightarrow\ e^{a_0\:t}\ $ is killed by $\rm\ D-a_0\ $ and by $\rm\ (D-a_1)\:\cdots\:(D-a_n)\: $. But applying the latter to $\rm\ f = e^{a_0\:t}\ $ yields $\rm\ (a_0-a_1)\cdots (a_0-a_n)\ f \ne 0\ $ since $\rm\ a_{\:i}\ne a_{\:0}\:$ for $\rm\: i\ne 0\:$.

In other words, the two sides cannot be equal because their characteristic polynomials, $\rm\ x-a_0\ $ and $\rm\ (x-a_1)\cdots(x-a_n)\:$, share no roots.
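A short symbolic check of this operator computation, assuming sympy; the concrete exponents $3,1,2$ standing in for $a_0,a_1,a_2$ are illustrative.

```python
import sympy as sp

t = sp.symbols('t')
a0, a1, a2 = 3, 1, 2  # sample distinct exponents
f = sp.exp(a0 * t)

def D_minus(c, g):
    """Apply the operator D - c, i.e. g -> g' - c*g."""
    return sp.diff(g, t) - c * g

print(sp.simplify(D_minus(a0, f)))  # 0: D - a0 annihilates e^{a0*t}

g = D_minus(a2, D_minus(a1, f))     # apply (D - a1), then (D - a2)
print(sp.simplify(g / f))           # (a0 - a1)*(a0 - a2) = 2, nonzero
```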

Bill Dubuque
  • 272,048
8

Although the answer by Ross Millikan is probably the easiest elementary approach, the answer by Bill Dubuque points at a more profound reason that these exponential functions must be linearly independent functions: they are eigenvectors (eigenfunctions) of the differentiation operation $D:f\mapsto f'$ for distinct eigenvalues $a_1,\ldots,a_n$. It is therefore an instance of the fundamental fact that eigenspaces for distinct eigenvalues always form a direct sum. The essence of the argument can be formulated without any advanced language as follows.

We can prove the linear independence by induction on the number $n$ of distinct exponentials involved; the cases $n\leq1$ are trivial (an exponential function is not the zero function). By the induction hypothesis one can assume $e^{a_1x},\ldots,e^{a_{n-1}x}$ to be linearly independent. Now if $e^{a_1x},\ldots,e^{a_nx}$ were linearly dependent, the dependency relation would have to involve the final exponential $e^{a_nx}$ with nonzero coefficient (otherwise it would be a dependency among the first $n-1$ exponentials, contradicting the induction hypothesis), and would therefore (after division by that coefficient) allow that function to be expressed as a linear combination of $e^{a_1x},\ldots,e^{a_{n-1}x}$: $$ c_1e^{a_1x}+\ldots+c_{n-1}e^{a_{n-1}x}=e^{a_nx}. $$

Now (restricting to the subspace of differentiable functions, where all our exponentials obviously live), the operator $D-a_nI: f\mapsto f'-a_nf$ annihilates the final exponential function $f=e^{a_nx}$, but multiplies every other exponential by a nonzero constant (namely $a_i-a_n$ in the case of $f=e^{a_ix}$). Moreover, this operator is linear, so it can be applied term by term; applying it to both sides of our identity turns it into $$ c_1(a_1-a_n)e^{a_1x}+\ldots+c_{n-1}(a_{n-1}-a_n)e^{a_{n-1}x}=(a_n-a_n)e^{a_nx}=0. $$

But by the induction hypothesis of linear independence this can only be true if all the coefficients $c_i(a_i-a_n)$ on the left are zero, which (since $a_i\neq a_n$) means that all $c_i$ are zero. In view of our original expression that is absurd: it would make $e^{a_nx}$ the zero function. So $e^{a_1x},\ldots,e^{a_nx}$ cannot be linearly dependent, completing the induction step and the proof.
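A minimal sympy sketch of the key step, applying $f\mapsto f'-a_nf$ term by term; the three symbolic exponents, with `a3` in the role of $a_n$, are an illustrative assumption.

```python
import sympy as sp

x, a1, a2, a3, c1, c2 = sp.symbols('x a1 a2 a3 c1 c2')

# The purported relation, moved to one side:
lhs = c1*sp.exp(a1*x) + c2*sp.exp(a2*x) - sp.exp(a3*x)

# Apply the operator f -> f' - a3*f to the whole combination.
image = sp.expand(sp.diff(lhs, x) - a3*lhs)

print(image.coeff(sp.exp(a3*x)))             # 0: e^{a3 x} is annihilated
print(sp.factor(image.coeff(sp.exp(a1*x))))  # c1*(a1 - a3)
print(sp.factor(image.coeff(sp.exp(a2*x))))  # c2*(a2 - a3)
```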

4

Suppose that for some reason you do not want to differentiate or let $t$ tend to $\infty$. Evaluating $\mu_1e^{a_1t}+\cdots+\mu_ne^{a_nt}$ at $t=0,1,\ldots,n-1$ yields the system of equations

$\begin{matrix} \mu_1 & + & \mu_2 & + & \cdots & + & \mu_n &=&0\\ \mu_1e^{a_1} &+& \mu_2e^{a_2} &+& \cdots &+& \mu_ne^{a_n}&=&0\\ \mu_1\left(e^{a_1}\right)^2 &+& \mu_2\left(e^{a_2}\right)^2 &+& \cdots &+& \mu_n\left(e^{a_n}\right)^2&=&0\\ \vdots& &\vdots & &\vdots& &\vdots & &\vdots\\ \mu_1\left(e^{a_1}\right)^{n-1} &+& \mu_2\left(e^{a_2}\right)^{n-1} &+& \cdots &+& \mu_n\left(e^{a_n}\right)^{n-1}&=&0, \end{matrix}$

or in other words,

$\begin{bmatrix} 1&1&\cdots&1\\ e^{a_1}&e^{a_2}&\cdots&e^{a_n}\\ \left(e^{a_1}\right)^2&\left(e^{a_2}\right)^2&\cdots&\left(e^{a_n}\right)^2\\ \vdots&\vdots&\ddots&\vdots\\ \left(e^{a_1}\right)^{n-1}&\left(e^{a_2}\right)^{n-1}&\cdots&\left(e^{a_n}\right)^{n-1} \end{bmatrix} \begin{bmatrix} \mu_1\\ \mu_2\\ \vdots\\ \mu_n \end{bmatrix} =\vec 0.$

Because the $a_i$ are distinct and $x\mapsto e^x$ is injective, the values $e^{a_i}$ are distinct, so the Vandermonde matrix $A$ above is invertible; therefore the unique solution $\vec x$ to $A\vec x=\vec 0$ is $\vec x=\vec 0$. Thus all $\mu_i$ are zero.
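A numeric sanity check of this argument, assuming numpy; the four sample exponents are arbitrary distinct values chosen for illustration.

```python
import numpy as np

a = np.array([-1.0, 0.5, 2.0, 3.0])  # sample distinct exponents
nodes = np.exp(a)                    # the distinct values e^{a_i}

# Row k of the matrix from the answer holds (e^{a_i})^k for k = 0..n-1.
V = np.vander(nodes, increasing=True).T
print(np.linalg.det(V))              # nonzero, since the nodes are distinct

# Hence the only solution of V @ mu = 0 is the zero vector:
print(np.linalg.solve(V, np.zeros(len(a))))  # [0. 0. 0. 0.]
```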

Similar reasoning would apply if you evaluated at any arithmetic progression of $n$ terms. Alternatively, evaluating the $0^\text{th}$ through $(n-1)^\text{st}$ derivatives at $0$ (or another point) would give a Vandermonde system with $a_i$ in place of $e^{a_i}$. The latter approach amounts to computing the Wronskian, as referred to in Qiaochu's comment.
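For the derivative variant, a small sympy sketch (three symbolic exponents, chosen for illustration) confirms that the Wronskian matrix at $t=0$ is a Vandermonde matrix in the $a_i$ themselves:

```python
import sympy as sp

t = sp.symbols('t')
a1, a2, a3 = sp.symbols('a1 a2 a3')
fs = [sp.exp(ai * t) for ai in (a1, a2, a3)]

# Entry (i, j) is the i-th derivative of the j-th exponential.
W = sp.Matrix(3, 3, lambda i, j: sp.diff(fs[j], t, i))

# At t = 0 the determinant is the Vandermonde determinant in the a_i:
print(sp.factor(W.det().subs(t, 0)))  # -(a1 - a2)*(a1 - a3)*(a2 - a3)
```

This determinant is nonzero exactly when the $a_i$ are distinct, which is the Wronskian criterion from Qiaochu's comment.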

Jonas Meyer
  • 53,602