
I'm supposed to let $A$ be a square real $n \times n$ matrix with the property that every nonzero vector in $\mathbb{R}^n$ is an eigenvector of $A$, and to show that $A=\lambda I$ for some constant $\lambda \in \mathbb{R}$.

But I am having a lot of trouble with how to start this proof... Does it have anything to do with $Av=\lambda v$? I'm really confused. Any help would be appreciated.

  • This question may be resolved as a special case of a much more general result, which in fact holds for vector spaces $V$ over any field $\Bbb F$; there is no restriction on dimension. See http://math.stackexchange.com/questions/1090362/if-every-vector-is-an-eigenvector-the-operator-must-be-a-scalar-multiple-of-the. Cheers! – Robert Lewis Jan 30 '15 at 04:31

1 Answer


If $n=1$, there's nothing to prove, as every $1 \times 1$ matrix is a scalar multiple of the identity.

If $n\ge 2$, take two linearly independent vectors $v_1,v_2$ with corresponding eigenvalues $\lambda_1$, $\lambda_2$. Since every vector is an eigenvector, so is $v_1-v_2$ with corresponding eigenvalue $\lambda_3$. So then $A(v_1-v_2) = \lambda_3 (v_1 -v_2) = \lambda_3v_1 - \lambda_3v_2$.

On the other hand, distributing $A$ gives us $A(v_1-v_2) = Av_1-Av_2 = \lambda_1v_1 - \lambda_2v_2$, and so $\lambda_1v_1 - \lambda_2v_2 = \lambda_3v_1 - \lambda_3v_2$. Collecting like terms, we see $(\lambda_1-\lambda_3)v_1 + (\lambda_3-\lambda_2) v_2 = 0$, so since $v_1$ and $v_2$ are linearly independent, $\lambda_1-\lambda_3 = 0$ and $\lambda_3-\lambda_2 = 0$. Thus $\lambda_1=\lambda_2=\lambda_3$.
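For concreteness (an illustration added here, not part of the original answer), here is a minimal NumPy sketch of exactly what the argument rules out: a matrix with two independent eigenvectors of *different* eigenvalues, whose difference then fails to be an eigenvector. The matrix and vectors below are arbitrary choices for illustration.

```python
import numpy as np

# e1 and e2 are eigenvectors of A with DIFFERENT eigenvalues (1 and 2),
# so their difference fails to be an eigenvector -- the situation the
# hypothesis "every nonzero vector is an eigenvector" forbids.
A = np.diag([1.0, 2.0])
v1 = np.array([1.0, 0.0])     # eigenvalue 1
v2 = np.array([0.0, 1.0])     # eigenvalue 2

w = v1 - v2                   # (1, -1)
Aw = A @ w                    # (1, -2)

# 2x2 "determinant" test for parallelism: zero iff Aw is a multiple of w.
print(Aw[0] * w[1] - Aw[1] * w[0])   # 1.0, nonzero -> w is not an eigenvector
```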

Since this works for any two linearly independent vectors, it works in particular for any pair of basis vectors. So letting $\{ v_1,\cdots, v_n\}$ be a basis for $\mathbb{R}^n$, the above shows that every basis vector is sent to a scalar multiple of itself, and all of those scalars are the same, say $\lambda$. Thus for any $v \in \mathbb{R}^n$, we can write $v = a_1v_1+\cdots +a_nv_n$ where each $a_i \in \mathbb{R}$, and: $$Av = A\displaystyle\sum_{i=1}^n a_iv_i = \sum_{i=1}^n a_iA(v_i) = \sum_{i=1}^n a_i\lambda v_i = \lambda\sum_{i=1}^n a_iv_i = \lambda v.$$ Since this holds for every $v \in \mathbb{R}^n$, we have $A = \lambda I$.
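As a quick numerical sanity check of this last expansion step (again just a sketch, with an arbitrarily chosen $n$ and $\lambda$), one can build $A$ from its action on the standard basis and confirm that it equals $\lambda I$:

```python
import numpy as np

# If A scales every basis vector by the same lambda, linearity forces
# Av = lambda * v for every v, i.e. A = lambda * I.
n, lam = 4, 3.0
basis = np.eye(n)                                    # columns e_1, ..., e_n

# Build A column by column from its action on the basis: A e_i = lam * e_i.
A = np.column_stack([lam * basis[:, i] for i in range(n)])

v = np.random.randn(n)                               # arbitrary v = sum a_i e_i
print(np.allclose(A @ v, lam * v))                   # True
print(np.allclose(A, lam * np.eye(n)))               # True: A equals lam * I
```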

Notably, we used nothing about $A$ being a matrix, real or otherwise, nor any special property of $\mathbb{R}^n$: the same argument works for any linear operator on a finite-dimensional vector space satisfying this property. It may work in infinite dimensions too, but I haven't done much infinite-dimensional linear algebra.

walkar
  • Nice proof, *plus one!* One question: does your argument cover the case when $v_1$ and $v_2$ are in fact linearly dependent? Cheers! – Robert Lewis Jan 30 '15 at 04:29
  • @RobertLewis The crux is that because the dimension is at least 2, we can always find two vectors $v_1$ and $v_2$ that ARE linearly independent by properties of bases. If $v_1$ and $v_2$ are dependent, they're no good for this proof because we wouldn't be guaranteed that $(\lambda_1-\lambda_3)v_1 + (\lambda_3-\lambda_2)v_2 = 0$ implies $\lambda_1-\lambda_3 = 0 = \lambda_3-\lambda_2$. We need just two independent vectors to get this nice result. – walkar Jan 30 '15 at 04:36
  • In the case of $v_1$, $v_2$ linearly dependent, there is a different argument. See the question linked in my comment above if you are interested in the answer. But I thought your proof was good enough to upvote! More Cheers! – Robert Lewis Jan 30 '15 at 05:22
  • @RobertLewis Thanks for the kind words. Here I don't think we necessarily need to consider that case if we're just trying to prove the property that $A$ must have in this problem. We *can* get the two independent vectors, which shows us that $A$ sends every basis element to the same scalar multiple, and then we're done. Maybe the infinite-dimensional case requires us to consider pairs of dependent vectors, but in the finite-dimensional case I think we have everything we need just from the existence of the independent vectors. – walkar Jan 30 '15 at 13:51
  • Assuming the Axiom of Choice, it works just fine in the infinite-dimensional case. Take a Hamel basis: any basis is automatically a basis of eigenvectors by assumption, and one exists by AC. There can be at most one eigenvalue; if there were more than one, you could pick an eigenvector for each of the two and use your argument above to show they must be the same. – Alan Feb 23 '22 at 05:56