
In Greub's *Linear Algebra*, page 229, question 4, it is asked:

Consider a linear transformation $\phi$ of a real linear space $E$. Prove that an inner product can be introduced in $E$ such that $\phi$ becomes an orthogonal projection if and only if $\phi^2 = \phi$.

We already know that if $\phi$ has $n$ linearly independent eigenvectors, we can define an inner product in $E$ such that $\phi$ is self-adjoint.

To show $\phi^2 = \phi \Rightarrow \phi$ is an orthogonal projection, I first need to know that $\phi$ has $n$ linearly independent eigenvectors, but $\phi^2 = \phi$ only says that $\operatorname{Im} \phi$ is stable under $\phi$, and says nothing about the eigenvectors of $\phi$, so how can we prove this part?

Our
  • You want an orthogonal projection, not an orthogonal transformation. – lhf Mar 26 '18 at 11:29
  • Given your talk about eigenvectors, am I correct in assuming that we're dealing with finite-dimensional (in fact, $n$-dimensional) spaces? – Theo Bendit Mar 26 '18 at 11:57
  • Hint: Think about the projection $h$ defined by $$v \mapsto v - \langle v, e_1\rangle e_1.$$ (1) What's the formula for $h(x, y, z)$, where $(x, y, z)$ is any vector in 3-space? (2) Is it true that $h^2 = h$? – John Hughes Mar 26 '18 at 12:19
  • @TheoBendit Yes, it is finite dimensional real vector space – Our Mar 26 '18 at 13:59
  • @JohnHughes But that is a specific orthogonal projection, whereas we are dealing with an arbitrary orthogonal projection? I mean, I did not understand what you are implying :) – Our Mar 26 '18 at 14:01
  • What makes it orthogonal is the choice of inner product. – John Hughes Mar 26 '18 at 14:13
  • @JohnHughes Yes, I know that, but what I meant was that you are giving a specific map above. – Our Mar 26 '18 at 16:04
  • Yes, and by carefully examining the properties of a specific map, you may (or may not) be able to generalize. I'm sorry that my hint appears to not have helped. You get what you pay for, I guess. – John Hughes Mar 26 '18 at 16:21
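For what it's worth, the hint in the comments can be checked numerically; a minimal sketch assuming the standard dot product on $\mathbb{R}^3$ (the helper name `h` is just illustrative):

```python
import numpy as np

# The hinted map: h(v) = v - <v, e1> e1, with the standard dot product.
e1 = np.array([1.0, 0.0, 0.0])

def h(v):
    return v - np.dot(v, e1) * e1

v = np.array([3.0, -2.0, 5.0])
print(h(v))                          # [ 0. -2.  5.] -- the e1-component is removed
assert np.allclose(h(h(v)), h(v))    # h^2 = h, so h is a projection
```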

4 Answers


Since $\phi^2 - \phi = 0$, the eigenvalues of $\phi$ are among the roots of the polynomial $z^2 - z = 0$, i.e. $0$ and $1$.

Because this polynomial is square-free, the minimal polynomial of $\phi$ (which divides it) is also square-free, and hence $\phi$ is diagonalisable. Alternatively, to show that there is a basis of eigenvectors, we can show that the generalised eigenspaces coincide with the eigenspaces.

First, note that since $\phi = \phi^2$, we have $\operatorname{ker} \phi = \operatorname{ker}(\phi^2)$, which means that the eigenspace corresponding to $0$ is the generalised eigenspace.

Second, note that $$(\phi - I)^2 = \phi^2 - 2\phi + I = \phi - 2\phi + I = I - \phi,$$ hence $$\operatorname{ker}((\phi - I)^2) = \operatorname{ker}(I - \phi) = \operatorname{ker}(\phi - I).$$ Similarly, the eigenspace corresponding to $1$ is the generalised eigenspace.

Since $0$ and $1$ are the only possible eigenvalues, this makes $\phi$ diagonalisable.
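As a numerical sanity check of the argument above (the specific $2 \times 2$ idempotent is my own illustrative example, not from the answer):

```python
import numpy as np

# An oblique idempotent (not symmetric, so not an orthogonal projection
# for the standard dot product):
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
I = np.eye(2)
assert np.allclose(P @ P, P)                 # phi^2 = phi

# The identity used above: (phi - I)^2 = I - phi
assert np.allclose((P - I) @ (P - I), I - P)

# The eigenvalues are only 0 and 1, and the eigenvectors form a basis:
vals, vecs = np.linalg.eig(P)
print(sorted(np.round(vals).astype(int).tolist()))   # [0, 1]
assert np.linalg.matrix_rank(vecs) == 2              # diagonalisable
```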

Theo Bendit
  • What do you mean by square-free ? – Our Mar 26 '18 at 14:02
  • What is a generalised eigenspace ? – Our Mar 26 '18 at 14:03
  • But your argument does not show that $\phi$ indeed has n eigenvectors (with eigenvalues $0$ and $1$) which are linearly independent. – Our Mar 26 '18 at 14:05
  • By "square-free", I mean that the polynomial contains no square factors. For example, $z^2(z - 1)$ and $z(z - 1)^2$ are not square-free. – Theo Bendit Mar 26 '18 at 21:07
  • By generalised eigenspace corresponding to $\lambda$, I'm referring to the kernel of $(\phi - \lambda I)^n$. The kernel of $(\phi - \lambda I)^k$ is the eigenspace when $k = 1$, and continues to grow as $k$ grows. Once the sequence of spaces fails to grow once, it stays the same no matter how large $k$ grows. So, if$$\operatorname{ker} ((\phi - \lambda I)^2) = \operatorname{ker}(\phi - \lambda I),$$then the generalised eigenspace is the same as the eigenspace. Then a Jordan Basis (which always exists for any operator on finite-dim spaces - look it up) consists of eigenvectors. – Theo Bendit Mar 26 '18 at 21:12
  • Clearly these are tools that you haven't come across yet. Sorry about that. Jordan bases are a powerful tool for studying diagonalisability though! – Theo Bendit Mar 26 '18 at 21:15
  • Ok, I will take a look at those. Thanks both for explanations and the answer. – Our Mar 27 '18 at 03:51

We don't have to mention eigenvalues or eigenvectors.

Let $$ U:={\rm im}(\phi)\subset E,\quad V:={\rm ker}(\phi)\subset E\ .$$ Then ${\rm dim}(U)+{\rm dim}(V)={\rm dim}(E)$; furthermore $U\cap V=\{0\}$. (To prove the latter consider an $x\in U\cap V$. Then there is a $y\in E$ with $x=\phi(y)$, hence $0=\phi(x)=\phi^2(y)=\phi(y)=x$.)

It follows that $E=U\oplus V$. Choose a basis $(e_1,\ldots, e_r)$ of $U$ and a basis $(e_{r+1},\ldots, e_n)$ of $V$. Then $(e_1,\ldots, e_n)$ is a basis of $E$. Declare this basis orthonormal.
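A small numerical illustration of this construction, with an idempotent chosen for the example: the Gram matrix $G = B^{-\mathsf T}B^{-1}$ encodes the inner product $\langle x, y\rangle = x^{\mathsf T} G y$ that declares the adapted basis orthonormal.

```python
import numpy as np

# An idempotent map, written in the standard basis:
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# Basis adapted to im(phi) and ker(phi):
u = np.array([1.0, 0.0])    # spans im(P), since P(x, y) = (x + y, 0)
v = np.array([1.0, -1.0])   # spans ker(P)
B = np.column_stack([u, v])
assert np.linalg.matrix_rank(B) == 2   # together they form a basis of R^2

# Declare (u, v) orthonormal: the Gram matrix of the new inner product
# <x, y> = x^T G y is G = B^{-T} B^{-1}.
Binv = np.linalg.inv(B)
G = Binv.T @ Binv

# With respect to G, im(P) is orthogonal to ker(P) ...
assert abs(u @ G @ v) < 1e-12
# ... and P is self-adjoint (<Px, y> = <x, Py>, i.e. P^T G = G P):
assert np.allclose(P.T @ G, G @ P)
```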

  • How does this lead us to finding linearly independent eigenvectors of $\phi$? – Our Mar 26 '18 at 14:02
  • The vectors $e_i$ are a basis of linearly independent eigenvectors of $\phi$: for $i \le r$, we have $\phi(e_i) =1 e_i$ (although you'd have to write a 2-line proof of this); for $i > r$, we have $\phi(e_i) = 0 e_i$. – John Hughes Mar 26 '18 at 14:15
  • @JohnHughes Please don't get me wrong, but the whole purpose of this question is to ask for those two lines of proof. As I have said, if I can show that $\phi$ has linearly independent eigenvectors, I can do the rest of it (OK, maybe I didn't emphasise that part in the question, but this is the case.) – Our Mar 26 '18 at 16:39
  • Well....good luck coming up with those two lines. You might want to think about what you know about $\phi$, since the claim is not true for an arbitrary linear transformation. – John Hughes Mar 26 '18 at 19:49

Just want to give a (relatively) easier answer to my own question:

Consider $\phi - i$, where $i$ denotes the identity map. Since $$\phi (\phi - i) = \phi^2 - \phi = 0, $$

we have

$$(\phi -i)(v) \in Ker(\phi) \quad \forall v \in E.$$

Now, if we restrict $(\phi -i)$ to $Im (\phi)$, we get $$(\phi -i)(\phi(v)) = \phi^2 (v) - \phi (v) = 0,$$ so $$\phi (v) = i(v) \quad \forall v \in Im(\phi).$$

Moreover, we do know that $$E = Ker (\phi) \oplus Im (\phi).$$ In fact, given $a \in E$, let $$b := \phi (a) \in Im(\phi).$$ Then $$\phi^2 (a) = \phi(b) = \phi(a) \Rightarrow \phi (a-b) = 0,$$ so $h := a - b \in Ker(\phi)$ and $$a = h + b,$$ which proves that $E = Ker(\phi) + Im(\phi)$; the sum is direct because $Ker(\phi) \cap Im(\phi) = \{0\}$ (any $x$ in the intersection satisfies $x = \phi(y)$ for some $y$, hence $x = \phi^2(y) = \phi(x) = 0$).

Now, since $\phi|_{Ker(\phi)} = 0$ and $\phi|_{Im(\phi)} = i$, $\phi$ clearly has $n$ linearly independent eigenvectors corresponding to the eigenvalues $0$ and $1$.

Now, introduce an inner product in $E$ such that these eigenvectors are orthogonal, and we are done.
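The decomposition $a = h + b$ above can be checked numerically; the matrix below is an illustrative idempotent, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # idempotent: P @ P == P

# For any a, write a = h + b with b = P(a) in im(P) and h = a - b in ker(P):
a = rng.standard_normal(2)
b = P @ a
h = a - b
assert np.allclose(P @ h, 0)        # h lies in ker(P)
assert np.allclose(h + b, a)        # the decomposition recovers a
assert np.allclose(P @ b, b)        # P acts as the identity on im(P)
```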

Our

Just to increase the diversity of the answers:

Observe that $\phi$ annihilates the polynomial $$p(x) = x^2 - x = x(x-1),$$ so the minimal polynomial $m_\phi$ of $\phi$ has to divide $p$. This implies that either $m_\phi (x) = x$, or $m_\phi (x) = x-1$, or $m_\phi (x) = x(x-1)$.

In the first case, $\phi$ has to be the zero map, so the result is trivial.

In the second case, $\phi = i$, again the result is trivial.

In the third case, the minimal polynomial of $\phi$ is a product of distinct linear factors, each appearing to the power $1$ (moreover, the minimal and characteristic polynomials have the same roots), so by the standard diagonalizability criterion $\phi$ is diagonalizable, and hence has $n$ linearly independent eigenvectors corresponding to the eigenvalues $0$ and $1$.
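A quick numerical check of the third case (illustrative matrix; "diagonalizable" is verified by reconstructing $\phi$ from its eigendecomposition):

```python
import numpy as np

# An idempotent matrix with both eigenvalues 0 and 1 present (the third case):
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# P annihilates p(x) = x(x - 1) = x^2 - x:
assert np.allclose(P @ P - P, 0)

# p is square-free, so P is diagonalizable: numerically, the eigenvector
# matrix returned by eig is invertible and V diag(vals) V^{-1} == P.
vals, V = np.linalg.eig(P)
assert np.allclose(V @ np.diag(vals) @ np.linalg.inv(V), P)
```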

Our