
$\textbf{Definition:}$ Let $P$ be a square matrix. We say $P$ is diagonalizable iff

$\exists$ an invertible matrix $A$: $\exists$ a diagonal matrix $D$: $A^{-1}PA=D$.

I asked a question earlier about how to prove the following proposition. My goal is to complete the full proof, so I have written out everything I have so far to show where I am going, and I have marked the part where I am currently stuck.

$\textbf{Proposition:}$ Let $P$ be an $n\times n$ matrix.

If $P^2=P$, then $P$ is diagonalizable. Link to help for future reference.
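
For concreteness, here is a quick NumPy sanity check of the statement on a small example idempotent matrix of my own choosing (purely illustrative, not part of the proof):

```python
import numpy as np

# An example idempotent matrix: orthogonal projection onto the line spanned by (1, 1).
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
assert np.allclose(P @ P, P)              # P^2 = P

# Its eigenvalues are (numerically) only 0 and 1 ...
eigenvalues, eigenvectors = np.linalg.eig(P)
print(np.round(eigenvalues, 10))          # e.g. [1. 0.]

# ... and conjugating by the eigenvector matrix diagonalizes P.
A = eigenvectors
D = np.linalg.inv(A) @ P @ A
print(np.round(D, 10))                    # diagonal matrix with the eigenvalues
```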

$\textbf{Proof:}$ Let $P$ be an $n\times n$ matrix. Assume $P^2=P$. [First, show $Im(I-P)=ker(P)$.]

$\subseteq$ Let $y\in Im(I-P)$. [Show $y\in ker(P)$. It suffices to show $P(y)=0$.] Then, $y=(I-P)(z)$ for some $z\in \mathbb{R}^n$. Thus, the following holds true:

$$\begin{aligned} P(y)&=P((I-P)(z)) \text{ as } y=(I-P)(z)\\ &=P(z-P(z)) \\ &=P(z)-\underbrace{P^2}_{=P}(z) \\ &=P(z)-P(z) \\ &=0. \end{aligned}$$

$\supseteq$ Let $y\in ker(P)$. [Show $y\in Im(I-P)$.] Then,

$$\begin{aligned} (I-P)(y)&=I(y)-P(y)\\ &=I(y)-0\\ &=y. \end{aligned}$$ As $y=(I-P)(y)$, $y\in Im(I-P)$.

Thus, it has been shown that $ker(P)=Im(I-P)$.
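
If a numerical check helps, here is a short NumPy sketch (with another example idempotent of my own choosing) confirming that $P$ kills every column of $I-P$ and that the two subspaces have the same dimension:

```python
import numpy as np

# Example idempotent in R^3: projection onto the xy-plane.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(P @ P, P)

I = np.eye(3)

# Im(I - P) is contained in ker(P): P annihilates every column of I - P.
print(np.allclose(P @ (I - P), 0))              # True

# Combined with the inclusion, equal dimensions give equality of the subspaces.
dim_im  = np.linalg.matrix_rank(I - P)          # dim Im(I - P)
dim_ker = 3 - np.linalg.matrix_rank(P)          # dim ker(P), by rank-nullity
print(dim_im, dim_ker)                          # 1 1
```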

$\textbf{Question/Part Stuck On:}$ In the case $P\neq I$, since $Im(I-P)=ker(P)$, there must be an eigenspace that corresponds to the eigenvalue zero. Why must there be an eigenspace that corresponds to an eigenvalue of zero and also an eigenspace that corresponds to an eigenvalue of one here? It does not make sense to me why $X^2-X=X(X-1)=0$ gives us our eigenvalues of $1$ and $0$. I have also seen others go from knowing $v=(v-P(v))+P(v)$ to all of a sudden knowing that the direct sum of these eigenspaces yields the whole space, which is also quite a jump for me. I am lost as to where to go next in this proof, and any help would be greatly appreciated.

W. G.
  • The only possible eigenvalues are $0$ and $1$, but $1$ might not be an eigenvalue of the matrix at hand. E.g., the zero matrix is diagonalizable and satisfies $P^2=P$. – amd Aug 23 '18 at 17:42
  • Why are the only possible eigenvalues $0$ and $1$ here? I am just going to assume $P\neq I$ from here on, as the theorem holds true when $P=I$. – W. G. Aug 23 '18 at 17:51
  • The minimal polynomial divides $x^2-x$, so its only linear factors can be $x$ and $(x-1)$. – amd Aug 23 '18 at 17:53
  • More generally, if a linear operator $T$ on a (not necessarily finite-dimensional) vector space $V$ over a field $K$ satisfies a polynomial equation $p(T)=0$ and $p(x)\in K[x]$ factors into linear factors over $K$, each having multiplicity $1$, then $T$ is diagonalizable over $K$. See https://math.stackexchange.com/questions/2117662/if-a3-a-prove-that-ker-lefta-i-rightim-lefta-i-right-v/2117755#2117755. – Batominovski Aug 23 '18 at 21:46

3 Answers


You are wrong when you assert that, if $P\neq\operatorname{Id}$, then $P$ must have an eigenspace that corresponds to the eigenvalue $1$. Suppose, for instance, that $P$ is the null matrix.

Since $P^2=P$, we have $P^2-P=0$. So, if $Q(x)=x^2-x$, then $Q(P)=0$. So, the minimal polynomial $m_P(x)$ of $P$, which must divide $Q(x)$, is one of these polynomials: $Q(x)$, $x$, or $x-1$. In each case, it has no roots other than $0$ and $1$. Since the eigenvalues of $P$ are the roots of $m_P$, $P$ cannot have other eigenvalues.
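
As for why $m_P(x)$ must divide any polynomial that annihilates $P$, here is a sketch of the standard polynomial-division argument (nothing beyond the definition of the minimal polynomial is assumed).

Write $Q(x)=m_P(x)\,q(x)+r(x)$ with $r=0$ or $\deg r<\deg m_P$. Evaluating at $P$ gives
$$0=Q(P)=m_P(P)\,q(P)+r(P)=r(P),$$
so $r$ annihilates $P$ while having degree smaller than $\deg m_P$; by the minimality of $m_P$, this forces $r=0$, and hence $m_P(x)\mid Q(x)$.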

  • As $m_p(x):=\det(xI-P)$, why must $m_p(x)\mid Q(x)$? Could you explain a little more about that? I truly appreciate your help. I am thinking it could be for a reason like this, which might be way off: Say $m_p(x)=(x-r_1)(x-r_2)\cdots(x-r_m)$ for some $m\in \mathbb{N}$. Let $k$ be an arbitrary element in $\lbrace 1, 2, ..., m\rbrace$. Then $(x-r_k)\mid m_p(x)$ iff $m_p(r_k)=0$. [Show $Q(r_k)=0$, as this will imply $(x-r_k)\mid Q(x)$, and since $k$ was arbitrary, then $m_p(x)\mid Q(x)$.] But I do not know how to do this or if this is on the right track. – W. G. Aug 23 '18 at 18:25
  • Why are you saying that $m_p(x)=\det(x\operatorname{Id}-P)$? That's the characteristic polynomial. I am talking about the minimal polynomial of $P$, that is, the non-zero monic polynomial $Q(x)$ with smallest degree such that $Q(P)=0$. – José Carlos Santos Aug 23 '18 at 18:27
  • Hello @JoséCarlosSantos, I still have one question and I'd really appreciate your help. How does the fact that the eigenvalues are $0$ and $1$ imply that $P$ is diagonalizable? – luisegf Sep 08 '20 at 22:11
  • @luisegf I did not claim that. It is not true. The only eigenvalue of $\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$ is $1$, but this matrix is not diagonalizable. – José Carlos Santos Sep 08 '20 at 22:21

We have $I = P + (I-P)$ and so any $x$ can be written as $x=x_1+x_2$ where $x_1 \in {\cal R}P$ and $x_2 \in {\cal R}(I-P)$. Furthermore, if $Py_1 = (I-P)y_2$, then multiplying by $P$ shows that $Py_1 = 0$ and $(I-P) y_2 = 0$ from which we get ${\cal R}P \cap {\cal R}(I-P) = \{0\}$. Hence $\mathbb{R}^n = {\cal R}P \oplus {\cal R}(I-P)$.

Note that if $x \in {\cal R}P$ we have $Px = x$, and if $x \in {\cal R}(I-P)$ we have $Px = 0$. In particular, if $b_1,...,b_{k_1}$ form a basis for ${\cal R}P$ and $c_1,...,c_{k_2}$ form a basis for ${\cal R}(I-P)$, then $b_1,...,b_{k_1},c_1,...,c_{k_2}$ form a basis for $\mathbb{R}^n$, and $P$ is diagonal in this basis.
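
A short NumPy sketch of this construction (the example matrix and the SVD-based helper for extracting column-space bases are my own choices, just for illustration):

```python
import numpy as np

def column_space_basis(M, tol=1e-10):
    """Columns of the returned array form an orthonormal basis of the column space of M."""
    U, s, _ = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    return U[:, :r]

# Example idempotent (an oblique projection): P @ P == P, but P is not symmetric.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

n = P.shape[0]
B1 = column_space_basis(P)               # basis of R(P)
B2 = column_space_basis(np.eye(n) - P)   # basis of R(I - P)
B  = np.hstack([B1, B2])                 # basis of R^n, by the direct-sum argument

# In this basis P is diagonal: ones for R(P), zeros for R(I - P).
print(np.round(np.linalg.inv(B) @ P @ B, 10))   # [[1. 0.], [0. 0.]]
```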

copper.hat

Note that the eigenvalues of $P$ can only be $0$ or $1$.

View $P$ as a linear operator on $\Bbb{R}^n$. Use $\text{ker}\; P=\text{Im}\;(I-P)$ to prove $\Bbb{R}^n=\text{Im}\, P \oplus \text{ker}\, P$, and then prove that the matrix of $P$ with respect to some basis has the form $$\text{diag}\;(1,1,...,1,0,0,...,0)$$ where the number of $1$'s is the rank of $P$.
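
In case the jump to that diagonal form is the sticking point, here is a sketch of the two facts it uses. For $v\in \text{Im}\, P$, write $v=P(u)$; then $P(v)=P^2(u)=P(u)=v$, so $P$ fixes every basis vector taken from $\text{Im}\, P$, while $P$ sends every basis vector taken from $\text{ker}\, P$ to $0$. Concatenating a basis of $\text{Im}\, P$ (of size $\text{rank}\, P$) with a basis of $\text{ker}\, P$ therefore gives, by the direct sum, a basis of $\Bbb{R}^n$ in which the matrix of $P$ is exactly $\text{diag}\;(1,...,1,0,...,0)$.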

The conclusion follows from the above fact!