
Given the following matrix:
$$B=\begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & a^2 \\ 1 & 1 & 0 \end{bmatrix}$$ I tried to find for which values of $a$, the matrix $B$ is diagonalizable.

I found that the characteristic polynomial is: $P_B(x) = (1-x)(x+a)(x-a)$.
(Stop reading here and skip to the Edit section below)
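This factorization can be sanity-checked symbolically; a minimal sketch assuming SymPy is available (note that SymPy's `charpoly` computes $\det(xI-B)$, which for this $3\times 3$ matrix equals $-P_B(x)$, so it should factor as $(x-1)(x-a)(x+a)$):

```python
import sympy as sp

a, x = sp.symbols('a x')
B = sp.Matrix([[1, 0, 0],
               [1, 0, a**2],
               [1, 1, 0]])

# SymPy's charpoly is det(x*I - B); expanding along the first row
# gives (x - 1)(x^2 - a^2) = (x - 1)(x - a)(x + a).
p = B.charpoly(x).as_expr()
assert sp.expand(p - (x - 1)*(x - a)*(x + a)) == 0
```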

Therefore I tried to find the eigenspace for each eigenvalue, but eventually concluded that:

  • for $a=-1$, the eigenspaces are linearly dependent.
  • for $a=1$, the trace of the diagonal form matrix (call it $D$) isn't equal to the trace of the matrix composed of the eigenvectors (call it $Q$).
  • ($a$ must be $1$ or $-1$, according to the homogeneous equations with which I found the eigenspaces.)

The diagonal form matrix that I have found (D): $$D=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$

The matrix that's composed of the eigenvectors that I have found (Q): $$Q=\begin{bmatrix} 0 & -1 & 0 \\ 1 & 2 & -1 \\ 1 & 1 & 1 \end{bmatrix}$$


Edit:

I have found a mistake in the row reduction process of the matrices... So now we have:

The diagonal form matrix (D): $$D=\begin{bmatrix} 1 & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & -a \end{bmatrix}$$

The matrix that's composed of the eigenvectors (Q): $$Q=\begin{bmatrix} 1-\frac{a^2+1}{2} & 0 & 0 \\ \frac{a^2+1}{2} & a & -a \\ 0 & 1 & 1 \end{bmatrix}$$

D and Q should be similar, thus by comparing their trace, I have found that: $a_1=(1+\sqrt{2})$ and $a_2=(1-\sqrt{2})$. Does it make sense?


Edit2 - To conclude:

I was confused about the relations between the matrices $B,Q \text{ and }D$:
At first I thought that matrices $Q \text{ and }D$ must be similar, but that isn't necessarily true!
Only matrices $B \text{ and }D$ must be similar.


Edit 3 - Response to Marc:

I understood everything until the last sentence. I also tried an example using Wolfram Alpha on this matrix. Does it have something to do with a nilpotent characteristic of the matrix? Indeed I can compute $(B-pI)(B-qI)$, and I could also see from the example that it's true, but I don't understand the rules (or properties) which allow this to be true.

Dor
    About the edit: I think you are confused. It is not $D$ and $Q$ that must be similar, but $D$ and the original matrix $B$. And $Q$ is the change of basis matrix, so if you made no mistakes you should have $B=QDQ^{-1}$. Note also that $Q$ is just some matrix whose columns are eigenvectors; those columns can be scaled arbitrarily, so taking the trace of $Q$ is meaningless. – Marc van Leeuwen Jan 24 '15 at 08:27
  • @MarcvanLeeuwen: Indeed I was confused..! (I wrote below a comment about that to the responder @abel). Thanks for the emphasis! :) – Dor Jan 24 '15 at 12:57

2 Answers


The eigenvalues are $1, a, -a$, so if $a^2 \neq 1$ and $a \neq 0$, then $B$ has three distinct eigenvalues, and therefore $B$ is diagonalizable.

Case $a^2 = 1$: the rank of $$B - I = \pmatrix{0&0&0\\1&-1&1\\1&1&-1}$$ is $2$, because the first and second columns are linearly independent, so the null space of $B-I$ has dimension $1$, while the eigenvalue $1$ has algebraic multiplicity $2$. Therefore when $a^2=1$, $B$ is not diagonalizable.
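The rank computation can be confirmed numerically; a minimal sketch assuming NumPy, using the question's matrix for both values with $a^2 = 1$:

```python
import numpy as np

for a in (1.0, -1.0):
    B = np.array([[1, 0, 0],
                  [1, 0, a**2],
                  [1, 1, 0]], dtype=float)
    # rank(B - I) = 2, so the eigenspace of the eigenvalue 1 is only
    # 1-dimensional, although 1 is a double root of the char. polynomial.
    assert np.linalg.matrix_rank(B - np.eye(3)) == 2
```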

Added after user1551's comment. Case $a = 0$:

the rank of $B$ is $2$, therefore the null space of $B$ (the eigenspace of the double eigenvalue $0$) has dimension $1$, so again $B$ is not diagonalizable.
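The same numerical check works for this case too; a minimal sketch assuming NumPy:

```python
import numpy as np

# The question's matrix B with a = 0
B0 = np.array([[1, 0, 0],
               [1, 0, 0],
               [1, 1, 0]], dtype=float)
# rank(B0) = 2, so the null space (the eigenspace of the double
# eigenvalue 0) has dimension 3 - 2 = 1: not diagonalizable.
assert np.linalg.matrix_rank(B0) == 2
```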

abel

Given that the set of eigenvalues is $E=\{1,a,-a\}$, the matrix will be diagonalisable if and only if $P[B]=0$, where $P=\prod_{\lambda\in E}(X-\lambda)$ (a polynomial without repeated factors). If $E$ has $3$ elements then $P$ is the characteristic polynomial, and $P[B]=0$ always (by the Cayley-Hamilton theorem, or simply because you know that the characteristic polynomial having $n=3$ distinct roots implies being diagonalisable). So there remain the following two cases to consider:

  1. Case $a=0$. Then $P=X(X-1)$, and $B$ is not diagonalisable since $B(B-I)\neq0$.

  2. Case $a^2=1$. Then $P=(X-1)(X+1)=X^2-1$, and $B$ is not diagonalisable since $B^2-I\neq0$.
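Both non-vanishing claims can be checked directly in exact arithmetic; a minimal sketch assuming SymPy:

```python
import sympy as sp

def B_of(a):
    return sp.Matrix([[1, 0, 0],
                      [1, 0, a**2],
                      [1, 1, 0]])

I3 = sp.eye(3)

# Case a = 0: P = X(X - 1); B(B - I) is nonzero, so B is not diagonalisable.
B = B_of(0)
assert B * (B - I3) != sp.zeros(3, 3)

# Case a^2 = 1: P = (X - 1)(X + 1); B^2 - I is nonzero, so B is not diagonalisable.
B = B_of(1)
assert B**2 - I3 != sp.zeros(3, 3)
```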

  • Why did you place $B$ in the equation of the characteristic polynomial? – Dor Jan 20 '15 at 18:10
  • No, I did not place $B$ in the characteristic polynomial, but in the polynomial that has each eigenvalue as a simple root (this coincides with the characteristic polynomial only in the case of the first paragraph). It is a theorem that the result of this substitution is the zero matrix if and only if $B$ is diagonalisable. It is easy to see that the condition is necessary for being diagonalisable, because every eigenspace is in the kernel of the substitution by construction. But in fact it is also a sufficient condition, which is why I used it. – Marc van Leeuwen Jan 21 '15 at 06:17
  • I think that I understand. By "theorem" do you mean to $(B - \lambda I)\mathbf v = 0$ ? Would you please add a reference to a further reading..? Tnx :) – Dor Jan 22 '15 at 20:36
  • The theorem says what I said it says: if $P$ is a polynomial whose roots are simple and such that $P[\lambda]=0$ for every eigenvalue of $B$, then $B$ is diagonalisable if and only if $P[B]=0$. See also this answer. – Marc van Leeuwen Jan 22 '15 at 21:41
  • Sorry, but I can't relate the facts that you presented to my understanding of the algebra I was taught. It seems that I'm missing a link between what I know and what you write, but I don't know what it is. Yet the idea seems logical. I appreciate the effort :) – Dor Jan 23 '15 at 23:54
  • @Dor: You may not have seen the theorems, but the idea is not complicated. Testing diagonalisability does not necessarily require computing the eigenspaces. First, computing polynomials (linear combinations of powers) of$~B$ is compatible with change of basis. Being diagonalisable means becoming diagonal after some change of basis. Now if you imagine $B$ to diagonalise to $D$ with diagonal entries all in some given set, say $S=\{p,q\}$, then $(D-pI)(D-qI)$ will be zero (easy computation). Without knowing $D$ you can compute $(B-pI)(B-qI)$; if it is nonzero then $B$ cannot be diagonalisable. – Marc van Leeuwen Jan 24 '15 at 08:39
  • Due to my long comment, I edited my question above. – Dor Jan 24 '15 at 14:03
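The idea in Marc's last comment above can be seen in a toy experiment; a minimal sketch assuming SymPy, where the values $p=2$, $q=5$ and the change-of-basis matrix $Q$ are made up for illustration:

```python
import sympy as sp

p, q = 2, 5
D = sp.diag(p, p, q)   # diagonal with all entries in {p, q}
I3 = sp.eye(3)

# For such a D, (D - pI)(D - qI) = 0: each diagonal entry kills one factor.
assert (D - p*I3) * (D - q*I3) == sp.zeros(3, 3)

# Polynomials in a matrix are compatible with change of basis:
# if B = Q D Q^{-1}, then (B - pI)(B - qI) = Q (D - pI)(D - qI) Q^{-1} = 0.
Q = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])   # any invertible matrix works
B = Q * D * Q.inv()
assert (B - p*I3) * (B - q*I3) == sp.zeros(3, 3)
```

So a nonzero $(B-pI)(B-qI)$ rules out diagonalising to a $D$ with entries in $\{p,q\}$, without ever computing eigenvectors.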