
Consider the matrix $$A=\begin{pmatrix}q & p & p\\p & q & p\\p & p & q\end{pmatrix}$$ with $p,q\neq 0$. Its eigenvalues are $\lambda_{1,2}=q-p$ and $\lambda_3=q+2p$ where one eigenvalue is repeated. I'm having trouble diagonalizing such matrices. The eigenvectors $X_1$ and $X_2$ corresponding to the eigenvalue $(q-p)$ have to be chosen in a way so that they are linearly independent. Otherwise the diagonalizing matrix $S$ becomes non-invertible. What is the systematic way to find normalized linearly independent eigenvectors in this situation?

  • 1
    There isn't a systematic way. For some matrices, diagonalization is entirely impossible, like with $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$. – Arthur Apr 10 '18 at 19:52
  • Try using Gaussian elimination to solve $$AX = (q-p)X$$ – Kroki Apr 10 '18 at 19:53
  • 5
    in this example the matrix is symmetric so the eigenvectors can be mutually orthogonal – David Quinn Apr 10 '18 at 19:53
  • 3
    What's more, as every row has an identical sum $q + 2p$, $(1, 1, 1)$ must be an eigenvector. – Connor Harris Apr 10 '18 at 19:53
  • @DavidQuinn "must be" or there exists an orthogonal set of eigenvectors. – Doug M Apr 10 '18 at 20:02
  • They don't have to be orthogonal if you have a repeated eigenvalue. In this example, as has been pointed out, $(1,1,1)$ is an eigenvector corresponding to the simple eigenvalue $q+2p$. For the other eigenvectors, any two independent vectors in the plane perpendicular to this will be eigenvectors for the repeated eigenvalue, such as $(1,-1,0)$ and, if you like, $(1,0,-1)$, which are not actually perpendicular to each other. It's just more convenient to choose an orthogonal diagonalizing matrix – David Quinn Apr 10 '18 at 20:55
  • The systematic way is to compute a basis for the null space of $A-(q-p)I$. There’s no guarantee that a matrix with repeated eigenvalues is diagonalizable, though. That said, this particular type of matrix has come up many times on this site, here, for instance. – amd Apr 10 '18 at 22:59

2 Answers

1

In linear algebra, a distinction is made between algebraic and geometric multiplicity. The algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic polynomial, and the geometric multiplicity is the dimension of its associated eigenspace. If each eigenvalue's geometric multiplicity matches its algebraic multiplicity, then the matrix can be diagonalized.

To find the eigenvectors for an eigenvalue $\lambda$, one can take $A-\lambda I$ and row reduce it; the resulting homogeneous system has somewhat arbitrary solutions. For instance, here you can first look for an eigenvector using just the "first" two coordinates, getting $(1,-1,0)$, and then take an independent vector from the remaining eigenspace. Or you could take $(1,-\tfrac12,-\tfrac12)$ as your first eigenvector and end up with a different basis.
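Not part of the original answer, but as a quick numerical sketch (NumPy, with the illustrative values $p=1$, $q=2$, which are my choice and not from the question), both suggested vectors do lie in the eigenspace of $q-p$:

```python
import numpy as np

p, q = 1.0, 2.0  # illustrative nonzero values (an assumption for this demo)
A = np.array([[q, p, p],
              [p, q, p],
              [p, p, q]])

# Two independent vectors in the plane x + y + z = 0, the (q - p)-eigenspace
v1 = np.array([1.0, -1.0, 0.0])
v2 = np.array([1.0, -0.5, -0.5])

# Check A v = (q - p) v for both choices
print(np.allclose(A @ v1, (q - p) * v1))  # True
print(np.allclose(A @ v2, (q - p) * v2))  # True
```

Any other pair of independent vectors in that plane would work equally well, which is the "somewhat arbitrary" freedom described above.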

If some eigenvalue has algebraic multiplicity larger than its geometric multiplicity, then you cannot diagonalize the matrix, but there is a more general decomposition, the Jordan canonical form, that applies to every matrix.
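As a sketch of this distinction (using SymPy; the example matrix is the one from Arthur's comment, not from this answer), the multiplicity gap can be checked directly:

```python
import sympy as sp

# The shear matrix from the comments: eigenvalue 1 has algebraic
# multiplicity 2, but its eigenspace is only one-dimensional.
M = sp.Matrix([[1, 1],
               [0, 1]])

geo_mult = len((M - 1 * sp.eye(2)).nullspace())
print(geo_mult)  # 1, strictly less than the algebraic multiplicity 2

# So M is not diagonalizable; its Jordan form is a single 2x2 Jordan block.
P, J = M.jordan_form()
print(J)  # Matrix([[1, 1], [0, 1]])
```

By contrast, for the symmetric matrix in the question the two multiplicities agree for every eigenvalue, which is why it is diagonalizable.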

Acccumulation
0

The sum of each row of the matrix is $q+2p$, and therefore $(1,1,1)$ is an eigenvector corresponding to the eigenvalue $q+2p$. To compute the remaining eigenvectors, look for a basis of the null space of $$A-(q-p)\operatorname{Id}=\begin{pmatrix}p&p&p\\p&p&p\\p&p&p\end{pmatrix}.$$ You can take $(1,-1,0)$ and $(0,1,-1)$.
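Not from the original answer, but as a minimal numerical check (NumPy, with the illustrative values $p=1$, $q=2$, which are assumptions for the demo), these three vectors do diagonalize $A$:

```python
import numpy as np

p, q = 1.0, 2.0  # illustrative nonzero values (an assumption for this demo)
A = np.array([[q, p, p],
              [p, q, p],
              [p, p, q]])

# Columns: eigenvector for q + 2p, then the two for the repeated q - p
S = np.array([[1.0,  1.0,  0.0],
              [1.0, -1.0,  1.0],
              [1.0,  0.0, -1.0]])

D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))  # diag(q + 2p, q - p, q - p)
```

Because the two null-space vectors are independent of each other and of $(1,1,1)$, the matrix $S$ is invertible and $S^{-1}AS$ comes out diagonal.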