
Matrix $A$ is given as $$\begin{bmatrix} 2 & 0 & 1\\ 0 & 2 & 0 \\ 1 & 0 & 2 \end{bmatrix}$$

I am told to find its eigenvalues. Before heading to the characteristic polynomial and subtracting $\lambda$ from the principal diagonal elements, I am thinking of simplifying this matrix the way we often do before calculating a determinant. I will show what I mean below:

$$A=\begin{bmatrix} 2 & 0 & 1\\ 0 & 2 & 0 \\ 1 & 0 & 2 \end{bmatrix}$$

$R_3\to 2R_3-R_1$

$$A=\begin{bmatrix} 2 & 0 & 1\\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

This seems to make the usual procedure redundant: the matrix just became upper triangular, so the eigenvalues read off the diagonal would apparently be $2$ and $3$. I know I am wrong; this just occurred to me, and I want to know what is going wrong in my head. Someone correct me, please.

Daman
  • What makes you think that eigenvalues are invariant under elementary transformations? A non-example: dividing the first row of $\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$ by $3$ changes it to the identity. The first matrix has $(3, 1)$ as its eigenvalues; the latter has only $1$ as an eigenvalue. – John Hughes Apr 03 '18 at 11:38
  • @JohnHughes Maybe I am lacking the geometric interpretation of vectors and eigenvalues. Well, I know it's wrong; I just couldn't figure out the reason for it by myself. – Daman Apr 03 '18 at 11:41
  • An elementary row transformation is the same as multiplication on the left by a certain matrix $M$. But $MA-\lambda I$ is not the same as $M(A-\lambda I)=MA-\lambda M$. On the other hand, if you simultaneously do the row transformation together with its inverse for the columns, then it works. Namely, $M(A-\lambda I)M^{-1}=MAM^{-1}-\lambda I$. (A worked instance for this matrix is shown after the comments.) –  Apr 03 '18 at 11:48
  • @yixing Thanks. That's understandable. – Daman Apr 03 '18 at 11:50
  • @yixing You should post that as an answer... If you do, I will upvote it. – user577215664 Apr 03 '18 at 11:54
  • Note that, with a bit of experience, you can find the eigenvalues of this matrix by inspection. – amd Apr 03 '18 at 19:47
  • @amd Tell me please. – Daman Apr 03 '18 at 20:52
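As a worked instance of the similarity trick in yixing's comment, applied to the matrix from the question: the row operation $R_3\to 2R_3-R_1$ is left multiplication by the matrix $M$ below, and compensating with the inverse operation on the columns, i.e. right multiplication by $M^{-1}$, produces a similar matrix with the same eigenvalues as $A$:

$$M=\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ -1 & 0 & 2 \end{bmatrix},\qquad MAM^{-1}=\begin{bmatrix} 2 & 0 & 1\\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ \tfrac12 & 0 & \tfrac12 \end{bmatrix}=\begin{bmatrix} \tfrac52 & 0 & \tfrac12\\ 0 & 2 & 0 \\ \tfrac32 & 0 & \tfrac32 \end{bmatrix}.$$

Expanding $\det(\lambda I-MAM^{-1})$ along the second column gives $(\lambda-2)\bigl[(\lambda-\tfrac52)(\lambda-\tfrac32)-\tfrac34\bigr]=(\lambda-1)(\lambda-2)(\lambda-3)$, so the eigenvalues $1,2,3$ are preserved, whereas the row operation alone produced a triangular matrix whose diagonal $2,2,3$ is not the spectrum of $A$.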

1 Answer


Elementary row operations don’t preserve eigenvalues, as the simple example John Hughes gave in his comment shows. If you’re going to fiddle with a matrix in this way in order to try to simplify it, you have to do this to $\lambda I-A$, not $A$ itself, since that’s the determinant that you’re trying to compute when you’re working out the characteristic polynomial. (Yixing’s comment gives you a way to manipulate $A$ without affecting its eigenvalues, but I’ve not found that method to be very practical when calculating by hand. It’s very easy to make errors when applying the inverse of an elementary operation to the columns of the matrix.) Because of all the zeros in $A$, you can get a partially-factored characteristic polynomial for it pretty easily, but it’s also possible to find the eigenvalues and corresponding eigenvectors by inspection.
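To illustrate the first point, here is a sketch of the asker's row-operation idea carried out on $\lambda I-A$ instead of $A$, using an operation that leaves the determinant unchanged (adding a multiple of one row to another):

$$\lambda I-A=\begin{bmatrix} \lambda-2 & 0 & -1\\ 0 & \lambda-2 & 0 \\ -1 & 0 & \lambda-2 \end{bmatrix}\xrightarrow{\,R_1\to R_1+(\lambda-2)R_3\,}\begin{bmatrix} 0 & 0 & (\lambda-2)^2-1\\ 0 & \lambda-2 & 0 \\ -1 & 0 & \lambda-2 \end{bmatrix}.$$

Expanding along the first row then gives $\det(\lambda I-A)=\bigl[(\lambda-2)^2-1\bigr](\lambda-2)=(\lambda-1)(\lambda-2)(\lambda-3)$, so the eigenvalues are $1$, $2$, and $3$.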

Observe first that the second column is a multiple of $(0,1,0)^T$. Recalling that the columns of a transformation matrix are the images of the basis vectors, we have our first pair: $(0,1,0)^T$ with eigenvalue $2$. This parallels what happens when expanding $\det(\lambda I-A)$ along the second column: you find that the characteristic polynomial must be of the form $(\lambda-2)p(\lambda)$, so you get one of the eigenvalues right away.
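As a quick check of this first pair:

$$A\begin{bmatrix}0\\1\\0\end{bmatrix}=\begin{bmatrix}0\\2\\0\end{bmatrix}=2\begin{bmatrix}0\\1\\0\end{bmatrix}.$$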

Turning now to the other two columns, observe that their entries in the first and last rows sum to the same value. Adding the first and last columns is equivalent to multiplying $A$ by $(1,0,1)^T$, so we have our second pair: $(1,0,1)^T$ with eigenvalue $2+1=3$.
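Checking this second pair by summing the first and last columns:

$$A\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}2+1\\0\\1+2\end{bmatrix}=3\begin{bmatrix}1\\0\\1\end{bmatrix}.$$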

For the last eigenvalue and eigenvector, you might notice that the difference of the first and last columns is $(1,0,-1)^T$, which is exactly what you’d multiply $A$ by to get this difference of columns, so there’s the last pair. However, you always get the last eigenvalue “for free” because the sum of the eigenvalues is equal to the trace: $2+3+\lambda = 6$, therefore $\lambda = 1$. So even if you don’t spot this particular linear combination of columns, you have the third eigenvalue. To find a corresponding eigenvector, you can take advantage of the symmetry of the matrix: the eigenspaces of a real symmetric matrix are orthogonal, which means that for this matrix, the eigenspace of $1$ is the orthogonal complement of the sum of the other two eigenspaces. We’re working in $\mathbb R^3$, so we can compute this via a cross product: $(0,1,0)^T\times(1,0,1)^T = (1,0,-1)^T$ as computed previously.
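Both of these observations can be verified directly:

$$A\begin{bmatrix}1\\0\\-1\end{bmatrix}=\begin{bmatrix}2-1\\0\\1-2\end{bmatrix}=1\cdot\begin{bmatrix}1\\0\\-1\end{bmatrix},\qquad \operatorname{tr}A=2+2+2=6=2+3+1.$$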

One other shortcut that can be used for this matrix takes advantage of a different aspect of $A$. If you delete the second row and column of $A$, you're left with a matrix of the form $aI+b\mathbf 1$ (here $\mathbf 1$ is a square matrix of all $1$s), whose eigenvalues and eigenvectors are particularly easy to find by inspection. I won't go into the details here because they've been amply covered in other questions on this site.
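For completeness, a sketch of that shortcut here, using the standard fact that the $n\times n$ matrix $aI+b\mathbf 1$ has eigenvalue $a+nb$ on the all-ones vector and eigenvalue $a$ on its orthogonal complement: deleting the second row and column of $A$ leaves

$$\begin{bmatrix}2&1\\1&2\end{bmatrix}=1\cdot I+1\cdot\mathbf 1,$$

so its eigenpairs are $(1,1)^T$ with eigenvalue $1+2\cdot 1=3$ and $(1,-1)^T$ with eigenvalue $1$. Because the second coordinate decouples in $A$, inserting a zero middle entry recovers the eigenvectors $(1,0,1)^T$ and $(1,0,-1)^T$ found above.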

amd