
I have this matrix:

$$ A= \begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & 0 \\ -1 & 1 & 2 \\ \end{pmatrix} $$

I have found the eigenvalues: $$\lambda_{1,2,3}=1,$$ so $\lambda=1$ with algebraic multiplicity $\mu=3$.

I'm expecting one eigenvector plus two generalized eigenvectors. But, proceeding, I run into trouble:

$$ (A-\lambda I)\mathbf{x}= \begin{pmatrix} -1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \\ \end{pmatrix} \begin{pmatrix} x \\ y\\ z\\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \end{pmatrix} $$

which clearly reduces to a single equation (two identical rows and one zero row):

$$-x+y+z=0$$

I don't know how I should proceed. I can find a solution by guessing some values, but I don't like that approach. What is the best and most reliable method to solve this problem? Thank you very much.
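Not part of the original question, but the eigenvalue computation above can be double-checked with SymPy (assumed available): the characteristic polynomial should factor as $(\lambda-1)^3$, confirming the triple eigenvalue.

```python
# Sanity check (my addition): the characteristic polynomial of A
# factors as (lam - 1)**3, so lambda = 1 is the only eigenvalue,
# with algebraic multiplicity 3.
from sympy import Matrix, symbols, factor

lam = symbols('lam')
A = Matrix([[0, 1, 1], [0, 1, 0], [-1, 1, 2]])
p = factor(A.charpoly(lam).as_expr())
print(p)
print(A.eigenvals())  # {eigenvalue: algebraic multiplicity}
```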

muserock92

3 Answers


Note that since $\operatorname{rank}(A-\lambda I)=1$, the kernel of $A-\lambda I$ has dimension $2$, so there are exactly two linearly independent solutions. You can find them by setting two free parameters in the final equation, which leads, for example, to

  • $(1,0,1)$ and $(0,1,-1)$

Hence the geometric multiplicity of $\lambda=1$ is $2$.

The method is

  • set $x=t$ and $y=s$

then for

  • $(t,s)=(1,0)\implies (1,0,1)$
  • $(t,s)=(0,1)\implies (0,1,-1)$
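The free-parameter method above can be cross-checked with SymPy (my addition, assuming SymPy is available); `nullspace()` applies the same idea of row-reducing and assigning free parameters:

```python
# Sketch: verifying the two eigenvectors with SymPy's nullspace.
from sympy import Matrix

A = Matrix([[0, 1, 1], [0, 1, 0], [-1, 1, 2]])
I3 = Matrix.eye(3)

# Kernel of A - I has dimension 2: geometric multiplicity of lambda = 1 is 2.
basis = (A - I3).nullspace()
print(len(basis))

# The hand-picked vectors are indeed eigenvectors:
for v in [Matrix([1, 0, 1]), Matrix([0, 1, -1])]:
    assert (A - I3) * v == Matrix([0, 0, 0])
```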
user
  • Hi and thank you for your reply. By "free parameter", do you mean I should assign a definite value to two of the variables and see what happens? Thank you very much. – muserock92 Feb 26 '18 at 16:00
  • Thank you so much, problem solved. And I have learned the method. :) – muserock92 Feb 26 '18 at 16:02
  • I've added some detail; the standard method is to assign $(t,s)=(1,0)$ and $(t,s)=(0,1)$ to obtain linearly independent eigenvectors. – user Feb 26 '18 at 16:02
  • @muserock92 Well done, you are welcome! Bye – user Feb 26 '18 at 16:02

Since $A-I=\begin{pmatrix}-1&1&1 \\0&0&0\\ -1&1&1\end{pmatrix}$, the eigenspace is defined by the single equation $x=y+z$, and it has dimension $2$, being isomorphic to $\mathbf R^2$ via the isomorphism: \begin{align}\mathbf R^2&\longrightarrow \ker(A-I)\\\begin{pmatrix} y\\z\end{pmatrix}&\longmapsto \begin{pmatrix}y+z\\ y\\z\end{pmatrix}\end{align}

You can take as a basis of the eigenspace the image of the canonical basis of $\mathbf R^2$, i.e. $$v_1=\begin{pmatrix}1 \\1\\0\end{pmatrix},\quad v_2=\begin{pmatrix}1 \\0\\1\end{pmatrix}.$$

To obtain a Jordan basis, you need to complete it with a vector $v_3$ such that $(A-I)v_3=v_2$. In such a basis the matrix of the endomorphism associated to $A$ will be $$J=\begin{pmatrix}1&0&0 \\0&1&1\\ 0&0&1\end{pmatrix} \quad\text{since }\enspace Av_1=v_1,\;Av_2=v_2,\;Av_3=v_3+v_2.$$

Now, to solve the vector equation $(A-I)v_3=v_2$, we have to solve the linear system $$\begin{cases}1=-x+y+z, \\0=0,\\1=-x+y+z,\end{cases}$$ which has an obvious solution: $x=0,\;y=1,\;z=0$; in other words, $v_3=e_2$.

In conclusion, $A$ has a Jordan normal form in the basis $$\mathcal B=(e_1+e_2,\, e_1+e_3,\, e_2).$$
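A sketch (not from the original answer) checking this basis with SymPy: if $P$ has columns $v_1, v_2, v_3$, then $P^{-1}AP$ should reproduce $J$.

```python
# Sketch: the change-of-basis matrix P with columns v1 = e1+e2,
# v2 = e1+e3, v3 = e2 conjugates A into the Jordan form J.
from sympy import Matrix

A = Matrix([[0, 1, 1], [0, 1, 0], [-1, 1, 2]])
P = Matrix([[1, 1, 0], [1, 0, 1], [0, 1, 0]])  # columns v1, v2, v3
J = Matrix([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
assert P.inv() * A * P == J
```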

Bernard

I like to keep things visual, as matrices and column vectors. Since $(A - I)^2 = 0$ but $A - I \neq 0,$ we can take the third column of $R$ (for "Right") as anything we like for which $$ (A - I) w \neq 0. $$ I like $$ w = \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right) $$

The second column will be $v = (A-I)w,$ which is automatically an eigenvector (WHY??): $$ v = \left( \begin{array}{c} 1 \\ 0 \\ 1 \end{array} \right) $$

For the first column we can take any eigenvector that is independent of $v$; looking back at $A-I$, we can take $$ u = \left( \begin{array}{c} 1 \\ 1 \\ 0 \end{array} \right) $$

So we have $$ R = \left( \begin{array}{ccc} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{array} \right) $$ which has determinant $-1,$ very helpful (the inverse has integer entries), and $$ R^{-1} A R = J. $$

The direction that is actually useful is $R J R^{-1} = A.$ Useful for finding $e^A$ or $A^{100}$ or any $f(A)$ with $f$ analytic. $$ R^{-1} = \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & -1 & 0 \\ -1 & 1 & 1 \end{array} \right) $$

$$ \left( \begin{array}{ccc} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & -1 & 0 \\ -1 & 1 & 1 \end{array} \right) = \left( \begin{array}{rrr} 0 & 1 & 1 \\ 0 & 1 & 0 \\ -1 & 1 & 2 \end{array} \right) $$
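A sketch (my addition, assuming SymPy) verifying this factorization and illustrating its use for powers: since $J^n$ just replaces the superdiagonal $1$ with $n$, computing $A^{100}$ reduces to one matrix product.

```python
# Sketch: check R^{-1} A R = J and use R J^n R^{-1} = A^n for powers.
from sympy import Matrix

A = Matrix([[0, 1, 1], [0, 1, 0], [-1, 1, 2]])
R = Matrix([[1, 1, 0], [1, 0, 0], [0, 1, 1]])  # columns u, v, w
J = Matrix([[1, 0, 0], [0, 1, 1], [0, 0, 1]])

assert R.inv() * A * R == J
assert R.det() == -1          # unit determinant, so R.inv() is integer
assert R * J**100 * R.inv() == A**100
```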

Will Jagy