
Let $x, y$ be a right and a left eigenvector corresponding to the same simple eigenvalue (algebraic multiplicity is $1$) of a matrix. Show that $x, y$ cannot be orthogonal.

In my opinion, if the eigenvalue has algebraic multiplicity $1$, that means the power of $(A-\lambda I)$ must be $1$. Does that mean the eigenvalues must be different? If the eigenvalues are different, then $x,y$ should be orthogonal. So, how to prove $x,y$ cannot be orthogonal?

Thanks a lot.

glS
  • 6,818
GGG
  • 133

2 Answers


the eigenvalue has algebraic multiplicity 1, that means the power of $\boldsymbol{A}-\lambda\mathbf{I}$ must be 1

No. That means that the nullity of $\boldsymbol{A}-\lambda\mathbf{I}$ must be $1$, and consequently the index must also be $1$. In brief, $\mathrm{alg}(\lambda)=1$ means $\dim\ker(\boldsymbol{A}-\lambda\mathbf{I})^{n}=1$. For any eigenvalue, we have $\mathrm{geom}(\lambda)\geq1$, that is, $\dim\ker(\boldsymbol{A}-\lambda\mathbf{I})\geq1$. Together, all this implies $$\dim\ker(\boldsymbol{A}-\lambda\mathbf{I})^{n}=\dim\ker(\boldsymbol{A}-\lambda\mathbf{I})^{k}=\dim\ker(\boldsymbol{A}-\lambda\mathbf{I})=1,\quad \forall k\geq1.$$

The least $k$ for which $\ker\boldsymbol{M}^{k+1}=\ker\boldsymbol{M}^{k}$ is called the index of $\boldsymbol{M}$. Equivalent conditions are $\ker\boldsymbol{M}^{k}\oplus\mathrm{Im}\,\boldsymbol{M}^{k}=\mathbb{C}^{n}$ or, by the rank–nullity theorem, $\mathrm{Im}\,\boldsymbol{M}^{k+1}=\mathrm{Im}\,\boldsymbol{M}^{k}$. This is something we will use later.
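As a quick numerical aside (not part of the argument itself), the index can be read off from the ranks of successive powers, since by rank–nullity the kernels stop growing exactly when the ranks stop dropping. A minimal sketch with a hypothetical matrix $\boldsymbol{M}$ carrying a $2\times2$ Jordan block at $0$:

```python
import numpy as np

# Hypothetical example: M has a 2x2 Jordan block for the eigenvalue 0,
# so ker M^2 is strictly larger than ker M and the index of M is 2.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])

# By rank-nullity, ker M^{k+1} = ker M^k exactly when the rank stops
# decreasing, so the index is the first k at which the ranks stabilise.
ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(M, k)))
         for k in (1, 2, 3)]
print(ranks)  # [2, 1, 1]: the rank first stabilises at k = 2, so the index is 2
```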

Now, back to the main question. We have $$\ker(\boldsymbol{A}-\lambda\mathbf{I})=\mathrm{span}\{\boldsymbol{x}\}, \quad \ker(\boldsymbol{A}-\lambda\mathbf{I})^{*}=\mathrm{span}\{\boldsymbol{y}\},\quad \boldsymbol{x},\boldsymbol{y}\neq\boldsymbol{0}.$$

So, how to prove $\boldsymbol{x},\boldsymbol{y}$ cannot be orthogonal?

To show that $\boldsymbol{y}^{*}\boldsymbol{x}\neq0$, suppose the contrary, $\boldsymbol{y}^{*}\boldsymbol{x}=0$, which implies $$\boldsymbol{x}\in\mathrm{span}\{\boldsymbol{y}\}^{\perp}=\ker\big((\boldsymbol{A}-\lambda\mathbf{I})^{*}\big)^{\perp}=\mathrm{Im}(\boldsymbol{A}-\lambda\mathbf{I}).$$ The existence of $\boldsymbol{x}\neq\boldsymbol{0}$ such that $\boldsymbol{x}\in\mathrm{Im}(\boldsymbol{A}-\lambda\mathbf{I})\cap\ker(\boldsymbol{A}-\lambda\mathbf{I})$ requires that $\mathrm{Im}(\boldsymbol{A}-\lambda\mathbf{I})^{2}\subsetneq\mathrm{Im}(\boldsymbol{A}-\lambda\mathbf{I})$: applying $\boldsymbol{A}-\lambda\mathbf{I}$ to its own image annihilates the nonzero direction $\boldsymbol{x}$, so the dimension of the image strictly drops. This contradicts that the index of $\boldsymbol{A}-\lambda\mathbf{I}$ is $1$. Therefore, the only possibility is $\boldsymbol{y}^{*}\boldsymbol{x}\neq0$.
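A numerical sanity check of this conclusion (a sketch with a hypothetical matrix, not a proof): for a real matrix, the left eigenvectors of $\boldsymbol{A}$ are the right eigenvectors of $\boldsymbol{A}^{T}$, and pairing them by eigenvalue gives a nonzero overlap whenever the eigenvalue is simple.

```python
import numpy as np

# Both eigenvalues of this hypothetical A are simple.
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])   # eigenvalues 2 and 5

wr, Vr = np.linalg.eig(A)    # right eigenvectors, as columns of Vr
wl, Vl = np.linalg.eig(A.T)  # left eigenvectors of A, as columns of Vl

for lam, x in zip(wr, Vr.T):
    # pair x with the left eigenvector of the same eigenvalue
    y = Vl[:, np.argmin(np.abs(wl - lam))]
    print(lam, y @ x)        # the overlap y^T x is nonzero in both cases
    assert abs(y @ x) > 1e-8
```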


Another approach to the question: if $\lambda$ is a simple eigenvalue, meaning it has algebraic (and thus geometric) multiplicity $1$, then there's an invertible $P$ such that $A=P(\lambda\oplus A')P^{-1}$ for some $A'$ that doesn't have $\lambda$ as an eigenvalue: being simple, $\lambda$ occupies its own $1\times1$ Jordan block. In particular, $v=Pe_1$ is a right eigenvector, $Av=\lambda v$.

Then, the left eigenvector corresponding to $\lambda$ is $u^T=e_1^T P^{-1}$, which gives you $u^T A=\lambda u^T$.

It follows that $u^T v=e_1^T P^{-1} Pe_1=1$, i.e. the vectors aren't orthogonal. Since the eigenspaces are one-dimensional, any other choice of such eigenvectors just gives scalar multiples and doesn't change the result.

Example with diagonalisable non-degenerate $2\times 2$ matrix

Consider $A=\begin{pmatrix}-1&-2\\1&2\end{pmatrix}$, whose eigenvalues are $\lambda_1=0$ and $\lambda_2=1$, and we can write it as $$A = \underbrace{\begin{pmatrix}-1&-2 \\ 1 &1\end{pmatrix}}_{\equiv P} \begin{pmatrix}1&0\\0&0\end{pmatrix} \underbrace{\begin{pmatrix}1&2 \\ -1 &-1\end{pmatrix}}_{= P^{-1}} ,$$ and thus $P e_1=\binom{-1}{1}$ is a right eigenvector of $\lambda=1$, and $e_1^T P^{-1}=(1,2)$ is a corresponding left eigenvector. These are different, but have unit (and thus nonzero) overlap.

Similarly, the eigenvectors corresponding to $\lambda=0$ are $P e_2=\binom{-2}{1}$ and $e_2^T P^{-1}=(-1,-1)$, and it's interesting to note how each of them is orthogonal to the left and right eigenvectors of the other eigenvalue.

Any example with a non-degenerate diagonalisable matrix works like this: you write $A=PDP^{-1}$ for some diagonal $D$, and thus the right eigenvectors $Pe_j$ and the left eigenvectors $e_i^T P^{-1}$ are biorthogonal.
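This biorthogonality is easy to verify numerically for the matrix of the example above; the check below is nothing more than $P^{-1}P=\mathbf{I}$:

```python
import numpy as np

# Check the worked example: A = P D P^{-1} with D = diag(1, 0).
P = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])
D = np.diag([1.0, 0.0])
Pinv = np.linalg.inv(P)

A = P @ D @ Pinv             # recovers the matrix A of the example
print(A)

# Columns of P are right eigenvectors, rows of P^{-1} are left
# eigenvectors, and the overlaps (e_i^T P^{-1})(P e_j) form the identity.
print(Pinv @ P)
```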

Counterexample with non-diagonalisable matrix

Consider as another example $A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$. This has the only eigenvalue $\lambda=1$, with algebraic multiplicity $2$ and geometric multiplicity $1$, and is thus not diagonalisable.

While it remains true that both left and right eigenvectors correspond to the only eigenvalue $\lambda=1$, in this case we have $A e_1= e_1$ and $e_2^T A=e_2^T$. In other words, the left and right eigenvectors corresponding to the same eigenvalue are orthogonal. This shows that the assumption of having a simple eigenvalue is crucial to the statement.
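The counterexample can be checked directly in a few lines:

```python
import numpy as np

# The non-diagonalisable counterexample from above.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([1.0, 0.0])     # right eigenvector: A x = x
y = np.array([0.0, 1.0])     # left eigenvector:  y^T A = y^T

assert np.allclose(A @ x, x)
assert np.allclose(y @ A, y)
print(y @ x)  # 0.0: orthogonal, so simplicity of the eigenvalue matters
```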

glS
  • 6,818