
My textbook says that if eigenvalue $\lambda$ of matrix $A$ has multiplicity $k$, then the characteristic polynomial tells us that the null space of $(A - \lambda I)^k$ has dimension $k$.

Why is this true?

vladimirm
  • Thanks Ben. If there is no more basic way to explain this (without using Jordan normal forms), I'm happy to delete this question. – vladimirm Jul 09 '23 at 23:43
  • If the field is algebraically closed (or at least the characteristic polynomial splits linearly), then this is a straightforward consequence of Cayley-Hamilton combined with Sylvester's Rank Inequality. No knowledge of Jordan forms is needed. I gave a proof in the first half of this answer: https://math.stackexchange.com/questions/3878092/prove-that-the-intersection-of-2-generalised-eigenspaces-is-the-zero-space/3878536#3878536 – user8675309 Jul 10 '23 at 00:34
  • Thank you @user8675309! That does it for me. – vladimirm Jul 11 '23 at 12:22

1 Answer


Have to whip out old notes for this one, lol. This can certainly be done using Jordan canonical form, but the way my class was designed, we used the following approach instead.

First, suppose that $V$ is a finite-dimensional vector space, $T:V\to V$ is linear, and $\lambda, \lambda'$ are distinct eigenvalues of $T$. Let $$ K_\lambda=\{v\in V:(T-\lambda I)^pv=0\text{ for some }p\in\mathbb Z^+\} $$ $$ E_\lambda=\{v\in V:(T-\lambda I)v=0\} $$ In other words, $K_\lambda$ is the set of generalized eigenvectors corresponding to $\lambda$ and $E_\lambda$ is the set of (ordinary) eigenvectors corresponding to $\lambda$.
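To make the distinction between $E_\lambda$ and $K_\lambda$ concrete, here is a quick numerical sanity check with NumPy. The matrix is an illustrative example I chose (not from the question): eigenvalue $2$ has algebraic multiplicity $2$ but only a one-dimensional eigenspace, so $E_2\subsetneq K_2$.

```python
import numpy as np

# Illustrative example: eigenvalue 2 has algebraic multiplicity 2,
# but the eigenspace E_2 is only one-dimensional ("defective" eigenvalue).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0
n = A.shape[0]
B = A - lam * np.eye(n)

def null_dim(M):
    """Dimension of the null space of M, via rank-nullity."""
    return M.shape[0] - np.linalg.matrix_rank(M)

print(null_dim(B))      # dim E_2 = 1 (ordinary eigenvectors)
print(null_dim(B @ B))  # dim K_2 = 2 (generalized eigenvectors)
```

The eigenspace has dimension 1, while squaring $(A-2I)$ enlarges the null space to the full generalized eigenspace of dimension 2, matching the multiplicity.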

Notice that $(T-\lambda I)$ and $T$ commute, so $T(K_\lambda)\subseteq K_\lambda$. We also need that $K_\lambda\cap K_{\lambda'}=\{0\}$. This is proven in this post: Proof of linear independence of generalized eigenvectors without applying generalized eigenspace decomposition.

Let $T_{K_\lambda}$ denote the restriction of $T$ to $K_\lambda$. Because $E_{\lambda'}\subseteq K_{\lambda'}$ and $K_\lambda\cap K_{\lambda'}=\{0\}$, no eigenvector of $T$ for $\lambda'$ lies in $K_\lambda$, so $\lambda$ is the only eigenvalue of $T_{K_\lambda}$. Therefore, the characteristic polynomial of $T_{K_\lambda}$ must be $h(t)=(-1)^d(t-\lambda)^d$, where $d=\operatorname{dim}K_\lambda$.

However, the characteristic polynomial of the restriction of a linear operator to an invariant subspace must divide the characteristic polynomial of the operator itself, which for $T$ we will denote $f$. Therefore $(t-\lambda)^d$ divides $f$; in other words, $d\leq m$, where $m$ is the multiplicity of $\lambda$.

We know that $\operatorname{N}((T-\lambda I)^m)\subseteq K_\lambda$; we want to show that $\operatorname{N}((T-\lambda I)^m)=K_\lambda$. Write $f(t)=(t-\lambda)^m g(t)$, where $g$ is (up to a constant) a product of factors $(t-\mu_i)$, with $\mu_1,\dots,\mu_n$ the eigenvalues of $T$ other than $\lambda$. Because no $\mu_i$ is an eigenvalue of $T_{K_\lambda}$, each $(T-\mu_i I)$ is injective on $K_\lambda$, so $g(T)$ must be bijective on $K_\lambda$.

Let $x\in K_\lambda$. Since $g(T)$ is surjective on $K_\lambda$, there exists $y\in K_\lambda$ such that $g(T)(y)=x$. This means $(T-\lambda I)^m(x)=(T-\lambda I)^mg(T)(y)=f(T)(y)$. By the Cayley-Hamilton Theorem, $f(T)(y)=0$. Therefore, $x\in\operatorname{N}((T-\lambda I)^m)$, and combined with the reverse inclusion this gives $K_\lambda=\operatorname{N}((T-\lambda I)^m)$.
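The identity $K_\lambda=\operatorname{N}((T-\lambda I)^m)$ also says the null spaces of successive powers of $(T-\lambda I)$ stop growing once the power reaches the multiplicity $m$. A quick NumPy check on an illustrative matrix of my own choosing (eigenvalue $5$ with multiplicity $3$, eigenvalue $7$ with multiplicity $1$):

```python
import numpy as np

# Illustrative example: eigenvalue 5 has algebraic multiplicity m = 3.
A = np.array([[5.0, 1.0, 0.0, 0.0],
              [0.0, 5.0, 1.0, 0.0],
              [0.0, 0.0, 5.0, 0.0],
              [0.0, 0.0, 0.0, 7.0]])
lam, m = 5.0, 3
n = A.shape[0]
B = A - lam * np.eye(n)

# dim N((A - 5I)^p) for p = 1, ..., n, via rank-nullity
dims = [n - np.linalg.matrix_rank(np.linalg.matrix_power(B, p))
        for p in range(1, n + 1)]
print(dims)  # the dimensions grow 1, 2, 3 and then stabilize at m = 3
```

The null-space dimension climbs with each power and freezes at $m=3$: taking a fourth power adds nothing, exactly because $K_5=\operatorname{N}((A-5I)^3)$.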

Now, let $\lambda_1,\dots,\lambda_k$ be the eigenvalues of $T$, let $\beta_i$ be a basis for $K_{\lambda_i}$, and let $m_i$ be the multiplicity of $\lambda_i$. By this post: direct sum of generalized eigenspaces, we can show that $\beta=\bigcup_{i=1}^k\beta_i$ is a basis for $V$.

This means that $\operatorname{dim}(V)=\sum_{i=1}^k\operatorname{dim}(K_{\lambda_i})$. Since $\operatorname{dim}(K_{\lambda_i})\leq m_i$ for each $i$ and $\sum_{i=1}^k m_i=\operatorname{dim}(V)$ (the characteristic polynomial splits), the only way the two sums can agree is if $\operatorname{dim}(K_{\lambda_i})=m_i$ for every $i$, which is exactly what you wanted to show.
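The final dimension count can also be checked numerically. Using the same illustrative $3\times3$ matrix as above (eigenvalue $2$ with multiplicity $2$, eigenvalue $3$ with multiplicity $1$), the generalized eigenspace dimensions match the multiplicities and sum to $\operatorname{dim}(V)$:

```python
import numpy as np

# Same illustrative matrix: eigenvalues 2 (multiplicity 2) and 3 (multiplicity 1).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
n = A.shape[0]

def gen_eigenspace_dim(A, lam, m):
    """dim K_lam = dim N((A - lam*I)^m), computed via rank-nullity."""
    B = np.linalg.matrix_power(A - lam * np.eye(n), m)
    return n - np.linalg.matrix_rank(B)

dims = [gen_eigenspace_dim(A, 2.0, 2), gen_eigenspace_dim(A, 3.0, 1)]
print(dims, sum(dims) == n)  # [2, 1] True
```

Each generalized eigenspace has dimension equal to its eigenvalue's multiplicity, and together they fill out all of $V$.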