
I'm studying linear algebra and learning about generalized eigenspaces, and I have three questions regarding a specific proof, which I think I have to write down before I can ask them (I'm translating it into English, but it should hopefully be clear anyway).

First, one piece of terminology: $GE_{\lambda}$ is defined as the generalized eigenspace corresponding to the eigenvalue $\lambda$, i.e.: $$GE_{\lambda}=\bigcup\limits_{i=1}^{\infty} Ker(T-\lambda I)^i.$$
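For example (a small illustration of my own, not from the book): if $T=\begin{pmatrix}\lambda & 1 \\ 0 & \lambda\end{pmatrix}$ on $\mathbb C^2$, then $Ker(T-\lambda I)=\text{span}(e_1)$ while $Ker(T-\lambda I)^2=\mathbb C^2$, and all higher powers give the same kernel, so $GE_{\lambda}=\mathbb C^2$ even though the ordinary eigenspace is only one-dimensional.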

Proposition: Let $T$ be an operator on $V$ with eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_r$. Then: $$V=GE_{\lambda_1}\oplus GE_{\lambda_2}\oplus\cdots\oplus GE_{\lambda_r}.$$

Proof: It's a proof by induction on the dimension of $V$, with $n$ being the dimension.

Base case: If $n=1$, then it's obviously true.

Inductive step: Let $n\geq 2$ and assume the proposition is true for all vector spaces of dimension $\lt n$. $T$ has an eigenvalue $\lambda_1$. Let $V_1=Ker(T-\lambda_1I)^n$ and $V_2=Ran(T-\lambda_1 I)^n$. We know that $V=V_1\oplus V_2$. If $\lambda_1$ is the only eigenvalue, then $V_2=\{0\}$ and $V=GE_{\lambda_1}$, and we are done. If $\lambda_1$ isn't the only eigenvalue then, since $V_2$ is invariant, we can restrict $T$ to $V_2$ with the eigenvalues $\lambda_2,\lambda_3,\ldots,\lambda_r$. By the inductive assumption, $V_2=GE_{\lambda_2}\oplus GE_{\lambda_3}\oplus\cdots\oplus GE_{\lambda_r}$, and we get: $V=GE_{\lambda_1}\oplus V_2=GE_{\lambda_1}\oplus GE_{\lambda_2}\oplus\cdots\oplus GE_{\lambda_r}$. And we are done!
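To make the proposition concrete before my questions (again an example of my own, not from the book): on $V=\mathbb C^3$, let $$T=\begin{pmatrix}1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2\end{pmatrix}.$$ The eigenvalues are $1$ and $2$, and one computes $Ker(T-I)^3=\text{span}(e_1,e_2)$ and $Ker(T-2I)^3=\text{span}(e_3)$, so $V = GE_1\oplus GE_2$ as the proposition claims.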

First of all, it doesn't say what field $V$ is over, but I think it's over the complex numbers (the course has handled complex and real fields only), since we assume $T$ has at least one eigenvalue, and that is only guaranteed over the complex numbers (as far as I know).

Question 1: The statement "If $\lambda_1$ is the only eigenvalue then $V_2=\{0\}$" is not clear to me. I've tried to prove it myself and to find it by googling, but I can't figure it out. It is probably something trivial, since it is not motivated in the proof, but why the fact that there is only one eigenvalue makes this true is beyond me.

Question 2: Regarding the statement "we can restrict T to $V_2$ with the eigenvalues $\lambda_2,\lambda_3,\ldots,\lambda_r$": why does $T$ have those eigenvalues when restricted to $V_2$? It seems intuitively true, but how can we be sure that the eigenvectors corresponding to $\lambda_2,\lambda_3,\ldots,\lambda_r$ aren't in $GE_{\lambda_1}$? For example, maybe there is some eigenvector $v$, corresponding to say $\lambda_4$, and some $k$ such that $(T-\lambda_1 I)^kv=0$.

Question 3: How does the inductive argument work, I mean its structure? In every inductive proof I've dealt with before, it's been of the form: prove a base case (usually $n=0$), then assume the statement is true for $k=n-1$ and show that $P(k) \implies P(n)$. But in this proof it is stated explicitly "Assume it's true for all $k\lt n$" and then it's proved for $n$. We don't know if $k=n-1$ or if it's $k=n-100$ (unless we know that $\dim(GE_{\lambda_1})=1$, but how would we know that?). I can give an almost identical proof of the proposition (given the answers to questions 1 and 2) by letting $n$ be the number of eigenvalues instead, and then it would be a "normal" proof by induction. But how does this version work?

edit: $V_2=Ran(T-\lambda_1 I)^n$, not $V_2=Ran(T-\lambda_2 I)^n$

Crille

2 Answers


Q0: Yes, we need to work over a field containing all of the eigenvalues of $T$. (Note that this is more general than saying "algebraically closed field," and gets you the same decomposition over $\mathbb{R}$ if $T$ has all real eigenvalues.)

Q1: If $\lambda_1$ is the only eigenvalue then $T - \lambda_1 I$ has only the zero eigenvalue, so is nilpotent (e.g. by the Cayley-Hamilton theorem), so $(T - \lambda_1 I)^n = 0$. In other words, in this case the generalized eigenspace of $\lambda_1$ is all of $V$.
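For a concrete illustration (my example, not part of the original answer): take $T = \begin{pmatrix} 5 & 1 \\ 0 & 5 \end{pmatrix}$ on $\mathbb{C}^2$, whose only eigenvalue is $5$. Then $T - 5I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $(T-5I)^2 = 0$, so $\text{Ran}(T-5I)^2 = \{0\}$ and $\text{Ker}(T-5I)^2 = \mathbb{C}^2 = V$, exactly as the proof asserts.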

Q2: If $v \in \text{Ran}((T - \lambda_1 I)^n)$ were an eigenvector with eigenvalue $\lambda_1$ then we would have $(T - \lambda_1 I) v = 0$. But by definition we have $v = (T - \lambda_1 I)^n w$ for some $w$, so this gives $(T - \lambda_1 I)^{n+1} w = 0$, which gives $(T - \lambda_1 I)^n w = 0$ because the generalized eigenspace stabilizes at $n$ (e.g. by the Cayley-Hamilton theorem again). So $v = 0$.
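Concretely (again my example): with $T = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, $\lambda_1 = 2$ and $n = 2$, we get $(T-2I)^2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$, so $\text{Ran}((T-2I)^2) = \text{span}(e_2)$, which contains the eigenvector $e_2$ for eigenvalue $3$ but no eigenvector for $\lambda_1 = 2$.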

Q3: It's a proof by strong induction, which is equivalent to ordinary induction. You prove the base case $P(1)$ and then the inductive step is to show that if $P(k)$ is true for all $k < n$ then $P(n)$ is true.
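Schematically (my paraphrase, not part of the original answer), strong induction establishes $\forall n\, P(n)$ from $$\forall n\ \Big[\big(\forall k \lt n\ \ P(k)\big) \implies P(n)\Big],$$ where for the smallest value of $n$ the hypothesis is vacuous, so the base case must hold on its own. In this proof $P(n)$ is the proposition for all $n$-dimensional spaces, and the inductive step is allowed to invoke $P(\dim V_2)$ no matter how far $\dim V_2$ falls below $n$.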

Edit: Without Cayley-Hamilton we can prove directly the necessary result, which is the following.

Proposition: Let $T : V \to V$ be a linear map on an $n$-dimensional vector space. Then for all $N \ge n$ we have $\text{im}(T^N) = \text{im}(T^n)$ and $\text{ker}(T^N) = \text{ker}(T^n)$.

Proof. The sequence of subspaces $I_k = \text{im}(T^k)$ is decreasing in the sense that $I_0 \supseteq I_1 \supseteq I_2 \supseteq \dots$, and in particular their dimensions are non-increasing. Since $\dim I_0 \le n$, it follows that for some $k \le n$ we must have $\dim I_k = \dim I_{k+1}$, which gives $I_k = I_{k+1}$. This means that $T(\text{im}(T^k)) = \text{im}(T^k)$, hence $T$ is invertible when restricted to $\text{im}(T^k)$. From that point on we must have $I_k = I_{k+1} = I_{k+2} = \dots$, so the sequence of subspaces stabilizes, and since $k \le n$ we have in particular $I_n = I_{n+1} = \dots$.
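To illustrate with a concrete matrix (not part of the original answer): for $N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$ on $\mathbb{C}^3$, the image chain is $I_0 = \mathbb{C}^3 \supsetneq I_1 = \text{span}(e_1,e_2) \supsetneq I_2 = \text{span}(e_1) \supsetneq I_3 = \{0\} = I_4 = \cdots$, so the chain stabilizes by step $n = 3$, as the proposition predicts.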

Similarly, for the kernels define the sequence of subspaces $J_k = \text{ker}(T^k)$, which is increasing in the sense that $J_0 \subseteq J_1 \subseteq \dots$, and in particular their dimensions are non-decreasing. Since $\dim J_k \le n$ for all $k$, it follows that for some $k \le n$ we must have $\dim J_k = \dim J_{k+1}$, which gives $J_k = J_{k+1}$. Now observe that $J_{m+1} = T^{-1}(J_m)$ (as preimages) for every $m$; hence $J_{k+2} = T^{-1}(J_{k+1}) = T^{-1}(J_k) = J_{k+1}$, and inductively $J_k = J_{k+1} = J_{k+2} = \dots$ as above. $\Box$
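For the same $N$ as above (again my illustration), the kernel chain is $J_0 = \{0\} \subsetneq J_1 = \text{span}(e_1) \subsetneq J_2 = \text{span}(e_1,e_2) \subsetneq J_3 = \mathbb{C}^3 = J_4 = \cdots$, stabilizing at the same step.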

Corollary: If $T : V \to V$ has no nonzero eigenvalues then $T^n = 0$ (and in particular $T$ is nilpotent).

Proof. If $T$ has no nonzero eigenvalues then it cannot be invertible when restricted to any nonzero invariant subspace of $V$ (otherwise we would be able to produce an eigenvector with nonzero eigenvalue in that subspace), so when running the argument above for the subspaces $I_k = \text{im}(T^k)$ we get that once they stabilize they must be equal to zero, and in particular $I_n = 0$. $\Box$

Corollary: The generalized eigenspace $E_{\lambda} = \bigcup_{k \ge 1} \text{ker}((T - \lambda I)^k)$ is equal to $\text{ker}(T - \lambda I)^n$.

We can actually do better than this: it turns out that $E_{\lambda} = \text{ker}(T - \lambda I)^m$ where $m$ is the multiplicity of $\lambda$ as a root of the minimal (not characteristic) polynomial of $T$, and this bound is sharp: $E_{\lambda} \neq \text{ker}(T - \lambda I)^{m-1}$. See, for example, this answer.
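A quick example (mine, not the answerer's) showing $m$ can be much smaller than $n$: for $T = \lambda I$ on $\mathbb{C}^3$, the characteristic polynomial is $(x-\lambda)^3$ but the minimal polynomial is $x - \lambda$, so $m = 1$ and indeed $E_{\lambda} = \text{ker}(T - \lambda I) = V$ already at the first power.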

Qiaochu Yuan
  • Thanks for taking the time to answer my questions. I am not familiar with the Cayley-Hamilton theorem (it comes later in my course); is there a way to prove Q1 without that theorem? It might be that the author of my textbook does not expect the student to understand that part until later in the course (after the Cayley-Hamilton theorem has been covered) – Crille Oct 11 '20 at 22:52
  • @Crille: I edited in proofs avoiding Cayley-Hamilton. – Qiaochu Yuan Oct 11 '20 at 23:22
  • I can still see a problem. You have proved that the eigenvalues of the restriction of $T$ to $V_2$ are among $\lambda_2, \ldots, \lambda_r$, but why are they exactly $\lambda_2, \ldots, \lambda_r$, as stated by the OP? – Romanda de Gore Aug 18 '21 at 14:28
  • Show that $GE_{\lambda_i} \subseteq Ran(T-\lambda_1 I)^n$ if $i \neq 1$. If $u$ is an eigenvector of $T$ corresponding to $\lambda_i$, then $u \in GE_{\lambda_i}$, thus $u \in V_2=Ran(T-\lambda_1 I)^n$. Therefore $\lambda_i$ is an eigenvalue of $T|_{V_2}$. – Romanda de Gore Aug 18 '21 at 20:45
  • $(T-\lambda_1 I)GE_{\lambda_i}\subseteq GE_{\lambda_i}$, so we can look at the restriction of $(T-\lambda_1 I)$ to $GE_{\lambda_i}$. Suppose it is not injective: $(T-\lambda_1 I)x=0$ for some $x \in GE_{\lambda_i}$, $x\neq 0$. Then $(T-\lambda_i I)^nx=0$, but $(T-\lambda_i I)x=Tx - \lambda_ix=(\lambda_1 - \lambda_i)x$, so $(T-\lambda_i I)^nx=(\lambda_1 - \lambda_i)^nx \neq 0$, a contradiction. Since $V$ is finite-dimensional and the restriction of $(T-\lambda_1 I)$ is injective, it is also surjective. So $(T-\lambda_1 I)GE_{\lambda_i}=GE_{\lambda_i}$, from which also $(T-\lambda_1 I)^nGE_{\lambda_i}=GE_{\lambda_i}$ – Romanda de Gore Aug 18 '21 at 21:16
  • Hello @QiaochuYuan, would appreciate feedback on my answer. – Steven Xu Jan 04 '24 at 04:19

We might be over-complicating the matter.

If $Ran(T - \lambda I)^n = \{ 0 \}$, then $(T - \lambda I)^n = 0$ (i.e. the zero matrix), and $\forall v\in V: (T - \lambda I)^nv = 0$.

The equivalence of all three statements about $(T - \lambda I)^n$ holds at the level of definitions.

Tell me if you think the following is a short and snappy rewrite of the proposed proof that makes its rigor indisputably clear:

=======

Claim: The generalized eigenspaces of $T\in\text{Hom}(V,V)$ span $V$.

Proof:

Given $T\in \text{Hom}(V,V)$ for a nonzero finite-dimensional complex vector space $V$, the fundamental theorem of algebra guarantees at least one eigenvalue $\lambda\in \mathbb C$. Choose $n$ sufficiently large that $\text{Ran}(T-\lambda I)^n = \text{Ran}(T- \lambda I)^{n+1}$.

Case 1: $\text{Ran}(T-\lambda I)^n = W \neq \{0\}$

  • Then $(T-\lambda I)$ is invertible on $W$: by the choice of $n$ we have $(T-\lambda I)W = W$, and a surjective operator on a finite-dimensional space is invertible.
  • By the inductive hypothesis (induction on $\dim V$, noting $\dim W \lt \dim V$ since $\text{Ker}(T-\lambda I) \neq \{0\}$), the generalized eigenspaces of $(T - \lambda I)|_W$ (corresponding to some set of eigenvalues $\{\lambda_k\}$) span all of $W$.
  • The OP already proved that $V = \text{Ran}(T-\lambda I)^n \oplus \text{Ker}(T-\lambda I)^n$. We are done (NOTE: the generalized $\lambda_k$-eigenspace of $(T-\lambda I)$ is the generalized $(\lambda_k+\lambda)$-eigenspace of $T$; see the computation just after this list).
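The NOTE is a one-line computation, spelled out for completeness (my addition): since $(T-\lambda I) - \lambda_k I = T - (\lambda_k + \lambda)I$, we get $\text{Ker}\big((T-\lambda I) - \lambda_k I\big)^j = \text{Ker}\big(T - (\lambda_k+\lambda)I\big)^j$ for every $j \ge 1$, so the two generalized eigenspaces are literally the same subspace.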

Case 2: $\text{Ran}(T-\lambda I)^n = \{0\}$

  • The preceding statement is equivalent to $\forall v\in V: (T-\lambda I)^nv = 0$ (by the definition of the range of an operator).
  • Then the generalized $\lambda$-eigenspace (i.e. $\text{Ker}(T- \lambda I)^n$) is simply $V$ itself. We are done.
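To trace both cases on small examples (my own illustrations, not part of the answer): for $T=\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}$ with $\lambda = 1$ we are in Case 2, since $(T-I)^2 = 0$ gives $\text{Ran}(T-I)^2 = \{0\}$ and $V = \text{Ker}(T-I)^2$. For $T=\begin{pmatrix}1 & 0\\0 & 2\end{pmatrix}$ with $\lambda = 1$ we are in Case 1: $W = \text{Ran}(T-I)^2 = \text{span}(e_2)$, on which $T-I$ acts invertibly (as the identity), and $V = \text{Ker}(T-I)^2 \oplus W = \text{span}(e_1)\oplus\text{span}(e_2)$.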
Steven Xu