I'm studying linear algebra and learning about generalized eigenspaces, and I have three questions about a specific proof, which I think I have to write down before I can ask them (I'm translating it into English, but it should hopefully be clear anyway).
First, one piece of terminology: $GE_{\lambda}$ is defined as the generalized eigenspace corresponding to the eigenvalue $\lambda$, i.e. $GE_{\lambda}=\bigcup\limits_{i=1}^{\infty} Ker(T-\lambda I)^i.$
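To make the definition concrete, here is a small numerical sketch (my own example, not from the course): for a single $3\times 3$ Jordan block the kernels $Ker(T-\lambda I)^i$ grow with $i$ and stabilize once $i$ reaches $\dim V$, so the infinite union in the definition is really just $Ker(T-\lambda I)^n$.

```python
import numpy as np

# Hypothetical example: a single 3x3 Jordan block with eigenvalue 2.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
n = T.shape[0]

def kernel_dim(A, tol=1e-10):
    """Dimension of the null space of A, via rank-nullity."""
    return A.shape[1] - np.linalg.matrix_rank(A, tol=tol)

N = T - lam * np.eye(n)
# Kernel dimensions of N^1, N^2, N^3, N^4: they grow, then stabilize at n.
dims = [kernel_dim(np.linalg.matrix_power(N, i)) for i in range(1, n + 2)]
print(dims)  # [1, 2, 3, 3]
```

The kernels are nested, $Ker\,N \subseteq Ker\,N^2 \subseteq \cdots$, which is why the union is a subspace even though unions of subspaces usually aren't.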
Proposition: Let $T$ be an operator on $V$ with eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_r$. Then: $$V=GE_{\lambda_1}\oplus GE_{\lambda_2}\oplus\cdots\oplus GE_{\lambda_r}.$$
Proof: It's a proof by induction on the dimension of $V$, with $n$ denoting the dimension.
Base case: If $n=1$, then it's obviously true.
Inductive step: Let $n\geq 2$ and assume the proposition is true for all vector spaces of dimension $\lt n$. $T$ has an eigenvalue $\lambda_1$. Let $V_1=Ker(T-\lambda_1I)^n$ and $V_2=Ran(T-\lambda_1 I)^n$. We know that $V=V_1\oplus V_2$. If $\lambda_1$ is the only eigenvalue, then $V_2=\{0\}$, so $V=GE_{\lambda_1}$ and we are done. If $\lambda_1$ isn't the only eigenvalue then, since $V_2$ is invariant, we can restrict $T$ to $V_2$, where it has the eigenvalues $\lambda_2,\lambda_3,\ldots,\lambda_r$. By the inductive hypothesis, $V_2=GE_{\lambda_2}\oplus GE_{\lambda_3}\oplus\cdots\oplus GE_{\lambda_r}$, and we get: $V=GE_{\lambda_1}\oplus V_2=GE_{\lambda_1}\oplus GE_{\lambda_2}\oplus\cdots\oplus GE_{\lambda_r}$. And we are done!
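To see the decomposition $V=V_1\oplus V_2$ in action, here is a hedged numerical check on a hypothetical $3\times 3$ matrix of my own (not from the proof) with eigenvalues $2$ and $5$: the kernel and range of $(T-\lambda_1 I)^n$ together span $V$ and meet only in $\{0\}$.

```python
import numpy as np

# Hypothetical example: eigenvalue 2 with a 2x2 Jordan block, plus eigenvalue 5.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
lam1 = 2.0
n = T.shape[0]

M = np.linalg.matrix_power(T - lam1 * np.eye(n), n)  # (T - lam1*I)^n

# Orthonormal bases for V1 = Ker M and V2 = Ran M via the SVD of M.
U, s, Vt = np.linalg.svd(M)
rank = int((s > 1e-10).sum())
ran_basis = U[:, :rank]   # basis of Ran(T - lam1*I)^n
ker_basis = Vt[rank:].T   # basis of Ker(T - lam1*I)^n
ker_dim, ran_dim = ker_basis.shape[1], ran_basis.shape[1]

# Stacking the two bases gives a full-rank 3x3 matrix, so
# V1 + V2 = V and V1 ∩ V2 = {0}: a direct sum, as the proof uses.
basis = np.hstack([ker_basis, ran_basis])
print(ker_dim, ran_dim, np.linalg.matrix_rank(basis))  # 2 1 3
```

Note how $\dim V_2 = 1 < n-1$ here, which is exactly the situation Question 3 below is about.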
First of all, the proof doesn't say what field $V$ is over, but I think it's over the complex numbers (the course has only covered the real and complex fields), since we assume $T$ has at least one eigenvalue, and that is only guaranteed over the complex numbers (as far as I know).
Question 1: The statement "If $\lambda_1$ is the only eigenvalue then $V_2=\{0\}$" is not clear to me. I've tried proving it to myself and googling, but I can't figure it out. It is probably something trivial, since it isn't motivated in the proof, but why the fact that there is only one eigenvalue makes this true is beyond me.
Question 2: In the statement "we can restrict $T$ to $V_2$ with the eigenvalues $\lambda_2,\lambda_3,\ldots,\lambda_r$": why does $T$ have those eigenvalues when restricted to $V_2$? It seems intuitively true, but how can we be sure that the eigenvectors corresponding to $\lambda_2,\lambda_3,\ldots,\lambda_r$ aren't in $GE_{\lambda_1}$? For example, maybe there is some eigenvector $v$ corresponding to, say, $\lambda_4$, and some $k$ such that $(T-\lambda_1 I)^kv=0$.
Question 3: How does the inductive argument work, I mean its structure? In every inductive proof I've dealt with before, it's been of the sort: "Prove a base case (usually $n=0$), then assume it's true for $k=n-1$ and show that $P(k) \implies P(n)$". But in this proof it is stated explicitly: "Assume it's true for all $k\lt n$", and then it's proved for $n$. We don't know if $k=n-1$ or if it's $k=n-100$ (unless we know that $\dim(GE_{\lambda_1})=1$, but then how do we know that?). I can do an almost identical proof (given that I know the answers to Questions 1 and 2) of the proposition if we let $n$ be the number of eigenvalues, and then it would be a "normal" proof by induction. But how does this version work?
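For context, the pattern "assume it's true for all $k<n$" is usually called strong (or complete) induction. A standard illustration of the same shape (my own sketch, not from the proof) is prime factorisation: the recursion drops to some divisor $a<n$, not necessarily to $n-1$, just as the proof drops from $\dim V = n$ to $\dim V_2$, which may be much smaller than $n-1$.

```python
def prime_factors(n):
    """Return the prime factorisation of n >= 2 as a sorted list."""
    for a in range(2, int(n ** 0.5) + 1):
        if n % a == 0:
            # Strong-induction step: recurse on a and n // a. Both are
            # smaller than n, but neither is necessarily n - 1.
            return sorted(prime_factors(a) + prime_factors(n // a))
    return [n]  # base case: n has no smaller divisor, so it is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```

The correctness argument is exactly "assume it works for every $k<n$": we have no control over which smaller $k$ the recursion lands on, so the weaker hypothesis $P(n-1)$ would not suffice.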