11

I'm having some difficulty proving this exercise.

Let $T : V \rightarrow V$ be a linear operator on a finite-dimensional vector space $V$ over a field $F$, and let $U,W \subset V$ be $T$-invariant subspaces such that $V = U \oplus W$. Show that if $T$ is diagonalizable, then $T_{|U}$ and $T_{|W}$ are diagonalizable.

Any help would be greatly appreciated. Thanks!

JimmyJP
  • $T$ being diagonalizable is equivalent to saying that every vector is a sum of eigenvectors. Now take a vector in $U$. Can you prove that it is a sum of eigenvectors in $U$? – user180040 Oct 03 '14 at 04:05

4 Answers

10

There is an elementary proof of the more general statement that the restriction of a diagonalisable linear operator$~T$ to a $T$-stable subspace $U$ is again diagonalisable (in the finite dimensional case), along the lines of my other answer. See also this answer for a variant formulation.

Any vector $u\in U$ decomposes uniquely in$~V$ as $u=v_1+\cdots+v_k$, a sum of eigenvectors for distinct eigenvalues $\lambda_1,\ldots,\lambda_k$, and it suffices to show that those eigenvectors $v_i$ lie in$~U$. Since $U$ is $T$-stable, it is also stable under every $T-\lambda I$. Now to show that $v_i\in U$, apply to the equation successively $T-\lambda_j I$ for $j\in\{1,2,\ldots,k\}\setminus\{i\}$. Each application of $T-\lambda_j I$ multiplies the term $v_l$ by the scalar $\lambda_l-\lambda_j$, which is zero if and only if $l=j$. The result is that only a nonzero scalar multiple of $v_i$ remains, and since we started out with a vector $u$ of $U$, this result is still in$~U$. After division by the scalar this shows that $v_i\in U$.
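
For instance, with $k=3$ and $i=1$: applying first $T-\lambda_2I$ and then $T-\lambda_3I$ to $u=v_1+v_2+v_3$ kills the terms $v_2$ and $v_3$ and leaves $$ (T-\lambda_3I)(T-\lambda_2I)\,u=(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)\,v_1. $$ The left hand side lies in$~U$ by $T$-stability, and the scalar on the right is nonzero because the eigenvalues are distinct, so $v_1\in U$.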

An equivalent formulation is by induction on$~k$, the number of (nonzero!) eigenvectors needed in the sum for$~u$. The starting cases $k\leq1$ are obvious. Otherwise application of $T-\lambda_kI$ gives $$ Tu-\lambda_ku=(\lambda_1-\lambda_k)v_1+(\lambda_2-\lambda_k)v_2+\cdots+(\lambda_{k-1}-\lambda_k)v_{k-1}, $$ and one can apply the induction hypothesis to the vector $Tu-\lambda_ku\in U$ to conclude that all the individual terms (eigenvectors) in the right hand side lie in $U$. But then so do the unscaled $v_1,\ldots,v_{k-1}$ (since all scalar factors are nonzero), and by necessity the remaining term $v_k$ in $u=v_1+\cdots+v_k$ must also lie in$~U$.

4

Here is an alternative approach if $\mathbb{F} = \mathbb{C}$.

First note that, since $T$ is diagonalizable, $V$ has a basis of eigenvectors $v_k$ of $T$, say $Tv_k = \lambda_k v_k$.

Then $v_k = u_k+w_k$, where $u_k \in U, w_k \in W$. We have $Tv_k = Tu_k + T w_k = \lambda_k v_k = \lambda_k u_k + \lambda_k w_k$. Since $U,W$ are $T$-invariant and $V = U \oplus W$, comparing components in the (unique) direct sum decomposition gives $Tu_k = \lambda_k u_k$ and $Tw_k = \lambda_k w_k$ (note the $u_k$ or the $w_k$ may be zero).

Furthermore, since the $v_k$ span $V$, the $u_k$ span $U$ and the $w_k$ span $W$: write any $u \in U$ as $u = \sum_k c_k v_k = \sum_k c_k u_k + \sum_k c_k w_k$ and use uniqueness of the decomposition in $U \oplus W$ to get $u = \sum_k c_k u_k$. Choose a subset $u_{n_i}$ that forms a basis for $U$, and similarly, a subset $w_{m_j}$ that forms a basis for $W$.

Then $T_{|U}$ is diagonal in the basis $u_{n_i}$, and $T_{|W}$ is diagonal in the basis $w_{m_j}$.
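
A small example of why one must pass to a subset: take $V=\mathbb{R}^2$ with $T=I$, $U=\operatorname{span}(e_1)$, $W=\operatorname{span}(e_2)$, and the eigenbasis $v_1=e_1+e_2$, $v_2=e_1-e_2$. Then $u_1=u_2=e_1$ and $w_1=-w_2=e_2$, so the $u_k$ span $U$ but are not linearly independent; one keeps only $u_1$ as a basis of $U$.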

copper.hat
4

If you know the theorem that says that a linear operator on a finite dimensional vector space over$~F$ is diagonalisable (over$~F$) if and only if it is annihilated by some polynomial that can be decomposed in$~F[X]$ as a product of distinct factors of degree$~1$, then this is easy. By the theorem let $P$ be such a polynomial for the diagonalisable operator$~T$ (so $P[T]=0$); then certainly $P[T|_U]=0$ and $P[T|_W]=0$, which by the same theorem shows that $T|_U$ and $T|_W$ are diagonalisable. In this high level answer it is irrelevant that both $U,W$ are given, and that they form a direct sum; it shows that more generally the restriction of a diagonalisable operator$~T$ to any $T$-stable subspace is diagonalisable.
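
Concretely, if the distinct eigenvalues of$~T$ are $\lambda_1,\ldots,\lambda_k$, one may take $P=(X-\lambda_1)\cdots(X-\lambda_k)$. Then for every $u\in U$ $$ P[T|_U]\,u=(T-\lambda_1I)\cdots(T-\lambda_kI)\,u=P[T]\,u=0, $$ where the first equality uses that $T$-stability of$~U$ keeps every intermediate vector inside$~U$.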

There is however a more low level reasoning that applies for this question, based on the fact that the projections on the factors of a $T$-stable direct sum decomposition commute with$~T$. This fact is immediate, since if $v=u+w$ with $u\in U$ and $w\in W$ describes the components of$~v$, then $Tv=Tu+Tw$ with $Tu\in U$ and $Tw\in W$ by $T$-stability, so it describes the components of$~Tv$. This means in particular that the projections on $U$ and $W$ of an eigenvector of$~T$ for$~\lambda$ are again eigenvectors of$~T$ for$~\lambda$ (or one of them might be zero), as the projection of $Tv=\lambda v$ is $\lambda$ times the projection of$~v$.

Now to show that $T|_U$ and $T|_W$ are diagonalisable, it suffices to project every eigenspace$~E_\lambda$ onto$~U$, and onto$~W$; its images are eigenspaces for$~\lambda$ of $T|_U$ and $T|_W$, or possibly the zero subspace. As it is given that $V=\bigoplus_\lambda E_\lambda$, the sums of the projections of the spaces $E_\lambda$ in $U$ respectively $W$ (which sums are always direct) fill up $U$ respectively $W$, in other words $T|_U$ and $T|_W$ are diagonalisable. Alternatively, to decompose a vector $u\in U$ as a sum of eigenvectors for $T|_U$, just decompose it into a sum of eigenvectors for$~T$, and project the summands onto$~U$ (parallel to$~W$), which projections clearly add up to$~u$ (and in fact it is easy to see that the projections did nothing; the eigenvectors for$~T$ were already inside$~U$).

Just one final warning: don't take away from this that projections onto $T$-stable subspaces always commute with$~T$, or send eigenspaces to eigenspaces for the restriction. That is not true in general: it only holds when the projection is along another $T$-stable subspace.
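
To see the failure concretely: take $T=\begin{bmatrix}1&0\\0&2\end{bmatrix}$ on $\mathbb{R}^2$ and let $\pi$ be the projection onto the $T$-stable line $U=\operatorname{span}(e_1)$ along the line $\operatorname{span}(e_1+e_2)$, which is not $T$-stable. Then $\pi(e_2)=-e_1$, so $$ \pi(Te_2)=\pi(2e_2)=-2e_1\neq-e_1=T(\pi(e_2)), $$ and the eigenvector $e_2$ for the eigenvalue$~2$ is sent to $-e_1$, which is an eigenvector for the eigenvalue$~1$ instead.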

2

$\;T\;$ is diagonalizable (over its field of definition, all the time from now on) iff its minimal polynomial is a product of distinct linear factors.

If we denote by $\;m_U(x)\;,\;\;m_V(x)\;$ the minimal polynomials of $\;T\;$ restricted to $\;U\;$ and of $\;T\;$ on $\;V\;$ resp., then since $\;m_V(\left.T\right|_U)=0\;$ we get that $\;m_U(x)\mid m_V(x)\implies\;$ also $\;m_U(x)\;$ is a product of distinct linear factors and we're done
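
For instance, with $\;T\;$ the diagonal matrix with entries $\;1,2,2\;$ on $\;V=\mathbb{R}^3\;$ and $\;U=\operatorname{span}(e_2,e_3)\;$, one has $\;m_V(x)=(x-1)(x-2)\;$ and $\;m_U(x)=x-2\;$, so indeed $\;m_U(x)\mid m_V(x)\;$ and $\;m_U(x)\;$ inherits the distinct linear factors.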

Timbuc
  • This answer, as currently written, has multiple errors. Consider $T = \begin{bmatrix}1 & 0 \\ 0 & 2 \end{bmatrix}$, $V = \mathbb{R}^2$. Then $U = \operatorname{span}\left\{\begin{bmatrix}1 \\ 0\end{bmatrix}\right\}$ is $T$-invariant, with $m_U(x) = x - 1$, $m_V(x) = (x-1)(x-2)$. It is not true that $m_U(T) = 0$, nor is it true that $m_V(x) \mid m_U(x)$. Moreover, $m_V(x) \mid m_U(x)$ does not imply $m_U(x)$ has distinct linear factors. – zcn Oct 03 '14 at 06:04
  • Surely you mean $m_U \mid m_V$? – copper.hat Oct 03 '14 at 06:36
  • @zcn: Right. $U$ and $V$ have been interchanged. – Marc van Leeuwen Oct 03 '14 at 09:21
  • Several points to clear up for some people who perhaps didn't understand and/or were misled by my confusing $\;U,V\;$: when speaking of $\;T\;$ on $\;U, V, W\;$ etc., the point is to take everything as embedded in the bigger space (otherwise even the matrices representing $\;T\;$ in the different spaces have different order), and it should have been, of course, $\;m_U(\left. T\right|_U)=0\;$. Anyway, I hope the OP understood this even with my mistakes. Thanks for the comments. – Timbuc Oct 03 '14 at 12:32
  • Timbuc: Your comment confuses me even more. Certainly @copper.hat is right that it should be $m_U \mid m_V$, since you want to deduce from the given fact that $m_V$ decomposes into distinct linear factors that $m_U$ does so as well. And you don't want to embed in the larger space, i.e. view $T|_U$ as a map $U\to V$, since in that case its matrix becomes rectangular and evaluating polynomials in it impossible. You don't really need to talk about matrices at all, but certainly you want the one of $T|_U$ to be square of size $\dim U$ only. – Marc van Leeuwen Oct 04 '14 at 05:24