
Let $A \in \mathbb C^{m\times m}$ and $B \in \mathbb C^{n\times n}$, and let $C=\begin{pmatrix} A & 0 \\ 0 & B\\ \end{pmatrix} \in \mathbb C^{(m+n)\times (m+n)}$.

  1. Calculate the minimal polynomial of $C$ based on the minimal polynomial of $A$ and the minimal polynomial of $B$.

  2. Prove that $C$ is diagonalizable if and only if $A$ and $B$ are.

The attempt at a solution

I have no idea how to prove 1). For 2) I got stuck at several points:

$\Rightarrow$ If $C$ is diagonalizable, then $C=P^{-1}DP$ where $D$ is a diagonal matrix. Somehow, I must construct from $D$ two diagonal matrices $D_1 \in \mathbb C^{m\times m}$ and $D_2 \in \mathbb C^{n \times n}$ and two invertible matrices $Q$ and $S$ so that $A=Q^{-1}D_1Q$ and $B=S^{-1}D_2S$, but I don't know how to construct all these matrices.

$\Leftarrow$ Suppose $A$ and $B$ are diagonalizable, so $A=Q^{-1}D_1Q$ and $B=S^{-1}D_2S$, with both $D_1$ and $D_2$ diagonal matrices. My guess is that $C$ can be written as

$\begin{pmatrix}Q^{-1}&0\\ 0&S^{-1}\end{pmatrix}\begin{pmatrix}D_1&0\\ 0&D_2\end{pmatrix}\begin{pmatrix}Q&0\\ 0&S\end{pmatrix}$.

Now, I would have to prove that $\begin{pmatrix}Q^{-1}&0\\ 0&S^{-1}\end{pmatrix}\begin{pmatrix}Q&0\\ 0&S\end{pmatrix}=\mathrm{Id}_{m+n}$ and that $C=\begin{pmatrix}Q^{-1}&0\\ 0&S^{-1}\end{pmatrix}\begin{pmatrix}D_1&0\\ 0&D_2\end{pmatrix}\begin{pmatrix}Q&0\\ 0&S\end{pmatrix}$ in order to show $C$ is diagonalizable.
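As a numeric sanity check (not a proof), both of these identities can be verified with random matrices; the names below (`Q`, `S`, `D1`, `D2`, `block_diag`) are just for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2

def block_diag(X, Y):
    """Assemble the block diagonal matrix [[X, 0], [0, Y]]."""
    Z = np.zeros((X.shape[0] + Y.shape[0], X.shape[1] + Y.shape[1]))
    Z[:X.shape[0], :X.shape[1]] = X
    Z[X.shape[0]:, X.shape[1]:] = Y
    return Z

# Random invertible Q, S and diagonal D1, D2.
Q = rng.standard_normal((m, m))
S = rng.standard_normal((n, n))
D1 = np.diag(rng.standard_normal(m))
D2 = np.diag(rng.standard_normal(n))

A = np.linalg.inv(Q) @ D1 @ Q
B = np.linalg.inv(S) @ D2 @ S
C = block_diag(A, B)

P_inv = block_diag(np.linalg.inv(Q), np.linalg.inv(S))
P = block_diag(Q, S)

# Block products multiply blockwise, so both identities hold:
assert np.allclose(P_inv @ P, np.eye(m + n))
assert np.allclose(P_inv @ block_diag(D1, D2) @ P, C)
```

The check succeeds because a product of block diagonal matrices is computed block by block, which is exactly what the proof needs to establish.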

I would appreciate help with all the points where I am stuck, and any suggestion or hint regarding 1.

user100106
  • Although I just answered this question, I am going to close it as a duplicate of a question I came across, to improve navigation on this site. Even if the linked question does not explicitly ask about 1), answers for that part are provided there too. – Marc van Leeuwen Dec 30 '14 at 18:02

2 Answers

  1. Notice that if $P$ is a polynomial then $$P(C)=\begin{pmatrix} P(A) & 0 \\ 0 & P(B)\\ \end{pmatrix}$$ so $P$ annihilates $C$ if and only if it annihilates both $A$ and $B$. Denote by $\pi_A$ and $\pi_B$ the minimal polynomials of $A$ and $B$ respectively. The polynomial $P=\pi_A\lor \pi_B$ (their least common multiple) annihilates $C$, so $\pi_C$ divides $P$. Conversely, since $\pi_C$ annihilates $C$, it annihilates $A$ and $B$, so $\pi_A$ and $\pi_B$ both divide $\pi_C$, and hence $P$ divides $\pi_C$. We conclude that $$\pi_C=\pi_A\lor \pi_B$$
  2. $A$ and $B$ are diagonalizable if and only if $\pi_A$ and $\pi_B$ have simple roots, if and only if $\pi_A\lor \pi_B=\pi_C$ has simple roots, if and only if $C$ is diagonalizable.
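A quick numerical illustration (a sketch, not part of the proof) of $\pi_C=\pi_A\lor\pi_B$, using matrices whose minimal polynomials are known by hand:

```python
import numpy as np

# A is a 2x2 Jordan block for 2, so pi_A = (x-2)^2 and A is NOT diagonalizable;
# B = [3], so pi_B = (x-3). Then pi_C should be (x-2)^2 (x-3).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = np.array([[3.0]])

C = np.zeros((3, 3))
C[:2, :2] = A
C[2:, 2:] = B

I = np.eye(3)

# pi_A lor pi_B = (x-2)^2 (x-3) annihilates C ...
P_of_C = (C - 2*I) @ (C - 2*I) @ (C - 3*I)
assert np.allclose(P_of_C, 0)

# ... while neither pi_A nor pi_B alone does:
assert not np.allclose((C - 2*I) @ (C - 2*I), 0)
assert not np.allclose(C - 3*I, 0)
```

Note that $\pi_C$ here has the repeated root $2$, matching the fact that neither $A$ nor $C$ is diagonalizable in this example.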
  • Is it always true that a polynomial $P$ applied to a block diagonal matrix gives zeros outside the diagonal blocks? – Mat999 Jan 07 '24 at 19:13

The fact that the field is $\Bbb C$ is irrelevant here, so I'll just write $F$.

You have here that the obvious direct sum decomposition $F^{m+n}\cong F^m\oplus F^n$ is stable under the linear operator$~T$ defined by the matrix $C$ (that is, each of the summands is mapped into itself by$~T$), and the restrictions of $T$ to those summands have matrices $A$ respectively $B$. Therefore, for any polynomial$~P$ one has $P[C]=0$ if and only if both $P[A]=0$ and $P[B]=0$ (the restrictions of $P[T]$ to both summands must vanish). The latter means $P$ is a common multiple of the minimal polynomials $\mu_A$ and $\mu_B$ of $A,B$, respectively. Then $\mu_C=\operatorname{lcm}(\mu_A,\mu_B)$.

So for part 1 I just repeated the answer by Sami Ben Romdhane. But for part 2, you do not really need to use part 1 (even though that is a natural thing to do if you know that diagonalisability can be read off from the minimal polynomials), provided you know instead that the restriction of a diagonalisable operator$~T$ to a $T$-stable subspace is always diagonalisable, or even the weaker result that this holds for a subspace that is $T$-stable and has a $T$-stable complementary subspace, in other words for a summand in a $T$-stable direct sum decomposition of the space. These facts of course also follow by the minimal polynomial characterisation, but they can be proved directly as well, see here or here.

Now for 2. one direction is easy: if both $A$ and $B$ are diagonalisable, then bases of eigenvectors for them obviously lift to $F^{m+n}$ to produce a basis of eigenvectors for $C$. For the converse the cited result applies: $C$ is diagonalisable, so restrictions of$~T$ to the $T$-stable summands of $F^{m+n}\cong F^m\oplus F^n$ are also diagonalisable, whence $A$ and $B$ are.

The proof of the cited result I gave under the second link concretely gives: the projections of $F^{m+n}$ onto the summands commute with $T$ (you can see this directly), so the projection of an eigenspace for$~\lambda$ of$~C$ is contained in the eigenspace for$~\lambda$ of$~A$ respectively of$~B$ (or it is $\{0\}$ if there is no such eigenspace); moreover since the sum of all such eigenspaces is all of $F^{m+n}$, the sum of their projections is all of $F^m$ respectively of $F^n$, so $A,B$ are diagonalisable.
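The projection step can be illustrated numerically (a sketch with small hypothetical matrices, not part of the argument): projecting an eigenvector of $C$ onto the first $m$ coordinates gives either zero or an eigenvector of $A$ for the same eigenvalue.

```python
import numpy as np

# Small diagonalizable blocks sharing the eigenvalue 2 (chosen for illustration).
A = np.diag([1.0, 2.0])
B = np.array([[2.0]])
m = 2

C = np.zeros((3, 3))
C[:m, :m] = A
C[m:, m:] = B

lam, V = np.linalg.eig(C)        # columns of V are eigenvectors of C

for j in range(3):
    v = V[:, j]
    top = v[:m]                  # projection onto F^m
    if np.linalg.norm(top) > 1e-12:
        # the nonzero projection is an eigenvector of A for the same eigenvalue
        assert np.allclose(A @ top, lam[j] * top)
```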

Your approach with concrete matrices is certainly not the easiest way (it rarely is for such problems), but one can describe how the above gives you your matrices $Q,S$; I will put the inverses on the other side however (so $C=PDP^{-1}$), so that the columns of the conjugating matrix give a basis of eigenvectors. For each eigenvalue$~\lambda$ of $C$, take the set of $k$ corresponding columns of$~P$ (which form a basis of the eigenspace for $\lambda$) and separate it into the first $m$ rows and the final $n$ rows. These $m\times k$ and $n\times k$ matrices have ranks $r_1,r_2$ with $r_1+r_2=k$, and selecting $r_1$ independent columns from the first part and $r_2$ independent columns from the second part (one can choose complementary sets of columns) gives you bases for the eigenspaces for$~\lambda$ of $A$ respectively$~B$, and thereby contributions to the matrices $Q,S$, respectively.
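The rank claim in this recipe can be checked on a concrete example (hypothetical matrices chosen so that $A$ and $B$ share the eigenvalue $2$): for each eigenvalue, the ranks of the top and bottom parts of the corresponding columns of $P$ add up to the eigenspace dimension.

```python
import numpy as np

A = np.diag([2.0, 5.0])
B = np.array([[2.0]])
m = 2

C = np.zeros((3, 3))
C[:m, :m] = A
C[m:, m:] = B

lam, P = np.linalg.eig(C)        # C = P D P^{-1}; columns of P are eigenvectors

for l in np.unique(lam):
    cols = P[:, np.isclose(lam, l)]         # k columns: eigenspace of C for l
    k = cols.shape[1]
    r1 = np.linalg.matrix_rank(cols[:m, :]) # first m rows
    r2 = np.linalg.matrix_rank(cols[m:, :]) # last n rows
    assert r1 + r2 == k
```

Selecting independent columns from each part (as described above) then yields the bases for the eigenspaces of $A$ and $B$.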