
Let $E=M_n(\mathbb{R})$ and $A,B \in E \setminus \{0\}$. We define $f \in \mathcal{L}(E)$ by: $$\forall M \in E,\quad f(M)=M+tr(AM)B$$

  1. Find a necessary and sufficient condition so that $f$ is diagonalizable.
  2. What is $\dim C$, where $C=\{g \in \mathcal{L}(E) | f \circ g = g \circ f \}$?

My attempt:

  1. I wanted to annihilate $f$ with a second-degree polynomial. Therefore, I calculated: $$f^2(M)=f(M)+tr(AM)f(B)=f(M)+tr(AM)B+tr(AB)tr(AM)B$$ $$\Rightarrow f^2(M)-2f(M)=tr(AB)tr(AM)B-M$$ $$\Rightarrow f^2(M)-2f(M)-tr(AB)f(M)=-tr(AB)M-M$$ $$\Rightarrow f^2(M)-2f(M)-tr(AB)f(M)+tr(AB)M+M=0$$ Therefore, $P(X)=X^2-(2+tr(AB))X+(1+tr(AB))$ satisfies $P(f)=0$. Let's calculate the discriminant of this polynomial: $$\Delta = 4+4tr(AB)+tr(AB)^2-4-4tr(AB)=tr(AB)^2$$ This polynomial has at least one real root, and $f$ is diagonalizable iff $\Delta \neq 0$. So: $$f \text{ diagonalizable} \Leftrightarrow tr(AB) \neq 0$$ (A numerical sanity check of this is sketched right after my attempt.)

  2. This is where things get harder for me. I assume $f$ is diagonalizable. The roots of the polynomial, which are the eigenvalues of $f$, are: $$\lambda_1=1 \text{ and } \lambda_2=1+tr(AB)$$ Now, let's find the dimensions of the eigenspaces. $$M \in E_{\lambda_1} \Leftrightarrow f(M)=M \Leftrightarrow tr(AM)=0$$ $$M \in E_{\lambda_2} \Leftrightarrow f(M)=M+tr(AB)M \Leftrightarrow M=xB, x \in \mathbb{R}$$ We get $\dim E_{\lambda_1}=n^2-1$ (it is a hyperplane) and $\dim E_{\lambda_2}=1$. Now, if we have $g \in C$, then $g$ commutes with $f$, so $g$ stabilizes each eigenspace of $f$. I'm not sure how to formalize my idea, but in a basis adapted to $E=E_{\lambda_1}\oplus E_{\lambda_2}$, the matrix of $g$ is block-diagonal with $2$ blocks corresponding to the eigenspaces $E_{\lambda_1}$ and $E_{\lambda_2}$. If we denote by $g_1$ and $g_2$ the endomorphisms induced on these subspaces, the dimensions of the spaces of possible endomorphisms $g_1$ and $g_2$ are respectively $(n^2-1)^2$ and $1$. So, $\dim C$ should be $(n^2-1)^2+1$, right?
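To double-check this numerically (only a sanity check, not a proof; $n$, $A$, $B$ below are small random test data), one can write $f$ as an $n^2\times n^2$ matrix and verify both the annihilating polynomial and the eigenspace dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
t = np.trace(A @ B)

# With row-major vectorisation, tr(AM) = <vec(A^T), vec(M)>,
# so the matrix of f on R^(n^2) is  F = I + vec(B) vec(A^T)^T.
p = n * n
F = np.eye(p) + np.outer(B.ravel(), A.T.ravel())

# P(X) = X^2 - (2 + tr(AB)) X + (1 + tr(AB)) annihilates f.
print(np.allclose(F @ F - (2 + t) * F + (1 + t) * np.eye(p), 0))   # True

# With this random data tr(AB) != 0: eigenvalue 1 has multiplicity n^2 - 1,
# eigenvalue 1 + tr(AB) is simple.
print(p - np.linalg.matrix_rank(F - np.eye(p)))              # n^2 - 1
print(p - np.linalg.matrix_rank(F - (1 + t) * np.eye(p)))    # 1
```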

Could anyone help me make things clearer in the diagonalizable case, and does anyone have an idea of how to proceed if $f$ is non-diagonalizable (i.e. $tr(AB)=0$)? Thanks in advance!

Jujustum
  • If $P(f)=0$ this just means that the minimal polynomial of $f$ divides $P$. The discriminant of $P$ is zero iff $P$ has a double zero. This doesn't say anything about $f$ being diagonalizable. It just proves that $f$ has at most 2 eigenvalues (which would be the zeroes of $P$) – Ben Jun 11 '21 at 08:18

1 Answer


1/ For the first question, your result is correct, although your justification in the case $Tr(AB)=0$ is lacking. You need to show that there is NO polynomial $P$ with distinct roots such that $P(f)=0$. Well, you have done $99$ percent of the work, since you found a degree-2 polynomial with a double root, and you can easily show that no degree-1 polynomial fits (you would have $f=\lambda\,\mathrm{id}$, and since the minimal polynomial divides $(X-1)^2$ this forces $f=\mathrm{id}$, which is not the case: $f(A^T)=A^T+Tr(AA^T)B\ne A^T$).
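Here is a quick numerical illustration of that case (a sketch only; $A$ is random and $B$ is adjusted so that $Tr(AB)=0$): one checks that $(f-\mathrm{id})^2=0$ while $f\ne\mathrm{id}$, so the minimal polynomial really is $(X-1)^2$ and $f$ is not diagonalizable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B0 = rng.standard_normal((n, n))
B = B0 - (np.trace(A @ B0) / np.trace(A @ A.T)) * A.T   # forces Tr(AB) = 0

# Matrix of f on R^(n^2) (row-major vectorisation).
p = n * n
F = np.eye(p) + np.outer(B.ravel(), A.T.ravel())

N = F - np.eye(p)
print(np.allclose(N @ N, 0))   # True:  (f - id)^2 = 0
print(np.allclose(N, 0))       # False: f != id, e.g. f(A^T) != A^T
```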

2/ In the diagonalizable case, your justification seems correct to me. Check out this question, which gives an overview of the general case: Space of matrices that commute with a given matrix. That question links to a very good reference (in French) with a proof.

3/ In the general case, however, this is a bit more subtle, and you will find more information in the answer to this question: Computing the dimension of a vector space of matrices that commute with a given matrix B. The dimension of this space is closely related to the Jordan form of the matrix.

Applied to your problem specifically, the size of the largest Jordan block should be $2$, since that is the multiplicity of the eigenvalue $1$ in the minimal polynomial; moreover, $f-\mathrm{id}$ has rank $1$ (its image is spanned by $B$), so there is exactly one block of size $2$ and all the other blocks have size $1$. So there is a basis in which the matrix of $f$ is the identity matrix of $M_{n^2}(\mathbb{R})$ with an extra $1$ added in the last column, just above the diagonal. I will denote this matrix $F = I + \delta_{p-1,p}$, where $p=n^2$. Hence, the (matrix) equation $0 = GF-FG$ reduces to $\forall i,j,\ G_{i,p-1}\delta_{p,j} = \delta_{p-1,i}G_{p,j}$.

In order to find the dimension of the solution space, we can work through all the cases :

  • $j\neq p , i \neq p-1 \implies$ no constraint
  • $j= p , i \neq p-1 \implies G_{i,p-1} = 0$
  • $j\neq p , i = p-1 \implies G_{p,j} = 0$
  • $j=p , i = p-1 \implies G_{p,p} = G_{p-1,p-1}$

I count $(p-1)+(p-1)+1\color{red}{-1}= 2p-2$ independent constraints (the constraint $G_{p,p-1}=0$ appears in both the second and the third case, hence the $\color{red}{-1}$), so the sought dimension is $p^2 - (2p-2) = (p-1)^2+1 = (n^2-1)^2+1$, which is the same as in the diagonalizable case.
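As a cross-check of this count (a small numerical sketch, solving $GF=FG$ as a linear system for small $p$):

```python
import numpy as np

for p in (4, 9):                      # p = n^2 for n = 2, 3
    F = np.eye(p)
    F[p - 2, p - 1] = 1.0             # F = I + delta_{p-1,p}

    # G -> FG - GF is linear in G; with column-stacking vec,
    # its matrix is kron(I, F) - kron(F^T, I).
    K = np.kron(np.eye(p), F) - np.kron(F.T, np.eye(p))
    dim_C = p * p - np.linalg.matrix_rank(K)
    print(p, dim_C, (p - 1) ** 2 + 1)  # the last two numbers agree
```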

Edit: It is even possible to build the Jordan basis explicitly. The last vector would be $E_p =A^T/Tr(AA^T)$ for instance, the second-to-last $E_{p-1} = B$, and the remaining vectors would complete $B$ into a basis of the hyperplane $Tr(AX)=0$.

Hence $f(E_i) = E_i$ for all $i\ne p$, and $f(E_p) = E_p + E_{p-1}$.
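These two relations are easy to check numerically as well (again a sketch, with $A$ random and $B$ adjusted so that $Tr(AB)=0$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B0 = rng.standard_normal((n, n))
B = B0 - (np.trace(A @ B0) / np.trace(A @ A.T)) * A.T   # Tr(AB) = 0

def f(M):
    return M + np.trace(A @ M) * B

E_p = A.T / np.trace(A @ A.T)        # last Jordan vector
E_pm1 = B                            # second-to-last Jordan vector
print(np.allclose(f(E_p), E_p + E_pm1))   # True: f(E_p) = E_p + E_{p-1}
print(np.allclose(f(E_pm1), E_pm1))       # True: f(B) = B since Tr(AB) = 0
```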

G. Fougeron
  • Firstly, thanks for your very detailed answer. It definitely helps me a lot in understanding what's going on. I accepted your answer, but I think I'm still missing something in your proof: I'm not sure how you get your reduction of the matrix equation. Using your idea, I chose a basis of the hyperplane, $B$ being the first element (since $tr(AB)=0$), which I completed to get a basis of $M_n(\mathbb{R})$. The matrix I got was an identity matrix with something in the top-right corner, and, using the commutativity condition, I ended up with a dimension of $n^4-2n^2+2$ (the same dimension as before). – Jujustum Jun 11 '21 at 19:18
  • 1
    Yes, my bad : I counted the constraint $G_{p, p-1}=0$ twice, sorry. Your matrix and mine are equivalent up to renumbering, so no problem. – G. Fougeron Jun 11 '21 at 22:08