
I know there is a result that $S$ is exactly the collection of all trace-$0$ matrices, and that this collection forms a vector space. But I want to prove it independently, i.e.
For any $A,B,C,D\in M_n(K)$ we have to find $E,F\in M_n(K)$ such that $(AB-BA)+(CD-DC)=EF-FE$.
But I don't know how to solve this. What I can observe is that the above equation gives rise to $n^2$ equations (equating the entries of the matrices on both sides) in $2n^2$ unknowns (the total number of entries of $E$ and $F$). Can this problem be simplified if we choose a special kind of $E$, say a diagonal matrix?
Edit (valid only for $\Bbb{R}$ or $\Bbb{C}$):
$AB-BA=(A+aI)(B+bI)-(B+bI)(A+aI)$ for all $a,b\in\Bbb{R}$, and there exist $a,b\in\Bbb{R}$ such that $A+aI$ and $B+bI$ are invertible.
Hence we may assume $A,B$ to be invertible matrices, i.e. $S=\{AB-BA \mid A,B\in GL_n(K)\}$.
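The shift identity and the genericity of invertibility are easy to sanity-check numerically; here is a minimal Python/NumPy sketch (the random matrices and the particular shifts $a,b$ are arbitrary choices for illustration, not part of the argument):

```python
import numpy as np

# Check AB - BA = (A + aI)(B + bI) - (B + bI)(A + aI):
# expanding, the aB, bA, abI terms cancel pairwise.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
a, b = 2.5, -1.3          # arbitrary scalar shifts
I = np.eye(n)

lhs = A @ B - B @ A
rhs = (A + a * I) @ (B + b * I) - (B + b * I) @ (A + a * I)
assert np.allclose(lhs, rhs)

# A + aI is invertible whenever -a is not an eigenvalue of A,
# so all but finitely many shifts a work (over R or C).
assert abs(np.linalg.det(A + a * I)) > 1e-12
```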

MathBS
    Good question. Minor detail: If $E$ is diagonal, then $EF-FE$ has only zeros on the main diagonal. That's too restrictive. – ccorn Nov 13 '20 at 10:04
    Your edit at first assumes $K=\mathbb{R}$ but asserts that in general we can assume $A$ is invertible. This is not so: it certainly fails in $\mathbb{F}_2$. – ancient mathematician Nov 13 '20 at 10:19
  • @ancientmathematician Yes, right. – MathBS Nov 13 '20 at 10:21
    I think this is quite deep, as it depends on the fact $K$ is a field, just being a commutative ring would not suffice. There are some clues here https://www.tandfonline.com/doi/abs/10.1080/00927870008827009?journalCode=lagb20 – ancient mathematician Nov 13 '20 at 10:26
  • This is a really cool question. Have you tried doing a simple case, like $\mathfrak{sl}_2(\mathbb{R})$? Maybe some pattern might emerge, but thinking about it, I'm kind of doubtful since there will be lots of free variables. Hmm... – Richard Jensen Nov 13 '20 at 13:16

1 Answer


I have commented that fixing $E$ to have diagonal form would be too restrictive because then $EF - FE$ would have only zeros on its main diagonal.

However, @ancientmathematician's comment led me to track down the original proof by Shoda (1936). As it turns out, Shoda did use the idea of a diagonal $E$, after observing that similarity transformations carry over to commutators and that, in characteristic 0, a zero-trace matrix is similar to a matrix with zero main diagonal. For the latter, Shoda referred to matrix normal forms. Actually we do not need that much normalization, so I'll try an elementary proof here.

Lemma: Let $K$ be a field of characteristic 0, $n$ a positive integer and $M\in\operatorname{Mat}_n(K)$ with $\operatorname{tr} M = 0$. Then there exists $T\in\operatorname{SL}_n(K)$ such that $T M T^{-1}$ has only zeros on its main diagonal.

Proof: For $n = 1$, $\operatorname{tr} M = 0$ implies $M = ((0))$, and $T = I$ (identity matrix) works.

Now assume $n > 1$. We repeat the following sequence of steps, each iteration finding a transformation $S\in\operatorname{SL}_n(K)$ such that replacing $M$ by $S M S^{-1}$ increases the number of zero main diagonal elements of $M$. Multiplying all used $S$ (later ones at the left) results in a suitable transformation $T$.

  1. If $M$ has only zeros on its main diagonal, $S = I$ works, and we are done.

  2. Otherwise, $M$ must have at least two nonzero main diagonal elements $m_{ii}, m_{jj}$ with $1\leq i < j\leq n$ because if there were only one, it would equal the trace, which is zero, contradiction.

    Furthermore, not all nonzero main diagonal elements can be equal because otherwise the zero-trace requirement would imply that their number is a multiple of the field characteristic, but that's zero, contradiction.

    Therefore we can choose $i,j$ such that $0\neq m_{ii}\neq m_{jj}\neq 0$. We now define a transformation $S\in\operatorname{SL}_n(K)$ that differs from the identity matrix only in its principal submatrix with rows/columns $i,j$:

    $$\begin{pmatrix}s_{ii} & s_{ij}\\s_{ji} & s_{jj}\end{pmatrix} = \begin{cases} \begin{pmatrix}1 & 0\\\frac{m_{ii}}{m_{ij}} & 1\end{pmatrix} & \text{if $m_{ij}\neq 0$, else} \\ \begin{pmatrix}1 & -\frac{m_{ii}}{m_{ji}}\\0 & 1\end{pmatrix} & \text{if $m_{ji}\neq 0$, else} \\ \begin{pmatrix}1 & \frac{m_{ii}}{m_{jj}-m_{ii}} \\ 1 & \frac{m_{jj}}{m_{jj}-m_{ii}}\end{pmatrix} & \text{if $m_{ij} = 0 = m_{ji}$} \end{cases}$$

    Then $M' = S M S^{-1}$ has the same main diagonal elements as $M$ except that $m_{ii}' = 0$ and $m_{jj}' = m_{ii} + m_{jj}$, preserving the trace. Off-diagonal elements with row or column index in $\{i,j\}$ may have changed as well, but that does not matter.
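The iteration in the proof can be sketched directly in Python. To stay honestly inside a characteristic-0 field, exact rationals (`fractions.Fraction`) are used; the function name and the list-of-lists matrix representation are my own choices, not from Shoda:

```python
from fractions import Fraction

def conjugate_to_zero_diagonal(M):
    """Given square M (lists of Fractions) with trace 0, return (T, M2)
    with det T = 1 and M2 = T M T^{-1} having an all-zero main diagonal,
    following the Lemma's three cases."""
    n = len(M)
    M = [row[:] for row in M]

    def eye():
        return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    T = eye()
    while any(M[k][k] != 0 for k in range(n)):
        nz = [k for k in range(n) if M[k][k] != 0]
        # The trace-zero argument guarantees two nonzero diagonal
        # entries with distinct values.
        i, j = next((p, q) for p in nz for q in nz
                    if p != q and M[p][p] != M[q][q])
        S, Sinv = eye(), eye()
        if M[i][j] != 0:                    # case 1: s_ji = m_ii / m_ij
            t = M[i][i] / M[i][j]
            S[j][i], Sinv[j][i] = t, -t
        elif M[j][i] != 0:                  # case 2: s_ij = -m_ii / m_ji
            t = M[i][i] / M[j][i]
            S[i][j], Sinv[i][j] = -t, t
        else:                               # case 3: m_ij = 0 = m_ji
            d = M[j][j] - M[i][i]
            S[i][j], S[j][i], S[j][j] = M[i][i] / d, Fraction(1), M[j][j] / d
            Sinv[i][i], Sinv[i][j] = M[j][j] / d, -M[i][i] / d
            Sinv[j][i] = Fraction(-1)
        M = matmul(matmul(S, M), Sinv)      # zeros m_ii, keeps the trace
        T = matmul(S, T)
    return T, M

# Usage: a 3x3 trace-zero example
M = [[Fraction(x) for x in row] for row in [[1, 2, 3],
                                            [4, -5, 6],
                                            [7, 8, 4]]]
T, M2 = conjugate_to_zero_diagonal(M)
assert all(M2[k][k] == 0 for k in range(3))
```

Each pass strictly increases the number of zero diagonal entries, so the loop terminates after at most $n-1$ conjugations.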

Armed with the above Lemma, we can now assume $M$ to have an all-zero main diagonal. Fix $E\in\operatorname{Mat}_n(K)$ to be a diagonal matrix with pairwise distinct diagonal elements, e.g. $e_{ii} = i$; characteristic 0 ensures such elements exist. To obtain $F$, let

$$f_{ij} = \begin{cases} \frac{m_{ij}}{e_{ii}-e_{jj}} & \text{for $i\neq j$} \\ \text{(arbitrary)} & \text{for $i = j$} \end{cases}$$

Then $M = EF - FE$: for $i\neq j$ we have $(EF-FE)_{ij} = (e_{ii}-e_{jj})\,f_{ij} = m_{ij}$, and both sides have an all-zero main diagonal.
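The construction of $E$ and $F$ is likewise short enough to sketch and verify with exact rationals (the helper name and the choice $e_{ii}=i$ are mine; any pairwise distinct diagonal works in characteristic 0):

```python
from fractions import Fraction

def commutator_factors(M):
    """Given square M with an all-zero main diagonal (Fraction entries),
    return a diagonal E and a matrix F with M = EF - FE.  E gets the
    pairwise distinct diagonal 0, 1, ..., n-1."""
    n = len(M)
    E = [[Fraction(i if i == j else 0) for j in range(n)] for i in range(n)]
    # (EF - FE)_{ij} = (e_ii - e_jj) f_ij = (i - j) f_ij, so divide:
    F = [[M[i][j] / (i - j) if i != j else Fraction(0)
          for j in range(n)] for i in range(n)]
    return E, F

# Usage: a zero-diagonal example
M = [[Fraction(x) for x in row] for row in [[0, 2, -1],
                                            [3, 0, 5],
                                            [-4, 7, 0]]]
E, F = commutator_factors(M)
n = 3
comm = [[sum(E[i][k] * F[k][j] - F[i][k] * E[k][j] for k in range(n))
         for j in range(n)] for i in range(n)]
assert comm == M
```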

References

  • K. Shoda (1936): Einige Sätze über Matrizen. Japanese J. Math. 13, pp. 361–365.
  • M. Rosset, S. Rosset (2000): Elements of trace zero that are not commutators. Communications in Algebra 28:6, pp. 3059–3071, DOI: 10.1080/00927870008827009.
ccorn
  • If I'm not overlooking anything, all this works as well with a field $K$ of prime characteristic $p$ as long as $n < p$. – ccorn Nov 13 '20 at 23:45