Given $n\times n$ matrices, my book says the maximum size of a set of linearly independent, mutually anticommuting matrices is $n^2-1$. I don't understand why this is true.
I'd appreciate any tips on how to prove it.
I can answer the question under the additional hypothesis that at least two of the matrices are invertible. I haven't figured out how to do it without that hypothesis.
Note that if you had $n^2$ such matrices, you'd have a basis for the space of all $n\times n$ matrices. If you can prove that all the matrices have trace zero, then you get a contradiction, since any linear combination of trace zero matrices has trace zero, but not every matrix has trace zero.
So suppose $A,B$ are anticommuting, that is, $AB=-BA$, and suppose $A$ is invertible. Then $B=A^{-1}(-B)A$, which says that $B$ is similar to $-B$. But similar matrices have the same trace, and the trace of $-B$ is the additive inverse of the trace of $B$, so $B$ has trace zero. So the presence of one invertible matrix in the set of anticommuting matrices forces all the other matrices in the set to have trace zero. Thus if there are two (or more) invertible matrices in the set, then all of the matrices in the set have trace zero, so they can't be a basis, so there can't be $n^2$ of them.
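(For anyone who wants a quick numerical sanity check of this similarity argument, here is a small NumPy sketch; the particular pair of real Pauli-type matrices is just one convenient choice.)

```python
import numpy as np

# An invertible anticommuting pair (Pauli-type matrices), chosen for illustration.
A = np.array([[1, 0], [0, -1]])  # invertible
B = np.array([[0, 1], [1, 0]])

assert np.array_equal(A @ B, -(B @ A))           # AB = -BA
# B = A^{-1} (-B) A, i.e. B is similar to -B ...
assert np.allclose(np.linalg.inv(A) @ (-B) @ A, B)
print(np.trace(B))                               # ... so trace(B) = 0
```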
I wonder whether it's true that $AB=-BA$ implies $A$ and $B$ have trace zero, even if they are not invertible. If you can prove that, you win.
Suppose there were $n^2$ pairwise anticommuting matrices, i.e., as Gerry points out, a basis for the space of $n\times n$ matrices.
If the field were of characteristic two, then anticommuting would amount to commuting, since $-1=1$. Aside: a previous Question addresses how many linearly independent commuting matrices one can have, with a sharp count known for most fields. In any case, it is evident that we cannot have a basis of pairwise commuting matrices unless matrix multiplication were commutative, so in particular, in characteristic two we have at most $n^2-1$ pairwise (anti)commuting linearly independent matrices unless $n=1$.
For the rest of this post we consider only fields not of characteristic two.
Express the identity matrix as a linear combination of these, say $I= \sum r_i A_i$ where, without loss of generality, $r_1\neq 0$.
Clearly $A_1$ commutes with $I-r_1 A_1$ but anticommutes with the equal expression $\sum_{i\neq 1} r_i A_i$. Therefore the product $A_1(I-r_1A_1)$ equals its own negative and so must be zero, implying:
$$ A_1 = r_1 A_1^2 $$
and it follows that $r_1 A_1$ is idempotent. The same argument shows any nonzero $r_i A_i $ in the sum is idempotent.
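To spell out the algebra: multiplying $I=\sum_i r_iA_i$ by $A_1$ on the left and on the right and using $A_1A_i=-A_iA_1$ for $i\neq1$ gives
$$ A_1 = r_1A_1^2 + \sum_{i\neq 1} r_iA_1A_i \quad\text{and}\quad A_1 = r_1A_1^2 - \sum_{i\neq 1} r_iA_1A_i, $$
so adding the two yields $2A_1 = 2r_1A_1^2$, i.e. $A_1 = r_1A_1^2$; multiplying by $r_1$ shows $(r_1A_1)^2 = r_1A_1$.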
Indeed each of the $n^2$ matrices $A_i$ must appear in the representation, for if one did not, it would anticommute with all those that do appear, hence anticommute with the identity matrix, and so be zero (not possible for a member of a linearly independent set). Thus we have $n^2$ linearly independent, mutually anticommuting idempotent matrices $B_i = r_i A_i$ whose sum is $I$.
Now we can derive a contradiction with the fact that $I$ has rank $n$. In characteristic zero, Gerry's idea to use the trace works quickly: the trace of an idempotent is its rank, so the trace of the identity matrix would be at least $n^2$, while it actually equals $n$. Contradiction (except in the vacuous case $n=1$).
For a field of characteristic $p \gt 2$ we dig a little deeper. Distinct matrices $B_i,B_j$ not only anticommute, they actually annihilate one another: $B_i B_j = 0$ for $i\neq j$.
For any column $v$ of the idempotent $B_j$, $B_j v = v$. For $i \neq j$, then, $B_j (B_i v) = -B_i B_j v = -B_i v$. But the idempotent $B_j$ has no eigenvalue $-1$ (its eigenvalues are $0$ and $1$, and $-1 \neq 0,1$ in characteristic $p \gt 2$), so $B_i v = 0$. This is true for each column $v$ of $B_j$, so $B_i B_j = 0$.
It follows that the column spaces of $B_i$ and $B_j$ have trivial intersection, and so their (nonzero) ranks are additive: $B_i + B_j$ is again idempotent and $\operatorname{rank}(B_i + B_j) = \operatorname{rank}(B_i) + \operatorname{rank}(B_j)$. Thus we reach the same impossibility, that the rank of $I = \sum_i B_i$ is at least $n^2$, without relying on the trace operator.
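(As a toy illustration of this rank bookkeeping, here is a NumPy check with two hypothetical diagonal idempotents that annihilate one another; the matrices are chosen purely for illustration, over the reals.)

```python
import numpy as np

# Hypothetical diagonal idempotents that annihilate each other.
B1 = np.diag([1.0, 0.0, 0.0])
B2 = np.diag([0.0, 1.0, 0.0])

assert np.array_equal(B1 @ B1, B1) and np.array_equal(B2 @ B2, B2)  # idempotent
assert not (B1 @ B2).any() and not (B2 @ B1).any()                  # B_i B_j = 0

S = B1 + B2
assert np.array_equal(S @ S, S)                                     # sum is idempotent
# Ranks are additive; in characteristic zero, trace(idempotent) = rank:
print(np.linalg.matrix_rank(S), np.trace(B1) + np.trace(B2))        # 2 2.0
```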
Suppose the set in question is not a singleton and let $A$ be a member of this set. If the underlying field has characteristic 2, we pick $A\ne I$. Rewrite the equation $AB+BA=0$ as $\left(I\otimes A+A^T\otimes I\right)\operatorname{vec}(B)=0$. If the set contained $n^2$ linearly independent matrices, the $n^2-1$ members other than $A$ would be linearly independent solutions $B$ of this equation, so the bracketed matrix would have nullity at least $n^2-1$. Since every square matrix is similar to its transpose over any field, the sum of Kronecker products in the bracket is similar to $C=I\otimes A+A\otimes I$. So, if we can prove that the nullity of $C$ is strictly smaller than $n^2-1$, or equivalently $\operatorname{rank}(C)>1$, we are done.
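(Here is a small NumPy check of the vectorization identity used above, with random $A$ and $B$ chosen for illustration; it uses the column-stacking convention $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    return M.flatten(order="F")  # column-major (column-stacking) vectorization

# AB + BA = 0  <=>  (I kron A + A^T kron I) vec(B) = 0
K = np.kron(np.eye(n), A) + np.kron(A.T, np.eye(n))
print(np.allclose(K @ vec(B), vec(A @ B + B @ A)))  # True
```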
Suppose, on the contrary, that $\operatorname{rank}(C)\le1$. Extend the ground field $\mathbb F$ to its algebraic closure, and let the eigenvalues of $A$ be $\lambda_1,\ldots,\lambda_n$. The eigenvalues of $C$ are then the $n^2$ sums $\lambda_i+\lambda_j$ with $i,j\in\{1,2,\ldots,n\}$. It is not hard to see that the condition $\operatorname{rank}(C)\le1$ implies that $A$ must be nilpotent when $\operatorname{char}(\mathbb F)\ne2$, and $A$ is either unipotent or nilpotent when $\operatorname{char}(\mathbb F)=2$. Now there are two cases:

1. $A$ is nilpotent. Since the set is linearly independent, $A\ne0$, so the Jordan form of $A$ contains a nilpotent block of size at least two; let $e_1,e_2$ be the first two vectors of the corresponding Jordan chain, so that $Ae_1=0$ and $Ae_2=e_1$. Then $C(e_1\otimes e_2)=e_1\otimes e_1$ and $C(e_2\otimes e_2)=e_1\otimes e_2+e_2\otimes e_1$ are linearly independent, so $\operatorname{rank}(C)\ge2$, a contradiction.
2. $\operatorname{char}(\mathbb F)=2$ and $A=I+N$ is unipotent, with $N$ nilpotent. Since we chose $A\ne I$, we have $N\ne0$, and in characteristic two $C=I\otimes(I+N)+(I+N)\otimes I=I\otimes N+N\otimes I$. Hence case 1 applied to $N$ again gives $\operatorname{rank}(C)\ge2$, a contradiction.

Therefore the cardinality of any set of linearly independent, mutually anticommuting matrices is bounded above by $n^2-1$.
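(The eigenvalue description of $C$ used above is also easy to check numerically; the random $A$ below is just for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))

C = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
eig_C = np.sort_complex(np.linalg.eigvals(C))
pair_sums = np.sort_complex(np.add.outer(np.linalg.eigvals(A),
                                         np.linalg.eigvals(A)).ravel())
print(np.allclose(eig_C, pair_sums))  # True: eigenvalues of C are the lambda_i + lambda_j
```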
When $n\ge3$, an argument similar to the one in the second paragraph above shows that if $\operatorname{rank}(C)\le2$, we may still conclude that $A$ is nilpotent when $\operatorname{char}(\mathbb F)\ne2$, or that $A$ is unipotent/nilpotent when $\operatorname{char}(\mathbb F)=2$; moreover, with a third basis vector available (a longer Jordan chain or a second Jordan block), the computation in case 1 produces a third linearly independent image, so that in fact $\operatorname{rank}(C)\ge3$. Therefore the contradictions in cases 1 and 2 above still arise, and hence the upper bound $n^2-1$ is unattainable.
Therefore, one can find $n^2-1$ linearly independent and mutually anticommuting $n\times n$ matrices if and only if $n=2$ and $\operatorname{char}(\mathbb F)\ne2$. For instance, consider the set $$ \left\{\pmatrix{1&0\\ 0&-1},\ \pmatrix{0&1\\ -1&0},\ \pmatrix{0&1\\ 1&0}\right\}. $$
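(A quick NumPy check of this set: pairwise anticommutation, and linear independence via the rank of the stacked vectorizations.)

```python
import numpy as np

S = [np.array([[1, 0], [0, -1]]),
     np.array([[0, 1], [-1, 0]]),
     np.array([[0, 1], [1, 0]])]

for i in range(len(S)):
    for j in range(i + 1, len(S)):
        assert np.array_equal(S[i] @ S[j], -(S[j] @ S[i]))  # pairwise anticommuting

V = np.stack([M.ravel() for M in S])  # each matrix flattened to a row vector
print(np.linalg.matrix_rank(V))       # 3, so the set is linearly independent
```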
There can be at most $n^2$ linearly independent $n\times n$ matrices.
If there were $n^2$ linearly independent anticommuting matrices, you could form any other $n\times n$ matrix from them. In particular, you could form the identity matrix $I$.
Any matrix $M$ in the family anticommutes with every *other* matrix in the family, so if $M$ appears with coefficient zero in the expansion of $I$, then $M$ anticommutes with $I$. However, $MI = IM = M$, not $-M$, so $M$ would have to be zero, contradicting linear independence. If instead every member appears with a nonzero coefficient, the expansion of $I$ leads to the idempotent situation analyzed above. Either way the family cannot be a basis, and there cannot be more than $n^2 - 1$ of them.