3

If we consider the set of all $n \times n$ matrices $M$, and denote by $Z(M)$ the set of matrices which commute with every matrix in $M$, then $Z(M)$ turns out to consist exactly of the scalar multiples of the identity.
I personally dislike the only proof I know. So I was wondering if anybody here knows of a more elegant proof, or rather just a different proof, given that "elegant" is subjective.

Forgot to mention: the proof I know is the one where we take $A \in Z(M)$ and compute its products with the standard basis of the space of $n \times n$ matrices, noticing that $A$ must be scalar in light of the resulting conditions on the rows and columns of $A$.
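(Sketching that computation: writing $E_{ij}$ for the matrix with a $1$ in position $(i,j)$ and zeros elsewhere, comparing entries of $AE_{ij}$ and $E_{ij}A$ gives $(AE_{ij})_{kl} = a_{ki}\delta_{jl}$ and $(E_{ij}A)_{kl} = \delta_{ik}a_{jl}$; taking $l = j$ with $k \neq i$ forces $a_{ki} = 0$, while taking $k = i$ and $l = j$ forces $a_{ii} = a_{jj}$, so $A$ is scalar.)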
Thank you for all the proofs; I am most definitely satisfied with their elegance and variety.

DevVorb
  • 1,327

6 Answers

5

(Too long for a comment.)

Robert Israel's proof is the best one that I have ever seen, but to beginners, it may appear a bit abstract or even confusing.

In terms of matrices, the idea is very simple: if $A$ commutes with every square matrix, it must in particular commute with every rank-one matrix. Thus $Axy^T=xy^TA$ for all vectors $x$ and $y$. If we pick a pair of vectors $y$ and $z$ such that $y^Tz=1$ (e.g. $y=z=(1,0,\ldots,0)^T$), then $Axy^Tz=xy^TAz$ for all vectors $x$. That is, $Ax=(y^TAz)x$ for all $x$. Hence $A$ is a scalar multiple of the identity matrix.

Note that the above proof is still valid if the elements of $A$ are taken from a commutative ring rather than a field.

Rank-one matrices are actually quite useful. Some rather non-trivial propositions (such as this) can be proved by considering rank-one matrices. Their uses are somewhat under-taught in undergraduate courses.

user1551
  • 139,064
4

$\def\id{\mathrm{id}}$Instead of working with matrices, let us work with linear maps.

Let $k$ be an algebraically closed field, let $V$ be a non-zero finite-dimensional $k$-vector space, and suppose that $f:V\to V$ is a linear map that commutes with all endomorphisms of $V$. Since $k$ is algebraically closed, the characteristic polynomial of $f$ has a root in $k$, so there is a scalar $\lambda\in k$ and a non-zero vector $v$ such that $f(v)=\lambda v$. Let $w$ be any other vector in $V$. There is a linear map $g:V\to V$ such that $g(v)=w$, and therefore $f(w)=f(g(v))=g(f(v))=g(\lambda v)=\lambda g(v)=\lambda w$. We thus see that $f$ is simply $\lambda\id_V$.
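For completeness, one such $g$ can be written down explicitly: since $v \neq 0$, extend it to a basis $\{v, u_2, \ldots, u_n\}$ of $V$, and let $g$ be the unique linear map with $g(v) = w$ and $g(u_i) = 0$ for every $i$; a linear map can be prescribed freely on a basis, so this $g$ exists.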

  • This is rather elegant, but doesn't it only work if the field is algebraically closed? Or is there some other way to see that $f$ necessarily has an eigenvalue? – Jacob Maibach Nov 17 '22 at 23:47
  • 2
    Once we know that this holds over algebraically closed fields, we can extend to all fields as follows: let $A \in Z(M_n(F))$ be arbitrary. Then $A$ commutes with every basis matrix $E_{ij}$, and thus $A$ also lies in the center of $M_n(\overline{F})$ (for any algebraic closure $\overline{F}$ of $F$). Then we know $A = \lambda I$ for some $\lambda \in \overline{F}$, but also $A \in M_n(F)$, so we have $\lambda \in F$. $\square$ – diracdeltafunk Nov 18 '22 at 05:26
2

First: if $n > 1$, then no element of $Z(M_n(F))$ has rank 1. Indeed, if $R$ has rank 1, pick a linear transformation $T$ whose kernel is the $1$-dimensional image of $R$ and whose image is not contained in $\ker R$ (possible since $n > 1$); then $TR = 0$ while $RT \neq 0$, so $R$ does not commute with $T$.

Now: If $A \in Z(M_n(F))$, then $XAX^{-1} = A$ for all $X \in GL_n(F)$. When $X$ is a permutation matrix, $XAX^{-1}$ is the matrix obtained by permuting the rows and columns of $A$ according to the permutation encoded by $X$. Since $S_n$ acts transitively and 2-transitively on $\{1,\dots,n\}$, we find that all of the diagonal entries of $A$ are equal and that all of the non-diagonal entries of $A$ are equal, i.e. $A = s1 + tI$ for some $s,t \in F$, where $1$ denotes the matrix whose entries are all $1$. If $n=1$ then we are done. Otherwise, since $tI \in Z(M_n(F))$, we conclude that $s1 \in Z(M_n(F))$. But if $s \neq 0$ then $s1$ would have rank 1! So we must have $s = 0$ and thus $A = tI$, as desired.
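As a quick sanity check for $n = 2$: conjugating by the transposition matrix gives
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} = \begin{pmatrix}a_{22}&a_{21}\\a_{12}&a_{11}\end{pmatrix},$$
so $XAX^{-1} = A$ forces $a_{11} = a_{22}$ and $a_{12} = a_{21}$, i.e. $A = s1 + tI$ with $s = a_{12}$ and $t = a_{11} - a_{12}$.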

1

Let $A$ be a central matrix.

If $v$ is a nonzero vector, then there is an idempotent matrix $B$ of rank 1 whose image is spanned by $v$. Then $v$ spans the eigenspace of $B$ with eigenvalue $1$, and since $A$ commutes with $B$, we get $B(Av) = A(Bv) = Av$, so $Av$ lies in that eigenspace; hence $v$ is an eigenvector of $A$.
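One explicit choice of $B$: extend $v$ to a basis of the space, let $w^T$ be the row vector (functional) sending $v$ to $1$ and the other basis vectors to $0$, and set $B = vw^T$; then $B^2 = v(w^Tv)w^T = B$, $B$ has rank 1, and its image is spanned by $v$.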

Now it is easy to check that if every nonzero vector is an eigenvector of $A$, then $A$ is a multiple of the identity.

1

Consider $f \in Z(M)$. We first show that every nonzero $v \in V$ is an eigenvector of $f$, and then show that the eigenvalue does not depend on $v$.

Let $E(v)$ be the set of linear maps with $v$ as an eigenvector. It is clearly nonempty since it contains the identity. If $g \in E(v)$ with eigenvalue $\lambda_{g}$, we can show that $f(v)$ is an eigenvector of $g$ as follows. $$ g(f(v)) = f(g(v)) = f(\lambda_{g} v) = \lambda_{g} f(v). $$

Therefore if we can find two maps $g_{1}, g_{2} \in E(v)$ whose only common eigenvectors are $v$ and its multiples, we can conclude that $f(v)$ is a multiple of $v$. Choose a basis $\beta = \{ v_{\alpha} : \alpha \}$ with $v_{0} = v$, and take $g_{1}$ to be a map for which every $v_{\alpha}$ is an eigenvector with eigenvalue $\lambda_{\alpha}$, the $\lambda_{\alpha}$ pairwise distinct (this requires the field to have at least $\dim V$ elements). Then take $g_{2}(v_{\alpha}) = g_{1}(v_{\alpha}) + v$. Now $v = v_{0}$ is an eigenvector of $g_{2}$ with eigenvalue $\lambda_{0}+1$, but for all other $\alpha$ we have $g_{2}(v_{\alpha}) = \lambda_{\alpha} v_{\alpha} + v \neq \lambda v_{\alpha}$ for any $\lambda$. As such, $v$ is an eigenvector of $f$.
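For a concrete instance of this construction, take $V = F^{2}$, $v = v_{0} = e_{1}$, $v_{1} = e_{2}$, $\lambda_{0} = 0$, $\lambda_{1} = 1$: then
$$g_{1} = \begin{pmatrix}0&0\\0&1\end{pmatrix}, \qquad g_{2} = \begin{pmatrix}1&1\\0&1\end{pmatrix},$$
and the only common eigenvectors of $g_{1}$ and $g_{2}$ are the multiples of $e_{1}$.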

Now we have that $f(v) = \lambda_{v} v$ for all $v$. To see that the eigenvalues are all the same, consider linearly independent vectors $v_{1}$ and $v_{2}$ with eigenvalues $\lambda_{1}$ and $\lambda_{2}$. Since $v_{1}+v_{2}$ is also an eigenvector with eigenvalue $\lambda_{12}$, we have $$ \lambda_{1}v_{1} + \lambda_{2}v_{2} = f(v_{1}+v_{2}) = \lambda_{12}(v_{1} + v_{2}). $$ This implies $$(\lambda_{12} - \lambda_{1})v_{1} + (\lambda_{12} - \lambda_{2})v_{2} = 0. $$ By linear independence we must have $$ (\lambda_{12} - \lambda_{1}) = (\lambda_{12} - \lambda_{2}) = 0$$ or rather $\lambda_{1} = \lambda_{2}$. By the arbitrary nature of $v_{1}$ and $v_{2}$, we have that $f$ is a scalar transformation.

Jacob Maibach
  • 2,512
  • 2
  • 14
  • 20
0

Expanding on Mariano Suárez-Álvarez's answer, when $F$ is a field and $\mathcal{A}$ is a subset of $M_n(F)$, let us denote by $Com(\mathcal{A})$ the set of matrices that commute with all elements of $\mathcal{A}$.

We assume that for every nonzero vector $v$ in $F^n$, there is an element $A$ of $\mathcal{A}$ such that $Fv$ is an eigenspace of $A$. Then $Com(\mathcal{A})$ contains only multiples of the identity matrix. To prove it, we use two classical lemmas: one says that if two endomorphisms commute, they stabilize each other's eigenspaces; the other says that if an endomorphism stabilizes every line (every $1$-dimensional subspace), then it is a multiple of the identity.
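The first lemma is a one-line computation: if $AB = BA$ and $Av = \lambda v$, then $A(Bv) = B(Av) = \lambda (Bv)$, so $B$ maps the $\lambda$-eigenspace of $A$ into itself.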

This can be used to determine the center of various subsets of $M_n(F)$, including $M_n(F)$ itself.

Plop
  • 2,647