Let $A$ be a diagonalizable transformation on a finite-dimensional vector space. If $B$ commutes with all transformations commuting with $A$, does it follow that $B=p(A)$ for some polynomial $p$?
This is based on an exercise in Halmos (1958, §84, ex. 5). I believe I've shown it to be true, but the original exercise makes the much more restrictive assumption that $A$ is Hermitian. So is it still true for an arbitrary diagonalizable transformation?
Here is a sketch of my arguments:
- Let $C$ be a transformation with distinct eigenvalues whose eigenbasis agrees with an eigenbasis of $A$. Then $C$ commutes with $A$.
- Therefore, $C$ commutes with $B$ (by assumption). Since $C$ has distinct eigenvalues, anything commuting with $C$ is diagonal in the eigenbasis of $C$; in particular, $B$ and $C$ are simultaneously diagonalizable. Since $A$ is also diagonal in that basis, $A$ and $B$ are simultaneously diagonalizable.
- Let $\sigma_i$ be the eigenvalues of $A$ with corresponding eigenvectors $v_i$, and let $\lambda_i$ be the eigenvalues of $B$ (with respect to the same basis). Let $E_{ij}$ be the transformation given by $$ E_{ij}v_k = \begin{cases}v_i&k=j\\0&k\ne j\end{cases} $$ (that is, the matrix with a single $1$ in position $(i,j)$ in the eigenbasis). Then $\sigma_i=\sigma_j$ implies that $E_{ij}$ commutes with $A$, and thus also with $B$; applying both sides of $BE_{ij}=E_{ij}B$ to $v_j$ gives $\lambda_i v_i = \lambda_j v_i$, so $\lambda_i=\lambda_j$.
- Finally, $B=p(A)$, where $p$ is any polynomial such that $p(\sigma_i)=\lambda_i$ (which is well-defined by the previous point).
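As a quick numerical sanity check of the last two points (not a proof), here is a sketch with numpy: take a non-Hermitian diagonalizable $A$ with a repeated eigenvalue, take $B$ diagonal in the same (non-orthogonal) eigenbasis with $\lambda_i$ constant wherever $\sigma_i$ repeats, interpolate $p$ on the distinct eigenvalues, and verify $p(A) = B$. All the specific numbers are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-Hermitian diagonalizable A = S D S^{-1}: S is a random (generically
# invertible, non-unitary) matrix, so the eigenbasis is not orthogonal.
sigma = np.array([2.0, 2.0, 5.0, -1.0])   # eigenvalues of A, one repeated
S = rng.standard_normal((4, 4))
A = S @ np.diag(sigma) @ np.linalg.inv(S)

# B diagonal in the same basis; lambda_i must agree where sigma_i repeats
# (this is exactly what the E_ij argument forces).
lam = np.array([7.0, 7.0, 0.5, 3.0])
B = S @ np.diag(lam) @ np.linalg.inv(S)

# Interpolating polynomial p with p(sigma_i) = lambda_i on the distinct sigmas.
distinct, values = [2.0, 5.0, -1.0], [7.0, 0.5, 3.0]
coeffs = np.polyfit(distinct, values, deg=len(distinct) - 1)

# Evaluate p(A) by Horner's scheme (polyfit returns highest degree first).
pA = np.zeros_like(A)
for c in coeffs:
    pA = pA @ A + c * np.eye(4)

print(np.allclose(pA, B))  # p(A) recovers B
```

The check passes without any Hermitian assumption, which is at least consistent with the claim that only diagonalizability matters.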
I can't pinpoint any problems with the above, and in particular I can't see why an orthonormal eigenbasis or real eigenvalues should make a difference. Any thoughts?