Let $V$ be a finite dimensional inner product vector space over $\mathbb{R}$ and let $T : V \to V$ be a normal operator (that is, $T T^\ast = T^\ast T$) whose characteristic polynomial splits over $\mathbb{R}$. Is it true that $T$ admits an orthonormal basis of eigenvectors?

I know that the above result is true for complex vector spaces, but I would like to know whether it is also true for real vector spaces (or even vector spaces over other fields).

  • I believe the answer is yes. It can be shown that the space is the direct sum of the kernels of the irreducible factors of the polynomial applied to the operator. If those factors are linear, that's an eigenspace decomposition. – blargoner Aug 03 '22 at 16:50

1 Answer

The answer to your question is yes for real vector spaces (and for vector spaces over any subfield of $\Bbb C$). The simplest approach is to represent $T$ as a matrix relative to an orthonormal basis, which makes its complexification easy to work with.

Suppose that $T$ is a square, real matrix satisfying $T^*T = TT^*$ (where $T^*$ denotes the conjugate-transpose, which is equal to the usual transpose when $T$ has real entries). By the spectral theorem for normal matrices, there exists a unitary matrix $U$ such that $T = UDU^*$, where $D$ is a diagonal matrix whose diagonal entries are the eigenvalues of $T$. Because the characteristic polynomial of $T$ splits over $\Bbb R$, its eigenvalues are real and $D$ is a real, diagonal matrix. It follows that $$ T^* = (UDU^*)^* = UD^*U^* = UDU^* = T. $$ Thus, $T$ is a self-adjoint matrix with real entries, which is to say that it is a symmetric real matrix. The conclusion now follows from the spectral theorem for symmetric matrices.
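For concreteness, here is a NumPy sketch of the dichotomy (a numerical illustration with matrices of my own choosing, not part of the proof): a symmetric matrix satisfies both hypotheses and is orthogonally diagonalizable, while a rotation matrix is normal but has non-real eigenvalues, and indeed fails to be symmetric.

```python
import numpy as np

# A real symmetric matrix: normal, and its characteristic polynomial splits over R.
T = np.array([[2.0, 1.0], [1.0, 2.0]])
assert np.allclose(T @ T.T, T.T @ T)             # normal
assert np.all(np.isreal(np.linalg.eigvals(T)))   # real eigenvalues

# A rotation matrix: normal, but its eigenvalues are +-i,
# so the splitting hypothesis fails and it need not be symmetric.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(R @ R.T, R.T @ R)
assert not np.all(np.isreal(np.linalg.eigvals(R)))

# Spectral theorem for symmetric matrices: T = U D U^T with U orthogonal.
eigvals, U = np.linalg.eigh(T)
assert np.allclose(U @ np.diag(eigvals) @ U.T, T)
assert np.allclose(U.T @ U, np.eye(2))           # columns form an orthonormal basis
```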


The answer is no for fields of positive characteristic. For example, the matrix $$ A = \pmatrix{1&1\\1&1} $$ with entries in $\Bbb F_2$ is normal (here the adjoint is the plain transpose), has characteristic polynomial $p(x) = x^2$, but is not diagonalizable (let alone diagonalizable with an orthonormal eigenbasis).
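One can check the counterexample by hand, or with a quick sketch that emulates $\Bbb F_2$ by reducing integer arithmetic mod $2$:

```python
import numpy as np

# Emulate F_2 by reducing integer arithmetic mod 2.
A = np.array([[1, 1], [1, 1]], dtype=int)

# Over F_2 the adjoint is the plain transpose; A is symmetric, hence normal.
assert np.array_equal((A @ A.T) % 2, (A.T @ A) % 2)

# A != 0 but A^2 = 0 mod 2, so A is nonzero and nilpotent, hence not
# diagonalizable (a diagonalizable nilpotent matrix would be zero).
assert A.any()
assert not ((A @ A) % 2).any()
```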


For a more direct proof (that doesn't require an appeal to the complexification of a vector space), we can proceed as follows. Note the following:

  • If $T$ is normal, then so is $T - \lambda I$.
  • If $T$ is normal, then $\ker(T) = \ker(T^*)$ (more generally, $\|Tx\| = \|T^*x\|$).
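Both facts are easy to sanity-check numerically. The sketch below uses the normal but non-symmetric matrix $\pmatrix{1&-2\\2&1}$ and an arbitrary choice of $\lambda$ and $x$ (my own test values, for illustration only):

```python
import numpy as np

# A real normal matrix that is not symmetric: here T T^T = T^T T = 5 I.
T = np.array([[1.0, -2.0], [2.0, 1.0]])
assert np.allclose(T @ T.T, T.T @ T)

# If T is normal, then so is T - lam*I (lam = 1.5 is an arbitrary choice).
lam = 1.5
S = T - lam * np.eye(2)
assert np.allclose(S @ S.T, S.T @ S)

# ||T x|| = ||T^* x|| for every x (here T^* is just the transpose).
x = np.array([0.7, -0.3])
assert np.isclose(np.linalg.norm(T @ x), np.linalg.norm(T.T @ x))
```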

Lemma 1: If $T$ is normal and has $\lambda$ as its only eigenvalue, then it must be the map $T(x) = \lambda x$.

Proof: Otherwise, there exists a vector $x$ such that $(T - \lambda I)x \neq 0$, but $(T - \lambda I)^2 x = 0$. But this is impossible: we have $$ \begin{align} (T - \lambda I)^2 x = 0 &\implies (T - \lambda I)^* (T - \lambda I)x = 0 \\ & \implies \langle x,(T - \lambda I)^* (T - \lambda I)x \rangle = 0 \\ & \implies \langle (T - \lambda I)x, (T - \lambda I)x \rangle = 0 \\ & \implies (T - \lambda I)x = 0, \end{align} $$ contradicting our premise. (The first implication holds because $T - \lambda I$ is normal, so that $\ker(T - \lambda I) = \ker((T - \lambda I)^*)$, applied to the vector $(T - \lambda I)x$.)
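Lemma 1 can be seen "in reverse" with a Jordan block: it has a single eigenvalue but is not $\lambda I$, and, consistent with the lemma, it fails to be normal. A NumPy sketch (with an arbitrary illustrative choice $\lambda = 2$):

```python
import numpy as np

lam = 2.0  # an arbitrary eigenvalue for illustration

# A Jordan block has lam as its only eigenvalue, yet is not lam*I;
# consistent with Lemma 1, it is not normal:
J = np.array([[lam, 1.0], [0.0, lam]])
assert not np.allclose(J @ J.T, J.T @ J)

# The vector killed by (J - lam I)^2 but not by (J - lam I) is
# exactly what the lemma's proof rules out for normal operators:
S = J - lam * np.eye(2)
x = np.array([0.0, 1.0])
assert np.allclose(S @ (S @ x), 0)   # (J - lam I)^2 x = 0
assert not np.allclose(S @ x, 0)     # but (J - lam I) x != 0
```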

Lemma 2: If $T$ is normal and $\lambda \neq \mu$, then $\ker(T - \lambda I)$ and $\ker(T - \mu I)$ are mutually orthogonal subspaces.

See the proof given here for instance.

Let $n = \dim(V)$. For any operator $T$ whose characteristic polynomial splits, we can write $V$ as a direct sum of generalized eigenspaces $$ V = \ker((T - \lambda_1 I)^n) \oplus \cdots \oplus \ker((T - \lambda_k I)^n), $$ where $\lambda_1,\dots,\lambda_k$ are the distinct eigenvalues of $T$. By Lemma 1, $T$ restricted to $\ker((T - \lambda_i I)^n)$ is simply the map $T(x) = \lambda_i x$, so each summand is in fact the eigenspace $\ker(T - \lambda_i I)$. By Lemma 2, these summands are mutually orthogonal. Conclude that combining orthonormal bases for each summand produces an orthonormal basis of eigenvectors of $T$.
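As a final sanity check, here is a NumPy sketch that assembles an orthonormal eigenbasis eigenspace-by-eigenspace for a symmetric test matrix with known eigenvalues (the helper `eigenspace_basis` is my own illustration, not a library routine):

```python
import numpy as np

# Build a real symmetric (hence normal) T with known eigenvalues 3, 3, 5.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
T = Q @ np.diag([3.0, 3.0, 5.0]) @ Q.T

def eigenspace_basis(T, lam, tol=1e-10):
    """Orthonormal columns spanning ker(T - lam*I), computed via the SVD."""
    _, s, Vt = np.linalg.svd(T - lam * np.eye(T.shape[0]))
    return Vt[s < tol].T

# Combine orthonormal bases of the individual eigenspaces ...
B = np.hstack([eigenspace_basis(T, lam) for lam in (3.0, 5.0)])
assert np.allclose(B.T @ B, np.eye(3))       # ... the result is orthonormal ...
D = B.T @ T @ B
assert np.allclose(D, np.diag(np.diag(D)))   # ... and it diagonalizes T
```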

Ben Grossmann