9

Suppose $T$ is a linear operator on a complex inner product space. It is a theorem that if $\langle Tx,x\rangle=0$ for all $x$ in the space, then $T=0$. The theorem fails in the real case, as seen, for instance, by rotation by $\pi/2$ on $\mathbb{R}^2$.
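Concretely, rotation by $\pi/2$ is given by
$$T = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \langle Tx, x\rangle = (-x_2)x_1 + x_1 x_2 = 0 \quad \text{for all } x = (x_1, x_2) \in \mathbb{R}^2,$$
even though $T \neq 0$.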

Is there anything deeper behind this fact, or can it mostly be looked at as a quirk of the conjugate-linearity of the complex inner product?

If anyone is interested in looking up the proof, it is Theorem 9.2 in Roman's Linear Algebra, third edition.

Jonas Meyer
  • 53,602
Eric Auld
  • 28,127
    For me, it seems that the main difference is the existence of eigenvectors. – Sangchul Lee Dec 23 '14 at 00:33
  • I have a hunch that this is related to the existence of orthogonal eigenvectors for a symmetric matrix, but the lack of 'unitary' eigenvectors. I also think a connection between the two can be made by considering the exponential map from Lie theory, so that the condition $\langle Tx,x\rangle + \langle x,Tx\rangle=0$ would be translated to $\langle x,x\rangle = \langle \exp(T)x,\exp(T)x\rangle$. But I'd have to think hard to figure out the missing pieces. – Myself Dec 23 '14 at 00:37
  • @sos440: This holds just as well over infinite dimensional inner product spaces, on which linear operators need not have any eigenvectors, even in the complex case. – Jonas Meyer Dec 23 '14 at 04:24
  • @JonasMeyer, You're right. I thought about it for a while and realized that, even in the finite-dimensional case, my idea does not reveal much. Thank you! – Sangchul Lee Dec 23 '14 at 05:18

2 Answers

7

The key is to see how $T$ acts on an arbitrary basis. Suppose $\langle Tx,x \rangle = 0$ for all $x \in V$, where $V$ is a (real or complex) inner product space.

If $T$ acts on a real space with (Hamel) basis $\{e_j\}_{j \in \alpha}$, then we have

  • $\langle e_j, Te_j \rangle = 0$
  • $ \langle e_j + e_k, T(e_j + e_k) \rangle = 0 \implies \langle e_j, T e_k \rangle = -\langle e_k, T e_j \rangle $ (see the expansion below)

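To spell out the second bullet, expand using bilinearity and use the first bullet to kill the diagonal terms:
$$0 = \langle e_j + e_k, T(e_j + e_k) \rangle = \langle e_j, Te_k \rangle + \langle e_k, Te_j \rangle,$$
so $\langle e_j, Te_k \rangle = -\langle e_k, Te_j \rangle$.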
In the real case, symmetry of the inner product lets us rewrite this as $\langle e_j, T e_k \rangle = -\langle Te_j, e_k \rangle$ for all $j,k$, which is enough to deduce that $T = -T^*$. If $T$ acts on a complex space, we have the additional constraint

  • $ \langle e_j + ie_k, T(e_j + ie_k) \rangle = 0 \implies \langle e_j, T e_k \rangle = \langle e_k, T e_j \rangle $ (see the expansion below)

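Writing the inner product as conjugate-linear in its first argument (the opposite convention leads to the same conclusion), the expansion reads
$$0 = \langle e_j + ie_k, T(e_j + ie_k) \rangle = i\langle e_j, Te_k \rangle + \overline{i}\,\langle e_k, Te_j \rangle = i\left( \langle e_j, Te_k \rangle - \langle e_k, Te_j \rangle \right),$$
so $\langle e_j, Te_k \rangle = \langle e_k, Te_j \rangle$.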
Adding this to the relation $\langle e_j, Te_k \rangle = -\langle e_k, Te_j \rangle$ (whose derivation is unchanged over $\mathbb{C}$) gives $\langle e_j, Te_k \rangle = 0$ for all $j, k$, which allows us to deduce that $T = 0$.

We deduce that on a real space, $\langle Tx,x \rangle = 0$ for all $x \in V$ $\iff T^* = -T$, and on a complex space, $\langle Tx,x \rangle = 0$ for all $x \in V$ $\iff T = 0$.

Note: I haven't explicitly proved the converse in either case. I think you'll find that, in each case, the proof is straightforward; a sketch of the real case follows.
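For the real case: if $T^* = -T$, then for every $x$,
$$\langle Tx, x \rangle = \langle x, T^*x \rangle = -\langle x, Tx \rangle = -\langle Tx, x \rangle,$$
so $\langle Tx, x \rangle = 0$. (In the complex case, the converse is immediate, since $T = 0$.)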


While I can't say whether the result is deep, I can say that this shows that the inner product becomes much more powerful over complex spaces.

A consequence of this quirk is that when one defines positive definite operators over a real inner product space, it matters whether one also requires the operator to be self-adjoint. As real bilinear forms, two matrices act the same if and only if they have the same self-adjoint part. That is, we have $A + A^*= B + B^* \iff \langle x,Ax \rangle = \langle x,Bx \rangle$ for all $x$.
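One direction of this equivalence is a quick computation over a real space:
$$2\langle x, Ax \rangle = \langle x, Ax \rangle + \langle Ax, x \rangle = \langle x, (A + A^*)x \rangle,$$
so the quadratic form of $A$ only sees $A + A^*$. The other direction follows from the result above: if $\langle x, (A - B)x \rangle = 0$ for all $x$, then $(A - B)^* = -(A - B)$, which rearranges to $A + A^* = B + B^*$.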

For complex inner-product spaces, the additional specification of self-adjointness is redundant, and we have $A = B \iff \langle x,Ax \rangle = \langle x,Bx \rangle$ for all $x$.


Another interesting quirk: the equivalence $$ \|x+y\|^2 = \|x\|^2 + \|y\|^2 \iff \langle x,y \rangle = 0 $$ holds only for real inner-product spaces; over a complex space, only the "$\Leftarrow$" direction survives.
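A counterexample to the "$\Rightarrow$" direction over $\mathbb{C}$: take any $x \neq 0$ and $y = ix$. Then
$$\|x + ix\|^2 = |1 + i|^2 \|x\|^2 = 2\|x\|^2 = \|x\|^2 + \|ix\|^2,$$
yet $\langle x, ix \rangle = \pm i\|x\|^2 \neq 0$ (the sign depending on which argument is conjugate-linear).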

Ben Grossmann
  • 225,327
2

I believe this fact follows easily for self-adjoint operators (in finite dimensions), and since every $T$ can be written as $T=R+iS$, where $R,S$ are self-adjoint operators, the theorem follows for every $T$. I wrote the proof below.
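For concreteness, the decomposition meant here is presumably the standard Cartesian one:
$$R = \frac{T + T^*}{2}, \qquad S = \frac{T - T^*}{2i},$$
and one checks directly that $R^* = R$, $S^* = S$, and $R + iS = T$.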

The condition $\langle Tx,x\rangle=0$, for every $x$, implies $0=\langle Rx,x\rangle+i\langle Sx,x\rangle$.

Now, $\langle Rx,x\rangle, \langle Sx,x\rangle\in\mathbb{R}$ for every $x$, since $R$ and $S$ are self-adjoint (indeed, $\langle Rx,x\rangle = \langle x,Rx\rangle = \overline{\langle Rx,x\rangle}$). Thus, $\langle Rx,x\rangle=\langle Sx,x\rangle=0$ for every $x$, these being the real and imaginary parts of $\langle Tx,x\rangle=0$.

We know by the spectral theorem (in finite dimensions) that $R$ is diagonalizable. If $x$ is an eigenvector associated with an eigenvalue $a$, then $0=\langle Rx,x\rangle=a\langle x,x\rangle$, so $a=0$. Thus every eigenvalue is zero, and since $R$ is diagonalizable, $R=0$. Of course, the same occurs with $S$. Thus, $T=0$. $\square$

Daniel
  • 5,872