
This post is inspired by an accepted answer to a question posed here: Prove that the linear transformations are the same.

Specifically, the question is about the method used (and accepted) in proving the proposition:

If X is a complex inner product space and $S,T \in B(X)$ are such that $(Sz,z)=(Tz,z)\forall z \in X$, then $S=T$.

The accepted answer provides a brief proof...and the method used has caused me a great deal of grief, as I cannot understand why it is true!

In my below post, I am (purposely) going to slowly go through my thought process...working my way to the final point of confusion that is preventing me from understanding the accepted answer's proof.


Author's Proof

Let $Q=S-T$. By assumption, $\langle Qv,v \rangle =0$ for all $v \in X$; in particular, for $v=\alpha x + y$. Now, $$ 0=\langle Q(\alpha x+y),\alpha x+y \rangle = |\alpha|^2 \langle Qx,x \rangle + \langle Qy,y \rangle + \alpha \langle Qx,y \rangle + \bar{\alpha} \langle Qy,x \rangle \\ = \alpha \langle Qx,y \rangle + \bar{\alpha} \langle Qy,x \rangle. $$ Choosing first $\alpha =1$ and then $\alpha =i$, we get $$ \langle Qx,y \rangle + \langle Qy,x \rangle=0 $$ and $$ \langle Qx,y \rangle - \langle Qy,x \rangle =0. $$ Sum and conclude that $\langle Qx,y \rangle =0$ for all $x$ and all $y$. Hence $Q=0$.
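The expansion in this proof can be checked numerically. Below is a small sketch (my own, not part of the accepted answer), using numpy and the convention that the inner product is linear in the first slot and conjugate-linear in the second. Note that $Q$ here is an arbitrary matrix, not one satisfying the hypothesis, so the identity itself is visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner product linear in the FIRST slot, conjugate-linear in the second:
# <u, v> = v^H u.  (np.vdot conjugates its first argument.)
def ip(u, v):
    return np.vdot(v, u)

n = 3
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def expand(alpha):
    """<Q(alpha*x + y), alpha*x + y> computed directly."""
    v = alpha * x + y
    return ip(Q @ v, v)

def expansion_rhs(alpha):
    """The term-by-term expansion used in the proof."""
    return (abs(alpha) ** 2 * ip(Q @ x, x) + ip(Q @ y, y)
            + alpha * ip(Q @ x, y) + np.conj(alpha) * ip(Q @ y, x))

for alpha in [1, 1j, 2 - 3j]:
    assert np.isclose(expand(alpha), expansion_rhs(alpha))

# Instantiating at alpha = 1 and alpha = i and combining isolates <Qx, y>:
s1 = expand(1) - ip(Q @ x, x) - ip(Q @ y, y)    # = <Qx,y> + <Qy,x>
s2 = expand(1j) - ip(Q @ x, x) - ip(Q @ y, y)   # = i<Qx,y> - i<Qy,x>
recovered = (s1 - 1j * s2) / 2
assert np.isclose(recovered, ip(Q @ x, y))
```

Because the same fixed `x` and `y` appear in both instantiations, the combination recovers $\langle Qx,y \rangle$ exactly; under the hypothesis $s_1 = s_2 = 0$, which forces $\langle Qx,y \rangle = 0$.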


The offending line, in particular, derives from the fact that I do not understand why the $x,y$ values in the final two equations are the same $x,y$ values. To me, the final two equations should, instead, read:

$$ \langle Qx_1,y_1 \rangle + \langle Qy_1,x_1 \rangle=0 $$ and $$ \langle Qx_2,y_2 \rangle - \langle Qy_2,x_2 \rangle =0. $$

As such, one should not be able to sum the terms and conclude $\langle Qx,y \rangle =0$.


For purposes of my understanding, I chose a small, workable dimension: $\dim(V) = 3$.

Let $v^*$ be an arbitrary vector within an inner product space $V$ of dimension $3$. Consider an arbitrary orthonormal basis of $V$ of the form $e=\{e_1,e_2,e_3\}$, and write $v^*=\eta_1 e_1 + \eta_2 e_2 + \eta_3 e_3$.

Now, for the purpose of this proof, note that $v^*$ can also be constructed from a linear combination of two vectors. There are many possibilities to choose from, but for simplicity, consider the case where $x = \frac{1}{\eta_2}e_1 + \frac{1}{\eta_1}e_2$ and $y = e_3$ (assuming $\eta_1, \eta_2 \neq 0$). We can thus reframe $v^*$ as the sum of two vectors:

$v^* = \alpha x + \beta y$ where $\alpha = \eta_1 \eta_2$ and $\beta = \eta_3$.
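This decomposition can be verified numerically. Here is a quick numpy sketch (the coefficient values are hypothetical; any choice with $\eta_1, \eta_2 \neq 0$ works):

```python
import numpy as np

# Hypothetical sample coefficients (eta_1 and eta_2 must be nonzero):
eta = np.array([2.0 + 1.0j, -0.5j, 3.0])
e = np.eye(3)  # orthonormal basis e_1, e_2, e_3 (as rows)

v_star = eta[0] * e[0] + eta[1] * e[1] + eta[2] * e[2]

# x = (1/eta_2) e_1 + (1/eta_1) e_2,  y = e_3
x = (1 / eta[1]) * e[0] + (1 / eta[0]) * e[1]
y = e[2]
alpha = eta[0] * eta[1]
beta = eta[2]

# alpha*x + beta*y reproduces v* exactly
assert np.allclose(alpha * x + beta * y, v_star)
```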

Now, assume that for the inner product space $V$, there exist two linear operators $T$ and $S$ such that, $\forall v \in V$, $\langle Tv, v \rangle = \langle Sv, v \rangle$.

Subtracting $\langle Sv, v \rangle$ from both sides:

$\forall v \in V$, $\langle Tv, v \rangle - \langle Sv, v \rangle = 0$

Due to linearity of an inner product,

$\forall v \in V$, $\langle Tv - Sv, v \rangle = 0$

By definition of operator addition and subtraction, $(A-B)v = Av - Bv$. Thus, with $Q:=T-S$:

$\forall v \in V$, $\langle Qv, v \rangle = 0$

This is simply a restatement of our initial assumption of the system that we are working with.

Moving forward, let $v=v^*= \alpha x + \beta y$.

$\forall v \in V$, $\langle Q(\alpha x + \beta y), \alpha x + \beta y \rangle=0$

By linearity property of operators:

$\forall v \in V$, $\langle Q\alpha x + Q\beta y, \alpha x + \beta y \rangle=0$

By additional linearity property of operators:

$\forall v \in V$, $\langle \alpha Qx + \beta Qy, \alpha x + \beta y \rangle=0$

Now, by linearity of inner products:

$\forall v \in V$, $\langle \alpha Qx, \alpha x \rangle +\langle \alpha Qx, \beta y \rangle + \langle \beta Qy, \alpha x \rangle + \langle \beta Qy, \beta y \rangle=0$

By linearity in the first slot of the inner product and conjugate linearity in the second:

$\alpha\bar{\alpha} \langle Qx, x\rangle + \alpha\bar{\beta} \langle Qx, y\rangle+\beta\bar{\alpha} \langle Qy, x\rangle + \beta\bar{\beta}\langle Qy, y\rangle=0$

From our initial assumption, the first and fourth terms are equal to zero (since $\langle Qx, x \rangle = 0$ and $\langle Qy, y \rangle = 0$):

$\forall v \in V, \ \ \alpha\overline{\beta} \langle Qx, y\rangle+\beta\bar{\alpha} \langle Qy, x\rangle=0$.

Returning the $\alpha$ and $\beta$ values back to their original definitions:

$\forall v \in V, \ \ \eta_1\eta_2\bar{\eta_3} \langle Qx, y\rangle+\eta_3\overline{\eta_1 \eta_2} \langle Qy, x\rangle=0$.

Recall that $x$ and $y$ are purely determined by the coefficients $\eta_1$, $\eta_2$, and $\eta_3$.

$\color{red}{\text{For this final equation, we therefore have a set of ordered triplets}}$ $\{(\eta_{1_i}, \eta_{2_i}, \eta_{3_i})\}$ $\color{red}{\text{for which this equation is true.}}$ (Note the super tiny index $i$ that is subscripted on these $\eta$ symbols within the above set...this is my way of saying there are "many" different 3-coefficient combinations corresponding to an arbitrary vector that satisfies this equation.)

Consider two special $\color{blue}{\text{classes }}$of vectors: $v'$ and $v''$ where

  1. $\alpha = 1, \beta = 1 \iff v' = x+y$
  2. $\alpha = i, \beta = 1 \iff v'' = ix+y$

Note that, because of how we defined $y$, there are no dependencies on the $\eta$ values. However, $x$ is entirely dependent. In general, therefore, conclude that the $\color{red}{\text{stipulation}}$ of $\eta$ values (which therefore stipulates the $\alpha$ and $\beta$ values...and vice versa) will dictate the exact form of $x$ and $y$. Therefore, $v'$ and $v''$ are most accurately described as:

  1. $\alpha = 1, \beta = 1 \iff v' = x'+y'$
  2. $\alpha = i, \beta = 1 \iff v'' = ix''+y''$

In the first case of $v'$: if $\beta = 1$, then $\eta_3 = 1$, and if $\alpha=1$, then $\eta_1=\frac{1}{\eta_2}$...therefore $x'=\frac{1}{\eta_2}e_1+\eta_2 e_2$...and $y'$ (as stated earlier) is simply $y'=y$.

In the second case, $\alpha=i$, so $i=\eta_1 \eta_2$ and therefore $\eta_1 = \frac{i}{\eta_2}$. As such, $x'' = \frac{1}{\eta_2}e_1 + \frac{\eta_2}{i}e_2$. Similarly, $y''=y$.

At any rate, consider some arbitrary $v'$ (i.e. an arbitrary vector within that particular $v'$ class, with its $\alpha=1$ and $\beta=1$ values) and call it $v'^*$...described, of course, by a corresponding $x'^*$ and $y'^*$. Similarly, consider some arbitrary $v''$ and call it $v''^*$, described by a $x''^*$ and $y''^*$.

We thus have the following two equations that are generated:

  1. $1\cdot\overline{1} \langle Qx'^*, y'^*\rangle+1\cdot\bar{1} \langle Qy'^*, x'^*\rangle=0$ and simplifying:

$$\langle Qx'^*, y'^*\rangle+\langle Qy'^*, x'^*\rangle=0$$

  2. $i\cdot\overline{1} \langle Qx''^*, y''^*\rangle+1\cdot\bar{i} \langle Qy''^*, x''^*\rangle=0$ and simplifying:

$$i \langle Qx''^*, y''^*\rangle-i \langle Qy''^*, x''^*\rangle=0$$ and then, multiplying by $i$:

$$- \langle Qx''^*, y''^*\rangle+ \langle Qy''^*, x''^*\rangle=0 $$

Now, an important question arises: WHAT IS THE RELATIONSHIP BETWEEN $v'^*$ AND $v''^*$?

I have played around with this for quite some time and I simply cannot figure out how to go about it.

In the aforementioned stack exchange post (Prove that the linear transformations are the same.) the author appears to claim that the $x$s and $y$s from the two final equations are the same. But I am having great difficulty understanding why this is true.

For anyone that stuck through this, I greatly appreciate it! Cheers~


Edit: Given the responses provided below, here is my new attempt at understanding the method used in this proof.

I first need to confirm that an arbitrary vector $v \in V$ of dimension $n$ can be represented as a linear combination of two vectors.

One such construction is the following: $ v = \alpha (\eta_1 e_1 + \eta_2 e_2 + ... + \eta_{n} e_{n}) + \beta (\delta_1 e_1 + \delta_2 e_2+...+\delta_n e_n)$ where $e_1, \dots, e_n$ are basis vectors of $V$.

This can be rewritten as:

$v = \alpha x + \beta y$,

where $x = \eta_1 e_1 + \eta_2 e_2 + ... + \eta_n e_{n}$ and $y = \delta_1 e_1 + \delta_{2} e_{2}+...+\delta_n e_n$

Now, because of how we described vectors $x$ and $y$, any vector in $V$ can be described by the ordered data $(\alpha, \beta, \eta_1, \dots,\eta_n, \delta_1, \dots, \delta_n)$.

As such, when we ultimately derive the equation $\alpha\overline{\beta} \langle Qx, y\rangle+\beta\bar{\alpha} \langle Qy, x\rangle=0$, we've effectively created an equation that is valid for any ordered data $(\alpha, \beta, \eta_1, \dots, \eta_{n}, \delta_1, \dots,\delta_n)$.

!!! NOTE !!! Something that will become important is the fact that even with $\alpha$ and $\beta$ fixed at particular values, cycling through all of the different $\eta_1, \dots, \eta_n$ and $\delta_1, \dots, \delta_n$ combinations will still allow one to describe every vector in $V$.

Consider two arbitrary vectors $v'$ and $v^*$.

Let $v'$ be represented by the ordered data: $(1, 1, \eta_1', ...,\eta_n',\delta_1',...,\delta_n')$

and $v^*$ represented by the ordered data: $(i, 1, \eta_1^*, ...,\eta_n^*,\delta_1^*,...,\delta_n^*)$. In particular, however, let $\eta_1' = \eta_1^*, ..., \eta_n ' = \eta_n^*, \delta_1'=\delta_1^*, ..., \delta_n'=\delta_n^*$

That is to say $x' = x^*$ and $y' = y^*$. For this reason, let $x' = x^*=x$ and $y' = y^*=y$

Two statements can therefore be made about vectors $v'$ and $v^*$:

$\langle Qx, y\rangle+\langle Qy, x\rangle=0$

$i\langle Qx, y\rangle-i\langle Qy, x\rangle=0$

Solving these two equations together, we find that $\langle Qx, y \rangle = 0$.

Now this, in and of itself, simply lets us conclude that the vectors $x$ and $y$ associated with $v'$ and $v^*$ carry the following property: $Q(x)$ is orthogonal to the vector $y$ (i.e. $Q(x) \bot y$).

Importantly, however, note that because of how we constructed our description of $x$ in general ( $x = \eta_1 e_1 + \eta_2 e_2 + ... + \eta_n e_{n}$), $x$ can literally take on any vector in all of $V$.

As such, now holding $y$ fixed, you could cycle through every possible vector in $V$ by using different combinations of $\eta_1, \eta_2,...,\eta_n$. This would mean that $\forall x \in V$, $Q(x) \bot y$ for a fixed non-zero $y$. Now do this again for another, different, non-zero $y$ vector. And again, and again, etc etc.

$Q(x)$ is just a vector though, and the only way $Q(x)$ can always be orthogonal to all non-zero vectors is if $Q(x)$ is the $0$ vector. Thus, $Q$ is the zero map and $T=S$.
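As a side check (my own sketch, not from the thread): the $\alpha = i$ instantiation is exactly where complex scalars matter, and without them the proposition fails. Over $\mathbb{R}$, a nonzero skew-symmetric $Q$ satisfies $\langle Qv, v \rangle = 0$ for every real $v$:

```python
import numpy as np

# A nonzero real skew-symmetric matrix: v^T Q v = 0 for every REAL v...
Q = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

rng = np.random.default_rng(1)
for _ in range(100):
    v = rng.standard_normal(2)
    assert np.isclose(v @ Q @ v, 0.0)

# ...yet over C the hypothesis fails for this Q, consistent with the
# proposition: with v = (1, i), <Qv, v> = v^H (Qv) = 2i, which is nonzero.
v = np.array([1.0, 1j])
assert not np.isclose(np.vdot(v, Q @ v), 0.0)
```

So in the real case $\langle Qv,v\rangle = 0$ for all $v$ does not force $Q=0$, which is why the proposition is stated for a complex inner product space.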

S.C.

2 Answers


I think you're making this much more complicated than it needs to be.

The key claim is:

$(*)\quad$ For every $x,y,\alpha$ we have $$\alpha\langle Qx,y\rangle+\overline{\alpha}\langle Qy,x\rangle=0.$$

From this we proceed as follows. Fix $u$ and $v$; we want to show that $\langle Qu,v\rangle=0$.

First, we apply $(*)$ with $x=u,y=v,\alpha=1$; this gives $$\langle Qu,v\rangle+\langle Qv,u\rangle=0$$ or $$\langle Qu,v\rangle=-\langle Qv,u\rangle.$$

Next, apply $(*)$ with $x=u,y=v,\alpha=i$; this gives $$i\langle Qu,v\rangle-i\langle Qv,u\rangle=0\implies \langle Qu,v\rangle=\langle Qv,u\rangle.$$

The point is that we're choosing to use the same $x$- and $y$-values here; the whole point of $(*)$ is that it's a universal statement, so we get to apply it to any triple of values we want. We're not forced to use the same $x$- and $y$-values, but we're not prevented from doing so either: the whole point of universal instantiation is that we get to plug in whatever terms we want.

Noah Schweber
  • This is my source of confusion. I derived the statement $\forall v \in V, \ \ \alpha\overline{\beta} \langle Qx, y\rangle+\beta\bar{\alpha} \langle Qy, x\rangle=0$. The universal quantifier is over $v$, not $x$ and $y$. In particular, my $x$ is dependent on $\eta_1$ and $\eta_2$. My $\alpha$ is also dependent on $\eta_1$ and $\eta_2$. Therefore, when I change from $\alpha = 1$ to $\alpha = i$, my claim would be that I literally cannot keep $x$ fixed. (Thanks for coming to the rescue as usual) – S.C. Oct 08 '20 at 00:28
  • 1
    @S.Cramer But the universal quantifier over $v$ really is over $x$ and $y$ as well - they're introduced as components of $v$. If you want to think in terms of $v$, fix $x$ and $y$ and use $v_1=1x+y$ and $v_2=ix+y$. – Noah Schweber Oct 08 '20 at 00:36
  • lol. I'm going to have to mull over this for a bit because this response rocked my world with respect to how I have always treated universal quantifiers (sort of a massive paradigm shift for my purely self-taught + stack.exchange math education haha). I am sure the eureka moment will come soon. I'll be back. Cheers~ – S.C. Oct 08 '20 at 00:53
  • I added my interpretation of your comments in an "Edit" section. I think the general gist is correct (albeit wordy). Thanks for the token of insight! Much appreciated. – S.C. Oct 10 '20 at 11:52

I think the author simply uses the fact that if $z_1, z_2$ are two complex numbers such that for all $\alpha\in{\mathbb C}$, $\alpha z_1 + \overline\alpha z_2 = 0$, then $z_1=z_2=0$.
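This fact is easy to check numerically. The sketch below (mine, not the answerer's) recovers $z_1$ and $z_2$ from just the two samples $f(1)$ and $f(i)$ of $f(\alpha)=\alpha z_1+\overline\alpha z_2$, so if both samples vanish then $z_1=z_2=0$:

```python
import numpy as np

rng = np.random.default_rng(2)
z1 = complex(rng.standard_normal(), rng.standard_normal())
z2 = complex(rng.standard_normal(), rng.standard_normal())

def f(alpha):
    return alpha * z1 + np.conj(alpha) * z2

# f(1) = z1 + z2 and f(i) = i*z1 - i*z2, so z1 and z2 are determined:
#   z1 = (f(1) - i*f(i)) / 2,   z2 = (f(1) + i*f(i)) / 2.
assert np.isclose((f(1) - 1j * f(1j)) / 2, z1)
assert np.isclose((f(1) + 1j * f(1j)) / 2, z2)
# Hence f(1) = f(i) = 0 already forces z1 = z2 = 0.
```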

Gribouillis
  • I think I know which line you are referring to (the one that includes $\alpha \bar{\beta}$ and $\bar{\alpha}\beta$). In your above statement, however, aren’t the $z_1$ and $z_2$ values FIXED as you cycle through all the different $\alpha$’s? In my equation, the “$z_1$’s” and “$z_2$’s” would not be fixed, however, because they change as the $\alpha$’s and $\beta$’s change. Or am I misunderstanding something? (Thanks!) – S.C. Oct 07 '20 at 12:24
  • No, I was referring to the author's $0=\alpha\langle Q x, y\rangle + \overline\alpha \langle Q y, x \rangle$ line. Here, $x,y$ are arbitrary elements of $X$ and they don't depend on $\alpha$. – Gribouillis Oct 07 '20 at 18:51
  • I am fairly certain that line corresponds to my line of $\forall v \in V, \ \ \alpha\overline{\beta} \langle Qx, y\rangle+\beta\bar{\alpha} \langle Qy, x\rangle=0$ for a $\beta$ value of $1$. However, in my description, my $x$ and $y$ depend on my $\alpha$ and $\beta$ values. That's the gist of the question. How does the author avoid such dependencies while I do not? – S.C. Oct 07 '20 at 21:47