
Consider the following argument from a book (included as an image to show exactly what is printed):

[image of the printed passage, titled "Natural orientation of underlying real space"]

There are a number of minor typos here, but I'm mainly concerned with the computation of $\Delta(a_1,\ldots,a_n,ia_1,\ldots,ia_n)$. In the definition of $\Delta$ I'm assuming that the author is using the Grassmann product defined earlier in the book, which here would give $$\Delta(x_1,\ldots,x_{n+n})=\frac{(-i)^n}{(n!)^2}\sum_{\sigma\in S_{n+n}}\varepsilon_{\sigma}\Delta_E(x_{\sigma(1)},\ldots,x_{\sigma(n)})\overline{\Delta}_E(x_{\sigma(n+1)},\ldots,x_{\sigma(n+n)})\tag{1}$$

If I take $E=\mathbb{C}$ (so $n=1$), then from (1) I obtain $$\Delta(a_1,ia_1)=-2|\Delta_E(a_1)|^2<0$$ Even using the expression on the right-hand side of the first equality in the computation from the book, I obtain $$\Delta(a_1,ia_1)=-|\Delta_E(a_1)|^2<0$$

If I am correct, does anyone know what the definition of $\Delta$ should be to yield the desired result (for all $n$)?
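For reference, here is the $n=1$ computation from (1) spelled out, using $\Delta_E(ia_1)=i\Delta_E(a_1)$ and $\overline{\Delta}_E(ia_1)=-i\,\overline{\Delta}_E(a_1)$: $$\Delta(a_1,ia_1)=(-i)\left[\Delta_E(a_1)\overline{\Delta}_E(ia_1)-\Delta_E(ia_1)\overline{\Delta}_E(a_1)\right]=(-i)\cdot(-2i)\,|\Delta_E(a_1)|^2=-2|\Delta_E(a_1)|^2$$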

blargoner
  • Can you name the book? – Nicholas Todoroff Oct 31 '22 at 21:40
  • @NicholasTodoroff Greub Multilinear Algebra 2nd ed (1978). – blargoner Oct 31 '22 at 21:49
  • Just a philosophical remark: This should be viewed as a generalization of $dz\wedge d\bar z = -2i\,dx\wedge dy$ in $\Bbb C$. Thus, $dx\wedge dy = \frac i2 dz\wedge d\bar z$. This suggests that the author's sign is indeed wrong and that your factor of $2$ may also be correct. – Ted Shifrin Nov 04 '22 at 21:26

2 Answers


$ \newcommand\Ext{{\bigwedge}} \newcommand\R{\mathbb R} \newcommand\C{\mathbb C} \newcommand\conj[1]{\overline{#1}} \newcommand\form[1]{\langle#1\rangle} \DeclareMathOperator\linspan{span} \renewcommand\Re{\mathop{\mathrm{Re}}} \newcommand\tensor\otimes \newcommand\gtensor{\mathbin{\hat\tensor}} $

I think that $\Delta = (-i)^n\Delta_E\wedge\bar\Delta_E$ is a typo and should be $$ \Delta = {\color{red}{\frac1{(-i)^n}}}\Delta_E\wedge\bar\Delta_E = i^n\Delta_E\wedge\bar\Delta_E. $$ This is somewhat natural since (as Greub is trying to convince us) $$ \Delta_E(ia_1, \dotsc, ia_n)\wedge\bar\Delta_E(ia_1, \dotsc, ia_n) = i^n\Delta_E(a_1,\dotsc,a_n)\wedge\bar\Delta_E(ia_1,\dotsc,ia_n) $$ is non-zero and real. It could also be the case that Greub meant to write $$ \Delta(ia_1, ia_2, \dotsc, ia_n, a_1, a_2, \dotsc, a_n) $$ in which case the definition of $\Delta$ as written by Greub is the one required.
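For the first of these, a quick $n=1$ check using the normalization of equation (1) in the question (with $(-i)^n$ replaced by $i^n$): $$ \Delta(a_1, ia_1) = i\bigl[\Delta_E(a_1)\bar\Delta_E(ia_1) - \Delta_E(ia_1)\bar\Delta_E(a_1)\bigr] = i\cdot(-2i)|\Delta_E(a_1)|^2 = 2|\Delta_E(a_1)|^2 > 0, $$ which is positive as desired; the leftover factor of $2$ is the one Ted Shifrin's comment suggests is genuine.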

In either case, the argument proceeds by suggesting to the reader that $\sum_{S_{2n}} = \sum_{S_n\times S_n}$ when evaluating this particular wedge product. This doesn't seem outlandish to me, but I also don't immediately see how that works.


What I will do is give a more abstract approach, which completely avoids the tedious combinatorics. It would be a good exercise to pinpoint how this argument lines up with Greub's.

Let us now work in the exterior algebras $\Ext E$ and $\Ext E_\R$.

Choose non-zero $J \in \Ext^n E$. This induces a linear isomorphism $\cdot/J : \Ext^n E \cong \C$ via $$ J' = (J'/J)J,\quad J' \in \Ext^n E. $$ The notation $J'/J$ is merely meant to be suggestive. Such an isomorphism is a determinant function, and is equivalent to a choice of $J$ by choosing the unique $J'$ such that $J'/J = 1$. We will show that (given a notion of conjugation) $J$ naturally gives rise to an $I \in \Ext^{2n}E_\R$ by constructing a natural determinant function $\Delta : \Ext^{2n}E_\R \to \R$, and we will also show that every such $I$ has the same orientation.
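To make the notation concrete (a standard fact, assuming a choice of basis $e_1, \dotsc, e_n$ of $E$): if $J = e_1\wedge\dotsb\wedge e_n$ and $v_j = \sum_k A_{kj}e_k$, then $$ v_1\wedge\dotsb\wedge v_n = (\det A)\, e_1\wedge\dotsb\wedge e_n, $$ so $(v_1\wedge\dotsb\wedge v_n)/J = \det A$; this is the sense in which $\cdot/J$ is a determinant function.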

A conjugation on $E$ is not inherent to $E$ being a complex vector space; it is an additional structure. Once we have a conjugation map, there is a special subset of $E$ $$ \Re E := \{v + \conj v \;:\; v \in E\} $$ whose elements we call the real elements of $E$. This is not a subspace of $E$, but clearly is a subspace of $E_\R$. By $\Ext\Re(E)$ we mean the subalgebra of $\Ext E_\R$ generated by $\Re(E)$.

It is evident that $$ E_\R = \Re E \oplus i\Re E, $$ and it follows that $\Ext E_\R = \Ext\Re(E)\gtensor\Ext i\Re(E)$ where $\gtensor$ is the graded tensor product. In particular $$ \Ext^{2n} E_\R = \Ext^n\Re E\gtensor\Ext^n i\Re E. $$ The graded aspect of $\gtensor$ isn't important here; when considering just linear maps, $\gtensor$ is just like the usual tensor product. By this fact, we may construct a linear map $\Ext^{2n} E_\R \to \R$ by specifying its values on pairs $(H, H')$ for $H \in \Ext^n\Re(E)$ and $H' \in \Ext^ni\Re(E)$ and extending bilinearly.
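For a minimal example, take $E = \C$ with the standard conjugation (so $n = 1$): then $\Re E = \R$ and $i\Re E = i\R$, so $E_\R = \R\oplus i\R$, and $\Ext^2 E_\R = \Ext^1\R\gtensor\Ext^1 i\R$ is the one-dimensional real space spanned by $1\wedge i$, i.e. by $1\tensor i$ under this identification.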

By the universal property of $\Ext E_\R$, the $\R$-linear isomorphism $\phi : E_\R \to E \subseteq \Ext E$ extends uniquely to an $\R$-algebra homomorphism $\Ext E_\R \to \Ext E$. Our candidate for $\Delta$ is then $$ H\tensor H' \mapsto \conj{\frac{\phi(H)}J}\frac{\phi(H')}J. $$ Here, the conjugation is meant to apply to the entire expression $\phi(H)/J$.

Consider though the particular case where $H' = m_i(H)$ for $m_i : \Ext E_\R \to \Ext E_\R$ the outermorphism which multiplies vectors by $i$. Then $\phi(H') = i^n\phi(H)$ and $$ \conj{\frac{\phi(H)}J}\frac{\phi(H')}J = i^n\left|\frac{\phi(H)}J\right|^2. $$ So we divide by $i^n$. Define $\Delta : \Ext^{2n}E_\R \to \R$ by $$ \Delta(H\tensor H') = (-i)^n\conj{\frac{\phi(H)}J}\frac{\phi(H')}J. $$ This still gives us a real number for any $H'$ since $H' = xm_i(H)$ for some $x \in \R$ by virtue of being an element of $\Ext^ni\Re(E) = m_i\left(\Ext^n\Re(E)\right)$.

Observe now that for any non-zero $a \in \C$ $$ J'/(aJ) = \frac{J'/J}a, $$ hence if $\Delta'$ is the $E_\R$ determinant function associated to $aJ$, $$ \Delta'(H\tensor H') = (-i)^n\conj{\frac{\phi(H)}{aJ}}\frac{\phi(H')}{aJ} = \frac{(-i)^n}{|a|^2}\conj{\frac{\phi(H)}J}\frac{\phi(H')}J = \frac1{|a|^2}\Delta(H\tensor H'). $$ Since $1/|a|^2$ is positive, $\Delta'$ has the same orientation as $\Delta$. Thus, as promised, every $J$ induces the same orientation on $E_\R$.
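Continuing the $E = \C$ example, take $J = 1 \in \Ext^1\C$, $H = 1$, and $H' = i$. Then $\phi(H)/J = 1$ and $\phi(H')/J = i$, so the unnormalized candidate gives $\conj{1}\cdot i = i$, while $$ \Delta(1\tensor i) = (-i)\cdot\conj{1}\cdot i = 1 > 0, $$ which is the expected orientation of $\R^2 = \C$ with ordered basis $(1, i)$.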

We could also consider the alternative $\Delta$ $$ \Delta(H\tensor H') = i^n\frac{\phi(H)}J\conj{\frac{\phi(H')}J} $$ but this is in fact equivalent to the original since there is some $x \in \R$ such that $H' = xm_i(H)$ and $$ i^n\frac{\phi(H)}J\conj{\frac{\phi(H')}J} = \frac{\phi(m_i(H))}Jx(-i)^n\conj{\frac{\phi(H)}J} = (-i)^n\frac{\phi(H')}J\conj{\frac{\phi(H)}J}. $$


It is interesting to note that the same argument goes through if we replace every instance of $i$ with any $\xi \in \C\setminus\R$. Whether every choice of $\xi$ induces the same $\Delta$ as $i$ does is a question I leave alone for now.

  • Thanks for your answer. I'm working through your abstract argument. I think you meant to write $${\bigwedge}^{2n} E_{\mathbb{R}}={\bigwedge}^n\mathop{\mathrm{Re}}E\mathop{\hat{\otimes}}{\bigwedge}^ni\mathop{\mathrm{Re}}E$$ right? – blargoner Nov 04 '22 at 20:04
  • Also I'm assuming by conjugation you mean an $\mathbb{R}$-linear involution $z\mapsto\overline{z}$ of $E_{\mathbb{R}}$ which satisfies $\overline{iz}=-i\overline{z}$, in which case we can induce one of these for which $\mathop{\mathrm{Re}} E$ is just the $\mathbb{R}$-span of a basis of $E$ in $E_{\mathbb{R}}$. – blargoner Nov 04 '22 at 20:15
  • @blargoner Yes, right on both counts. – Nicholas Todoroff Nov 04 '22 at 20:28
  • If you have time and interest, let me know if the answer I just posted looks right. – blargoner Nov 06 '22 at 09:10

In light of part of Nicholas' answer, and Ted's comment, I think Greub should have defined $\Delta$ by $$\Delta=\frac{i^n}{2^n}\Delta_E\wedge\overline{\Delta}_E$$ Then equation (1) in my question becomes $$\Delta(x_1,\ldots,x_{n+n})=\frac{i^n}{2^n(n!)^2}\sum_{\rho\in S_{n+n}}\varepsilon_{\rho}\Delta_E(x_{\rho(1)},\ldots,x_{\rho(n)})\overline{\Delta}_E(x_{\rho(n+1)},\ldots,x_{\rho(n+n)})\tag{1'}$$

Taking $x_k=a_k$ and $x_{n+k}=ia_k$ for $1\le k\le n$, what are the non-zero terms in the sum on the right side of (1')? First observe that in such a term the subscripts associated with the $a$'s and $ia$'s (not the $x$'s) occurring within $\Delta_E$ must be distinct, lest there be a linear dependence relation over $\mathbb{C}$ which causes $\Delta_E$, and hence the term, to be zero. Similarly for $\overline{\Delta}_E$. If the subscripts within each of $\Delta_E$ and $\overline{\Delta}_E$ are strictly increasing (call this an increasing term), then the term is nonzero and for each $1\le k\le n$ we must have either $$\rho(k)=k\quad\text{and}\quad\rho(n+k)=n+k\tag{2}$$ or $$\rho(k)=n+k\quad\text{and}\quad\rho(n+k)=k\tag{3}$$

Note there are $2^n$ such permutations $\rho$, and $\varepsilon_{\rho}=(-1)^p$ where $p$ is the number of $k$ for which (3) holds, which is just the number of $i$'s appearing within $\Delta_E$. These $i$'s can be moved over to $\overline{\Delta}_E$ by multiplying by $(-1)^p=\varepsilon_{\rho}$, since $\Delta_E$ is $\mathbb{C}$-multilinear and $\overline{\Delta}_E$ is conjugate $\mathbb{C}$-multilinear.

Now every non-zero term is obtained from a unique increasing term by applying unique signed permutations $\sigma,\tau\in S_n$ to $\Delta_E$ and $\overline{\Delta}_E$, respectively. So we obtain $$\begin{align*} \Delta(a_1,\ldots,a_n,ia_1,\ldots,ia_n)&=\frac{i^n}{2^n(n!)^2}\cdot2^n\cdot\sum_{\sigma,\tau\in S_n}\varepsilon_{\sigma}\varepsilon_{\tau}\Delta_E(a_{\sigma(1)},\ldots,a_{\sigma(n)})\overline{\Delta}_E(ia_{\tau(1)},\ldots,ia_{\tau(n)})\\ &=\frac{i^n}{(n!)^2}\cdot(n!)^2\cdot\Delta_E(a_1,\ldots,a_n)\overline{\Delta}_E(ia_1,\ldots,ia_n)\\ &=\Delta_E(a_1,\ldots,a_n)\overline{\Delta}_E(a_1,\ldots,a_n)\\ &=|\Delta_E(a_1,\ldots,a_n)|^2>0 \end{align*}$$

A bit tedious, not as slick as an abstract argument, but it seems to work.
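As a quick sanity check, the $n=1$ case of (1') reads $$\Delta(a_1,ia_1)=\frac{i}{2}\left[\Delta_E(a_1)\overline{\Delta}_E(ia_1)-\Delta_E(ia_1)\overline{\Delta}_E(a_1)\right]=\frac{i}{2}\cdot(-2i)|\Delta_E(a_1)|^2=|\Delta_E(a_1)|^2>0$$ consistent with the general computation above and with the normalization $dx\wedge dy=\frac{i}{2}\,dz\wedge d\bar{z}$ from Ted's comment.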

blargoner
  • This looks good. The hard part for me was Eqs. (2) and (3), but I've convinced myself that that makes sense. You could say the abstract argument "cheats" because the fact that $$\sum_{S_{2n}} = \sum_{S_n\times S_n}$$ here is exactly the fact that $${\bigwedge}^{2n}E_{\mathbb R} = {\bigwedge}^n\mathrm{Re}(E) \mathbin{\hat\otimes}{\bigwedge}^ni\mathrm{Re}(E),$$ which I conveniently don't provide a proof for :) – Nicholas Todoroff Nov 06 '22 at 21:40