9

The vector triple product is defined as $\mathbf{a}\times (\mathbf{b}\times \mathbf{c})$. This is often rewritten in the following way: \begin{align*}\mathbf{a}\times (\mathbf{b}\times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})\end{align*} This is a very useful identity when integrating over vector fields and the like (usually in physics).
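
As a quick numerical sanity check of the identity (a minimal sketch, assuming Python with numpy; the vectors are arbitrary random samples):

```python
import numpy as np

# Spot-check a x (b x c) = b (a.c) - c (a.b) on random vectors.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)
```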

Every proof I have encountered splits the vectors into components first. This is understandable, because the cross product is purely a three-dimensional construct. However, I'm curious whether there is a coordinate-free proof of this identity. Although I don't know much differential geometry, I feel that tensors and so on may form a suitable framework for a coordinate-free proof.

Harambe
  • 8,230
  • By "components" do you mean coordinates relative to some basis, or a decomposition of $a$ into components parallel to and perpendicular to the plane of $b$ and $c$? The latter can be done without reference to coordinates by using orthogonal projection/rejection. The identity can be derived via these components of $a$. – amd Apr 11 '17 at 05:38
  • I was referring to the former. The second approach sounds interesting. I feel the overall nature is similar, as they both involve splitting things into components, but I'll give that a go when I have a bit more time on my hands! – Harambe Apr 11 '17 at 06:33
  • Coordinate-free proofs like this can "easily" be computed using differential forms and particularly the Hodge map. Of course, by "easily", I mean you need a knowledge and understanding of differential forms in the first place. – AloneAndConfused Apr 11 '17 at 07:05
  • 1
    Could this be what you're after? http://math.stackexchange.com/questions/305285/deriving-bac-cab-from-differential-forms?noredirect=1&lq=1 – Arnaud D. Apr 11 '17 at 07:43

6 Answers

8

Since $b\times c$ is normal to the plane $b,\,c$ span, $a\times (b\times c)$, which is orthogonal to this vector, is in said plane. The coefficients $B,\,C$ for which the result is $Bb+Cc$ are invariant under rotations, and clearly $B$ must be linear in $a,\,c$ while $C$ is linear in $a,\,b$, so constants $B',\,C'$ exist with $a\times (b\times c) =B' (a\cdot c) b + C' (a\cdot b) c$. Since the left-hand side is antisymmetric under swapping $b$ and $c$, $C'=-B'$. Since both sides are linear in each vector, $B'$ must be a constant, so we can use any vectors we like for which both sides are non-zero to compute $B'$. Example: $a=b=i,\,c=j$, so $a\times (b\times c) = i\times k = -j$, while $B'(a\cdot c)b + C'(a\cdot b)c = C'j$; hence $C'=-1$ and $B'=1$, and indeed $(a\cdot c) b - (a\cdot b) c = -j$ as required.
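
A numerical illustration of the last step — recovering the constants $B'$ and $C'$ by least squares from one generic choice of vectors (a minimal sketch, assuming numpy; a check, not part of the proof):

```python
import numpy as np

# Recover the constants from a x (b x c) = B'(a.c) b + C'(a.b) c by
# least squares over one generic random choice of a, b, c.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

M = np.column_stack([np.dot(a, c) * b, np.dot(a, b) * c])   # 3 x 2
coeffs, *_ = np.linalg.lstsq(M, np.cross(a, np.cross(b, c)), rcond=None)
print(coeffs)   # approximately [ 1. -1.], i.e. B' = 1, C' = -1
```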

J.G.
  • 115,835
  • Two questions. 1: Why are $B,\,C$ invariant under rotation? – peter Jun 11 '21 at 10:49
  • Sorry, I wanted to edit this. I start with a more fundamental question: I have questions about the argument about the coefficients $B$ and $C$. A priori they are scalar functions $B(a,b,c)$ and $C(a,b,c)$. Why are they individually linear in two variables and independent of the third? A priori only the entire expression needs to be linear, not the individual summands. And even if we get that the summands are individually linear, why the independence of the third variable? At first glance I see linearity only dictating homogeneity of degree 0. – peter Jun 11 '21 at 10:56
  • 1
    @peter The reason $a\times(b\times c)=Bb+Cc$ requires $B,\,C$ to be invariant scalars is because any equation of the form $v=Bb+Cc$ with $v$ a vector requires coefficients independent of your coordinate system, viz. $$\left(\begin{array}{c} v\cdot b\\ v\cdot c \end{array}\right)=\left(\begin{array}{cc} b^{2} & b\cdot c\\ b\cdot c & c^{2} \end{array}\right)\left(\begin{array}{c} B\\ C \end{array}\right).$$ By Cauchy-Schwarz, the matrix is invertible if $b,\,c$ aren't parallel or antiparallel. – J.G. Jun 11 '21 at 12:08
  • You seem to be saying that $B,\,C$ are determined by a fixed $v$; that is certainly so. But now we have $v(a,b,c)=B(a,b,c)b+C(a,b,c)c$. Why do the dependencies simplify/disappear? – peter Jun 11 '21 at 12:17
  • @peter I think you're conflating "independent of the vectors themselves" (which isn't how the coefficients I derived work) with "independent of the coordinate system used" (which is true for the aforementioned reason). – J.G. Jun 11 '21 at 12:59
  • You say, for example, that $B$ is linear in $a$ and $c$ and independent of $b$. Why is that? – peter Jun 11 '21 at 13:07
  • @peter Doubling $a$ doubles $a\times(b\times c)$ and hence the coefficients; doubling $b$ doubles $a\times(b\times c)$, so preserves $B$ and doubles $C$. – J.G. Jun 11 '21 at 13:13
  • That sounds like a way to detect what is what if you already know the claim. I don't see how it amounts to a proof. Could you please elaborate a bit more? – peter Jun 13 '21 at 09:38
  • @peter It boils down to $\lambda(Bb+Cc)=(\lambda B)b+(\lambda C)c=B(\lambda b)+C(\lambda c)$ (I happened to give the case $\lambda=2$). That the coefficients are unique only follows if $b\not\Vert c$, but the final result's validity in that case follows by continuity (more precisely, by the cases with $b\not\Vert c$ being dense in the space of all choices of $a,\,b,\,c$). – J.G. Jun 13 '21 at 09:42
  • For $b$ parallel to $c$ both sides vanish, so no worries. ;-) I don't see why you're stating this trivial calculation in the field; it says nothing about $B$ and $C$. Please give a coherent argument. Here it sounds like there is no other way than to validate on a basis, but this would make this entire ansatz kind of pointless. https://math.stackexchange.com/questions/3975031/proof-of-bac-cab-identity-missing-step – peter Jun 13 '21 at 11:03
  • @peter My $\lambda$ equation shows how $B,\,C$ respond to any rescaling of exactly one vector. I'm not sure what your complaint is near the end. If you're worried the formulae in terms of $v\cdot b$ etc. aren't coordinate-free enough, I have to object to that; it's just an inner product. – J.G. Jun 13 '21 at 12:25
  • I don't understand what you mean about coordinates; I was not talking about that. I was giving a link to an answer to my question which is rather inelegant, and I'm beginning to doubt that there is a more elegant way. Please write your $B$ as $B(a,b,c)$ and show exactly why $B(x+y,b,c)=B(x,b,c)+B(y,b,c)$; I just don't see how that works without putting in basis vectors. – peter Jun 14 '21 at 15:28
  • @peter Recall $$\begin{align}B&=\frac{c^{2}\left(a\times\left(b\times c\right)\right)\cdot b-\left(b\cdot c\right)\left(a\times\left(b\times c\right)\right)\cdot c}{b^{2}c^{2}-\left(b\cdot c\right)^{2}},\\C&=\frac{b^{2}\left(a\times\left(b\times c\right)\right)\cdot c-\left(b\cdot c\right)\left(a\times\left(b\times c\right)\right)\cdot b}{b^{2}c^{2}-\left(b\cdot c\right)^{2}}.\end{align}$$ You can see the effect of $a=x+y$. The effect on $B$ of $c=x+y$ is less obvious, but for my argument we only need the effects of scaling. Maybe I should have said $B$ is of degrees $1,\,0,\,1$ in $a,\,b,\,c$ (see the sketch after this thread). – J.G. Jun 14 '21 at 15:52
  • @J.G. "Since the left-hand side is antisymmetric, $C'=−B'$ ", this isn't clear to me, could you please expand? Also, based on your previous comment to peter - so we need the $B,C$ to be invariant under rotations, because the proof is to be coordinate-free? – Jake1234 May 26 '23 at 15:22
  • @Jake1234 An identity of the form $a\times(b\times c)=B'(a\cdot c)b+C'(a\cdot b)c$ implies $$0=a\times(b\times c)+a\times(c\times b)=(B'+C')[(a\cdot c)b+(a\cdot b)c],$$ so $B'+C'=0$. A vector can't be a linear combination of $b,\,c$ in two ways unless they're linearly dependent. – J.G. May 26 '23 at 21:29
  • @J.G. Thanks. I thought of that, but I'm not sure how it's true that $a\times (c\times b) = C' (a\cdot c) b + B' (a\cdot b) c$. I can see that using the same approach as in your answer, we can get $a \times (c \times b) = U'(a \cdot b)c + V'(a\cdot c)b$, but why must for example $U' = B'$ ? – Jake1234 May 27 '23 at 03:27
  • @J.G. I think I've figured it out. $B,C$ being invariant under rotation means we have \begin{align}\tilde{a} \times (\tilde{b} \times \tilde{c}) = B' (\tilde{a}\cdot \tilde{c}) \tilde{b} + C' (\tilde{a}\cdot \tilde{b}) \tilde{c} \end{align} for $\tilde{a},\tilde{b}, \tilde{c}$ being the image under some rotation. For $\tilde{a} = -a,\tilde{b} = c, \tilde{c} = b$ we then have \begin{align} a\times (b \times c) = -a \times ( c \times b) = B'(-a \cdot b)c + C'(-a \cdot c)b \end{align} Please let me know if this is the right idea. It does require assuming $b,c$ can be rotated to $c,b$. – Jake1234 May 27 '23 at 05:16
  • @Jake1234 No, nothing like that. The multilinearity of each side requires the coefficients to be the same for arbitrary choices of $a,\,b,\,c$. – J.G. May 27 '23 at 06:43
  • @J.G. I see, could you explain "multilinearity of each side requires the coefficients to be the same" in a bit more detail please? – Jake1234 May 27 '23 at 06:54
  • @Jake1234 $a\times(b\times c)$, $(a\cdot c)b$, $(a\cdot b)c$ are all vectors linear in each of $a,\,b,\,c$. The coefficients therefore can't introduce any further factors proportional to such vectors. – J.G. May 27 '23 at 07:05
  • @J.G. I think I finally get it, thanks. So for example if \begin{align} f(a,b) = K(a,b)(a\cdot b)v \end{align} for some $f$ bilinear and some vector $v$, then $K$ must be constant, because both sides must be bilinear. I can sort of see this is true, but I'm not sure how to formalize it, do you have any reference on this? – Jake1234 May 27 '23 at 07:28
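
A quick numerical check of the closed-form coefficients quoted in the thread above, confirming they evaluate to $B=a\cdot c$ and $C=-(a\cdot b)$ (a minimal sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

v = np.cross(a, np.cross(b, c))
det = (b @ b) * (c @ c) - (b @ c) ** 2
B = ((c @ c) * (v @ b) - (b @ c) * (v @ c)) / det
C = ((b @ b) * (v @ c) - (b @ c) * (v @ b)) / det
assert np.isclose(B, a @ c) and np.isclose(C, -(a @ b))
```
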
3

Okay, I really hate the sign issues with the Hodge star, so I am gonna assume that $\star\star=1$; the end result will be good.

The fundamental relationship is that for $k$-vectors/forms, we have $\langle\omega,\eta\rangle\mu=\omega\wedge\star\eta$, where the angle brackets are the inner product on the exterior algebra and $\mu$ is the volume form/multivector.

Let's start with $x$ being an arbitrary vector and taking a look at $$ \langle x,a\times(b\times c)\rangle\mu=\langle x,\star(a\wedge\star(b\wedge c))\rangle\mu= \\=x\wedge(a\wedge\star(b\wedge c))=(x\wedge a)\wedge\star(b\wedge c)= \\=\langle x\wedge a,b\wedge c\rangle\mu=\det\left(\begin{matrix}\langle x,b\rangle & \langle x,c\rangle \\ \langle a,b\rangle & \langle a,c\rangle\end{matrix}\right)\mu= \\=(\langle x,b\rangle \langle a,c\rangle-\langle x,c\rangle \langle a,b\rangle)\mu. $$ Comparing the LHS with the RHS, we can "divide" by $\mu$ (of course not really divide: the coefficients need to agree), and because $x$ was arbitrary and the inner product is nondegenerate, we can likewise "divide" by $x$, and we have $$ a\times(b\times c)=b\langle a,c\rangle-c\langle a,b\rangle. $$
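
In $\mathbb{R}^3$ the inner product of 2-vectors corresponds to $\langle x\wedge a,b\wedge c\rangle=(x\times a)\cdot(b\times c)$, so the determinant step above reduces to the Lagrange identity; a minimal numerical check, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
x, a, b, c = rng.standard_normal((4, 3))

# <x ^ a, b ^ c> via the Hodge correspondence between 2-vectors and vectors
lhs = np.cross(x, a) @ np.cross(b, c)
rhs = (x @ b) * (a @ c) - (x @ c) * (a @ b)   # the 2x2 determinant
assert np.isclose(lhs, rhs)
```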

Bence Racskó
  • 7,329
  • I wonder why the inner product of wedge products is a determinant? – Frenzy Li Apr 11 '17 at 07:32
  • Also, may cancel be a more appropriate substitute for "divide"? – Frenzy Li Apr 11 '17 at 07:33
  • @FrenzyLi Maybe "cancel" would be OK. In the first case, it was because 3-forms in a 3-dimensional space form a one-dimensional space, so equality of 3-forms $\Longleftrightarrow$ equality of the single coefficient; in the second case, I have essentially viewed the inner product as the action of a 1-form (due to the Riesz isomorphism and all), and 1-forms agree if their actions on an arbitrary vector agree.

    As for the determinant, the inner product on the exterior algebra is defined as, if $\alpha=\alpha_1\wedge...\wedge\alpha_k$ and $\beta=\beta_1\wedge... \wedge\beta_k$ then...

    – Bence Racskó Apr 11 '17 at 07:38
  • 1
    @FrenzyLi $\langle \alpha,\beta\rangle=\det(\langle \alpha_i,\beta_j\rangle)$. See for example https://en.wikipedia.org/wiki/Hodge_dual#Formal_definition_of_the_Hodge_star_of_k-vectors. – Bence Racskó Apr 11 '17 at 07:39
  • Nice! And last thing, is $\star\star=1$ generally not true? – Frenzy Li Apr 11 '17 at 07:41
  • 1
    @FrenzyLi Generally $\star\star=\pm 1$. It depends on the dimension of the space, the signature of the metric/inner product and also on the degree of the form/multivector you apply it on. A hassle to deal with it, fortunately here, even if I did make a sign error, the sign errors cumulatively cancelled out. – Bence Racskó Apr 11 '17 at 07:42
3

Let $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ be vector fields on $\mathbb{R}^{3}$ (we could extend to $\mathbb{R}^{n}$ if we wish!), considered as a Riemannian manifold equipped with metric $g$ and induced Hodge map $\star$. Let $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ have corresponding vector field representations $U,V,W$ on $\mathbb{R}^{3}$ respectively. Then

$$\begin{align} \mathbf{a}\times(\mathbf{b}\times \mathbf{c}) \,\equiv\, \star(\widetilde{U} \wedge \star(\widetilde{V} \wedge \widetilde{W})) \end{align}$$

where $\widetilde{X}$ denotes the metric dual of $X$ (i.e. $\widetilde{X}=g(X,-)$) and the equivalence is up to metric dual. Then

$$\begin{align} \star(\widetilde{U} \wedge \star(\widetilde{V} \wedge \widetilde{W})) &\,=\, \star(\widetilde{U} \wedge i_{W}\star \widetilde{V}) \,=\, \star ( i_{W}\widetilde{U} \wedge \star \widetilde{V} - i_{W} (\widetilde{U}\wedge \star \widetilde{V})) \\ &\,=\, (i_{W}\widetilde{U})\star\star \widetilde{V} - \star i_{W}(\widetilde{U}\wedge\star \widetilde{V}) \\ &\,=\, g(U,W)\widetilde{V} - \star(\widetilde{U}\wedge\star \widetilde{V}) \wedge \widetilde{W} \\ &\,=\, g(U,W)\widetilde{V} - g(U,V)\widetilde{W} \end{align}$$

where $i_{X}$ denotes the interior derivative with respect to $X$ and we have used the identities:

$$\begin{align} \star\star \alpha &\,=\, \alpha \\[0.2cm] \star(\widetilde{X}\wedge\star\widetilde{Y}) &\,=\, g(X,Y) \\[0.2cm] \star(\alpha \wedge \widetilde{X}) &\,=\, i_{X}\star \alpha \\[0.2cm] \widetilde{X} \wedge \star\alpha &\,=\, (-1)^{p+1}\star i_{X}\alpha \end{align}$$

for any $p$-form $\alpha$ and vector fields $X,Y$ (note that the first and second identities are specific to $\mathbb{R}^{3}$). Then note that

$$g(U,W)\equiv \mathbf{a}\cdot\mathbf{c}\quad\text{and}\quad g(U,V)\equiv\mathbf{a}\cdot\mathbf{b}$$

and so then the result follows after taking the metric dual of our expression.
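
As a sanity check of the chain above, one can represent forms on Euclidean $\mathbb{R}^{3}$ as antisymmetric arrays and implement $\wedge$, $\star$ and $i_{X}$ directly; a minimal sketch, assuming numpy (conventions chosen to match the identities above):

```python
import numpy as np

# Forms on Euclidean R^3 as totally antisymmetric arrays; indices can stay
# "down" throughout since the metric is the identity.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

wedge11 = lambda a, b: np.outer(a, b) - np.outer(b, a)    # 1-form ^ 1-form -> 2-form
star1 = lambda a: np.einsum('i,ijk->jk', a, eps)          # star: 1-form -> 2-form
star2 = lambda B: 0.5 * np.einsum('ij,ijk->k', B, eps)    # star: 2-form -> 1-form
interior2 = lambda X, B: np.einsum('i,ij->j', X, B)       # i_X on a 2-form -> 1-form

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))

# third identity: star(V~ ^ W~) = i_W star(V~)
assert np.allclose(star2(wedge11(v, w)), interior2(w, star1(v)))

# the whole chain: star(U~ ^ star(V~ ^ W~)) = g(U,W) V~ - g(U,V) W~
lhs = star2(wedge11(u, star2(wedge11(v, w))))
assert np.allclose(lhs, (u @ w) * v - (u @ v) * w)
```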

3

I just want to add a proof using my favourite method for this kind of thing: Penrose graphical notation. I think it is as coordinate-free as you can get.

Every piece of the diagram has a meaning. Tensors are shapes with lines going upwards or downwards, depending on the type of tensor. For example, vectors have one line going upwards, and covectors one line going downwards. Contraction is represented by joining the lines.

For example: the first line on the right side, over the word "proof", is the statement $(b\times c)^{a} = b^{b}c^{c}g_{bd}g_{ce}\epsilon^{dea} = b^{b}c^{c}\epsilon_{bcd}g^{da}$ where the indices are proper abstract indices: they do not represent components, but the slots of the tensors.

At the bottom I repeat the properties I use.

[Image: the proof in Penrose graphical notation]

  1. Antisymmetry of $\epsilon^{abc}$
  2. $\epsilon^{abc}\epsilon_{def} = \delta^{abc}_{def}$
  3. $\delta^{abc}_{dec} = \delta^{ab}_{de}$
  4. $\delta^{ab}_{cd}A_{ab} = 2A_{[cd]} = A_{cd} - A_{dc}$
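
These properties are straightforward to verify numerically; a minimal sketch assuming numpy, with all indices written as plain array axes since the metric is Euclidean:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

I = np.eye(3)
# generalized Kronecker delta: delta^{abc}_{def} = det of the 3x3 matrix of deltas
d3 = (np.einsum('ad,be,cf->abcdef', I, I, I) + np.einsum('ae,bf,cd->abcdef', I, I, I)
      + np.einsum('af,bd,ce->abcdef', I, I, I) - np.einsum('ad,bf,ce->abcdef', I, I, I)
      - np.einsum('ae,bd,cf->abcdef', I, I, I) - np.einsum('af,be,cd->abcdef', I, I, I))
d2 = np.einsum('ad,be->abde', I, I) - np.einsum('ae,bd->abde', I, I)

assert np.allclose(np.einsum('abc,def->abcdef', eps, eps), d3)   # property 2
assert np.allclose(np.einsum('abcdec->abde', d3), d2)            # property 3
A = np.random.default_rng(0).standard_normal((3, 3))
assert np.allclose(np.einsum('abcd,ab->cd', d2, A), A - A.T)     # property 4
```
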
Jackozee Hakkiuz
  • 5,583
2

Adapted from my previous proof of $\nabla \times (\vec{A} \times \vec{B})$: \begin{align} \vec a \times (\vec b \times \vec c) & = a_l \hat{e}_l \times (b_i c_j \hat{e}_k \epsilon_{ijk}) \\ & = a_l b_i c_j \epsilon_{ijk} \underbrace{ (\hat{e}_l \times \hat{e}_k)}_{(\hat{e}_l \times \hat{e}_k) = \hat{e}_m \epsilon_{lkm} } \\ & = a_l b_i c_j \hat{e}_m \underbrace{\epsilon_{ijk} \epsilon_{mlk}}_{\text{contracted epsilon identity}} \\ & = a_l b_i c_j \hat{e}_m \underbrace{(\delta_{im} \delta_{jl} - \delta_{il} \delta_{jm})}_{\text{They sift other subscripts}} \\ & = a_j (b_i c_j \hat{e}_i) - a_i (b_i c_j \hat{e}_j) \\ & = (b_i \hat{e}_i) (a_j c_j) - (c_j \hat{e}_j) (a_i b_i) \\ & = \vec b (\vec a\cdot\vec c) - \vec c(\vec a\cdot \vec b) \end{align}
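
A quick numerical check of the contracted epsilon identity and of the whole component computation (a minimal sketch, assuming numpy):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

I = np.eye(3)
# contracted epsilon identity: eps_ijk eps_mlk = d_im d_jl - d_il d_jm
assert np.allclose(np.einsum('ijk,mlk->ijml', eps, eps),
                   np.einsum('im,jl->ijml', I, I) - np.einsum('il,jm->ijml', I, I))

# the whole computation: a_l b_i c_j eps_ijk eps_lkm, indexed by m
a, b, c = np.random.default_rng(0).standard_normal((3, 3))
lhs = np.einsum('l,i,j,ijk,lkm->m', a, b, c, eps, eps)
assert np.allclose(lhs, b * (a @ c) - c * (a @ b))
```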

Frenzy Li
  • 3,685
2

We can do this by using the definition of the cross product as a bilinear map $V \times V \to V$. We also take as a basic property that it is orthogonal to each of its arguments, $$ a \cdot (a \times b) = b \cdot (a \times b) = 0 \tag{Orth} $$ We need another property to fix the scaling; the easiest one is that the magnitude of $a \times b$ is the area of the parallelogram spanned by $a$ and $b$. This effectively means that $$ (a \times b) \cdot (a \times b) = (a \cdot a)(b \cdot b) - (a \cdot b)^2 \tag{Area} $$ (If this looks opaque, you might prefer to start with perpendicular $a$ and $b$, then extend by bilinearity.)

(Area) with $b=a$, together with $v \cdot v = 0 \implies v = 0$, forces $$a \times a = 0, \tag{Alt} $$ (alternating), and expanding $(a+b) \times (a+b)$ and using this implies that $$a \times b = - b \times a : \tag{AntiSym} $$ the cross product must be antisymmetric.

The scalar triple product is $[a,b,c] = a \cdot (b \times c)$. It is trilinear since both products are bilinear, and it is also zero when two arguments are equal, by (Orth) and (Alt): $$ [a,a,b] = [a,b,a] = [b,a,a] = 0 $$ Using linearity on $[a+b,a+b,c]$ implies that further $[a,b,c] = -[b,a,c]$, and combining this with the inherited antisymmetry in the last two arguments from $\times$, we find that $$ [a,b,c] = [b,c,a] = [c,a,b] = -[b,a,c] = -[a,c,b] = -[c,b,a] , $$ and in particular, $$ a \cdot (b \times c) = (a \times b) \cdot c : $$ we can swap the position of the dot and the cross.

We can now derive a restricted form of the triple product identity: applying (Area) to $a+b$ and $c$ and expanding with linearity, the $(a,c)$ and $(b,c)$ groups of terms vanish by (Area) itself, so $$ 0 = ((a + b) \times c) \cdot ((a+b) \times c) + ((a+b) \cdot c)^2 - ((a+b) \cdot (a+b))(c \cdot c) \\ = \dotsb = 2\left[ (a \times c) \cdot (b \times c) + (a \cdot c)(b \cdot c) - (a \cdot b)(c \cdot c) \right] . $$ Switching the dot and the cross in the first term, we find that $$ 0 = a \cdot ( c \times (b \times c) + (b \cdot c)c - (c \cdot c)b ) . $$ But $a$ is arbitrary here, so $c \times (b \times c) = (c \cdot c)b - (b \cdot c)c$; applying (AntiSym) to the inner product gives $$ c \times (c \times b) = (b \cdot c)c - (c \cdot c)b . \tag{R} $$
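
Both (Area) and (R) are easy to spot-check numerically; a minimal sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

# (Area): |a x b|^2 = (a.a)(b.b) - (a.b)^2
ab = np.cross(a, b)
assert np.isclose(ab @ ab, (a @ a) * (b @ b) - (a @ b) ** 2)

# (R): c x (c x b) = (b.c) c - (c.c) b
assert np.allclose(np.cross(c, np.cross(c, b)), (b @ c) * c - (c @ c) * b)
```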

We can derive a couple more identities that are true in general, but now we can cheat and use that we are in three dimensions, so we can expand $a = \lambda b + \mu c + \nu (b \times c)$. Then using linearity, (R) (in both orders) and (Alt), the $\nu$ term drops out and $$ a \times (b \times c) = \lambda (b \times (b \times c)) + \mu ( c \times (b \times c) ) \\ = \lambda( (c \cdot b)b - (b \cdot b)c ) + \mu ( (c \cdot c)b - (b \cdot c)c ) \\ = ((\lambda b + \mu c) \cdot c)b - ((\lambda b + \mu c) \cdot b)c . $$ Finally, we can add the $\nu (b \times c)$ part back into both dot products, since it is orthogonal to $b$ and $c$, which gives $$ a \times (b \times c) = (a \cdot c)b - (a \cdot b)c $$ as expected.
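
A numerical check of the decomposition step and the final identity (a minimal sketch, assuming numpy; the basis is generically invertible for random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

# expand a over {b, c, b x c}
basis = np.column_stack([b, c, np.cross(b, c)])
lam, mu, nu = np.linalg.solve(basis, a)
assert np.allclose(a, lam * b + mu * c + nu * np.cross(b, c))

# the final identity, with the nu part absorbed into the dot products
assert np.allclose(np.cross(a, np.cross(b, c)), (a @ c) * b - (a @ b) * c)
```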

Chappers
  • 67,606