As in the title, I was wondering whether the formula $$a\times (b\times c)=b(a\cdot c)-c(a \cdot b)$$ for the $\mathbb R ^3$ cross product has some geometrical interpretation. I've recently seen a proof (from Vector Analysis by J.W. Gibbs) that's not at all difficult to understand; however, I can hardly remember the steps of the proof (and I keep forgetting the correct order of the $a$'s, $b$'s, and $c$'s), since it appears to me to be just algebraic manipulation. So, why is this true? Or is it just an accident?
3 Answers
No, it's not an accident. The cross product is orthogonal to each factor, so $a\times(b\times c)$ has to be orthogonal to $b\times c$, hence it lies in the plane spanned by $b$ and $c$. But it also has to be orthogonal to $a$. So, writing $$a\times(b\times c) = xb + yc$$ and dotting with $a$, you get $x(b\cdot a) + y(c\cdot a)=0$, which forces $(x,y)$ to be proportional to $(c\cdot a,\,-b\cdot a)$. So the answer must be some scalar multiple of the correct formula. Now you only have to check that that scalar is $1$ by substituting $a=b$ and $a=c$. Better yet, let $a$ be a unit vector in the plane spanned by $b$ and $c$ that is orthogonal to $b$.
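As a quick numerical sanity check of this argument (an illustration added here, not part of the original answer), the following numpy sketch writes $a\times(b\times c) = xb + yc$ for random vectors, solves for $x$ and $y$, and confirms that they come out as $a\cdot c$ and $-(a\cdot b)$, and that the result is orthogonal to $a$:

```python
# Sanity check of the argument above: write a x (b x c) = x*b + y*c,
# solve for the coefficients, and compare with (a.c, -(a.b)).
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3

lhs = np.cross(a, np.cross(b, c))
# Solve lhs = x*b + y*c for (x, y); the system is consistent, so
# least squares returns the exact coefficients.
(x, y), *_ = np.linalg.lstsq(np.column_stack([b, c]), lhs, rcond=None)

print(np.isclose(x, np.dot(a, c)), np.isclose(y, -np.dot(a, b)))  # True True
print(np.isclose(np.dot(lhs, a), 0.0))                            # True: orthogonal to a
```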

- I'm confused. How do you know a priori that the scalar multiple doesn't depend on the vectors $a$ and $b$? I.e., if you substitute $a=b$ and $a=c$, how do you know that the scalar multiplier doesn't change? – Matthew Kvalheim Dec 09 '16 at 02:36
- @Matthew: Note that the coefficient $x$ has to be bilinear in $a$ and $c$ and the coefficient $y$ has to be bilinear in $a$ and $b$. – Ted Shifrin Dec 09 '16 at 03:08
- Thank you very much for your help. I think I understand now. – Matthew Kvalheim Dec 09 '16 at 06:58
- @TedShifrin I am confused when you say substitute $a=b$ and $a=c$. Are you saying that $a=b=c$? – A Slow Learner Apr 16 '18 at 01:33
- @ASlowLearner: No, I mean first substitute $a=b$ and then substitute $a=c$. Sorry it was not clear! – Ted Shifrin Apr 16 '18 at 01:42
- @TedShifrin But after substituting $a=b$, $a$ is no longer in the equation. – A Slow Learner Apr 16 '18 at 02:28
- @ASlowLearner, you're trying to solve for $x$ and $y$. Work it out step by step with pencil and paper. – Ted Shifrin Apr 16 '18 at 02:32
- @TedShifrin: um, sorry, I need to understand this and I don't know anything about bilinearity. Can you explain what the first comment asks in some dumbed-down way? – harry Apr 17 '21 at 15:01
- @HarryHolmes There's nothing fancy here. Note that $a\times(b\times c)$ is linear in each of the variables separately, and so the right-hand side must be as well. – Ted Shifrin Apr 17 '21 at 15:56
- @TedShifrin: that seems like... dimensional analysis? I still haven't got it. – harry Apr 17 '21 at 16:11
- @HarryHolmes I'm talking about linearity in the sense of linear algebra: it distributes across sums, and scalar multiples pull out. – Ted Shifrin Apr 17 '21 at 17:22
- @TedShifrin: Can you recommend some resources to get a basic knowledge of this? – harry Apr 18 '21 at 01:21
- There are zillions of linear algebra books out there. I don't know what's best for your interests and background. For just the basic notions of linearity, properties of dot and cross product, you might check out some of the earlier lectures in my YouTube lectures (linked in my profile). – Ted Shifrin Apr 18 '21 at 02:23
- @MatthewKvalheim I have thought about it thoroughly and I don't see why the coefficient functions $x(a,b,c)$ and $y(a,b,c)$ are constant. I'm not even sure why they need to be linear in any of the variables; a priori we only know that the sum is linear. I find this proof very elegant and conceptual, but everywhere I find it, this point is left out. Could you elaborate, please? – peter Jun 13 '21 at 09:27
- @peter This is, after all, a polynomial of degree 3 in the (coordinates of the) vectors, so all that can be left is a true constant factor. – Ted Shifrin Jun 13 '21 at 18:30
- Well, the LHS is such a polynomial, but why do the summands on the RHS need to be such individually? E.g. $(x^3 + e^x) + (x^2 - e^x)$ is also a polynomial of degree 3. – peter Jun 14 '21 at 15:25
- @peter You're missing the point. The right-hand side in totality must be some (possibly functional) multiple of what we have. I'm arguing that that function can only be a constant since we already have a polynomial of degree 3. – Ted Shifrin Jun 14 '21 at 15:30
- So you're not arguing for $x$ and $y$ to be bilinear/constant first, as the second comment here might imply? You're saying that the scalar multiple of the entire formula must be constant? But then again, the quotient of two polynomials is not necessarily a polynomial, right? – peter Jun 15 '21 at 00:43
- @peter You keep erroneously saying that the $x$ and $y$ I defined are constant. I said very clearly that $(x,y)$ is a scalar multiple of the desired vector quantity. That scalar multiple is the function to which I've been referring. Where do you get a quotient of polynomials? The quantity under consideration is multilinear in $a,b,c$ to start with. So is the formula on the right-hand side. They have the same degree as polynomials. – Ted Shifrin Jun 15 '21 at 00:50
- The entire right-hand side is such a polynomial, but why are the summands individually? I thought the second comment says that $x$ and $y$ are linear in two variables and constant in the third (i.e. independent of it). Why can't one summand introduce something nonlinear which the other then compensates for? – peter Jun 16 '21 at 07:56
- @TedShifrin I've plugged in $b$ and $c$ and gotten $b\times(b\times c) = D(b\cdot c) b - D (b\cdot b) c$ and $c\times(b\times c) = D(c\cdot c) b - D(c\cdot b)c$, where we want to show $D$ is $1$. Am I supposed to know a priori what $b\times (b\times c)$ and $c\times(b\times c)$ are, or what some manipulation of them together is? – ffffffyyyy Sep 16 '21 at 18:35
- @ffffffyyyy You need to use more of the geometric meaning of the cross product (e.g., the magnitude being the area of the parallelogram). It's easiest to assume $b$ and $c$ are unit vectors. – Ted Shifrin Sep 16 '21 at 20:28
- You might as well say to assume that $a, b, c$ are $e_1, e_2, e_3$ then, so $e_2 \times (e_2 \times e_3) = e_2 \times e_1 = -e_3$, which means $D$ has to equal $1$. If you assume $a$, $b$, and $c$ to be convenient unit vectors, I don't see what the point of also plugging in $c$ is. This was confusing. – ffffffyyyy Sep 16 '21 at 23:28
- @ffffffyyyy To be honest, I wrote this more than 8 years ago. Today, I would delete that sentence and go with the "better yet." – Ted Shifrin Sep 16 '21 at 23:30
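To illustrate the multilinearity point discussed in the comments above (a numerical check added here, not part of the original thread), one can verify directly that $a\times(b\times c)$ distributes over sums and pulls out scalar multiples in each slot; the scalars below are arbitrary choices:

```python
# Check that a x (b x c) is linear in each argument separately.
import numpy as np

rng = np.random.default_rng(2)
a1, a2, b, c = rng.standard_normal((4, 3))
s, t = 2.0, -3.0                                   # arbitrary scalars

f = lambda a, b, c: np.cross(a, np.cross(b, c))    # the triple product

# Linearity in the first slot; the analogous check works for the other two slots.
print(np.allclose(f(s * a1 + t * a2, b, c), s * f(a1, b, c) + t * f(a2, b, c)))  # True
```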
Well, we can prove the BAC-CAB identity using only geometric arguments as follows.
First consider a coordinate system where the $x$- and $y$-axes are skewed (non-orthogonal). We project the segment $\mathrm{OA}$ orthogonally onto the two axes; the feet of these perpendiculars, $\mathrm{B}$ and $\mathrm{C}$, together with $\mathrm{O}$ and $\mathrm{A}$ form the quadrilateral $\square\mathrm{ABOC}$.
We then construct the parallelogram $\square\mathrm{OFED}$ with sides $\mathrm{OF} = \mathrm{DE} = \mathrm{OC}$ and $\mathrm{OD} = \mathrm{FE} = \mathrm{BO}$ and the diagonal $\mathrm{OE} = \mathrm{BC}$. Then, by construction,
$$\begin{aligned} \overrightarrow{\mathrm{OE}} &= \mathrm{OF}\,\hat{x} + \mathrm{FE}\,\hat{y} \\ & = \mathrm{OC}\,\hat{x} + \mathrm{OB}\,\hat{y} \\ & = (\mathrm{OA} \cdot \cos{\angle \mathrm{AOC}})\,\hat{x} + (\mathrm{OA} \cdot \cos{\angle \mathrm{BOA}})\,\hat{y} \\ & = (\overrightarrow{\mathrm{OA}} \boldsymbol{\cdot} \hat{y})\,\hat{x} - (\overrightarrow{\mathrm{OA}} \boldsymbol{\cdot} \hat{x})\,\hat{y}, \end{aligned} \label{eq1} \tag{1}$$
where $\hat{x}$ and $\hat{y}$ are unit basis vectors.
We note that $\square\mathrm{ABOC}$ is a cyclic quadrilateral since $\angle\mathrm{BAC} + \angle\mathrm{BOC} = \angle\mathrm{ABO} + \angle\mathrm{ACO} = 180^\circ$. It can therefore be inscribed in a circle, and, letting $\mathrm{P}$ denote the intersection of the chords $\mathrm{AO}$ and $\mathrm{BC}$, the intersecting chords theorem gives us
$$\mathrm{AP} \cdot \mathrm{PO} = \mathrm{BP} \cdot \mathrm{PC},$$
so that the triangles $\Delta\mathrm{ABP} \sim \Delta\mathrm{COP}$ and $\Delta\mathrm{BOP} \sim \Delta\mathrm{ACP}$ are similar.
From the law of sines we get that
$$ \frac{\mathrm{OE}}{\sin \angle\mathrm{OFE}} = \frac{\mathrm{FE}}{\sin \angle\mathrm{EOF}} = \frac{\mathrm{BO}}{\sin \angle\mathrm{BCO}} = \frac{\mathrm{OA} \cdot \cos \angle\mathrm{BOA}}{\sin \angle\mathrm{BCO}} = \mathrm{OA}$$
as $\angle\mathrm{BOA} = \angle\mathrm{BCA} = 90^\circ - \angle\mathrm{BCO}$. Thus
$$ \mathrm{OE} = \mathrm{OA} \cdot \sin \angle\mathrm{OFE} = \mathrm{OA} \cdot \sin \angle\mathrm{DOF}$$
which is equivalent to
$$|\overrightarrow{\mathrm{OE}}| = |\overrightarrow{\mathrm{OA}}| \cdot \sin \angle\mathrm{DOF} = |\overrightarrow{\mathrm{OA}} \times (\hat{x} \times \hat{y})|. \label{eq2} \tag{2}$$
We can also see that
$$ \angle\mathrm{AOE} = \angle\mathrm{AOC} + \angle\mathrm{COE} = \angle\mathrm{ABP} + \angle\mathrm{PBO} = 90^\circ. \label{eq3} \tag{3}$$
$(\ref{eq2})$ and $(\ref{eq3})$ combined thus prove that $$ \overrightarrow{\mathrm{OE}} = \overrightarrow{\mathrm{OA}} \times (\hat{x} \times \hat{y}) \label{eq4} \tag{4}$$ (correct magnitude and direction as given by the right-hand rule). We can now set $\vec{a} \equiv \overrightarrow{\mathrm{OA}}$ and combine eqs. $(\ref{eq1})$ and $(\ref{eq4})$:
$$ \vec{a} \times (\hat{x} \times \hat{y}) = (\vec{a} \boldsymbol{\cdot} \hat{y})\,\hat{x} - (\vec{a} \boldsymbol{\cdot} \hat{x})\,\hat{y}. $$
Finally, multiplying both sides by the scalars $b$ and $c$, and writing $\vec{b} \equiv b\,\hat{x}$ and $\vec{c} \equiv c\,\hat{y}$, gives us
$$ \vec{a} \times (\vec{b} \times \vec{c}) = (\vec{a} \boldsymbol{\cdot} \vec{c})\,\vec{b} - (\vec{a} \boldsymbol{\cdot} \vec{b})\,\vec{c}. $$
Note that this also holds for any vector $\vec{a}$, even one which is not parallel to the $xy$-plane, as any component perpendicular to this plane is mapped to zero when taking the cross and dot products.
So to answer the question, I guess the rule can be said to hold due to a (rather complex) geometrical relation between the two quadrilaterals $\square\mathrm{ABOC}$ and $\square\mathrm{OFED}$ formed as above. The last multiplication step can be thought of as scaling the relevant sides of these equally.
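As a quick numerical check of this construction (an illustration added here, not part of the original answer; the skew angle and the vector $\vec{a}$ below are arbitrary choices), one can verify the skewed unit-vector identity, the remark that out-of-plane components of $\vec{a}$ drop out, and the final scaling step:

```python
# Check a x (xhat x yhat) = (a.yhat) xhat - (a.xhat) yhat for skewed unit axes,
# with a vector a that has a component perpendicular to the xy-plane.
import numpy as np

theta = np.radians(70)                          # arbitrary skew angle between the axes
xhat = np.array([1.0, 0.0, 0.0])
yhat = np.array([np.cos(theta), np.sin(theta), 0.0])
a = np.array([0.3, -1.2, 2.5])                  # arbitrary; note the nonzero z-component

lhs = np.cross(a, np.cross(xhat, yhat))
rhs = np.dot(a, yhat) * xhat - np.dot(a, xhat) * yhat
print(np.allclose(lhs, rhs))                    # True

# The final scaling step: b = b*xhat and c = c*yhat scale both sides equally.
b, c = 2.0 * xhat, 0.5 * yhat
print(np.allclose(np.cross(a, np.cross(b, c)),
                  np.dot(a, c) * b - np.dot(a, b) * c))   # True
```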

Trigonometric proof of vector triple product expansion
The vector identity \begin{equation}\label{e1}\tag{1} \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{a}{\cdot}\mathbf{c}~\mathbf{b} - \mathbf{a}{\cdot}\mathbf{b}~\mathbf{c} \end{equation} is related to the trigonometric identity \begin{equation}\label{e2}\tag{2} \sin(\beta-\gamma) = \cos\gamma\,\sin\beta - \cos\beta\,\sin\gamma \,, \end{equation} which I have arranged so as to emphasize the connection.
Proof of (\ref{e1})
If $\mathbf{b}$ and $\mathbf{c}$ are linearly dependent, then either $\mathbf{b}$ is a scalar multiple (possibly zero) of $\mathbf{c}$, or vice versa; and in either case it is trivial to verify that both sides of (\ref{e1}) are zero.
If $\mathbf{b}$ and $\mathbf{c}$ are linearly independent, then the left side of (\ref{e1}) is unchanged if $\mathbf{a}$ is replaced by its projection on the plane normal to $\mathbf{b}{\times}\mathbf{c}$, i.e. on the plane of $\mathbf{b}$ and $\mathbf{c}$; and the same replacement leaves the right side of (\ref{e1}) unchanged, because any component of $\mathbf{a}$ normal to the said plane is normal to both $\mathbf{b}$ and $\mathbf{c}$ and therefore makes no contribution to the dot-products.
So, to complete the proof of (\ref{e1}), we need only prove it for the special case in which $\mathbf{a}$ is in the plane of $\mathbf{b}$ and $\mathbf{c}$. In this case let $\mathbf{i},\mathbf{j},\mathbf{k}$ be mutually perpendicular unit vectors with $\mathbf{i}\times\mathbf{j}=\mathbf{k}\,,$ and let them be oriented so that the plane of $\mathbf{i}$ and $\mathbf{j}$ is parallel to the plane of $\mathbf{b}$ and $\mathbf{c}\,$ while $\mathbf{a}$ (if nonzero) is in the $\mathbf{i}$ direction. Let $\mathbf{a},\mathbf{b},\mathbf{c}\,$ have magnitudes $a,b,c$. Let $\mathbf{b}$ and $\mathbf{c}$ make angles $\beta$ and $\gamma$ (respectively) with $\mathbf{i}$ (measured clockwise while looking in the $\mathbf{k}$ direction). Then, by definition, $$\mathbf{b}\times\mathbf{c} = bc\sin(\gamma-\beta)\,\mathbf{k} \,,$$ so that \begin{equation}\label{e3}\tag{3} \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = abc\sin(\gamma-\beta)\,(-\mathbf{j}) = abc\sin(\beta-\gamma)\,\mathbf{j} \,. \end{equation} But $\mathbf{a}=a\mathbf{i}\,,$ and $\mathbf{b}=b\mathbf{i}\cos\beta+b\mathbf{j}\sin\beta\,,$ and $\mathbf{c}=c\mathbf{i}\cos\gamma+c\mathbf{j}\sin\gamma\,,$ so that $\mathbf{a}{\cdot}\mathbf{b}=ab\cos\beta\,$ and $\mathbf{a}{\cdot}\mathbf{c}=ac\cos\gamma\,,$ whence $$\mathbf{a}{\cdot}\mathbf{c}~\mathbf{b} - \mathbf{a}{\cdot}\mathbf{b}~\mathbf{c} = ac\cos\gamma\,(b\mathbf{i}\cos\beta+b\mathbf{j}\sin\beta) - ab\cos\beta\,(c\mathbf{i}\cos\gamma+c\mathbf{j}\sin\gamma) \,.$$ The terms in $\mathbf{i}$ cancel, leaving $$\mathbf{a}{\cdot}\mathbf{c}~\mathbf{b} - \mathbf{a}{\cdot}\mathbf{b}~\mathbf{c} = abc(\cos\gamma\,\sin\beta - \cos\beta\,\sin\gamma)\,\mathbf{j} \,,$$ which, by identity (\ref{e2}), matches the right side of (\ref{e3}), completing the proof.
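As a numerical illustration of this special case (added here, not part of the original proof; the magnitudes and angles below are arbitrary choices), one can check that the planar setup reproduces both $(\ref{e3})$ and $(\ref{e1})$:

```python
# Planar special case: a along i, b and c in the ij-plane at angles beta and gamma.
import numpy as np

a_len, b_len, c_len = 2.0, 1.5, 0.7                 # arbitrary magnitudes
beta, gamma = np.radians(40.0), np.radians(110.0)   # arbitrary angles from i

i, j, k = np.eye(3)                                 # right-handed orthonormal basis
a = a_len * i
b = b_len * (np.cos(beta) * i + np.sin(beta) * j)
c = c_len * (np.cos(gamma) * i + np.sin(gamma) * j)

lhs = np.cross(a, np.cross(b, c))
print(np.allclose(lhs, a_len * b_len * c_len * np.sin(beta - gamma) * j))  # eq. (3)
print(np.allclose(lhs, np.dot(a, c) * b - np.dot(a, b) * c))               # eq. (1)
```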
Notes
- The above proof uses conveniently chosen basis vectors—so conveniently chosen that it does not need to invoke the distributive law for the cross-product (but does invoke the distributive law for the dot-product).
- I don't think "bac-cab"; I think "outer dot-product first"—which also works for $$(\mathbf{a}\times\mathbf{b})\times\mathbf{c} = \mathbf{a}{\cdot}\mathbf{c}~\mathbf{b} - \mathbf{b}{\cdot}\mathbf{c}~\mathbf{a} \,.$$
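For completeness (again an illustration, not part of the original answer), here is a quick numerical check of both expansions covered by the "outer dot-product first" rule:

```python
# Check "outer dot-product first" for both triple products with random vectors.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

print(np.allclose(np.cross(a, np.cross(b, c)),
                  np.dot(a, c) * b - np.dot(a, b) * c))   # a x (b x c)
print(np.allclose(np.cross(np.cross(a, b), c),
                  np.dot(a, c) * b - np.dot(b, c) * a))   # (a x b) x c
```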

- At the time of this comment, internal links work correctly in Firefox but not in Chrome. – Gavin R. Putland Jan 06 '24 at 00:18