
I want to prove the following relation

$$\nabla \times (\mathbf a\times \mathbf b) = \mathbf a(\nabla \cdot \mathbf b) + (\mathbf b \cdot \nabla) \mathbf a - \mathbf b (\nabla \cdot \mathbf a) - (\mathbf a \cdot \nabla) \mathbf b$$

using the following Levi-Civita definition of the cross product $$\mathbf{a} \times \mathbf{b} =\mathbf{e}_k \epsilon_{ijk}a_ib_j$$ where $\epsilon_{ijk} =\begin{cases} +1 & \text{if } i,j,k \text{ is a cyclic (even) permutation of } 1,2,3, \\ -1 & \text{if } i,j,k \text{ is an anticyclic (odd) permutation of } 1,2,3, \text{ and} \\ \;\;\,0 & \text{if }i=j \text{ or } j=k \text{ or } k=i. \end{cases}$
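As a sanity check on this component definition, here is a minimal Python sketch (0-based indices; `levi_civita` and `cross` are hypothetical helper names, not from any library) that builds the cross product directly from $\epsilon_{ijk}$:

```python
def levi_civita(i, j, k):
    # +1 for cyclic (even) permutations of (0, 1, 2), -1 for anticyclic
    # (odd) permutations, 0 whenever any two indices coincide
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def cross(a, b):
    # (a x b)_k = epsilon_ijk a_i b_j, summed over i and j
    return [sum(levi_civita(i, j, k) * a[i] * b[j]
                for i in range(3) for j in range(3))
            for k in range(3)]

print(cross([1, 0, 0], [0, 1, 0]))  # e1 x e2 = e3, i.e. [0, 0, 1]
```

The result agrees with the familiar determinant/right-hand-rule cross product.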

Frenzy Li
Theorem

2 Answers


Proof

Courtesy of this thread from PhysicsForums.

Here $a_i$ and $b_j$ denote the components of $\vec{A}$ and $\vec{B}$.

\begin{align} \nabla \times (\vec{A} \times \vec{B}) &=\partial_l \hat{e}_l \times (a_i b_j \hat{e}_k \epsilon_{ijk}) \\ &=\partial_l a_i b_j \epsilon_{ijk} \underbrace{ (\hat{e}_l \times \hat{e}_k)}_{\hat{e}_l \times \hat{e}_k = \hat{e}_m \epsilon_{lkm} = \hat{e}_m \epsilon_{mlk}} \\ &=\partial_l a_i b_j \hat{e}_m \underbrace{\epsilon_{ijk} \epsilon_{mlk}}_{\text{contracted epsilon identity}} \\ &=\partial_l a_i b_j \hat{e}_m \underbrace{(\delta_{im} \delta_{jl} - \delta_{il} \delta_{jm})}_{\text{the deltas sift the other subscripts}} \\ &=\partial_j (a_i b_j \hat{e}_i)- \partial_i (a_i b_j \hat{e}_j) \\ &=\color{blue}{a_i \partial_j b_j \hat{e}_i + b_j \partial_j a_i \hat{e}_i} - (\color{green}{a_i \partial_i b_j \hat{e}_j + b_j \partial_i a_i \hat{e}_j}) \\ &= \vec{A}(\nabla \cdot \vec{B}) + (\vec{B} \cdot \nabla)\vec{A} - (\vec{A} \cdot \nabla)\vec{B} - \vec{B}(\nabla \cdot \vec{A}) \\ \end{align}
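The whole chain above can be machine-checked. This sketch (assuming SymPy is available; the two vector fields are arbitrary example choices) verifies the identity $\nabla\times(\vec A\times\vec B)=\vec A(\nabla\cdot\vec B)+(\vec B\cdot\nabla)\vec A-(\vec A\cdot\nabla)\vec B-\vec B(\nabla\cdot\vec A)$ symbolically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r = [x, y, z]

# arbitrary example fields (any smooth fields would do)
A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([sp.sin(x), sp.cos(y), x*y*z])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sum(sp.diff(F[i], r[i]) for i in range(3))

def directional(F, G):
    # (F . nabla) G, applied componentwise
    return sp.Matrix([sum(F[i] * sp.diff(G[j], r[i]) for i in range(3))
                      for j in range(3)])

lhs = curl(A.cross(B))
rhs = A * div(B) + directional(B, A) - B * div(A) - directional(A, B)
print((lhs - rhs).expand())  # the zero vector
```

Since the identity holds term by term, plain expansion (no trig simplification) already reduces the difference to zero.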

Edit: As is pointed out by @enzotib, the blue and green sums are derivatives of products.

Why did the deltas vanish?

Due to Kronecker $\delta$'s sifting property. Recall the definition of Kronecker delta:

$$\delta_{ij}=\begin{cases} 0,\quad \text{if } i\ne j, \\ 1,\quad \text{if } i=j. \end{cases}$$

Thus, for $j\in\mathbb Z$:

$$\sum\limits_{i=-\infty}^{\infty}a_i\delta_{ij} = a_j$$

This is just like filtering (or sifting): only when $i=j$ does $\delta_{ij} = 1$, so all other terms vanish. The same works for partial derivatives.
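The sifting property is easy to see numerically; a tiny sketch (indices $0$–$2$ instead of all of $\mathbb Z$):

```python
def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

a = [10, 20, 30]
# summing a_i * delta_ij over i picks out exactly a_j
sifted = [sum(a[i] * delta(i, j) for i in range(3)) for j in range(3)]
print(sifted)  # [10, 20, 30] -- each j recovers a_j
```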

For example,

$$\partial_l a_i b_j \hat{e}_m \delta_{im} \delta_{jl} = \partial_l a_i b_j \hat{e}_i \delta_{jl}$$

  • If $l\ne j$ then $\partial_l a_i b_j \hat{e}_i \delta_{jl} = \partial_l a_i b_j \hat{e}_i(0) = 0$;
  • If $l=j$ then $\partial_l a_i b_j \hat{e}_i \delta_{jl} = \partial_j a_i b_j \hat{e}_i (1) = \partial_j a_i b_j \hat{e}_i$.

Thus, $$\partial_l a_i b_j \hat{e}_m \delta_{im} \delta_{jl} =\partial_j (a_i b_j \hat{e}_i).$$

Note that $\hat{e}_i$ is a constant vector, so it passes freely through the derivatives.

Some thoughts

  1. There are two cross products (one of them is the curl), and we use different subscripts (on the partials and the Levi-Civita symbols) to distinguish them, e.g., $l$ for the curl and $k$ for $\vec{A} \times \vec{B}$.
  2. We move the variables around quite often.
  3. The cross product of two basis vectors is explained in the underbrace.
  4. The contracted epsilon identity is very useful: it lets us replace a product of two epsilons by Kronecker $\delta$'s.
  5. Kronecker $\delta$'s efficiently sift out the surviving terms.
  6. In many proofs of vector calculus identities (this one included), we add and subtract extra terms.
  7. Do I love Levi-Civita symbols and Einstein notation? I'm ambivalent.
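Regarding point 4, the contracted epsilon identity used above, $\epsilon_{ijk}\epsilon_{mlk}=\delta_{im}\delta_{jl}-\delta_{il}\delta_{jm}$, can be verified by brute force over all index values (0-based indices; `eps` and `delta` are hypothetical helpers):

```python
def eps(i, j, k):
    # Levi-Civita symbol for i, j, k in {0, 1, 2}
    # (product is +-2 for permutations, 0 on repeats)
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1 if i == j else 0

# check eps_ijk eps_mlk = delta_im delta_jl - delta_il delta_jm
# for every combination of free indices i, j, l, m
ok = all(
    sum(eps(i, j, k) * eps(m, l, k) for k in range(3))
    == delta(i, m) * delta(j, l) - delta(i, l) * delta(j, m)
    for i in range(3) for j in range(3)
    for l in range(3) for m in range(3)
)
print(ok)  # True
```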
Frenzy Li
    The last step is simply the derivative of a product, first $\partial_j a_i b_j$ and then $\partial_i a_i b_j$. There are errors, though. – Vincenzo Tibullo Oct 20 '12 at 12:04
  • @FrenzYDT.: Can you explain how you changed $\partial_l$ to $\partial_j$? I didn't get how you got rid of the $\delta$. – Theorem Oct 20 '12 at 12:12
  • @Theorem We have to apply the sifting property twice. If delta is zero then the partial derivative with respect to that variable is zero. – Frenzy Li Oct 20 '12 at 12:44

Just figured out one quick but tentative proof using skew-symmetric matrices; it still needs verification. The product of two skew-symmetric matrices acting on $\vec{x}$ is $\mathbf{S}_\vec{A}\mathbf{S}_\vec{B}\vec{x}=\left[\vec{B}\vec{A}^T-(\vec{A}\cdot\vec{B})\,\mathbb{I}\right]\vec{x}$, or, in dyadic notation, $$\begin{align} (\vec{a}\times\vec{b})\times \mathbb{I}&=\mathbb{I}\times (\vec{a}\times\vec{b})=\vec{b}\vec{a}-\vec{a}\vec{b},\\ \left[(\vec{a}\times\vec{b})\times \mathbb{I}\right]\cdot\vec{x}&=(\vec{a}\times \vec{b})\times \vec{x}=(\vec{b}\vec{a}-\vec{a}\vec{b})\cdot\vec{x},\\ \vec{x}\times (\vec{a}\times \vec{b})&=\vec{x}\cdot(\vec{b}\vec{a}-\vec{a}\vec{b}), \end{align}$$ where $\mathbf{S}_\vec{A}=\vec{A}\times$ is the skew-symmetric matrix associated with the vector $\vec{A}$, likewise for $\mathbf{S}_\vec{B}$, and $\mathbb{I}$ is the identity matrix (which can also be omitted). Let $\vec{e}$ be a constant unit vector; then $$\begin{align} [\nabla \times (\vec{A} \times \vec{B})]\cdot\vec{e}&=\nabla\cdot\left[ (\vec{A} \times \vec{B})\times\vec{e}\right]=\nabla\cdot\left[\left(\vec{B}\vec{A}^T-\vec{A}\vec{B}^T\right)\vec{e}\right]\\ &=\left[\nabla {\cdot }\left(\vec{B}\vec{A}^T\right)-\nabla {\cdot }\left(\vec{A}\vec{B}^T\right)\right]\vec{e}. \end{align}$$
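The dyadic identity this answer relies on, $(\vec a\times\vec b)\times\vec x=(\vec b\vec a^T-\vec a\vec b^T)\vec x$, can itself be checked symbolically (assuming SymPy):

```python
import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
x = sp.Matrix(sp.symbols('x1 x2 x3'))

# (a x b) x x  versus  (b a^T - a b^T) x
lhs = a.cross(b).cross(x)
rhs = (b * a.T - a * b.T) * x
print((lhs - rhs).expand())  # the zero vector
```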

MathArt