9

One of the popular identities involving the Kronecker delta and the Levi-Civita symbol reads $$\epsilon_{ijk}\epsilon_{ilm}=\delta_{jl}\delta_{km}-\delta_{kl}\delta_{jm}.$$

Now, is there an intuition or mnemonic that you use which can help one learn this or similar mathematical identities more easily?

Also, what is the motivation for expressing the Levi-Civita symbol in terms of the Kronecker delta in the first place?

Léreau
  • 3,015
Isomorphic
  • 1,182

2 Answers

22

I think it's helpful to see how you can actually derive this identity, using a different definition of $\epsilon_{ijk}$. I hope you are a friend of matrices and determinants, since I am going to use them a lot in what follows. (I will not be using the Einstein summation convention in the proof, but I will in the other parts of this answer.) $\newcommand{\vek}[1]{\boldsymbol{#1}}$ The most common definition of $\epsilon_{ijk}$ is $$ \epsilon_{ijk} = \begin{cases} 1 & \text{if $\space (ijk) \space$ is an even permutation of (123)} \\ -1 & \text{if $\space (ijk) \space$ is an odd permutation of (123)} \\ 0 & \text{otherwise} \end{cases} $$ Comparing this definition with the determinant of the matrix whose columns are the standard basis vectors $\vek{e}_i, \vek{e}_j, \vek{e}_k$, you can observe that

$$ \det (\vek{e}_i ~\vek{e}_j ~\vek{e}_k) = \epsilon_{ijk} ~~~~\text{for all}~~~ (ijk) $$
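As a quick sanity check, here is a minimal sketch in Python (assuming NumPy is available; the helper `levi_civita` is hypothetical, written to mirror the case definition above) that confirms this equality for every triple $(ijk)$:

```python
# Sketch: check that the permutation definition of epsilon_ijk matches
# det(e_i e_j e_k) for every triple of indices in {1, 2, 3}.
import itertools
import numpy as np

def levi_civita(i, j, k):
    """Case definition of the Levi-Civita symbol for indices in {1, 2, 3}."""
    if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:  # even permutations
        return 1
    if (i, j, k) in {(1, 3, 2), (3, 2, 1), (2, 1, 3)}:  # odd permutations
        return -1
    return 0                                            # repeated index

e = np.eye(3)  # e[n - 1] is the standard basis vector e_n
for i, j, k in itertools.product((1, 2, 3), repeat=3):
    det = np.linalg.det(np.column_stack((e[i - 1], e[j - 1], e[k - 1])))
    assert int(round(det)) == levi_civita(i, j, k)
```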

Armed with this and some linear algebra, let's tackle the identity you are interested in:

$$ \sum_{i=1}^3 \epsilon_{ijk}\epsilon_{ilm} = \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} $$
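Before the proof, the identity can also be confirmed by brute force; this sketch reuses the hypothetical `levi_civita` helper from above and simply loops over all index combinations:

```python
# Brute-force check of the contraction identity for all j, k, l, m in {1, 2, 3}.
def delta(a, b):
    """Kronecker delta."""
    return 1 if a == b else 0

for j, k, l, m in itertools.product((1, 2, 3), repeat=4):
    lhs = sum(levi_civita(i, j, k) * levi_civita(i, l, m) for i in (1, 2, 3))
    rhs = delta(l, j) * delta(m, k) - delta(l, k) * delta(m, j)
    assert lhs == rhs
```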


Proof

$\color{blue}{\text{If } j=k}$, then no matter the value of $i$, every summand will be $0$ (since $\epsilon_{ijk}=0$ when $j=k$), and thus so will the sum as a whole. So we have $$ \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} = \delta_{lk} \delta_{mk} - \delta_{lk} \delta_{mk} = 0 = \sum_{i=1}^3 \epsilon_{ijk}\epsilon_{ilm} $$

$\color{blue}{\text{If } j \neq k}$, then there is exactly one value of $i \in \{1,2,3\}$ such that $i,j,k$ are all different. For this value of $i$, we have

\begin{align*} \epsilon_{ijk}\epsilon_{ilm} &= \det( \vek{e}_i ~\vek{e}_j ~\vek{e}_k) \det \begin{pmatrix} \vek{e}_i^T \\ \vek{e}_l^T \\ \vek{e}_m^T \\ \end{pmatrix} = \det \begin{pmatrix} \vek{e}_i^T \vek{e}_i & \vek{e}_i^T \vek{e}_j & \vek{e}_i^T \vek{e}_k \\ \vek{e}_l^T \vek{e}_i & \vek{e}_l^T \vek{e}_j & \vek{e}_l^T \vek{e}_k \\ \vek{e}_m^T \vek{e}_i & \vek{e}_m^T \vek{e}_j & \vek{e}_m^T \vek{e}_k \\ \end{pmatrix} \\&= \det \begin{pmatrix} \delta_{ii} & \delta_{ij} & \delta_{ik} \\ \delta_{li} & \delta_{lj} & \delta_{lk} \\ \delta_{mi} & \delta_{mj} & \delta_{mk} \\ \end{pmatrix} = \det \begin{pmatrix} 1 & 0 & 0 \\ \delta_{li} & \delta_{lj} & \delta_{lk} \\ \delta_{mi} & \delta_{mj} & \delta_{mk} \\ \end{pmatrix} \\&= \det \begin{pmatrix} \delta_{lj} & \delta_{lk} \\ \delta_{mj} & \delta_{mk} \\ \end{pmatrix} = \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} \end{align*} For the other values of $i$, we either have $i=j$ or $i = k$ and thus $\epsilon_{ijk}=0$ . Thus, summing over all values of $i$, we get $$ \sum_{i=1}^3 \epsilon_{ijk}\epsilon_{ilm} = \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} $$

This completes the proof.


Mnemonic

\begin{align} \require{cancel} \epsilon_{ijk}\epsilon_{ilm} &= \det(\vek{e}_i ~\vek{e}_j ~\vek{e}_k) ~\det(\vek{e}_i ~\vek{e}_l ~\vek{e}_m) \\ &= \color{blue}{\det(\cancel{\vek{e}_i} ~\vek{e}_j ~\vek{e}_k) ~\det(\cancel{\vek{e}_i} ~\vek{e}_l ~\vek{e}_m) = \det(\vek{e}_j ~\vek{e}_k) ~\det(\vek{e}_l ~\vek{e}_m) } \\ &= \det(\vek{e}_j ~\vek{e}_k) ~\det \begin{pmatrix} \vek{e}_l^T \\ \vek{e}_m^T \end{pmatrix} \\ &= \det \begin{pmatrix} \vek{e}_l^T \vek{e}_j & \vek{e}_l^T \vek{e}_k \\ \vek{e}_m^T \vek{e}_j & \vek{e}_m^T \vek{e}_k \\ \end{pmatrix} \\ &= \det \begin{pmatrix} \delta_{lj} & \delta_{lk} \\ \delta_{mj} & \delta_{mk} \\ \end{pmatrix} = \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} \end{align} You could just remember what is marked in blue: the $\color{blue}{\text{formal canceling of } \vek{e}_i}$. The calculation after that is pretty straightforward, and you get the correct result.


Usage of the identity

Where would you use this identity? For example, if you want to prove $$ \vek{a} \times (\vek{b} \times \vek{c} ) = \vek{b} (\vek{a} \cdot \vek{c} ) - \vek{c} ( \vek{a} \cdot \vek{b} ) $$ you can start at the left-hand side of the equation (here, using Einstein summation!) \begin{align} \vek{a} \times (\vek{b} \times \vek{c}) &= \epsilon_{ijk} a_k (\vek{b} \times \vek{c})_i ~\vek{e}_j \\ &= \epsilon_{ijk} a_k \left( \epsilon_{ilm} b_l c_m \right) ~\vek{e}_j \\ &= (\epsilon_{ijk}\epsilon_{ilm}) ~a_k b_l c_m ~\vek{e}_j \end{align} where you can now use the identity to continue.
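As a numerical spot-check of this triple-product rule, here is a minimal sketch (continuing with the NumPy setup from the sketches above) using random vectors:

```python
# Spot-check a x (b x c) = b (a . c) - c (a . b) with random vectors.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # three random 3-vectors
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)
```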

Léreau
  • 3,015
  • In terms of abstract orthogonal vectors, what would $\det (\vec{e}_i ~\vec{e}_j ~\vec{e}_k) $ mean? Or can this only make sense in terms of components? – ions me Sep 06 '20 at 06:47
  • Just asking because tensor analysis can be coordinate free, right? – ions me Sep 06 '20 at 06:54
  • @IonSme Yes, tensor analysis and the concept of a determinant in particular can be done in a coordinate free way. This will lead you to the topic of exterior algebras: https://en.wikipedia.org/wiki/Determinant#Exterior_algebra – Léreau Sep 06 '20 at 12:57
2

Here is a new reference

Another reference explaining how one can generalize the Levi-Civita symbol.

Jean Marie
  • 81,803
  • If you read both questions carefully, you'll notice there is an extra line of text present in this question – Isomorphic Jul 29 '16 at 16:56
  • Nevertheless, you should mention your previous question. – Jean Marie Jul 29 '16 at 17:01
  • Alright, that has also been linked now – Isomorphic Jul 29 '16 at 17:04
  • Very well. I modify my answer accordingly. – Jean Marie Jul 29 '16 at 17:13
  • I'd like to ask you something about dealing with indicial notation: is it still like a compressed form of the rules you have been given, where you basically have to evaluate the final result component by component in the end? – Isomorphic Jul 29 '16 at 21:08
  • And you have a certain set of algebraic rules, called "tensor algebra", which are proved in general for all components; you can apply these manipulations to your expression given in indicial notation to shorten it, so that the number of calculations per component is reduced, but ultimately you have to evaluate each component separately and carefully, one by one? – Isomorphic Jul 29 '16 at 21:10
  • I'm sorry, I'm new to this notation, so it's kind of troubling for me to apply it in calculations – Isomorphic Jul 29 '16 at 21:15
  • Do you know the Einstein contraction rule which governs this notation: whenever two identical indices are found in a product, one upper and the other lower, it means that there is an (omitted) summation. For example $f_q^n=g^{np}h_{pq}$ means $f_q^n=\sum_{p=1}^{p=n}g^{np}h_{pq}$ (in this case it is the definition of a matrix product). But in some cases (your notations) all indices are... as indices. – Jean Marie Jul 29 '16 at 21:37
  • Yes, I know. Maybe I need some solved examples for this. Thanks for your help :) – Isomorphic Jul 29 '16 at 21:43
  • Broken first link @JeanMarie – tryst with freedom May 05 '21 at 20:32