
I'd like to know basic rules of inference in proofs of vector identities based on geometric algebra that avoid bases and coordinates.

For example, how does one prove the identity:

$$ \operatorname{Rot} \left(fA\right)=\left(\operatorname{Grad} f \right)\times A+ f \operatorname{Rot} A $$

And what are some general tips, rules and tricks that come up in other such proofs?

  • General questions like this one, while good for long and perhaps meandering discussion, are a bad fit for this site. What works on this site is a single, sharp, focused question, including some context such as your own attempts to settle that question. For our guidelines on these issues see our post on how to ask a good question. – Lee Mosher Dec 21 '22 at 15:12
  • That identity could be a good basis for such a question if you added such context, but also you should define your symbols and your notation so that your potential answerers are not deterred by the desire to avoid ambiguity. – Lee Mosher Dec 21 '22 at 15:13
  • Remember that curl (rot) is a type of product, so think about the ordinary product rule. You don't need a basis, just chase indices. – Sean Roberson Dec 21 '22 at 15:25
  • If someone could just latex it up a bit, the identity is the main question – Gauge Dec 21 '22 at 15:39

1 Answer


This has very little to do with geometric algebra. We just need a couple of things:

  1. It is paramount that we keep track of what is being differentiated. For instance, if $f$ is a scalar-valued function and $A$ is a vector-valued function, rather than simply $$ \nabla\times(fA) = (\nabla f)\times A + f\nabla\times A $$ we will instead write $$ \dot\nabla\times(\dot f\dot A) = (\dot\nabla\dot f)\times A + f\dot\nabla\times\dot A. $$ The dots indicate precisely what each $\dot\nabla$ is differentiating. This allows us to write things like $$ \nabla\times A = \dot\nabla\times\dot A = -\dot A\times\dot\nabla. $$
  2. After doing this, all vector identities involving $\nabla$ are valid as long as it appears in a linear slot. More precisely, if $L_x(v)$ and $M_x(v)$ are two expressions linear in $v$ such that $L_x(v) = M_x(v)$ for all $v$, then symbolically $$ L_{\dot x}(\dot\nabla) = M_{\dot x}(\dot\nabla). $$
  3. The derivative of an expression is the sum of the derivatives of its subexpressions. To illustrate, we directly prove your identity as follows (a symbolic spot-check follows this list): $$ \dot\nabla\times(\dot f\dot A) = \dot\nabla\times(\dot fA) + \dot\nabla\times(f\dot A) = (\dot\nabla\dot f)\times A + f\dot\nabla\times\dot A. $$ The last equality follows simply because $f$ is a scalar quantity.
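None of this is needed for the proof, but identities like this are easy to sanity-check symbolically. Here is a minimal sketch using SymPy's `sympy.vector` module; the particular fields `f` and `A` are arbitrary choices for the check, not anything from the argument above:

```python
from sympy import sin, cos, exp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = sin(x) * exp(y*z)                       # arbitrary scalar field
A = y*z*N.i + x*cos(z)*N.j + exp(x*y)*N.k   # arbitrary vector field

lhs = curl(f*A)
rhs = gradient(f).cross(A) + f*curl(A)

# Each component of the difference should simplify to 0.
print((lhs - rhs).to_matrix(N).applyfunc(lambda e: e.simplify()))
```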

As one more example, we derive an expression for $\nabla\times(A\times B)$. Recall that we have the vector identity $$ u\times(v\times w) = (u\cdot w)v - (u\cdot v)w. $$ Using this we see $$\begin{aligned} \dot\nabla\times(\dot A\times\dot B) &= (\dot\nabla\cdot\dot B)\dot A - (\dot\nabla\cdot\dot A)\dot B \\ &= (\dot\nabla\cdot\dot B)A + (\dot\nabla\cdot B)\dot A - (\dot\nabla\cdot\dot A)B - (\dot\nabla\cdot A)\dot B \\ &= (\nabla\cdot B)A + (B\cdot\nabla)A - (\nabla\cdot A)B - (A\cdot\nabla)B. \end{aligned}$$ In the last step, we have dropped the dots and used the standard convention of "differentiate directly to the right".
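The same kind of check works for this derivation. A sketch under the same assumptions (arbitrary fields; the helper `directional` for $(u\cdot\nabla)v$ is hand-rolled here rather than taken from any library operator):

```python
from sympy import sin, cos, exp
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

A = y*z*N.i + x*cos(z)*N.j + exp(x*y)*N.k
B = sin(y)*N.i + x*z*N.j + (x + z**2)*N.k

def directional(u, v):
    """(u . nabla) v, computed component-wise as u . grad(v_k)."""
    return sum((u.dot(gradient(v.dot(e))) * e for e in (N.i, N.j, N.k)),
               Vector.zero)

lhs = curl(A.cross(B))
rhs = (divergence(B)*A + directional(B, A)
       - divergence(A)*B - directional(A, B))

print((lhs - rhs).to_matrix(N).applyfunc(lambda e: e.simplify()))  # zero column
```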


Elaboration and Well-Definedness

The operator $\nabla$ together with overdot notation satisfies all generic and linear properties that vectors satisfy. Generic means the property has to be true for all vectors. You can justify this with index notation: we simply start with a tensor identity and then contract each side with $\partial_i$. Alternatively, see the third section of my answer here. Notice that there is a chain rule there that I forgot to cover here.

Let's use the $u\times(v\times w)$ identity in my answer as an example. It is generic in $u$ since it is true for any $u$, and it is linear since both sides of the identity are linear functions of $u$. Thus we may replace $u$ with $\dot\nabla$ and arrive at a valid identity; in fact we get four identities: $$ \dot\nabla\times(v\times w) = (\dot\nabla\cdot w)v - (\dot\nabla\cdot v)w, \tag1 $$$$ \dot\nabla\times(\dot v\times w) = (\dot\nabla\cdot w)\dot v - (\dot\nabla\cdot\dot v)w, \tag2 $$$$ \dot\nabla\times(v \times\dot w) = (\dot\nabla\cdot\dot w)v - (\dot\nabla\cdot v)\dot w, \tag3 $$$$ \dot\nabla\times(\dot v\times\dot w) = (\dot\nabla\cdot\dot w)\dot v - (\dot\nabla\cdot\dot v)\dot w. \tag4 $$ We may consistently interpret $\dot\nabla$ with no other dotted variables as differentiating the constant $1$, or symbolically $\dot\nabla = \dot\nabla\dot 1$; thus equation (1) is simply saying $0 = 0$.
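Equation (2), say, can be checked directly: hold $w$ constant (so it carries no dot) and let only $v$ be differentiated. A sketch with arbitrarily chosen fields, reusing the hand-rolled directional-derivative helper:

```python
from sympy import symbols, sin, exp
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

w1, w2, w3 = symbols('w1 w2 w3')        # constant components: w carries no dot
v = x*y*N.i + sin(z)*N.j + exp(x)*N.k   # the differentiated field
w = w1*N.i + w2*N.j + w3*N.k

def directional(u, f):
    """(u . nabla) f for a vector field f, component by component."""
    return sum((u.dot(gradient(f.dot(e))) * e for e in (N.i, N.j, N.k)),
               Vector.zero)

# Equation (2): curl(v x w) = (w . nabla)v - (div v) w  when w is constant.
lhs = curl(v.cross(w))
rhs = directional(w, v) - divergence(v)*w

print((lhs - rhs).to_matrix(N).applyfunc(lambda e: e.simplify()))
```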

In index notation the vector identity is $$ \epsilon^i_{jk}\epsilon^k_{lm}u^jv^lw^m = u^jw_jv^i - u^jv_jw^i. $$ Each of the equations above is then saying $$ \epsilon^i_{jk}\epsilon^k_{lm}(\partial^j1)v^lw^m = (\partial^j1)w_jv^i - (\partial^j1)v_jw^i, \tag1 $$$$ \epsilon^i_{jk}\epsilon^k_{lm}(\partial^jv^l)w^m = w_j(\partial^jv^i) - (\partial^jv_j)w^i, \tag2 $$$$ \epsilon^i_{jk}\epsilon^k_{lm}v^l(\partial^jw^m) = (\partial^jw_j)v^i - v_j(\partial^jw^i), \tag3 $$$$ \epsilon^i_{jk}\epsilon^k_{lm}\partial^j(v^lw^m) = \partial^j(w_jv^i) - \partial^j(v_jw^i). \tag4 $$
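The contraction identity underlying all of these, $\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$, can itself be verified by brute force. A small sketch (all indices lowered, which is harmless in Euclidean space):

```python
from sympy import LeviCivita, KroneckerDelta

# eps_{ijk} eps_{klm} = delta_{il} delta_{jm} - delta_{im} delta_{jl},
# checked over all 3**4 combinations of the free indices i, j, l, m.
ok = all(
    sum(LeviCivita(i, j, k) * LeviCivita(k, l, m) for k in range(3))
    == KroneckerDelta(i, l)*KroneckerDelta(j, m)
     - KroneckerDelta(i, m)*KroneckerDelta(j, l)
    for i in range(3) for j in range(3)
    for l in range(3) for m in range(3)
)
print(ok)  # True
```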

Higher derivatives are much the same, but each derivative needs to separately track what it differentiates. An expression like $\dot\nabla\cdot\dot\nabla(\dot u\cdot\dot v)$ makes no sense. We will use a check $\check\nabla$ exactly like an overdot, but having both symbols allows us to write all the possible (non-trivial) second derivatives: $$ \check\nabla\cdot\dot\nabla(\check{\dot u}\cdot v),\quad \check\nabla\cdot\dot\nabla(\dot u\cdot\check v),\quad \check\nabla\cdot\dot\nabla(\check u\cdot\dot v),\quad \check\nabla\cdot\dot\nabla(u\cdot\check{\dot v}),\quad \check\nabla\cdot\dot\nabla(\dot u\cdot\check{\dot v}),\quad \check\nabla\cdot\dot\nabla(\check u\cdot\check{\dot v}),\quad \check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\dot v),\quad \check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\check v),\quad \check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\check{\dot v}). $$ The last expression is the Laplacian. The fact that partial derivatives commute means that e.g. $\check{\dot u} = \dot{\check u}$.
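For instance, summing the four terms in which each $\nabla$ hits exactly one factor reproduces the familiar product rule $\nabla^2(u\cdot v) = (\nabla^2u)\cdot v + 2\,(\partial_iu)\cdot(\partial_iv) + u\cdot\nabla^2v$, which we can again spot-check symbolically (arbitrary fields, helpers hand-rolled):

```python
from sympy import sin, cos, exp, simplify
from sympy.vector import CoordSys3D, Vector, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

u = x*y*N.i + sin(z)*N.j + exp(x)*N.k
v = cos(y)*N.i + x*z*N.j + y**2*N.k

def lap(f):
    """Scalar Laplacian: div(grad f)."""
    return divergence(gradient(f))

def lap_vec(F):
    """Component-wise vector Laplacian."""
    return sum((lap(F.dot(e)) * e for e in (N.i, N.j, N.k)), Vector.zero)

# sum_k grad(u_k) . grad(v_k)  ==  sum_i (d_i u) . (d_i v)
cross = sum(gradient(u.dot(e)).dot(gradient(v.dot(e))) for e in (N.i, N.j, N.k))

lhs = lap(u.dot(v))
rhs = lap_vec(u).dot(v) + 2*cross + u.dot(lap_vec(v))

print(simplify(lhs - rhs))  # 0
```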

The rule for higher derivatives is the same as for first derivatives: each $\nabla$ must appear in a generic, linear slot. Continuing with our example $$ u\times(v\times w) = (u\cdot w)v - (u\cdot v)w, $$ we see $$ \check\nabla\times(\dot\nabla\times\check{\dot w}) = (\check\nabla\cdot\check{\dot w})\dot\nabla - (\check\nabla\cdot\dot\nabla)\check{\dot w} = \nabla(\nabla\cdot w) - \nabla^2w. $$
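This is the classical curl-of-curl identity, and it too can be spot-checked. A sketch with an arbitrary field `w`, reusing the component-wise vector Laplacian from above:

```python
from sympy import sin, exp
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

w = x*y*z*N.i + sin(x)*exp(z)*N.j + (y**2 + z)*N.k

def lap_vec(F):
    """Component-wise vector Laplacian."""
    return sum((divergence(gradient(F.dot(e))) * e for e in (N.i, N.j, N.k)),
               Vector.zero)

# curl(curl w) = grad(div w) - lap(w)
lhs = curl(curl(w))
rhs = gradient(divergence(w)) - lap_vec(w)

print((lhs - rhs).to_matrix(N).applyfunc(lambda e: e.simplify()))  # zero column
```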

  • But I don't get how you can just turn $\nabla\times f$ into $\nabla f$, especially as the curl of a scalar should be zero. – Gauge Jan 01 '23 at 18:21
  • @LeoKovacic Are you saying you expect $$ \nabla\times(fA) = (\nabla\times f)A + f(\nabla\times A)? $$ Why? $\times$ isn't even defined on scalars. Even if we define it, what reason is there for the above equation to be true? Perhaps it would be clearer for you in index notation: $$ \epsilon_{ijk}\partial_i(fA_j) = \epsilon_{ijk}(\partial_if)A_j + f\epsilon_{ijk}(\partial_iA_j). $$ That's it. We are not "turning $\nabla\times$ into $\nabla$". Compare this with my derivation in (3) above; they are exactly the same thing. – Nicholas Todoroff Jan 01 '23 at 19:06
  • Overdots are not required in index notation because everything is a scalar and can be rearranged however we need it to be so that what is differentiated is directly to the right of $\partial_i$. We can't move things around arbitrarily in vector notation, so overdots (or some similar system) are required. – Nicholas Todoroff Jan 01 '23 at 19:07
  • This is all very confusing because I can't find any reasonable definitions of the algebraic properties of vector operators; indices make it even less clear. – Gauge Jan 02 '23 at 19:43
  • The overdot notation is a step in the direction of some clarity, but I still don't know on what basis you can just assume these operators behave like vectors and simply take the gradient of $f$ and cross it with $A$. It is intuitive enough, so I'll settle for it, though ideally there would be some formal reason; that's why I mentioned GA, as in my opinion vector calculus is an intrinsically incomplete theory. – Gauge Jan 02 '23 at 19:46
  • But now I see what you did in the last example and how you use this notation to get divergences and directional derivatives; it's neat, thanks. – Gauge Jan 02 '23 at 19:53
  • P.S., does this work for second derivatives, for example if instead of $A$ there is another curl? – Gauge Jan 02 '23 at 20:05
  • So in conclusion, the overdot notation plus the double cross product rule is enough to prove any vector calculus identity, and geometric algebra would only be needed to prove the double cross product rule. – Gauge Jan 02 '23 at 20:10
  • @LeoKovacic See my edit. – Nicholas Todoroff Jan 03 '23 at 00:10
  • Nice. Again, I like sound algebra more than indices; to me they are the opposite of well-definedness and clarity. – Gauge Jan 03 '23 at 09:46