
Suppose we have a smooth manifold equipped with a connection $\nabla$. I understand how $\nabla$ acts on vectors ($\nabla_{e_i} e_j$) and on tensors ($\nabla_{e_i} (e_j \otimes e_k \otimes \dots)$); those are the two applications with an obvious geometric interpretation.

Can this notion be extended to covariant differentiation *by* tensors, i.e. $\nabla_{e_i \otimes e_j \otimes \dots} (e_k)$? I'm not sure what the geometric picture would be.

I've been reading through *Clifford Algebra to Geometric Calculus* (Hestenes & Sobczyk), which introduces the idea of a multivector derivative. I've been trying to understand this idea by working in a given basis, but I'm confused about how to handle expressions like the one above. For functions of multivector-valued variables the multivector derivative is easy, but in the more general case I'm not sure how to proceed. Can someone provide some clarification?

(Aside: I know tensors are not in one-to-one correspondence with multivectors, but they represent similar constructions. The geometric algebra is a quotient of the tensor algebra, so the logic should extend.)
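(For concreteness, the "action on tensors" above is the standard one: $\nabla_X$ is extended from vector fields to the whole tensor algebra by requiring the Leibniz rule

$$\nabla_X (S \otimes T) = (\nabla_X S) \otimes T + S \otimes (\nabla_X T),$$

together with $\nabla_X f = Xf$ on functions. The question is about putting a tensor in the *subscript* instead.)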

Arctic Char
  • The multivector derivative is essentially just the gradient/del operator of the full geometric algebra vector space. For a scalar valued function $f$ over any finite-dimensional vector space $V$ equipped with a nondegenerate symmetric bilinear form $\langle\cdot,\cdot\rangle$ the gradient $\nabla f$ is defined in relation to the total differential $\mathrm Df$: $$\langle v, \nabla f(x)\rangle = \mathrm Df_x(v).$$ The multivector derivative is just the case where $V$ is the full geometric algebra (or some subspace) and $\langle X, Y\rangle = \langle\widetilde XY\rangle_0$. – Nicholas Todoroff Dec 28 '23 at 15:56
  • So what you need is to choose a bilinear form on the tensor algebra. One choice would be $$\langle v_1\otimes\dotsb\otimes v_k,\; w_1\otimes\dotsb\otimes w_k\rangle = \prod_i v_i\cdot w_i.$$ For how to formalize treating $\nabla$ "like a multivector" or "like a tensor", see my post here. As for whether any of this extends in a nice way to covariant derivatives and manifolds, I don't know. – Nicholas Todoroff Dec 28 '23 at 16:03
  • @NicholasTodoroff

    Thanks for your comment. Trying to apply your idea:

    Let $g_p : T(T_pM) \times T(T_pM) \to \mathbb{R}$ be an inner product/metric on the tensor algebra.

    Let $\nabla : T(T_pM) \times T(T_pM) \to T(T_pM)$ be the covariant derivative we are looking for on the tensor algebra.

    Let $\tilde{\nabla} : C^\infty(M \to T(TM)) \to C^\infty(M \to T(TM))$ be the gradient. The notation here means it takes smooth maps from $M$ to the tensor bundle to new smooth maps of the same form.

    – iglizworks Dec 28 '23 at 18:34
  • Let $\phi \in C^\infty(M \to T(TM))$ be a smooth map.

    Then:

    $$g_p(\chi,\tilde{\nabla}\phi(p)):=\partial_\chi \phi (p)$$

    implicitly defines the gradient $\tilde{\nabla}\phi(p)$ in terms of the directional derivatives of the field at $p$.

    The covariant derivative is then the projection of the directional derivative onto the tangent tensor algebra. That is:

    $$\nabla_\chi \phi(p)=\sum_{e\in B} g_p(e,\partial_\chi \phi(p))\,e$$

    where $B$ is a $g_p$-orthonormal basis for the tangent algebra. The task is then to solve for the gradient in terms of a chosen metric.

    – iglizworks Dec 28 '23 at 18:34
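The gradient definition from the first comment, $\langle v, \nabla f(x)\rangle = \mathrm Df_x(v)$, can be checked numerically in the finite-dimensional case. A minimal sketch (the function names and the finite-difference scheme are my own illustration, not from the thread), assuming $V = \mathbb{R}^n$ with a nondegenerate symmetric bilinear form given by a matrix $G$:

```python
import numpy as np

def directional_derivative(f, x, v, h=1e-6):
    """Central-difference approximation of the total differential Df_x(v)."""
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

def metric_gradient(f, x, G, h=1e-6):
    """Solve <v, grad f(x)>_G = Df_x(v) for grad f(x).

    Testing the defining relation against the standard basis vectors gives
    (G @ grad)_i = Df_x(e_i), i.e. grad = G^{-1} @ df.
    """
    n = len(x)
    df = np.array([directional_derivative(f, x, e, h) for e in np.eye(n)])
    return np.linalg.solve(G, df)

# Toy check: f(x) = x0^2 + 3*x1 at x = (1, 0), with non-Euclidean G = diag(2, 1).
# The Euclidean differential is (2, 3), so the G-gradient is G^{-1}(2, 3) = (1, 3).
f = lambda x: x[0] ** 2 + 3 * x[1]
G = np.array([[2.0, 0.0], [0.0, 1.0]])
x = np.array([1.0, 0.0])
grad = metric_gradient(f, x, G)  # approximately [1.0, 3.0]
```

Note that the gradient depends on the chosen bilinear form: the same differential pairs with different vectors under different metrics, which is exactly why a metric on the tensor algebra must be chosen before a "multivector gradient" is defined.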
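The bilinear form on simple tensors suggested in the second comment, $\langle v_1\otimes\dotsb\otimes v_k,\; w_1\otimes\dotsb\otimes w_k\rangle = \prod_i v_i\cdot w_i$, can likewise be sketched on decomposable tensors (the function name and the convention that different grades are orthogonal are my own assumptions; the form extends to general tensors by bilinearity):

```python
import numpy as np

def simple_tensor_inner(vs, ws):
    """<v1 x ... x vk, w1 x ... x wl> = prod_i (vi . wi) when k == l.

    vs, ws are lists of vectors representing simple (decomposable) tensors.
    """
    if len(vs) != len(ws):
        return 0.0  # tensors of different grade taken to be orthogonal
    return float(np.prod([np.dot(v, w) for v, w in zip(vs, ws)]))

# Example: <(1,0) x (0,2), (1,1) x (3,1)> = (1*1 + 0*1) * (0*3 + 2*1) = 1 * 2
v = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
w = [np.array([1.0, 1.0]), np.array([3.0, 1.0])]
val = simple_tensor_inner(v, w)  # 2.0
```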

0 Answers