
To preface, I am currently following eigenchris' series on tensor calculus in an effort to learn GR. I believe I understand the sense in which partial derivative operators are vectors, but I may be wrong here too.

The way eigenchris explains it, $\frac{\partial \vec R}{\partial x_i}=\vec e_{x_i}$ for some coordinate $x_i$. Since this is true for any position vector $\vec R$, we may as well just use the operator $\frac{\partial}{\partial x_i}$ to represent the basis vector $\vec e_{x_i}$.
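For concreteness, the kind of example I have in mind is polar coordinates in the plane (my own example, not taken from the series):

$$\vec R(r,\theta)=(r\cos\theta,\; r\sin\theta),\qquad \vec e_r=\frac{\partial \vec R}{\partial r}=(\cos\theta,\;\sin\theta),\qquad \vec e_\theta=\frac{\partial \vec R}{\partial \theta}=(-r\sin\theta,\; r\cos\theta),$$

so the operators $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta}$ stand in for the coordinate basis vectors $\vec e_r$ and $\vec e_\theta$.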

My understanding is that this is a useful transition because when working with a general manifold that may be curved, we cannot talk about vectors as arrows sitting in a flat ambient space, but the partial derivative operators can still be used within the manifold to talk about moving in the direction of a coordinate curve.

I think my understanding is starting to break down when he talks about treating differentials as covectors. I understand how they follow the transformation rules for covectors, but when I try to think of them as covectors in the usual calculus settings, I get confused.

For example, normally we would have the differential operator $\frac{\partial}{\partial x}$ (which we think of as a vector) act on a function $f$ (a scalar?) to get the partial derivative $\frac{\partial f}{\partial x}$ (a scalar). I guess this is not normal scalar-vector multiplication since we end up with a scalar.
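To make concrete what I mean (a toy example of my own, so the notation may be off): if $f(x,y)=x^2y$, then acting with the operator produces a new scalar-valued function, which looks nothing like scaling a vector by a coefficient:

$$\frac{\partial}{\partial x}\,f = 2xy, \qquad\text{whereas}\qquad \vec v = v^x\,\frac{\partial}{\partial x} + v^y\,\frac{\partial}{\partial y},$$

where in the second expression the components $v^x, v^y$ really are scalars multiplying the basis vectors.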

However, we also have that the covector $df$ acting on the vector $\frac{\partial}{\partial x}$ gives us the scalar $\frac{\partial f}{\partial x}$. This makes sense from a linear algebra perspective, but we would never write $df\frac{\partial}{\partial x}=\frac{\partial f}{\partial x}$ in calculus, and it seems nonsensical.
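Written out the way I understand it (assuming just two coordinates $x,y$ for concreteness), the covector computation would be

$$df\!\left(\frac{\partial}{\partial x}\right)=\left(\frac{\partial f}{\partial x}\,dx+\frac{\partial f}{\partial y}\,dy\right)\!\left(\frac{\partial}{\partial x}\right)=\frac{\partial f}{\partial x},$$

using $dx\!\left(\frac{\partial}{\partial x}\right)=1$ and $dy\!\left(\frac{\partial}{\partial x}\right)=0$, which is the linear algebra picture I mean.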

Finally, where I feel the most confused is in the most common context for differentials, integration. What is $\int_a^b f(x)dx$ telling us in the context of differential geometry? We are multiplying a scalar function by a covector and then... taking a limit of sums? It feels like the result should be a covector, and the whole thing seems completely disconnected from the idea of a covector; I can't find an explanation anywhere.
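To spell out my naive reading of it (my own rendering of the Riemann sum, which may well be exactly where I'm going wrong):

$$\int_a^b f(x)\,dx \;\overset{?}{=}\; \lim_{n\to\infty}\sum_{i=1}^{n} \underbrace{f(x_i)}_{\text{scalar}}\;\underbrace{dx\big|_{x_i}}_{\text{covector}},$$

and a sum of scalars times covectors ought to be a covector, not a number.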

This is my first time on Stack Exchange, so I'm sorry if I did something wrong, and I'm happy to provide more clarification if my question is unclear or too vague.

  • The idea of $df$ as a covector is (modulo the issue of distinguishing a vector space from its various tangent spaces) exactly the same idea in usual calculus, even if this perspective is (unfortunately) not emphasized as much: see here. Sure, in calculus we don’t think of $\frac{\partial}{\partial x^i}$ as a vector; we call this a differential operator and we would write the vector as $e_i$, but this bit is just a notational hurdle. See also this. – peek-a-boo Mar 20 '24 at 03:14
  • And regarding integrals, no, we’re not adding up many differential forms, so the answer is not going to be a covector. Rather we add up the *values* of the forms once they act on certain vectors. Perhaps this might help? The point is we’re using our already established notion of integration in $\Bbb{R}^n$ (be it Riemann or Lebesgue) and ‘transporting’ all of that to a notion of ‘integral’ at the manifold level. The proper objects to integrate are scalar densities, or on an oriented manifold, equivalently, $n$-forms. – peek-a-boo Mar 20 '24 at 03:21
  • @peek-a-boo thank you very much for your response. After some thought and reading the posts you linked, I believe I understand why differentials need to be covectors and must receive a vector as an input. I have a few additional questions/clarifications. First, when a differential operator (vector) acts on a function (scalar) to give us a partial derivative, this is different from the normal scalar-vector multiplication that happens in the vector space (like when we use partial derivatives to expand one operator in terms of some set of basis operators), is this understanding correct? – Oobleck Mar 23 '24 at 18:54
  • @peek-a-boo, Second, I don't think I'm at the level of understanding yet to fully comprehend the explanation you linked on integration within a manifold, but I wanted to know if the following intuition is generally correct. You said that we are not summing up the differential forms in the integrals, but rather the values they output after acting on certain vectors. Are those "certain vectors" associated with the path of integration? I guess I'm thinking of it as something like a path integral in multivariable calculus, where our differential vector is related to the path as its tangent vector. – Oobleck Mar 23 '24 at 18:59
  • A function itself is not a scalar. It is scalar-*valued*. Don’t omit this key word. So yes, this is why it is completely different from scalar multiplication. For the second question, for line integrals of $1$-forms, yes, you evaluate along the tangent vector to the path. For more general $n$-forms, look at the link (I know you said it’s a little abstract) but focus on the special case where you cover the manifold with one chart. Then, you can write $\omega=f\,dx^1\wedge\cdots\wedge dx^n$, and you essentially integrate the function $f$ as usual. This $f$ is the value of $\omega$ when [cont.] – peek-a-boo Mar 24 '24 at 03:38
  • [cont.] applied to the chart-induced basis vectors, or more specifically, $f=\omega\left(\frac{\partial}{\partial x^1},\dots, \frac{\partial}{\partial x^n}\right)$. Then, you end up integrating $f\circ \alpha^{-1}$, which is the chart-representation of the function. So, really, we’re integrating the values of the form (in the usual Riemann or Lebesgue sense… depending on how much analysis you’ve developed). – peek-a-boo Mar 24 '24 at 03:40
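To collect the picture from these comments in formulas (my own paraphrase, assuming a curve $\gamma:[a,b]\to M$ for the $1$-form case and a single oriented chart $\alpha:U\to\Bbb{R}^n$ for the $n$-form case): for a $1$-form, the integral feeds the form the tangent vector of the path at each point and integrates the resulting numbers,

$$\int_\gamma \omega=\int_a^b \omega_{\gamma(t)}\big(\gamma'(t)\big)\,dt,$$

and for an $n$-form $\omega=f\,dx^1\wedge\cdots\wedge dx^n$ on one chart, with $f=\omega\left(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\right)$,

$$\int_U \omega=\int_{\alpha(U)}\big(f\circ\alpha^{-1}\big)\,dx^1\cdots dx^n,$$

so in both cases what gets summed are the scalar values the form produces, not the forms themselves.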

0 Answers