
I'm reading up on tensors. Page 18 of a NASA document has a line stating $$\mathbf{e^{(1)}}\cdot \mathbf{e_{(1)}}=\left(\frac{\partial \mathbf{r}}{\partial u} \right) \cdot \left( \nabla u \right) \tag{1}$$

$$= \left(\frac{\partial x}{\partial u} \right) \left( \frac{\partial u}{\partial x}\right)+ \left(\frac{\partial y}{\partial u} \right) \left( \frac{\partial u}{\partial y}\right) + \left(\frac{\partial z}{\partial u} \right) \left( \frac{\partial u}{\partial z}\right) \tag{2}$$

$$=\frac{\partial u}{\partial u}=1 \tag{3}$$

The LHS of (1) is the dot product of the contravariant and covariant basis vectors.

I've seen (2)-(3) in Griffiths' Introduction to Electrodynamics as well.

Question:

I understand:

  • the reasoning behind the LHS and RHS of (1),
  • how (2) is formed by taking the dot product of the RHS of (1),
  • $\frac{\partial u}{\partial u}=1$

I do not understand why (2) leads to (3).

To me, refactoring (2) looks like $$\frac{\partial u}{\partial u}\left(\frac{\partial x}{\partial x}+ \frac{\partial y}{\partial y} + \frac{\partial z}{\partial z} \right) =\frac{\partial u}{\partial u}\left( 1+1+1\right)=3\frac{\partial u}{\partial u}=3 \neq \frac{\partial u}{\partial u}=1.$$

Please help a confused physicist.

DWD
    I also think it should be $3$... – Arthur Aug 10 '17 at 22:42
  • I believe it has to do with the orthogonality of each component with respect to one another – JohnColtraneisJC Aug 10 '17 at 23:17
  • @benjamin moss The orthogonality does play a part in cases where the contravariant and covariant bases are different (i,j : j,k : i,k) which is also on the page, but it's not apparent to me how orthogonality affects this case – DWD Aug 10 '17 at 23:27

1 Answer


Firstly, you can't just push the parts of a partial derivative $\partial u/\partial x$ around as you like: $\partial u/\partial x$ is a single quantity that happens to be written with several symbols. This is clearer if you write it as $u_x$ or $u_{,x}$.

The next problem is that while for total derivatives or functions of one variable $$ \frac{dy}{dx} \frac{dx}{dy} = 1, $$ the same is not the case for partial derivatives, as the following example will illustrate: let $$ u = x+y, \qquad v=x-y. $$ Then inverting gives $$ x = \frac{u+v}{2} \qquad y = \frac{u-v}{2}. $$ So, $$ \frac{\partial u}{\partial x} = 1 \qquad \frac{\partial u}{\partial y} = 1 \\ \frac{\partial x}{\partial u} = \frac{1}{2} \qquad \frac{\partial y}{\partial u} = \frac{1}{2}, $$ and therefore $$ \frac{\partial u}{\partial x} \frac{\partial x}{\partial u} = \frac{1}{2} \qquad \frac{\partial u}{\partial y} \frac{\partial y}{\partial u} = \frac{1}{2}, $$ so the sum is $1$, but the products are not $1$ individually.
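We can check this example symbolically (using Python's sympy, which is my own sketch, not part of the original question):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Forward map: u = x + y, v = x - y
U = x + y
# Inverse map: x = (u + v)/2, y = (u - v)/2
Xinv = (u + v) / 2
Yinv = (u - v) / 2

du_dx = sp.diff(U, x)       # du/dx = 1
du_dy = sp.diff(U, y)       # du/dy = 1
dx_du = sp.diff(Xinv, u)    # dx/du = 1/2
dy_du = sp.diff(Yinv, u)    # dy/du = 1/2

# Each product is 1/2, not 1 -- only the chain-rule sum equals 1
print(du_dx * dx_du)                   # 1/2
print(du_dy * dy_du)                   # 1/2
print(du_dx * dx_du + du_dy * dy_du)   # 1
```

So each individual product $\frac{\partial u}{\partial x_i}\frac{\partial x_i}{\partial u}$ falls short of $1$; it is only the sum over the coordinates that gives $1$.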

A further instructive case is the even simpler-looking map $$ u = x + ay, \qquad v = y. $$ Then $$ x = u-av \qquad y = v. $$ So $ \frac{\partial v}{\partial x} = 0 $, but $$ \frac{\partial x}{\partial v} = -a. $$ This tells us two interesting things: firstly, what we already know, that the product need not be $1$; and secondly, that although $y=v$, we have $\partial x/\partial y = 0 $ ($x$ and $y$ are meant to be independent to start with, after all!), so $ \frac{\partial x}{\partial v} \neq \frac{\partial x}{\partial y} $. What does this mean? It means that the partial derivative very much depends on what is being held constant, in addition to what is allowed to vary.

In the case of $\frac{\partial x}{\partial y}$, the coordinate other than $y$ (namely $x$) is being held constant, so of course this gives zero. But in $\frac{\partial x}{\partial v}$, it is $u$ that is being held constant, and since this is not the same as $x$, the answer is different. This is one of the most difficult, and most important, things to understand about partial derivatives. There's a nice illustration of this on p. 190 of Penrose's Road to Reality.
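The same symbolic check works here (again a sympy sketch of my own):

```python
import sympy as sp

x, y, u, v, a = sp.symbols('x y u v a')

V = y            # v = y, as a function of (x, y)
Xinv = u - a*v   # x = u - a v, as a function of (u, v)

print(sp.diff(V, x))     # 0  : dv/dx with y held constant
print(sp.diff(Xinv, v))  # -a : dx/dv with u held constant
# dx/dy is 0 (x and y are independent), so dx/dv != dx/dy even though y = v
```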

The correct approach is to apply the chain rule for partial derivatives, namely that if $f$ is a function of $x,y,z$, which in turn are functions of $u$ and other variables, then $$ \frac{\partial f}{\partial u} = \frac{\partial x}{\partial u}\frac{\partial f}{\partial x} + \frac{\partial y}{\partial u}\frac{\partial f}{\partial y} + \frac{\partial z}{\partial u}\frac{\partial f}{\partial z}. $$ Taking $f=u$ then gives $\frac{\partial u}{\partial u} = 1$ on the left, which is exactly (1)–(3).
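To see the chain rule deliver the identity (1)–(3) in a concrete curvilinear system, here is a sketch in sympy using spherical coordinates with $u = r$ (my own choice of example; the NASA document works with a general $u$):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
X, Y, Z = sp.symbols('X Y Z')

# Parametrisation: Cartesian coordinates as functions of (r, theta, phi)
cart = {X: r*sp.sin(th)*sp.cos(ph),
        Y: r*sp.sin(th)*sp.sin(ph),
        Z: r*sp.cos(th)}

# r as a function of the Cartesian coordinates: r = sqrt(x^2 + y^2 + z^2)
R = sp.sqrt(X**2 + Y**2 + Z**2)

# e^(1) . e_(1) = sum over x,y,z of (d x_i / d r)(d r / d x_i)
total = sum(sp.diff(cart[c], r) * sp.diff(R, c).subs(cart) for c in (X, Y, Z))
print(sp.simplify(total))   # 1
```

Each individual term here is something like $\sin^2\theta\cos^2\phi$, not $1$; only the sum collapses to $1$, which is the point the question was missing.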

Chappers
    Further remark: I looked at that NASA document before, long ago, but returning to it with some experience of working with tensors, I notice that it's actually not a very good introduction: the notation is nonstandard (contravariant basis vectors should have indices downstairs so summation convention is consistent: it is their components that have upstairs indices), and the approach is quite old-fashioned: one normally now talks of vectors and one-forms, tensors are linear maps taking a certain number of these as arguments, and the transformation formulae are derived, not used as a definition. – Chappers Aug 11 '17 at 00:38
  • Thank you for your thoughtful answer and comment. I am reading the NASA paper in conjunction with another: https://previa.uclm.es/profesorado/evieira/ftp/apuntes/tensors.pdf by Eduardo W.V. Chaves. Would you recommend any sources for learning about tensors in particular? The less costly the better (unfortunately) – DWD Aug 11 '17 at 08:09
  • It very much depends on what you want tensors for: do you only care about Cartesian tensors? Do you want to work in curvilinear coordinates? What about curved spaces? Materials or general relativity (or more general manifolds)? These all affect what you need to know about tensors, and the best approach for your purposes. – Chappers Aug 11 '17 at 10:02
  • I would like to work in both cartesian and curvilinear co-ordinates. The aim is to apply tensors to general relativity and some other topics. I also understand tensors are applied to machine learning. I want as broad a knowledge on the subject as possible. Application to electrodynamics and quantum mechanics would interest me too. – DWD Aug 11 '17 at 10:14
  • The machine learning version of "tensors" is not really the same thing: they're just multidimensional arrays: see https://stats.stackexchange.com/a/198395. It looks like there is some substantial theory there, at least of decomposition, but I don't know anything useful about resources. No doubt the machine learning community have their own. – Chappers Aug 11 '17 at 11:18
  • As far as GR goes, you need the full machinery, with vectors and one-forms, and multilinear maps of them. http://www.damtp.cam.ac.uk/user/hsr1000/part3_gr_lectures_2016.pdf are lecture notes for a GR course that do them the GR/DG way. A nice book someone recommended to me recently is Geometrical Methods in Mathematical Physics by Schutz, which goes about it the modern way (and goes through tensors before vector and tensor fields, which is a much better idea IMO. Q.v. https://math.stackexchange.com/a/10390/221811 for a summary of the "transformations" approach vs the "multilinear" approach. – Chappers Aug 11 '17 at 11:34