
Hi, I'm new to Einstein notation when describing position in 4 dimensions. I understand that $x^\mu=(x^0,x^1,x^2,x^3)$ represents $t, x, y$ and $z$, but I'm having a hard time understanding what $x_\mu$ represents. Is it just the transpose, so that the two vectors can have a dot product, or is it equal to something else? Thank you :)

K.defaoite
Joel100
  • Depending on the context you may say that the $x_\mu$ are the "dual" representation of $x^\mu$, or that these are the covariant and contravariant components of the 4-vector. In Cartesian coordinates it is just the transpose, as you said. However, in curved coordinates it amounts to decomposing the 4-vector in one basis or the other (covariant or contravariant basis). – FeedbackLooper Feb 09 '22 at 11:09
  • @Joel100 Have you studied any differential geometry? Have a look at the musical isomorphisms and index gymnastics. – K.defaoite Feb 09 '22 at 11:27
  • Maybe not a duplicate, but possibly of interest: https://math.stackexchange.com/questions/1501802/dual-space-and-covectors-force-work-and-energy. The viewpoint there is, lower and upper indices signify how a tensor transforms under change of coordinates. – Andrew D. Hwang Feb 09 '22 at 12:34
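
To make the comments above concrete, here is a minimal numerical sketch (assuming the flat Minkowski metric with signature $(+,-,-,-)$, a common but not universal convention, and arbitrary values for $x^\mu$; the general rule $x_\mu = g_{\mu\nu}x^\nu$ is spelled out in the second answer below): lowering the index flips the sign of the spatial components.

```python
import numpy as np

# Minkowski metric eta_{mu nu}, signature (+, -, -, -) (convention-dependent).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Contravariant components x^mu = (t, x, y, z), arbitrary illustrative values.
x_up = np.array([2.0, 1.0, 3.0, 5.0])

# Covariant components x_mu = eta_{mu nu} x^nu.
x_down = eta @ x_up
print(x_down)          # [ 2. -1. -3. -5.]

# The invariant x_mu x^mu = t^2 - x^2 - y^2 - z^2.
print(x_down @ x_up)   # 4 - 1 - 9 - 25 = -31.0
```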

2 Answers


A simple case explaining the notation.

Consider the equation of a plane through the origin in $\mathbb{R}^3$:

$$ax+by+cz=0$$

With indexed notation, it would be written

$$a_1x^1+a_2x^2+a_3x^3=0 \ \iff \ \sum_i a_ix^{i}=0$$

With Einstein conventions (suppression of the $\sum$ sign) we can write it: $a_ix^{i}=0.$

The rule is that when the same letter appears twice as an index, once as a superscript and once as a subscript, that index is summed over and the pair cancels (the technical word for it is "contraction").
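
A quick sanity check of the convention, as a sketch using NumPy's `einsum` (the values of the $a_i$ and $x^i$ below are arbitrary): the repeated index $i$ is summed over automatically.

```python
import numpy as np

# Plane equation a_i x^i = 0: the repeated index i is summed over.
a = np.array([1.0, 2.0, -1.0])   # a_1, a_2, a_3
x = np.array([3.0, 0.0, 3.0])    # x^1, x^2, x^3

explicit = sum(a[i] * x[i] for i in range(3))   # sum_i a_i x^i, written out
einstein = np.einsum('i,i->', a, x)             # contraction over i

print(explicit, einstein)   # 0.0 0.0 -> this point lies on the plane
```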

Another familiar case where the handiness of this notation can be appreciated is in writing out the product of a matrix with a vector

$$Y=AX$$

that you write usually in the following way:

$$\forall i, \ y_i=\sum_{j=1}^{j=n} a_{ij}x_j$$

which, with lower and upper indexing for the matrix entries, becomes:

$$y_i=\underbrace{\left(\sum_{j=1}^{j=n} \right)}_{\text{to be omitted}} a^{j}_{i}x_j$$

On the RHS, the upper index $j$ and the lower index $j$ cancel (they are contracted), and there remains a free lower index $i$, "homogeneous" (even if the word is not mathematically correct) with the lower index $i$ that can be found on the LHS.
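
For instance, here is a small sketch (arbitrary matrix and vector, with NumPy's `einsum` spelling out the implicit sum over $j$):

```python
import numpy as np

# Y = A X with the sum over j left implicit: y_i = a_i^j x_j.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # entries a_i^j: row index i, column index j
x = np.array([5.0, 6.0])              # components x_j

y_matmul = A @ x                       # usual matrix-vector product
y_einsum = np.einsum('ij,j->i', A, x)  # repeated index j is contracted

print(y_matmul, y_einsum)              # [17. 39.] [17. 39.]
```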

Jean Marie
  • This does not answer the question. The asker wants to know the difference between the contravariant components $x^\mu$ and the covariant components $x_\mu$. – K.defaoite Feb 09 '22 at 11:25
  • I don't agree. I just wanted to give a "very low level" explanation (for somebody who hopefully hasn't learnt differential geometry) of what the $x_{\mu}$ (here my $a_k$) could represent. – Jean Marie Feb 09 '22 at 13:23

I don't know how much differential geometry you know. Without getting into too many details, if $u$ is a contravariant (column) vector, with components $u^i$, then its lower (covariant) components are obtained via contraction with the metric: $$u_i=g_{ij}u^j$$ This notation allows one to write, e.g., inner products in a very succinct way: $$\langle u,v \rangle =g_{ij}u^iv^j \\ =u_jv^j=u^jv_j$$
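
A small numerical sketch of these two formulas (the metric $g_{ij}$ and the components below are arbitrary illustrative values; the Euclidean case $g_{ij}=\delta_{ij}$ recovers the ordinary dot product):

```python
import numpy as np

# An arbitrary (symmetric, nondegenerate) metric g_{ij}.
g = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

u_up = np.array([1.0, 2.0, 3.0])   # contravariant components u^i
v_up = np.array([4.0, 5.0, 6.0])   # contravariant components v^j

u_down = g @ u_up                  # u_i = g_{ij} u^j

inner1 = np.einsum('ij,i,j->', g, u_up, v_up)  # <u, v> = g_{ij} u^i v^j
inner2 = u_down @ v_up                         # u_j v^j

print(inner1, inner2)   # both 72.0
```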


Some basic DG:

In a coordinate free definition, the tensor with components $u_i$ is called the "flat" of $u$, denoted $u_\flat$. When we write $u_i$ what we really mean is $(u_\flat)_i$, but for brevity and readability, the flats are almost always dropped. Similarly if we have a covector (row vector) with components $\omega_i$ we can talk about its "sharp", $\omega^\sharp$.

What we are doing here is taking advantage of the (metric-induced) isomorphism between the tangent space $\mathrm T\mathcal M$ and its dual space $(\mathrm T\mathcal M)^*$, which for brevity is often denoted $\mathrm T^*\mathcal M$.
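
In coordinates, a sketch of the flat and sharp maps (same illustrative metric as in the previous snippet; the inverse metric $g^{ij}$ is what raises indices):

```python
import numpy as np

# Same illustrative metric as above; its inverse has components g^{ij}.
g = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
g_inv = np.linalg.inv(g)

u_up = np.array([1.0, 2.0, 3.0])    # vector components u^i
u_flat = g @ u_up                   # (u_flat)_i = g_{ij} u^j

omega = np.array([4.0, 0.0, 9.0])   # covector components omega_i
omega_sharp = g_inv @ omega         # (omega^sharp)^i = g^{ij} omega_j

# Flat and sharp are mutually inverse.
print(np.allclose(g_inv @ u_flat, u_up), np.allclose(g @ omega_sharp, omega))  # True True
```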

K.defaoite