3

The dot product is defined for any $\mathbf{u,v}\in\mathbb{R}^n$ as,

$$ \mathbf{u} \cdot \mathbf{v} = \mathbf{u}^{\mathsf{T}} \mathbf{v} = \sum_{i=1}^{n} u_{i} v_{i} = u_{1} v_{1} + \cdots + u_{n} v_{n} $$

Recall the geometric definition for $\mathbf{u,v}\in\mathbb{R}^{n}$ when $1\leq n\leq3$:

$$ \mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\|\|\mathbf{v}\|\cos[\measuredangle(\mathbf{u},\mathbf{v})] $$

In 1D, 2D, and 3D, the oriented angle measured between two vectors makes sense.

From this I have two questions:

(1) Does the geometric definition extend to cases where $n\geq4$? I cannot imagine $\measuredangle(\mathbf{u},\mathbf{v})$ making sense in higher dimensions.
(2) Is the dot product always defined with the 2-norm? Would it still make sense to use any other $p$-norm? Or a general norm?

Extra question (if it makes sense): What about infinite-dimensional spaces?

Thank you for the insight.

ex.nihil
  • For (1), note that in higher dimensions, two vectors still lie in a plane. So yes, it makes sense.

    For (2), the polarization identity recovers an inner product as a function of the norm alone, for any norm satisfying the parallelogram law. For infinite-dimensional spaces, see Hilbert spaces.

    – Jean-Claude Arbaut Mar 26 '20 at 07:05
  • (1) and (2) make sense, thank you. My question regarding Hilbert spaces is: would the geometric definition still be applicable? I am simply curious about its generality. It apparently generalizes to any norm and any $n$. What about infinite dimensions? – ex.nihil Mar 26 '20 at 07:10
  • Actually, I would suggest that you adopt a different point of view: a dot product (or inner product) is something which tells you what the geometry of the space is. This is in contrast to the current approach of trying to generalize the dot product using geometry. In this way, the generalization becomes immediate, and I think it provides a clearer understanding. I'll try to elaborate later when I have time. – peek-a-boo Mar 26 '20 at 07:13

2 Answers

7

As mentioned in the comments, I think it is more beneficial to completely reverse the perspective. Your question is basically "what is a geometric definition and/or interpretation of the dot product in higher dimensions?". What I'm suggesting is that you should instead consider the question "what do we mean by a geometry on a vector space?".

The only reason this might sound silly is because most of us learn the same (Euclidean) geometry of triangles, circles, rectangles, trapezoids, etc. from kindergarten onward, and that's what we're most familiar with. However, familiarity $\neq$ clarity/understanding. So, to answer this question, we must be able to describe which features of Euclidean geometry we would like to count as a "geometry".

The relevant notion here is that of an Inner Product Space (or if you're also doing analysis, you'll want a completeness condition, in which case we call the resulting space a Hilbert space).

A real inner product space is a pair $(V, g)$, where $V$ is a real vector space and $g: V \times V \to \Bbb{R}$ is a function which is

  • Bilinear
  • Symmetric
  • Positive definite.

Usually the inner product is written with angle brackets as $\langle \cdot, \cdot \rangle$. So, given two vectors $\xi, \eta \in V$, we might write $\langle \xi, \eta\rangle$ or $\xi \cdot \eta$; however, I shall write $g(\xi, \eta)$ (simply because it's quicker to type).

Now of course, if this definition is to be of any use in generalizing our concept of "geometry", we had better make sure it is able to recover our familiar notions of lengths and angles in $\Bbb{R}^3$. Of course this is possible: we simply define $g_e: \Bbb{R}^3 \times \Bbb{R}^3 \to \Bbb{R}$ (subscript $e$ for Euclidean) by \begin{align} g_e \left((\xi_1, \xi_2, \xi_3), \, (\eta_1, \eta_2, \eta_3) \right) &:= \sum_{i=1}^3 \xi_i \eta_i \end{align} Then, the $g_e$ so defined is easily verified to satisfy the $3$ properties of an inner product I described above.
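(For example, positive-definiteness is immediate from the formula: $g_e(\xi, \xi) = \sum_{i=1}^3 \xi_i^2 \geq 0$, with equality if and only if every $\xi_i = 0$, i.e. $\xi = 0$; symmetry and bilinearity follow just as quickly, since the formula is a sum of products.)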


Now, one of the key ideas in elementary geometry is that of lengths and angles. So, if the notion of "inner product" is supposed to encapsulate all the "geometry" of a vector space, we have to describe how we're measuring lengths and angles. This is simple again: for any $\xi \in V$, define the norm by $\lVert \xi\rVert:= \sqrt{g(\xi, \xi)}$ (the direct generalization of "length of a vector"). Also, given any two non-zero vectors $\xi, \eta \in V$, we define the angle between them to be \begin{align} \angle(\xi, \eta) &:= \arccos \left( \dfrac{g(\xi, \eta)}{\lVert\xi \rVert \lVert \eta \rVert} \right). \end{align} (The Cauchy–Schwarz inequality $|g(\xi, \eta)| \leq \lVert \xi \rVert \lVert \eta \rVert$ holds in any inner product space, so the argument of $\arccos$ lies in $[-1, 1]$ and this angle is always well-defined.)

Of course, this definition is motivated by the familiar formula for the dot product as the product of the norms and the cosine of the angle between the vectors. So, it is no surprise that, with this definition, the angle computed from the Euclidean inner product $g_e$ coincides with the familiar one.
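For concreteness, take $\xi = (1,1,0)$ and $\eta = (0,1,1)$ in $(\Bbb{R}^3, g_e)$. Then $g_e(\xi, \eta) = 1$ and $\lVert \xi \rVert = \lVert \eta \rVert = \sqrt{2}$, so \begin{align} \angle(\xi, \eta) = \arccos\left(\frac{1}{\sqrt{2}\cdot\sqrt{2}}\right) = \arccos\left(\frac{1}{2}\right) = \frac{\pi}{3}, \end{align} exactly the $60^\circ$ you would get from the familiar picture of these two vectors.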


So, the concept "inner product space" at the very least is not a bad concept, because it is atleast able to recover our familiar notions if we consider $(\Bbb{R}^3, g_e)$ as defined above. In fact, it offers us so much more, and the theory of inner product spaces, Hilbert spaces (and later, Riemannian manifolds etc) are all very interesting topics.

Now, this is a good concept, because we can now consider the same thing on "any" vector space, finite- or infinite-dimensional. As a first illustration, consider the even lower-dimensional space $\Bbb{R}^2$, but this time with a different inner product: define $g: \Bbb{R}^2 \times \Bbb{R}^2 \to \Bbb{R}$ by \begin{align} g(x,y) &:= 3x_1y_1 + x_1y_2 + x_2y_1 + 2x_2 y_2 \end{align} This can also be easily verified to be an inner product (I simply took the symmetric positive-definite matrix $ \begin{pmatrix} 3 & 1 \\ 1 & 2\end{pmatrix} $ and constructed an inner product from it). Note that with this inner product, the vectors $(1,0)$ and $(0,1)$ are no longer orthogonal to each other (they don't even have unit norm any more). So, a different choice of inner product on the same space $\Bbb{R}^2$ can be thought of as "using different length and angle measurement devices".
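To see this concretely: with this $g$, \begin{align} g\left((1,0),\,(0,1)\right) = 3(1)(0) + (1)(1) + (0)(0) + 2(0)(1) = 1 \neq 0, \end{align} while $\lVert (1,0) \rVert = \sqrt{g((1,0),(1,0))} = \sqrt{3}$ and $\lVert (0,1) \rVert = \sqrt{2}$, so the angle between the standard basis vectors is $\arccos(1/\sqrt{6}) \approx 66^\circ$ rather than $90^\circ$.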

But the real power of this concept comes in its several generalizations. For example, we can easily go to $\Bbb{R}^n$, and define $g_e(\xi, \eta) := \sum_{i=1}^n \xi_i \eta_i$.
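If it helps to see question (1) numerically, here is a minimal sketch (using NumPy; the particular vectors are just an arbitrary illustration) showing that the same angle formula works verbatim in $\Bbb{R}^4$:

    import numpy as np

    # Two vectors in R^4; the Euclidean inner product g_e and the induced
    # norm are computed exactly as in R^2 or R^3.
    xi = np.array([1.0, 0.0, 1.0, 0.0])
    eta = np.array([0.0, 1.0, 1.0, 0.0])

    cos_angle = np.dot(xi, eta) / (np.linalg.norm(xi) * np.linalg.norm(eta))
    angle = np.arccos(cos_angle)

    print(angle)  # 1.0471... = pi/3, a perfectly sensible 60-degree angle in R^4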

Really, any finite-dimensional real vector space can be given an inner product in such a way that it is isomorphic (as an inner product space) to $(\Bbb{R}^n, g_e)$ for some $n$. So, for example, the vector space of $n \times n$ matrices can be given such an inner product. The space of all polynomials of degree $\leq k$ can also be given the structure of an inner product space.
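For instance (one standard choice, not spelled out above), on the space of $n \times n$ real matrices one can take \begin{align} g(A, B) := \operatorname{tr}(A^{\mathsf{T}} B) = \sum_{i,j} A_{ij} B_{ij}, \end{align} which is just $g_e$ on $\Bbb{R}^{n^2}$ after flattening the matrices into vectors; similarly, identifying a polynomial of degree $\leq k$ with its coefficient vector in $\Bbb{R}^{k+1}$ gives an inner product on that space.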

More interesting is an infinite-dimensional example. Let $V = C^0([0,1], \Bbb{R})$, the set of continuous functions from $[0,1] \to \Bbb{R}$. Here, we can define an inner product as \begin{align} g(\phi, \psi) &:= \int_0^1 \phi(t) \psi(t) \, dt. \end{align}

It is a standard exercise to verify that this satisfies all the properties an inner product is supposed to have. Now, you might be thinking, "what does this mean, how can I visualise taking dot products of functions?" Well, one heuristic explanation is that rather than taking a sum of finitely many numbers, $\sum_{i=1}^n \xi_i \eta_i$, this is the continuous version of that, so we take an integral. Instead, what I'd like to emphasize is that you should think of it as "great, I now have an inner product $g$ defined on the space of continuous functions, so I can start doing some geometry on this new, exciting space".
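For example, taking $\phi(t) = 1$ and $\psi(t) = t$, we get $g(\phi, \psi) = \int_0^1 t\,dt = \tfrac{1}{2}$, $\lVert \phi \rVert = 1$, and $\lVert \psi \rVert = \sqrt{\int_0^1 t^2\,dt} = \tfrac{1}{\sqrt{3}}$, so the "angle" between these two functions is \begin{align} \angle(\phi, \psi) = \arccos\left(\frac{1/2}{1 \cdot 1/\sqrt{3}}\right) = \arccos\left(\frac{\sqrt{3}}{2}\right) = \frac{\pi}{6}. \end{align}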


Extra Remarks

  • Your question (2) asks "does the dot product always use the 2-norm...". Well, as I've alluded to in my answer, this is the "wrong way around": it is more natural to say "here's my inner product, how can I construct a norm?", and as I've shown above, the answer to that question is not particularly hard. Of course, this remark is not meant to discourage you or demean your question: it is actually a very interesting question to ask "given a norm on a vector space, can I find an inner product which gives rise to this norm?". This has been somewhat addressed in the comments by means of the polarization identity and the parallelogram law, both of which are spelled out just after these remarks.

  • Now, if we want to generalize even further, to other types of geometries, all we have to do is modify what conditions we impose on the "geometry dictator" $g$. For example, if we replace the positive-definiteness requirement in the inner-product definition with "non-degeneracy" (i.e. the map $V \to V^*$ defined by $\xi \mapsto g(\xi, \cdot)$ is required to be an isomorphism), then we get to play around with even more spaces. For example, in special relativity, one considers $\Bbb{R}^4$ with the "pseudo-inner product"/"Lorentzian inner product" $g: \Bbb{R}^4 \times \Bbb{R}^4 \to \Bbb{R}$ defined by \begin{align} g(x,y) &:= -x_0y_0 + x_1y_1 + x_2y_2 + x_3 y_3 \end{align} (or some other variation). There are many interesting questions one can ask here, because the geometry is no longer Euclidean (for example, there are non-zero vectors with zero length, as the example after these remarks shows).
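To spell out the identities referenced in the first remark: if a norm $\lVert \cdot \rVert$ does come from an inner product $g$, then $g$ can be recovered via the polarization identity \begin{align} g(\xi, \eta) = \tfrac{1}{4}\left( \lVert \xi + \eta \rVert^2 - \lVert \xi - \eta \rVert^2 \right), \end{align} and a norm arises from some inner product in this way if and only if it satisfies the parallelogram law \begin{align} \lVert \xi + \eta \rVert^2 + \lVert \xi - \eta \rVert^2 = 2\lVert \xi \rVert^2 + 2\lVert \eta \rVert^2. \end{align} And to make the Lorentzian remark concrete: the non-zero vector $x = (1,1,0,0)$ satisfies $g(x,x) = -1 + 1 = 0$, so it has "zero length" even though $x \neq 0$.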

Hopefully this somewhat gives you a general overview of what generalizations might entail, and more importantly, that it is the choice of an inner-product on a vector space (or a pseudo-inner product if you wish to be more general) which allows you to start talking about geometry on a vector space; and reversing the thought process in this manner is something which I found very beneficial.

peek-a-boo
  • This is easily one of the most fascinating and enlightening answers I have read. Thank you for taking the time to "come down" to my perspective and "elevate" the abstraction. I will need to re-read it multiple times to digest it, but I much appreciate the careful writing. – ex.nihil Mar 26 '20 at 17:19
  • You mentioned that Riemannian manifolds are, in a sense, the next step after Hilbert spaces (if I understood correctly). Could you recommend an article/book that distills more of this generalized geometric wisdom you unveiled? I would like to better understand the intricacies that arise when moving from the usual Euclidean space to affine, topological, metric, inner product, ..., spaces. I never had a firm grasp beyond a basic understanding, but I would like to strengthen that. Thank you once again! – ex.nihil Mar 26 '20 at 17:22
  • @ex.nihil I'm glad you found this answer helpful. Actually, I only mentioned Riemannian manifolds sort of as a "buzz-word", something which, if it sparks your interest, you could look up. So, I'm not sure it is strictly necessary to study Hilbert space theory and Riemannian manifolds in any particular order. However, what I found is that neither is easier than the other: for example, Hilbert spaces are vector spaces (i.e. everything is linear by definition) whereas manifolds are not. On the other hand, several notions from Riemannian geometry are familiar, whereas Hilbert space stuff seemed – peek-a-boo Mar 27 '20 at 06:23
  • more abstract. Another thing is that I've only ever seen finite-dimensional manifolds (of course there are interesting infinite-dimensional ones, but I'm no expert), whereas several Hilbert spaces are infinite-dimensional. So, there's no one way to approach math. As for references, the things I've said here aren't particularly from any one place; this answer is pretty much a summary of bits and pieces I learnt from various books, various tidbits of knowledge from profs, etc. However, one book on calculus/analysis/geometry, whatever you want to call it, which I particularly enjoyed is – peek-a-boo Mar 27 '20 at 06:31
  • Loomis and Sternberg's Advanced Calculus. This is where I learnt a lot of things. Second is a series of lectures by Frederic Schuller (search his name and you'll find a number of lectures on YouTube). In particular, I watched several of his general relativity lectures; the precision and clarity are unlike any I've seen before. – peek-a-boo Mar 27 '20 at 06:37
2

Yes, it extends to higher dimensions: any two linearly independent vectors $\mathbf{u}, \mathbf{v}$ in a higher-dimensional Euclidean space still span a unique plane through $\mathbf{0}$, so their angle can be calculated just as you normally would. This geometric definition can also be extended to complex vectors $\mathbf{x}, \mathbf{y}$, where the angle $\theta$ between them is given by:

$\cos{\theta} = \dfrac{\operatorname{Re}\langle \mathbf{x}, \mathbf{y}\rangle}{\|\mathbf{x}\|\,\|\mathbf{y}\|}$

Regarding your second question: since the angle between a vector $\mathbf{u}$ and itself is $0$ (so the cosine is $1$), it follows that $\|\mathbf{u}\|^2 = \langle\mathbf{u}, \mathbf{u}\rangle$, where $\|\cdot\|$ is the Euclidean norm.

However, this can be generalized if you are willing to consider inner product spaces. Any linear space endowed with an inner product $\langle\cdot,\ \cdot\rangle$ has a naturally defined norm given by $\|\mathbf{u}\| = \sqrt{\langle\mathbf{u}, \mathbf{u}\rangle}$.
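As a side note connecting back to your question about $p$-norms: for $p \neq 2$, the $p$-norm cannot arise from any inner product, because every inner-product norm must satisfy the parallelogram law $\|\mathbf{u}+\mathbf{v}\|^2 + \|\mathbf{u}-\mathbf{v}\|^2 = 2\|\mathbf{u}\|^2 + 2\|\mathbf{v}\|^2$. For example, in $\mathbb{R}^2$ with $\mathbf{u} = (1,0)$ and $\mathbf{v} = (0,1)$, the left-hand side is $2 \cdot 2^{2/p}$ while the right-hand side is $4$, and these agree only when $p = 2$.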