
Given $p_1,p_2,p_3 \in S^2 = \{x \in \mathbb{R}^3 \mid \lVert x \rVert = 1\}$ all distinct, I wish to prove that \begin{align*} J &:=(p_1p_2 - 1)^2 + (p_1p_3 - 1)^2 + (p_2p_3 - 1)^2\\ &\qquad - (p_1p_2 - p_1p_3)^2 - (p_1p_3 - p_2p_3)^2 - (p_2p_3 - p_1p_2)^2\\ &> 0 \end{align*} where $p_ip_j := p_i\cdot p_j$ denotes (for better readability and to save space) the regular dot product in $\mathbb{R}^3$.

$p_i \in S^2$ with the $p_i$ distinct implies $p_ip_j \in [-1, 1)$. I can make $J \to 0^+$ by choosing $p_1 = -p_2$ and letting $p_3 \to p_1$, so that $p_1p_2 = -1$, $p_1p_3 \to 1$, $p_2p_3 \to -1$; $J$ then approaches $(-2)^2+0^2+(-2)^2-(-2)^2-2^2-0^2=0$. If each $p_ip_j$ could be chosen freely in that range, $J$ could become negative, e.g. with $p_1p_2=-1$, $p_1p_3=0$, $p_2p_3=0.9$, which gives $J=-0.41<0$.
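These spot values are easy to check by treating the three dot products as free variables (a quick sketch in Python; the function name `J_free` is mine):

```python
def J_free(p12, p13, p23):
    # J written purely in terms of the three dot products,
    # ignoring the constraint that they come from points on S^2.
    return ((p12 - 1)**2 + (p13 - 1)**2 + (p23 - 1)**2
            - (p12 - p13)**2 - (p13 - p23)**2 - (p23 - p12)**2)

# Limiting case p1 = -p2, p3 -> p1: J -> 0
print(J_free(-1.0, 1.0, -1.0))   # 0.0

# Unconstrained dot products can make J negative:
print(J_free(-1.0, 0.0, 0.9))    # -0.41 (up to floating-point rounding)
```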

I believe the interrelatedness of the $p_i$'s makes the inequality $J>0$ hold, but I haven't been able to show that no combination of $p_i$'s can produce, e.g., those failing values of $p_ip_j$. (I believe $J=0$ is only possible if some $p_i = p_j$, but I haven't checked this.)

I tried bounding $p_1p_3$ after fixing the other two (e.g. with all three points in the $xy$-plane, the angles $\theta_{ij}$ are related and $p_ip_j=\cos\theta_{ij}$), but I didn't get very far even with that simplification.

For some context in case it turns out to be useful, or just as an aside: $J$ is one of several discriminants used to classify quadratic curves. Let the plane passing through $p_1, p_2, p_3$ be $H = \{p_1 + s(p_2-p_1) + t(p_3-p_1) \mid s,t \in \mathbb{R}\}$. My $J$ arises from the intersection of $S^2$ with $H$, which is well known to be a circle in real space $\mathbb{R}^3$. Proving $J > 0$ shows that the corresponding solution curve for $(s,t)$ in the $s-t$ plane "configuration space" is an ellipse, which can also be easily seen to pass through $(s,t) = (0,0), (1,0), (0,1)$.
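For what it's worth, a Monte Carlo experiment over random points on $S^2$ is consistent with $J>0$ (a sketch in plain Python; the helper names are mine — random triples are distinct with probability 1):

```python
import random

def J(p1, p2, p3):
    # J evaluated from the pairwise dot products of three points in R^3
    dot = lambda u, v: sum(a*b for a, b in zip(u, v))
    p12, p13, p23 = dot(p1, p2), dot(p1, p3), dot(p2, p3)
    return ((p12 - 1)**2 + (p13 - 1)**2 + (p23 - 1)**2
            - (p12 - p13)**2 - (p13 - p23)**2 - (p23 - p12)**2)

def random_unit():
    # Uniform random direction on S^2 via normalized Gaussian samples
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = sum(c*c for c in v) ** 0.5
        if n > 1e-9:
            return [c / n for c in v]

random.seed(0)
# No counterexample to J > 0 among many random triples
assert all(J(random_unit(), random_unit(), random_unit()) > 0
           for _ in range(10000))
```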

hhliu
  • Welcome to MSE. What is $S^2$? Is it the unit sphere? – José Carlos Santos Aug 20 '23 at 09:36
  • Yes, $S^2 = \{x \in \mathbb{R}^3 \mid \lVert x \rVert = 1\}$ is the unit (2-)sphere in $\mathbb{R}^3$. In particular we know $\lVert p_i \rVert = 1$. Edited question to clarify, thanks! – hhliu Aug 20 '23 at 09:48
  • Is $p_ip_j=\langle p_i,p_j\rangle$? – José Carlos Santos Aug 20 '23 at 09:49
  • Yes, I specified that under the main inequality statement, albeit using the other common notation $p_ip_j = p_i\cdot p_j$. – hhliu Aug 20 '23 at 09:53
  • Indeed. I missed that. – José Carlos Santos Aug 20 '23 at 09:54
  • It seems to come from the curvature of the sphere (but I may be mistaken) – julio_es_sui_glace Aug 20 '23 at 12:33
  • @julio_es_sui_glace That’s my intuition for why three points on $S^2$ can’t be collinear, which I was able to prove by showing that the intersection of $S^2$ with a line passing through any two such points consists of only those two points (so the third can’t be on that line if it’s distinct from both). I’ll have to think more about whether/how that intuition carries over to this particular relationship between the points. – hhliu Aug 21 '23 at 01:02
  • @t0mdab0mb Anyway, we can prove $J\ge 0$ by parameterizing the unit sphere and expressing $J$ as Sum of Squares (SOS). But it is not nice. – River Li Aug 21 '23 at 02:45
  • @RiverLi Would you mind elaborating please? Do you mean write each $p_i$ as $(\sin\theta_i\cos\varphi_i, \sin\theta_i\sin\varphi_i, \cos\theta_i)$? That does seem messy but I can try later in Mathematica. Also, did you specifically mean to say $J\geq0$ rather than the strict inequality? For my purposes I believe $J=0$ would be degenerate and can only happen with a $p_i=p_j$. – hhliu Aug 21 '23 at 16:18
  • @t0mdab0mb I used stereographic projection. Yes, it is a computer solution. You can analyze the condition of $J = 0$ to ensure $J > 0$ under your condition. Anyway, it is a complicated solution. – River Li Aug 21 '23 at 22:18

2 Answers


Consider the bivector

$$B=p_1\wedge p_2+p_2\wedge p_3+p_3\wedge p_1.$$

We'll use the standard inner product on the space of bivectors, defined by $\langle a\wedge b,\;c\wedge d\rangle=\langle a,c\rangle\langle b,d\rangle-\langle a,d\rangle\langle b,c\rangle$ (for arbitrary vectors $a,b,c,d$). As with any inner product, we have $\langle B,B\rangle=\lVert B\rVert^2>0$ unless $B=0$.

$$\lVert B\rVert^2=\lVert p_1\wedge p_2\rVert^2+\lVert p_2\wedge p_3\rVert^2+\lVert p_3\wedge p_1\rVert^2 \\ +2\langle p_1\wedge p_2,p_2\wedge p_3\rangle+2\langle p_2\wedge p_3,p_3\wedge p_1\rangle+2\langle p_3\wedge p_1,p_1\wedge p_2\rangle$$ $$=\Big(\lVert p_1\rVert^2\lVert p_2\rVert^2-\langle p_1,p_2\rangle^2\Big) \\ +\Big(\lVert p_2\rVert^2\lVert p_3\rVert^2-\langle p_2,p_3\rangle^2\Big) \\ +\Big(\lVert p_3\rVert^2\lVert p_1\rVert^2-\langle p_3,p_1\rangle^2\Big) \\ +2\Big(\langle p_1,p_2\rangle\langle p_2,p_3\rangle-\langle p_1,p_3\rangle\lVert p_2\rVert^2\Big) \\ +2\Big(\langle p_2,p_3\rangle\langle p_3,p_1\rangle-\langle p_2,p_1\rangle\lVert p_3\rVert^2\Big) \\ +2\Big(\langle p_3,p_1\rangle\langle p_1,p_2\rangle-\langle p_3,p_2\rangle\lVert p_1\rVert^2\Big)$$ $$=J.$$

This shows that $J\geq0$.
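In $\mathbb{R}^3$ the components of $B$ in the basis $e_1\wedge e_2,\,e_2\wedge e_3,\,e_3\wedge e_1$ are exactly the components of the vector $p_1\times p_2+p_2\times p_3+p_3\times p_1$ (the Hodge dual of $B$), and the bivector inner product above corresponds to the ordinary dot product of the dual vectors (Binet–Cauchy identity). So $\lVert B\rVert^2=J$ can be spot-checked numerically; a sketch in Python (helper names are mine):

```python
import random

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def random_unit():
    # Random point on S^2 via a normalized Gaussian sample
    v = [random.gauss(0, 1) for _ in range(3)]
    n = dot(v, v) ** 0.5
    return [c / n for c in v]

random.seed(0)
for _ in range(1000):
    p1, p2, p3 = random_unit(), random_unit(), random_unit()
    # Hodge dual of B = p1^p2 + p2^p3 + p3^p1
    b = [s + t + u for s, t, u in
         zip(cross(p1, p2), cross(p2, p3), cross(p3, p1))]
    p12, p13, p23 = dot(p1, p2), dot(p1, p3), dot(p2, p3)
    J = ((p12 - 1)**2 + (p13 - 1)**2 + (p23 - 1)**2
         - (p12 - p13)**2 - (p13 - p23)**2 - (p23 - p12)**2)
    assert abs(dot(b, b) - J) < 1e-9   # ||B||^2 == J
```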

Now suppose $J=0$; this implies $B=0$. By the properties of the wedge product, we have

$$B=p_1\wedge p_2+p_2\wedge p_3+p_3\wedge p_1$$ $$=(p_1-p_2)\wedge(p_2-p_3),$$ since expanding the right-hand side gives $p_1\wedge p_2-p_1\wedge p_3-p_2\wedge p_2+p_2\wedge p_3$, where $p_2\wedge p_2=0$ and $-p_1\wedge p_3=p_3\wedge p_1$.

Thus $B=0$ if and only if either $(p_1-p_2)=0$, or $(p_2-p_3)=0$, or $(p_1-p_2)$ and $(p_2-p_3)$ are collinear. The first two cases are ruled out since you said $p_1,p_2,p_3$ are distinct. Thus $(p_1-p_2)=c(p_2-p_3)$ for some scalar $c$, and so

$$p_1=p_2+(-c)(p_3-p_2)$$

which shows that $p_1$ is on the line connecting $p_2$ and $p_3$. This is impossible since a line intersects a sphere in at most two points (which follows easily from the quadratic formula / fundamental theorem of algebra).

mr_e_man
  • It is very nice! (+1) – River Li Aug 22 '23 at 02:12
  • Looks very nice indeed, I just need some time to digest this before picking an answer to accept. Could you please clarify what you mean by and/or provide a reference to the “standard” inner product on bivectors? The wiki article you linked gives the inner product in terms of components of the basis bivectors, not the original vectors, and this post claims “there is more than one common inner product in use in GA”. I think I follow the rest of your algebra, I’ll do it out longhand later since I’ve never worked with bivectors. – hhliu Aug 22 '23 at 10:59
  • I actually just started learning about geometric (Clifford) algebras, through a YouTube channel that was randomly(?) recommended to me. Coincidentally (or maybe The Algorithm knows…), it seems potentially very useful to the broader problem I’m currently working on involving (many) planes intersecting the unit sphere. Through that channel, I found out about this geometric algebra Python package that was just released on Aug 12. I plan to install and have a play around, have you come across it? – hhliu Aug 22 '23 at 10:59
  • @t0mdab0mb - Well, some GA'ers use the term "inner product" loosely: https://en.wikipedia.org/wiki/Geometric_algebra#Extensions_of_the_inner_and_exterior_products . I'm using it in the strict sense of a scalar-valued positive-definite symmetric bilinear form. To make it positive-definite, one of the inputs must be reversed. See Q1, Q2, Q3. – mr_e_man Aug 22 '23 at 14:19
  • @mr_e_man I've accepted your answer, thanks again for giving me the push to finally dive into GA. I've been consuming all the GA texts I can get my hands on—Dorst, Hestenes, Perwass, Hildenbrand... I now understand that the inner product, as defined here and in your answer to Q1, is a special case of what is unambiguously(?) referred to as the scalar product. Rather annoyingly, some authors include reversal as part of the definition, as you do, while others (Dorst) don't, defining $A\ast B$ ($=\langle A,B\rangle$) as $\langle AB\rangle_0$. – hhliu Sep 05 '23 at 21:11
  • Am I correct that your personal convention is to reserve the term "inner product" (denoted $\langle \cdot,\cdot\rangle$) only for this scalar product $\ast$, and then only when it's positive-definite? Under this convention, would reversal have to be part of the definition of $\ast$? Or would you just define $\langle A,B\rangle := A \ast\tilde{B}$, analogous to defining $\langle x,y\rangle := x\cdot\bar{y}$ for $x,y\in\mathbb{C}^n$? Dorst defines the squared norm $\lVert A \rVert^2 := A \ast\tilde{A}$, without explicitly defining an "inner product" separate from the scalar product. – hhliu Sep 05 '23 at 21:52
  • To be clear, the positive-definiteness of the inner product in this case is inherited from the Euclidean inner/dot product on vectors in $\mathbb{R}^n$, yes? You said "this $\ast$ extends any symmetric bilinear form on the vector space to the whole algebra, regardless of positive-definiteness or non-degeneracy". Does that mean $\ast$ need not be positive-definite, even if we include the reversal in its definition? For instance, using the Minkowski metric in STA, would you simply not be able to define an inner product (under your strict definition)? – hhliu Sep 05 '23 at 22:01
  • @t0mdab0mb - Yes, I try to use "inner product" and $\langle\cdot,\cdot\rangle$ in the strict sense. (But occasionally I use it for the Minkowski metric, e.g. when doing hyperbolic geometry.) And the fat dot always has the meaning $A\bullet B=\langle AB\rangle_{|k-j|}$ where $A=\langle A\rangle_k$ and $B=\langle B\rangle_j$. I don't care about the symbols $\cdot$, $\ast$, $\star$, $\circ$; they could mean $\langle A^\sim B\rangle_0$ or $\langle AB\rangle_0$ or even $AB$ (the full geometric product), depending on context. – mr_e_man Sep 06 '23 at 14:08
  • Without reversion, $\langle AB\rangle_0$ cannot be an inner product; if it's positive on vectors, then it's negative on bivectors; and if it's negative on vectors, then it's positive on trivectors; so in any case, it's indefinite. With or without reversion, when applied to vectors, it agrees with the given metric/bilinear form/dot product. So it can't be positive-definite or non-degenerate unless the metric is. (In fact, with or without reversion, it is non-degenerate if and only if the metric is.) – mr_e_man Sep 06 '23 at 14:10
  • For the question of whether a (strict) inner product on Minkowski space can be defined at all, see https://math.stackexchange.com/questions/3845533/is-there-a-useful-natural-definition-of-norm-in-geometric-algebra – mr_e_man Sep 06 '23 at 15:30
  • (Actually I can't find any old posts of mine that use $\langle\vec a,\vec b\rangle$ for the Minkowski metric. Anyway, the notation doesn't much matter.) – mr_e_man Sep 06 '23 at 16:07

My second proof using SOS (Sum of Squares):

Remarks: My first proof using SOS (referred to in a comment on the question) is very complicated, so it is omitted here; this second proof is simple. I used a computer to find the identity (1). The homogenization of the inequality is inspired by mr_e_man's very nice answer. By the way, I don't know much about bivectors. Is my identity (1) related to mr_e_man's result $\|B\|^2 = J$?

Let $p_1 = (a_1, b_1, c_1), p_2 = (a_2, b_2, c_2)$ and $p_3 = (a_3, b_3, c_3)$. Let $x = \langle p_1, p_2 \rangle = a_1a_2 + b_1b_2 + c_1c_2$, $y = \langle p_2, p_3\rangle = a_2a_3 + b_2b_3 + c_2c_3$, and $z = \langle p_3, p_1 \rangle = a_3a_1 + b_3b_1 + c_3 c_1$.

We have the following identity (for all reals $a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3$) \begin{align*} &-x^2 - y^2 - z^2 + 2xy + 2yz + 2zx - 2x\|p_3\|^2 - 2y \|p_1\|^2 - 2z\|p_2\|^2 \\[6pt] &\qquad + \|p_1\|^2 \|p_2\|^2 + \|p_2\|^2 \|p_3\|^2 + \|p_3\|^2\|p_1\|^2 \\[6pt] ={}& (a_1b_2 - a_1b_3 - a_2b_1 + a_2b_3 + a_3b_1 - a_3b_2)^2 \\ &\qquad + (a_1c_2 - a_1c_3 - a_2c_1 + a_2c_3 + a_3c_1 - a_3c_2)^2\\ &\qquad + (b_1c_2 - b_1c_3 - b_2c_1 + b_2c_3 + b_3c_1 - b_3c_2)^2. \tag{1} \end{align*}
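For reassurance, identity (1) can be checked numerically on random, not necessarily unit, vectors; a sketch in Python (helper names are mine):

```python
import random

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def lhs_rhs(p1, p2, p3):
    # Both sides of identity (1), for arbitrary real vectors p1, p2, p3
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = p1, p2, p3
    x, y, z = dot(p1, p2), dot(p2, p3), dot(p3, p1)
    n1, n2, n3 = dot(p1, p1), dot(p2, p2), dot(p3, p3)
    lhs = (-x*x - y*y - z*z + 2*(x*y + y*z + z*x)
           - 2*x*n3 - 2*y*n1 - 2*z*n2
           + n1*n2 + n2*n3 + n3*n1)
    rhs = ((a1*b2 - a1*b3 - a2*b1 + a2*b3 + a3*b1 - a3*b2)**2
           + (a1*c2 - a1*c3 - a2*c1 + a2*c3 + a3*c1 - a3*c2)**2
           + (b1*c2 - b1*c3 - b2*c1 + b2*c3 + b3*c1 - b3*c2)**2)
    return lhs, rhs

random.seed(0)
for _ in range(1000):
    ps = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(3)]
    l, r = lhs_rhs(*ps)
    assert abs(l - r) < 1e-9
```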

If $\|p_1\| = \|p_2\| = \|p_3\| = 1$, we have $$\mathrm{LHS}_{(1)} = (x - 1)^2 + (y - 1)^2 + (z - 1)^2 - (x - y)^2 - (y - z)^2 - (z - x)^2 = J.$$

We may analyze the condition $J = 0$. Omitted here.

We are done.

Edit

I found that $$\mathrm{RHS}_{(1)} = (C_{11} + C_{12} + C_{13})^2 + (C_{21} + C_{22} + C_{23})^2 + (C_{31} + C_{32} + C_{33})^2$$ where $C_{ij} = (-1)^{i+j}M_{ij}$ is the cofactor of the matrix \begin{align*} \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}. \end{align*}
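This cofactor reformulation can likewise be spot-checked numerically, comparing the squared row sums of cofactors against $\mathrm{RHS}_{(1)}$; a sketch in Python (function names are mine):

```python
import random

def cofactor(M, i, j):
    # C_ij = (-1)^(i+j) times the minor of M with row i and column j deleted
    m = [[M[r][s] for s in range(3) if s != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * (m[0][0]*m[1][1] - m[0][1]*m[1][0])

def row_cofactor_sums_sq(M):
    # (C_11+C_12+C_13)^2 + (C_21+C_22+C_23)^2 + (C_31+C_32+C_33)^2
    return sum(sum(cofactor(M, i, j) for j in range(3)) ** 2
               for i in range(3))

def rhs1(M):
    # RHS of identity (1); rows of M hold the a-, b-, c-components
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = M
    return ((a1*b2 - a1*b3 - a2*b1 + a2*b3 + a3*b1 - a3*b2)**2
            + (a1*c2 - a1*c3 - a2*c1 + a2*c3 + a3*c1 - a3*c2)**2
            + (b1*c2 - b1*c3 - b2*c1 + b2*c3 + b3*c1 - b3*c2)**2)

random.seed(0)
for _ in range(500):
    M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    assert abs(row_cofactor_sums_sq(M) - rhs1(M)) < 1e-9
```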

River Li
  • Very nice as well, that identity turned out pretty slick! Considering the alternative was some monster involving stereographic projection or spherical coordinates… The highly symmetric form of $J$ suggested a SOS form like this was possible, I just couldn’t quite get there myself. That said, the form of $J$ I posted here is already massaged out from the bare output of the determinant calculation defining $J$. One minor suggestion, I think it’s safe to omit the subscript $2$ in $\lVert \cdot \rVert_2$ if it’s clear from context we’re talking about the L2 norm, it only adds visual clutter imho. – hhliu Aug 22 '23 at 11:13
  • @t0mdab0mb Yes, agree. – River Li Aug 22 '23 at 11:38
  • Yes, each term being squared in RHS(1) is a component of $B$. Specifically, if $\mathbf p_i=a_i\mathbf e_a+b_i\mathbf e_b+c_i\mathbf e_c$, then $$B=\mathbf p_1\wedge\mathbf p_2+\mathbf p_2\wedge\mathbf p_3+\mathbf p_3\wedge\mathbf p_1 \\ =(a_1b_2-a_2b_1+a_2b_3-a_3b_2+a_3b_1-a_1b_3)\mathbf e_a\mathbf e_b \\ +(b_1c_2-b_2c_1+b_2c_3-b_3c_2+b_3c_1-b_1c_3)\mathbf e_b\mathbf e_c \\ +(c_1a_2-c_2a_1+c_2a_3-c_3a_2+c_3a_1-c_1a_3)\mathbf e_c\mathbf e_a$$ and $\lVert B\rVert^2$ is the sum of squares of those. – mr_e_man Aug 22 '23 at 13:39
  • @mr_e_man Very nice. – River Li Aug 22 '23 at 13:41