
How can I determine whether two ellipses (given using their symmetric matrices, their quadratic forms, or some similar representation) have any inner points in common? Can I determine this fact without computing any radicals (square roots, cubic roots, quartic roots)?

I just wrote a StackOverflow answer about how to test whether two ellipses (axis-aligned in that case) intersect. The approach I took was formulating the two conics as symmetric matrices $M_1$ and $M_2$, then using $\det(M_1+\lambda M_2)=0$ as a condition describing a degenerate element of the pencil of conics, i.e. a pair of lines which share all four points of intersection with the two given conics. I then argued that the touching situation corresponds to a point of intersection with algebraic multiplicity two, which in turn corresponds to the discriminant of the cubic polynomial in $\lambda$ being zero. So far so good.
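To make the discriminant test concrete, here is a minimal SymPy sketch using exact arithmetic (no radicals). The helper name `pencil_discriminant` and the example matrices are mine, chosen as an illustration: two unit circles that touch externally at $(1,0)$.

```python
import sympy as sp

lam = sp.symbols('lambda')

def pencil_discriminant(M1, M2):
    """Discriminant of the cubic det(M1 + lambda*M2) in lambda.

    M1, M2: 3x3 symmetric SymPy matrices of the two conics.
    A zero discriminant signals a degenerate pencil member with a
    repeated root, i.e. a tangency (intersection of multiplicity two).
    """
    cubic = sp.det(sp.Matrix(M1) + lam * sp.Matrix(M2))
    return sp.discriminant(cubic, lam)

# unit circle x^2 + y^2 - 1 = 0
M1 = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, -1]])
# unit circle centered at (2, 0): x^2 + y^2 - 4x + 3 = 0
M2 = sp.Matrix([[1, 0, -2], [0, 1, 0], [-2, 0, 3]])
print(pencil_discriminant(M1, M2))  # 0: the circles touch
```

Here $\det(M_1+\lambda M_2) = -(1+\lambda)(\lambda-1)^2$ has the double root $\lambda=1$, so the discriminant vanishes exactly in the touching configuration, as described above.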

But this approach left me somewhat unsatisfied. Having the touching condition expressed as a zero of some polynomial, I expect the two sides of the touching situation to have different signs. So I might start from two fully disjoint ellipses, then move them closer together, and when they touch I get a zero and then a sign change of the discriminant. But when I keep moving them, then depending on their shapes and orientations I might get from two distinct real points of intersection to four real points of intersection, or I might get to one ellipse fully enveloping the other. In both cases the transition would be via another touching configuration, and thus entail another sign change.

So my approach of looking at the sign of the discriminant can't distinguish between two disjoint ellipses, two ellipses with four real and distinct points of intersection, and one ellipse fully contained in the inside of another.

Are there any predicates that I can formulate in terms of the original coefficients to distinguish these situations? Can I do this without solving a cubic equation (which I would have to do for computing the points of intersection)? Can I perhaps avoid all radicals, and just look at some more signs of some more polynomials in the original coefficients to make my decision?

Also, does having the ellipses described as center, radii and rotation make this any easier? Personally I prefer the matrix representation when dealing with conics, but since the transformation from there to center and radii entails some square roots, it is conceivable that starting with this form might allow avoiding roots that are unavoidable when starting from the matrix or quadratic form. And for the original StackOverflow question, a solution in terms of center and radii might even have been preferable.

MvG
  • If the $x$ coordinate of one ellipse's center is between the $x$ coordinates of the endpoints of the other ellipse's horizontal axis, it's relatively simple to test whether the vertical axis of the first ellipse intersects the second ellipse. Likewise for $y$ coordinates. By checking all four cases like this, I think you can detect all cases in which more than one quadrant of one ellipse intersects the other ellipse. So then you're down to intersections of one quadrant with one quadrant. Those intersections might be better behaved. – David K Mar 26 '23 at 15:20
  • @DavidK your comment sounds like it would assume the ellipses to be aligned with coordinate axes. Which they were in the original StackExchange post, but I was trying to formulate my question here more generally, allowing for rotated ellipses. So there wouldn't necessarily be a “horizontal axis”. Still, a useful approach to a special case is valuable, I'll definitely think more about this. – MvG Mar 26 '23 at 17:28
  • Yes, I was assuming axis-aligned ellipses as in the stackoverflow post. Perhaps I misunderstood what dissatisfaction was to be resolved: a dissatisfaction with the answer for axis-aligned ellipses (my thoughts) or a dissatisfaction with answering the more general case. – David K Mar 26 '23 at 17:33
  • The most efficient way is to solve a small 1D convex optimization problem. I wrote an answer about how to do this (with code) here: https://math.stackexchange.com/a/3678498/3060 I learned of the method from the paper of Gilitschenski, Igor, and Uwe D. Hanebeck, though it seems the method may have been known previously. – Nick Alger Mar 29 '23 at 07:49

1 Answer


Here is an answer in terms of quadratic forms.

Say we have $F$, $G$ quadratic forms of signature $(n, 1)$, that is, $n$ positive and one negative eigenvalue, on an $(n+1)$-dimensional space (in our case $n=2$).

Q: When do the closed convex cones $\{F\le 0\}$ and $\{G\le 0\}$ intersect *only* at the origin?

A: If and only if there exist $a$, $b>0$ such that

$$a F + b G \succ 0$$ (that is, the form $aF + b G$ is positive definite).

One implication is clear; the other uses the separation of disjoint closed convex sets by a hyperplane.

Note that we may assume $a\in (0,1)$ and $b= 1-a$, so the condition can be checked readily in concrete cases: look at the coefficients of the polynomial in $x$

$$\det (x I_3 + a M +(1-a) N),$$ where $M$ and $N$ are the symmetric matrices of $F$ and $G$, and ask whether they are all positive for some $a \in (0,1)$. If that is possible, the cones are separated. If that is not possible, the cones are not separated.
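The coefficient test above can be sketched numerically as follows. This is only a grid scan over $a$, not the exact analysis (exactly, one would track the sign of each coefficient as a polynomial in $a$); the function name and example matrices are mine.

```python
import numpy as np

def cones_separated(M, N, samples=1000):
    """Look for a in (0,1) making a*M + (1-a)*N positive definite,
    via the coefficient test: all coefficients of det(x*I + a*M + (1-a)*N)
    must be positive.

    M, N: 3x3 symmetric matrices of the two conics, scaled so each
    ellipse's interior is {v : v^T M v <= 0} (two positive and one
    negative eigenvalue).  The grid scan is a numerical sketch only.
    """
    for a in np.linspace(0.0, 1.0, samples + 2)[1:-1]:  # open interval (0,1)
        # np.poly(-A) returns the coefficients of det(x*I + A)
        coeffs = np.poly(-(a * M + (1 - a) * N))
        if np.all(coeffs > 0):
            return True   # the closed cones meet only at the origin
    return False

# unit circle at the origin, and a unit circle centered at (3,0): disjoint
M = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, -1]])
N = np.array([[1.0, 0, -3], [0, 1, 0], [-3, 0, 8]])
print(cones_separated(M, N))   # True
```

For these two circles the combination is positive definite for $a$ roughly in $(0.13, 0.87)$, so the scan succeeds; for two overlapping circles the determinant coefficient stays negative for every $a\in(0,1)$ and the scan fails.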

The case of open cones:

The sets $\{F<0\}$ and $\{G<0\}$ do not intersect if and only if there exist $a$, $b\ge 0$ with $a+b=1$ such that $a F + b G \succeq 0$ (positive semidefinite).

$\bf{Added:}$ Let's sketch a proof for $n=2$. Consider an ellipse $F=0$ in the plane (the equation is dehomogenized); the interior is $F\le 0$. In what case is the interior contained in a half-plane $l\le 0$? The inequality $l\le 0$ has to be a consequence of $F \le 0$. Therefore (hand waving here...)

$$l = \alpha F - \Sigma S_1$$

where $\alpha > 0$, and by $\Sigma S_i$ we denote a sum of squares of affine forms.

Now consider the case where $F\le 0$ and $G\le 0$ are disjoint. Then, since both are convex, a line separates them. Therefore

$$l = \alpha F - \Sigma S_1 \\ -l = \beta G - \Sigma S_2 $$ and summing up we get $$\alpha F + \beta G =\Sigma S_1 + \Sigma S_2 $$

Note: We did not prove the part about an ellipse being contained in a half-plane, so that could be just another exercise.

$\bf{Added:}$ The case $n=1$ is simpler and easier to check. This is recommended as an exercise.

Let $F(x) = x^2 + \cdots$, $G(x) = x^2 + \cdots$ be quadratic polynomials such that the sets $\{F\le 0\}$ and $\{G\le 0\}$ do not intersect. Then there exist $a$, $b> 0$ such that the polynomial $a F + b G> 0$ on $\mathbb{R}$.
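The $n=1$ statement is easy to experiment with: $aF + bG > 0$ on $\mathbb{R}$ just means the combined (monic) quadratic has negative discriminant. A small sketch, with my own helper name and example polynomials:

```python
import numpy as np

def combo_positive(F, G, samples=1000):
    """F, G: coefficient triples (1, p, q) of monic quadratics x^2 + p*x + q.
    Scan for a in (0,1) such that a*F + (1-a)*G > 0 on all of R,
    i.e. the convex combination has negative discriminant."""
    F, G = np.asarray(F, float), np.asarray(G, float)
    for a in np.linspace(0.0, 1.0, samples + 2)[1:-1]:
        c2, c1, c0 = a * F + (1 - a) * G
        if c1 * c1 - 4 * c2 * c0 < 0:   # no real roots; monic, so positive
            return True
    return False

# {x^2 - 1 <= 0} = [-1, 1] and {(x-4)^2 - 1 <= 0} = [3, 5]: disjoint
print(combo_positive([1, 0, -1], [1, -8, 15]))   # True
# overlapping sets [-1, 1] and [0, 2]: no positive combination exists
print(combo_positive([1, 0, -1], [1, -2, 0]))    # False
```

In the disjoint case $a F + (1-a) G$ has negative discriminant for $a$ in a whole subinterval of $(0,1)$ (here around $a = 1/2$); in the overlapping case the discriminant stays positive for every convex combination.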

$\bf{Added:}$ Searching for "Farkas lemma for quadratic polynomials" turns up many results, especially the so-called S-lemma, which is more or less what was written above (perhaps for arbitrary forms $F$, $G$). It would be tempting to generalize to other functions, or to several of them, but that does not seem to work this way.

orangeskid
  • What does the $\succ$ symbol stand for? I'm not familiar with that notation, symbols make terrible search terms on Google, and it feels very central to your answer. – MvG Mar 29 '23 at 06:25
  • @MvG : It means positive definite. I will add more details. – orangeskid Mar 29 '23 at 06:45
  • Thanks for the addition! I think I understand how positive definiteness relates to having no common points. Pos. def. means all eigenvalues positive; the determinant is the characteristic polynomial, except for a flipped sign, so pos. def. means all roots negative? And the link from that to the coefficients is Descartes' rule of signs? I see how all coefficients positive implies pos. def. but struggle with the converse. So the char. poly. gives me 4 coefficients, each linear in $a$, so I get 4 linear inequalities there plus 2 more for $0\le a\le1$, and just need to check whether they can all be met together? Nice! – MvG Mar 30 '23 at 20:24
  • @MvG Yes, the fact that the eigenvalues of $M$ are positive is equivalent to $\det( M + t I)$ having all coefficients positive. Equivalently, the coefficients of $\det(t I-M)$ are alternating. It is indeed a consequence of Descartes' rule that in that case there are no negative eigenvalues... but this particular case is not that hard. – orangeskid Mar 30 '23 at 20:45
  • @MvG: So the question is: can we make, for some $s\in (0,1)$, all of the coefficients in $t$ of $\det (tI + s M + (1-s) N)$ positive. This is equivalent to the separation of $\{v^\top M v \le 0\}$ and $\{v^\top N v \le 0\}$. I've seen that there is an older variant of a similar question with a robust method for testing; I will try to see how robust this one turns out to be. I think a variant of this test is well known, but not stated in a symmetric way, so perhaps we can try to see how good this one is. I will do some Mathematica testing. – orangeskid Mar 30 '23 at 20:51