Today (4 December 2017) is my 19th birthday, and I realised I hadn't asked a question here in over a year. So this is one on an old problem: computing the intersections of two Bézier curves.
In my Kinross SVG library, a Bézier curve object contains two equivalent representations of its coordinate polynomials ($x(t),y(t)$): a control point form (Bernstein basis) and a polynomial form (power basis). The control point form is used for interfacing with the SVG format; the polynomial form is used for explicit calculations, and will be the form used below.
The SVG standard includes line segments (linear Béziers), quadratic Béziers and cubic Béziers, which have two, three and four control points respectively. Intersecting two Bézier curves is easy if they are not both cubic:
- If one of the curves is linear or has collinear control points, aligning that curve with the $x$-axis and solving a cubic equation will do the trick (a sketch of this case appears right after this list).
- If one of the curves is quadratic, the same idea can be used: perform an affine transformation that maps that curve to the graph of $y=x^2$ with $0\le x\le 1$, then solve a sextic equation derived from the (transformed) other curve.
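To make the first bullet concrete, here is a minimal sketch of the "align with the $x$-axis, solve a cubic" idea, assuming power-basis coefficient arrays ordered from the constant term upward; the helper name and the complex-coefficient convention are illustrative, not the actual Kinross functions. (The quadratic case is handled analogously, except that the final equation is a sextic.)

    import numpy as np
    from numpy.polynomial import polynomial as P

    def line_cubic_intersections(a, b, px, py):
        # a, b: complex endpoints of the segment; px, py: power-basis
        # coefficient arrays (constant term first) of the other curve.
        d = b - a
        rot = np.conj(d) / abs(d)           # rotation taking the segment's direction to the real axis
        pc = np.asarray(px, dtype=complex) + 1j * np.asarray(py, dtype=complex)
        pc[0] -= a                          # translate so the segment starts at the origin
        aligned = rot * pc                  # transformed curve as x(t) + i*y(t)
        roots = P.polyroots(aligned.imag)   # the cubic "transformed y-coordinate = 0"
        hits = []
        for t in roots:
            if abs(t.imag) < 1e-12 and 0 <= t.real <= 1:
                x = P.polyval(t.real, aligned.real)
                if 0 <= x <= abs(d):        # the hit must also lie within the segment
                    hits.append(t.real)
        return hits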
Both cases above are instances of curve implicitisation, and NumPy (the external library Kinross uses) provides univariate polynomial objects and a good root-finder for the high-degree polynomials that arise, so these are rather boring cases. The interesting part is cubic–cubic intersection, where this simple implicitisation no longer works. Several techniques are available, but many are unsuitable for my purposes:
- Bézier subdivision/clipping, in my eyes, is an inelegant approach that is also slow at refining solutions to high accuracy. I like my answers to be as exact as possible, notwithstanding necessary precision compromises.
- Homotopy solvers are overly complicated, involving a differential equation at every step, and have a bewildering array of divergent cases to handle.
- I once considered optimisation (minimising $(p_x(t)-q_x(u))^2+(p_y(t)-q_y(u))^2$, where the polynomials are defined below), but this too seemed overly complicated compared to direct solving.
- A resultant computation is currently a little too taxing for NumPy, since it requires matrices whose entries are themselves polynomials, which NumPy simply does not support. I could sidestep this in the future by working at a lower level, though.
- Gröbner bases have their own groebner-basis tag here, and places like this question and this Scholarpedia article describe Buchberger's algorithm in a fair amount of detail. It is more complicated than the resultant approach, however, and my own tests indicate that the numbers involved blow up even for polynomials with small coefficients.
- This leaves the rational univariate representation (RUR). Its final stage involves univariate root-finding, which again is a piece of cake for NumPy, but so far I have not been able to fully understand how it goes from the initial polynomials to the solutions (more on this later).
So I came up with my own algorithm based on the multivariate Newton's method. Suppose the two curves to be intersected are $(p_x(t),p_y(t))$ and $(q_x(u),q_y(u))$. An intersection occurs when $$p_x(t)-q_x(u)=p_y(t)-q_y(u)=0\tag1$$ The Newton iteration (from $t,u$ to $t',u'$) is thus $$\begin{bmatrix}t'\\u'\end{bmatrix}=\begin{bmatrix}t\\u\end{bmatrix}+\begin{bmatrix}\Delta_t\\\Delta_u\end{bmatrix}$$ where the last term solves $$\begin{bmatrix}p'_x(t)&-q'_x(u)\\p'_y(t)&-q'_y(u)\end{bmatrix}\begin{bmatrix}\Delta_t\\\Delta_u\end{bmatrix}=\begin{bmatrix}q_x(u)-p_x(t)\\q_y(u)-p_y(t)\end{bmatrix}$$ To get enough starting points to catch all the roots, I take for each curve:
- its endpoints
- its points of extremal curvature
- its inflection points
and then take the Cartesian product of the two sets obtained (a sketch of one Newton run appears after this paragraph). With a little fuzzing to handle cases where the linear system in the Newton iteration fails to have a unique solution, my algorithm has performed very well against the test cases I have considered. (I do not need to report tangencies or other multiple-root cases, since the vector images I handle in practice have "nice enough" curves.)
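For reference, here is a minimal sketch of a single Newton run from one starting pair, using NumPy's polynomial module for evaluation and differentiation; the function name, tolerance and iteration cap are my own choices, and the starting-point selection and fuzzing are omitted.

    import numpy as np
    from numpy.polynomial import polynomial as P

    def newton_run(px, py, qx, qy, t0, u0, tol=1e-13, maxiter=50):
        # px, py, qx, qy: power-basis coefficient arrays, constant term first.
        dpx, dpy, dqx, dqy = (P.polyder(c) for c in (px, py, qx, qy))
        t, u = t0, u0
        for _ in range(maxiter):
            f = np.array([P.polyval(t, px) - P.polyval(u, qx),
                          P.polyval(t, py) - P.polyval(u, qy)])
            if np.hypot(f[0], f[1]) < tol:
                return t, u
            J = np.array([[P.polyval(t, dpx), -P.polyval(u, dqx)],
                          [P.polyval(t, dpy), -P.polyval(u, dqy)]])
            dt, du = np.linalg.solve(J, -f)   # raises LinAlgError on a singular system; fuzz there
            t, u = t + dt, u + du
        return None                            # no convergence from this starting pair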
In contrast, there are few articles on RUR around. Thus my question here is:
What are the exact algorithmic details of solving via the RUR, going from $(1)$ to the solutions? Bézier curve-specific optimisations may exist.
I am looking for an explanation that can be implemented from the ground up using NumPy. The ultimate objective I have in mind is not just to see whether RUR is better than my Newton-based algorithm (in time or reliability), but to understand some fundamental concepts of algebraic geometry.
If you want, you can explain the RUR using this pair of Bézier curves I tested my algorithm against: $$p_x(t)=2+147t-342t^2+221t^3\qquad q_x(u)=1+87u-84u^2+11u^3\\ p_y(t)=17-90t+183t^2-104t^3\qquad q_y(u)=10+42u-165u^2+124u^3$$ so that system $(1)$ becomes $$1+147t-342t^2+221t^3-87u+84u^2-11u^3=0\\ 7-90t+183t^2-104t^3-42u+165u^2-124u^3=0$$ In Kinross these curves are represented as
    p = bezier(2+17j, 51-13j, -14+18j, 28+6j)
    q = bezier(1+10j, 30+24j, 31-17j, 15+11j)
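(The polynomial coefficients above follow from these control points by the usual Bernstein-to-power-basis conversion for cubics; here is a standalone sketch of that conversion, not the actual Kinross constructor.)

    import numpy as np

    def cubic_power_basis(p0, p1, p2, p3):
        # Control points as complex numbers x + iy; returns the coefficients
        # of x(t) + i*y(t) in the power basis, constant term first.
        return np.array([p0,
                         3 * (p1 - p0),
                         3 * (p0 - 2 * p1 + p2),
                         p3 - 3 * p2 + 3 * p1 - p0])

    p = cubic_power_basis(2 + 17j, 51 - 13j, -14 + 18j, 28 + 6j)
    q = cubic_power_basis(1 + 10j, 30 + 24j, 31 - 17j, 15 + 11j)
    # p.real, p.imag -> (2, 147, -342, 221), (17, -90, 183, -104)
    # q.real, q.imag -> (1, 87, -84, 11), (10, 42, -165, 124)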
They intersect in five points ($0\le t,u\le1$) whose corresponding $(t,u)$ parameters are $$(0.053611563083829826, 0.10086541719622877)\\ (0.15646174074706343, 0.94523150924142163)\\ (0.41077722343039424, 0.88035078651205612)\\ (0.86571544618164098, 0.97134323668380174)\\ (0.96472594025254754, 0.43951554821497935)$$
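As a sanity check, substituting these pairs into the two coefficient arrays (constant term first) should give residuals of the order of rounding error:

    import numpy as np
    from numpy.polynomial import polynomial as P

    p = np.array([2 + 17j, 147 - 90j, -342 + 183j, 221 - 104j])
    q = np.array([1 + 10j, 87 + 42j, -84 - 165j, 11 + 124j])
    pairs = [(0.053611563083829826, 0.10086541719622877),
             (0.15646174074706343, 0.94523150924142163),
             (0.41077722343039424, 0.88035078651205612),
             (0.86571544618164098, 0.97134323668380174),
             (0.96472594025254754, 0.43951554821497935)]
    for t, u in pairs:
        print(abs(P.polyval(t, p) - P.polyval(u, q)))   # each should be negligibly small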