
I hope you can help me. I'm working with $N$th-order Bézier curves $\mathbf{C}(t)$, and so far I have used Gauss-Legendre (GL) quadrature to calculate the length of the curve, $s(t_1, t_2)$. With the abscissae and weights precomputed up to $N_{gl} = 64$, I used a naive (and pretty much arbitrary) heuristic to determine the number of Gauss-Legendre points: \begin{equation} N_{gl} = \min\left(N\left\lceil{\frac{t_2-t_1}{0.2}}\right\rceil, 64\right) \end{equation}
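For reference, my current setup can be sketched in Python with NumPy (the function names and the de Casteljau evaluation are illustrative, not my actual implementation):

```python
import math
import numpy as np

def decasteljau(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (shape (n+1, dim))
    at parameters t (shape (m,)) via de Casteljau's algorithm."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    pts = np.broadcast_to(ctrl, (t.size,) + ctrl.shape).copy()
    for _ in range(ctrl.shape[0] - 1):
        pts = (1 - t)[:, None, None] * pts[:, :-1, :] + t[:, None, None] * pts[:, 1:, :]
    return pts[:, 0, :]

def arc_length_gl(ctrl, t1, t2):
    """Arc length of C over [t1, t2] by Gauss-Legendre quadrature, with the
    heuristic N_gl = min(N * ceil((t2 - t1) / 0.2), 64) from the question."""
    degree = ctrl.shape[0] - 1
    n_gl = min(degree * math.ceil((t2 - t1) / 0.2), 64)
    x, w = np.polynomial.legendre.leggauss(n_gl)   # nodes/weights on [-1, 1]
    t = 0.5 * (t2 - t1) * x + 0.5 * (t1 + t2)      # map nodes to [t1, t2]
    dctrl = degree * np.diff(ctrl, axis=0)         # control points of C' (hodograph)
    speeds = np.linalg.norm(decasteljau(dctrl, t), axis=1)
    return 0.5 * (t2 - t1) * np.sum(w * speeds)
```

For collinear control points such as $(0,0), (1,1), (2,2)$ this reproduces the exact length $2\sqrt{2}$, since the integrand is constant.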

The drawback is that usually this is overkill, and only in some extreme cases is it a necessity.

For efficiency, I tried approximating $s(t)$ by Chebyshev polynomials, where I evaluated $s(0, t)$ at the Chebyshev nodes with the aforementioned GL. It works superbly, but evaluating the length at every node (a full GL integration each time) is slow.

Currently, I'm looking into replacing GL with Clenshaw-Curtis (CC), with an adaptive strategy (doubling the number of Chebyshev nodes, reusing the previously computed ones, until sufficient precision is met). I like it more than GL since I can still precompute the abscissae and weights, while also being able to set a desired error tolerance.
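As a sketch of what I mean (Python/NumPy for illustration; the explicit cosine matrix stands in for a fast DCT): Clenshaw-Curtis integrates the Chebyshev interpolant through the function values at the extrema of $T_n$:

```python
import numpy as np

def clenshaw_curtis(f, n):
    """Integrate f over [-1, 1] using n+1 Clenshaw-Curtis nodes
    (the extrema of T_n); n should be even."""
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)                 # Chebyshev extrema
    fx = f(x)
    half = np.ones(n + 1)
    half[0] = half[-1] = 0.5                  # halved endpoint terms
    # Chebyshev coefficients of the interpolant, via a DCT-I cosine matrix
    a = (2.0 / n) * (np.cos(np.outer(j, j) * np.pi / n) @ (half * fx))
    # integral of T_k over [-1, 1]: 2 / (1 - k^2) for even k, 0 for odd k
    mu = np.zeros(n + 1)
    even = np.arange(0, n + 1, 2)
    mu[even] = 2.0 / (1.0 - even.astype(float) ** 2)
    return float(np.sum(half * a * mu))
```

The nodes for $n$ are exactly the even-indexed nodes for $2n$, which is what lets the doubling strategy reuse all previous evaluations.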

Now I'm wondering: since I have already computed $||\mathbf{C}'(t)||_2$ at the Chebyshev nodes for CC, can I reuse these values to calculate an approximation of $s(t) = \int_0^t ||\mathbf{C}'(x)||_2 dx$ by Chebyshev polynomials (i.e., reuse them to somehow calculate the length of the curve between Chebyshev nodes)?

stribor14
  • 121
  • I've tried using the precomputed values to estimate the length between them with the trapezoidal rule, but the error per subpart is too large (even when calculating an additional midpoint). This might be because of the harmonics that CC over the whole range takes into account. – stribor14 Apr 25 '22 at 12:13
    See also https://math.stackexchange.com/questions/12186/arc-length-of-bézier-curves – lhf Apr 26 '22 at 21:37

2 Answers

1

Personally, I’d avoid these sophisticated numerical integration functions.

Just use piecewise linear approximation instead. Calculate points on the Bézier curve at 100 $t$ values, use these points to construct a polyline, and calculate its length. Then calculate 200 points and do the same thing. Keep increasing the number of points until the length converges satisfactorily.

The only possible problem is performance. There are clever ways to calculate the array of points (using forward differencing) that can substantially reduce the computations required. But, on a modern computer, I wouldn’t expect performance to be a problem, anyway.
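A minimal sketch of this doubling strategy in Python with NumPy (the names, starting count, and convergence tolerance are illustrative):

```python
import numpy as np

def decasteljau(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl at parameters t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    pts = np.broadcast_to(ctrl, (t.size,) + ctrl.shape).copy()
    for _ in range(ctrl.shape[0] - 1):
        pts = (1 - t)[:, None, None] * pts[:, :-1, :] + t[:, None, None] * pts[:, 1:, :]
    return pts[:, 0, :]

def polyline_length(ctrl, num):
    """Length of the polyline through num points on the curve."""
    pts = decasteljau(ctrl, np.linspace(0.0, 1.0, num))
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def arc_length_doubling(ctrl, tol=1e-7, start=100, max_pts=1_000_000):
    """Keep doubling the sample count until successive lengths agree to tol."""
    num = start
    prev = polyline_length(ctrl, num)
    while num < max_pts:
        num *= 2
        cur = polyline_length(ctrl, num)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    return prev
```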

If you really want to use numerical integration, then this blog post is a good place to start.

bubba
  • 43,483
  • 3
  • 61
  • 122
  • [pt. 1/2] Thank you for the answer. The linked blog post was a very interesting read (I wasn't aware of it before). Since we are using $G_3+$ continuous higher-order curves, we cannot simply approximate them with quadratic curves. And often when optimizing the shape, we use at least 7th order curves (since then you can manipulate 4 points on each side without influencing the continuity on the other side of the curve). – stribor14 Apr 26 '22 at 12:52
  • [pt. 2/2] With additional manual shape correction, we further elevate their order (new mid control points do not affect continuity at the endpoints). Etc. Additionally, performance is a must for us (shaving off milli-/microseconds). Currently, I'm looking into the Chebyshev approximation so that we can compute a series inversion on it (for $\mathbf{C}(t(s))$). Since the most expensive calculations are done when the shape of the curve is locked, this is a great fit, since it can be cached for later use, and evaluating a polynomial (or Chebyshev series) is cheap. – stribor14 Apr 26 '22 at 12:52
1

"The error" in my reasoning was that I first used Clenshaw-Curtis to calculate length so that I can approximate the length function $s(t)$ with the Chebyshev series. Instead, Clenshaw-Curtis is the integral of the Chebyshev approximation itself.

So the answer is to approximate $||\mathbf{C}'(t)||_2$ with the Chebyshev series $\frac{1}{2}\sum_{k=0}c_kT_k(2t-1)$. The length function can then be easily derived:

\begin{align} s(t) &= \int_0^t ||\mathbf{C}'(x)||_2dx \\ &= \frac{1}{2}\sum_{k=0}c_k \int T_k(2t-1)dt \\ &= \frac{1}{2}\sum_{k=0}\frac{c_k}{2}\left(\frac{T_{k+1}(2t-1)}{k+1} - \frac{T_{k-1}(2t-1)}{k-1}\right) + C\\ &= \frac{1}{2}\sum_{k=1}\frac{c_{k-1} - c_{k+1}}{2k}T_k(2t-1) + C \end{align}

I.e., from the coefficients $\{c_0,\ c_1,\ \dots\}$ we can derive new coefficients for the Chebyshev approximation of the length function: \begin{align} c^{new}_k &= \frac{c_{k-1} - c_{k+1}}{2k} \\ s(t) &= \frac{1}{2}\sum_{k=0}c^{new}_kT_k(2t-1) \end{align} Since we have "lost" the coefficient $c^{new}_0$ (it carries the integration constant), we can recover it from the condition $s(0) = 0$ by setting $c^{new}_0 = -2\,s(0)|_{c^{new}_0=0}$.

Wikipedia states that the integral of the Chebyshev polynomial $T_k(x)$ holds for $k \geq 2$. Since I'm not interested in the approximation of $||\mathbf{C}'(t)||_2$ itself, I do not divide $c_0$ and $c_N$ by two after calculating them with the DCT, thereby "extending" the formula to $k \geq 1$ (usually $T_1$ and $T_0$ are special cases). The Chebyshev nodes at which I evaluate correspond to the extrema of the Chebyshev polynomials, i.e.: \begin{align} t_n &= \frac{\cos\left(\frac{n\pi}{N}\right)+1}{2} \\ \{c_0,\ c_1,\ \dots, \ c_N\} &= \text{DCT}\left(\frac{||\mathbf{C}'(t_0)||_2}{N},\ \frac{||\mathbf{C}'(t_1)||_2}{N},\ \dots,\ \frac{||\mathbf{C}'(t_N)||_2}{N}\right) \end{align}
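Putting the pieces together, here is a sketch in Python/NumPy (the quadratic curve is a made-up example, and the explicit cosine matrix stands in for a fast DCT; note that $c_0$ and $c_N$ are kept unhalved, as described above):

```python
import numpy as np

# Hypothetical quadratic Bezier curve; dctrl holds the hodograph control points.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
dctrl = 2.0 * np.diff(ctrl, axis=0)

def speed(t):
    """||C'(t)||_2 for the quadratic example above."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    d = (1.0 - t)[:, None] * dctrl[0] + t[:, None] * dctrl[1]
    return np.linalg.norm(d, axis=1)

N = 64
n = np.arange(N + 1)
t_nodes = (np.cos(n * np.pi / N) + 1.0) / 2.0   # Chebyshev extrema mapped to [0, 1]

# c = DCT(||C'(t_n)||_2 / N), with c_0 and c_N deliberately NOT halved
half = np.ones(N + 1)
half[0] = half[-1] = 0.5
c = 2.0 * (np.cos(np.outer(n, n) * np.pi / N) @ (half * speed(t_nodes) / N))

# c_new_k = (c_{k-1} - c_{k+1}) / (2k) for k >= 1 (c_{N+1} taken as 0)
k = np.arange(1, N + 1)
c_pad = np.concatenate([c, [0.0]])
c_new = np.zeros(N + 1)
c_new[1:] = (c_pad[:N] - c_pad[2:]) / (2.0 * k)

# recover the constant from s(0) = 0, using T_k(-1) = (-1)^k
c_new[0] = -2.0 * 0.5 * np.sum(c_new[1:] * (-1.0) ** k)

def s(t):
    """Arc length s(t) = 1/2 * sum_k c_new_k T_k(2t - 1)."""
    return 0.5 * np.polynomial.chebyshev.chebval(2.0 * np.asarray(t) - 1.0, c_new)
```

For this example, $s(1)$ matches the closed-form length of the curve to high precision, and evaluating $s$ at any new $t$ is just one Chebyshev evaluation.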

EDIT:

Let me compare this to the other applicable approaches (as I stated in a comment, we cannot use a piecewise quadratic or cubic representation because the geometric continuity would be lost).

Gauss-Legendre and other Gaussian approaches give the same precision, but the problem lies in deducing the correct number of sample points. Also, it is costly to recalculate $s(t)$ from scratch for every new parameter $t$.

Adaptive quadrature schemes give good results (with Clenshaw-Curtis being the best we tried), but one again has to recalculate the integral for every new parameter $t$.

As for piecewise linear approximation: with equally spaced $t$ values (e.g., $0:0.01:1$), it is inefficient compared with the Gaussian quadrature approach, and one needs to evaluate substantially more points. On the other hand, with a subdividing strategy, we can adaptively subdivide the curve into linear segments. We use this for the graphical representation and have read a lot about linearity criteria (a beast for another topic). The inefficiency comes from iteratively dividing the curve into sub-curves (even though we use matrix operations) to achieve the required precision of piecewise flatness.
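For completeness, the subdivision approach can be sketched as follows (Python/NumPy as an illustration, with a deliberately simplistic flatness criterion): the chord and the control polygon bracket the true length, so we recurse until they agree and take their average.

```python
import numpy as np

def split(ctrl):
    """de Casteljau subdivision at t = 1/2: control points of the two halves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = 0.5 * (pts[:-1] + pts[1:])
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right[::-1])

def subdiv_length(ctrl, tol=1e-8):
    """Arc length via adaptive subdivision: chord <= length <= control polygon."""
    ctrl = np.asarray(ctrl, dtype=float)
    chord = float(np.linalg.norm(ctrl[-1] - ctrl[0]))
    poly = float(np.sum(np.linalg.norm(np.diff(ctrl, axis=0), axis=1)))
    if poly - chord < tol:
        return 0.5 * (chord + poly)
    left, right = split(ctrl)
    return subdiv_length(left, tol / 2) + subdiv_length(right, tol / 2)
```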

The empirical tests in our case show that the Chebyshev approximation achieves the same precision as Gaussian quadrature. On the first iteration it is slower than Gauss-Legendre (because of the DCT), but if one needs multiple length evaluations, it is far more efficient.

stribor14
  • 121