
I was reading an article called Computing the Cube Root, in which they approximate a cube root using one quartic polynomial divided by another, in the form

$$y=\frac{ax^4+bx^3+cx^2+dx+e}{fx^4+gx^3+hx^2+ix+j}$$

Is there a reason for representing their approximation in such a form? Is it possible to get a similar level of accuracy using a polynomial in general form?

Also, how would you go about deriving the smallest maximum error of such a formula? (Sorry, I probably seem like a complete idiot.)
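
A quick way to see why this form is attractive is to fit one yourself. The sketch below does a linearized least-squares fit of a quartic-over-quartic rational function to $x^{1/3}$ on $[\frac{1}{8},1]$. This is not the article's method and not a true minimax fit (the answers below discuss that), and the grid size and pinning of $Q$'s constant term are my own arbitrary choices; it only shows how much accuracy the form buys for nine coefficients plus a single division.

```python
# Sketch: fit P4(x)/Q4(x) ~= x**(1/3) on [1/8, 1] by *linearized* least
# squares (minimize P(x) - y*Q(x) with Q's constant term pinned to 1).
# Not minimax and not the article's method; illustration only.
import numpy as np

xs = np.linspace(0.125, 1.0, 2000)   # sample grid on [1/8, 1]
ys = np.cbrt(xs)                     # target values x**(1/3)

# Unknowns: [a, b, c, d, e, f, g, h, i] with
#   P(x) = a x^4 + b x^3 + c x^2 + d x + e
#   Q(x) = f x^4 + g x^3 + h x^2 + i x + 1
A = np.column_stack([xs**4, xs**3, xs**2, xs, np.ones_like(xs),
                     -ys * xs**4, -ys * xs**3, -ys * xs**2, -ys * xs])
coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
p, q = coef[:5], np.append(coef[5:], 1.0)

approx = np.polyval(p, xs) / np.polyval(q, xs)   # one division per evaluation
print("max abs error:", np.max(np.abs(approx - ys)))
```

The printed error will be small but not optimal; a Remez-style solver is what produces the genuinely smallest maximum error asked about above.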

  • A somewhat related answer. Padé approximants seek the best approximation in the sense of the power-series topology, as opposed to minimizing the maximal error, but the two goals are not really in conflict. – Jyrki Lahtonen May 06 '14 at 10:19
  • It's interesting that the article is hosted on the web page of a FreeBSD user. My work is on the FreeBSD math library, where cbrt has reached several levels higher. cbrt started there with an old algorithm by Kahan. That version uses Newton-Raphson with sophisticated methods to achieve good accuracy (a generic Newton sketch follows these comments). These methods are – Bruce Evans May 06 '14 at 10:41
  • rather inefficient and unsuited to modern CPUs, but are still used in part. The article is about graphics applications, where accuracy is very unimportant, so simple methods can be used to optimize for efficiency at a cost of accuracy. My work involves first improving accuracy and then finding a better algorithm that keeps the improved accuracy while increasing efficiency. – Bruce Evans May 06 '14 at 10:57
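For context on the Newton-Raphson remark: the sketch below is the generic Newton iteration for $y^3 = x$, not Kahan's actual code; the exponent-based seed and the fixed iteration count are my own simplifications.

```python
# Generic Newton-Raphson for the cube root (solve y**3 - x = 0, x > 0).
# A sketch of the iteration behind Kahan-style cbrt code, not the real thing.
import math

def newton_cbrt(x: float, iters: int = 6) -> float:
    m, e = math.frexp(x)           # x = m * 2**e with m in [0.5, 1)
    y = math.ldexp(1.0, e // 3)    # crude seed from the exponent alone
    for _ in range(iters):
        y -= (y**3 - x) / (3.0 * y**2)   # Newton step; quadratic convergence
    return y

print(newton_cbrt(27.0))   # ~3.0
```

Production code derives a much better seed by bit manipulation of the floating-point representation, so far fewer iterations are needed.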

2 Answers


The reason is that rational approximation is normally much more efficient than polynomial approximation. Here are some examples for your problem. The task is to approximate $f: x \rightarrow x^{\frac{1}{3}}$ on the interval $[\frac{1}{8},1]$. I don't know the source of the given rational function above, but here are the results from Maple V's minimax function (using the default weight function 1), writing $P_n(x), Q_m(x)$ for the numerator and denominator polynomials:

P_4(x)/Q_4(x): max. error 0.2067773134985e-7
P_4(x)/Q_3(x): max. error 0.1687562176023e-6
P_5(x)/Q_3(x): max. error 0.2773562640305e-7
P_4(x)/Q_0(x): max. error 0.4725267889820e-3

To get a comparably accurate polynomial approximation you have to use a polynomial of degree 16 or 17:

P_17(x)/Q_0(x): max. error 0.1567070356649e-7
P_16(x)/Q_0(x): max. error 0.3537910592697e-7
gammatester
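As a rough cross-check of the quoted degrees, one can fit Chebyshev polynomials and measure the error on a dense grid. This is a least-squares fit in the Chebyshev basis, which is close to but not exactly minimax, so the errors should land near, but slightly above, the Remez figures above; the grid sizes are arbitrary.

```python
# Near-minimax sanity check of the quoted degrees for x**(1/3) on [1/8, 1].
# Chebyshev least-squares fit, so errors land slightly above true Remez values.
import numpy as np

xs = np.linspace(0.125, 1.0, 5000)
ys = np.cbrt(xs)

for deg in (4, 16, 17):
    cheb = np.polynomial.Chebyshev.fit(xs, ys, deg, domain=[0.125, 1.0])
    err = np.max(np.abs(cheb(xs) - ys))
    print(f"degree {deg:2d}: max error {err:.3e}")
```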
  • The Maple V approximations are apparently not very good. I get an accuracy of 0.27e-7 with a degree 7 polynomial on [0,1]. This polynomial was generated by a modified Remez method. Simple Chebyshev interpolation tends to need only 1 more term for the same accuracy. – Bruce Evans May 06 '14 at 11:04
  • Previous versions used (a) a degree 2 polynomial giving an intermediate accuracy of 0.43e-4; (b) a rational approximation of total degree 5 or 6 giving an intermediate accuracy of about 0.12e-6; this was mis-engineered for final float precision and gave far more accuracy than was useful. But it is instructive to look at very low degree approximations. These can almost be computed by hand. – Bruce Evans May 06 '14 at 11:15
  • @Bruce Evans: Your results are very surprising. With Maple, the Chebyshev series on the interval $[\frac{1}{8},1]$ with maximum allowed error 0.5e-7 has degree 18, and on the complete $[0,1]$ a whopping degree 162. Remember $f'(0) = \infty.$ Are you sure about the minimax error over the complete interval? – gammatester May 06 '14 at 11:19
  • My approximations are actually for (1+x)^(1/3) on [0,1], or equivalently for x^(1/3) on [1,2]. There is no reasonable approximation near the singularity at 0 for x^(1/3). So we reduce to an interval away from 0. The endpoint 1/8 has some technical properties which make it good, but [1/8,1] is still too wide for efficiency. After rescaling, the interval [1/8,1] becomes [1,8]. That is much wider than my [1,2]. ... – Bruce Evans May 06 '14 at 11:27
  • Splitting up the interval [1,8] into [1,2], (2,4] and (4,8] is used as a preliminary reduction in some places, including the current FreeBSD math library and in at least old versions of glibc. The endpoints 2 and 4 aren't as technically good as 8, since their cube roots are irrational -- it is too easy to lose accuracy. The splitting step can also be quite slow (an exponent-based reduction of this kind is sketched after these comments). My best version – Bruce Evans May 06 '14 at 11:34
  • @Bruce Evans: So there is no contradiction between the results, and the Maple result is not so bad! – gammatester May 06 '14 at 11:38
  • ... splits into 64 or 128 pieces instead of 3. Splitting into subintervals of equal length gives endpoints with even worse technical properties than 2 and 4, but can be arranged to be more efficient. – Bruce Evans May 06 '14 at 11:38
  • Right. No contradiction. – Bruce Evans May 06 '14 at 11:42
  • @Bruce Evans: Your comments may be correct, but they miss the point. It is clear that there are better ways to approximate the cube root. My answer addresses the question; it is not an introduction to approximation. – gammatester May 06 '14 at 11:43
  • The comments get closer to "explaining" why the rational function works better. We observed that the size of the error is related to the singularity at 0. The error must be bounded by some expression in the higher derivatives. Apparently, the derivative order that matters depends on the individual polynomial degrees but not on the total degree. x^(1/3) is hard to approximate near its singularity since its higher derivatives blow up fast. On the other hand, sin(x) has no singularities and all its derivatives have the small bound 1. – Bruce Evans May 06 '14 at 11:59
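The reduction described in these comments can be made concrete. The following sketch shows the idea only, not the FreeBSD libm code: pull out the binary exponent, reduce it modulo 3, and hand a core argument in $[1,8)$ to whatever approximation is used.

```python
# Exponent-based range reduction for cbrt (idea only, x > 0 assumed):
#   x = m * 2**(3q + r), m in [1, 2), r in {0, 1, 2}
#   => cbrt(x) = 2**q * cbrt(m * 2**r), with m * 2**r in [1, 8).
import math

def reduce_for_cbrt(x: float):
    m, e = math.frexp(x)     # x = m * 2**e, m in [0.5, 1)
    m, e = 2.0 * m, e - 1    # renormalize so m is in [1, 2)
    q, r = divmod(e, 3)      # e = 3*q + r, r in {0, 1, 2} (also for e < 0)
    return m * 2**r, q       # core argument in [1, 8) and the scale 2**q

core, q = reduce_for_cbrt(5.0e7)
# Any approximation that is good on [1, 8) finishes the job:
print(math.ldexp(core ** (1.0 / 3.0), q), 5.0e7 ** (1.0 / 3.0))
```

Splitting [1,8) further into [1,2), [2,4), [4,8), or into the 64 or 128 pieces mentioned above, can then be done by indexing on a few leading mantissa bits.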

Actually, rational approximations are usually much less efficient than polynomial approximations: in modern computer arithmetic, division is many times slower than multiplication and addition, so the single division needed for a rational approximation takes about as long as 5 to 10 extra terms of a polynomial approximation. Divisions are inherently slower and tend to stall pipelines.

However, rational approximations do tend to take fewer terms. I haven't studied the reasons in detail, but heuristically this is because the error of a polynomial approximation blows up in a hard-to-control way with the number of terms, perhaps quadratically or worse, while for a rational approximation the error only blows up with the number of terms in the numerator and denominator polynomials separately. This is the infinite-precision error; rounding errors are usually dominated by the one for the final operation, and thus don't blow up.

The example of $x^{\frac{1}{3}}$ on the interval $[\frac{1}{8},1]$ is close to a worst case for a reasonable polynomial approximation. Often the rational approximation takes only 1 fewer term for a given accuracy, but here the polynomial takes about twice as many terms as the rational. The interval is simply too wide for a polynomial approximation to work well, and how wide is too wide depends strongly on the function. For example, $[0,\frac{\pi}{4}]$ is barely manageable for $\sin(x)$ but is too wide for $\tan(x)$; $[0,1]$ is barely manageable for $\cosh(x)$, but only $[0,\frac{1}{4}]$ is barely manageable for $\tanh(x)$. Algebraic functions tend to be harder to approximate than transcendental ones.
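That claim is easy to probe numerically. The sketch below finds the smallest Chebyshev-fit degree reaching roughly single-precision accuracy on the intervals mentioned; the Chebyshev least-squares fit is a near-minimax stand-in (a Remez solver would shave a term or two), and the tolerance and grid sizes are my own choices.

```python
# How many polynomial terms ~1e-7 accuracy needs, by function and interval.
# Chebyshev least-squares fit as a near-minimax stand-in.
import numpy as np

def degree_needed(f, lo, hi, tol=1e-7, max_deg=40):
    xs = np.linspace(lo, hi, 4000)
    ys = f(xs)
    for deg in range(1, max_deg + 1):
        cheb = np.polynomial.Chebyshev.fit(xs, ys, deg, domain=[lo, hi])
        if np.max(np.abs(cheb(xs) - ys)) < tol:
            return deg
    return None  # interval too wide for max_deg terms

print("sin  on [0, pi/4]:", degree_needed(np.sin, 0.0, np.pi / 4))
print("tan  on [0, pi/4]:", degree_needed(np.tan, 0.0, np.pi / 4))
print("cbrt on [1/8, 1]: ", degree_needed(np.cbrt, 0.125, 1.0))
```

The algebraic $x^{1/3}$ should come out needing noticeably more terms than either trigonometric function on these intervals.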

I go to great lengths to rewrite algorithms to avoid using rational approximations. The basic technique is to use only short intervals, over which polynomial approximations work well. For $x^{\frac{1}{3}}$ on $[0,1]$, I use 64 subintervals of length $\frac{1}{64}$. The function is barely simple enough to allow efficient, accurate interpolation after reduction to the primary interval $[0,\frac{1}{64}]$ (in general, this method would need 64 different polynomials, or would lose more efficiency or accuracy than it gains because the interpolation would be unmanageable). A degree 3 polynomial suffices for float precision on the primary interval. Float precision is relatively easy; I use it to warm up for higher precisions.
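As a sanity check of the degree-3 claim, here is a fit on the primary subinterval, taking the target to be $(1+x)^{1/3}$ as in the earlier comments (a Chebyshev least-squares fit standing in for whatever interpolation is actually used):

```python
# Check: degree 3 on the primary subinterval [0, 1/64] for (1+x)**(1/3)
# reaches well below single-precision epsilon (~1.19e-7).
import numpy as np

xs = np.linspace(0.0, 1.0 / 64.0, 2000)
ys = np.cbrt(1.0 + xs)

cheb = np.polynomial.Chebyshev.fit(xs, ys, 3, domain=[0.0, 1.0 / 64.0])
err = np.max(np.abs(cheb(xs) - ys))
print(f"max error {err:.2e} vs float32 epsilon {np.finfo(np.float32).eps:.2e}")
```

The large headroom between that error and float epsilon is what leaves room for the rounding errors of an actual float implementation.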