
The crucial requirement for using root isolation methods based on Vincent's theorem is that the input polynomial does not have multiple zeros. One way to remove the multiple zeros is to use a polynomial GCD. However, when it is implemented with the usual floating-point numbers (IEEE doubles), rounding errors will most probably cause the multiple roots to disintegrate into clusters of nearby simple roots.
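To make the intended step concrete, here is a minimal sketch of the computation I have in mind, written in exact arithmetic (sympy is used here purely for illustration); what I am looking for is a way to do this GCD step reliably in doubles:

```python
# Minimal sketch of the intended step, in exact arithmetic (sympy used
# only for illustration): the square-free part of p is p / gcd(p, p'),
# and it has the same roots as p, each with multiplicity one.
from sympy import symbols, Poly, gcd

x = symbols('x')
p = Poly((x - 1)**3 * (x + 2)**2 * (x - 5), x)  # multiple roots at 1 and -2

g = gcd(p, p.diff(x))       # gcd(p, p') collects the repeated factors
squarefree, rem = p.div(g)  # p / gcd(p, p'); rem is zero in exact arithmetic

print(squarefree.factor_list())  # (x - 1)(x + 2)(x - 5), up to a constant
```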

Does anyone know of a good (and simple) approximate polynomial GCD algorithm which does not require exact arithmetic (i.e., is amenable to implementation in 64-bit floating-point numbers)?

The closest thing I found is "Computing multiple roots of inexact polynomials" by Zhonggang Zeng, but unfortunately it is way over my head.

Ecir Hana
  • What would "approximate polynomial GCD" mean? – lhf Jul 27 '15 at 18:06
  • I doubt there exists one. Commercial software such as Maple and Mathematica use exact arithmetic in their root finding packages (Regular Chains package in Maple and Reduce in Mathematica). – Sergio Parreiras Jul 27 '15 at 18:11
  • @lhf By "approximate" I meant an algorithm which is able to split the polynomial even if the coefficients are in, and the computation is done in, floating point. (Yes, one can represent 53-bit integers exactly in doubles - what I meant is the various over-/underflows when calculating with floats.) – Ecir Hana Jul 27 '15 at 18:19

1 Answer


One of the reasons why the methods you mentioned do not work for multiple roots is that, in this situation, the root-finding problem is numerically ill-posed. That is, an arbitrarily small perturbation of the input will change the structure of your solution (just as you recognized, multiple roots will split into clusters). The same holds for the computation of the square-free part. Trying to remedy this instability by an inexact calculation is, in general, doomed, unless you have additional information (e.g., the total number of real and complex roots, or your actual goal is to isolate a multiple root that you know a priori to be unique). In such situations, you should check whether it is indeed necessary to make the polynomial square-free, or whether you can infer all the required information by other means.
Also, it might be easier to look for the complex roots using methods like Aberth-Ehrlich (as implemented in MPSolve), Durand-Kerner-Weierstraß, or homotopy continuation methods (e.g., as in PHCpack or Bertini). In some sense, if the precision of the input is not high enough, they work just as if the polynomial had a cluster of roots instead of a multiple root, and you can simply stop after a sufficient precision or "resolution" is reached. Then you can postprocess or combine the clustered approximations into multiple roots. Of course, that's a heuristic, but an approximate GCD is less than trivial to implement in a certified manner, too.
By the way, the heart of the Aberth-Ehrlich method is maybe a five-liner, and it performs impressively well in practice. I'd definitely give that one a try, even if you are just looking for real roots.
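For illustration only, a toy version of that iteration in Python/numpy (my own sketch, not MPSolve, and without any of the safeguards a serious implementation would need) could look like this:

```python
# Toy sketch of the Aberth-Ehrlich iteration: all roots are refined
# simultaneously; the Newton step for each root is damped by a
# "repulsion" term coming from the other current approximations.
# Coefficients are given in decreasing degree order.
import numpy as np

def aberth(coeffs, iters=200, tol=1e-12):
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = len(coeffs) - 1
    rng = np.random.default_rng(0)
    # crude starting points scattered in the complex plane
    z = (1.0 + rng.random(n)) * np.exp(2j * np.pi * rng.random(n))
    for _ in range(iters):
        newton = p(z) / dp(z)             # plain Newton corrections
        diff = z[:, None] - z[None, :]    # pairwise differences z_k - z_j
        np.fill_diagonal(diff, 1.0)       # placeholder, masked out below
        inv = 1.0 / diff
        np.fill_diagonal(inv, 0.0)        # drop the j == k terms
        w = newton / (1.0 - newton * inv.sum(axis=1))
        z -= w
        if np.max(np.abs(w)) < tol:
            break
    return z

# (x - 1)^2 (x + 2): expect a tight pair near 1 and a simple root near -2
print(np.sort_complex(aberth([1.0, 0.0, -3.0, 2.0])))
```

With double precision, the two approximations of the double root typically stall at a small distance from 1 and form a tight cluster, which is exactly the behavior described above.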

But back to your actual question: As far as I know, approximate GCDs are still not sufficiently well understood, and in particular I know of no ready-made implementations using machine precision (probably because all of them will fail for some instances). YMMV, but if you want to try something in machine precision, I would take the following steps:

  • Use the Extended Euclidean Algorithm for the GCD computation, nothing fancier (in particular, no asymptotically fast algorithm like the Half-GCD unless you have a very good reason to believe it is just as stable); see the sketch after this list.
  • The critical steps are the divisions with remainder in which the leading coefficient of the remainder should vanish. If it fails to vanish due to numerical instabilities, the output will be garbage. Thus:
  • Along with the floating-point version of the polynomial GCD (possibly in interval arithmetic), compute the GCD modulo some prime(s) (I assume your input consists of polynomials in $\mathbb{Q}[x]$). Not only will this, with high probability, give you the correct degree of the GCD; unless you have chosen an "unlucky" prime, the sequence of degrees and coefficients occurring throughout the EEA should also be identical modulo the prime. So you can use these results as a sanity check for any rounding to or away from zero.
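As a hedged sketch of the first two points in doubles (Python/numpy, my own illustrative names; the threshold EPS is exactly the kind of magic epsilon discussed below, and the modular sanity check from the last point is omitted), something along these lines:

```python
# Plain Euclidean remainder sequence in doubles. EPS is a heuristic
# cutoff for "this leading coefficient should have cancelled to zero".
import numpy as np

EPS = 1e-9

def trim(p, scale):
    """Drop leading coefficients that are negligible relative to `scale`."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    while len(p) > 1 and abs(p[0]) < EPS * scale:
        p = p[1:]
    return p

def is_zero(p, scale):
    return len(p) == 1 and abs(p[0]) < EPS * scale

def poly_gcd(a, b):
    """Monic approximate GCD of two coefficient vectors (degree-descending)."""
    scale = max(np.max(np.abs(a)), np.max(np.abs(b)))
    a, b = trim(a, scale), trim(b, scale)
    while not is_zero(b, scale):
        _, r = np.polydiv(a, b)    # division with remainder in doubles
        a, b = b, trim(r, scale)
    return a / a[0]                # normalize to a monic representative

# p = (x - 1)^2 (x + 2), so gcd(p, p') should come out as roughly x - 1
p = np.array([1.0, 0.0, -3.0, 2.0])
print(poly_gcd(p, np.polyder(p)))
```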

But AFAICS, if you ever run into a situation where you do not have enough precision to sufficiently bound a non-zero leading coefficient (as certified by the modular computation) away from zero, you will have trouble and need to rely on guesswork (e.g., heuristics for root distances or magic epsilons in your code).

akobel
  • "I assume your input consists of polynomials in $\mathbb Q[x]$" - The OP commented that the coefficients are floating-point numbers, not exact rational numbers. – mr_e_man Jun 01 '22 at 19:34
  • Floating point numbers are always binary fractions of the form $m \cdot 2^e$ with $m, e \in \mathbb{Z}$, so floats themselves are certainly exact rationals. But the OP - if I understand correctly - asked for something different: given some polynomial with real (rational?) coefficients, how to compute its square-free part by a GCD computation with interim floating point values. That is, the input is exact, but he wants to do the computation without exact arithmetic. I only based the assumption that the input has rational coefficients on OP's goal of root isolation using Vincent's method. – akobel Jun 03 '22 at 06:09
  • If the input is not even available exactly - that is, the input is given with floating-point coefficients, but in reality it is just an approximation to the exact input - the problem changes somewhat. Note that in this case, you cannot distinguish between the infinitely many exact input candidates in the immediate neighborhood (round-off threshold) of the approximate input. – akobel Jun 03 '22 at 06:17
  • Option 1: You consider the approximation as the one and only input, i.e., as exact. In this case, you can proceed as above. Option 2: You try to give the "usual" GCD that holds for almost all polynomials in the neighborhood; that is, you (almost) always return 1, barring trivial cases like constant polynomials. Option 3: You try to give the "largest"/highest-degree GCD that can be realized for some input in the neighborhood. That's way more complicated, both to specify and to compute, and truly in the "GCD of approximate polynomials" turf. But probably beyond OP's scope of root finding. – akobel Jun 03 '22 at 06:17