The Bernstein representation is great for building an intuition of how and why the algorithms based on Descartes' rule of signs work: essentially, the signs of the coefficients of the "localized" polynomial computed in the classical VCA method show whether the corresponding vertices of the Bézier control polygon lie above or below the $x$-axis. If there is no sign change, all control points lie in either the positive or the negative half-plane, and by the convex hull property of Bézier curves, the polynomial cannot have a root there. If there is exactly one sign change, there has to be a root (the polynomial curve interpolates the endpoints of the control polygon, which lie on opposite sides of the axis), and if you work out the details, you will see that there can be only one root.
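To make the counting concrete, here is a minimal sketch in Python (the helper name is mine) of the sign-variation count that both the Descartes/VCA test and the Bernstein test rely on:

    def sign_variations(coeffs):
        """Count sign changes in a coefficient sequence, ignoring zeros.
        0 variations: no root in the corresponding interval;
        1 variation: exactly one root (the control polygon crosses the axis once)."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(a != b for a, b in zip(signs, signs[1:]))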
On the other hand, the sign sequences in the Bernstein and the VCA approaches are identical; in fact, all coefficients of the polynomials considered are the same up to scaling by (obviously positive) binomial coefficients.
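If you want to convince yourself of this, the following sketch (my own code, assuming exact rational arithmetic and the reference interval $[0,1]$) computes the Bernstein coefficients of $p$ and the coefficients of the VCA test polynomial $(1+x)^n \, p(1/(1+x))$ and checks that they agree up to positive binomial factors and a reversal of the order, which leaves the number of sign changes unchanged:

    from fractions import Fraction
    from math import comb

    def bernstein_coeffs(a):
        """Bernstein coefficients on [0,1] of p(x) = sum a[i] x^i:
        b[j] = sum_{i <= j} C(j,i)/C(n,i) * a[i]."""
        n = len(a) - 1
        return [sum(Fraction(comb(j, i), comb(n, i)) * a[i] for i in range(j + 1))
                for j in range(n + 1)]

    def vca_test_coeffs(a):
        """Coefficients of q(x) = (1+x)^n * p(1/(1+x)), whose sign variations
        the classical Descartes/VCA test counts for the interval (0,1)."""
        n = len(a) - 1
        q = [Fraction(0)] * (n + 1)
        for i, ai in enumerate(a):            # add a_i * (1+x)^(n-i)
            for k in range(n - i + 1):
                q[k] += ai * comb(n - i, k)
        return q

    a = [Fraction(c) for c in (-1, 5, -8, 3)]     # an arbitrary example
    b, q, n = bernstein_coeffs(a), vca_test_coeffs(a), len(a) - 1
    assert all(q[k] == comb(n, k) * b[n - k] for k in range(n + 1))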
I have yet to find a conclusive proof of why the Bernstein approach should be beneficial for general polynomials. It is true that the de Casteljau scheme is very stable, but this ignores the initial transformation to the Bernstein basis. On the other hand, a Taylor shift with a Horner-like scheme is very stable as well, and
    for j in range(n - 1, -1, -1):    # n = deg(p); p[i] holds the coefficient of x^i
        for i in range(j, n):
            p[i] += p[i + 1]
(the code for a "naive" Taylor shift by $1$ of a polynomial $p = \sum_{i=0}^n p_i x^i$, using the Ruffini-Horner scheme) looks so suspiciously close to the de Casteljau scheme that I'm not convinced the latter should be inherently more stable.
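For comparison, here is what an in-place de Casteljau subdivision at a parameter $t$ looks like when written in the same style; this is my own sketch, assuming b[0..n] holds the Bernstein coefficients on the current interval, which afterwards are replaced by those of the right part $[t, 1]$:

    for r in range(1, n + 1):          # a triangular double loop of the same flavor,
        for i in range(n - r + 1):     # with a convex combination instead of a plain sum
            b[i] = (1 - t) * b[i] + t * b[i + 1]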
Furthermore, as far as I know, Fabrice Rouillier's empirical analysis of both approaches using his RS framework showed no significant differences. You should have a look at "Efficient Isolation of a Polynomial’s Real Roots" by Rouillier and Zimmermann, and also at "Bernstein’s basis and real root isolation" by Mourrain, Rouillier and Roy. I could not quickly dig out a real-life benchmark of equally sophisticated implementations of both approaches, though.
Note: Rouillier's RS library (also the default in Maple) is widely considered the state-of-the-art benchmark for real root isolation, and it does not use the Bernstein basis by default (although there is an implementation of it). Carl Witty's implementation of real_roots in SAGE is almost on par, and it does use the Bernstein basis and de Casteljau. I leave the interpretation up to you.
I can only guess, but I assume that the somewhat bad reputation of the classical approach stems from the fact that implementations using it were traditionally designed by the computer algebra community rather than by numerical analysts, and thus tuned towards exact arithmetic. I have no doubt that the de Casteljau scheme yields more accurate results in machine precision than any fast Taylor shift algorithm. But I am also convinced that one can make the de Casteljau algorithm asymptotically fast (trivially, by converting to the monomial basis, using a fast Taylor shift, and converting back; but I guess there is work on a "native" fast version if you search hard enough). I assume that its stability will then be almost identical to that of the Taylor shift version.
My own approach would be: don't bother converting to the Bernstein basis something that is given in the monomial basis, and don't bother converting to the monomial basis something that is given in the Bernstein basis, unless you measure and find a very convincing reason to do so.
If you find that you can solve an instance using one approach but not the other, first see whether a mildly more complex function still works before you declare one method the winner. Try to scale all coefficients by a common factor such that their magnitudes are as close to 1 as possible; this is where floating-point calculations tend to behave best. And see whether using something like an xdouble (from Shoup's NTL) or a qd (from Bailey's high-precision packages) remedies your issues.
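As a sketch of the kind of scaling I mean (my own helper, not part of any of the libraries above): multiplying all coefficients by a common power of two is exact in binary floating point and does not move the roots, so you can always bring the largest coefficient magnitude into $[1, 2)$:

    import math

    def normalize_coefficients(p):
        """Scale the coefficient list p by a power of two so that the largest
        absolute coefficient lies in [1, 2).  Scaling by 2**k is exact in binary
        floating point and leaves the roots (and all sign variations) unchanged."""
        largest = max(abs(c) for c in p)
        _, e = math.frexp(largest)         # largest = m * 2**e with 0.5 <= m < 1
        return [math.ldexp(c, 1 - e) for c in p]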