As far as I know, there is no certified and complete method that can determine arbitrary intersections without transformations and implicitization. However, bounding box checks, combined with repeated subdivision of the curves down to some application-dependent resolution, are excellent filters for curves in general position.
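To illustrate, here is a minimal sketch of such a filter for planar Bézier curves (the function names are mine, not from any particular library): it exploits the convex hull property (the curve lies inside the bounding box of its control points) and de Casteljau subdivision.

```python
# Bounding-box subdivision filter for Bezier curve intersection (a sketch).
# Curves are given as lists of 2D control points; by the convex hull
# property, each curve lies inside the bounding box of its control points.

def bbox(ctrl):
    xs = [p[0] for p in ctrl]
    ys = [p[1] for p in ctrl]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def subdivide(ctrl, t=0.5):
    """de Casteljau subdivision at parameter t: returns the two halves."""
    left, right = [], []
    pts = list(ctrl)
    while pts:
        left.append(pts[0])
        right.append(pts[-1])
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return left, right[::-1]

def may_intersect(c1, c2, depth=20, eps=1e-9):
    """Conservative filter: False means provably no intersection;
    True means the boxes still overlap at the cutoff resolution."""
    b1, b2 = bbox(c1), bbox(c2)
    if not boxes_overlap(b1, b2):
        return False
    small = (b1[2] - b1[0] < eps and b1[3] - b1[1] < eps and
             b2[2] - b2[0] < eps and b2[3] - b2[1] < eps)
    if depth == 0 or small:
        return True
    l1, r1 = subdivide(c1)
    l2, r2 = subdivide(c2)
    return any(may_intersect(a, b, depth - 1, eps)
               for a in (l1, r1) for b in (l2, r2))
```

Note that a `True` answer is only "maybe": exactly in the (near-)tangential cases discussed below, the boxes keep overlapping at every resolution and the filter cannot decide.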
As you already noticed, a purely numerical method (including subdivision, when you only use fixed-precision floating-point arithmetic) will not be able to reliably handle degeneracies like (nearly) tangential intersections. The argument is the same as in the univariate case, except that the precision bounds are even worse in higher dimensions.
For bivariate solving, the main methods to analyze and intersect (pairs of) algebraic curves in the monomial basis are the RUR (rational univariate representation) by Fabrice Rouillier et al. and CAD (cylindrical algebraic decomposition).
The former involves some not-so-lightweight algebraic machinery like Gröbner bases; the latter can be done with GCD and resultant computations only. Note that I'm oh-so-slightly biased both as an author of a state-of-the-art variant of the latter method and as a coauthor of Fabrice. Obviously, algebraic curve analysis is a well-studied topic, and I refer you to the small selection from the vast literature that is cited in that paper.
I also met and worked with Chee Yap, the author of the paper you mentioned, but I didn't know this particular work. From a very, very quick glance, he needs a genericity assumption that ensures that sufficient refinement eventually reveals only ordinary intersections. In particular, the method does not seem to handle tangential intersections.
In both variants, any complete approach will require exact or bigfloat arithmetic, but that alone will not give you a complete algorithm unless you compute with a priori worst-case precision bounds. More elegant formulations use adaptive precision handling and combine symbolic and numeric computations. The crucial ingredients are, again, univariate polynomial root isolation as well as GCD, (sub-)resultants and/or Gröbner basis computations, depending on your favorite method.
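To make the projection step concrete, here is a toy sketch of resultant-based intersection of two curves in implicit form, using SymPy's exact arithmetic (this is the generic textbook recipe, not any of the specific algorithms mentioned above, and it glosses over the degenerate cases — common components, vanishing leading coefficients — that a complete solver must handle):

```python
# Resultant-based projection for intersecting two algebraic curves
# f(x, y) = 0 and g(x, y) = 0 -- a sketch using SymPy's exact arithmetic.
from sympy import symbols, resultant, real_roots, Poly, gcd

x, y = symbols('x y')

# Toy example in general position: unit circle and a secant line.
f = x**2 + y**2 - 1
g = x + y - 1

# Eliminate y: the resultant vanishes exactly at the x-coordinates of
# common solutions (assuming no common component and non-vanishing
# leading coefficients -- the degenerate cases need extra care).
res_x = resultant(f, g, y)          # a univariate polynomial in x
xs = real_roots(Poly(res_x, x))     # exact, isolated real roots

# For each candidate x, intersect the fibers via a univariate GCD in y.
solutions = []
for xv in xs:
    h = gcd(Poly(f.subs(x, xv), y), Poly(g.subs(x, xv), y))
    for yv in real_roots(h):
        solutions.append((xv, yv))

print(solutions)   # [(0, 1), (1, 0)]
```

The hard part a complete algorithm adds on top of this is exactly what the answer describes: certified root isolation of `res_x` at adaptive precision, and a correct treatment of multiple roots via subresultants or GCDs.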
I vaguely remember rumours about resultant computations in the Bernstein basis, but IIRC, none of them has been investigated and developed to the point of a complete solver. From a theoretical point of view, there is some evidence and there are conjectures – but no proof! – that conversion to implicit form is not a bottleneck step (and that, indeed, the previously mentioned method might be asymptotically optimal). Again, the conjecture basically boils down to univariate root isolation complexity and precision demand bounds. In practice, the situation might be different; but I guess that the difference is negligible for the degenerate or "almost degenerate" situations that remain after some filter steps (essentially, intersection of the convex hulls of the control polygons).
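Such a convex-hull filter can be sketched as a separating-axis test on the control points (my own toy code, not from a library; an exact filter would use rational rather than floating-point arithmetic):

```python
# Convex-hull filter for Bezier curves: if the convex hulls of the two
# control polygons are disjoint, the curves cannot intersect.
# Separating-axis test over the edge normals of both control polygons.
# A True answer is always safe: a separating axis for the control points
# separates the hulls, hence the curves. A missed axis (control points
# not in convex position) only yields a conservative False.

def project(points, axis):
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in points]
    return min(dots), max(dots)

def hulls_disjoint(ctrl1, ctrl2):
    """True only if the convex hulls of the control points are disjoint."""
    for pts in (ctrl1, ctrl2):
        n = len(pts)
        for i in range(n):
            ex = pts[(i + 1) % n][0] - pts[i][0]
            ey = pts[(i + 1) % n][1] - pts[i][1]
            axis = (-ey, ex)                  # outward/inward edge normal
            lo1, hi1 = project(ctrl1, axis)
            lo2, hi2 = project(ctrl2, axis)
            if hi1 < lo2 or hi2 < lo1:        # projections don't overlap
                return True
    return False
```

This prunes more pairs than axis-aligned bounding boxes, at a modestly higher cost per test; everything that survives it is handed to the exact machinery.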
BTW, for implementations, have a look at the CGAL project; it features a library for 2D arrangements according to the exact geometric computation paradigm. (Again, I'm biased because... You know. Just one of many, many wheels here in all those developments.)