Suppose there is a polynomial:

$p(x) = a_0x^n + a_1x^{n-1} + ... + a_{n-1}x + a_{n}$

I would like to "shift" it (I'm not sure what the right term is) by substituting some other expression for $x$.

What I mean is that I would like to "transform" the polynomial by replacing the $x$ by $x+1$, for example:

$p(x+1) = a_0(x+1)^n + a_1(x+1)^{n-1} + ... + a_{n-1}(x+1) + a_{n}$

I could do it by expanding each term and then summing the terms of the same order. For example:

$x^2 - 1$ ---> $(x+1)^2 - 1$ ---> $(x^2 + 2x + 1) - 1$ ---> $x^2 + 2x$

My question is: is it possible to do such a transformation "quickly"? Currently I have to expand every term, add each of its coefficients to the coefficient of the same degree from the other terms, and so on. Isn't there a trick which would turn the coefficients $[1, 0, -1]$ into $[1, 2, 0]$ in one go?

If there were such a trick for the above $x+1$ substitution, would there be similar tricks for other substitutions, too, for example $1/x$? Is there a general method for constructing some kind of substitution which turns one set of coefficients into another?

Ecir Hana
  • I'm guessing this has something to do with Taylor expansions. If $x\to x+b$ is a transformation, then $p(x+b)$ is given by the Taylor expansion $\sum^\infty_{i=0}\frac{p^{(i)}(x)}{i!}b^i$. This converges trivially, since $p(x)$ is a polynomial: $p^{(i)}(x)=0$ for $i>n$, so the series terminates. –  Mar 01 '14 at 00:20
  • Evaluating at $1/x$ is trivial. Exchange the coefficients left-right and divide by $x^n$. For example, $t^2-3t+2|_{t=1/x}=(2x^2-3x+1)/x^2$. –  Jul 27 '15 at 09:25
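
The left-right exchange described in the last comment, as code (the function name is a made-up illustration):

```python
def reverse_for_reciprocal(coeffs):
    """Given the coefficients of p (highest degree first), return the
    coefficients of q(x) = x^n * p(1/x): simply the list reversed."""
    return list(reversed(coeffs))

# t^2 - 3t + 2  ->  (2x^2 - 3x + 1)/x^2, numerator coefficients:
print(reverse_for_reciprocal([1, -3, 2]))  # [2, -3, 1]
```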

4 Answers


This is commonly referred to as a "Taylor shift", for the reasons already given in the comments. Unfortunately, a web search typically reveals way more results for "Taylor Swift"...
[edit]Funny enough, this holds even after you click on "Search instead for taylor shift" on Google. On the other hand, by now this very thread scored #1 for the query "taylor shift polynomial". However, "taylor swift polynomial" brings you to the "Taylor Polynomial Series formula" on Uncyclopedia, which is described as “A Country/Western musical formula invented by Taylor Swift”, as well as a Polynomials Song, a parody on Taylor Swift's "Everything has Changed". Well, plus ça change, plus c'est la même chose...[/edit]

There is a short survey of complexity results for Taylor shifts in chapter 4 of Jürgen Gerhard's Modular Algorithms in Symbolic Summation and Symbolic Integration (Springer, Lecture Notes in Computer Science, Vol. 3218, 2004; DOI: 10.1007/b104035), with pointers to the original research papers. Also, look into Computer Algebra textbooks; good buzzwords are "Taylor shift", "fast polynomial division with remainder and applications", and "FFT-based polynomial multipoint evaluation and interpolation".

Bottom line: using, e.g., a divide-and-conquer approach, you can save roughly a factor of $n$ over the naive quadratic method. That brings the cost down to $\tilde O(n)$ arithmetic operations in the coefficient ring, where polylogarithmic factors are ignored. Note that these are arithmetic operations; the bit complexity will be higher by a factor of $n$ and will, obviously, also depend on the size (magnitude and/or precision) of the input.

The fast algorithms are, conceptually, not too involved. But they require a fair number of asymptotically fast subroutines for polynomial arithmetic, so it's not for the faint-hearted. And when it comes to an actual implementation: that could well be a totally different beast. The naive algorithm will work quite well for low degrees; the performance of the fast algorithms depends crucially on the underlying implementation of fast polynomial arithmetic. That is to say, don't do it on your own, but look for libraries like Victor Shoup's NTL, William Hart's FLINT, Fredrik Johansson's Arb, Andreas Enge's MPFRCX, or a full-grown computer algebra system.
Also, if you are only interested in shifts by 1 ($x \mapsto x+1$), the Horner-scheme-like Taylor shift can be made multiplication-free, which significantly raises the threshold at which the asymptotically fast methods take the lead.
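
A minimal sketch of that Horner-style Taylor shift (the function name is mine; coefficients are listed with the leading one first, as in the question):

```python
def taylor_shift(coeffs, h):
    """Return the coefficients of p(x + h), given those of p(x).

    `coeffs` is [a_0, ..., a_n] for p(x) = a_0*x^n + ... + a_n.
    The loop performs n synthetic divisions by (x - h) in place,
    costing O(n^2) operations in the coefficient ring.
    """
    c = list(coeffs)
    n = len(c) - 1
    for i in range(n):
        for j in range(1, n + 1 - i):
            c[j] += h * c[j - 1]
    return c

# The example from the question: x^2 - 1  ->  x^2 + 2x
print(taylor_shift([1, 0, -1], 1))  # [1, 2, 0]
```

For $h = 1$ the update `c[j] += h * c[j - 1]` degenerates to a plain addition, which is the multiplication-free variant mentioned above.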

[edit] I forgot to mention: if you substitute $1/x$ for $x$, you end up with a rational function. But you might want to get $q(x) = x^n p(1/x)$, and this is simply the polynomial with the coefficients in reverse order, i.e. $q_i = p_{n-i}$. Also, a scaling ($x \mapsto s\cdot x$) is rather simple to achieve: just scale the $i$-th coefficient by $s^i$. This is especially cheap if $s$ is a power of two (then the multiplication is just a bitshift). For an arbitrary linear substitution $x \mapsto a\cdot x+b$ or Möbius transform $x \mapsto \frac{a\cdot x +b}{c\cdot x + d}$, decompose it into such simpler steps. For a higher-order composition (i.e., computing $(f \circ g)(x) = f(g(x))$, where both $f$ and $g$ are non-linear polynomials), you should probably look into multipoint-evaluation- and interpolation-based algorithms.[/edit]
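
The scaling and the decomposition of $x \mapsto a\cdot x + b$ can be sketched as follows (function names are mine; the shift step uses the quadratic Horner-style scheme):

```python
def taylor_shift(coeffs, h):
    """p(x) -> p(x + h) by repeated synthetic division; O(n^2).
    Coefficients are listed with the leading one first."""
    c = list(coeffs)
    n = len(c) - 1
    for i in range(n):
        for j in range(1, n + 1 - i):
            c[j] += h * c[j - 1]
    return c

def scale(coeffs, s):
    """p(x) -> p(s*x): multiply the degree-i coefficient by s^i."""
    n = len(coeffs) - 1
    return [a * s ** (n - k) for k, a in enumerate(coeffs)]

def compose_linear(coeffs, a, b):
    """p(x) -> p(a*x + b): shift by b first, then scale by a."""
    return scale(taylor_shift(coeffs, b), a)

# p(x) = x^2 - 1:  p(2x + 1) = (2x + 1)^2 - 1 = 4x^2 + 4x
print(compose_linear([1, 0, -1], 2, 1))  # [4, 4, 0]
```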

akobel
  • So many useful pointers, thanks a lot! – Ecir Hana Jul 27 '15 at 11:32
  • You're welcome. BTW, since you seem to have asked the question in the context of root finding: You probably know already that all these terms pop up in many descriptions of real root solving algorithms using Descartes' rule of signs. E.g., the Vincent-Collins-Akritas algorithm or Uspensky's method (lots of confusion out there about the names). You should also have a look into Fabrice Rouillier's RS library and Michael Sagraloff's publications (disclaimer: I'm working in Michael's group), and you can find good (and free-of-charge) implementations in RS or SAGE's real_roots package. – akobel Jul 27 '15 at 14:45
  • You won't believe how glad I am that I finally met someone who works on root finding based on Budan's theorem and similar techniques. I have so many questions; please, would you be so kind as to take a look at the few I just posted? Here, here, and here. Thanks a lot in advance! – Ecir Hana Jul 27 '15 at 17:43
  • @EcirHana: I can (try to) give you answers to all of them, but if I go into sufficient detail, I might end up summarizing a bunch of papers and theses in the field. And if I don't go into details, you might not find what you are looking for. So: what is your goal? A (re-)implementation of a real root solver? Where will it be used? What are the expected degrees, the expected input? If you explain the background, it will be much easier to give an answer that helps you, or to give you pointers to publications that are reasonably self-contained and accessible. – akobel Jul 27 '15 at 22:02
  • I certainly don't want to bother you too much. I would be grateful for any information you might provide, but it seems reasonable to start small, and I can always post a comment or two later, if you don't mind. Yes, I would like to implement a root finder myself, but nothing comparable to the research-level solvers out there: degrees of only a couple of tens, simplicity over asymptotic speed, floating point over arbitrary precision or rationals. VCA and the like seem to me like the best option I have for real coefficients and real roots, but I'm just an amateur stumbling in the dark. – Ecir Hana Jul 27 '15 at 23:26
  • Okay. I agree that VCA-like methods will likely be the easiest options. Sturm's method is algebraically involved and difficult to implement in floating point arithmetic (experiment with POV-Ray to see some nice pictures where this fails when the degree goes up, despite the fact that they did a good job with its implementation), and EVAL-like methods are unlikely to beat them. – akobel Jul 28 '15 at 07:24
  • @EcirHana: I just saw your comment on us working on ANewDsc. For news on that front, have a look at our website. Cheers! – akobel Aug 25 '16 at 16:00
  • I saw the site! Ever since you answered my questions some time ago I try to follow your research here and there. Your explanations are very helpful, I cannot thank you enough! Btw., would it be possible to see the source code for ANewDsc, ARCaVoID or Bisolve? I would understood if not, just asking because I think I only know of MPSolve's source and sometimes it would quite help to see how are the techniques from papers really implemented... – Ecir Hana Aug 28 '16 at 09:45
  • In a nutshell: the sources of none of the projects are available for public download currently, for a variety of reasons. However, at least for ANewDsc and Arcavoid, none of the authors has an interest in keeping them secret, rather the contrary. Feel free to contact me privately via mail for details. – akobel Aug 28 '16 at 20:10
  • Update: Arcavoid is available in its ancient form in the public CGAL repo on GitHub. No warranty at all for anything; and no time for support here, sorry. – akobel Oct 10 '16 at 10:00
  • Thank you so much! It's awesome that one can now compare the thesis and the implementation! – Ecir Hana Oct 11 '16 at 09:02

Yes, there is. The trick is Taylor expansion. If you want to shift $p(x)=x^2-1$ to $p(x+1)$, take the Taylor expansion at the point $x-1$ with $\Delta x=x-(x-1)=1$: $$\begin{align}p(x)&=p(x-1)+p'(x-1)\Delta x+\frac12p''(x-1)\Delta x^2\\&=(x-1)^2-1+2(x-1)+1\\&=(x-1)^2+2(x-1)\end{align}$$ Don't expand it. Just replace $x$ by $x+1$ to get $$p(x+1)=x^2+2x$$

Note that this trick works for analytic functions in general, but it beats direct substitution mainly for polynomials, since their Taylor expansion has only finitely many terms and their derivatives are easy to compute.
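
In Python, this derivative-based scheme might look as follows (names are mine; it computes the $k$-th shifted coefficient as $p^{(k)}(h)/k!$):

```python
from math import factorial

def derivative(coeffs):
    """Differentiate p; coefficients listed with the leading one first."""
    n = len(coeffs) - 1
    return [a * (n - k) for k, a in enumerate(coeffs[:-1])]

def horner(coeffs, x):
    """Evaluate p(x) by Horner's rule."""
    v = 0
    for a in coeffs:
        v = v * x + a
    return v

def taylor_shift_via_derivatives(coeffs, h):
    """Coefficients of p(x + h): the k-th Taylor coefficient at h
    is p^(k)(h)/k!.  Result is again listed leading-first."""
    out, d = [], list(coeffs)
    for k in range(len(coeffs)):
        out.append(horner(d, h) / factorial(k))
        d = derivative(d)
    return out[::-1]

print(taylor_shift_via_derivatives([1, 0, -1], 1))  # [1.0, 2.0, 0.0]
```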

Shuchang
  • Just to check I understood: if I wanted to substitute $x-2$ instead, the first line would become: $p(x) = p(x+2) + p'(x+2)2 + \frac12p''(x+2)4$, correct? – Ecir Hana Mar 01 '14 at 01:02
  • @EcirHana Exactly – Shuchang Mar 01 '14 at 01:05
  • If you want to get the coefficients of $p(x+1)$, you do Taylor expansion of $p$ at the point $x_0=1$. So that $$p(x+1)=p(1)+p'(1)x+\tfrac12p''(1)x^2+...+\tfrac1{n!}p^{(n)}(1)x^n.$$ Likewise $$p(x-2)=p(-2)+p'(-2)x+...+\tfrac1{n!}p^{(n)}(-2)x^n.$$ Evaluation of the $k$-th derivative has a cost $O(n-k)$, all derivatives are computed in $O(n^2)$. – Lutz Lehmann Mar 01 '14 at 22:39
  • In other words, your method for computing $p(x+h)$ is to take $p(x)$, construct all derivatives $p'(x)$, $p''(x)$, ..., $p^{(n)}(x)$, and then use Taylor in the "wrong" direction as $$p(x+h)=p(x)+p'(x)h+p''(x)\tfrac12h^2+...+p^{(n)}(x)\tfrac1{n!}h^n.$$ I'm sure there is a clever way to do that, but I only see an implementation for this that has about double the number of arithmetic operations of the Horner-based method. – Lutz Lehmann Mar 01 '14 at 22:48
  • So, is this the solution to problems like shifting the center of a Taylor series? For example, given $f(x)=\sin x$: the Maclaurin series for $\sin x$ at $x=0$ is known, and I want to figure out the Taylor series at $x=\pi/2$. – Ivy Aug 19 '17 at 01:36

Of course, Taylor expansion, for instance by repeated application of the Horner scheme, is about as fast as iteratively expanding the binomials and adding them up with the coefficients; that is, it requires $O(n^2)$ operations.

Schönhage proposed a faster approach using FFT techniques (integer or floating point), based on the binomial expansion

$$\sum a_k(x+h)^k=\sum_j\left(\sum_k (k!\,a_k)\frac{h^{k-j}}{(k-j)!}\right)\frac{x^{j}}{j!}$$

where the inner sum can be interpreted as part of a convolution product of the sequences $(n!\,a_n,(n-1)!\,a_{n-1},...3!\,a_3,2!\,a_2, a_1, a_0)$ and $(1,h,\frac{h^2}{2!},\frac{h^3}{3!},...,\frac{h^n}{n!})$.
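
A direct evaluation of that convolution might look like this (ascending coefficient order, to match the formula; the loop is deliberately quadratic to show the structure, whereas the actual speedup comes from computing the product with FFT-based multiplication):

```python
from math import factorial

def taylor_shift_convolution(coeffs, h):
    """p(x) -> p(x + h) via Schoenhage's convolution form.

    `coeffs` is ascending here: a_k is the coefficient of x^k.
    The inner sum  b_j * j! = sum_{k >= j} (k! a_k) h^(k-j)/(k-j)!
    is one slice of the convolution of the two sequences above.
    """
    n = len(coeffs) - 1
    f = [factorial(k) * a for k, a in enumerate(coeffs)]
    g = [h ** m / factorial(m) for m in range(n + 1)]
    return [sum(f[k] * g[k - j] for k in range(j, n + 1)) / factorial(j)
            for j in range(n + 1)]

# p(x) = -1 + x^2 (ascending [-1, 0, 1]);  p(x + 1) = 2x + x^2
print(taylor_shift_convolution([-1, 0, 1], 1))  # [0.0, 2.0, 1.0]
```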

Lutz Lehmann

Found this while researching a similar project, and I had a thought to share. If $n$ is large and $a$ arbitrary, then calculating $p_1(x)=p(x-a)$ by actually expanding each term may well be more expensive than simply choosing $n+1$ distinct $x$-values $\{x_k\}$, evaluating $p$ at each value to get the collection $\{p(x_k)\}$, and then interpolating to find the polynomial $p_1(x)$ passing through the points $\{(x_k+a,\,p(x_k))\}$. Because the polynomial of degree at most $n$ passing through $n+1$ distinct points is unique, this interpolant is exactly the desired shifted polynomial (up to numerical error, if floating-point arithmetic is used). The tricky part would be choosing the collection $\{x_k\}$ well.
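
A sketch of this evaluate-and-interpolate idea (all names are mine, and equally spaced sample points are assumed for simplicity; in floating point, better-conditioned nodes such as Chebyshev points are preferable):

```python
def poly_mul(p, q):
    """Multiply two polynomials given leading-coefficient-first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def interpolate(points):
    """Monomial coefficients (leading-first) of the Lagrange interpolant."""
    n = len(points)
    result = [0.0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1.0], 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul(basis, [1.0, -xj])  # times (x - xj)
                denom *= xi - xj
        for k in range(n):
            result[k] += yi * basis[k] / denom
    return result

def shift_by_interpolation(coeffs, a):
    """p(x) -> p(x - a): sample p at n+1 nodes x_k, then interpolate
    through the shifted points (x_k + a, p(x_k))."""
    n = len(coeffs) - 1
    def p(x):
        v = 0.0
        for c in coeffs:
            v = v * x + c
        return v
    nodes = [float(k) for k in range(n + 1)]
    return interpolate([(x + a, p(x)) for x in nodes])

# a = -1 gives p(x + 1):  x^2 - 1  ->  x^2 + 2x
print(shift_by_interpolation([1, 0, -1], -1))  # ~[1.0, 2.0, 0.0]
```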