
If I have two generating functions $A(x) = \sum_na_nx^n$ and $B(x) = \sum_n b_nx^n$ then the Hadamard product is $(A \star B)(x) = \sum_{n} a_nb_nx^n$.

Now, when $A$ and $B$ are both rational functions ($P(x)/Q(x)$ with polynomials $P, Q$), I've seen non-constructive proofs that $A \star B$ is also rational.

Given two rational generating functions $A$, $B$, is there an algorithm that efficiently computes $A \star B$? Or at least a constructive proof you could use to write an algorithm?

orlp

2 Answers


Yes, here's one way to do it. Suppose $A(x) = \frac{P(x)}{Q(x)}$ and $B(x) = \frac{R(x)}{S(x)}$ where $P, Q, R, S$ are polynomials, and WLOG $Q$ and $S$ have constant term $1$. Write

$$Q(x) = \prod_i (1 - \lambda_i x), \qquad S(x) = \prod_j (1 - \mu_j x).$$

Then we can write $a_n$ as a sum of terms of the form $p(n) \lambda_i^n$ where $p$ is a polynomial, and similarly for $b_n$ and the $\mu_j$. The product of two such terms is another such term, now with exponential base $\lambda_i \mu_j$. From this you can show that $(A \star B)(x)$ is rational with denominator

$$(Q \otimes S)(x) = \prod_{i, j} (1 - \lambda_i \mu_j x).$$

This polynomial can be computed fairly explicitly as

$$\det (I - (M \otimes N) x)$$

where $\otimes$ is the Kronecker product or tensor product of matrices and $M$ is the companion matrix of $x^{\deg Q} Q \left( \frac{1}{x} \right)$, and similarly for $N$ and $S$; in particular, its coefficients are polynomials in the coefficients of $Q$ and $S$. Once you know the denominator, the numerator is determined by compatibility with initial conditions.
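Here is a short SymPy sketch of this recipe (my own transcription, not code from the answer): build the companion matrices of the reversed denominators, take their Kronecker product to get $\det(I - (M \otimes N)x)$, and recover the numerator from the first $\deg Q \cdot \deg S$ coefficients of the product series. It assumes both inputs are proper (numerator degree less than denominator degree), the case discussed in the comments below; the names `hadamard`, `companion_of_reversed` and `kron` are just illustrative.

```python
import sympy as sp

x = sp.symbols('x')

def companion_of_reversed(Q):
    """Companion matrix of x^deg(Q) * Q(1/x), for a polynomial Q with Q(0) != 0."""
    q = sp.Poly(Q, x).all_coeffs()[::-1]        # [q_0, q_1, ..., q_d], lowest degree first
    q = [c / q[0] for c in q]                   # normalise so that Q(0) = 1
    d = len(q) - 1
    M = sp.zeros(d, d)
    for i in range(d - 1):
        M[i, i + 1] = 1                         # superdiagonal of ones
    for j in range(d):
        M[d - 1, j] = -q[d - j]                 # bottom row of the companion matrix
    return M

def kron(A, B):
    """Kronecker (tensor) product of two explicit SymPy matrices."""
    m, n = A.shape
    p, q = B.shape
    return sp.Matrix(m * p, n * q, lambda i, j: A[i // p, j // q] * B[i % p, j % q])

def hadamard(A, B):
    """Hadamard product of two proper rational generating functions of x."""
    _, Q = sp.fraction(sp.cancel(A))
    _, S = sp.fraction(sp.cancel(B))
    K = kron(companion_of_reversed(Q), companion_of_reversed(S))
    D = sp.expand((sp.eye(K.rows) - x * K).det())      # denominator det(I - (M⊗N)x)
    d = int(sp.degree(D, x))
    # Numerator from initial conditions: multiply D by the first d terms of A ⋆ B
    # and keep only the terms of degree < d (the rest is truncation noise).
    sA = sp.series(A, x, 0, d).removeO()
    sB = sp.series(B, x, 0, d).removeO()
    trunc = sum(sA.coeff(x, k) * sB.coeff(x, k) * x**k for k in range(d))
    num = sp.expand(D * trunc)
    num = sum(num.coeff(x, k) * x**k for k in range(d))
    return sp.cancel(num / D)

# Hadamard square of the Fibonacci generating function; agrees with the worked
# example below, up to cancellation of a common factor 1 + x.
print(hadamard(x / (1 - x - x**2), x / (1 - x - x**2)))
```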

Edit, 6/24/21: I've sketched this argument out in a bit more detail here.


For example, suppose we wanted to compute the Hadamard square of the generating function $\frac{x}{1 - x - x^2}$ of the Fibonacci numbers, namely

$$\sum_n F_n^2 x^n = x + x^2 + 4x^3 + 9x^4 + 25x^5 + \dots$$

The relevant companion matrix is $M = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 1 \end{array} \right]$, and its tensor square is

$$M \otimes M = \left[ \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{array} \right].$$

We compute that

$$\det (I - (M \otimes M) x) = 1 - x - 4x^2 - x^3 + x^4$$

so this is the denominator of the generating function. We can compute the numerator by multiplying the denominator by the first few terms of the series $\sum F_n^2 x^n$ which gives

$$(1 - x - 4x^2 - x^3 + x^4)(x + x^2 + 4x^3 + 9x^4 + \dots) = x - x^3$$

which gives the final answer

$$\boxed{ \sum F_n^2 x^n = \frac{x - x^3}{1 - x - 4x^2 - x^3 + x^4} }.$$

You can check this to however many terms you like in WolframAlpha.
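A couple of lines of SymPy (just an alternative to the WolframAlpha check, with `sympy.fibonacci` supplying the reference values) compare the Taylor coefficients of the boxed fraction with $F_n^2$:

```python
import sympy as sp

x = sp.symbols('x')
gf = (x - x**3) / (1 - x - 4*x**2 - x**3 + x**4)
s = sp.series(gf, x, 0, 12).removeO()
print([s.coeff(x, n) for n in range(12)])        # 0, 1, 1, 4, 9, 25, 64, ...
print([sp.fibonacci(n)**2 for n in range(12)])   # the same values
```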


You can also bypass the last step of computing the numerator using initial conditions by doing a little extra work ahead of time. First, show that every series with a rational generating function (edit: whose numerator has degree less than its denominator) can be written in the form

$$a_n = u^T M^n v$$

for some square matrix $M$ and some vectors $u, v$ (you can take $M$ to be the companion matrix above, which is why I'm using the same letter for it). The generating function of such a series can therefore be written

$$A(x) = u_a^T (I - Mx)^{-1} v_a$$

which you can compute explicitly using Cramer's rule; among other things, this shows that the denominator of $A(x)$ is $\det (I - Mx)$, as expected. Suppose we've also written $b$ this way, so

$$b_n = u_b^T N^n v_b, \qquad B(x) = u_b^T (I - Nx)^{-1} v_b.$$

Then it's an exercise to show that

$$a_n b_n = (u_a \otimes u_b)^T (M \otimes N)^n (v_a \otimes v_b)$$

from which it follows that

$$\boxed{ (A \star B)(x) = (u_a \otimes u_b)^T (I - (M \otimes N) x)^{-1} (v_a \otimes v_b) }.$$

This is perhaps the most conceptual proof that the Hadamard product of two rational power series is rational, and depending on how you've written down $A$ and $B$ it is sometimes also the fastest way to compute it. For example, the Fibonacci numbers are easy to write in this form, in the sense that the vectors $u, v$ are very simple, so the previous calculation can be redone this way without much difficulty.
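For concreteness, here is a small SymPy sketch of that calculation (my own illustration, not code from the answer), using the representation $F_n = u^T M^n v$ with $M$ the companion matrix from before and $u, v$ standard basis vectors:

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[0, 1], [1, 1]])   # companion matrix of x^2 - x - 1
u = sp.Matrix([1, 0])             # F_n = u^T M^n v, since (M^n)[0, 1] = F_n
v = sp.Matrix([0, 1])

def kron(A, B):
    """Kronecker (tensor) product of two explicit SymPy matrices."""
    m, n = A.shape
    p, q = B.shape
    return sp.Matrix(m * p, n * q, lambda i, j: A[i // p, j // q] * B[i % p, j % q])

gf = (kron(u, u).T * (sp.eye(4) - x * kron(M, M)).inv() * kron(v, v))[0, 0]
print(sp.cancel(gf))
```

Up to how SymPy normalises the fraction, the result is $\frac{x(1-x)}{(1+x)(1-3x+x^2)}$, i.e. the boxed answer with the common factor $1+x$ cancelled (cf. the first comment below).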

Qiaochu Yuan
  • Thank you for your clear answer; I will study it and see how far I get. As for your squared Fibonacci numbers, it simplifies to $\dfrac{x - x^3}{1 - x - 4x^2 - x^3 + x^4} = \dfrac{x(1-x)}{(1+x)(1-3x+x^2)}$, which is also the generating function listed on A007598. – orlp Nov 14 '17 at 23:14
  • Yes, the simplification reflects the fact that this polynomial, by construction, has roots $\phi^2, \varphi^2, \phi \varphi, \varphi \phi$ where $1 - x - x^2 = (1 - \phi x)(1 - \varphi x)$ and so we have $\phi \varphi = \varphi \phi = -1$. This sort of behavior is special to the quadratic case. – Qiaochu Yuan Nov 14 '17 at 23:18
  • I'm now looking into your 'bypass last step' solution, with $a_n = u^TM^nv$. To understand it better I'm reading this section on Wikipedia, which says $M$ should actually be $M^T$, a minor detail. But more importantly, if I understand this correctly computing $v$ involves calculating the first $\deg Q$ terms of $a_n$ anyway. So the part that's unclear to me is whether or not this method actually bypasses the last step and is less work, or whether it's doing the same thing in disguise. – orlp Nov 15 '17 at 22:11
  • @orlp: 1) whether we transpose or not is irrelevant, because $(u^T M^n v)^T = v^T (M^T)^n u$. 2) It's doing the same thing in disguise in full generality, but in practice it's often possible to pick $u$ and $v$ to be very simple, without checking initial conditions, for combinatorial reasons. The basic example is that if $a_n$ counts walks on a graph between two vertices, you can take $M$ to be the adjacency matrix of the graph and $u$ and $v$ will be standard basis vectors $e_i$. For example the Fibonacci numbers are like this. – Qiaochu Yuan Nov 16 '17 at 00:22
  • I'm running into trouble for a fully general solution (I'm implementing it as a CAS). Take $\dfrac{x}{1-x}$, then $M = [1]$, which makes it impossible to find correct $u, v$, because $a_0 = 0$, and $a_1 = 1$ but $M^0 = M^1$. I'm also running into trouble because of $(-x)^{\deg Q}$, which makes the characteristic polynomial not monic. Are you sure that shouldn't be $x^{\deg Q}$? As $x^{\deg Q}Q(x^{-1})$ is all that's needed to get the reciprocal polynomial of $Q$, it's unclear to me what the $(-x)$ is for. – orlp Nov 16 '17 at 02:19
  • @orlp: okay, so the issue is the matrix thing only works for sequences whose generating function has numerator of smaller degree than its denominator; these correspond to sequences whose recurrences start applying as soon as possible. If the numerator has degree greater than or equal to the denominator then the recurrence has finitely many exceptions and those need to be dealt with separately. And yes, you're right, that should just be $x^{\deg Q}$. – Qiaochu Yuan Nov 16 '17 at 18:04
  • Alright, thanks for your help. I managed to implement it by taking out coefficients until both numerators have degree less than their denominators. As an example, the Fibonacci numbers have g.f. $\frac{x}{1-x-x^2}$ and the powers of two have g.f. $\frac{1}{1-2x}$; the program instantly computes the g.f. of $2^nF_n$ as $\dfrac{2x}{1 - 2x - 4x^2}$. Similarly, $2^n n^2$ using their g.f.s gives $\dfrac{2x(2x+1)}{(1-2x)^3}$. Being able to compute the Hadamard product of two arbitrary rational g.f.s is very useful and was missing from my toolkit. – orlp Nov 18 '17 at 12:09
  • Cool, glad I could help. – Qiaochu Yuan Nov 18 '17 at 17:56
  • @QiaochuYuan Thanks, this really helped me! It is so powerful, as everything can be carried out entirely in terms of the coefficients of $P,Q,R,S$, without needing the roots of $Q,S$. – flonk Apr 03 '23 at 21:35

Here is another way to compute Hadamard products of rational functions. Suppose we want to compute the Hadamard product $f(x)*g(x)$ of rational functions $f(x)$ and $g(x)$. For simplicity, we will assume that $f(x)$ and $g(x)$ are proper rational functions with coefficients in some field $K$ of characteristic 0. In an appropriate extension field of $K$ we can factor the denominator of $f(x)$ as $\prod_{i=1}^m(1-\alpha_i x)$ and the denominator of $g(x)$ as $\prod_{j=1}^r (1-\beta_j x)$. Let us assume for now that the $\alpha_i$ are distinct and the $\beta_j$ are distinct; our final result will be valid without this restriction. Then for some $c_i$ and $d_j$ we have
$$f(x) = \sum_{n=0}^\infty x^n\sum_{i=1}^m c_i\alpha_i^n \quad\text{and}\quad g(x) = \sum_{n=0}^\infty x^n\sum_{j=1}^r d_j\beta_j^n.$$
Thus
$$f(x)*g(x) = \sum_{n=0}^\infty x^n\sum_{i=1}^m\sum_{j=1}^r c_id_j(\alpha_i\beta_j)^n,$$
so we may write
$$f(x)*g(x) =\frac{N(x)}{D(x)}$$
where
$$D(x) = \prod_{i=1}^m \prod_{j=1}^r (1-\alpha_i \beta_j x)$$
and the degree of $N(x)$ is less than the degree of $D(x)$. If we know $D(x)$, we can determine $N(x)$ from the first $mr$ terms of $f(x)*g(x)$.

The coefficients of the denominator of $f(x)$ are (up to sign) the elementary symmetric functions of $\alpha_1,\dots, \alpha_m$, the coefficients of the denominator of $g(x)$ are (up to sign) the elementary symmetric functions of $\beta_1,\dots, \beta_r$, and the coefficients of $D(x)$ are (up to sign) the elementary symmetric functions of the $\alpha_i\beta_j$. So the problem reduces to expressing the elementary symmetric functions of the $\alpha_i\beta_j$ in terms of the elementary symmetric functions of the $\alpha_i$ and of the $\beta_j$. By Newton's identities we can express power sum symmetric functions in terms of elementary symmetric functions and vice versa, so it is sufficient to express the power sum symmetric functions of the $\alpha_i\beta_j$ in terms of the power sum symmetric functions of the $\alpha_i$ and of the $\beta_j$. But this is easy: letting $p_n$ denote the $n$th power sum symmetric function, we have
$$p_n(\alpha_1\beta_1,\dots, \alpha_i\beta_j,\dots, \alpha_m\beta_r) = p_n(\alpha_1,\dots, \alpha_m)\,p_n(\beta_1,\dots, \beta_r).$$

I have implemented this approach in Maple, and it works well. For example, we can easily compute that
$$\frac{1}{1-x-ax^2}*\frac{1}{1-x-bx^3} = \frac{1-abx^3}{1 - x - ax^2 - (b+3ab)x^3 - (ab+2a^2b)x^4 - a^3b^2x^6}.$$
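For comparison, here is a rough SymPy transcription of this procedure (the implementation referred to above is in Maple; the names `power_sums` and `hadamard` are just illustrative). Rather than applying Newton's identities term by term, it uses their generating-function form: $-xD'(x)/D(x) = \sum_{n\ge 1} p_n x^n$ extracts the power sums from a denominator, and $D(x) = \exp\left(-\sum_{n\ge 1} p_n x^n/n\right)$ rebuilds the denominator from the multiplied power sums.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

def power_sums(Q, terms):
    """p_1, ..., p_terms of the inverse roots of Q(x), read off from -x*Q'(x)/Q(x)."""
    s = sp.series(-x * sp.diff(Q, x) / Q, x, 0, terms + 1).removeO()
    return [s.coeff(x, n) for n in range(1, terms + 1)]

def hadamard(f, g):
    """Hadamard product of two proper rational functions of x."""
    _, Qf = sp.fraction(sp.cancel(f))
    _, Qg = sp.fraction(sp.cancel(g))
    d = int(sp.degree(Qf, x) * sp.degree(Qg, x))     # deg D(x) = m * r
    p = [pa * pb for pa, pb in zip(power_sums(Qf, d), power_sums(Qg, d))]
    # D(x) = prod (1 - alpha_i beta_j x) = exp(-sum_n p_n x^n / n), a polynomial of degree d
    D = sp.series(sp.exp(-sum(p[n - 1] * x**n / n for n in range(1, d + 1))),
                  x, 0, d + 1).removeO()
    D = sp.expand(sum(D.coeff(x, k) * x**k for k in range(d + 1)))
    # N(x) from the first m*r terms of f*g, keeping only degrees below deg D
    sf = sp.series(f, x, 0, d).removeO()
    sg = sp.series(g, x, 0, d).removeO()
    trunc = sum(sf.coeff(x, k) * sg.coeff(x, k) * x**k for k in range(d))
    N = sp.expand(D * trunc)
    N = sum(N.coeff(x, k) * x**k for k in range(d))
    return sp.cancel(N / D)

# Reproduces the example above (up to how SymPy normalises the fraction):
print(hadamard(1 / (1 - x - a*x**2), 1 / (1 - x - b*x**3)))
```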