21

Let's say I have two power series $\,\mathrm{F}\left(x\right) = \sum_{n = 0}^{\infty}\,a_{n}\,x^{n}$ and $\,\mathrm{G}\left(x\right) = \sum_{n = 0}^{\infty}\,b_{n}\,x^{n}$.

If I define the function $\displaystyle{\,\mathrm{H}\left(x\right) = \frac{\mathrm{F}\left(x\right)}{\mathrm{G}\left(x\right)} = \frac{\sum_{n = 0}^{\infty}\, a_{n}\,x^{n}}{\sum_{n = 0}^{\infty}\, b_{n}\, x^{n}}}$, is there a general way to expand $\,\mathrm{H}$ such that $\,\mathrm{H}\left(x\right) = \sum_{n=0}^{\infty}\,c_{n}\,x^{n}$?

I guess what I'm asking is whether there is a way to get the first few coefficients $c_{n}$. I'm dealing with a physics problem in which I have two such functions $\,\mathrm{F}$ and $\,\mathrm{G}$, and I'd like to get the first few terms of the power series of $\,\mathrm{H}$.

Felix Marin
  • 89,464
  • This likely isn't helpful, but in specific cases it may be. Gauss's Continued Fraction provides some interesting context on the case of dividing two contiguous hypergeometric functions (a special case of power series). Their division is represented by an infinite continued fraction; however, you can also recursively calculate the convergents to the desired degree, that is, if you have time. – Vessel Apr 26 '21 at 18:42

4 Answers

26

The standard way (in other words, there is nothing original in what I am doing here) to get $H(x)$ is to write $H(x)G(x) = F(x)$ and derive a recurrence for the $c_n$.

\begin{align*} H(x)G(x) &=\sum_{i=0}^{\infty} c_{i} x^{i} \sum_{j=0}^{\infty} b_{j} x^{j}\\ &=\sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{i}b_{j} x^{i+j}\\ &=\sum_{n=0}^{\infty} \sum_{i=0}^{n} c_{i}b_{n-i} x^{n}\\ &=\sum_{n=0}^{\infty} x^{n} \sum_{i=0}^{n} c_{i}b_{n-i} \end{align*}

Since $H(x)G(x) = F(x) = \sum_{n=0}^{\infty} a_{n} x^{n} $, equating coefficients of $x^n$, we get $a_n =\sum_{i=0}^{n} c_{i}b_{n-i} $.

If $n=0$, this is $a_0 = c_0b_0$ so, assuming that $b_0 \ne 0$, $c_0 =\dfrac{a_0}{b_0} $.

For $n > 0$, again assuming that $b_0 \ne 0$, $a_n =\sum_{i=0}^{n} c_{i}b_{n-i} =c_nb_0+\sum_{i=0}^{n-1} c_{i}b_{n-i} $ so $c_n =\dfrac{a_n-\sum_{i=0}^{n-1} c_{i}b_{n-i}}{b_0} $.

This is the standard iteration used for dividing polynomials (and formal power series).
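
A minimal Python sketch of this recurrence (the function name and the example series are illustrative choices, not part of the answer):

```python
def divide_series(a, b, num_terms):
    """First num_terms coefficients c_n of H = F/G, where a and b hold the
    leading coefficients of F and G and b[0] != 0."""
    if b[0] == 0:
        raise ValueError("b_0 must be nonzero")
    a = list(a) + [0] * num_terms          # pad so a[n], b[n] exist up to num_terms
    b = list(b) + [0] * num_terms
    c = []
    for n in range(num_terms):
        s = sum(c[i] * b[n - i] for i in range(n))   # sum_{i=0}^{n-1} c_i b_{n-i}
        c.append((a[n] - s) / b[0])                  # c_n = (a_n - s) / b_0
    return c

# Example: F(x) = e^x, G(x) = 1/(1-x), so H(x) = (1-x)e^x
from math import factorial
a = [1 / factorial(n) for n in range(6)]   # coefficients of e^x
b = [1] * 6                                # coefficients of 1/(1-x)
print(divide_series(a, b, 6))              # ≈ [1, 0, -1/2, -1/3, -1/8, -1/30]
```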

marty cohen
  • 107,799
20

Since the multiplication of power series is not that hard, we can reduce the task to finding the reciprocal $\frac{1}{G(x)}$ of a power series \begin{align*} G(x)=\sum_{n=0}^\infty b_n x^n \end{align*} provided $b_0\ne 0$.

According to H. W. Gould's Combinatorial identities, vol. 4, formula (2.27), the following is valid: let $b_0\ne 0$; then with

\begin{align*} \frac{1}{G(x)}=\frac{1}{\sum_{n=0}^\infty b_n x^n}=\sum_{n=0}^\infty B_n x^n \end{align*} we obtain \begin{align*} B_0&=\frac{1}{b_0}\\ B_n&=\frac{1}{b_0^n\,n!}\left| \begin{array}{ccccc} 0&nb_1&nb_2&\cdots&nb_n\\ 0&(n-1)b_0&(n-1)b_1&\cdots&(n-1)b_{n-1}\\ 0&0&(n-2)b_0&\cdots&(n-2)b_{n-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&0&0&\cdots&1\\ \end{array} \right|\tag{1} \end{align*} The right-hand side of (1) is the determinant of an $(n\times n)$-matrix.
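
As a practical complement (not Gould's determinant formula itself), the same coefficients $B_n$ follow from $G(x)\cdot\frac{1}{G(x)}=1$, namely $B_0=\frac{1}{b_0}$ and $B_n=-\frac{1}{b_0}\sum_{k=1}^{n} b_k B_{n-k}$ for $n\ge 1$; here is a minimal Python sketch (names and example are illustrative) that can be used to spot-check $(1)$ numerically:

```python
def reciprocal_series(b, num_terms):
    """First num_terms coefficients B_n of 1/G(x), where b holds the
    leading coefficients of G and b[0] != 0."""
    b = list(b) + [0] * num_terms          # pad so b[k] exists up to num_terms
    B = [1 / b[0]]
    for n in range(1, num_terms):
        B.append(-sum(b[k] * B[n - k] for k in range(1, n + 1)) / b[0])
    return B

# Example: G(x) = 1 + x, so 1/G(x) = 1 - x + x^2 - x^3 + ...
print(reciprocal_series([1, 1], 6))        # ≈ [1, -1, 1, -1, 1, -1]
```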

Markus Scheuer
  • 108,315
  • Wouldn't the right-hand side of $(1)$ be the determinant of a square matrix of length $n+1$ instead of $n$, since there are $n+1$ columns? – Vessel Apr 30 '20 at 18:37
  • @MathematicallyEncrypted: You are right, but expansion at the bottom left $1$ results in an $(n\times n)$ determinant. I've updated the link where you can find the corresponding statement. – Markus Scheuer Apr 30 '20 at 19:10
6

If we use the geometric series, we end up with

$$\frac1{G(x)}=\frac1{1-(1-G(x))}=\sum_{n=0}^\infty(1-G(x))^n$$

This works out best if $b_0=1$. If $b_0=b$, then one must rescale as follows:

$$\frac1{G(x)}=\frac{1/b}{1-(1-G(x)/b)}=\frac1b\sum_{n=0}^\infty\left(1-\frac{G(x)}b\right)^n$$

Expand the powers and then multiply by $F(x)$ to get the desired $H(x)$.
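
A rough Python sketch of this approach, with series handled as plain coefficient lists (the helper names and the example are illustrative). Since $1-G(x)/b$ has zero constant term, its $n$-th power starts at order $x^n$, so truncating the outer sum at $n=N$ already fixes all coefficients up to $x^N$:

```python
def mul_trunc(p, q, N):
    """Product of two coefficient lists, truncated at degree N."""
    r = [0.0] * (N + 1)
    for i, pi in enumerate(p[: N + 1]):
        for j, qj in enumerate(q[: N + 1 - i]):
            r[i + j] += pi * qj
    return r

def reciprocal_geometric(b, N):
    """1/G(x) up to degree N via (1/b) * sum_n (1 - G(x)/b)^n with b = b_0."""
    b0 = b[0]
    u = [0.0] + [-bk / b0 for bk in b[1 : N + 1]]    # u = 1 - G(x)/b0
    total = [0.0] * (N + 1)
    power = [1.0] + [0.0] * N                        # u^0 = 1
    for _ in range(N + 1):
        total = [t + p for t, p in zip(total, power)]
        power = mul_trunc(power, u, N)               # next power of u
    return [t / b0 for t in total]

# Example: G(x) = 2 + 2x, so 1/G(x) = 1/2 - x/2 + x^2/2 - ...
print(reciprocal_geometric([2, 2], 5))     # ≈ [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
```

Multiplying the result by the coefficients of $F(x)$ (for example with `mul_trunc` above) then gives the first $N+1$ coefficients of $H(x)$.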

4

One can also derive the following method, which works fast at least for the first few coefficients of the expansion. It follows from umbral calculus, since a generating function of the form

$$ \mathrm{F}(x)=\sum_{n=0}^\infty a_n x^n $$

satisfies the umbral differential equation (here $\theta=x\frac{d}{dx}$, so that $g(\theta)\cdot x^n = g(n)x^n$):

$$ (1+\theta)^{-1} a_{\theta+1}^{-1} a_{\theta}^{\vphantom{1}} \frac{d}{dx} \cdot \mathrm{F}(x) = \mathrm{F}(x) $$

Now only a few straightforward steps are needed to obtain the following identity. For a given sequence $\{b_n\}_{n=0}^\infty$, consider the operator $\mathfrak{L}_b$ which acts on sequences as $$ \mathfrak{L}_bf(n) := f(n+1)-\frac{b_{n+1}}{b_0}f(0) $$ Then $$ \frac{\sum\limits_{n=0}^\infty a_n x^n}{\sum\limits_{n=0}^\infty b_n x^n}=\sum_{k=0}^\infty x^k \left.\left[\frac{1}{b_0} \mathfrak{L}_b^k \cdot a_n\right] \right|_{n=0} $$ Indeed, \begin{align*} &\left.\left[\frac{1}{b_0} \mathfrak{L}_b^0 \cdot a_n\right] \right|_{n=0} = \frac{a_0}{b_0}\\ &\left.\left[\frac{1}{b_0} \mathfrak{L}_b^1 \cdot a_n\right] \right|_{n=0} = \left.\frac{1}{b_0}\left(a_{n+1}-\frac{b_{n+1}}{b_0}a_0\right)\right|_{n=0}=\frac{a_1}{b_0}-\frac{b_1 a_0}{b_0^2}\\ &\left.\left[\frac{1}{b_0} \mathfrak{L}_b^2 \cdot a_n\right] \right|_{n=0} =\\ &=\left.\frac{1}{b_0}\left(a_{n+2}-\frac{b_{n+2}}{b_0}a_0-\frac{b_{n+1}}{b_0}\left(a_1-\frac{b_1 a_0}{b_0} \right)\right)\right|_{n=0}=\\ &=\frac{a_2}{b_0}-\frac{a_0 b_2}{b_0^2}-\frac{a_1 b_1}{b_0^2}+\frac{a_0 b_1^2}{b_0^3} \end{align*} I hope it is helpful.
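
A small Python sketch of the operator $\mathfrak{L}_b$, with sequences represented as functions of $n$ (the helper names are illustrative); the naive nested evaluation below is only practical for the first few coefficients, in line with the remark above:

```python
from math import factorial

def divide_umbral(a, b, num_terms):
    """c_k = (1/b_0) * (L_b^k a)(0), the first coefficients of F/G."""
    get = lambda seq, n: seq[n] if n < len(seq) else 0

    def Lb(f):
        # (L_b f)(n) = f(n+1) - (b_{n+1}/b_0) * f(0)
        return lambda n: f(n + 1) - get(b, n + 1) / b[0] * f(0)

    f = lambda n: get(a, n)                # start from the sequence a_n
    coeffs = []
    for _ in range(num_terms):
        coeffs.append(f(0) / b[0])         # read off (1/b_0) * (L_b^k a)(0)
        f = Lb(f)                          # apply L_b once more
    return coeffs

# Same example as before: F(x) = e^x, G(x) = 1/(1-x), so H(x) = (1-x)e^x
a = [1 / factorial(n) for n in range(8)]
print(divide_umbral(a, [1] * 8, 6))        # ≈ [1, 0, -1/2, -1/3, -1/8, -1/30]
```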

UPD. Similarly, one may use the following method. Suppose that we want to understand what is the expansion of the series $$ \frac{1}{\theta!} \cdot \frac{R_\theta \cdot A(x)}{H_\theta \cdot B(x)} $$ Define the $0$-derivative $\mathrm{L}$, an operator that acts on formal power series of $x$ as $\mathrm{L}\cdot f(x)=(f(x)-f(0))/x$. Now write tautologically \begin{align*} \frac{1}{\theta!} \cdot \frac{R_\theta \cdot A(x)}{H_\theta \cdot B(x)}&=\exp(x \partial_y)\cdot \frac{1}{\theta_y!} \cdot \frac{R_{\theta_y} \cdot A(y)}{H_{\theta_y} \cdot B(y)}\bigg|_{y=0} = \theta_y !\exp(x \partial_y)\cdot \frac{1}{\theta_y!} \cdot \frac{R_{\theta_y} \cdot A(y)}{H_{\theta_y} \cdot B(y)}\bigg|_{y=0}=\\ &=\exp(x \mathrm{L}_y) \cdot \frac{R_{\theta_y} \cdot A(y)}{H_{\theta_y} \cdot B(y)}\bigg|_{y=0} = (H_\theta\cdot B)(y)\exp(x \mathrm{L}_y) \cdot \frac{R_{\theta_y} \cdot A(y)}{H_{\theta_y} \cdot B(y)}\bigg|_{y=0}=\\ &=\exp(x (H_\theta \cdot B)\mathrm{L}(H_\theta \cdot B)^{-1}) R_\theta \cdot A(y)\bigg|_{y=0}=\\ &=\exp\left(x \frac{H_{\theta+1}}{H_\theta}B\mathrm{L}B^{-1}\right) R_\theta \cdot A(y)\bigg|_{y=0} \end{align*} Now if $H_{\theta+1}/H_{\theta}$ is a polynomial in $\theta$, say $p(\theta)$, then we have an action $(H_{\theta+1}/H_{\theta}) \cdot y^n = p(n)y^n$. The latter means that in the exponent we have some reasonable operator, and in certain cases it may be very helpful (for the reasonable choice of the series $B(y)$).

  • I'm interested in this topic; where can I read some reference to understand your answer? – Masacroso Nov 30 '21 at 10:23
  • @Masacroso Well, this is actually a fancy way to rewrite the procedure of usual long division. A similar method is used (at least as it seems to me) for establishing moment generating functions of associated orthogonal polynomials (for a given family of orthogonal polynomials with a given recurrence relation one may try to find polynomials satisfying the same recurrence relation but with shifted initial conditions. The more you shift, the more associated the family is :) ). – Danil Krotkov Dec 02 '21 at 07:57
  • @Masacroso So I don't think there is any actual reference, but you may try to look for associated families (“associated OP”) and I'm sure you'll find some interesting applications of this tautological procedure. To say more about associated families, e.g. the associated Jacobi family leads to Gauss's continued fraction as its moment generating function. – Danil Krotkov Dec 02 '21 at 07:57
  • @Masacroso The similar method has now been added as an update (UPD) above. – Danil Krotkov Dec 02 '21 at 08:40