
I was wondering what would happen if we took Newton's interpolation formula (series) and applied it to the factorial.

It is given as

$$f(x)=\sum_{k=0}^\infty\binom{x-a}k\Delta^k[f](a)$$

where

$$\Delta^n[f](a)=\sum_{k=0}^n\binom nk(-1)^{n-k}f(a+k)$$

and I want $f(x)=x!$

Putting this all in with $a=0$, I get

$$x!=\sum_{k=0}^\infty\binom xk\sum_{n=0}^k\binom kn(-1)^{k-n}n!$$

which, notably, is capable of taking non-integer values for $x$.

So of course, I am concerned with its radius of convergence, and especially whether it comes out equal to the gamma function.


As a side question, if this fails, are there any interpolations of the factorial that do converge to it?
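As a numerical experiment (a sketch I'm adding; the helper names `binom` and `newton_term` are mine, not part of the question), one can tabulate the terms of the series above:

```python
from math import comb, factorial

def binom(x, n):
    """Generalized binomial coefficient x(x-1)...(x-n+1)/n! for real x."""
    p = 1.0
    for i in range(n):
        p *= (x - i)
    return p / factorial(n)

def newton_term(x, n):
    """n-th term of the Newton series for f(x) = x! expanded at a = 0."""
    delta_n = sum(comb(n, k) * (-1) ** (n - k) * factorial(k)
                  for k in range(n + 1))
    return binom(x, n) * delta_n

# At a non-negative integer the series terminates and recovers x!:
print(sum(newton_term(3, n) for n in range(11)))   # 6.0

# At x = 1/2 the individual terms keep growing in magnitude:
for n in (5, 10, 20):
    print(n, newton_term(0.5, n))
```

At integer $x$ the binomial coefficient vanishes for $n > x$, so the sum is finite; at $x = 1/2$ the printed terms grow rapidly, which is what the convergence question is about.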

  • Newton's interpolation formula is only going to give you a polynomial which equals the factorial at finitely many points as a result -- the input is finitely many points. And it will agree with the factorial only at those finitely many points since the factorial cannot be expressed as a polynomial. So it isn't possible to use Newton's interpolation to sample all of the points of the factorial function, of which there are countably many. – Chill2Macht Oct 07 '16 at 20:25
  • However, I would also be interested to know whether there is a countable extension of Newton's interpolation formula and if the result of this countable extension applied to the factorial would be the gamma function. I think I have read elsewhere that there are other functions which interpolate the factorial, so it would not be guaranteed. – Chill2Macht Oct 07 '16 at 20:26
  • @William Why can't I make a series instead of a polynomial expansion simply by having an infinite amount of sample points to use? – Simply Beautiful Art Oct 07 '16 at 20:26
  • @Masacroso You are referring to Lagrange interpolation? – Simply Beautiful Art Oct 07 '16 at 20:28
  • @SimpleArt Yes, it is basically the same. – Masacroso Oct 07 '16 at 20:28
  • http://math.stackexchange.com/questions/1537/why-is-eulers-gamma-function-the-best-extension-of-the-factorial-function-to and the Bohr-Mollerup theorem is what I am thinking of. @SimpleArt I am not sure that you can't, that is my point, but to prove that such an extension exists, suddenly we have to deal with convergence issues. Also general analytic or entire functions "infinite polynomials" are much more complicated than polynomials (which have finitely many terms), so it is usually a powerful result when results for them coincide exactly. – Chill2Macht Oct 07 '16 at 20:29
  • @William Yeah, I've already seen that question. Oh, the Bohr-Mollerup theorem, seen that too. (would be neat if this did work :D ) – Simply Beautiful Art Oct 07 '16 at 20:31
  • @SimpleArt Well since there are infinitely many functions with power series expansions which coincide with the factorial on the integers, I don't think that there could be any unique way of extending the Newton interpolation formula to countably many points. See here: http://www.luschny.de/math/factorial/hadamard/HadamardsGammaFunction.html And then if there is not a unique way to extend the Newton interpolation formula, the original question becomes ill-defined, because one does not know which extension it refers to. – Chill2Macht Oct 07 '16 at 20:33
  • @SimpleArt Perhaps one could ask whether any of the polynomial interpolations (of finitely many points) agree with the gamma function on finite intervals (which contain/are defined by those points)? I think the identity theorem might have something to say about that, but I am not sure, because then we would be talking about restrictions to $\mathbb{R}$ – Chill2Macht Oct 07 '16 at 20:38
  • @William Yes, it is true that there are infinitely many functions that coincide with the factorial for integer arguments. I'd still like to see the interpolation worked out anyways to see what it gives. – Simply Beautiful Art Oct 07 '16 at 20:40
  • @SimpleArt You might be misinterpreting my point: you say "the interpolation" -- what I am saying is that if any interpolation using countably many points exists, it cannot be unique, thus the "the" in "the interpolation" is not justified and the question is not well-defined. – Chill2Macht Oct 07 '16 at 20:43
  • @William Sorry, I meant the above interpolation, using Newton's formula. – Simply Beautiful Art Oct 07 '16 at 20:43
  • @SimpleArt Oh sorry your question is about the Newton series (as it is called in the Wikipedia article you linked), not just any extension of the Newton interpolation formula. It might help to make this clear by referring to it by its proper name, although on the other hand I am probably the only person stupid or pedantic enough to make this mistake. Sorry about the confusion. – Chill2Macht Oct 07 '16 at 20:49

2 Answers


Notice that the inner sum can be reduced to

$$ \sum_{k=0}^{n} \binom{n}{k}(-1)^{n-k} k! = n! \sum_{j=0}^{n} \frac{(-1)^j}{j!} =: d_n, $$

which is the $n$-th derangement number. It is easy to check that $d_n \sim n!/e$ as $n \to \infty$. On the other hand, if $x \notin \{0, 1, 2, \cdots \}$, then we have

$$\left| \binom{x}{n} \right| = \left| \frac{\Gamma(n-x)}{n!\Gamma(-x)} \right| \sim \frac{1}{|\Gamma(-x)| n^{\Re(x)+1}}$$

as $n \to \infty$. So it follows that the series

$$ \sum_{n=0}^{\infty} \binom{x}{n} \sum_{k=0}^{n} \binom{n}{k}(-1)^{n-k} k! = \sum_{n=0}^{\infty} \binom{x}{n} d_n \tag{1} $$

diverges unless $x$ is a non-negative integer.
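Both asymptotics are easy to confirm numerically. The sketch below (my own helper names `d` and `abs_binom`, not from the answer) checks that $d_n$ is the nearest integer to $n!/e$ and that $|\binom{x}{n}|\,|\Gamma(-x)|\,n^{x+1} \to 1$ for real non-integer $x$:

```python
from math import comb, factorial, gamma, e

def d(n):
    """Derangement number via the inner sum of the series."""
    return sum(comb(n, k) * (-1) ** (n - k) * factorial(k)
               for k in range(n + 1))

def abs_binom(x, n):
    """|binom(x, n)| as a running product of ratios, to avoid overflow."""
    r = 1.0
    for i in range(n):
        r *= abs(x - i) / (i + 1)
    return r

# d_n is the integer nearest to n!/e once n >= 1:
assert all(d(n) == round(factorial(n) / e) for n in range(1, 16))

# the normalized binomial magnitude approaches 1 as n grows (here x = 1/2):
x = 0.5
for n in (10, 100, 1000):
    print(n, abs_binom(x, n) * abs(gamma(-x)) * n ** (x + 1))
```

Since $d_n$ grows like $n!$ while $|\binom{x}{n}|$ decays only polynomially, the terms of $(1)$ blow up, matching the divergence claim.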


We can possibly assign a value to the formal series $\text{(1)}$ under a suitable renormalization. (Of course, whether this assignment is meaningful is purely a matter of taste.) To this end, consider the following representation of $d_n$:

$$ d_n = \int_{0}^{\infty} (t-1)^n e^{-t} \, dt. $$
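This representation can be checked numerically; here is a sketch (a pure-Python trapezoid rule with my own helper names, truncating the integral at $T = 60$ where the tail is negligible):

```python
from math import comb, exp, factorial

def d(n):
    """Derangement number from the binomial sum."""
    return sum(comb(n, k) * (-1) ** (n - k) * factorial(k)
               for k in range(n + 1))

def d_integral(n, T=60.0, steps=100000):
    """Trapezoid approximation of the integral of (t-1)^n e^(-t) over [0, T]."""
    h = T / steps
    f = lambda t: (t - 1) ** n * exp(-t)
    s = 0.5 * (f(0.0) + f(T))
    for i in range(1, steps):
        s += f(i * h)
    return s * h

# the two definitions agree: 1, 0, 1, 2, 9, 44, 265, ...
for n in range(7):
    print(n, d(n), d_integral(n))
```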

Although totally invalid, we may attempt the following ''computation'':

$$ \sum_{n=0}^{\infty} \binom{x}{n} d_n \ ``\text{=''} \int_{0}^{\infty} \sum_{n=0}^{\infty} \binom{x}{n} (t-1)^n e^{-t} \, dt \ ``\text{=''} \int_{0}^{\infty} t^x e^{-t} \, dt = x!. $$

One may try to give a meaning to this nonsense through a suitable renormalization. To this end, we consider the following function

$$ d_n(\epsilon) := e^{-\epsilon n} \int_{0}^{1} (t-1)^n e^{-t} \, dt + \int_{1}^{\infty} (t-1)^n e^{-\epsilon n t} e^{-t} \, dt. $$

For each fixed $n$, we have $\lim_{\epsilon \to 0^+} d_n(\epsilon) = d_n(0) = d_n$. Now we consider the following renormalized sum:

$$ f(x, \epsilon) := \sum_{n=0}^{\infty} \binom{x}{n} d_n(\epsilon). $$

Then on the region $\mathcal{D} = \{\epsilon \in \Bbb{C} : \Re(\epsilon) > W_0(\frac{1}{e}) \}$, we have

$$ \sup_{t \in [1, \infty)} |(t-1) e^{-\epsilon t}| \leq \frac{e^{-1-\Re(\epsilon)}}{\Re(\epsilon)} < 1 $$

and thus $d_n(\epsilon)$ decays exponentially. Thus the function $\epsilon \mapsto f(x, \epsilon)$ is holomorphic on $\mathcal{D}$. Moreover, using Fubini's theorem and the binomial series we can write

$$ f(x, \epsilon) = \int_{0}^{1} (1 + (t-1) e^{-\epsilon})^x e^{-t} \, dt + \int_{1}^{\infty} (1 + (t-1) e^{-\epsilon t})^x e^{-t} \, dt. $$

It is not hard to prove that the right-hand side defines a holomorphic function on an open neighborhood of $[0, \infty)$. So we can use this expression to extend the function $\epsilon \mapsto f(x, \epsilon)$, and we may define

$$ \sum_{n=0}^{\infty} \binom{x}{n} d_n := \lim_{\epsilon \to 0^+} f(x, \epsilon) = \int_{0}^{\infty} t^x e^{-t} \, dt = x!. $$
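The limit can be checked numerically through the two-integral expression for $f(x, \epsilon)$: as $\epsilon \to 0^+$ the value approaches $\Gamma(x+1)$. A sketch (my own helper names; a plain trapezoid rule, with the second integral truncated at $T = 60$):

```python
from math import exp, gamma

def trap(f, a, b, steps):
    """Simple composite trapezoid rule."""
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        s += f(a + i * h)
    return s * h

def f_renorm(x, eps, T=60.0):
    """f(x, eps) via the closed-form two-integral expression."""
    g1 = lambda t: (1 + (t - 1) * exp(-eps)) ** x * exp(-t)
    g2 = lambda t: (1 + (t - 1) * exp(-eps * t)) ** x * exp(-t)
    return trap(g1, 0.0, 1.0, 20000) + trap(g2, 1.0, T, 200000)

# at x = 1/2 the values approach Gamma(3/2) = sqrt(pi)/2 ~ 0.8862
x = 0.5
for eps in (0.1, 0.01, 0.001):
    print(eps, f_renorm(x, eps))
print(gamma(x + 1))
```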

Sangchul Lee
  • I like the renormalization a lot. :) Yet another demonstration that sometimes useful stuff can be squeezed out of formally divergent series. – J. M. ain't a mathematician Oct 08 '16 at 04:18
  • @J.M., Thank you! Yet I feel less satisfied, because this crosses the barrier of convergence and moves beyond it. (Indeed, the original definition of $d_n(\epsilon)$ forces the defining series of $f(x, \epsilon)$ to diverge outside $\bar{\mathcal{D}}$. We moved beyond this barrier simply by relying on holomorphy.) I really wanted to find a regularization so that we do not cross the barrier of convergence, but rather touch it. I expected that Gaussian regularization would work, but I have not been successful. – Sangchul Lee Oct 08 '16 at 04:26

Let me rewrite the Newton expansion as:
$$ f(x) = \sum_{0 \leqslant j} \binom{x}{j} \sum_{0 \leqslant k \leqslant j} (-1)^{j-k} \binom{j}{k} \, k! = \sum_{\substack{0 \leqslant j \\ 0 \leqslant k \leqslant j}} \binom{x}{j} (-1)^{j-k} \binom{j}{k} \, k! $$
which, when developed further, gives:
$$ \begin{aligned} f(x) &= \sum_{\substack{0 \leqslant j \\ 0 \leqslant k \leqslant j}} \binom{x}{j} (-1)^{j-k} \binom{j}{k} \, k! = \sum_{\substack{0 \leqslant k \\ k \leqslant j}} \binom{x}{k} (-1)^{j-k} \binom{x-k}{j-k} \, k! \\ &= \sum_{0 \leqslant k} \binom{x}{k} (1-1)^{x-k} \, k! = \sum_{0 \leqslant k} \binom{x}{k} \, 0^{\,x-k} \, k! \end{aligned} $$
or:
---- reviewed ----
$$ \begin{aligned} f(x) &= \sum_{\substack{0 \leqslant j \\ 0 \leqslant k \leqslant j}} \binom{x}{j} (-1)^{j-k} \binom{j}{k} \, k! = \sum_{\substack{0 \leqslant j \\ 0 \leqslant k \leqslant j}} \frac{(-1)^k}{(j-k)!} (-1)^j \, x^{\underline{j}} \\ &= \sum_{0 \leqslant j} \left( \sum_{0 \leqslant k \leqslant j} \frac{(-1)^k}{k!} \right) x^{\underline{j}} = \sum_{0 \leqslant l} \left( \sum_{0 \leqslant j \leqslant u} \left( \sum_{0 \leqslant k \leqslant j} \frac{(-1)^k}{k!} \right) (-1)^{j-l} \begin{bmatrix} j \\ l \end{bmatrix} \right) x^{l} \end{aligned} $$ where the upper bound $u$ in the summation over $j$ is $u=x$ if $x$ is a non-negative integer, and $u=\infty$ otherwise.
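The last step converts falling factorials to ordinary powers via the signed Stirling numbers of the first kind, $x^{\underline{j}} = \sum_l (-1)^{j-l} \left[ j \atop l \right] x^l$. A small sketch verifying this conversion (the recurrence and the helper names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(j, l):
    """Unsigned Stirling numbers of the first kind:
    c(j, l) = c(j-1, l-1) + (j-1) * c(j-1, l)."""
    if j == 0 and l == 0:
        return 1
    if j == 0 or l == 0:
        return 0
    return stirling1(j - 1, l - 1) + (j - 1) * stirling1(j - 1, l)

def falling(x, j):
    """Falling factorial x(x-1)...(x-j+1)."""
    p = 1.0
    for i in range(j):
        p *= (x - i)
    return p

# expand x^(falling j) into powers of x and compare at a non-integer point
x, j = 2.7, 6
expanded = sum((-1) ** (j - l) * stirling1(j, l) * x ** l
               for l in range(j + 1))
print(expanded, falling(x, j))
```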

Both the first derivation (which effectively gives $f(x) = \left[ 0 \leqslant \text{integer}\; x \right] x!$ in Iverson-bracket notation) and the second indicate that the Newton interpolation is valid only for integer $x$.

In the second expression the coefficients, given by the sum over $k$, are $1, \; 0, \; 1/2, \; 1/3, \; 3/8, \; \cdots \to 1/e$. The Stirling numbers of the first kind grow with the upper index, and their alternating sum, if extended to infinity, diverges.
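The coefficient sequence is easy to tabulate (a one-line check I added; `coeff` is my own name):

```python
from math import e, factorial

def coeff(j):
    """j-th coefficient: partial sum of sum_k (-1)^k / k!  (equals d_j / j!)."""
    return sum((-1) ** k / factorial(k) for k in range(j + 1))

print([coeff(j) for j in range(5)])  # 1, 0, 1/2, 1/3, 3/8
print(coeff(30), 1 / e)              # the partial sums tend to 1/e
```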

If you plot a partial sum of the above expansion against $\Gamma(x+1)$, you will notice that the approximating polynomial oscillates between the interpolation points, which is known as Runge's phenomenon.
The Gamma function is particularly prone to this, since it has poles all along the negative $x$ axis.
And in fact, the Newton interpolation works much better for $1/\Gamma$, i.e.
$$ \frac{1}{\Gamma (x + 1)} = \sum_{\substack{0 \leqslant j \\ 0 \leqslant k \leqslant j}} \binom{x}{j} (-1)^{j-k} \binom{j}{k} \, \frac{1}{k!} = \sum_{0 \leqslant j} \left( \sum_{0 \leqslant k \leqslant j} \frac{(-1)^k}{k! \left( (j-k)! \right)^2} \right) x^{\underline{j}} $$
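One can see this numerically: the sketch below (helper names are mine; that the partial sums approach $1/\Gamma(x+1)$ at non-integer $x$ is the claim being tested, not a proven fact here) evaluates partial sums of the last expansion at $x = 1/2$:

```python
from math import factorial, gamma

def coeff_rg(j):
    """Coefficient of the falling factorial in the 1/Gamma expansion."""
    return sum((-1) ** k / (factorial(k) * factorial(j - k) ** 2)
               for k in range(j + 1))

def partial(x, N):
    """Partial sum over j of coeff_rg(j) * x^(falling j)."""
    total, fall = 0.0, 1.0  # fall tracks the falling factorial
    for j in range(N + 1):
        total += coeff_rg(j) * fall
        fall *= (x - j)
    return total

x = 0.5
target = 1 / gamma(x + 1)
for N in (2, 10, 40, 80):
    print(N, partial(x, N), target)
```

Unlike the series for $x!$, here the partial sums settle down quickly toward $1/\Gamma(3/2) \approx 1.1284$.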

G Cab
  • I'm pretty sure our indices are the same, just in different formats. – Simply Beautiful Art Oct 07 '16 at 22:35
  • @SimpleArt yes, you are right, in the second binomial you took $n$ as index: pardon! – G Cab Oct 07 '16 at 22:45
  • I'm not entirely confident on why the series is divergent (for all $x$?). Could you elaborate? – Simply Beautiful Art Oct 07 '16 at 23:02
  • @SimpleArt, well, actually, except when $x$ is a non-negative integer, the summation in $j$ goes on to $\infty$: the coeff. of $x^{\underline{j}}$ tends to $1/e$, while for the alternating sign of the falling factorial, you put me in doubt. Conclusion retracted: it needs more elaborating. Do you have an idea in that respect? – G Cab Oct 08 '16 at 00:39
  • Not really; it seems rather complicated, and I don't do convergence testing for double sums with factorials in denominators very often. I'll get back to you if I have any suggestions. – Simply Beautiful Art Oct 08 '16 at 01:15
  • @SimpleArt: I reviewed the last part of my post, now it should be more clear and convincing. – G Cab Oct 10 '16 at 01:30
  • Thank you for the nice answer. I've decided to give Lee the "this answers my question" for his extra work as well – Simply Beautiful Art Oct 10 '16 at 01:33