
I'm wondering if there is a function that is its own generating function. That is, is there an entire function $f$ such that $$ f(z) = f(0) + f(1)z + f(2)z^2 + f(3)z^3 + \cdots? $$ I have found that if I fix $p(0)$ and $p(1)$, I can construct a degree-$n$ polynomial satisfying $$ p(k) = p(0) + p(1)k + p(2)k^2 + \cdots + p(n - 1)k^{n - 1} + a_nk^n $$ for all natural numbers $k < n$; its coefficients are its own values $p(0), \dots, p(n - 1)$, together with a free leading coefficient $a_n$. I calculated these polynomials up to $n = 150$ with $p(0) = 1$ and $p(1) = 0$, and they do seem to be converging to some function, but I can't figure out how to prove that they really do converge.

Here is a graph with these polynomials up to n = 150: https://www.desmos.com/calculator/lzdlcyymlu

I found these polynomials by noting that there are $n - 1$ equations and $n - 1$ unknowns: one equation for each $k$ from $1$ to $n - 1$, and one unknown for each $p(k)$ with $k \geq 2$, plus one for $a_n$. I wrote a program to compute these polynomials' coefficients by Gaussian elimination, but it is $O(n^3)$, and that's before accounting for the extra bits of precision I need as $n$ grows. It took over two hours to compute the polynomial with $n = 500$, so it is not really feasible to keep going this way.
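For concreteness, the linear system can be set up in a few lines of NumPy. This is only a sketch, not my original program: the helper name `self_gen_poly` is made up for illustration, and it assumes the truncated system is nonsingular, as my experiments suggest.

```python
import numpy as np

def self_gen_poly(n, p0=1.0, p1=0.0):
    """Coefficients c_0, ..., c_n with c_0 = p0 and c_1 = p1 fixed, such that
    p(x) = c_0 + c_1 x + ... + c_n x^n satisfies p(k) = c_k for k < n
    (c_n plays the role of the free leading coefficient a_n)."""
    # One equation per k = 1, ..., n - 1:
    #   c_0 + c_1 k + sum_{j=2}^{n} c_j k^j = c_k
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for row, k in enumerate(range(1, n)):
        for col, j in enumerate(range(2, n + 1)):
            A[row, col] = float(k) ** j
        if k >= 2:
            A[row, k - 2] -= 1.0  # move the unknown c_k to the left-hand side
        b[row] = -(p0 + p1 * k) + (p1 if k == 1 else 0.0)
    c = np.linalg.solve(A, b)
    return np.concatenate(([p0, p1], c))

# Check the self-generating property for a small case:
c = self_gen_poly(6)
for k in range(6):
    assert abs(sum(c[j] * k ** j for j in range(7)) - c[k]) < 1e-6
```

Note that `np.linalg.solve` is still $O(n^3)$, so this only pins down the setup; it doesn't fix the scaling problem.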


Does anyone know whether there exists a (non-trivial) self-generating function, and whether there is a closed form for the function the polynomials seem to be converging to?



Possibly relevant notes:

  • If we were to consider exponential generating functions, rather than ordinary ones, there would be a set of simple solutions. If $w = a + bi$ is a solution to $w = e^w$, then $f(z) = e^{wz}$ is exponentially self-generating, along with any linear combination of such functions for different values of $w$. We can use this to find real solutions $e^{ax}\cos(bx)$ and $e^{ax}\sin(bx)$, as GEdgar noted. However, I have not been able to find anything so nice for ordinary self-generating functions.

  • If we do not fix $p(1)$, then the polynomials do not converge to anything. However, each polynomial approximates a seemingly random linear combination of two polynomials with $p(1)$ fixed to two different values. The same is true if we approximate exponential generating functions with polynomials in this way: each polynomial without $p(1)$ fixed approximates a different linear combination of $e^{ax}\cos(bx)$ and $e^{ax}\sin(bx)$. This makes me suspect that, much as the exponential self-generating functions are best expressed as complex functions $e^{wz}$, an ordinary self-generating function might be found most easily by considering complex functions.
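The first note can be checked numerically. Below is a small Python sketch; the Newton starting point near the known root $w \approx 0.318 + 1.337i$ is the only assumed input. It solves $w = e^w$ and verifies the exponential self-generating identity at a sample point.

```python
import cmath
import math

# Newton's method on g(w) = w - e^w, starting near the known root
# w ≈ 0.318 + 1.337i.
w = 0.3 + 1.3j
for _ in range(50):
    w -= (w - cmath.exp(w)) / (1 - cmath.exp(w))
assert abs(w - cmath.exp(w)) < 1e-12

# Since e^w = w, f(z) = e^{wz} satisfies
#   sum_n f(n) z^n / n! = sum_n (e^w z)^n / n! = e^{e^w z} = e^{wz} = f(z).
z = 0.7 + 0.2j
series = sum(cmath.exp(w * n) * z**n / math.factorial(n) for n in range(60))
assert abs(series - cmath.exp(w * z)) < 1e-9
```

The real solutions $e^{ax}\cos(bx)$ and $e^{ax}\sin(bx)$ are just the real and imaginary parts of this $e^{wz}$.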

Polygon
  • Consider the function $f(x)=0$. – CyclotomicField Sep 20 '23 at 23:51
  • 1
    @CyclotomicField What a great example! :) – K. Jiang Sep 20 '23 at 23:56
  • If you assume that $f$ exists and has a Maclaurin series, then you need $f(n) = \frac{f^{(n)}(0)}{n!}$. – Dan Sep 21 '23 at 00:22
  • Well, if you set $a_{n}:=f(n)$ you get that they need to satisfy $\displaystyle a_{n}=\sum_{k=0}^{\infty}a_{k}n^k,\quad \forall n\in \mathbb{N}$. Fixing $a_{0}, a_{1}$ like you did is probably important because then for $n>1$ you get $n^n \neq 1$ and $\displaystyle a_{n} = \frac{1}{1-n^n}\sum_{k\neq n}^{}a_{k}n^k\quad (1)$. Perhaps by fixing $a_{0},a_{1}$ and constantly setting $a_{n}$ as in $(1)$ for $n=2,3\dots$ and doing all these infinite redefinitions until convergence you get the unique $(a_{n})$ from these $a_{0},a_{1}$. Still, I have no idea how to continue my thinking. – Yuumita Sep 21 '23 at 00:50
  • 2
    Related (but different) https://math.stackexchange.com/q/91855/442 – GEdgar Sep 21 '23 at 01:25
  • Wow, that's a lot of Desmos equations! Did you write them manually? – Varun Vejalla Sep 21 '23 at 02:10
  • Your polynomial seems to oscillate a lot. So something with sinusoids does seem likely. – Dan Sep 21 '23 at 06:05
  • 1
    @VarunVejalla I would have lost my mind doing it manually haha! I just copy+pasted the output of my program. – Polygon Sep 21 '23 at 17:20
  • 1
    Please see Function $f(x)$, such that $\sum_{n=0}^{\infty} f(n) x^n = f(x)$, however, there is no complete answer. – Jam Sep 25 '23 at 19:38
  • Take a look at the bell curve. It is its own Fourier transform, among other things. – richard1941 Sep 27 '23 at 02:13
  • @Jam I am not sure whether to mark this as a duplicate. The prompt asks, "Do any of these answer your question?" I clicked no, since none of the answers there answer my question. But then I'm asked to explain why my question is different, which it really isn't. – Polygon Sep 28 '23 at 12:22

2 Answers


Here is a possible approach using functional analysis. It is an existence proof, showing that the OP's desired $f$ exists. Feel free to ask any questions, as I am not very familiar with functional analysis myself.

The starting idea is to construct the function "backward" from its values at $\{0,1,\cdots\}$. Specifically, consider the functions $$\phi_i(z) = (-1)^i\frac{\sin \pi z}{\pi (z - i)}.$$ These functions are entire, and satisfy $\phi_i(j) = \delta_{ij}$ for any non-negative integers $i, j$. So we can attempt to construct the function as $$f(z) = \sum_{i = 0}^\infty a_i \phi_i(z).$$ Let $\ell^2(\mathbb{N})$ denote the Hilbert space of sequences $(a_0, a_1, \cdots)$ such that $\sum_i a_i^2 < \infty$. Our first claim is that for any such $(a_i)$, the function $f$ is entire.

Lemma 1: For any $(a_i) \in \ell^2(\mathbb{N})$, the function $f(z)$ defined above is entire, and satisfies $f(i) = a_i$.

Proof: It suffices to show that the series converges uniformly on every disk $B_R(0)$. This follows from the Cauchy–Schwarz inequality and the fact that $$\sum_{i > R + 1} \frac{1}{|z - i|^2} < \infty$$ uniformly on $B_R(0)$. $\square$
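As a quick numerical sanity check of the interpolation property, here is a hypothetical Python snippet; the helper `phi` is just the definition above, with the removable singularity at $z = i$ filled in by its limit.

```python
import math

def phi(i, z):
    """phi_i(z) = (-1)^i sin(pi z) / (pi (z - i)); the removable singularity
    at z = i is filled in with the limit value 1."""
    if abs(z - i) < 1e-12:
        return 1.0
    return (-1) ** i * math.sin(math.pi * z) / (math.pi * (z - i))

# Interpolation property used above: phi_i(j) = delta_ij for nonnegative integers.
for i in range(6):
    for j in range(6):
        target = 1.0 if i == j else 0.0
        assert abs(phi(i, j) - target) < 1e-9
```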

Thanks to uniform convergence, we can commute derivatives with the sum to get $$f^{(k)}(z) = \sum_{i = 0}^\infty a_i \phi^{(k)}_i(z).$$ So it suffices to find an $(a_i) \in \ell^2(\mathbb{N})$ such that for every $k$, we have $$a_k = \frac{1}{k!}\sum_{i = 0}^\infty a_i \phi^{(k)}_i(0).$$ To this end, we define the operator $$T(a_i) = \left(\sum_{i = 0}^\infty a_i \frac{\phi^{(k)}_i(0)}{k!}\right)_{k = 0}^\infty.$$ Then the problem is resolved if we can find an eigenvector of $T$ with eigenvalue $1$ that lies in $\ell^2(\mathbb{N})$.

Now here is the crucial functional analytic property we want.

Lemma 2: $T$ is a compact operator on $\ell^2(\mathbb{N})$.

Proof: Note that $T$ is a Hilbert–Schmidt operator with kernel $$K_{ik} = \frac{\phi^{(k)}_i(0)}{k!}.$$ So to argue that $T$ is compact, it suffices to show that $$\sum_{i, k \geq 0} K_{ik}^2 < \infty.$$ When $k = 0$ we have $K_{i0} = \phi_i(0) = \delta_{i0}$. Now consider $k \geq 1$. When $i = 0, 1$, the functions $\phi_0, \phi_1$ are entire, so their Taylor coefficients decay faster than any exponential, and thus $$\sum_{k \geq 0} K_{0k}^2 + K_{1k}^2 < \infty.$$ When $i \geq 2$, the Leibniz rule together with $\left(\frac{1}{z - i}\right)^{(m)}\Big|_0 = \frac{(-1)^m m!}{(-i)^{m + 1}}$ gives $$(-1)^i K_{ik} = \frac{1}{\pi k!}\sum_{a = 0}^k \binom{k}{a} (\sin \pi z)^{(a)}\Big|_0 \left(\frac{1}{z - i}\right)^{(k - a)}\bigg|_0 = \frac{1}{\pi} \sum_{a = 0}^k \frac{(-1)^{k - a}}{a!} (\sin \pi z)^{(a)}\Big|_0 \frac{1}{(-i)^{k - a + 1}}.$$ The $a = 0$ term vanishes since $\sin 0 = 0$, and $|(\sin \pi z)^{(a)}|_0| \leq \pi^a$, so $$|K_{ik}| \leq \frac{1}{\pi}\sum_{a = 1}^k \frac{\pi^a}{a!} \frac{1}{i^{k - a + 1}}.$$ After some fiddling, one finds that for any $i \geq 2$, $$|K_{ik}| \ll \frac{1}{ik}.$$ Thus $$\sum_{k \geq 1,\, i \geq 2} K_{ik}^2 < \infty,$$ as desired. $\square$

Finally, we can use a nuke known as the Fredholm alternative. It tells us that, since $T$ is compact, $T$ has an eigenvector with eigenvalue $1$ if and only if $I - T$ is not surjective. But the latter is obvious: since $\phi_i(0) = \delta_{i0}$, the first entry of $Ta$ is always $a_0$, so $(I - T)a$ has first entry $0$ and, for example, $(1, 0, 0, \cdots)$ is not in the range. So we conclude that $T$ has an eigenvector with eigenvalue $1$, and the desired function exists.


Edit: I implemented a numerical version of this approach in Mathematica.

ClearAll["Global`*"]
n = 20;
(* Taylor series of phi_i(x) = (-1)^i Sin[Pi x]/(Pi (x - i)) about x = 0 *)
sr = Table[
  Series[Power[-1, i]*Sin[Pi*x]/(Pi*(x - i)), {x, 0, n}], {i, 0, n}];
(* mat[[k + 1, i + 1]] = phi_i^(k)(0)/k!, the truncated matrix of T *)
mat = Table[Coefficient[sr[[i]], x, k], {k, 0, n}, {i, 1, n + 1}];
eg = Eigenvalues[N[mat]];
pos = FirstPosition[Chop[eg], 1.][[1]];
a = Eigenvectors[N[mat], pos][[pos]];
(* Normalize so that the a_0 coefficient is 1 *)
normalizeda = Re[N[a/a[[1]]]]
G[x_] := Table[Power[x, i], {i, 0, n}]
G[1].normalizeda
G[2].normalizeda
G[3].normalizeda

Unfortunately, it is hard to see numerically that $f(x)$ has the desired property. This is probably because, to compute say $f(4)$, you need the term $a_{20} \cdot 4^{20}$, which means you need to know $a_{20}$ to extreme precision.
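For readers without Mathematica, here is a rough Python/NumPy port of the same truncated eigenproblem (a sketch under the same truncation $N = 20$; the helper `phi_taylor` builds the Taylor coefficients directly from the series of $\sin(\pi z)/\pi$ and $1/(z - i)$):

```python
import numpy as np
from math import pi, factorial

N = 20  # truncation degree, matching the Mathematica run

def phi_taylor(i, N):
    """Taylor coefficients (degrees 0..N) of phi_i(z) = (-1)^i sin(pi z)/(pi (z - i))."""
    # Coefficients of sin(pi z)/pi: degree a carries (-1)^((a-1)/2) pi^(a-1)/a! for odd a.
    s = [(-1) ** ((a - 1) // 2) * pi ** (a - 1) / factorial(a) if a % 2 else 0.0
         for a in range(N + 2)]
    if i == 0:
        return np.array(s[1:N + 2])  # phi_0(z) = (sin(pi z)/pi)/z: shift down one degree
    g = [-1.0 / i ** (m + 1) for m in range(N + 1)]  # coefficients of 1/(z - i)
    return np.array([(-1) ** i * sum(s[a] * g[k - a] for a in range(k + 1))
                     for k in range(N + 1)])

# T[k, i] = phi_i^(k)(0)/k!, i.e. the kernel K_{ik} from the proof above.
T = np.column_stack([phi_taylor(i, N) for i in range(N + 1)])

# T's first row is (1, 0, ..., 0), so eigenvalue 1 survives the truncation exactly.
eigvals, V = np.linalg.eig(T)
assert min(abs(eigvals - 1.0)) < 1e-8
a = V[:, np.argmin(abs(eigvals - 1.0))].real
a = a / a[0]  # normalize so the a_0 coefficient is 1
```

The truncated matrix always keeps $1$ as an exact eigenvalue, because $\phi_i(0) = \delta_{i0}$ makes the first row of the matrix $(1, 0, \dots, 0)$; this is presumably why `FirstPosition[Chop[eg], 1.]` succeeds above.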

abacaba
  • Thank you, this question has been on my mind for years! I don't know a single thing about functional analysis, but I get the gist of your proof. Is there any way you could share the output of your Mathematica program? I don't have Mathematica, and it's Greek to me. – Polygon Sep 23 '23 at 15:16
  • The output, which forms the coefficients of this series, is $\{1., 1.17323, -0.438363, -0.727928, 0.0188224, 0.159182, 0.00731155, -0.0193347, -0.00125325, 0.00155119, 0.000103043, -0.0000896631, -5.53469\times10^{-6}, 3.93094\times10^{-6}, 2.16486\times10^{-7}, -1.35305\times10^{-7}, -6.53351\times10^{-9}, 3.75138\times10^{-9}, 1.57865\times10^{-10}, -8.54908\times10^{-11}, -3.13454\times10^{-12}\}$
    – abacaba Sep 23 '23 at 16:51
  • 1
    Awesome, this does indeed appear to be a linear combination of the polynomials I had computed! I never would have guess that there would be a combination that's 0 for all negative integers. Though I assume these coefficients aren't final. If you were to compute with n=21, then all 21 coefficients would be slightly different than the n=20 ones? – Polygon Sep 23 '23 at 19:07
  • Yes I agree they would change by a bit. – abacaba Sep 23 '23 at 23:00
  • Great answer! Do you think it might be possible to obtain the first, say, seven digits of the true value of $f(1)$? Perhaps by doing the same computation with $n=100$, or higher? I would be interested in formulating conjectures regarding the exact values of the coefficients of the series – Max Muller Sep 26 '23 at 22:03
  • @MaxMuller I just got done modifying my program to work with this method. Here are the solutions for a vector of 100 numbers, calculated with overkill-precision floats (32-bit exponent, 2048-bit mantissa): https://pastebin.com/raw/SHqJRZk4 I have no idea how many decimal places are accurate, but it's super promising that they align so well with abacaba's outputs. – Polygon Sep 27 '23 at 15:19
  • @Polygon That's great indeed, and thank you for providing the digit expansions. I haven't found anything with the OEIS and Inverse Symbolic Calculator yet – Max Muller Sep 27 '23 at 16:21

Partial answer

Assume that such an $f$ exists and can be defined by a Maclaurin series:

$$f(z) = \sum_{n=0}^\infty a_n z^n$$

For $f$ to be self-generating, we need $\forall n\in\mathbb{N} : a_n = f(n)$. But $f(n)$ itself is defined by an infinite summation:

$$a_n = \sum_{k=0}^\infty a_k n^k$$

But an infinite summation is hard to compute directly, so let's truncate it at a finite degree $M$.

$$a_n = \sum_{k=0}^M a_k n^k$$

Now, let $a$ denote the (column) vector containing the $a_k$ values, and let $B$ be a matrix such that $B_{nk} = n^k$. (This is a special case of a Vandermonde matrix.)

$$B = \begin{bmatrix} 0^0 & 0^1 & 0^2 & \dots & 0^M \\ 1^0 & 1^1 & 1^2 & \dots & 1^M \\ 2^0 & 2^1 & 2^2 & \dots & 2^M \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ M^0 & M^1 & M^2 & \dots & M^M \\ \end{bmatrix}$$

The product $Ba$ then produces a vector whose $n$th element is $\sum_{k=0}^M a_k n^k = a_n$. But the vector containing all of the $a_n$ values is just $a$. So now we have

$$Ba = a$$

Since $a$ is not the zero vector, $Ba = a$ says exactly that $a$ is an eigenvector of $B$ with eigenvalue $1$.

So the next step is to find a way to efficiently compute $B$'s eigenvectors, and then see what happens as $M \to \infty$.
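As a sketch of that step (NumPy, small truncation $M = 8$; note that because the truncated $B$ has first row $(1, 0, \dots, 0)$, the number $1$ is an exact eigenvalue of every truncation):

```python
import numpy as np

M = 8
# Truncation of the matrix above: B[n, k] = n**k (Python's 0.0 ** 0 is 1.0,
# matching the 0^0 = 1 convention in the top-left entry).
B = np.array([[float(n) ** k for k in range(M + 1)] for n in range(M + 1)])

# Since B's first row is (1, 0, ..., 0), B^T fixes e_0, so 1 is an eigenvalue of B.
eigvals, V = np.linalg.eig(B)
idx = np.argmin(abs(eigvals - 1.0))
assert abs(eigvals[idx] - 1.0) < 1e-6  # eigenvalue 1 is present
a = V[:, idx].real  # eigenvector of a simple real eigenvalue is real
assert np.allclose(B @ a, a, atol=1e-5)  # Ba = a for this eigenvector
```

Whether these truncated eigenvectors converge to anything as $M \to \infty$ is exactly the open question.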


I've also implemented a simplistic iterative approach to finding $a$.

def find_coefs(m):
    '''
    Return the x**0 thru x**m coefficients of the self-generating polynomial.

    This is power iteration on the matrix B above (B[n][k] = n**k), so it
    converges to the eigenvector of B's largest-magnitude eigenvalue.
    '''
    coefs = [1] * (m + 1)
    for _ in range(1000):
        # Apply B: the new nth coefficient is sum_k coefs[k] * n**k.
        coefs = [
            sum(coefs[k] * n ** k for k in range(m + 1))
            for n in range(m + 1)
        ]
        # Rescale so that f(1) = sum of the coefficients = 1.
        scale = 1.0 / sum(coefs)
        coefs = [scale * a for a in coefs]
    return coefs

The highest it goes without an OverflowError is $m=142$, which produces:

$$a_{0} = 0.0$$ $$a_{1} = 2.3655519745873434 \times 10^{-306}$$ $$a_{2} = 1.024902291826228 \times 10^{-263}$$ $$a_{3} = 9.64728298857952 \times 10^{-239}$$ $$\vdots$$ $$a_{139} = 0.030612082968903314$$ $$a_{140} = 0.08471574013582125$$ $$a_{141} = 0.23274960588728968$$ $$a_{142} = 0.6349095910516465$$

That the high-power coefficients are the largest suggests that either $f$ is a very fast-growing function, or the trivial $f(z) = 0$ is the only self-generating function.

Dan
  • Here are the first 24 such functions with $f(0) = 0$: https://www.desmos.com/calculator/won06se4wh. Unfortunately, they do not seem to converge to anything. This is why I had to fix $f(1)$ as well as $f(0)$. However, I computed these by putting $(B - I)$ in rref. Maybe focusing on eigenvectors like you suggest would help. – Polygon Sep 21 '23 at 13:32
  • (Typo, and I can't edit. It should have said $f(0)=1$) – Polygon Sep 21 '23 at 14:19