
(This sounds like a fairly natural question, but I couldn't find it asked on this site before…)

Fix an integer $n$ as the modulus, and consider polynomials (with integer coefficients) modulo $n$.

For any polynomial $f$ and any fixed $k$, we have $f(x) + nx^k \equiv f(x) \pmod n$ for all integers $x$; thus any two polynomials with some coefficient differing by a multiple of $n$ are equivalent in the sense that they give the same value mod $n$ for any integer $x$ as input. So we can assume that each coefficient of $f$ is one of $\{0, 1, 2, \dots, n-1\}$.

When $n$ is prime, we know (from Fermat's little theorem) that $x^n \equiv x \pmod n$ for all integers $x$, which means any polynomial of degree $n$ or greater is equivalent (in the same sense as above) to a polynomial of lower degree. Even when $n$ is not prime, some such relationship probably exists; for example when $n=4$, we can see that $x^4 \equiv x^2 \pmod 4$. (I guess, though this may be wrong and I haven't checked it, that if the prime factorization of $n$ is $n = p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}$, then $x^n \equiv x^{n/(p_1p_2\cdots p_r)} \pmod n$.)
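Since the parenthetical guess is easy to test by machine, here is a quick sketch (the helper `rad`, computing the product of the distinct prime factors, is my own naming). It turns out the guess already fails at $n = 6$, where $2^6 = 64 \equiv 4 \not\equiv 2 \pmod 6$, though it does hold for the prime powers and some other moduli in this range:

```python
def rad(n):
    # Product of the distinct prime factors of n (the "radical").
    r, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * (m if m > 1 else 1)

def guess_holds(n):
    # Test whether x^n ≡ x^(n / rad(n)) (mod n) for every residue x.
    e = n // rad(n)
    return all(pow(x, n, n) == pow(x, e, n) for x in range(n))

print([n for n in range(2, 20) if not guess_holds(n)])
# → [6, 10, 14, 15, 18]
```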

In any case, whenever $x \equiv y \pmod n$, we have $f(x) \equiv f(y) \pmod n$, so it is enough to consider the polynomial's values at $n$ consecutive integers, say $\{0, 1, 2, \dots, n-1\}$. As there are at most $n$ possible values mod $n$ at each of these $n$ inputs, there are at most $n^n$ distinct polynomials in this sense.

The question: How many distinct polynomials with integer coefficients are there mod $n$? Here, the meaning of "distinct" is that polynomials $f$ and $g$ are "distinct" if there exists at least one integer $x$ at which $f(x) \not\equiv g(x) \pmod n$. Or, using the terminology from what I found later (see below), we want the number of polynomial functions from $\mathbb{Z}$ to $\mathbb{Z}/n\mathbb{Z}$.


That is the question, but of course before asking it here I tried a few things to find the answer myself, including writing the following program.

Define the "signature" (just a name I made up for writing this program: is there already a standard term for this?) of a polynomial $f$ as the tuple $(f(0), f(1), f(2), …, f(n-1))$ of its values at all $n$ points. We can count the number of distinct polynomials mod $n$ by keeping track of all the encountered signatures, as we start with the zero polynomial and successively add $ax^k$ for each $a$ from $0$ to $n-1$, for increasing $k$ until $x^k$ has the same signature as some already seen polynomial.

```python
#!/usr/bin/env python3

# Adding the signatures of two polynomials mod n
def add(t1, t2):
    n = len(t1)
    assert n == len(t2)
    return tuple((t1[i] + t2[i]) % n for i in range(n))

def count(n):
    szero = tuple([0] * n)  # The signature of the zero polynomial
    seen = {szero}          # Only the zero polynomial has been seen
    k = 0
    sxk = tuple([1] * n)    # The signature of x^k
    while True:
        news = set()  # New signatures, by adding ax^k for some a
        saxk = szero
        for a in range(1, n):
            saxk = add(saxk, sxk)  # The signature of a(x^k)
            for s in seen:
                newp = add(saxk, s)  # a(x^k) + (smaller polynomial)
                if newp not in seen:
                    news.add(newp)
        seen.update(news)  # Now done with all polynomials of degree k
        k += 1
        sxk = tuple((sxk[i] * i) % n for i in range(n))
        if sxk in seen:
            break
    return len(seen)

print('n', '\t', 'count(n)')
for n in range(1, 11):  # Gets very slow for n=11
    print(n, '\t', count(n))
```

This prints:

n    count(n)
1    1
2    4
3    27
4    64
5    3125
6    108
7    823543
8    1024
9    19683
10   12500

The good thing about computing these small values is that there is the OEIS, using which you can, to quote Donald Knuth, "compute your way into the literature". And accordingly, searching the OEIS for this sequence $1$, $4$, $27$, $64$, $3125$, $108$, $823543$, $1024$, $19683$ gives A058067 described as "Number of polynomial functions from Z to Z/nZ" which is exactly our question. We also learn from that page that there's a formula

$$ \prod_{k=0}^{n-1} \frac{n}{\gcd(n, k!)} $$

that fits the numbers we computed and (per that page) was first proved by Aubrey J. Kempner in 1921 in a two-part paper called "Polynomials and their residue systems", and again by David Singmaster in 1974; there is also what appears to be a nice 2000 paper by Manjul Bhargava called "The Factorial Function and Generalizations" that generalizes it (and maybe gives a shorter proof).
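As a quick check, the product formula can be evaluated directly (a small sketch; `kempner_count` is just my name for it) and compared with the table above:

```python
from math import factorial, gcd

def kempner_count(n):
    # Evaluate prod_{k=0}^{n-1} n / gcd(n, k!)
    result = 1
    for k in range(n):
        result *= n // gcd(n, factorial(k))
    return result

print([kempner_count(n) for n in range(1, 11)])
# → [1, 4, 27, 64, 3125, 108, 823543, 1024, 19683, 12500]
```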

But I haven't read any of these papers as I ran out of my time budget for this (well, I exceeded it by writing up this question), and I hope someone can give a clear proof/derivation of the formula either using those papers or independently.


Edit: I read Bhargava's paper and understood it enough to answer this question below.

Bill Dubuque
ShreevatsaR
  • Closely related to https://math.stackexchange.com/questions/3387540/polynomials-that-induce-the-zero-function-mod-n – lhf Feb 18 '22 at 11:41
  • @lhf Yes, I used the result from this question/answer to leave an answer at that question. :-) AFAICT it is the first complete answer at that question, but doesn't seem to have gotten any attention. – ShreevatsaR Feb 18 '22 at 17:53
  • Also linking this. IIRC the paper Carlitz (too bad I cannot find an exact reference) I mentioned there does discuss this problem. – Jyrki Lahtonen Sep 06 '23 at 15:40

3 Answers


Here is a proof based on the article by Manjul Bhargava (2000): this article is nicely written and won a Merten M. Hasse Prize ("for a noteworthy expository paper appearing in an Association [MAA] publication, at least one of whose authors is a younger mathematician"). Though the paper is about a more general setting, we can restrict to the case of integers (undo the generalization!), to get a nice and illuminating proof.

Lemma 1: Any integer polynomial $f$ can be written in the "falling factorial basis" instead of the monomial basis. In other words, if $f(x) = \sum\limits_{k=0}^d a_k x^k$ where each $a_k$ is an integer, then we can also write $$f(x) = \sum_{k=0}^d c_k x^{\underline k}$$ for some integer coefficients $c_k$, where $x^{\underline k}$ means $x(x-1)\cdots(x-k+1)$.

Proof: We have the expression $x^n = \sum_k {n \brace k} x^{\underline k}$, where ${n \brace k}$ is a Stirling subset number ("Stirling number of the second kind"); applying this to each monomial of $f$ gives the claim. See for instance Chapter 6 of the book Concrete Mathematics, or Wikipedia. $\blacksquare$
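To make the change of basis concrete, here is a small sketch computing the Stirling subset numbers by the standard recurrence ${n \brace k} = k{n-1 \brace k} + {n-1 \brace k-1}$ and checking the identity at several integer points:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling subset number {n brace k}, via the standard recurrence.
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    # Falling factorial x(x-1)...(x-k+1).
    result = 1
    for i in range(k):
        result *= x - i
    return result

# Check x^n = sum_k {n brace k} * falling(x, k) at a few points.
for n in range(7):
    for x in range(-5, 6):
        assert x**n == sum(stirling2(n, k) * falling(x, k)
                           for k in range(n + 1))
```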

So from now on we'll write all polynomials in this basis.

Lemma 2: The polynomial $c_k x^{\underline k}$ vanishes identically mod $n$ (meaning that $c_k x^{\underline k} \equiv 0 \pmod n$ for all integers $x$) if and only if $c_k k!$ is divisible by $n$.

Proof: If $c_k x^{\underline k} \equiv 0 \pmod n$ for all $x$, then taking $x = k$ in particular gives $c_k k! \equiv 0 \pmod n$. For the other direction, if $c_k k!$ is divisible by $n$, then (for any integer $x$) note that $x^{\underline k}$ is a product of $k$ consecutive integers and is thus divisible by $k!$, which means that $c_k x^{\underline k}$ is divisible by $c_k k!$ which in turn is divisible by $n$. $\blacksquare$
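A brute-force check of Lemma 2 for one composite modulus (here $n = 12$, chosen arbitrarily); since $x^{\underline k} \bmod n$ depends only on $x \bmod n$, testing $x \in \{0, \dots, n-1\}$ suffices:

```python
from math import factorial

def falling(x, k):
    # Falling factorial x(x-1)...(x-k+1).
    result = 1
    for i in range(k):
        result *= x - i
    return result

n = 12
for k in range(6):
    for c in range(n):
        # Lemma 2: c * falling(x, k) vanishes for all x iff n | c * k!.
        vanishes = all(c * falling(x, k) % n == 0 for x in range(n))
        assert vanishes == (c * factorial(k) % n == 0), (c, k)
```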

Lemma 3: Let $f(x) = \sum_{k=0}^d c_k x^{\underline k}$. Then $f$ vanishes identically mod $n$ (meaning that $f(x) \equiv 0 \pmod n$ for all integer $x$) if and only if each term $c_k x^{\underline k}$ vanishes identically mod $n$.

Proof: One direction is obvious: if each $c_k x^{\underline k}$ vanishes identically mod $n$, then so does their sum $f$. For the other direction, suppose $f \equiv 0 \pmod n$, and let $j$ be the smallest index such that $c_j x^{\underline j} \not\equiv 0 \pmod n$. Then,

  • for $k < j$ we have $c_k x^{\underline k} \equiv 0 \pmod n$ by definition of $j$,
  • for $k > j$, by taking $x = j$ we see that $c_k j^{\underline k}$ includes a factor $0$ so $c_k j^{\underline k} \equiv 0 \pmod n$.

As $f(j) \equiv 0 \pmod n$, this means (still looking at $x=j$) that the only remaining term satisfies $c_j j^{\underline j} \equiv 0 \pmod n$ as well, i.e. $c_j j!$ is divisible by $n$. By Lemma 2, this means that $c_j x^{\underline j} \equiv 0 \pmod n$ for all $x$, contradicting the choice of $j$. $\blacksquare$

Lemma 4: $c_k k!$ is divisible by $n$ if and only if $\frac{n}{\gcd(n, k!)}$ divides $c_k$.

Proof: A special case of "If $a$ divides $bc$, then $a/\gcd(a,b)$ divides $c$", which can be proved in many ways, say using unique prime factorization. $\blacksquare$

This leads to a theorem characterizing all the polynomial functions mod $n$:

Theorem: Any polynomial function $f: \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$ can be expressed uniquely in the form $$f(x) = \sum_k c_k x^{\underline k} \qquad\text{ where }0 \le c_k < \frac{n}{\gcd(n, k!)}.$$

Proof: By the previous lemmas, note that changing one of the coefficients $c_k$ by a multiple of $\frac{n}{\gcd(n, k!)}$ means adding a term which vanishes identically mod $n$, and thus does not change the function $f$; so every polynomial function has a representative of the stated form. Conversely, if two representatives of this form give the same function, their difference vanishes identically mod $n$, so by Lemma 3 and Lemma 4 each coefficient of the difference is divisible by $\frac{n}{\gcd(n, k!)}$; as the coefficients lie in $[0, \frac{n}{\gcd(n, k!)})$, the two representatives must be equal. $\blacksquare$

(Note that this is essentially a "structure theorem" for the abelian group (under addition) of polynomial functions mod $n$.)

Finally, as there are $\frac{n}{\gcd(n, k!)}$ choices for each coefficient $c_k$, the number of distinct polynomial functions modulo $n$ is $$\prod_k \frac{n}{\gcd(n, k!)}$$
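As a sanity check of the theorem (my own sketch, using $n = 4$ as an arbitrary small modulus), one can enumerate all reduced coefficient tuples and confirm that they yield pairwise distinct signatures:

```python
from itertools import product
from math import factorial, gcd

def falling(x, k):
    # Falling factorial x(x-1)...(x-k+1).
    result = 1
    for i in range(k):
        result *= x - i
    return result

n = 4
# Coefficient ranges n / gcd(n, k!), stopping once they shrink to 1.
bounds = []
k = 0
while True:
    b = n // gcd(n, factorial(k))
    if b == 1 and k > 0:
        break
    bounds.append(b)
    k += 1

sigs = set()
for coeffs in product(*(range(b) for b in bounds)):
    sig = tuple(sum(c * falling(x, j) for j, c in enumerate(coeffs)) % n
                for x in range(n))
    sigs.add(sig)

# All 4 * 4 * 2 * 2 = 64 reduced tuples give distinct signatures.
assert len(sigs) == 64
```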


Edit. Here's another proof I came up with, that's more computational and less "magic". It came up while trying to enumerate all possible "signatures" $(f(0), f(1), \ldots, f(n-1))$.

Specifically, suppose we are given a "signature" $(y_0, y_1, \ldots, y_{n-1})$ and want to find a polynomial (or all polynomials) $f$ such that $f(0) \equiv y_0$ and $f(1) \equiv y_1$ and so on (all congruences are modulo $n$). By Lemma 1 from the proof above, we can suppose that the polynomial we are trying to find has been written in the basis of "falling factorial powers" $x^{\underline k}$, where $x^{\underline k}$ means $x(x-1)(x-2)\cdots(x-k+1)$. That is, we can suppose that the polynomial we want is $$f(x) = c_0 + c_1 x + c_2 x^{\underline 2} + c_3 x^{\underline 3} + \ldots,$$ where we are trying to solve for the coefficients $c_0, c_1, \dots$.

This we can do iteratively, by considering the given values of $f(0), f(1), \dots$ in order:

  • $f(0) = y_0$ is the same as $c_0 \equiv y_0$, which can always be "solved" for $c_0$, i.e. just set $c_0 \equiv y_0$.

  • Having thus determined $c_0$, the next equation we were given, namely $f(1) = y_1$, is now the same as $c_0 + c_1 \equiv y_1$ or equivalently $c_1 \equiv y_1 - c_0$ which can always be solved for $c_1$, as $c_0$ is known.

  • Next, $f(2) = y_2$ is the same as $c_0 + 2c_1 + 2c_2 \equiv y_2$, or equivalently $2c_2 \equiv y_2 - c_0 - 2c_1$, which (now that we have determined $c_0$ and $c_1$) can be solved for $c_2$ except in one case: if $n$ is even and the RHS is odd, then the congruence $2c_2 \equiv \text{(an odd number)} \pmod n$ is impossible. So we need to impose a condition: if $n$ is even, then $y_2$ must be such that the RHS is also even. When $n$ is odd, any value mod $n$ will do for $y_2$, but when $n$ is even only half the values mod $n$ will do. In short, there are $\dfrac{n}{\gcd(n, 2)}$ values of $y_2$ for which we can find $c_2$ satisfying $f(2) = y_2$.

  • Next, $f(3) = y_3$ is the same as $c_0 + 3c_1 + 6c_2 + 6c_3 \equiv y_3$ or equivalently $6c_3 \equiv y_3 - …$ which can be solved for $c_3$, for $\dfrac{n}{\gcd(n, 6)}$ values of $y_3$.

And so on: in general $f(k) = y_k$ is the same as $k!c_k \equiv y_k - …$ which can be solved (for $c_k$) for $\dfrac{n}{\gcd(n, k!)}$ values of $y_k$.

So the number of $(y_0, y_1, \ldots)$ that can possibly be the signatures of a polynomial (and thus also the number of polynomial functions) is:

$$\prod_{k \ge 0} \frac{n}{\gcd(n, k!)}$$

(Deep down there are of course some similarities between this proof and the previous one, but at least this version is something I could imagine having come up with by myself, with only the falling factorial basis idea.)
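The iterative procedure can be turned into code directly; here is a sketch (the name `solve_signature` is mine), which also recounts the solvable signatures for $n = 6$ and recovers $108$:

```python
from itertools import product
from math import factorial

def falling(x, k):
    # Falling factorial x(x-1)...(x-k+1).
    result = 1
    for i in range(k):
        result *= x - i
    return result

def solve_signature(ys, n):
    # Try to find falling-factorial coefficients c_0, c_1, ... with
    # f(k) ≡ ys[k] (mod n) for each k; return None if impossible.
    cs = []
    for k in range(len(ys)):
        rhs = (ys[k] - sum(c * falling(k, j) for j, c in enumerate(cs))) % n
        fk = factorial(k) % n
        # Solve fk * c ≡ rhs (mod n) by brute force (fine for small n).
        sol = next((c for c in range(n) if fk * c % n == rhs), None)
        if sol is None:
            return None
        cs.append(sol)
    return cs

n = 6
solvable = sum(1 for ys in product(range(n), repeat=n)
               if solve_signature(ys, n) is not None)
print(solvable)  # → 108, matching count(6) in the question's table
```

(Note that the greedy choice of each $c_k$ is harmless: two solutions at step $j$ differ by a multiple of $n/\gcd(n, j!)$, and that difference times $j^{\underline{\,}}$-terms at later steps changes the RHS by a multiple of $n$.)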

ShreevatsaR
  • Clearer: $f(x)$ is the zero function $\bmod n \iff$ its coefficients in the binomial basis are all $\equiv 0 \pmod{n}$, i.e. $f(x) = \sum_k b_k{x\choose k} \Rightarrow$ all $b_k \equiv 0 \pmod{n}$. There are very old results. – Bill Dubuque Nov 22 '23 at 07:01
  • @BillDubuque When writing up the question I found Kempner (1921) and Singmaster (1974) as the references (via OEIS). See also https://math.stackexchange.com/q/3387540 for the zero function question. I'll think more about your hint, thank you. – ShreevatsaR Nov 22 '23 at 13:14
  • @BillDubuque I tried working out your hint for a bit — I guess what we want to prove is that, for an integer-valued polynomial (in the sense of the other question, i.e. allowing non-integer coefficients) to correspond to a polynomial function (in the sense of this question, i.e. starting with integer coefficients), we must have each $b_k$ be a multiple of $k!$, and that is how the $n/\gcd(n, k!)$ factor will enter. It's not immediately obvious to me how to prove this cleanly (to get a cleaner/shorter proof than the second proof above), but will think about it a bit more. – ShreevatsaR Nov 26 '23 at 17:17

The simplest case is when $n = p$ is prime: then the monomials $1, x, \dots, x^{p-1}$ are independent and give us $p^p$ polynomial functions.

The next case of interest is when $n$ is a prime power, which is left as an exercise to the reader. Once we have solved that, the full result follows using the Chinese Remainder Theorem.
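The CRT step, at least, is easy to confirm numerically: the count $\prod_k n/\gcd(n, k!)$ is multiplicative over coprime factors. A quick sketch (`count_polyfns` is my name for it):

```python
from math import factorial, gcd

def count_polyfns(n):
    # Number of polynomial functions Z -> Z/nZ (OEIS A058067).
    result = 1
    for k in range(n):
        result *= n // gcd(n, factorial(k))
    return result

# Multiplicative over coprime factors, as the CRT argument predicts:
assert count_polyfns(6) == count_polyfns(2) * count_polyfns(3)    # 108 = 4 * 27
assert count_polyfns(10) == count_polyfns(2) * count_polyfns(5)   # 12500 = 4 * 3125
assert count_polyfns(12) == count_polyfns(3) * count_polyfns(4)   # 1728 = 27 * 64
```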

  • The prime power case is the hard part that I wasn't able to see easily myself; could you explain it a bit? – ShreevatsaR Mar 21 '21 at 00:37
  • @ShreevatsaR On polys of prime powers, see https://en.wikipedia.org/wiki/Hensel%27s_lemma – PM 2Ring Mar 21 '21 at 00:44
  • @PM2Ring Well, I vaguely know about Hensel's lemma (had studied it at some point years ago) but I don't know how to apply it here. To ask more concretely, as I imagine I may be able to get the general idea from the example: how to prove that (A) if $n = p^2$, then the number of polynomial functions is $p^{3p}$? (B) If $n = p^3$ where $p \ge 3$, then the number of polynomial functions is $p^{6p}$? (Right now this answer covers the part that's easy for me and leaves the part that's hard for me as an exercise!) – ShreevatsaR Mar 21 '21 at 01:18
  • I just posted an edit to the question clarifying that this answer is not the part of the question I have trouble with. :-) I didn't downvote obviously (thanks for answering!) but this initial version of this answer almost feels like a joke answer (or at least I find it funny), like the "draw the rest of the owl" meme! – ShreevatsaR Mar 21 '21 at 02:12
  • @ShreevatsaR I see your point. :) The actual story behind this answer was that late at night and fatigued, I started writing "primes and CRT" until I noticed the gap. I added the ominous "exercise" and went to bed, hoping to expand with a fresh brain today. -- Apparently way better answers have been added meanwhile – Hagen von Eitzen Mar 21 '21 at 11:35
  • @HagenvonEitzen :-) Makes sense, thanks for answering… For my part, actually the reason I posted the comments about the gap is that at the time, this answer had as many votes as the question, so I worried a bit that everyone would think at a glance that the question was fully answered, and no one may look deeper / I may never get the explanation I was hoping for. :-) Meanwhile I was able to read one of the linked papers a bit more carefully and understand what's going on, fortunately. – ShreevatsaR Mar 21 '21 at 18:02

Well, bearing in mind that $m^k$ will eventually repeat. Perhaps not with $m^{\phi(n)}\equiv m^0\equiv 1\pmod n$, but with some preliminary $k$ so that $m^{\phi(n) + k}\equiv m^k$.

So that gives us a relation $ax^k \equiv (a-n)x^{k-\phi(n)}$, so $(\phi(n) + 1)(n-1)$ is an upper bound.

fleablood