
My numeric tests showed that the sequence of remainders of $n^k$ divided by $m$ is periodic in $k$ $(n, k, m \in \mathbb{Z}^+,\ n < m)$.

For example, for $n = 7$ and $m = 30$:

$k = 1: \quad 7^1 = 7 = 0 \cdot 30 + \fbox{7}$
$k = 2: \quad 7^2 = 49 = 1 \cdot 30 + \fbox{19}$
$k = 3: \quad 7^3 = 343 = 11 \cdot 30 + \fbox{13}$
$k = 4: \quad 7^4 = 2401 = 80 \cdot 30 + \fbox{1}$
$k = 5: \quad 7^5 = 16807 = 560 \cdot 30 + 7$
$k = 6: \quad 7^6 = 117649 = 3921 \cdot 30 + 19$
$k = 7: \quad 7^7 = 823543 = 27451 \cdot 30 + 13$
$k = 8: \quad 7^8 = 5764801 = 192160 \cdot 30 + 1$
$k = 9: \quad 7^9 = 40353607 = 1345120 \cdot 30 + 7$
$\vdots$

In this case, the remainders apparently have a period of 4: $7, 19, 13, 1$.
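The table above is easy to reproduce with Python's three-argument `pow`, which computes $n^k \bmod m$ directly (a quick sketch):

```python
# Remainders of n^k mod m for k = 1..9, reproducing the table above.
n, m = 7, 30
remainders = [pow(n, k, m) for k in range(1, 10)]
print(remainders)  # [7, 19, 13, 1, 7, 19, 13, 1, 7]
```

The period $7, 19, 13, 1$ is visible immediately in the output.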

My questions:
a) Does such a period always exist?
b) Is there a way to calculate the length of the period from $n$ and $m$, without calculating the remainders as I did above?

kol

2 Answers


a) Yes, this sequence is always periodic, though it may start with a pre-periodic sequence.

The reason is that the remainder of $n^k$ divided by $m$ completely determines the remainder of $n^{k+1}$ divided by $m$, even if you don't know the value of $k$.

This is a special case of the fact that if you know the remainder of a number $a$ when divided by $m$, and the remainder of $b$ when divided by $m$, you know the remainder of $ab$ when divided by $m$, even if you don't know $a$ and $b$ specifically. That's the basis of calculation modulo m.
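This compatibility of remainders with multiplication can be spot-checked numerically (a small sketch, not a proof):

```python
# The remainder of a*b mod m depends only on the remainders of a and b:
# (a * b) % m == ((a % m) * (b % m)) % m for all a, b.
m = 30
for a in range(1, 100):
    for b in range(1, 100):
        assert (a * b) % m == ((a % m) * (b % m)) % m
print("ok")
```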

Since the number of possible remainders is finite ($=m$), some remainder has to repeat in the sequence $n^k$; and once the remainders of $n^{k_1}$ and $n^{k_2}$ are the same, so are (by what was said above) those of $n^{k_1+1}$ and $n^{k_2+1}$, and so on. Hence the sequence of remainders repeats indefinitely.

To see an example with a pre-periodic part, consider $n=6, m=20$, where the remainder sequence is $6,16,16,\ldots$ This can only happen when $n$ and $m$ have a common divisor $>1$.
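Both the pre-period (tail) and the period can be found by iterating the remainder map until a value repeats; `tail_and_period` is an illustrative helper name (a sketch):

```python
def tail_and_period(n, m):
    """Find the pre-period (tail) and period of n^k mod m (k = 1, 2, ...)."""
    seen = {}              # remainder -> first index k at which it occurred
    r, k = n % m, 1
    while r not in seen:
        seen[r] = k
        r = (r * n) % m    # remainder of n^(k+1) from remainder of n^k
        k += 1
    return seen[r] - 1, k - seen[r]   # (tail length, period length)

print(tail_and_period(7, 30))   # (0, 4): purely periodic 7, 19, 13, 1
print(tail_and_period(6, 20))   # (1, 1): tail 6, then 16, 16, ...
```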

b) You can get some information on the period length, but if $m$ is really big, it might still not be easy to test all possibilities.

First, you can ignore all common prime factors of $n$ and $m$. If $p$ is such a factor, and $p^a$ is the highest power of $p$ that divides $m$, then $n^k$ is divisible by $p^a$ for all $k\ge a$. So for $k\ge a$ the only remainders that occur are those divisible by $p^a$; the prime factor $p$ then imposes no further restriction and can be ignored.

So you can reduce $n$ and $m$ to $n'$ and $m'$ by dividing out their common prime factors (this can be done algorithmically fast by using the Euclidean algorithm). Then it is known that the length of the period must be a divisor of $\phi(m')$, where $\phi$ represents Euler's totient function.
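The reduction and the divisor property can be sketched in code. This is only illustrative: `phi` uses trial-division factoring, so it assumes $m$ is small, and the period is then the multiplicative order of $n$ modulo the reduced $m'$, found by scanning the divisors of $\phi(m')$:

```python
from math import gcd

def phi(x):
    """Euler's totient via trial-division factoring (fine for small x)."""
    result, p = x, 2
    while p * p <= x:
        if x % p == 0:
            while x % p == 0:
                x //= p
            result -= result // p
        p += 1
    if x > 1:
        result -= result // x
    return result

def period(n, m):
    """Period of n^k mod m: divide out common primes, then find the order."""
    while (g := gcd(n, m)) > 1:   # remove prime factors shared with n
        m //= g
    if m == 1:
        return 1
    t = phi(m)
    for d in range(1, t + 1):     # the order divides phi(m), so scan divisors
        if t % d == 0 and pow(n, d, m) == 1:
            return d

print(period(7, 30))  # 4, a divisor of phi(30) = 8
print(period(6, 20))  # 1: after reduction m' = 5 and 6 = 1 (mod 5)
```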

Calculating $\phi$ when $m'$ is big and has an unknown prime factorization is hard. Even if you can do it, $\phi(m')$ might have a lot of divisors, and as far as I know there is no easy way to find out which divisor is the period length. But I'm not an expert on this.

Ingix

Hint: this follows from the periodicity of recurrences on finite rings. The power sequence $f_k = a^k\,$ satisfies the recurrence $\,f_{k+1} = a f_k,\ f_0 = 1.\,$ Since it takes values in a finite ring $\, R = \Bbb Z_{30}\,$ the Pigeonhole Principle implies it eventually repeats: say $\,\color{#c00}{f_{j+n} = f_j}\,$ so induction using the recurrence shows it continues to repeat in-step for larger indices, i.e.

$$\begin{align} \color{#0a0}{f_{1+j+n}} &= a \color{#c00}{f_{0+j+n}} = a \color{#c00}{f_{0+j}} = \color{#0a0}{f_{1+j}},\ \ \ {\rm i.e.}\ \ \ \,a \ \left[ f_{n+j} = f_n\right]\rightarrow f_{1+j+n} = f_{1+j}\\ f_{2+j+n} &= a \color{#0a0}{f_{1+j+n}} = a \color{#0a0}{f_{1+j}} = f_{2+j},\ \ \ {\rm i.e.}\ \ \ a^2 \left[ f_{n+j} = f_n\right]\rightarrow f_{2+j+n} = f_{2+j}\\ &\ \ \,\vdots\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ \ \vdots\\ &\phantom{a \color{#0a0}{f_{k-1+j+n}} = a \color{#0a0}{f_{1+j}} = f_{2+j},\ \ \ \ \ }{\rm i.e.}\ \ \ a^k \left[ f_{n+j} = f_n\right]\rightarrow f_{k+j+n} = f_{k+j} \end{align}\qquad$$

Hence, $\, f_K = f_k\,$ if $K\equiv k\pmod{\! n}$ and $\,K,k\ge j,\,$ i.e. once both indices are $\ge j\, $ we enter a cycle of length $\,n,\,$ i.e. $\rm\color{#0af}{MOR}$ = mod order reduction eventually applies. This holds even if $\,j\,$ and $\,n\,$ are not the minimal possible values. When we choose them minimal - so $n$ is the order of the cycle - then we get a converse - just like in $\rm\color{#0af}{MOR}$.

Remark: oh vs. rho orbits: permutation orbits are cycles, i.e. o-shaped vs. $\rho$-shaped
When $\,a\,$ is coprime to the modulus, the shift map $f_n \to f_{n+1} = a f_n\,$ is invertible, so being an invertible map on a finite set it is a permutation, whose orbits are purely periodic, i.e. cycles, i.e. o-shaped (vs. generally having a preperiodic part, i.e. $\rho$-shaped, i.e. $\,j> 0$). This simple general fact about such periodicity is often overlooked, resulting in reinventing the wheel (cycle).

Generally the same argument works over any finite commutative ring $R$ when we have a (nonlinear) recurrence of order $k$ of the form $\,f_{n+k} = g(f_{n+k-1},\ldots,f_{n})$ for $\,g\,$ a polynomial over $R$, i.e. where the next value is a polynomial function of the prior $k$ values. As above, the sequence is eventually periodic: because $R$ is finite there are only finitely many tuples of $k$ values from $R$, so some tuple must eventually repeat in the sequence $\,f_n,\,$ and then the values of $f_i$ repeat in-step after these matching points, by induction, as above. If the recurrence is linear then we can represent the shift map as a matrix $A$, and the repetition occurs via scaling by powers of $A$. In particular $\,f_n\,$ can be computed efficiently by repeated squaring of $A$, e.g. as here for fast computation of Fibonacci numbers.
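The linear case mentioned at the end can be sketched for the Fibonacci recurrence $\,f_{n+2} = f_{n+1} + f_n\,$: its shift matrix is $\left[\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right]$, and repeated squaring of that matrix mod $m$ gives $F(k) \bmod m$ in $O(\log k)$ multiplications (an illustrative sketch; `fib_mod` is a hypothetical helper name):

```python
def fib_mod(k, m):
    """F(k) mod m via repeated squaring of the shift matrix [[1,1],[1,0]]."""
    def mat_mul(A, B):
        return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % m,
                 (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % m],
                [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % m,
                 (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % m]]
    result = [[1, 0], [0, 1]]   # identity matrix
    A = [[1, 1], [1, 0]]        # shift matrix of the Fibonacci recurrence
    while k:
        if k & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        k >>= 1
    # A^k = [[F(k+1), F(k)], [F(k), F(k-1)]], so the answer sits at [0][1].
    return result[0][1]

print(fib_mod(10, 30))  # F(10) = 55, and 55 mod 30 = 25
```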

Bill Dubuque