
Let $N$ be a positive integer.

I would like to determine $\mathcal{O}\!\left(\frac{1}{N}\right)$, by which I mean the time complexity of computing the reciprocal of $N$.

Let $n$ denote the number of bits in $(N)_{2}$, the binary representation of $N$.

If ordinary long division is used, am I correct in assuming that the calculation of the reciprocal of $N$ will be $\mathcal{O}(n)$? My reasoning is this---

In the case of dividing an integer $a$ by an integer $b$, the big-$\mathcal{O}$ time complexity associated with producing the binary expansion of the fraction $\frac{a}{b}$ is (I am presuming) $\operatorname{len}(a) \cdot \operatorname{len}(b)$, where $\operatorname{len}$ denotes the number of bits in the binary form of $a$ and $b$, respectively. Hence, for $\left(\frac{1}{N}\right)_{2}$, the binary expansion of $\frac{1}{N}$, I get $\mathcal{O}(1/N) = \operatorname{len}(1) \cdot \operatorname{len}(N) = 1 \cdot n = n$.

Is this reasoning correct?

Incidentally, the $N$s I have in mind are very, very large. I expect that this does not matter in terms of obtaining a big-$\mathcal{O}$ estimate, but please advise me if it does.

Many thanks.

DDS
  • What certainly matters is the representation of $1/N$ that is to be output. We can define the "time complexity" of an algorithm (subject to an assumed model of computation), not the time complexity of a value or expression. – hardmath Sep 21 '20 at 04:12
  • @hardmath Yes, indeed. But I have an algorithm which performs this computation as one of its steps. I am trying to assess the overall complexity of the algorithm, but to do that, I have to assess the complexity of each step. $1/N$ is one of them. – DDS Sep 21 '20 at 05:36
  • Again, how do you represent $1/N$? Clearly if $N$ is "very, very large", then $1/N$ is correspondingly small, and certainly not itself an integer or in most cases a terminating binary fraction. Perhaps you are interested in the repeating part of the binary fraction and the preceding bits that lead up to it? Note that the radix $N$ expression is found trivially. The binary fraction not so much. – hardmath Sep 21 '20 at 13:33
  • @hardmath You are correct!! I am interested in the repeating part of $1/N$. e.g., if $N$ is, say, a $1000$-digit number, and if, theoretically, I had access to a computer powerful enough to compute $1/N$ to, say, $\lfloor \log_{2}(N) \rfloor$ bits of precision to be sure that I am able to produce the exact repetend---that is what I am trying to get a big-$\mathcal{O}$ bound on. Is this enough information to reasonably approximate $\mathcal{O}(1/N)$ within my algorithm? Thank you again. – DDS Sep 21 '20 at 16:41

1 Answer


Assuming "if ordinary long division is used," computing $1/N$ in binary representation far enough "to produce the exact repetend" will take up to $\mathcal O(nN)$ bit operations.

Consider the complexity of a single step of the usual long division algorithm, carried out, we presume, in binary arithmetic. Each step subtracts one $n$-bit operand from another, which means $n$ subtract-with-borrow operations.

We then need to bound the length of the repetend, the repeating sequence of bits in the binary representation of the reciprocal $1/N$, because that determines the number of subtraction steps required by the long division. We cannot be sure a priori how many such steps are necessary, and one might argue that in the steps where the quotient bit is zero, no actual subtraction is necessary. On the other hand we are looking for an upper bound, and furthermore deciding whether the correct quotient bit is zero or one is a comparison of two $n$-bit quantities, so one may need to carry out the subtraction anyway in order to know whether or not it was needed.

Interestingly, the length $k$ of the repeating sequence is the multiplicative order of $2$ modulo the largest odd divisor of $N$. One would generally remove any even factor of $N$ to begin with (the trailing zeros of its binary representation, so a known quantity), thus reducing to the case where $N$ itself is odd. We will assume so for the rest of the discussion.
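
For what it's worth, here is a small Python sketch of that reduction (my own illustration; the helper name is made up): it splits $N = 2^t M$ with $M$ odd by counting trailing zero bits, which only shift the binary point of $1/N$.

```python
# A small sketch (illustrative only): split N = 2^t * M with M odd.
# The t trailing zero bits of N merely shift the binary point of 1/N,
# so the repeating part of the expansion comes from 1/M.
def odd_part(N):
    t = (N & -N).bit_length() - 1   # number of trailing zero bits of N
    return N >> t, t

print(odd_part(56))   # -> (7, 3), since 56 = 2^3 * 7
```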

This is often discussed in the context of a repeating decimal expansion here on Math.SE. See for example this previous Question and Ross Millikan's Answer, tying the length $k$ of the repeating decimal part to the minimal positive integer such that $10^k \equiv 1 \bmod N$.

As a practical matter, although one can separately determine that length in advance, one can simply carry out the long division until a remainder of one is obtained. That's where our division $1/N$ started, so if we reach that point again, all the steps will repeat afterwards.
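
For concreteness, here is a rough Python sketch of that procedure (my own illustration, with made-up names, assuming $N$ is odd and $N > 1$): it emits the quotient bits of the base-$2$ long division and stops as soon as the remainder returns to $1$, at which point the collected bits are exactly the repetend.

```python
# A sketch of base-2 long division for 1/N, assuming N is odd and N > 1.
# It stops when the remainder returns to 1, which is where the division
# started, so the collected quotient bits form the repetend.
def binary_repetend(N):
    bits = []
    r = 1                      # current remainder; we are dividing 1 by N
    while True:
        r <<= 1                # bring down the next binary place
        if r >= N:             # n-bit comparison, as in the step cost above
            bits.append('1')   # quotient bit 1: subtract N from the remainder
            r -= N
        else:
            bits.append('0')   # quotient bit 0: no subtraction needed
        if r == 1:             # remainder 1 recurs: the cycle closes here
            return ''.join(bits)

# 1/7 = 0.001001001..._2, so the repetend is "001" (length 3, the order of 2 mod 7)
print(binary_repetend(7))      # -> "001"
```

Each pass through the loop is the $\mathcal O(n)$ comparison/subtraction discussed above, and the loop runs $k$ times, which in the worst case gives the $\mathcal O(nN)$ bound.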

Now how big can $k$ be? As the multiplicative order of $2$ in $\mathbb Z/N\mathbb Z$ divides Euler's totient function $\varphi(N)$, that gives us an upper bound. In fact Artin's conjecture on primitive roots implies that for infinitely many prime values of $N$, this multiplicative order $k$ will be $N-1$. (Recall the well-known decimal case of $1/7$ having a repetend of length six, because $10^6 \equiv 1 \bmod 7$.) For more information see the Wikipedia article Full repetend prime.
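
If one does want to know $k$ in advance, here is a similarly rough sketch (again my own, assuming $N$ odd and $N > 1$) that computes it as the multiplicative order of $2$ modulo $N$ by repeated doubling; note that this brute-force search itself takes $k$ modular steps, so it is no cheaper than simply running the long division.

```python
# A sketch of computing the repetend length k directly: k is the multiplicative
# order of 2 modulo N, i.e. the least k with 2^k ≡ 1 (mod N).
# Assumes N is odd and N > 1.
def repetend_length(N):
    k, power = 1, 2 % N
    while power != 1:
        power = (2 * power) % N
        k += 1
    return k

print(repetend_length(7))    # -> 3
print(repetend_length(11))   # -> 10 = 11 - 1, so 11 is a full repetend prime in base 2
```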

hardmath
  • According to Wikipedia's Computational Complexity of math. operations, en.wikipedia.org/wiki/… , the division of two $n$-digit numbers is on the order of $\mathcal O(n^2)$. Are you saying that to divide $1$ by $N$ (which has roughly $\log N$ digits), the complexity is on the order of $\mathcal O(nN)$? And would this not translate into something like $\mathcal O(n\,10^n)$? Or am I incorrect in my "translation"? – DDS Sep 24 '20 at 18:55
  • Perhaps this section is what you refer to? If so, note that the "output" specified for division of two $n$-bit numbers is only the $n$-bit integer quotient. Of course the remainder can also be obtained in $\mathcal O(n^2)$ time. But in my answer I'm addressing the output of the repeating sequence in a binary representation of the reciprocal. The main issue is that this repeating sequence will typically be many more bits long than the denominator. – hardmath Sep 24 '20 at 19:58
  • Thank you. I apologize for the atrocious formatting in my first comment. I am no longer permitted to edit it. In your original post, may I conclude that in terms of $n$, $\mathcal{O}(nN)$ roughly equates to $\mathcal{O}(n 10^{n})$? And if so, it thus may take more than exponential time to arrive at the first repeating sequence of digits in the remainder (i.e., the repetend)? Thank you. – DDS Sep 24 '20 at 21:09
  • In view of $N \approx 2^n$, $\mathcal O(nN) = \mathcal O(n2^n)$. So yes, this is exponentially more time in general to arrive at the full repeating sequence of bits. Your conclusion would be correct if it were a decimal expansion, instead of the binary one contemplated in the original post. This looong expanse of the repeating sequence is closely related to the long period of a well-chosen linear congruential generator of pseudo-random numbers. – hardmath Sep 25 '20 at 01:05