3

From my understanding, the complexity of the algorithm is O(number of inputs × number of bits per input). The number of bits in binary notation is obviously smaller than the number of symbols in unary notation. Why, then, is the complexity exponential for binary but polynomial for unary?

EDIT: I think the binary algorithm is still faster; it only appears slower because of the notation. If $x$ symbols are used in unary, then $n$ has to be less than or equal to $x$, so the running time is polynomial in the input size. In the binary case, it is obviously O(n · 2^number of bits). There might be something wrong in my explanation; feel free to add an answer correcting it.
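To make the difference concrete, here is a small sketch (the function names are my own, not standard) comparing how many symbols the same value $n$ takes in binary versus unary notation:

```python
def binary_length(n):
    # Number of bits in the binary representation of n.
    return n.bit_length()

def unary_length(n):
    # Unary writes n as n repeated symbols, e.g. 5 -> "11111".
    return n

# The binary size grows like log2(n); the unary size grows like n itself.
for n in (8, 1000, 10**6):
    print(n, binary_length(n), unary_length(n))
```

So an algorithm taking $n$ steps is linear in the unary input size but exponential in the binary input size.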

Raphael
aste123

1 Answer

12

Consider the following (silly) function cow, which accepts a number $n$:

def cow(n):
    k = 0
    for i in range(n):
        k = k + 1
    return k

The loop takes $n$ steps, so the time needed for the function to complete on input $n$ is $\Theta(n)$.

However, the size of the input is not $n$ but rather $\log_2 n$ because computers represent numbers in binary. That is, if $b = \log_2 n$ is the size of the input, then the complexity is $\Theta(2^b)$ (because $n = 2^{\log_2 n}$), which is an exponential algorithm when measured in size of the input $b$. If we measure in terms of $n$ then it is a linear algorithm.

If we represented numbers in unary then we would have $b = n$, i.e., it would take $n$ "units" (not "bits" anymore) to represent $n$. In this case the complexity would be $\Theta(b)$, a linear algorithm.
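To see the doubling concretely, here is a small sketch (mine, not part of the original answer) that runs `cow` for increasing bit-lengths $b$; since `cow(n)` performs exactly $n$ iterations and $n = 2^b$, each extra bit of input doubles the work:

```python
def cow(n):
    k = 0
    for i in range(n):
        k = k + 1
    return k

# cow returns k == n, which equals its number of loop iterations.
# With n = 2**b, one more bit of input means twice as many iterations.
for b in range(1, 6):
    n = 2 ** b
    print(b, cow(n))
```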

Andrej Bauer
  • $b = \log_{2}(n) + 1$ is the number of bits, e.g. $\log_{2}(8) = 3$ but clearly you need 4 bits to represent 8.

    Then the $n$ that you calculate with $n = 2^{\log_{2}(n)} - 1$ is the worst-case number of iterations the for-loop can run, proportional to the input length.

    – DenLilleMand Mar 07 '20 at 15:27
  • Are you pointing out that I made an off-by-one error, or something else? If it is an off-by-one error, then I'd point out that you made an error, too, because $\log_2(n) + 1$ is not an integer. I purposely did not want to pollute the answer with stuff like rounding up $\log_2(n) + 1$ because the answer is trying to explain a very basic misconception, not count the exact number of bits needed. – Andrej Bauer Mar 07 '20 at 16:15
  • I found your answer very useful, so I am sorry if the tone was snarky. When trying it out on paper I realized that writing a comment to elaborate on what I found might help someone who thinks like me. Very good note on the log not giving an integer; I had not thought about that. – DenLilleMand Mar 07 '20 at 17:45
  • Sure, no problem there, but I couldn't tell from your comment what it was about. – Andrej Bauer Mar 07 '20 at 19:43