
In class we learned that division takes $O(k^2)$ time, where $k$ is the bit length of the numbers involved. What would be the runtime of a function that looks like this?

while a % 2 == 0:
    a = a // 2  # integer division; plain / would give a float in Python 3

I am guessing the worst case is $O(\log(k)\,k^2)$, since $a$ is halved each time but each division takes $O(k^2)$.
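To make the iteration count concrete, here is a minimal sketch (my own instrumentation, not part of the function itself) that counts how many times the loop body runs:

def count_halvings(a):
    # counts loop iterations of the code above; assumes a > 0
    steps = 0
    while a % 2 == 0:
        a = a // 2
        steps += 1
    return steps

print(count_halvings(2**63))  # a power of two: the loop runs 63 times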

eatorres
  • Keep in mind that the time of an operation depends on the size of its arguments. $a / 2$ would be implemented as a right bit shift. Asymptotically, the typical bit-shift algorithm is linear in the bit length of $a$. In the worst case $a = 2^n$ and you have to perform $n$ bit shifts, each costing time proportional to the current bit length of $a$. That would take time proportional to $(n+1) + n + (n-1) + \ldots + 2 \approx \frac{n^2}{2}$. There would also be $n$ mod-by-2 operations, which would probably be implemented in $O(1)$ and so can be ignored. So the algorithm is $\Theta(n^2)$ where $n = \log_2 a$, or $\Theta(\log^2 a)$. – Reinstate Monica Mar 15 '18 at 20:35
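Spelled out, the sum in that comment evaluates to

$$\sum_{i=2}^{n+1} i = \frac{(n+1)(n+2)}{2} - 1 = \frac{n^2 + 3n}{2} \approx \frac{n^2}{2}.$$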

2 Answers


Numerical division of two arbitrary $k$-bit numbers is $O(k^2)$, but division by 2 is a special case that is always $O(k)$, and a % 2 is $O(1)$, since it only requires examining a single bit. Since the loop requires $O(k)$ iterations in the worst case, the whole computation is $O(k^2)$ in terms of bit operations.
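To make those bit-operation counts visible, here is a rough sketch (a toy model of my own, not this answer's code) that stores a as a little-endian list of bits, so the $O(1)$ parity check and the $O(k)$ shift are explicit:

def to_bits(a):
    # little-endian bit list, e.g. 12 -> [0, 0, 1, 1]; assumes a >= 0
    bits = []
    while a > 0:
        bits.append(a & 1)
        a >>= 1
    return bits or [0]

def strip_factors_of_two(bits):
    # a % 2 == 0 is just reading bits[0]: O(1)
    while len(bits) > 1 and bits[0] == 0:
        bits = bits[1:]  # one right shift: O(k) list work
    return bits

print(strip_factors_of_two(to_bits(12)))  # [1, 1], i.e. 3

With at most $k$ iterations of $O(k)$ work each, this model runs in $O(k^2)$ bit operations, matching the bound above.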

In practice, however, you'd either

  • have a stored in a shift register to begin with, which means that the division (shift) operation is $O(1)$, making the overall implementation $O(k)$; or
  • use a pointer to find the first nonzero bit of a (which takes $O(k)$) and then copy the bits to the result just once (also $O(k)$), making the overall implementation $O(k)$ (see the sketch below).
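Here's a rough Python sketch of that second bullet (the function name and the guard are my own): locate the first nonzero bit, then shift once:

def remove_trailing_zeros(a):
    assert a > 0  # a = 0 has no nonzero bit and would loop forever above
    tz = 0
    while (a >> tz) & 1 == 0:  # scan for the first set bit: O(k)
        tz += 1
    return a >> tz             # one shift/copy: O(k)

print(remove_trailing_zeros(96))  # 96 = 0b1100000 -> 3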
Dave Tweed

Take the case $a = 2^{63}$, so $k = 64$. The loop will iterate 63 times, i.e. $\Theta(k)$ times. Under the assumption that each division takes $O(k^2)$, the total is $O(k^3)$. On the other hand, that is not the worst case: the worst case is $a = 0$, where the algorithm never terminates.

Using clever methods, multiplication and division in the general case can actually be performed in time only slightly worse than $O(k \log k)$, which would be relevant if $k$ is, say, a few hundred thousand. But if $a$ is stored in binary, then division by 2 can be performed in $O(k)$, so the algorithm (if you check for $a = 0$ first) takes $O(k^2)$.

On the other hand, assuming a binary representation, you can do this a lot faster: count the number of trailing zero bits in $O(k)$, then shift the number right by that many bits in $O(k)$, for a total of $O(k)$.
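As a sketch (assuming a is a positive Python int; the a & -a trick isolates the lowest set bit):

def strip_twos(a):
    assert a > 0                    # guard the non-terminating a = 0 case
    tz = (a & -a).bit_length() - 1  # number of trailing zero bits: O(k)
    return a >> tz                  # one shift: O(k)

print(strip_twos(2**63))  # -> 1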

gnasher729