5

I'm developing an (IMHO) very interesting algorithm for integer division. This algorithm uses bit shifting (it shifts left for multiplication by 2). I'm wondering whether << c costs $O(1)$ or $O(c)$. I hear it depends on the architecture, but algorithms should ideally be architecture-independent. So, for the purposes of analysing the time complexity of my algorithm, should I consider << c as one primitive operation or as c primitive operations?

I've looked at the other questions here, and they say it depends on the machine.

I'm using the RAM model of computation, but I don't want to assign << c a cost of 1 when it may in fact cost c.
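For concreteness (the algorithm itself isn't shown in the question), here is a minimal sketch of one common shift-based division scheme, restoring long division in C; the function name shift_divide and the 32-bit width are illustrative assumptions, not the asker's actual algorithm:

    #include <stdint.h>

    /* Restoring long division via shifts: a generic sketch of the kind
     * of shift-based division the question alludes to, not the asker's
     * actual algorithm. Computes n / d for d != 0. */
    uint32_t shift_divide(uint32_t n, uint32_t d) {
        uint32_t q = 0, r = 0;
        for (int i = 31; i >= 0; i--) {
            r = (r << 1) | ((n >> i) & 1);  /* bring down bit i of n */
            if (r >= d) {                   /* divisor fits: subtract */
                r -= d;
                q |= (uint32_t)1 << i;      /* set bit i of quotient */
            }
        }
        return q;
    }

Every iteration performs shifts, so whether << c counts as one primitive operation or as c operations directly changes the bound such an analysis produces.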

Tobi Alafin
  • 1,617
  • 3
  • 16
  • 22
  • Of course it depends on the architecture. You have to check your CPU vendor's manual to see how many cycles a shift takes. – adrianN Dec 28 '16 at 14:45

3 Answers

6

Most architectures provide a single instruction for left and right shifts.

Usually this instruction (WLOG, we consider only logical shift left) is LSL, with the following syntax:

lsl $rd, $rs, #offset

where $rd is the destination register in which the result is stored, $rs is the source register holding the value to shift, and #offset is the number of bits to shift.

So you could consider the left or right shift operation as taking $O(1)$, because it completes in a single CPU clock cycle.
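As an illustration of this point (my own sketch, not part of the original answer), a C function like the following typically compiles to that single shift instruction on such architectures:

    #include <stdint.h>

    /* Multiply x by 2^c via a left shift. On most ISAs (e.g. ARM's LSL,
     * x86's SHL) this compiles to one instruction regardless of the
     * value of c, as long as x fits in a machine word. */
    uint64_t mul_pow2(uint64_t x, unsigned c) {
        return x << c;  /* in C this is undefined for c >= 64 */
    }

The single-cycle cost holds only while operands fit in a machine word; multi-word integers are a different story, as the next answer explains.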

dariodip
  • 864
  • 10
  • 19
6

In the RAM machine model, operations on "machine words" cost $O(1)$, where a machine word has width $O(\log n)$ bits, $n$ being the length of the input (in bits). This reflects the instruction sets of real world CPUs, as mentioned in the other answers.

While the RAM machine model makes sense for many algorithmic tasks, it is not a good fit for large integer arithmetic, since in practice the machine word is fixed while the integer length is growing. Instead, one usually measures bit operations. Shifting an $m$-bit integer by $c$ bits takes $O(m+c)$ bit operations.
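To make that count concrete, here is a hedged sketch (my illustration; shl_bits and the one-bit-per-byte representation are assumptions, not from the answer) of shifting an integer stored as an array of bits: producing the result writes each of the $m + c$ output bits once.

    #include <stdlib.h>
    #include <string.h>

    /* Shift an m-bit integer, stored little-endian as one bit per byte,
     * left by c positions. The result has m + c bits and each is written
     * exactly once: O(m + c) bit operations. Caller frees the result. */
    unsigned char *shl_bits(const unsigned char *bits, size_t m, size_t c) {
        unsigned char *out = malloc(m + c);
        if (out == NULL) return NULL;
        memset(out, 0, c);          /* c new low-order zero bits */
        memcpy(out + c, bits, m);   /* the m original bits, moved up */
        return out;
    }

Whether the copy can be avoided by appending or deleting cells at one end depends on the array representation, which is exactly what the comment thread below debates.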

These issues are explored in a paper by Martin Fürer, the inventor of the fastest known integer multiplication algorithm (which is also the fastest known integer division algorithm).

Yuval Filmus
  • 276,994
  • 27
  • 311
  • 503
  • 3
    Fürer's algorithm is no longer the fastest known. – Dec 29 '16 at 00:49
  • I disagree with your analysis of shifting. To shift an $m$-bit integer by $c$ bits, I would need at most $c$ bit operations. To expand: shifting right by $c$ bits is deleting the $c$ least significant bits, and shifting left by $c$ bits is simply adding $c$ trailing 0s, so all in all $c$ operations. I guess the time complexity of shifting is $O(c)$ then. – Tobi Alafin Dec 29 '16 at 15:41
  • @TobiAlafin We will have to disagree. – Yuval Filmus Dec 29 '16 at 22:41
  • I'm not against disagreeing. But please at least explain why you disagree? Is there a fault in my argument? – Tobi Alafin Dec 29 '16 at 22:55
  • @TobiAlafin It depends on the bit operations model. If you think of a number as an array of bits, then you can't shift a number in one operation. – Yuval Filmus Dec 29 '16 at 22:58
  • If it is an array of bits, then to shift left I can add $c$ bits (0s). To shift right, delete the first $c$ bits. – Tobi Alafin Dec 30 '16 at 06:12
  • @TobiAlafin You can shift an array in one operation. It takes one operation per cell. – Yuval Filmus Dec 30 '16 at 07:20
  • Why? To shift right by $c$ bits, delete the first $c$ cells. To shift left, add $c$ cells (set them to $0$s). – Tobi Alafin Dec 30 '16 at 07:41
  • 1
    @TobiAlafin it depends on how you implement arrays. The usual implementation doesn't allow you to add elements on both sides for free, but you could use an alternative implementation. – Yuval Filmus Dec 30 '16 at 08:14
2

On most modern CPU architectures, such as an Intel i5 or i7, bit shifting isn't just constant time; it typically completes in at most a couple of clock cycles. It's generally considered one of the fastest operations you can do.

Joe Horne
  • 165
  • 1
  • 2
  • 7