I've been reading around the area of complexity and arithmetic operations using logic gates; one thing that is confusing me is that $\Theta(n^{2})$ is quoted as the complexity of multiplication by iterative addition. But addition of a number requires $\log_2(n)$ operations, one for each bit, or 8 times that for each NAND gate involved in doing this. So it strikes me as obvious that adding that number $n$ times will have a complexity of $n \log_2(n)$, which is definitely less than $\Theta(n^{2})$. So where is this additional factor of $\frac{n}{\log_2(n)}$ coming from?
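To make the scheme concrete, here is a minimal Python sketch of the repeated addition I have in mind (illustrative only; the function name is my own):

```python
def multiply_by_repeated_addition(a: int, b: int) -> int:
    """Multiply a by b by adding a into an accumulator b times."""
    total = 0
    for _ in range(b):  # one pass per unit of b
        total += a      # one addition of the running total and a
    return total

assert multiply_by_repeated_addition(6, 7) == 42
```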
- Check which model of computation is assumed, and what $n$ denotes exactly. – Raphael Feb 03 '16 at 12:01
- @Raphael +1 Good advice. When working with numbers, this is a very common detail to get mixed up. – Lan Feb 03 '16 at 13:01
- Nobody sane would calculate $a \times b$ by adding $a$ together $b$ times. There are far more efficient multiplication algorithms, some of which you should've learned in grade school. – Ilmari Karonen Feb 03 '16 at 15:26
- I am aware that this is far from how multiplication is executed in modern computers. I was just confused by the complexity given for this method. I am curious, though: which methods should I have been taught for multiplication? I'm not sure I can recall any. I read your link and do recall learning long multiplication (not really something that needs teaching, very intuitive), but that isn't any more efficient for a computer. – Duke of Sam Feb 03 '16 at 15:53
- Ah yes, I see how the "shift and add" method is a lot faster than iterative addition. Not so easy to understand its implementation at a bit level though. – Duke of Sam Feb 03 '16 at 15:58
- Correct me if I'm wrong, but to multiply two $n$-bit numbers using shift and add is complexity $n$! – Duke of Sam Feb 03 '16 at 16:17
1 Answer
Addition of numbers of size $n$ (i.e., $n$-bit numbers) takes time $O(n)$. Don't confuse a number with its encoding size, which is logarithmically smaller.
When multiplying an $n$-bit integer $a$ by an $n$-bit integer $b$ using the iterative addition algorithm, you are adding up to $n$ shifted copies of $a$. Each addition costs you $O(n)$ rather than $O(\log n)$. The numbers $a$ and $b$ themselves could be as large as $2^n$.
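As a concrete illustration of that counting, here is a minimal Python sketch of this shift-and-add scheme (the code and its name are illustrative, not taken from the answer): for $n$-bit inputs the loop runs at most $n$ times, and each iteration performs one addition of numbers that are $O(n)$ bits wide, which is exactly where the $\Theta(n^2)$ bit-operation count comes from.

```python
def shift_and_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers with the shift-and-add method."""
    product = 0
    shift = 0
    while b:
        if b & 1:                   # current bit of b is set...
            product += a << shift   # ...so add a shifted copy of a (an O(n)-bit addition)
        b >>= 1                     # move to the next bit of b
        shift += 1
    return product

assert shift_and_add_multiply(37, 113) == 37 * 113
```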

Yuval Filmus