Using the grade school method of multiplying two binary numbers takes $O(n^2)$ time, where $n$ is the number of bits in each number. Why is this true?
4 Answers
As computer scientists, consider two numbers to be multiplied, $A$ and $B$.
We can rearrange the problem as follows. Let the smaller number have $n$ bits and the larger have $m$ bits. We can break the larger number into $\lceil m/n \rceil$ chunks of $n$ bits each, multiply the smaller number by each chunk separately, and then add the shifted results together.
Breaking the multiplication into pieces this way lets us focus on the essential case: multiplying two $n$-bit numbers. It is generally easiest to describe multiplication in these terms, because other multiplication problems reduce to this one.
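As a rough sketch of that reduction (my own illustration, not from the answer; it assumes non-negative Python integers and leans on Python's built-in `*` for the inner $n$-bit products):

```python
# A sketch of reducing an m-bit by n-bit product to several
# n-bit by n-bit products (assumes non-negative Python ints).

def multiply_by_chunks(small, large, n):
    """Multiply small (n bits) by large (m >= n bits) by splitting
    large into n-bit chunks and summing the shifted chunk products."""
    result = 0
    shift = 0
    mask = (1 << n) - 1
    while large:
        chunk = large & mask                  # low n bits of the large number
        result += (small * chunk) << shift    # one n-bit x n-bit product
        large >>= n
        shift += n
    return result

print(multiply_by_chunks(11, 1_000_003, 4))   # 11000033
```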
As for the $O(n^2)$ term, it comes from having to multiply each bit of $A$ with each bit of $B$. Since each number has $n$ bits, this takes $n^2$ bit multiplications, which we express in asymptotic (Big-O) notation as $O(n^2)$.
All other operations require at most $n^2$ steps, which is why we describe the total as $O(n^2)$. For example, there are also about $n^2$ bit additions: the $n^2$ single-bit products must all be added together, which again takes $O(n^2)$ time. In general, each type of arithmetic used in the problem contributes $O(n^2)$ operations or fewer.
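To make that counting concrete, here is a minimal Python sketch (mine, not from the original answer) of the grade-school method that tallies the single-bit multiplications; for two $n$-bit inputs the tally is exactly $n^2$:

```python
# Grade-school binary multiplication with an explicit count of
# single-bit multiplications (AND operations).

def grade_school_multiply(a_bits, b_bits):
    """Multiply two n-bit numbers given as equal-length bit lists
    (least significant bit first). Returns (product_bits, bit_mults)."""
    n = len(a_bits)
    bit_mults = 0
    result = [0] * (2 * n)
    for i, a_bit in enumerate(a_bits):        # one row per bit of a
        carry = 0
        for j, b_bit in enumerate(b_bits):    # one column per bit of b
            bit_mults += 1                    # each bit pair costs one AND
            total = result[i + j] + (a_bit & b_bit) + carry
            result[i + j] = total & 1
            carry = total >> 1
        result[i + n] += carry                # row's final carry
    return result, bit_mults

# 1011 (11) * 1101 (13) = 143; bits are listed LSB first.
product, count = grade_school_multiply([1, 1, 0, 1], [1, 0, 1, 1])
print(product, count)   # [1, 1, 1, 1, 0, 0, 0, 1] (= 143), 16 == 4 * 4
```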

- @awfsdioafj: Welcome! Let us know if there is anything else that you would like to know. – Matt Groff Oct 31 '12 at 23:47
- Thank you! One last thing: aren't there at most $2n$ additions? There are $2n - 1$ columns, plus possibly one more for a final carry. Why are there $n^2$ additions? – awfsdioafj Oct 31 '12 at 23:52
- @awfsdioafj: Yes, but each of those $2n$ additions has up to $n$ terms, so they don't each complete in constant time. (Strictly speaking this is beside the point, because $O(n^2)$ by definition expresses an upper bound, so counting too much work will never give a wrong big-O result.) – hmakholm left over Monica Oct 31 '12 at 23:55
Let $a$ and $b$ be binary numbers with $n$ digits each. (We use $n$ digits for both since that is the worst case.) When using the partial products (grade school) method, you take one digit of $a$ and multiply it with each digit of $b$. This single pass takes $n$ steps. The process must be repeated for each digit of $a$. Since $a$ has $n$ digits and $b$ has $n$ digits, the worst-case runtime is $O(n^{2})$.
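As a concrete illustration (with numbers chosen here for the example), multiplying $1011_2 = 11$ by $1101_2 = 13$ produces one shifted partial product per digit of the second factor, and each row takes $n = 4$ bit multiplications:

$$
\begin{array}{r}
1011\\
\times\ 1101\\ \hline
1011\\
0000\phantom{0}\\
1011\phantom{00}\\
1011\phantom{000}\\ \hline
10001111
\end{array}
$$

Summing the four shifted rows gives $10001111_2 = 143 = 11 \times 13$.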

- Adding up the partial sums takes $O(n^{2})$ time, and $O(n^{2} + n^{2}) = O(n^{2})$. – Unreasonable Sin Oct 31 '12 at 23:48
This $O(n^2)$ operation count applies to numbers larger than 64 bits. If the numbers fit in 64 bits, the multiplication is done in one clock cycle, i.e. in $O(1)$ time.
- Could you describe how a multiplication algorithm could be done in such a case in $O(1)$? A link would be even better :) – User Mar 04 '14 at 15:53
If parallel processing is allowed, and $O(n^2)$ parallel processing units are available, then addition/subtraction can be done in $O(\log n)$ time using a carry-lookahead adder. Booth's algorithm uses $O(n)$ additions to multiply/divide two $n$-bit numbers.
Now, applying pipelining, the total time to compute the multiplication is $O\left(\frac{n}{\log n} \times \log n\right) = O(n)$.
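For reference, here is a short software sketch (my own, assuming non-negative $n$-bit integers) of the radix-2 Booth recoding the answer refers to. Each loop iteration performs at most one addition or subtraction of a shifted copy of $a$, giving $O(n)$ additions in total; the hardware speedups described above (carry-lookahead adders, pipelining) are not modeled here:

```python
# A sketch of radix-2 Booth recoding, assuming non-negative n-bit
# integers (hardware versions also handle two's-complement inputs).

def booth_multiply(a, b, n):
    """Multiply a by the n-bit number b using Booth recoding.

    Booth's identity: b = sum over i of (b[i-1] - b[i]) * 2^i,
    so each bit position triggers at most one add or subtract.
    """
    product = 0
    prev_bit = 0
    for i in range(n):
        bit = (b >> i) & 1
        if bit == 1 and prev_bit == 0:
            product -= a << i      # a run of 1s begins: subtract a * 2^i
        elif bit == 0 and prev_bit == 1:
            product += a << i      # a run of 1s ends: add a * 2^i
        prev_bit = bit
    if prev_bit == 1:              # a run of 1s reaches the top bit
        product += a << n
    return product

print(booth_multiply(11, 13, 4))   # 143
```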

- $n$ here represents the length of the number (the number of its binary digits), not the number itself. – Artium Oct 31 '12 at 23:30