8

Suppose we had to evaluate the string of multiplications $5 \times 6 \times 7 \times 8$. We could, for instance, order it by doing the biggest multiplications first:

$$ 5 \times \left( 6 \times \left( 7 \times 8 \right) \right)$$

Or, we could do it the other way, bracketing from the smallest:

$$\left( (5 \times 6) \times 7 \right) \times 8$$

In general, is it possible to say which order involves more computational steps? If no general answer can be given, please share some insight into why.


I'll try to explain computational steps using examples. Consider the following product:

$$\begin{array}{r} 56 \\ \times \ 2 \\ \hline \end{array}$$

We'd have one step for multiplying $2$ by $6$, another for multiplying $2$ by $5$, and finally one step for adding the carry from $2 \times 6$ (which is $1$). Hence a total of three steps.

If we had the product $11 \times 12$, we'd have four multiplication steps, giving us two numbers to add: $22$ and $110$. The addition then takes three steps, since we have to add the ones, tens, and hundreds places.

So, seven steps in total.


Notes:

  1. I count a multiplication between any two numbers less than or equal to ten as one step.

  2. Multiplication by a zero digit takes no steps. E.g., $10 \times 1$ is $1$ step, $110 \times 2$ is $2$ steps, and so on.

  3. In the addition step following a multiplication, the additions are counted up to the largest place value among the numbers resulting from the multiplication. For example, in $11 \times 12$, we count up to the hundreds place because of $110$.
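One way to make notes 1 and 2 concrete is the following small counter (my own sketch, not part of the question): count one step per digit-by-digit multiplication, skipping any digit that is zero. The addition/carry bookkeeping of note 3 is left out here.

```python
def mult_steps(a: int, b: int) -> int:
    """Nonzero digit-by-digit multiplications needed for a x b."""
    return sum(1 for x in str(a) for y in str(b)
               if x != '0' and y != '0')

print(mult_steps(10, 1))   # 1
print(mult_steps(110, 2))  # 2
print(mult_steps(11, 12))  # 4
```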


4 Answers

5
  1. If the numbers and the product are all "small" (magnitude less than $2^{63}$), the order doesn't matter at all: the multiplication takes time linear in the number of factors, since it simply requires one IMUL operation per factor.

  2. If the numbers are large and you use the standard (elementary school) multiplication algorithm, the order also doesn't matter. The standard multiplication algorithm requires $O(d_1d_2)$ operations to multiply a $d_1$-digit number by a $d_2$-digit number, and the result will have $d_1+d_2$ digits. If you have three numbers $n_1, n_2, n_3$ with $d_1,d_2,d_3$ digits and multiply the first two together first, you will need $$O(d_1d_2) + O((d_1+d_2)d_3) = O(d_1 d_2 + d_1 d_3 + d_2 d_3)$$ operations, and by the symmetry of the resulting expression you can see that changing the order won't change the cost. You can prove by induction that in general, no matter how you associate the multiplications, multiplying together $k$ numbers with $d_1, d_2, \ldots, d_k$ digits will require $O\left(\left[\sum_{i=1}^k d_i\right]^2 - \sum_{i=1}^k d_i^2\right)$ operations.
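The symmetry claim is easy to check numerically. A sketch (my own, under the schoolbook cost model above, where a $d_1$-by-$d_2$-digit product costs $d_1 d_2$ and yields a $(d_1+d_2)$-digit result):

```python
def chain_cost(digit_counts):
    """Total schoolbook cost of multiplying left-to-right,
    charging d1*d2 per product and growing the accumulator to d1+d2."""
    total, acc = 0, digit_counts[0]
    for d in digit_counts[1:]:
        total += acc * d
        acc += d
    return total

# d1*d2 + (d1+d2)*d3 is symmetric in d1, d2, d3:
print(chain_cost([3, 5, 7]))  # 3*5 + 8*7  = 71
print(chain_cost([7, 5, 3]))  # 7*5 + 12*3 = 71
print(chain_cost([5, 7, 3]))  # 5*7 + 12*3 = 71
```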

  3. For very large numbers, faster algorithms such as Schönhage–Strassen multiplication will be used. I don't know what order is optimal for these algorithms.

user7530
    (1) I don't know what IMUL operation is, if you are talking about some CPU (or abstract) instruction, it is not necessarily true that each IMUL is equally fast. That would have to be an assumption, but then it is equivalent to the claim. So not really interesting. Plus I don't know where $2^{63}$ comes from, again concrete hardware model? (2) the complexity argument does not seem to be relevant. It may be symmetric in complexity, but not necessarily in real performance. I think OP asks about concrete number of steps though. – freakish Mar 04 '23 at 21:51
    @freakish You can perform an n-bit times n-bit multiplication in O(log n) time if you throw O (n^2) amounts of hardware at it. That's well known, except of course only a very small number of hardware manufacturers do it. – gnasher729 Mar 04 '23 at 21:57
    @freakish the “real performance” will depend on many factors not specified in the problem statement. The best I can do here is explain how a computational complexity theorist would think about the problem. – user7530 Mar 04 '23 at 21:57
    @freakish, you seriously propose that you don't know what an IMUL instruction is? I mean seriously? – gnasher729 Mar 04 '23 at 22:16
  • @freakish I don't know. My assumption is that OP is asking: in practice, when implementing a multiplication of multiple factors, should I be associating the factors in some particular order to maximize performance? I tried to answer this, but of course the OP can accept another answer if that's not what they want to know. – user7530 Mar 05 '23 at 04:33
3

I'm considering only the number of multiplication steps, because addition has a negligible time-complexity compared to multiplication.

The order in which you multiply the numbers doesn't affect the number of computational steps.

To convince yourself, take the example of the product $9\times99\times999$.

Order 1:

  1. $(9\times99)\times999$
  2. Two $(1\times2)$ multiplication steps
  3. $891\times999$
  4. Nine $(3\times3)$ multiplication steps
  5. $890109$

Order 2:

  1. $9\times(99\times999)$
  2. Six $(2\times3)$ multiplication steps
  3. $9\times98901$
  4. Five $(1\times5)$ multiplication steps
  5. $890109$

Both orders take a total of eleven multiplication steps each.
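A quick sketch of this count (my own, charging one step per digit-by-digit multiplication, as this answer does):

```python
def mult_steps(a: int, b: int) -> int:
    """One step per digit pair, as in the tally above."""
    return len(str(a)) * len(str(b))

# Order 1: (9*99)*999 -> 2 steps, then 3x3 digits -> 9 steps
order1 = mult_steps(9, 99) + mult_steps(9 * 99, 999)
# Order 2: 9*(99*999) -> 6 steps, then 1x5 digits -> 5 steps
order2 = mult_steps(99, 999) + mult_steps(9, 99 * 999)
print(order1, order2)  # 11 11
```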

Amogh
1

We can assume that every single-word multiplication takes some constant time, and that we need some algorithm to multiply an n-word number by an m-word number. The answer then depends on how fast this n-by-m-word multiplication is. If it takes n × m times as long as a single-word multiplication, then the order of the multiplications doesn't actually matter much; the maths is a bit difficult.

If the time is only a bit more than n + m (which you can get in practice with FFT-based multiplication), then you want to keep the total size of all inputs to all multiplications small. For example, to multiply 1024 single-word operands, performing the multiplications 1×1, 2×1, 3×1, ..., 1023×1 words takes O(n^2) time, while 512 1×1-word multiplications, followed by 256 2×2-word, 128 4×4-word, ..., and finally one 512×512-word multiplication takes O(n log n).
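A rough sketch of this comparison (my own, under the assumed near-linear regime where an n-word by m-word product costs about n + m word operations):

```python
def linear_cost(n):
    """Fold 1-word factors in one at a time: (1+1) + (2+1) + ... + ((n-1)+1)."""
    return sum(k + 1 for k in range(1, n))

def tree_cost(n):
    """Pair up equal-size operands, halving the operand count each round."""
    total, count, size = 0, n, 1
    while count > 1:
        total += (count // 2) * (size + size)  # count//2 products of size+size words
        count //= 2
        size *= 2
    return total

print(linear_cost(1024))  # 524799 -- grows like n^2
print(tree_cost(1024))    # 10240  -- grows like n log n
```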

Anyone interested can do the numbers for the Karatsuba algorithm. There's the obvious problem that published results usually describe the time for an n by n product, not for an n by m product when m can be much smaller.

gnasher729
0

A funny question.

In human terms: years ago, my own choice in doing such a multiplication by hand would be to save the multiplications by smaller numbers to the end, because the mental overhead with "carries" and such tends to be less... so the chance of error is somewhat less.

Slightly more abstractly, there are various speed-ups possible in human terms (e.g., see Trachtenberg's "Speed system of/for basic mathematics"), but I've found these cleverer algorithms burdensome to remember accurately. :)

paul garrett
  • I think it's the other way around: multiplying small numbers first requires less work, e.g. $((2\cdot 2)\cdot 2)\cdot 6$ is only $3$ steps, vs $2\cdot (2\cdot (2\cdot 6))$ which requires $7$ steps. At least according to the OP's assumptions. – freakish Mar 04 '23 at 23:17
  • @freakish, ah, well, maybe it depends on the individual's proclivities, etc. :) "Number of steps" is not a perfectly reliable gauge, for me, ... :) – paul garrett Mar 04 '23 at 23:38
  • @paulgarrett It does not depend on anyone's individual proclivities, OP explained exactly what they meant by "computational step". If you use a different definition, then your answer is incorrect. – Cecilia Mar 05 '23 at 11:41
  • @Richard, well, ok, ... But, sometimes, it is worthwhile to offer reframing of a question to some degree... since, sometimes, the questioner realizes that they'd really wanted to ask a slightly different question. – paul garrett Mar 05 '23 at 19:58