
I want to know the time complexity of specifically calculating ${n \choose k}$ where it is defined as

$$ {n \choose k} = \frac{n!}{k!(n-k)!}. $$

If the factorial function is recursive $O(n)$:

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

then the time complexity would be $O(n + k + (n-k)) = O(2n)$? This seems counterintuitive: ${n \choose k}$ takes two inputs, yet $O(2n)$ mentions only one of them. Still, $O(2n)$ has the more definite derivation to me, since I can write out $O(n + k + (n-k)) = O(2n)$. I understand that $O(2n) = O(n)$; that simplification is not what I am asking about.
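To make the $O(n + k + (n-k))$ count concrete, here is a sketch of the factorial-based computation; the multiplication counter `count` is my own addition for illustration, not part of the original code:

```python
def factorial(n, count):
    if n == 0:
        return 1
    count[0] += 1  # one multiplication per recursive step
    return n * factorial(n - 1, count)

def binom(n, k):
    # Computes n! / (k! (n-k)!) and reports how many multiplications
    # the three factorial calls performed: n + k + (n-k) = 2n.
    count = [0]
    result = factorial(n, count) // (factorial(k, count) * factorial(n - k, count))
    return result, count[0]

print(binom(10, 3))  # (120, 20): 10 + 3 + 7 = 20 multiplications
```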

VJZ

1 Answer


I'll assume $k<n/2$, for simplicity (otherwise replace $k$ by $n-k$).

One way to calculate it is via

$${n \choose k} = {n (n-1)(n-2) \cdots (n-k+1) \over k!}.$$

This can be calculated using about $2k$ multiplications (roughly $k$ for the numerator and $k$ for $k!$) plus one division. So, if we count each multiplication/division/addition as $O(1)$ time, this is $O(k)$ time.
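A minimal Python sketch of this formula, assuming we keep everything in exact integer arithmetic and perform the single division at the end (where $k!$ is guaranteed to divide the numerator):

```python
def binom(n, k):
    # Use the smaller of k and n-k, as in the answer's simplifying assumption.
    k = min(k, n - k)
    num = 1
    for i in range(k):          # k multiplications: n (n-1) ... (n-k+1)
        num *= n - i
    den = 1
    for i in range(2, k + 1):   # k-1 multiplications: k!
        den *= i
    return num // den           # exact: k! always divides the product

print(binom(10, 3))  # 120
```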

However, that is misleading. The size of the numbers grows dramatically. So, if you want to compute this exactly (as a rational number), we need to operate on very large numbers, which takes more than $O(1)$ time. In particular, the numbers can grow as large as $n \lg k$ bits long, so each multiplication or division might take $O((n \lg k)^2)$ time [*]. So, the running time might be something like $O((n \lg k)^2 k)$ bit operations. If you are a bit cleverer about the order in which you do the multiplications and divisions (multiplying small numbers first, using a binary tree structure to minimize the number of large numbers you have to deal with) you can get this down to something like $O(k^2 \log n)$ bit operations.
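A rough sketch of the binary-tree idea; the helper `product_tree` is illustrative rather than a tuned implementation. Pairing up the factors keeps most multiplications between numbers of similar, smaller size:

```python
def product_tree(terms):
    # Multiply a non-empty list of integers by recursively splitting it in
    # half, so large numbers are only multiplied near the top of the tree.
    if len(terms) == 1:
        return terms[0]
    mid = len(terms) // 2
    return product_tree(terms[:mid]) * product_tree(terms[mid:])

def binom(n, k):
    k = min(k, n - k)
    if k == 0:
        return 1
    num = product_tree([n - i for i in range(k)])   # n (n-1) ... (n-k+1)
    den = product_tree(list(range(1, k + 1)))        # k!
    return num // den                                # exact division

print(binom(100, 50))
```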


Footnote *: I am ignoring sub-quadratic multiplication algorithms. There are algorithms that are asymptotically faster, for very large numbers, but they tend to be only useful when the numbers are super-large, so for simplicity of analysis, I'm ignoring them.

D.W.
  • I wonder how faster Stirling's approximation would be (we have to compute it to the unit precision). – rus9384 May 21 '22 at 08:39
  • Isn't the size up to k log_2 (n) bits? (Much smaller if k is small, say 1,000,000 over 10). Or better min (k, n-k) * log_2 (n) bits. – gnasher729 May 21 '22 at 12:04
  • Stirling's approximation would be fixed time. You wouldn't calculate three factorials, but directly the result of the division. The e^-n part falls away. You get (n/k)^k (n / (n-k)) ^ (n-k), and a bit for the 2pi and the square root. Of course you'd have to compare this to the direct calculation with floating-point. – gnasher729 May 21 '22 at 12:08
  • @gnasher729 For Stirling's approximation to be arbitrary precision (which would be required for large numbers) Stirling series would be used which is not fixed time. – rus9384 May 21 '22 at 12:36
  • @D.W. Is $\lg \implies \log_2$, and would an analysis of bit operations normally factor into time complexity? I see everyone treating standard arithmetic as $O(1)$ when clearly addition is $O(n)$? – VJZ May 22 '22 at 01:08
  • @gnasher729, yup, $k \lg n$ is a better estimate if $k$ is small compared to $n$. Stirling's approximation gives you an estimate; my answer considers the situation if you want to compute the exact value (as a rational). – D.W. May 22 '22 at 03:12
  • @VJZ, yes, $\lg n = \log_2 n$. For your other question, I recommend studying https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations, https://cs.stackexchange.com/q/1643/755, https://cs.stackexchange.com/q/86070/755, https://cs.stackexchange.com/q/68053/755, https://cs.stackexchange.com/q/52640/755, https://cs.stackexchange.com/q/124158/755, https://cs.stackexchange.com/a/87530/755. – D.W. May 22 '22 at 03:16
  • Didn’t notice the last part about Stirling's formula… The Stirling series does not converge. If you try calculating $n!$ for some fixed $n$, adding more terms makes it more precise up to some optimal point, and then it diverges. So it is not useful for getting the exact solution. The usual Stirling formula gives you a reasonable approximation, and does so fast, but not the exact result. – gnasher729 May 22 '22 at 21:56