
I do not have much experience in mathematics, but I would really like to grasp Big-O notation at the mathematical level. I already read What does the "big O complexity" of a function mean? from the references, but I still do not understand (even graphically) what it means when we say:

$T(n) = O(f(n))$ if and only if there exist positive constants $c$ and $g$ such that $T(n) \le c \cdot f(n)$ for all $n \ge g$.

Specifically, we say that $T(n)$ is upper-bounded by $c \cdot f(n)$. What does that actually mean, and why does it matter? Does it have to do with eliminating constant factors and lower-order terms?

Sorry if the question is kind of confusing, and thanks for the help!

2 Answers


Try it like this: "we can pick a constant $C$ such that, for sufficiently large $n$, $T(n)$ will always be less than $Cf(n)$".
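For instance, here is a toy example (my own, just to make the witnesses concrete): take $T(n) = 3n + 5$ and $f(n) = n$. For every $n \ge 5$ we have $5 \le n$, so

$$T(n) = 3n + 5 \le 3n + n = 4n,$$

and the constant $C = 4$ together with the threshold $n \ge 5$ witnesses $T(n) \in O(n)$.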

Intuitively, this does indeed mean that lower-order terms and constant factors don't matter: lower-order terms stop mattering once $n$ gets sufficiently large, and constant factors can be cancelled out by an appropriate choice of $C$. If $T(n) = n^3 + 2019n^2 + 99999n + 10^{10}$, for example, it will still eventually be dominated by $Cn^3$ as long as $C > 1$ and $n$ is large enough: the $n^3$ term in that expression will eventually outweigh everything else. So we say that $T(n) \in O(n^3)$. (Some people use an equals sign instead; the meaning is the same.)
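If it helps to see actual numbers, here is a quick sketch (mine, not part of the answer; the choice $C = 2$ is arbitrary, and any $C > 1$ would eventually work) that finds the first $n$ past which $T(n) \le C n^3$ holds for this example:

```python
# Numeric sanity check for T(n) = n^3 + 2019 n^2 + 99999 n + 10^10.

def T(n):
    return n**3 + 2019 * n**2 + 99999 * n + 10**10

C = 2  # arbitrary constant > 1

# Scan upward for the first n where C * n^3 overtakes T(n).
n = 1
while T(n) > C * n**3:
    n += 1
print(f"T(n) <= {C} * n^3 for every n >= {n}")  # threshold is around n = 3100

# Beyond the threshold, the ratio T(n) / n^3 keeps shrinking toward 1,
# i.e. the lower-order terms become negligible.
for n in (10**4, 10**5, 10**6):
    print(f"n = {n}: T(n) / n^3 = {T(n) / n**3:.4f}")
```

(The simple upward scan is enough here because once the $n^3$ term overtakes the rest of the polynomial, it stays ahead for all larger $n$.)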

The "sufficiently large $n$", by the way, is why it's called "asymptotic" complexity: we only care about what happens as $n$ goes toward infinity, not what happens for "small" values.

Draconis

The clearest explanation (with a lot of applications) I've seen is Hildebrand's "A short course in asymptotics". It is somewhat heavy going, and more targeted at analysis/number theory than at computer science.

The concepts aren't really too hard, but wrapping your mind around them is critical for computer science.

vonbrand