8

I recently read an interesting comment in a paper about how weirdly useful maths turns out to be. It mentioned that polynomial time doesn't have to mean efficient in practice (e.g., $O(n^{999999999999999999999})$ is polynomial time, but hardly efficient). Yet isn't it the case that all polynomial-time algorithms we actually know of are also realistic, running in at most $O(n^4)$ time or so? I guess my questions are:

  1. Is this surprising?

  2. Are there any examples of algorithms that are polynomial time but not practical?

Arnold
  • This paper is an example for question 2. (See the paragraph just below Theorem 1.1.) – Jan 23 '16 at 00:54
  • 3
    Welcome to CS.SE! So you know for the future, we generally prefer that you ask one question per post; our site works better that way. Anyway, you might want to take a look at https://cs.stackexchange.com/q/13202/755, http://cs.stackexchange.com/q/3523/755, http://cs.stackexchange.com/q/45296/755, http://cs.stackexchange.com/q/210/755, http://cs.stackexchange.com/q/10472/755, http://cs.stackexchange.com/q/38219/755, https://rjlipton.wordpress.com/2010/10/23/galactic-algorithms/ – D.W. Jan 23 '16 at 01:51
  • Some other examples are some algorithms for linear programming; some matrix multiplication algorithms; some network flow algorithms; and more. (Question to community members: is this a question a dup of any of those other questions on CS.SE?) – D.W. Jan 23 '16 at 02:00
  • Not a duplicate, though, @D.W.? – Raphael Jan 24 '16 at 16:01
  • 1
    Please restrict yourself to one question. Seeing as there are so many duplicates, the second is obsolete; I'd remove it were there not so many answers. The first question calls for opinion, not fact, so I daresay it'd be closed on its own. I'll let the community decide: which question takes precedence? Is this a duplicate? – Raphael Jan 24 '16 at 16:03

3 Answers

10

The comment is wrong. It is very easy to give examples of polynomial-time algorithms that aren't practical at all:

  1. The ellipsoid algorithm for solving linear programs runs in polynomial time but is too slow to be practical. The simplex algorithm, whose worst-case running time is exponential, is preferred in practice.

  2. The AKS primality testing algorithm runs in polynomial time but is too slow to be practical. Instead, randomized polynomial-time algorithms such as Miller-Rabin are used (see the sketch after this list). We sacrifice certainty for performance.

  3. Fast matrix multiplication algorithms run asymptotically faster than high-school matrix multiplication (both are polynomial), but are too slow to be practical. The high-school algorithm is used in practice.

  4. A similar issue occurs in fast integer multiplication algorithms. The algorithm with the best asymptotic complexity, Fürer's algorithm, is too slow to be used in practice. Instead, relatively simple algorithms are used even for quite large integers.

  5. Algorithms on big data need to run in near-linear time, $O(n \log^{O(1)} n)$, to be practical at all.
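To make item 2 concrete, here is a minimal sketch of the Miller-Rabin randomized primality test, the kind of algorithm used in practice instead of AKS (the function name is illustrative, and this is a sketch rather than a vetted implementation): it runs in polynomial time but is only *probably* correct.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin randomized primality test (sketch).
    "Composite" answers are always correct; "prime" answers are
    wrong with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # handle small primes directly
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)     # random base
        x = pow(a, d, n)                   # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # a is a witness: n is composite
    return True                            # probably prime
```

Each round costs one modular exponentiation, so the test is fast even for thousand-bit numbers, whereas AKS, despite its polynomial bound, is far slower at such sizes.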

These examples show that identifying polynomial time with practical is not accurate, and that what counts as practical depends on the circumstances. Researchers in theoretical algorithms feel a need to justify their field of research, and so they ideologically believe in the sentiments expressed in the comment you mention. You shouldn't take such comments too literally.

In fact, many algorithms used in practice are heuristic, and we have no estimates on their running time other than empirical results. Such algorithms don't fit into the theoretical framework at all, yet they are very useful in practice. Several (but not all) machine learning algorithms belong to this class in terms of running time alone (to say nothing of the quality of their output), as do search algorithms such as A* and alpha-beta, and SAT solvers.

Yuval Filmus
6

Question (1) is a tricky one for which I have never seen a good explanation. One suggestion is that we're much more likely to understand and solve simpler problems; the harder ones take special skills, so the chance that the right person works on a given problem and comes up with the answer is low. This is all very hand-wavy, though.

For (2) there are definitely examples; in fact, this question has already been asked and answered! Note also that there's a chain of links to a cstheory.SE question and a math.SE question in there that have other examples.

Luke Mathieson
5

I have seen the following explanation regarding your question (1):

The powers of $n$ in the running time usually arise from for loops or similar constructs. Each for loop is there because we, as problem solvers, had an idea of how to break up the problem or index over something useful. Since we usually use only a small number of ideas, the powers tend to be small. It's hard to imagine an algorithm with 20 nested for loops where, inside the very last loop, we actually do something that depends on all 20 indices.

This argument is appealing to me, but it is quite weak because it only deals with for loops. For instance, recursion can simulate any number of nested for loops, though one can then find good reasons why such a bizarre construction would not arise naturally. In any case, I think this argument could be strengthened by analyzing many different cases; here I present just the high-level idea, with a small illustration below.
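As a minimal illustration (in Python, with illustrative names) of how small polynomial degrees arise, consider triangle counting: each nested loop corresponds to one "idea" (pick a vertex), and the innermost step genuinely uses all three indices, giving $O(n^3)$.

```python
def count_triangles(adj):
    """Count triangles in an undirected graph given as an n x n
    adjacency matrix. Three nested loops, each picking one vertex,
    and the innermost test uses all three indices -- hence O(n^3)."""
    n = len(adj)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if adj[i][j] and adj[j][k] and adj[i][k]:
                    count += 1
    return count
```

Writing natural code of this shape with 20 nested loops, where the last test really depends on all 20 indices, is hard to imagine; that is the intuition behind the small exponents we tend to see.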

Denis Pankratov