
We often hear about algorithms whose running time is polynomial and algorithms whose running time is exponential. But is there an algorithm whose time complexity lies between polynomial time and exponential time?

Nathaniel
lz9866
  • To quote Aaronson, "In some contexts, 'exponential' means $c^n$ for some constant $c>1$, but in most complexity-theoretic contexts it can also mean $c^{n^d}$ for constants $c>1$ and $d>0$". Some intermediates proposed in answers here are of the latter form for some $d\in(0,1)$. Could you clarify which meaning of sub-exponential you intended? – J.G. Dec 23 '21 at 12:59
  • I think the question should be rephrased. In fact, most of the answers below are answering a different question. Making up an algorithm that takes a specific time is extremely easy and not particularly interesting. Most of the answers below focus instead on finding a problem that can be solved in subexponential time but cannot be solved in polynomial time. – Stef Dec 24 '21 at 11:03

4 Answers


There is a category of time complexity called quasi-polynomial. It consists of time complexities of the form $2^{\mathcal{O}(\log^c n)}$ for some constant $c > 1$. Such a function is asymptotically greater than any polynomial, but less than any exponential function.

Another category is sub-exponential time, whose name speaks for itself. It is sometimes defined as $2^{o(n)}$.

The problem of graph isomorphism can be solved in sub-exponential time, but no algorithm in polynomial time is known.
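To see concretely how quasi-polynomial growth sits between the two, one can compare the logarithms of the three growth rates. This is a minimal sketch; the exponents $k = 10$ and $c = 2$ are arbitrary illustrative choices, not anything from the definitions above:

```python
import math

def log_poly(n, k=10):
    """Natural log of n^k (polynomial growth)."""
    return k * math.log(n)

def log_quasi_poly(n, c=2):
    """Natural log of 2^(log2(n)^c) (quasi-polynomial growth)."""
    return (math.log2(n) ** c) * math.log(2)

def log_exponential(n):
    """Natural log of 2^n (exponential growth)."""
    return n * math.log(2)

# For large n, the quasi-polynomial rate eventually dominates any
# fixed polynomial while staying far below true exponential growth.
n = 10**9
assert log_poly(n) < log_quasi_poly(n) < log_exponential(n)
```

Comparing logarithms avoids computing astronomically large values directly; the ordering of the logs is the ordering of the functions themselves.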

Nathaniel

The general number field sieve, the most efficient known algorithm for factoring large numbers, has a runtime that's roughly $\exp(c\, n^{1/3} (\log n)^{2/3})$ for a constant $c$, where $n$ is the number of digits in the number to be factored. This runtime grows with $n$ faster than any polynomial, but slower than exponential.
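To get a feel for this growth rate, here is a sketch using the simplified form $\exp(n^{1/3})$, ignoring the $(\log n)^{2/3}$ factor and the constant $c$. It looks at how much the cost grows when the input size doubles:

```python
import math

def log_cost(n):
    """Natural log of the simplified GNFS-style cost exp(n^(1/3))."""
    return n ** (1.0 / 3.0)

# Doubling the input size from n to 2n multiplies the cost by
# exp((2n)^(1/3) - n^(1/3)). Unlike a polynomial n^k, where doubling
# always costs the fixed factor 2^k, this factor keeps growing with n;
# but it stays tiny compared with the 2^n extra factor a truly
# exponential algorithm would incur.
for n in (100, 1000, 10000):
    factor = math.exp(log_cost(2 * n) - log_cost(n))
    print(f"n={n}: doubling the input multiplies the cost by ~{factor:.1f}")
```

The ever-growing doubling factor is what rules out a polynomial bound, while its slow growth is what keeps the runtime below exponential.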

This class of runtimes is generally still considered "hard" for the purposes of practical computation (hence the security of RSA and other cryptography systems that depend on the difficulty of factoring). The cutoff for "practically efficient to compute" is generally taken to be right above polynomial growth, not right below exponential growth.

tparker
  • Small add: if one can solve factoring, one can solve the RSA problem; for the reverse direction there is no proof yet. So one may find a way to solve the RSA problem and access the messages without factoring. – kelalaka Dec 22 '21 at 21:26
  • @kelalaka Indeed, thanks for the add. RSA does indeed depend on the hardness of factoring, but it also depends on the hardness of other tasks that might end up being easier than factoring. It’s interesting to note that the most efficient implementation of Shor’s algorithm to decrypt an RSA message doesn’t actually involve factoring any numbers. – tparker Dec 22 '21 at 22:55
  • Shor's algorithm is a period finding algorithm that can be used to factor the modulus. Are we talking about Shor's quantum algorithm? – kelalaka Dec 22 '21 at 22:58
  • ‘The cutoff for "practically efficient to compute" is generally taken to be right above polynomial growth’ – well, that depends a lot on the field. In many applications, the cutoff between “efficient” and “infeasibly expensive” lies between $\mathcal{O}(n\cdot\log n)$ and $\mathcal{O}(n^2)$... – leftaroundabout Dec 23 '21 at 00:39
  • @kelalaka Yes, Shor's quantum algorithm can be used to factor the modulus, and this is one way that it can be used to decrypt an RSA-encrypted message - but it turns out that there's an even (slightly) more efficient implementation of Shor's algorithm that directly decrypts the message without ever factoring the modulus. – tparker Dec 23 '21 at 15:01
  • @leftaroundabout Interesting - what's an application with an algorithm that's $\mathcal{O}(n^2)$ but computationally infeasible in practice? – tparker Dec 23 '21 at 15:03
  • @tparker I think the comment was more that what constitutes "computationally infeasible" in practice is domain-dependent. – Jared Smith Dec 23 '21 at 15:14
  • @JaredSmith Yeah, isn't that what I said too? I'm asking for an example of a domain in which there's an $\mathcal{O}(n^2)$ algorithm that's computationally infeasible. – tparker Dec 23 '21 at 15:39
  • @tparker you could theoretically train a deep neural network by forward-mode automatic differentiation. That's $\mathcal{O}(n^2)$ in the number of parameters. But in practice, these networks always have so many parameters that this would be infeasible. – leftaroundabout Dec 23 '21 at 15:43
  • @tparker - "infeasibly expensive" (what leftaroundabout said) doesn't necessarily mean "computationally infeasible" (which is generally what we're talking about in this question)! It might just mean in your application you can't possibly afford it. Typically, these would be computations which were actually real time (e.g., video processing, gotta do something every 16ms) or where a person is waiting for an answer (e.g., on 30 items O(n log n) might mean a 30 second wait but O(n^2) might mean a 300 second wait ... - at least, that's what it means to me ... – davidbak Dec 23 '21 at 20:27
  • @tparker Classical molecular dynamics simulations or gravitational simulations are naturally $O(n^2)$, where $n$ is the number of atoms, stars, or other particles, since you need to calculate the force between each pair of particles. This becomes infeasible for more than a few hundred atoms. In practice, most simulations use approximations that are $O(n \log n)$, such as the particle-mesh Ewald algorithm for electrostatics or the Barnes-Hut algorithm for gravity. – WaterMolecule Dec 24 '21 at 16:04
  • Big-O notation is an academic property. Whether something is feasible depends on the actual numbers involved, not the asymptotic behavior as $n$ goes to infinity. For $n = 1000$, $e^{0.00001n}$ is feasible, and $n^{1000}$ is not. – Acccumulation Dec 25 '21 at 18:08
  • @tparker is this what you mean on Shor's algorithm? – kelalaka Dec 27 '21 at 21:22
  • @kelalaka Yes, and I posted an answer at that link. – tparker Dec 27 '21 at 22:01

Other answers have given some specific examples of problems whose difficulty is conjectured to be between polynomial and exponential time, but the Time Hierarchy Theorem gives a general proof that for any time-constructible function $f(n)$, there is some decision problem which can be solved in time $f(n) \cdot \log^2(n)$ but not in time $f(n)$.

Namely: given an encoding of a Turing machine $M$ and an input $x$, does $M$ accept $x$ within $f(|x|)$ steps?

So taking $f(n) = 2^{\sqrt{n}}$, the problem of deciding whether an arbitrary Turing machine halts within $2^{\sqrt{n}}$ steps is solvable in time $n \cdot 2^{\sqrt{n}}$, but not solvable in time less than $2^{\sqrt{n}}$.

Tjaden Hess

Sure, e.g. you can easily build an algorithm taking $\Theta(2^{\sqrt{n}})$ steps, and this is slower than polynomial but faster than exponential. For this, compute $\frac{2^{\sqrt{n}}}{\sqrt{n}}$ (in binary), and then count down from there, making sure to make a full pass through the array holding the counter each time.

If you just want something intermediate, you can make it even simpler and just generate an array of length $\sqrt{n}$ and count up from $00\ldots 0$ to $11\ldots 1$. This clearly takes at least $2^{\sqrt{n}}$ steps, and at most $\sqrt{n} \cdot 2^{\sqrt{n}}$.
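The simpler counting construction can be sketched as follows (with $m = \lceil\sqrt{n}\rceil$; the function name and the decision to return the increment count are my own choices for illustration). It counts a length-$m$ binary array from $00\ldots0$ up past $11\ldots1$ and performs exactly $2^m$ increments:

```python
def count_up(m):
    """Count an m-bit binary array from 00...0 up past 11...1.

    Each increment touches between 1 and m cells, so the total work
    lies between 2^m and m * 2^m basic steps, as the answer claims.
    """
    bits = [0] * m
    increments = 0
    while True:
        increments += 1
        i = m - 1
        # Binary increment: clear trailing 1s, then set the first 0.
        while i >= 0 and bits[i] == 1:
            bits[i] = 0
            i -= 1
        if i < 0:          # wrapped around past 11...1
            return increments
        bits[i] = 1
```

Calling `count_up(m)` with `m = math.ceil(math.sqrt(n))` gives a runtime of $\Theta(2^{\sqrt{n}})$ up to the polynomial factor $\sqrt{n}$, which is exactly the intermediate growth rate being described.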

Arno
  • Can you help give me an example of such an algorithm? – lz9866 Dec 22 '21 at 13:37
  • Just count numbers up to it. – Rinkesh P Dec 22 '21 at 14:02
  • You've essentially just told OP "sure, it's easy". That's not much of an answer. – einpoklum Dec 22 '21 at 21:59
  • @einpoklum I find a bit more content here than just "it's easy", specifically because it names an asymptotic class between the two. I personally took the question to essentially be asking if there were any such. – Daniel Wagner Dec 22 '21 at 22:20
  • @DanielWagner: One might interpret OP to not be aware of this, but - it's not what OP literally wrote. – einpoklum Dec 22 '21 at 22:28
  • @einpoklum My point is that it is easy to build an algorithm with intermediate complexity, as opposed to understanding some naturally-occurring example. – Arno Dec 23 '21 at 11:11
  • @Arno: So build one, and this will be a swell answer. – einpoklum Dec 23 '21 at 11:23
  • @Arno I think the confusion in the comments here is that you are actually giving an algorithm (as was literally asked in the question) and not a problem (which wasn't asked, but would be way more interesting, and is what the other answers are giving). – Stef Dec 24 '21 at 11:04