
CS sometimes seems to take for granted that $\mathcal O(\text{poly}(n))$ is "easy", while $\mathcal O\left(2^{\text{poly}(n)}\right)$ is "difficult". I am interested in research into "difficult" polynomial-time problems: constructed problems where the best algorithm runs in $\Theta(n^c)$ time, where $c$ can be chosen to be large, but a solution can be tested in $O(n)$ time.

Question:

Given an integer $c$, can we construct problems that would:

  • Take $\Theta\left(n^c\right)$ best-case time to solve,
  • While taking $\tilde{\mathcal O}(n)$ time, and $\tilde{\mathcal O}(n)$ space, to test a solution?

($\tilde{\mathcal O}(n)$ is soft-big-oh, meaning $O(n \log^k n)$ for some $k$)


Something I note (I might be mistaken somewhere here) is that, presumably, if there is an $\mathcal O(n)$ algorithm to test the solution, then perhaps there is an $\mathcal O(n)$ reduction to $\rm k\text{-}SAT$. If so, and if $\rm P=NP$ with a polynomial-time algorithm ${\rm S{\small OLVE}}\left(\Phi(\mathbf x)\right)$ running in $O({|\mathbf x|}^{\alpha})$ time, then I think this would contradict our $\Theta(n^c)$ problem whenever $\alpha < c$.


The motivation is to research the possibility of a "one-way function" that is linear(ithmic)-time computable but "difficult"-polynomial-time invertible even in the best case, where "difficult" means a high-degree polynomial rather than the usual exponential-time definition of "difficult". Perhaps such a function could be used for cryptography even if $\rm P=NP$ (a kind of "post-P-equals-NP cryptography", similar to how there is a field of "post-quantum cryptography").

Realz Slaw

2 Answers


If you believe in the exponential time hypothesis, then you can construct such an example by padding SAT. The ETH states that solving SAT on $n$ variables takes time $2^{\Omega(n)}$; let's say the time is $T(n)$. We can assume that SAT instances consist of at most $O(n^3)$ clauses, and so have length at most $\tilde{O}(n^3)$. Pad such an instance by adding $N = T(n)^{1/c}$ spaces (where $c > 1$ need not be an integer). According to the ETH, the resulting language requires time $\Omega(N^c) = \Omega(T(n))$ to solve in the worst case (the "best case" time complexity of a problem is almost always $\tilde{O}(n)$, depending on your model of computation and how devious the problem is), but witnesses can be verified in time $\tilde{O}(N)$ and $O(\log N)$ space; most of these resources are spent on checking that the input is well-formed.
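The padding construction can be sketched in code. This is a toy illustration only: the CNF format (space-separated literals, `;`-separated clauses), the stand-in $T(n) = 2^n$, and all names here are invented for the example, not part of the construction above.

```python
def pad_instance(cnf: str, n_vars: int, c: float) -> str:
    """Pad a CNF encoding with blanks so its length is ~ T(n)^(1/c).
    Here T(n) = 2^n stands in for the ETH lower bound 2^{Omega(n)}."""
    target_len = int((2 ** n_vars) ** (1.0 / c))  # N = T(n)^{1/c}
    return cnf + " " * max(0, target_len - len(cnf))

def verify(padded: str, assignment: dict) -> bool:
    """Near-linear-time check: strip the padding, then evaluate each clause.
    A literal 'x1' is satisfied iff assignment['x1'] is True; '-x1' iff False."""
    cnf = padded.rstrip(" ")
    return all(
        any(assignment[lit.lstrip("-")] != lit.startswith("-")
            for lit in clause.split())
        for clause in cnf.split(";")
    )
```

The point is the asymmetry: `pad_instance` inflates the input to length $N \approx T(n)^{1/c}$, so a $T(n)$-time solver runs in time about $N^c$ measured in the padded length, while `verify` still makes only one linear pass.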

The same idea works even with a much weaker hypothesis, such as P$\neq$NP; I'll leave the details to you.

Yuval Filmus
  • Very interesting. I find the question more interesting when allowing $\rm P=NP$ (though I did not state it explicitly), in the context of finding a polynomial sort of "hardness" that could be used for cryptography in the event that $\rm P=NP$. – Realz Slaw Dec 01 '13 at 20:16

If I am understanding your question right, there are probably many examples of this based on "fixed parameters" of NP-complete problems. E.g., finding a clique with $k$ edges in a graph takes $n^{O(\sqrt k)}$ time by brute force (a clique with $k$ edges has $O(\sqrt k)$ vertices, roughly $\sqrt{2k}$), and a claimed solution can be verified in $O(n)$ time.
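The verification side can be sketched quickly; this sketch assumes the graph is given as adjacency sets, and the representation and names are illustrative. A claimed clique with $k$ edges has only $O(\sqrt k)$ vertices, so checking all pairs costs $O(k)$ adjacency lookups.

```python
from itertools import combinations

def is_clique(adj: dict, vertices: list) -> bool:
    """Check that every pair of the claimed vertices is adjacent.
    With v vertices this makes v(v-1)/2 lookups -- exactly the number
    of edges the clique is supposed to have."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))
```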

vzn
  • But there is no reason to believe a hard problem is only solvable by brute force (in fact, we know this is not true for many NP-hard problems). – Juho Nov 21 '13 at 17:11
  • @Juho that's more an observation than an objection; the answer fits, and RS is not asking for the optimal solution. But yes, basically agreed; this all ties in closely with the P=?NP question (which, as conjectured to be unequal, does in fact assert that there cannot be "much" difference between the optimal and brute-force solutions). – vzn Nov 21 '13 at 17:21
  • A stricter answer: note RS is asking about $\Theta(n^c)$, which also implies $\Omega(n^c)$, i.e. an optimal lower bound; but there are almost no significant (nonlinear) lower bounds known for any problems except those constructed by diagonalization (time/space hierarchy theorems)... – vzn Nov 21 '13 at 17:26
  • You must always specify a model of computation whenever you consider lower bounds; but surely we have plenty of algorithms we know are optimal -- how about comparison-based integer sorting algorithms for a starter? – Juho Nov 21 '13 at 17:31
  • @Juho ?? The model of computation is TMs. Sorting does indeed have a proven lower bound, but it is more of a "toy" problem... a rare case... Basically the entire field of TCS lower bounds is based on simpler models than TMs and a lot of so-far disconnected results... "Surely we have plenty of algorithms we know are optimal"... the current situation is quite the contrary, as probably most complexity theorists would agree. – vzn Nov 21 '13 at 17:37
  • There are lower bound results in very powerful models of computation as well, such as the cell probe model. These results consider problems that are far from "toy problems" (not that I would agree sorting is such a problem), such as data structure lower bounds. – Juho Nov 21 '13 at 17:43
  • @Juho refs? There are just very few nontrivial bounds on TMs, "few and far between". All other models are arguably somewhat contrived and/or not too closely related to TMs. Somewhat related on tcs.se: why are there not even quadratic lower bounds on NP-complete problems? – vzn Nov 21 '13 at 17:44
  • See Fredman-Saks for example. To circumvent the cell probe model bound here, your algorithm must not access memory. – Juho Nov 21 '13 at 17:58
  • You (probably) can't verify cliques in time $O(n)$ on a Turing machine. (Though you can fix that by padding.) – Yuval Filmus Dec 10 '13 at 20:04