
Is there an algorithm for the following problem:

Given a Turing machine $M_1$ that decides a language $L$, is there a Turing machine $M_2$ deciding $L$ such that $t_2(n) = o(t_1(n))$?

The functions $t_1$ and $t_2$ are the worst-case running times of Turing machines $M_1$ and $M_2$ respectively.

What about space complexity?

Pål GD
StaticBug

3 Answers


Here is a simple argument showing that these problems are undecidable, i.e. there is no algorithm to check whether a given algorithm is optimal with respect to its running time or memory usage.

We reduce the halting problem on blank tape to your problem about running-time optimality.

Let $M$ be a given Turing machine. Let $N$ be the following Turing machine:

$N$: on input $n$
1. Run $M$ on blank tape for (at most) $n$ steps.
2. If $M$ does not halt in $n$ steps, run a loop of size $2^n$, then return NO.
3. Otherwise, return YES.
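The construction of $N$ can be phrased as a Python sketch. Here `halts_within` is a hypothetical step-bounded simulator for $M$ on the blank tape (not part of the original answer, assumed for illustration):

```python
def make_N(halts_within):
    """Build the machine N of the reduction.

    halts_within(n) is a hypothetical step-bounded simulator: it reports
    whether M halts on the blank tape within n steps.
    """
    def N(n):
        # Steps 1 and 3: simulate M for at most n steps; accept if it halted.
        if halts_within(n):
            return "YES"
        # Step 2: M did not halt in n steps, so burn 2**n steps, then reject.
        for _ in range(2 ** n):
            pass
        return "NO"
    return N
```

If $M$ halts (say within $k$ steps), then `N(n)` returns YES in constant time for all $n \geq k$; if $M$ never halts, `N(n)` performs the $2^n$-step loop on every input.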

There are two cases:

  1. If $M$ does not halt on the blank tape, the machine $N$ runs for $\Theta(2^n)$ steps on input $n$, so its running time is $\Theta(2^n)$. In this case $N$ is clearly not optimal: it always returns NO, so the language it decides can be decided in constant time.

  2. If $M$ halts on the blank tape, then $N$ runs for a constant number of steps on all large enough $n$, so its running time is $O(1)$. In this case, $N$ is obviously optimal.

In short:

$$M \text{ halts on blank tape } \Leftrightarrow N \text{ is optimal}$$

Moreover, given the code for $M$ we can compute the code for $N$. Therefore we have a reduction from the halting problem on the blank tape to the running-time optimality problem. If we could decide whether a given Turing machine $N$ is optimal, we could use the above reduction to check whether a given machine $M$ halts on the blank tape. Since halting on the blank tape is undecidable, your problem is also undecidable.
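Put together, a hypothetical optimality oracle would yield a decider for blank-tape halting. In the sketch below, both `is_optimal` (the assumed oracle) and `build_N` (the computable map from the code of $M$ to the code of $N$) are illustrative assumptions, not real functions:

```python
def decide_blank_tape_halting(M_code, build_N, is_optimal):
    """Hypothetical decider for the blank-tape halting problem.

    Assumes an oracle is_optimal deciding running-time optimality and a
    computable transformation build_N producing the code of N from the
    code of M, as in the reduction above.
    """
    N_code = build_N(M_code)   # effective: computable from M's code
    return is_optimal(N_code)  # True iff M halts on the blank tape
```

Since no such `is_optimal` can exist (blank-tape halting is undecidable), the optimality problem is undecidable too.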

A similar argument can be used for space, i.e. it is also undecidable to check if a given Turing machine is optimal regarding the space it uses.

An even stronger statement is true: we cannot decide whether a given computable function is an upper bound on the time complexity of computing another given computable function. Similarly for space. In other words, even basic complexity theory cannot be automated by algorithms (which can be considered good news for complexity theorists ;).

Kaveh
  • Just want to mention that in the original question, OP assumed that $M_1$ decides the language in quadratic time. – Pål GD Feb 02 '13 at 12:52
  • Please clarify that you look at asymptotic optimality. Even in case 2, $n$ is not strictly optimal; the function $n \mapsto \textrm{YES}$ can be computed in one step, whereas $N$ needs more than $n_0$ (for large $n$), with $n_0$ the length of the computation of $M$ on blank tape. – Raphael Feb 02 '13 at 14:12
  • Ah, the question changed since I last read it. Never mind. – Raphael Feb 02 '13 at 14:13
  • @PålGD, I think OP used that as an example (based on the original question posted on cstheory). You can check the comments under that question. – Kaveh Feb 02 '13 at 14:52

As others have mentioned, the answer is no.

But there is an interesting article by Blum, "A Machine-Independent Theory of the Complexity of Recursive Functions". He showed that there are functions with the property that no matter how fast a program computes them, another program exists that computes them very much faster (Blum's speedup theorem).

A very nice property!

Reza

Ha! Were the answer yes, we would be living in a different world.

Imagine that the answer to your question were yes (and that we knew an algorithm $A_0$ answering it). Then for any algorithm $A$ for a language $L$, we would be able to tell (using $A_0$) whether $A$ is optimal or not.

Unfortunately, this is not possible, and indeed I personally think proving (non-trivial) optimality is the most interesting (and difficult) problem in computer science. As far as I know (I would be glad to be corrected), there exists no optimality result for any polynomial-time problem, except, of course, the trivial optimality of algorithms taking time proportional to the input size.

  • For some problems there are known bounds of the form $\Omega(N)$, and algorithms which satisfy this. Simple examples are e.g., sorting by comparison, finding the least element of an array. – vonbrand Jan 31 '13 at 20:28
  • First, "asymptotically optimal" is not the same as "optimal". Second, you don't answer the question. Third, there are $\Omega(n \log n)$ lower bounds for (certain kinds of) sorting algorithms. – Raphael Feb 01 '13 at 10:56
  • @vonbrand - that's what I meant by algorithms taking proportional to input size. – t to the t Feb 01 '13 at 13:42
  • @Raphael - First, it is. Second, my first sentence in English means that the answer is no. Third, as you said : for certain kinds of sorting algorithms, no one has been able to prove any bound worse than input size for the problem of sorting itself. – t to the t Feb 01 '13 at 13:44
  • @ttothet Ok, I'm afraid it will be fruitless but I'll try again. 1) No, not at all. If you save only one step on every input, you have a better algorithm than before, even though it has the same asymptotic runtime. 2) No, it does not. It can also mean "I don't know, but if yes, then X". This is not uncommon (cf P?=NP). 3) You claimed there were no non-trivial lower bounds (on asymptotics, I assume) at all. That is wrong. Do your homework, please. – Raphael Feb 01 '13 at 13:50
  • Ok, I'm afraid it will be fruitless but I'll try again too. 1) Yes. Search in google scholar for "an optimal algorithm for". You will see that NONE of those papers claim any constant-factor optimality. 2) Yes it does. If the answer were true, we would be living in a different world, alas, we are in this same particular world, so the answer is not true. 3) No, it is correct (as far as I know - and your counterexample is not valid - I would be very glad to hear a real counterexample if there exists one). There exists no language $L$ in $P$ that has a proven lower bound greater than the input. – t to the t Feb 01 '13 at 15:05
  • @Kaveh - correct, I completely concur with your first sentence. All I am pointing to is, I don't know any problems in $P$ with non-trivial lower bounds either. The problem you're alluding to (clique) is NP-complete, and not yet shown to be in $P$ so unfortunately it's not a counterexample for my statement. – t to the t Feb 01 '13 at 18:38
  • @Kaveh - Ah - your previous comment didn't render very well (look after the size) so I hadn't understood what you said - my apologies. Can you point me to the paper that proves the $n^5$ lower bound? – t to the t Feb 01 '13 at 21:46
  • @Kaveh - I looked through rather quickly, and as far as I understand, Razborov's results are on boolean circuit depth complexity (if so, this would not entail any direct result on asymptotic complexity of the problem). – t to the t Feb 02 '13 at 00:09
  • @Kaveh I don't know of any superlinear lower bound on the running time of a Turing machine for a problem in P. I strongly suspect you're referring to Razborov's result on the monotone circuit complexity of clique. Monotone circuits are quite a bit weaker than Turing machines. – Sasho Nikolov Feb 02 '13 at 06:18
  • @SashoNikolov: If you consider a one-tape Turing machine, then it is proven that you need $\Omega(n^2)$ time to decide whether the input word is a palindrome or not. – Martin Jonáš Feb 02 '13 at 08:58
  • My mistake, Sasho is right. Still, by the time hierarchy theorem there are problems in P which require time $n^k$ for arbitrary fixed $k$, and we can obtain such problems explicitly, though they might not be very natural. When people say that we don't have superlinear lower bounds, they mean we don't have such bounds for particular interesting problems like SAT. – Kaveh Feb 02 '13 at 14:55
  • @MartinJonáš I mean a 2-tape Turing machine. Kaveh has a point, the proof of the time hierarchy theorem does give polytime solvable problems with arbitrarily high complexity, but the examples are not exactly natural and don't feel very explicit. Also, no hierarchy is known for probabilistic time, so there we really have nothing. – Sasho Nikolov Feb 03 '13 at 02:14