3

After reading about the Curry-Howard correspondence, I came back to a question commonly asked by those of us less versed in computability theory and logic: the connection between uncomputability in foundational computer science and incompleteness in formal systems. To examine this in a more formal manner, I wonder: is it possible to explicitly construct, either in first/second/whatever-order logic or in a made-up formal system, a statement which encodes the halting problem and is, as a consequence, incomplete (that is, either independent or true but unprovable)?

It's an unfortunate fact that the above question is deeply vague (if I knew how to formulate it precisely, I would have done so). Here are some sketches of possible ways of doing what I describe above:

  • A formal system where one (or more) of the rules for getting from a theorem $A$ to a new theorem $B$ essentially encodes performing one step of a Turing machine's computation on its tape (perhaps in SKI-combinator form). The undecidable statement here would then be something to do with determining whether, for each such statement, there exists a finite number $N$ bounding the length of a proof of it (the number of derivations required to reach it from the axioms).

  • Perhaps a formal system which describes numbers and, in particular, the concept of busy beaver numbers. The incomplete statement would then come from statements about whether one can prove that a given number $n$ is a busy beaver number (or an upper bound for one, or a lower bound, or something similar).

Is it possible to formulate this more rigorously, or is my question so vague that there is nothing meaningful here? If it is possible to formulate more rigorously, are there some examples of "solutions"?

Isky Mathews
  • 3,235
  • 11
  • 25

2 Answers

2

Yes. The "arithmetization" techniques developed by Gödel for the incompleteness theorem allows us to write just about any claim Turing machines as a first-order statement in the language of arithmetic. Gödel showed it suffices to have the constants $0$, $1$, operations $+$, $\times$, the relation $=$, and quantification over the natural numbers.

In particular, "such-and-such Turing machine terminates on such-and-such input" can be encoded as an arithmetical sentence in a systematic way, and we can then search for a proof of either it or its negation in our favorite proof system for arithmetical statements. Since it is easy to search for proofs (just enumerate all possible symbol strings and check whether each of them is a valid proof of what you're looking for), such a search would give an algorithm for the halting problem unless some of those sentences are undecidable.

The central trick in the construction is the beta function, which allows you to express quantification over arbitrarily long finite sequences of natural numbers using a finite number of quantifiers over single natural numbers. Once you can quantify over such sequences, expressing a Turing machine computation is as easy as:

There exist sequences $(s_i)_{0\le i\le n}$, $(L_i)_{0\le i\le n}$, $(R_i)_{0\le i\le n}$ (intuitively encoding the state of the Turing machine at step $i$ of the computation, and the contents of the tape to the left and right of the read head at step $i$) such that for each $i<n$, the numbers $(s_i,L_i,R_i,s_{i+1},L_{i+1},R_{i+1})$ stand in a certain relation to each other, and $s_n$ is a halting state.

Representing tapes as numbers is easy enough; just use base-$b$ representation for a $b$ large enough to contain the tape alphabet.
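For reference, one standard presentation of the beta function (details vary by textbook) is
$$\beta(a,b,i) \;=\; a \bmod \bigl(1 + (i+1)\,b\bigr),$$
together with the lemma, proved via the Chinese remainder theorem, that for every finite sequence $s_0,\dots,s_n$ of natural numbers there exist $a$ and $b$ with $\beta(a,b,i) = s_i$ for all $i \le n$. Since $\beta$ is definable from $+$, $\times$ and $=$, a quantifier over finite sequences can be replaced by two quantifiers over single numbers.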

0

I had a similar question; here is what I went through:

Use a (recursively enumerable) axiomatic theory T to create a partial solution to the halting problem:

StopT(p) {   // p is the source code of a zero-argument program in whatever programming language you like

    // T is recursively enumerable, so its theorems can be enumerated one by one.
    for each theorem t of T {
        if (t proves "p terminates")        return 1
        if (t proves "p doesn't terminate") return 0
    }
    // If T proves neither statement, the loop runs forever and StopT(p) never returns.
}

"t proves p terminates" means we chose an encoding in T allowing to execute programs of our programming language. For example in PA, we will encode the states of p (how the memory evolves instruction after instruction) into an integer sequence $a_0 = g(p), a_{n+1} = f(a_n)$, and the statement "p terminates" will be $\exists n, a_n = 0$.

Then look at this program:

ParadoxeT() {
    if (StopT(ParadoxeT) == 1) {   // T proves that ParadoxeT terminates...
        loop forever               // ...so do the opposite and never halt
    } else {                       // T proves that ParadoxeT doesn't terminate...
        return                     // ...so halt immediately
    }
}

When running ParadoxeT, we see that whenever StopT(ParadoxeT) returns a value, that value must be wrong: if it returns 1, ParadoxeT loops forever, and if it returns 0, ParadoxeT terminates.

If T is consistent, then StopT never outputs an incorrect value, so StopT(ParadoxeT) doesn't terminate, and hence the termination of ParadoxeT is undecidable in T.

This last statement can be turned into a theorem of ZFC. An easier version is this one:

If ZFC proves that whenever StopT outputs a value, that value is correct, then ZFC proves that StopT(ParadoxeT) doesn't terminate.

What do we get with T = ZFC? That if ZFC proves that StopZFC never outputs incorrect values, then it proves that StopZFC(ParadoxeZFC) doesn't terminate, so it proves that ParadoxeZFC doesn't terminate; thus StopZFC(ParadoxeZFC) returns 0, which is absurd.

reuns
  • 77,999
  • Note that the premise "If ZFC proves that whenever StopT outputs a value, that value is correct" essentially requires ZFC to prove that $T$ is consistent. So either we have to restrict ourselves to weak $T$ that we can prove consistent, or we need to add an explicit assumption that it is consistent. – hmakholm left over Monica Dec 26 '17 at 14:41
  • @HenningMakholm What I was thinking is that for a fixed T, we can easily formalize its consistency in ZFC (and try to prove it), but if we want to look at the general statement, it is much easier to quantify over programs StopT than over axiomatic theories T – reuns Dec 27 '17 at 03:21