1

I am trying to get a better sense of what the recursion theorem entails (or doesn't entail). I understand the basic statement of the theorem (or at least I think I do), but I have difficulty seeing its applications, and I suspect this difficulty is not uncommon. I should also add that I know essentially nothing about the proof of the theorem (I learned it once, a long time ago, and have forgotten it completely). I will phrase my question in two parts.

First Part: The statement of the theorem that I am familiar with is that if $f:\mathbb{N} \rightarrow \mathbb{N}$ is any total computable function, then there exists a natural number $e \in \mathbb{N}$ such that $\phi_e \simeq \phi_{f(e)}$. As I understand it, we are assuming a reasonable bijection between programs (for our computational model) and $\mathbb{N}$. Further, the symbol $\simeq$ here indicates equality of (possibly partial) functions.

(Q1) The form of the theorem seems to be $\forall f \in \mathrm{R}\; \exists e \in \mathbb{N}\; [\phi_e \simeq \phi_{f(e)}]$, where I have used $\mathrm{R}$ for the set of all (total) recursive functions. My first question: is this the correct logical statement of the theorem, or is there some mistake and the theorem is saying something different?

Second Part: Now I want to come to the part that relates to the title of the question. For this, let me refer to the language that I described in Rigorous books on basic computability theory. The only reason for this is to phrase the question in a setting I am more familiar with (rather than any intrinsic reason). Also, to keep things as simple as possible, I want to focus on functions from $\mathbb{N}$ to $\mathbb{N}$ (instead of, say, $\mathbb{N}^k$ to $\mathbb{N}$). For easier reference I will recap it in the next two paragraphs (note that I have replaced $A0$ with $C0$):

We consider $C0$-programs. These have the following variables: (1) temporary variables $\mathrm{t0}, \mathrm{t1}, \mathrm{t2}, \dots$ (2) an input variable $\mathrm{x}$ (3) an output variable $\mathrm{y}$. All variables can only take on natural number values $\{0,1,2,3,\dots\}$.

$C0$-programs have the following four commands (where $v$ and $w$ can be any variables, i.e. input/temporary/output): (1) $\mathrm{v:=0}$ (2) $\mathrm{v:=v+1}$ (3) $\mathrm{while(v!=w)}$ (4) $\mathrm{End}$. Hopefully the commands are self-explanatory. The fourth command is used in place of brackets (to mark the end of a loop); if you are more comfortable with brackets, you could replace it with $\{$ and $\}$. If the $\mathrm{End}$ commands don't properly match the corresponding $\mathrm{while}$ commands, we declare the program syntactically incorrect. The command $\mathrm{while(v!=w)}$ can be read as $\mathrm{while}(v \neq w)$: it is just a check for non-equality (when equality is detected, the loop is exited). Also note that all variables except the input variable have the initial value $0$.
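(To make the intended semantics concrete, here is a small interpreter sketch in Python. The tuple encoding of commands and the name `run_c0` are my own illustrative choices, not part of the setup described above.)

```python
from collections import defaultdict

def run_c0(prog, x):
    """Interpret a C0-program given as a list of commands.
    Commands: ('zero', v), ('inc', v), ('while', v, w), ('end',).
    Returns the value of y on termination (may loop forever)."""
    # Match each 'while' with its closing 'end' (the syntactic-validity check).
    match, stack = {}, []
    for i, cmd in enumerate(prog):
        if cmd[0] == 'while':
            stack.append(i)
        elif cmd[0] == 'end':
            j = stack.pop()              # IndexError here = unmatched End
            match[j], match[i] = i, j
    assert not stack, "unmatched while"

    env = defaultdict(int)               # all variables start at 0 ...
    env['x'] = x                         # ... except the input variable
    pc = 0
    while pc < len(prog):
        cmd = prog[pc]
        if cmd[0] == 'zero':
            env[cmd[1]] = 0
        elif cmd[0] == 'inc':
            env[cmd[1]] += 1
        elif cmd[0] == 'while':
            if env[cmd[1]] == env[cmd[2]]:   # equality detected: exit loop
                pc = match[pc]
        elif cmd[0] == 'end':
            pc = match[pc] - 1               # jump back to re-test the while
        pc += 1
    return env['y']

# Example: y := x, by counting t0 up to x while incrementing y in step.
copy = [('while', 't0', 'x'), ('inc', 'y'), ('inc', 't0'), ('end',)]
```

For instance, `run_c0(copy, 5)` returns `5`.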

Now suppose we introduce an abstract model of computation which we call $C1$-programs. $C1$-programs are just about the same as $C0$-programs, but with one extra variable $\mathrm{r}$ (which we call the "special variable") that does not start with initial value $0$. The commands are the same as for $C0$-programs, except that $v$ and $w$ can now be any variables, i.e. input/temporary/output/special. Even syntactic validity (with regard to $\mathrm{End}$ commands, or alternatively brackets) is judged in the same way.

Once again, consider a reasonable bijection between all syntactically valid $C1$-programs and $\mathbb{N}$. All variables other than the input variable $\mathrm{x}$ and the special variable $\mathrm{r}$ can be assumed to be $0$ at the beginning of the program. However, we require that, at the beginning of the program, the variable $\mathrm{r}$ contains the index/number of the program itself (with respect to the bijection just described). In all other respects the program runs as one would expect.

Now consider the following statement:

$C1$-programs compute exactly the set of all partial computable functions

I am trying to understand how this statement compares to the actual theorem: (Q2a) firstly in the sense of logical implication, I suppose. There seem to be two versions of the recursion theorem (I tried to describe the first one at the beginning of the question), so I guess this question could be asked for both. (Q2b) In the sense of applying as a logical step in various results (because the theorem seems to be used in a lot of results).

Lastly, I would note that the question could also be phrased similarly for $\Sigma^*$ (for a suitable alphabet $\Sigma$) instead of $\mathbb{N}$. Thanks for reading this long question; I hope it makes some sense.

SSequence
  • 1,022

1 Answer

2

Terminological note: you're talking about the second recursion theorem in this question, or more specifically Rogers' simpler version of Kleene's second recursion theorem.

For Q1, yes; note in particular that the relation $\simeq$ of equality between possibly-partial functions is defined as $f\simeq g$ iff for each $x$, either $f(x)$ and $g(x)$ are both undefined or they are both defined and equal.

For Q2, your description is a bit long, but it sounds like you're describing something equivalent to counter machines for $C0$, and counter machines equipped with their own 'code' (the $r$-variable) for $C1$. If so, then modulo small $C1$-implementation details (e.g. you didn't include a decrement command as basic in your setup, which might matter, although I don't think it does), the answer to your question is yes: giving a machine access to its own code doesn't change its computing power at all, and this is indeed a consequence of the recursion theorem. At worst you might undershoot, but as long as $C0$ is "rich enough" your statement will be true. Certainly every $C1$-computable function is computable in the usual sense!
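To see concretely why access to one's own code adds nothing, note that an ordinary program can already manufacture its own source by the quine trick that underlies Kleene's theorem. Here is an illustrative sketch in Python (the names `make_self_aware` and `f`, and the text-based program representation, are my own assumptions, not anything from the question): given a "$C1$-style" body that expects its own code in a parameter, it produces an ordinary self-contained program with the same behavior.

```python
def make_self_aware(body_src, fname):
    """Given source defining a function fname(code, x) that may inspect
    its own program text via 'code', return an ordinary program text
    with the same behavior: the program rebuilds its own source from a
    template (the standard quine construction)."""
    template = (
        "body_src = %r\n"
        "exec(body_src)\n"
        "template = %r\n"
        "code = template %% (body_src, template)\n"   # code == this program
        "result = " + fname + "(code, x)\n"
    )
    return template % (body_src, template)

# A C1-style behavior: report the length of the program's own source, plus x.
body = "def f(code, x):\n    return len(code) + x\n"
prog = make_self_aware(body, "f")

env = {"x": 0}
exec(prog, env)
# The program really did reconstruct its own text:
assert env["result"] == len(prog)
```

Nothing here needs a special variable pre-loaded with the program's index; the self-reference is synthesized from ordinary computation, which is the intuition for why $C1$ does not exceed $C0$ in power.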

Noah Schweber
  • 245,398
  • $C0$-programs do compute all the partial computable functions. Though one can add the decrement command (since it doesn't really affect the question in any way). A few questions though: (1) If this is the second recursion theorem, then is there a first one too? Any reference for it? (2) Secondly, taking the equivalence of $C1$-programs with partial computable functions as given (just as a hypothesis), could one derive the statement of the recursion theorem (such as the one in the OP)? – SSequence Jan 18 '24 at 19:23
  • @SSequence Re: (1), yes, look at the wiki page for "Recursion theorem" (linked at the top of my answer). Re: (2), it's hard to give a negative answer to this since you could always just ignore the hypothesis and prove the recursion theorem as usual, but I don't see a "natural" way to do it. – Noah Schweber Jan 18 '24 at 19:40
  • Yes of course for (2) I did mean some (perhaps obvious) way that "uses" $C1$-programs. – SSequence Jan 18 '24 at 20:06