
Summary: I understand the proof Halmos gives in his Naive Set Theory for the recursion theorem, but I don't understand why he has to do it that way. I give an alternative proof which may be flawed. If so, I want to understand why it's flawed.

The context is the following discussion by Halmos in Section 12 (The Peano Axioms).

Induction is often used not only to prove things but also to define things. Suppose, to be specific, that $f$ is a function from a set $X$ into the same set $X$, and suppose that $a$ is an element of $X$. It seems natural to try to define an infinite sequence $\{u(n)\}$ of elements of $X$ (that is, a function $u$ from $\omega$ to $X$) in some such way as this: write $u(0) = a,\ u(1) = f(u(0)),\ u(2) = f(u(1))$, and so on. If the would-be definer were pressed to explain the "and so on," he might lean on induction. What it all means, he might say, is that we define $u(0) = a$, and then, inductively, we define $u(n^+)$ as $f(u(n))$ for every $n$. This may sound plausible, but as justification for an existential assertion, it is insufficient. The principle of mathematical induction does indeed prove, easily, that there can be at most one function satisfying all the stated conditions, but it does not establish the existence of such a function.

OK, fair enough. This argument is circular in that it uses the function $u$ in order to define $u$. Halmos then proves a theorem which does establish the existence of such a function:

Recursion theorem. If $a$ is an element of a set $X$, and if $f$ is a function from $X$ into $X$, then there exists a function $u$ from $\omega$ into $X$ such that $u(0) = a$ and such that $u(n^+) = f(u(n))$ for all $n \in \omega$.

He proves this by considering the class $\mathcal{C}$ of all subsets $A$ of $\omega \times X$ such that $(0,a) \in A$ and for which $(n^+, f(x)) \in A$ whenever $(n,x) \in A$. He notes that the class is not vacuous, because $\omega \times X$ itself satisfies the conditions. He then forms the intersection of all elements of $\mathcal{C}$ and proves that it is a function with the desired properties.
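
In symbols, as I read it, the construction is

$$\mathcal{C} = \{\, A \subseteq \omega \times X : (0,a) \in A \ \text{and}\ \big((n,x) \in A \Rightarrow (n^+, f(x)) \in A\big) \,\}, \qquad u = \bigcap_{A \in \mathcal{C}} A,$$

so $u$ is the smallest subset of $\omega \times X$ satisfying the two closure conditions, and the substance of the proof is checking that this intersection really is a function from $\omega$ to $X$.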

I have no problem understanding the proof, but I am trying to understand why it's necessary to do it the way he did. He uses what one might call a "top-down" approach, starting with all of $\omega \times X$, and reducing it to a subset which is the desired function.

Would not the following "bottom-up" approach work? To me it just seems like a more pedantic version of the flawed induction argument, so I'm guessing that this is also wrong. But I don't see what the problem is.

Possibly flawed proof:


For each $n \in \omega$, define $U_n = \{(n, f^n(a))\} \subseteq \omega \times X$, where $f^n$ denotes the $n$-fold composition of $f$. My convention (and that of Halmos) is that $0 \in \omega$, and by $f^0(a)$ I mean simply $a$.

Define $U = \bigcup_{n=0}^{\infty}U_n$. Clearly $U \subseteq \omega \times X$. To prove that $U$ is a function from $\omega$ to $X$, we need to verify that for each $n \in \omega$, there is exactly one element of $U$ with $n$ in the first slot. But this is immediately clear from the construction of $U$.

It remains to verify that $U$ satisfies the indicated recursion. Since $U_0 = \{(0, a)\} \subseteq U$, we have $U(0) = a$. Moreover, given any $n \in \omega$, we have $U_n = \{(n, f^n(a))\} \subseteq U$, so $U(n) = f^n(a)$, and $U_{n^+} = U_{n+1} = \{(n+1, f^{n+1}(a))\} \subseteq U$, so $U(n+1) = f^{n+1}(a)$. But the latter is simply $f(f^n(a)) = f(U(n))$, and so $U$ satisfies the recursion.


The following question is related to, but not the same as, mine: Definition by Recursion: why is the existence part not (almost) obvious? I understand why the method proposed in that question is flawed due to circular reasoning.

1 Answer


If you look closely, you'll see that you're defining the sequence of sets $U_n$ recursively: you can't get $U_{n+1}$ without having $f^{n+1}(a)$, which requires that you have $f^n(a)$ -- and that's equivalent to having $U_n$. In essence you're assuming that there is a function $\varphi$ from $\omega$ to $\omega\times X$ such that $\varphi(n)=\{\langle n,f^n(a)\rangle\}$ for each $n\in\omega$; but that's pretty clearly equivalent to assuming the existence of the function $U$ whose existence you're trying to prove.
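
Spelled out, the iterates you are relying on are themselves given by exactly the kind of recursion whose legitimacy is at issue:

$$f^0(a) = a, \qquad f^{n^+}(a) = f\big(f^n(a)\big) \quad \text{for every } n \in \omega.$$

Without the recursion theorem (or an equivalent argument), nothing yet guarantees that a function $n \mapsto f^n(a)$ with these properties exists.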

Brian M. Scott
  • I'm not seeing the recursion. Each $U_n$ is a singleton with the element $(n, f^n(a))$. Clearly I don't need recursion to put $n$ in the first slot, so the problem must be with $f^n(a)$. But isn't $f^n(a)$ a perfectly well-defined element of $X$? How does its existence depend on whether I have previously defined $U_k$ for $k < n$? –  Sep 04 '15 at 06:13
  • @Bungo: You can't get $f^{n+1}(a)$ without first having $f^n(a)$, which amounts to having $U_n$. – Brian M. Scott Sep 04 '15 at 06:15
  • Hmm, I think I see the problem. To form the union $\bigcup_{n=0}^{\infty}U_n$, I need all of the $U_n$'s defined first, and that means I need $f^{n}(a)$ defined for all $n$, and that clearly is the same as having the sequence in hand. I was envisioning "adding one $U_n$ at a time to the union", but that's not how unions work: an element $x$ is in $\bigcup_{n=0}^{\infty}U_n$ if and only if it's in one of the $U_n$, and in order to decide whether a given $x$ is in the union, I need all of the $U_n$ up front. –  Sep 04 '15 at 06:23
  • @Bungo: Pretty much, yes. – Brian M. Scott Sep 04 '15 at 06:36
  • Cool, thanks. This stuff is more subtle than I thought at first glance. To me it seemed that the main flaw in the induction argument described by Halmos was that $u(n+1)$ was defined explicitly in terms of $u(n)$, and that after removing this explicit dependence, the induction argument should be basically sound. But I guess the real issue is that while induction is powerful enough to define the finite set $\{u(0),\ldots,u(n)\}$ for any given $n$, it is not powerful enough to generate the infinite set $\{u(n) : n \in \omega\}$. –  Sep 04 '15 at 06:43
  • @Bungo: Exactly. It's the old business of being able to do any given finite number of things, but not necessarily an infinite sequence of them. It comes up in dealing with the axiom of choice, too, for instance. You're welcome! – Brian M. Scott Sep 04 '15 at 06:46
  • I read the corresponding section in Hrbacek and Jech, Introduction to Set Theory. I was surprised to see that they use a "bottom-up" construction like the one I proposed. They define an $m$-step computation as being an $m$-length sequence (embedded in the obvious way in $\omega \times X$) satisfying the recursion, and they define $f = \bigcup\{t \subseteq \omega \times X : t\ \text{is an $m$-step computation}\}$. They then prove that $f$ is a function $\omega \to X$, satisfying the recursion, and that $f$ is the unique such function. Pretty much what I proposed. So now I'm really confused... –  Sep 05 '15 at 00:23
  • @Bungo: Sorry; I should have said something about that, but I’d gone to bed and was typing on a Kindle, which is not conducive to the clearest thought. Yes, you can make your argument legitimate by adding to it one crucial step, but you have to make the step explicit: you have to show that $U_n$ exists for each $n\in\omega$. This is parallel to what H&J do at the top of p. 49 when they prove that the domain of their $f$ is $N$ (a sketch of the repaired construction appears after these comments). – Brian M. Scott Sep 05 '15 at 00:31
  • Thanks, I'm going to re-read the section carefully so I can see exactly where the gaps in my argument were. But it's intuitively satisfying to know that both the top-down and bottom-up methods can be made to work. I also am not fully satisfied that I understand why the recursion theorem is needed in cases where a closed form solution to the recurrence is available (e.g. $u_n = n!$ or $u_n = 2^n$). Difficulty of computation for a given $n$ doesn't seem to be a barrier preventing us from defining sequences such as $u_n = \sin(n)$ or $u_n = $ the largest prime factor of $n$, for example. –  Sep 05 '15 at 00:35
  • But I guess that's a subject for another question :-) –  Sep 05 '15 at 00:35
  • @Bungo: The short answer is that $n!$ isn’t a closed form until you know that the factorial function exists. Essentially the same goes for $2^n$ when you’re working over $\omega$. – Brian M. Scott Sep 05 '15 at 00:43
  • Neither is $\sin(n)$, though. In fact, $\sin$ is worse because it requires infinitely many steps (i.e., summing the series) to calculate for even one $n$, and there is no recurrence that I can think of offhand. And $n!$ and $2^n$ don't have to be calculated recursively. We can use iteration instead. Just as $u(n) = n$ itself can be expressed as the recurrence $u(n+1) = u(n) + 1$, but surely we don't need the recursion theorem to define $u(n) = n$? Anyway, I guess the point is moot because we're not adding a new axiom. The recursion theorem is proved from the axioms we have... –  Sep 05 '15 at 00:48
  • @Bungo: But with $\sin n$ you’re no longer dealing with $\omega$; you’re in the realm of the continuous, and I’d have to think about the formal machinery actually required. Iteration basically is just finite recursion, and then you need the recursion theorem to get the function on $\omega$. Yes, you can define $\{\langle n,n\rangle\in\omega\times\omega:n\in\omega\}$ without recursion. – Brian M. Scott Sep 05 '15 at 01:21
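
For reference, here is a rough sketch of the repaired bottom-up construction discussed in the comments, along the lines of Hrbacek and Jech (my paraphrase, not their exact wording). I write $u$ for the resulting function (rather than their $f$) to avoid a clash with the given $f : X \to X$, and I take the domain of an $m$-step computation to be $m^+ = \{0, 1, \ldots, m\}$:

$$t \ \text{is an $m$-step computation} \iff t : m^+ \to X, \quad t(0) = a, \quad t(k^+) = f(t(k)) \ \text{for all } k < m.$$

One first proves, by induction on $m$, that for every $m \in \omega$ there is exactly one $m$-step computation (this is the step that was left implicit in the bottom-up argument above, since that unique computation is just $\{(k, f^k(a)) : k \le m\}$), and then sets

$$u = \bigcup \{\, t \in \mathcal{P}(\omega \times X) : t \ \text{is an $m$-step computation for some } m \in \omega \,\},$$

where the set being unioned over exists by specification applied to $\mathcal{P}(\omega \times X)$. Verifying that $u$ is a function from $\omega$ into $X$ with $u(0) = a$ and $u(n^+) = f(u(n))$ then proceeds much as in the question, but now resting on the proven existence and uniqueness of the computations rather than on the iterates $f^n(a)$ taken as given.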