
I have the following informally stated and weakly held beliefs, some of which seem inconsistent to me upon further reflection. I'm wondering where the source of the error(s) in my thinking might be; errors in basic definitions are a definite possibility.

  1. It is impossible to do quantifier elimination in the first-order theory of the integers with addition and multiplication. (This is, as far as I can tell, a slightly stronger version of the first incompleteness theorem.)

  2. In the first-order theory of the integers with addition and multiplication, it's possible to define a primitive recursive predicate for exponentiation. (By a predicate for exponentiation, I just mean something that behaves like "$Fabc\text{ just when }a^b = c.$")

  3. It is possible to do quantifier elimination in the first-order theory of the integers with two operations $a \oplus b = \min(a, b)$ and $a \otimes b = a + b$ (i.e., ordinary addition of integers). I'm aware that we also need divisibility predicates and multiplication operators for the primes to actually do quantifier elimination. (A small worked example of what I mean follows this list.)

  4. In the first-order theory of the integers with the operations $\oplus$ and $\otimes$, it's possible to define a primitive recursive predicate for multiplication (in almost exactly the same way as the predicate for exponentiation above).
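
To illustrate what I have in mind in (3): over the integers with $\oplus$ and $\otimes$ alone, the formula $\exists x\,(x \otimes x = y)$, i.e. $\exists x\,(x + x = y)$, has no quantifier-free equivalent, but once a divisibility predicate is available it reduces to the atomic statement $2 \mid y$. (This is the standard Presburger-arithmetic example; I mention it only to clarify why the extra predicates in (3) seem to be needed.)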


Roughly speaking, it seems like there's a breakdown in the analogy between the "ordinary tower of operations" $(+, \times, \hat{\phantom{n}}, \cdots)$ and the "tropical tower of operations" $(\min, +, \times, \cdots)$.

More specifically, if (3) and (4) are both true, I don't understand why one can't just freely use the multiplication predicate, yielding a situation where we can both do quantifier elimination (via (3)) and not do quantifier elimination (via (1)). It would very much surprise me if (2) were true but (4) were not, and it would surprise me even more if (2) were false.

I suspect that I'm not quite understanding what is meant by an exponentiation predicate (i.e., my informal definition of $Fabc$ is incorrect), or else there is some detail regarding "freely using the multiplication predicate" that I am not aware of.

Thurmond
    Primitive recursive definitions can be expressed by first-order formulas if you have addition and multiplication available, but not if you have just $\oplus$ and $\otimes$. – Andreas Blass Aug 18 '20 at 18:46
  • Hi @AndreasBlass, that's certainly not one of the failures I was expecting. Can you expand on this a little bit? – Thurmond Aug 18 '20 at 18:51

1 Answer


Your claims $(1), (2)$, and $(3)$ are each correct. Claim $(4)$, however, is incorrect; indeed, if multiplication were definable over $(\mathbb{N};\max,+)$ then the theory $Th(\mathbb{N};\max,+)$ would be as complicated as $Th(\mathbb{N};+,\times)$. But the former is recursive while the latter is not even arithmetically definable.

The issue is that the "obvious" definition of multiplication in terms of addition is not actually first-order: recursive definitions are not a priori something first-order logic can do. In sufficiently rich structures we can find ways to perform recursive definitions in a first-order way, and indeed it's the richness of $Th(\mathbb{N};+,\times)$ in this sense which makes Gödel's theorem possible, but addition alone isn't powerful enough to make this work. The key is that if we have both addition and multiplication we can "code" finite sequences of naturals by individual naturals (e.g. via the $\beta$ function) and so talk about recursive constructions by talking about the sequences coding their "step-by-step behaviors," but with addition alone we can't even code pairs of numbers by individual numbers.
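
To make the sequence-coding point concrete, here is a small Python sketch of Gödel's $\beta$ function (purely illustrative; the helper names `beta`, `crt`, and `code` are my own). The thing to notice is that `beta` uses nothing but remainder arithmetic, which is first-order definable from $+$ and $\times$, yet the classical $b = m!$ construction lets a single pair of numbers code an arbitrary finite sequence:

```python
from math import factorial

def beta(a: int, b: int, i: int) -> int:
    # Goedel's beta function.  Remainder is first-order definable from
    # + and *, so "beta(a, b, i) = x" is expressible over (N; +, *).
    return a % (b * (i + 1) + 1)

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise coprime moduli.
    a, m = 0, 1
    for r, n in zip(residues, moduli):
        t = ((r - a) * pow(m, -1, n)) % n  # Python 3.8+: modular inverse
        a, m = a + m * t, m * n
    return a

def code(seq):
    # Classical construction: with b = m! for m large enough, the moduli
    # b*(i+1)+1 are pairwise coprime and exceed every term of seq, so by
    # CRT a single number a codes the whole sequence.
    m = max(len(seq), max(seq, default=0)) + 1
    b = factorial(m)
    return crt(seq, [b * (i + 1) + 1 for i in range(len(seq))]), b

# Round trip: one pair (a, b) codes the finite sequence 3, 9, 27, 81.
a, b = code([3, 9, 27, 81])
assert [beta(a, b, i) for i in range(4)] == [3, 9, 27, 81]
```

The heavy machinery (factorials, the Chinese remainder theorem) is needed only to *find* a code; *checking* a proposed code against a sequence requires nothing beyond remainder arithmetic, and that checking is what a first-order formula can express.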

Elaborating on that coding idea and getting back to your claim $(2)$, here's an outline of how to define exponentiation using addition and multiplication in a first-order way:

We have $a^b=c$ iff there is some number which, when interpreted as a sequence, has length $b$, first term $a$, last term $c$, and $(i+1)$th term equal to $a$ times the $i$th term.

Note that this is an "all at once" definition rather than a definition by a "recursive process:" modulo the details of coding finite sequences by numbers, it just involves quantifying over individual numbers and checking basic properties, which is exactly what first-order logic can do. Without the ability to code finite sequences as individual numbers in a first-order way - which $(\mathbb{N};\max,+)$ lacks - we would be stuck with the usual non-first-order definition.
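
Again purely as an illustration (the name `is_exp_witness` is my own, and I use a pair $(u, v)$ in place of the single coding number), the "all at once" definition transcribes almost line-for-line into basic arithmetic checks:

```python
def beta(a: int, b: int, i: int) -> int:
    # Goedel's beta function, as in the previous sketch.
    return a % (b * (i + 1) + 1)

def is_exp_witness(a: int, b: int, c: int, u: int, v: int) -> bool:
    # Does the pair (u, v) code a sequence of length b with first term a,
    # last term c, and each later term equal to a times the one before it?
    # Every clause is a plain arithmetic check -- exactly the shape a
    # first-order formula over (N; +, *) can express.
    return (
        b >= 1
        and beta(u, v, 0) == a
        and beta(u, v, b - 1) == c
        and all(beta(u, v, i + 1) == a * beta(u, v, i) for i in range(b - 1))
    )

# Toy check that a witness for 2^3 = 8 exists: with v = 6 the moduli
# 7, 13, 19 are pairwise coprime, so some u < 7*13*19 codes (2, 4, 8).
u = next(u for u in range(7 * 13 * 19) if is_exp_witness(2, 3, 8, u, 6))
print(u, [beta(u, 6, i) for i in range(3)])  # 1395 [2, 4, 8]
```

Roughly speaking, "$a^b=c$" then becomes "there exist $u$ and $v$ witnessing $a^b=c$," with the existential quantifiers ranging over ordinary numbers; once the $\beta$-function clauses are spelled out with $+$ and $\times$, this is a genuine first-order formula.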

  • As an aside, it's important that this is a "verifiable definition:" in the theory $\mathsf{Q}$, which is a tiny fragment of the full theory $Th(\mathbb{N};+,\times)$, we have that for each $a,b,c$ the sentence abbreviated by $$\underline{a}^{\underline{b}}=\underline{c}$$ (where $\underline{k}$ is the numeral standing for the natural number $k$) is provable in $\mathsf{Q}$ if $a^b=c$ and is disprovable in $\mathsf{Q}$ if $a^b\not=c$. This is called representability, and is one of the key ideas of Gödel's proof; in fact, every recursive function is representable.
Noah Schweber
  • Noah, thank you very much for the explanation. I see now that I hadn't really thought about the detailed structure of what a "first-order predicate for exponentiation" might look like, and it makes sense that you need this sequence-encoding property which $(\mathbb{N},\min,+)$ lacks to encode the guts of the calculation somewhere. – Thurmond Aug 19 '20 at 02:45