In the chapter "Definitions" of *Type Theory and Formal Proof* by Nederpelt and Geuvers, the authors start with some motivating examples and then state (with my emphasis added)

[T]here is also a practical reason for introducing definitions: without definitions, logical or mathematical texts grow rapidly beyond reasonable bounds. This is an experimental fact, which can be verified by making a calculation for the ‘worst case scenario’; it has been shown that definition-less mathematics may obtain a complexity that is considerably worse than exponential growth.

Hence, in order to do logic and mathematics in a feasible way, we need definitions.

We conclude that it is very convenient, and almost inevitable, to introduce and use definitions.

The authors appear to be making three unsubstantiated claims about proof complexity when definitions (in the formal sense given in the chapter) are not allowed. I would like to understand how to justify them.

  1. "This is an experimental fact...". Are there indeed experiments to justify this, or is this more of an anecdote? What would an experimental justification of this claim look like?

  2. "...can be verified by making a calculation...". What would this calculation be? No hints are given in the text.

  3. "It has been shown...". Shown by whom? Are there works that could have been cited to back up this claim?

Answering my third point would be the most helpful.

user10108

1 Answer


Since it is difficult to talk about arbitrary formal systems, I shall restrict this post to FOL theories, but the same notions and phenomena carry over to any reasonable formal system. "Definitions" here refers to definitorial expansion (and see the comments there for a citation).

It turns out that although definitorial expansion is conservative (i.e. every sentence in the original language that is provable in the expanded theory is already provable in the original theory), there can be exponential proof blowup if you insist on working in the original theory.

To see why, consider adding predicate symbols $Q_0, \ldots, Q_m$, where each $Q_{k+1}$ is defined by a formula containing two occurrences of $Q_k$. Translating any formula involving $Q_m$ into the original language then yields a formula of length exponential in $m$.
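The doubling is easy to check concretely. Here is a small sketch (in Python) that expands such a chain of definitions into the base language and measures the resulting formula length; the particular choice $Q_0(x) := P(x)$ and $Q_{k+1}(x) := Q_k(x) \wedge Q_k(x)$ is an illustrative assumption, not the book's example:

```python
def expand(m: int) -> str:
    """Return Q_m fully expanded into the original language.

    Base case (assumed): Q_0(x) is the primitive formula P(x).
    Step (assumed): Q_{k+1}(x) is defined as Q_k(x) & Q_k(x),
    i.e. two occurrences of Q_k, so each expansion step roughly
    doubles the length of the formula.
    """
    if m == 0:
        return "P(x)"
    prev = expand(m - 1)
    return f"({prev} & {prev})"

# Length satisfies L(m+1) = 2*L(m) + 5, so it grows exponentially in m.
lengths = [len(expand(m)) for m in range(6)]
```

Note that this only demonstrates blowup in *formula* length; the blowup in *proof* length is the analogous phenomenon at the level of derivations.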

However, all the examples given in that chapter are covered by definitorial expansion, and there is certainly no more than exponential blowup. So I would say that it is wrong to claim "considerably worse than exponential growth" in complexity without these "definitions".

That said, once you reify definitions (i.e. add rules/axioms asserting that certain kinds of definitions are captured by objects), so that you can quantify over them, the 'efficacy' of the system can vastly outstrip the original even if the extension is conservative. One example of such reification is passing from PA to ACA$_0$ (see this MO post). But this does not justify the claim you cited, because it is an argument about a conservative second-order extension rather than definitorial expansion.

user21820