31

I am reading a Set Theory book by Kunen. He presents first-order logic and claims that if a set of sentences is inconsistent, then it proves every possible sentence. Since he does not explicitly specify the inference rules, I became curious as to how fundamental this property of inconsistent systems is.

So my question is: what is the simplest proof, using the fewest assumptions, of the vague claim that "inconsistent systems can prove anything"? In particular, I'm interested in the assumptions about the system needed to prove this. Is it true only for first-order logic? Only for first-order logic with the "standard" rules of inference (modus ponens and GEN)? Or is it such a basic truth that it can be proved for every "reasonable" proof system (and what is "reasonable")?

Gadi A
  • 19,265
  • 3
    This property is sometimes called the "principle of explosion". It holds in many deductive systems, including the standard systems for both classical and intuitionistic first-order logic. An example of a system weak enough that it doesn't have the principle of explosion is "minimal logic", which is usually only discussed in the context of proof theory. See http://en.wikipedia.org/wiki/Minimal_logic . There are also many "paraconsistent logics" that are designed explicitly to avoid the principle of explosion. – Carl Mummert Sep 27 '10 at 13:28
  • 2
    It's good to keep in mind that, because every first-order structure satisfies the axiom scheme that represents the principle of explosion, any deductive system that is complete for classical first-order semantics is going to have to prove this principle one way or another. – Carl Mummert Sep 27 '10 at 13:29
  • 1
    @Carl: Minimal logic is a funny beast. It fails to establish the principle of explosion only by lacking enough propositions to express the notion of contradiction at all. One can say that it is vacuously explosive: there are no counterexamples, and indeed intuitionistic logic, which is a conservative extension, does satisfy the principle of explosion. Relevance logic is a much better example of a paraconsistent logic. Note, you can formulate a "classical minimal logic" using Peirce's rule, which can't express contradictions and which is conservatively extended by classical logic. – Charles Stewart Sep 29 '10 at 07:53
  • I agree that relevance logic and (other) paraconsistent logics are good examples. For minimal logic, we have a terminological difference. I was thinking of a system that does include the symbol $\bot$, and so is able to express a contradiction in some sense, but does not have any rules of inference pertaining to $\bot$, and in particular is unable to prove the scheme $\bot \to \phi$. Since the system I call intuitionistic logic adds previously unprovable axioms to minimal logic (including that axiom scheme), it's not a conservative extension. – Carl Mummert Sep 29 '10 at 11:53
  • @Carl: But what is this symbol '⊥' if you don't specify its meaning via axioms? It's a symbol that happens not to be provable, but the logic wouldn't mind there being a proof for it. Isn't it just a different glyph for a propositional variable? – Charles Stewart Sep 29 '10 at 12:03
  • I agree. This leads to the question of what it means to say that something is expressible in some formal system. I usually follow the school that "express" refers to the standard interpretation, rather than to the actual axioms of the theory. Of course that only captures part of the story, especially in this case, which is why I included "in some sense" in my last response. However, if we ignore this interesting philosophical issue, there's also a mathematical issue: if I include $\bot$ as a distinguished symbol in minimal logic, intuitionistic logic won't be a conservative extension. – Carl Mummert Sep 29 '10 at 13:19
  • 1
    @Carl: Right, because you can't extend if you share the same language. But all theorems of intuitionistic logic without ⊥ are theorems of this minimal logic. The more interesting question is whether you can extend minimal logic so that it can express the notion of contradiction paraconsistently. – Charles Stewart Sep 29 '10 at 15:23
  • 1
    @CharlesStewart I realize this is an old question, but I've recently been reading Dag Prawitz's Natural Deduction in which he includes a discussion of minimal logic. Therein, $\lnot A$ is defined as an abbreviation for $A \to \bot$. Although the resulting system doesn't have $\bot \to \phi$ as a theorem, it still has negation introduction (assume $A$, derive $\bot$, therefore $A \to \bot$ (which is $\lnot A$)). Minimal logic is interesting in that it still has some of the structure of negation, e.g., elimination for negations $\lnot\lnot\lnot A \vdash \lnot A$, and double-negation introduction. – Joshua Taylor May 02 '13 at 20:43
  • @Joshua: All the theorems and valid inference rules involving $\perp$ are still theorems/ valid rules if you substitute that symbol by any other proposition: there is no difference between that symbol and a schematic variable. – Charles Stewart May 03 '13 at 05:38
  • @CharlesStewart Absolutely right; in minimal logic by itself, the choice of $\bot$ is arbitrary, since it doesn't have the usual properties of $\bot$. But the interesting results about the relationship between classical, intuitionistic, and minimal logic (e.g., that classical and intuitionistic logic are interpretable with respect to derivability in minimal logic by a double-negation translation) are more elegant to express if all the systems have a symbol $\bot$, and abbreviate $A \to \bot$ by $\lnot A$. – Joshua Taylor May 03 '13 at 12:50

4 Answers

16

If $T$ is an inconsistent set of first-order sentences (or axioms, for the purposes of this proof), then for some $\alpha$ it is possible to prove from $T$ both $\alpha$ and $\neg \alpha$. So without loss of generality we can assume that $T$ includes $\alpha$ and $\neg\alpha$.

Now suppose that $\beta$ is any first-order sentence that you want to prove.

  1. $\alpha$ (in $T$)
  2. $\beta \to \alpha$ (easily verified to be true since $\alpha$ is an axiom of $T$)
  3. $\neg \alpha \to \neg \beta$ (Contrapositive Law)
  4. $\neg \alpha$ (axiom of $T$)
  5. $\neg \beta$ (inferred from 3 & 4)
  6. $\neg \beta \to \alpha$ (holds for the same reason as 2; furthermore we have $\neg\beta$ from 5)
  7. $\neg \alpha \to \neg\neg\beta$ (Contrapositive Law, from 6)
  8. $\neg \neg \beta$ (inferred from 4,7)
  9. $\neg \neg \beta \to \beta$ (tautology)
  10. $\beta$ (inferred from 8,9)

So you see, you can prove pretty much anything you want from $\{\alpha, \neg\alpha\}$ for some first-order sentence $\alpha$.
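As a mechanical sanity check (not part of the original proof), each propositional scheme invoked in the steps above can be verified to be a classical tautology by truth tables. A minimal Python sketch, where `is_tautology` is a helper defined here for illustration:

```python
from itertools import product

def is_tautology(f):
    """Check that a two-variable propositional formula, given as a
    function of booleans, holds under every classical valuation."""
    return all(f(a, b) for a, b in product([True, False], repeat=2))

def implies(p, q):
    return (not p) or q

# K axiom, used implicitly in steps 2 and 6:  a -> (b -> a)
assert is_tautology(lambda a, b: implies(a, implies(b, a)))
# Contrapositive Law, used in steps 3 and 7:  (b -> a) -> (~a -> ~b)
assert is_tautology(lambda a, b: implies(implies(b, a), implies(not a, not b)))
# Double negation elimination, step 9:  ~~b -> b
assert is_tautology(lambda a, b: implies(not (not b), b))
print("all schemes verified")
```

Of course, appealing to truth tables presumes classical semantics; as discussed in the comments, step 9 is exactly the point where the proof leaves intuitionistic logic.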

Asaf Karagila
  • 393,674
  • 2
    You assume here that A -> B implies not A -> not B; is this an inference rule? An axiom? Also, I think that using not (not A) -> A you can stop at step 5 (start with not B instead of B). – Gadi A Sep 27 '10 at 13:39
  • It is a first-order axiom, as for the length of the proof I know it can be shortened. I just provided you with one. – Asaf Karagila Sep 27 '10 at 13:43
  • @Gadi A: I think you misread the order of some of the implications. Originally, I missed the double-negation in step 7 and thought this proof was intuitionistically valid. The principle of explosion itself is intuitionistically sound, but apparently this particular proof does require some amount of classical logic. – Carl Mummert Sep 27 '10 at 14:06
  • It seems the proof was edited since I wrote my comment. At first there was not double negation at 7. – Gadi A Sep 27 '10 at 17:04
  • @Gadi: You can see the revision history on http://math.stackexchange.com/posts/5566/revisions and yes, I have corrected the proof. – Asaf Karagila Sep 27 '10 at 17:06
  • 1
    @GadiA I don't understand why you accepted Stewart's answer instead of this: Asaf answered exactly in the context you asked, Stewart's answer is a nice one, but it is more a curiosity than something that answers what you asked. – Red Banana Dec 12 '16 at 07:21
11

It doesn't have to: logics which don't are called paraconsistent.

The most important paraconsistent logic is relevance logic, which repudiates the K axiom: $$\alpha \rightarrow (\beta \rightarrow \alpha)$$ and replaces it by axioms that do not allow there to be unused assumptions. This is equivalent to rejecting weakening, the principle that if $\Gamma \vdash \alpha$ then $\Gamma'\vdash \alpha$ for $\Gamma\subset\Gamma'$. This blocks derivations such as Weltschmerz's, which appeals to the K axiom once, and Asaf's, which uses it twice; Francesco appeals to monotonicity in his proof, which is another name for weakening.

It's not difficult to see that this also blocks proofs of everything from a contradictory pair of propositions in a logic satisfying compactness, since one can prove inductively about such proof systems that if $\alpha\rightarrow\beta$ is provable, then all positive atoms in $\beta$ must occur either negatively in $\beta$ or positively in $\alpha$. So if our contradictory pair (over an assumption) takes the form $\alpha\rightarrow\beta$ and $\alpha\rightarrow\neg\beta$, we would need to prove for any $\gamma$ that $\alpha\rightarrow\gamma$. But if we choose $\gamma$ to be any positive atom not occurring in $\alpha$, our inductive proof tells us this cannot be done. We need compactness here to ensure that the basis for all contradictory pairs can be expressed by a finitary formula.
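The situation can be illustrated concretely (an informal sketch, not part of the answer): explosion in implication form is classically valid at every valuation, yet its antecedent and consequent can share no propositional atoms at all, which is exactly what relevance logic's variable-sharing requirement forbids.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# (a and not a) -> g holds under every classical valuation...
assert all(implies(a and (not a), g)
           for a, g in product([True, False], repeat=2))

# ...yet the antecedent mentions only atom "a" and the consequent only
# atom "g", so the two share no atoms: relevance logic rejects such an
# implication as a theorem.
antecedent_atoms = {"a"}
consequent_atoms = {"g"}
assert not (antecedent_atoms & consequent_atoms)
print("classically valid, but no shared atoms")
```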

  • 2
    α→β→α should read (α→(β→α)) or at least α→(β→α). In classical logic, for example, (α→(β→α)) is always true, while ((α→β)→α) is not always true. (α→(β→α)) is repudiated by relevance logic, while ((α→β)→α) doesn't even exist to get repudiated. – Doug Spoonwood Aug 27 '11 at 03:35
  • 3
    @Doug: To quote Carl Mummert's succinct phrasing, The usual convention among authors who drop parentheses is that the binary connectives are right-associative, which means that a→b→c means a→(b→c). You are quite right about (a→b)→a being a problematic scheme: it entails anything at all, even when added to intuitionistic relevance logic. – Charles Stewart Aug 27 '11 at 14:28
  • Do such authors who drop parentheses like that abandon the rule of replacement? – Doug Spoonwood Aug 27 '11 at 23:58
  • 1
    @Doug: Hilbert systems are more often, I think, formalised without replacement as a rule, with the greek variables showing axiom schema, rather than propositional variables. There's no problem with using implicit parentheses with replacement, though: you can introduce parentheses when substituting in formulae with connectives. – Charles Stewart Aug 28 '11 at 07:10
  • I don't see how the introduction of parentheses comes as justified when using the rule of substitution. How is the unparenthesized statement of the same category as the parenthesized statement (especially if the unparenthesized statement is not a wff, while the parenthesized statement is a wff)? If we insert parentheses when replacing a statement p by statement q, how is the parenthesized statement q' equiform with q? Doesn't the rule of replacement, to get used correctly, require mechanical replacement? – Doug Spoonwood Aug 28 '11 at 23:14
  • @Doug: You could ask that as a proper question. If you do, link to it from here. The easiest answer to your question involves equivalence classes of formula-representing strings. – Charles Stewart Aug 29 '11 at 07:54
  • @CharlesStewart: is the notion of algebraic data types (and proof trees) foreign to mathematicians? – Blaisorblade Apr 15 '14 at 07:21
  • 1
    @Blaisorblade - if you mean that abstract syntax can solve the issue with substitution, then I guess that most mathematicians have little idea of the work done in this field. – Charles Stewart Jun 24 '14 at 10:13
  • @CharlesStewart: yes, that's what I mean. I mentioned proof trees (as in natural deduction) because they are also ASTs (or values of inductive types), after seeing "proof" defined as "list of statements", with informal comments connecting each statement with the previous ones — as in Kleene's "Mathematical Logic" (1967). That seems especially ugly after you learn to encode derivations through inductive types in dependently typed languages. – Blaisorblade Jun 25 '14 at 23:04
  • How about adding the parentheses? The convention you're citing is obscure to many people who would be able to follow what you are saying immediately if you would add the parentheses. – Don Hatch Aug 01 '18 at 03:03
7

From my recollection, it goes something like this: if $K$ is a first-order inconsistent theory, there exists, by definition, a formula $C$ such that $\vdash_{K} C$ and $\vdash_{K}\neg C$. If $D$ is an arbitrary formula in $K$ then we have the following chain (proof):

  1. $C$
  2. $\neg C$
  3. $\neg C \Rightarrow (C\Rightarrow D)$ (Tautology)
  4. $C \Rightarrow D$ (2,3, Modus Ponens)
  5. $D$ (1,4, Modus Ponens).

Now, if inconsistency is defined the same way for logics of any other order, I think this proof wouldn't change. I don't see anything explicit about first-orderness in it, but correct me if I'm wrong.
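The tautology in step 3 can be checked mechanically; here is a minimal truth-table sketch (illustrative, not part of the original answer):

```python
def implies(p, q):
    return (not p) or q

# Step 3's scheme, ~C -> (C -> D), holds under all four valuations:
# whenever ~C is true, C is false, so C -> D is vacuously true.
for c in (True, False):
    for d in (True, False):
        assert implies(not c, implies(c, d))
print("~C -> (C -> D) is a tautology")
```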

Weltschmerz
  • 6,875
  • 1
    Your proof assumes our proof system uses the Tautology in 3 and Modus Ponens. I also recall this proof but wonder if there is something simpler with less assumptions about the proof system. – Gadi A Sep 27 '10 at 12:57
  • 1
    Yes, using a tautology means invoking the completeness theorem, for propositional calculus (no quantifiers) (if the first order theory contains, in particular, that calculus' axioms and the MP rule, any tautology in $K$ is also a theorem in $K$). So basically you can change step 3 for a chain of steps that uses the axioms of the propositional calculus, which would make it longer, but wouldn't resort to completeness. Again, I'm not sure. And I've no idea how that'd go without MP, which was necessary here (not GEN though). – Weltschmerz Sep 27 '10 at 13:05
  • how do you show 3 is a tautology? Or even better, how did you know it was a tautology and that you needed that? – Charlie Parker Oct 25 '18 at 03:31
  • In particular I am concerned that my system with axioms:
    1. T
    2. $\varphi \to (\varphi \lor \psi); \varphi \to (\psi \lor \varphi)$
    3. $\neg \varphi \to (\neg \psi \to \neg (\varphi \lor \psi))$
    4. $(\varphi \land \psi) \to \varphi; (\psi \land \varphi) \to \psi$
    5. $\varphi \to (\psi \to (\varphi \land \psi))$
    6. $(\varphi \to (\psi \to \theta)) \to ((\varphi \to \psi)\to (\varphi \to \theta))$
    7. $\varphi \to (\neg \varphi \to \bot)$
    8. $(\neg \varphi \to \bot) \to \varphi$

    doesn't have the property you mentioned...

    – Charlie Parker Oct 25 '18 at 03:32
  • Now I see why your answer doesn't help me. For me inconsistent means that our syntactic derivation rules prove $\bot$. But $\bot$ does not mean $p \land \neg p$. In my case $\Sigma \vdash p \land \neg p \to \Sigma \vdash \bot$, but the converse is not obvious (your proof doesn't work because I don't know that $p$ and $\neg p$ are provable; that's what I want to show if $\bot$ is provable in $\Sigma$). – Charlie Parker Oct 25 '18 at 03:39
2

If you assume natural deduction, this is easy to explain. Let $\Sigma$ be an inconsistent set of axioms. This means you can prove both $\phi$ and $\lnot\phi$ for some statement $\phi$: in symbols, $\Sigma\vdash\phi$ and $\Sigma\vdash\lnot\phi$. Now suppose the negation of whatever statement $\psi$ you would like to prove; since, by monotonicity, $\Sigma\cup\{\lnot\psi\}\vdash\phi$ and $\Sigma\cup\{\lnot\psi\}\vdash\lnot\phi$, you can apply negation elimination, obtaining $\Sigma\vdash\psi$.
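For comparison, the principle itself can be stated directly as a one-line derivation in a proof assistant; a minimal sketch (assuming Lean 4 syntax, where `absurd` combines applying $\lnot\phi$ to $\phi$ with $\bot$-elimination):

```lean
-- Illustrative: from proofs of φ and ¬φ, any ψ follows (explosion).
example (φ ψ : Prop) (h : φ) (hn : ¬φ) : ψ :=
  absurd h hn
```

Note this direct route is intuitionistically valid, whereas the detour through $\lnot\psi$ in the answer above needs classical negation elimination, as the comment below points out.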

  • 3
    Like Asaf's proof, this requires [double] negation elimination, which is not available in intuitionistic logic. – user21820 Mar 15 '15 at 12:50