
I realise that an expression like $x=x$ is not a tautology in propositional logic, because "=" is not formally defined as a propositional connective. What confuses me is how the propositional connectives are defined in the first place. If the connectives are defined by axioms, those axioms are assumed to be true, and so anything constructed from them, including such tautologies, would also rest on assumed axioms. So whilst true statements can be constructed, I don't really understand how these can be tautologies, since they rely on axioms (defining the connectives) that are assumed true rather than true inherently. I was wondering how this problem is avoided, and how defining the propositional connectives avoids ending up in the same situation as the "=" sign in $x=x$.

Apologies in advance if this is a silly question.

Ahmed

3 Answers


Since your question is quite broad, I think the best approach is to read some textbooks on mathematical logic covering propositional logic and first-order logic. After reading them, I think the questions above will resolve themselves naturally for you. The textbooks I recommend are as follows:

  1. Enderton, H. B., 2001. A Mathematical Introduction to Logic. 2nd ed. A Harcourt Science and Technology Company.
  2. Rautenberg, W., 2010. A Concise Introduction to Mathematical Logic. 3rd ed. Springer.

By the way, propositional logic can only "catch" some of the valid laws, which are called tautologies, while first-order logic can "catch" more of the valid laws, which are called valid formulas; for example, $x=x$ is a valid formula.

In fact, even first-order logic can't "catch" all the valid laws. Does that mean it's not necessary to learn propositional logic and first-order logic at all? Clearly not: the formalization method used in them is very useful in many ways, as is the distinction they draw between semantics and syntax, and so on.

M. Logic

I think what you're missing is that there are two different and independent ways to describe (classically) valid propositional formulas:

  1. First, by defining a proof system (such as a Hilbert calculus, or natural deduction, or sequent calculus) with particular axioms and inference rules, and asking whether there is a proof of a given formula in the system.

  2. Second, by using truth tables to define the truth value of formulas directly: $$ \begin{array}{cc|c} A & B & A\land B \\ \hline F & F & F \\ F & T & F \\ T & F & F \\ T & T & T \end{array} \qquad \begin{array}{cc|c} A & B & A\lor B \\ \hline F & F & F \\ F & T & T \\ T & F & T \\ T & T & T \end{array} \qquad \begin{array}{cc|c} A & B & A\to B \\ \hline F & F & T \\ F & T & T \\ T & F & F \\ T & T & T \end{array} \qquad \begin{array}{c|c} A & \neg A \\ \hline F & T \\ T & F \end{array} $$ For every formula, we can use the truth tables for the connectives to compute an overall truth value for every assignment of truth values to the propositional variables in it. There are finitely many such assignments, so we can do all of it systematically with pencil and paper. If the result is $T$ for all of them, the formula we're evaluating is valid.

You see that the second of these approaches does not involve declaring anything to be axioms -- we simply have tables that describe how each connective behaves in full detail.
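The second approach is mechanical enough that it can be sketched in a few lines of code. Here is an illustrative Python sketch (my own, not from the answer; the names `is_tautology`, `land`, `lor`, etc. are invented for this example) that evaluates a formula under every truth assignment, exactly as the pencil-and-paper method describes:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Return True iff `formula` evaluates to True under every
    assignment of truth values to its `num_vars` variables."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# The connectives, read straight off the truth tables above:
def land(a, b):    return a and b          # A ∧ B
def lor(a, b):     return a or b           # A ∨ B
def implies(a, b): return (not a) or b     # A → B: false only when A=T, B=F
def neg(a):        return not a            # ¬A

# A ∨ ¬A is valid; A ∧ ¬A is not.
print(is_tautology(lambda a: lor(a, neg(a)), 1))   # True
print(is_tautology(lambda a: land(a, neg(a)), 1))  # False
```

Note that nothing here is declared as an axiom: validity is decided purely by exhaustive evaluation over the finitely many assignments.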

The wonderful thing is now that the two descriptions lead to exactly the same formulas being "valid". That's a non-trivial claim that it typically takes at least a handful of pages in a mathematical logic textbook to prove.

This particular nice situation, where we have either a syntactic way of determining what is valid (namely proofs), or a semantic way (namely direct evaluation of truth tables) and they agree, is the happy ideal that we attempt to preserve as much as we can when we move to predicate logic (where the $=$ symbol becomes possible).

As it turns out, we cannot get it quite as nice as in the propositional case: the semantic definition of "valid" in predicate logic requires stating that such-and-such is true for a potentially infinite set of structures (whereas for a propositional formula there are always finitely many relevant truth assignments to try), which means that formal proofs take on additional importance in predicate logic: they're the way to show a formula is valid that we actually have a chance of writing down on a finite amount of paper!

In predicate logic, the word tautology is by pure convention restricted to those valid formulas that are known to be valid because they are substitution instances of valid propositional formulas. They are not all of the valid predicate-logic formulas, and they are not "more valid" than the other formulas -- they're simply a class of valid formulas that have particularly straightforward proofs, and it's considered useful to give that property a name.
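For a concrete example of this convention (my example, using a hypothetical predicate symbol $R$): since $P \to (\neg P \to Q)$ is a propositional tautology, substituting $P := \forall x\,(x=x)$ and $Q := \exists y\,R(y)$ yields the predicate-logic tautology $$\forall x\,(x=x) \;\to\; \big(\neg\,\forall x\,(x=x) \to \exists y\,R(y)\big),$$ whereas $\forall x\,(x=x)$ itself, though valid, is a substitution instance only of the single propositional variable $P$, which is not a tautology -- so it is valid but not a tautology in this restricted sense.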

Alternatively we could also have decided to use the word "tautology" about every predicate-logic formula that is always true in every structure (that is, independently of any "non-logical axioms"). I think there are even a few authors who do this, though I cannot name one offhand. It's just not the meaning most logicians use for the word.

Troposphere
    Truth tables are just a convenient way to present the results of formal proofs--a proof by cases and subcases. In your first 3 truth tables, the proof would be essentially one that considers 2 cases for A (T or F) and 2 subcases for B (T or F) for each of the cases for A--four cases in all, one for each line of the truth table. – Dan Christensen Aug 01 '21 at 02:59
    @RyanG: Why did you post a link to a shady scraper site? The real source is this post. And Dan's comment is bogus; truth-tables are semantic. Don't anyhow mix up semantics and the possible reflection of semantics in a deductive system. The semantics exist even if you have no deductive system. – user21820 Aug 01 '21 at 10:52
    @user21820 "truth-tables are semantic. Don't anyhow mix up semantics and the possible reflection of semantics in a deductive system." +1 Thank you for defending and explaining this point. – ryang Sep 27 '21 at 16:59

Consider the following example in which we formally prove the tautology A => [~A => B] for any logical propositions A and B. We will make use of a simplified form of natural deduction.

Here we make use of 4 different rules of inference. These rules are not tautologies. They are not logical statements composed of logical propositions and connectors as are tautologies. They are rules for generating strings of characters that themselves encode logical statements.

(Screenshot from my proof checker; the image shows the numbered proof lines referred to below.)

Here, the Premise Rule is applied on each of the first 3 lines. It allows us to, essentially, enter a what-if statement subject to various rules of syntax. On line 1, we enter the logical proposition A, asking what if A is true. On line 2, we enter the negation ~A, asking what if ~A is true. Similarly, on line 3 we enter ~B.

On line 4, we join together the statements on lines 1 and 2 to obtain a contradiction.

On line 5, we invoke the Conclusion Rule. Because the previous statement is a contradiction, we conclude that the premise on line 3 is false, i.e. that ~~B is true.

I hope the remaining lines, and the rule of inference invoked on each, are self-explanatory.
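The syntactic proof above can also be cross-checked semantically. Here is a small Python sketch (my own illustration, not part of the proof checker) that evaluates A => (~A => B) under all four truth assignments:

```python
from itertools import product

def implies(a, b):
    # A → B is false only when A is true and B is false
    return (not a) or b

# Evaluate A → (¬A → B) for every assignment of A and B.
for a, b in product([False, True], repeat=2):
    value = implies(a, implies(not a, b))
    print(a, b, value)
    assert value  # every row evaluates to True, so the formula is a tautology
```

As expected, every row comes out true, matching the conclusion of the formal proof.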