This is a very annoying aspect of logic: there is a lot of terminological overload. This happens repeatedly (e.g. in computability theory, also known as recursion theory, the terms "recursively enumerable," "computably enumerable," "semidecidable," and "recognizable" are all synonymous). Ironically, this largely stems from attempts to make the subject more understandable by using terms we have some intuition about.
Often the notation is much clearer than the terminology, even though it looks weirder. For example, in our case we have two relevant symbols, "$\vdash$" and "$\models$" (LaTeX `\vdash` and `\models`, respectively). The former corresponds to "syntactic deduction"/"syntactic entailment"/"provability," while the latter corresponds to "semantic deduction"/"semantic entailment"/"entailment."
For $\Gamma$ a set of sentences and $\varphi$ a single sentence, we have:
$\Gamma\vdash\varphi$ means that there is some formal proof - in whatever system we're using - of $\varphi$ from $\Gamma$. A formal proof is a string of symbols following some basic rules; there's no discussion of what the sentences in our logic mean.
- Note that a priori "$\vdash$" is ambiguous, since there are multiple proof systems out there (e.g. Hilbert style, sequent calculus, ...). Really we should distinguish the various $\vdash$s via subscripts (e.g. $\vdash_A$ vs. $\vdash_B$ for different proof systems $A$ and $B$), but in practice this generally isn't done, since we can prove that all the usual ones are equivalent (see also the completeness theorem mentioned below).
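To make the "symbol manipulation" flavor of $\vdash$ concrete, here is a minimal sketch in Lean 4 (my own illustration; the names `p`, `q`, `h1`, `h2` are arbitrary, not part of any standard notation). The proof checker verifies the derivation purely by its inference rules, never asking what $p$ or $q$ mean:

```lean
-- Γ = {p → q, p}; we derive q from Γ by a single application of modus ponens.
-- The kernel only checks that the symbols fit the rules of the system;
-- p and q are never interpreted.
example (p q : Prop) (h1 : p → q) (h2 : p) : q :=
  h1 h2
```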
$\Gamma\models\varphi$ means that every structure satisfying $\Gamma$ also satisfies $\varphi$. That is, if $\mathcal{M}$ is a first-order structure and each sentence in $\Gamma$ is true in $\mathcal{M}$, then $\varphi$ is true in $\mathcal{M}$. For instance, exhibiting a nonabelian group demonstrates that the group axioms do not entail, in the sense of $\models$, the sentence $\forall x,y(x*y=y*x)$.
- The symbol "$\models$" is also used to relate structures to sentences: we write "$\mathcal{M}\models\varphi$" if $\varphi$ is true in $\mathcal{M}$. Similarly, there is an abuse of notation around sentences vs. sets of sentences: "$\mathcal{M}\models\Gamma$" means $\mathcal{M}\models\varphi$ for every $\varphi\in\Gamma$, "$\Gamma\models\Delta$" means that $\Gamma\models\varphi$ for every $\varphi\in\Delta$, "$\varphi\models\psi$" means $\{\varphi\}\models\psi$, etc.
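For a smaller, purely propositional illustration of a failure of $\models$ (a sketch of my own, to complement the group example above): $\{p\lor q\}\not\models p$, since the assignment making $p$ false and $q$ true satisfies the premise but not the conclusion. Loosely, the same countermodel can be exhibited in Lean 4 by instantiating $p$ and $q$ with those truth values (this conflates Lean's `Prop` with truth values, so take it as an analogy rather than the model-theoretic definition):

```lean
-- {p ∨ q} ⊭ p: instantiating p := False and q := True makes the premise
-- p ∨ q true while the conclusion p is false, refuting the entailment.
example : ¬ (∀ (p q : Prop), (p ∨ q) → p) :=
  fun h => h False True (Or.inr True.intro)
```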
Arguably the first real theorem in logic is that $\vdash$ and $\models$ coincide (at least when we use a reasonable proof system for $\vdash$): this is Gödel's completeness theorem (yes, he proved a completeness theorem and an incompleteness theorem). This theorem is far from obvious; see my summary here. This also explains why we can get away with using the term "entailment" rather sloppily: all the reasonable versions of syntactic entailment agree, and agree with semantic entailment.
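In symbols, soundness and completeness together say that for every set of sentences $\Gamma$ and every sentence $\varphi$,
$$\Gamma\vdash\varphi \iff \Gamma\models\varphi,$$
with the left-to-right direction being soundness and the right-to-left direction being completeness proper.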
If the inference is tautologically valid then it's logically valid. – cekami7844 May 11 '20 at 15:52

Premises $\vdash$ Conclusion: when a chain of argument leads from the premises to the conclusion via intermediate valid inferential steps, then the argument constitutes a proof of the conclusion from those premisses. This is a syntactic relation because whether an array of wffs counts as a proof doesn't depend on their meaning, but only on whether the wffs are related in the ways allowed by the inference rules of the proof system. Correct, right? – cekami7844 May 11 '20 at 15:56