
As I am new to forcing, I would appreciate any help on whether the following is anywhere near correct:

Given a structure M, Enderton (2001, "A Mathematical Introduction to Logic", p. 84) defines truth in the structure M relative to a valuation S, for an expression A and the truth relation $\models$:

$(M, S) \models \neg A \quad \text{iff} \quad (M, S) \nvDash A$
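To see the clause in action, here is a minimal Python sketch (a toy of my own devising: one unary predicate P over a three-element domain, and only atomic formulas and negation):

```python
# Toy evaluator for Enderton's negation clause (illustrative only: one unary
# predicate P, formulas built from the atomic P(x) and negation).
M_domain = {0, 1, 2}                 # the structure's domain (shown for clarity)
P_interpretation = {1, 2}            # P(x) holds exactly for x in {1, 2}

def satisfies(formula, valuation):
    """Truth in the structure under a valuation S (here S just assigns x)."""
    kind = formula[0]
    if kind == 'P':                  # atomic formula, e.g. ('P', 'x')
        return valuation[formula[1]] in P_interpretation
    if kind == 'not':                # the clause: (M, S) satisfies ¬A iff it does not satisfy A
        return not satisfies(formula[1], valuation)
    raise ValueError(kind)

print(satisfies(('not', ('P', 'x')), {'x': 0}))  # True:  0 does not satisfy P
print(satisfies(('not', ('P', 'x')), {'x': 1}))  # False: 1 satisfies P
```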

In Cohen's forcing ("Set Theory and the Continuum Hypothesis") a very similar truth definition is given for the forcing relation $\Vdash$, with conditions P, Q playing a role 'similar' to a valuation but built up in stages: each condition specifies, for finitely many possible elements, whether they are in or not in a set G. The forcing clause for the negation of an expression A over a structure M is:

$(M, P) \Vdash \neg A \quad \text{iff} \quad \text{for all } Q \supseteq P, \ (M, Q) \nVdash A$
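And here, in the same spirit, is a minimal sketch of the clause just displayed, in a toy model of my own (not Cohen's full machinery): a 'condition' is a finite partial function from a three-element domain to $\{0,1\}$, read as declaring elements in or not in G, and only the atomic statement "n is in G" and its negation are treated (quantifiers and names are omitted).

```python
# Toy sketch of the forcing clause for negation: conditions are finite partial
# functions {0,1,2} -> {0,1}, read as "n is in G" / "n is not in G".
from itertools import product

DOM = 3  # a tiny universe so that all conditions can be enumerated

def conditions():
    """All finite partial functions on range(DOM), represented as dicts."""
    for bits in product((None, 0, 1), repeat=DOM):
        yield {n: b for n, b in enumerate(bits) if b is not None}

def extends(q, p):
    """q extends p: q decides everything p decides, in the same way."""
    return all(n in q and q[n] == v for n, v in p.items())

def forces(p, n):
    """p forces the atomic statement "n is in G" iff p explicitly puts n into G."""
    return p.get(n) == 1

def forces_not(p, n):
    """The negation clause: p forces "n is not in G" iff no extension of p forces "n is in G"."""
    return not any(forces(q, n) for q in conditions() if extends(q, p))

print(forces_not({0: 0}, 0))  # True:  no extension of {0: 0} can put 0 into G
print(forces_not({0: 1}, 0))  # False: {0: 1} itself forces "0 is in G"
```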

Both these definitions are of the logical form (abstracting away the relations $\models$ and $\Vdash$):

$\neg A$ is true $\iff$ A can't be shown to be true

This means that (a) at least one of $\neg A$ and A will be true, (b) $\neg A$ and A will not both be true, and (c) there will be no case where neither is true.

Case (c) means in particular that if the expression A[P] cannot be shown to be true, perhaps because of some limitation in its expressive power, then the truth definition defaults to taking its negation to be true (since classical logic is assumed), even if the negation also cannot be derived to be true 'in reality' because of the expression's limited expressive power. During the recursive evaluation of these truth relations, the truth of A is evaluated first and, based upon the result, the truth of $\neg A$ is then assigned. So in cases where the expression A has limited expressive power, the truth evaluation defaults to taking its negation to be true.

In addition, presumably the similar negation clause in both the Tarski and forcing definitions ensures that a consistent truth evaluation of the set of all expressions is produced.

In Cohen's forcing the conditions P and Q are effectively finite lists of elements declared to be in or not in an object G, which is otherwise a 'blank' set (called generic), with no properties other than those determined by conditions such as P.

Since any P is finite, it would be impossible for it to determine whether the 'blank' object G is finite or infinite, or whether it is the same set as any other set in the model M. Even if G had some properties, P would not be able to determine what they were unless they were included within the definition of the forcing relation. Because of the order in which the recursion is undertaken, if A := "G is finite" then A[P] asks "Is G finite, using the information in P?" No finite amount of information would be enough to determine this 'positive statement'. As a result the truth definition defaults to its negation, "G is not finite". In this case 'in reality' the negation "G is not finite" also could not be determined from the finite amount of information in P, but the order of the truth recursion forces "G is not finite" to be the true expression, despite neither being true 'in reality'.
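Continuing the toy sketch above (which, to be clear, only covers atomic statements, not quantified ones like "G is finite"): a condition that says nothing about an element n forces neither "n is in G" nor its negation, because some extension still forces the positive statement.

```python
empty = {}
print(forces(empty, 0))      # False: the empty condition does not put 0 into G
print(forces_not(empty, 0))  # False: the extension {0: 1} forces "0 is in G"
```

So for a quantified statement like "G is finite" the behaviour depends on how the recursion treats quantifiers, which this sketch does not model.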

So forcing looks to be all about a subtle use of the 'lack of expressive power' of the conditions P, together with a classical-logic truth definition that ensures that for every expression either A or $\neg A$ is produced, even though 'in reality' in some cases neither A nor $\neg A$ could be determined to be true. So expressions could be true under forcing but have no meaning 'in reality'.

  • I think you're using $\iff$ wrong. – Asaf Karagila Jul 08 '19 at 19:26
  • "even if its negation can also not be derived to be true 'in reality' due to the expressions limited expressive power." What exactly does "in reality" mean here, and how does expressive power enter into it? – Noah Schweber Jul 08 '19 at 20:17
  • Apologies, the ⟺ right hand side has a typo, which I've just updated. –  Jul 09 '19 at 04:03
  • @Noah OK, thanks once again (I really do appreciate your deep insights - I need to use forcing in my research) - I've just seen your response. As I'm a physicist I unconsciously dropped into physics speak. I guess mathematically the physics 'in reality' would be 'in the meta-theory' (or a sort of meta-meta-theory, as the knowledge that a finite expression can't express much more about other expressions than the information contained in it would sit outside of the Tarski truth definition)? I initially thought the Tarski truth definition was 'obvious' - how wrong I was! –  Jul 09 '19 at 04:38
  • @LittleCheese I mean, I do consider the Tarski approach to be the obvious one - that doesn't mean, of course, that there aren't highly non-obvious things to be said about it. I don't understand your comment about reality, though. Can you give an alternative property you'd expect negation to obey? – Noah Schweber Jul 09 '19 at 05:18
  • @Noah ... I'm still digesting your response - so much fascinating stuff to learn and, much more importantly, understand! Your spot-on, very positive question "Can you give an alternative property you'd expect negation to obey" is way too tough for me to answer, though I am now prompted to think hard on it. –  Jul 09 '19 at 06:13

1 Answer


What is this "reality" of which you speak?

There's a lot of vagueness here, but I think the right way to ask the question is:

How can we understand $\Vdash$ and $\models$ as "non-syntactically-biased" definitions, as opposed to ones which appear to "default to negation" (or indeed display any other kind of syntactic bias) for no clear reason?

My jumping off point is that there's no kill like overkill: you can't be syntactically biased if you don't have syntax. Once we understand the syntactic approach as an "implementation" of a semantic one, things get much nicer and we see fundamental symmetry replace apparent bias. In the case of $\Vdash$, we have two plausible semantics, each of which is "unbiased," and the recursive definition of forcing corresponds to one but not the other (so that we see an apparent bias if we conflate the two approaches). In the case of $\models$ we get something even better: not only does the apparent bias go away, it actually guarantees a fundamental symmetry! How cool is that?$^1$

$^1$Pretty cool.


Generic comments

Let's look at forcing first, since in my opinion it's intuitively more difficult to critique more fundamental notions. I claim that while the definition of forcing may appear "biased towards negation," it's actually not: it's a concrete way of describing a fully unbiased semantics.

Roughly speaking, a given forcing notion $\mathbb{P}$ gives rise to a "big semantics" and a "small semantics" - the former consisting of all maximal filters through $\mathbb{P}$, and the latter of all sufficiently generic filters through $\mathbb{P}$.

  • OK I'm hiding an important detail here - what does "sufficiently generic" mean? This is where the ground model comes in: it provides a canonical collection of dense subsets of $\mathbb{P}$ we're required to meet. For simplicity, though, I'm setting that aside for now.

Each semantics gives rise to an "all-at-once" notion of forcing (in addition to the usual recursive definition of forcing):

  • $p\Vdash_{all}\varphi$ iff every maximal filter through $\mathbb{P}$ extending $p$ has property $\varphi$.

  • $p\Vdash_{gen}\varphi$ iff every sufficiently generic maximal filter through $\mathbb{P}$ extending $p$ has property $\varphi$.

The point to notice now is that each of these is completely unbiased - indeed, each is completely non-syntactic (what sort of property is $\varphi$? anything!).
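To illustrate just how non-syntactic this is, here is a brute-force Python sketch of $\Vdash_{all}$ over a small finite poset (the poset and names are made up; note that on a finite poset every maximal filter already meets every dense set, so $\Vdash_{all}$ and $\Vdash_{gen}$ coincide there, and an infinite forcing notion is needed to separate them):

```python
# Brute-force "all-at-once" forcing over a tiny made-up poset; the property phi
# can be an arbitrary Python predicate on filters -- nothing syntactic involved.
from itertools import combinations

# weaker_eq[x] = the conditions that x extends (including x itself);
# 'root' is weakest, 'a' and 'b' are incompatible, 'a1'/'a2' strengthen 'a'.
weaker_eq = {
    'root': {'root'},
    'a':  {'a', 'root'},
    'b':  {'b', 'root'},
    'a1': {'a1', 'a', 'root'},
    'a2': {'a2', 'a', 'root'},
}
P = list(weaker_eq)

def extends(q, p):
    return p in weaker_eq[q]

def is_filter(F):
    """Nonempty, upward closed, and any two members have a common extension in F."""
    F = set(F)
    if not F:
        return False
    if any(w not in F for x in F for w in weaker_eq[x]):
        return False
    return all(any(extends(r, x) and extends(r, y) for r in F) for x in F for y in F)

def maximal_filters():
    all_filters = [set(c) for k in range(1, len(P) + 1)
                   for c in combinations(P, k) if is_filter(c)]
    return [F for F in all_filters if not any(F < G for G in all_filters)]

def forces_all(p, phi):
    """p forces phi in the "all" sense iff every maximal filter containing p satisfies phi."""
    return all(phi(F) for F in maximal_filters() if p in F)

phi = lambda F: 'b' not in F   # an arbitrary, completely non-syntactic property

print(forces_all('a', phi))     # True:  every maximal filter through 'a' avoids 'b'
print(forces_all('root', phi))  # False: the maximal filter {'b', 'root'} contains 'b'
```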

Now here's the key point: in the appropriate sense, the "recursive" notion $\Vdash_{rec}$ of forcing is exactly $\Vdash_{gen}$! This is one of the forcing theorems. So an apparently-biased definition ($\Vdash_{rec}$) is actually the same as a clearly-unbiased definition ($\Vdash_{gen}$).

So what's the other relation $\Vdash_{all}$ doing up there? Well, it's clarifying a separate confusion: when you write

In this case 'in reality' the negation "G is not finite" also could not be determined from the finite amount of information in P, but the order of the truth recursion forces "G is not finite" to be the true expression, despite neither being true 'in reality',

you're observing that $\Vdash_{rec}=\Vdash_{gen}$ is not sound with respect to the "big" semantics (for which $\Vdash_{all}$ is sound), but that's not the only semantics in town: we also have the "small" semantics, which is "unbiased" as well and for which $\Vdash_{rec}=\Vdash_{gen}$ is sound. The key point of forcing is the observation that $\Vdash_{gen}$ is actually better in many situations, and in particular for building models of set theory.
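To connect this back to the question's example (sketching the standard Cohen case loosely): let $\mathbb{P}$ be the finite partial functions from $\omega$ to $2$ and let $\varphi$ say "the set $G$ read off from the filter is infinite." The maximal filter of all finite subfunctions of the constant-$0$ function shows that $\emptyset\nVdash_{all}\varphi$, while every sufficiently generic filter meets each dense set $D_N=\{p:\exists m>N,\ p(m)=1\}$, so $\emptyset\Vdash_{gen}\varphi$. That is, "$G$ is infinite" is forced in the $gen$ sense even though it fails for some maximal filters - exactly the gap being pointed at.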

  • OK fine, it's not quite true that $\Vdash_{gen}$ is unbiased - it's clearly biased towards some maximal filters over others. But the point is that that's a different kind of asymmetry: it's kicking in at the level of what we even consider an object of study, rather than how we approach analyzing those objects once we have them.

Something satisfying

Now on to the more fundamental side of things: satisfaction in structures. Can we, similarly to forcing, "unbias" Tarski's truth definition? I claim that we can, and moreover when we look carefully the apparent syntactic bias is replaced by an actual guarantee of symmetry!

Let's set up our semantics. We have a collection $\mathfrak{S}$ of things called "structures," and sentences carve out subcollections. The usual approach in abstract model theory (unfortunately, I have to introduce my own terminology going forward) is:

An assertion logic is a pair $\mathbb{S}=\langle S, \leadsto\rangle$ with $\leadsto\subseteq\mathfrak{S}\times S$. Intuitively, we think of $S$ as the collection of sentences, and $\leadsto$ as the satisfaction notion.

First-order logic gives an example of an assertion logic, but there are plenty of others, including interesting ones like infinitary and second-order logics and very silly ones which don't have names because nobody likes them.

In this approach, the Tarskian definition of negation amounts to a kind of symmetry in that it tells us that for every $A\subseteq\mathfrak{S}$, the following are equivalent:

  • There is some $\varphi\in S$ such that for each $\mathcal{M}\in\mathfrak{S}$ we have $\mathcal{M}\leadsto\varphi$ iff $\mathcal{M}\in A$.

  • There is some $\varphi\in S$ such that for each $\mathcal{M}\in\mathfrak{S}$ we have $\mathcal{M}\leadsto\varphi$ iff $\mathcal{M}\not\in A$.

But this may feel contrived. I think there may be a more satisfying approach - a "negation-blind" notion of logics (which I've actually never seen before but I'm sure is well-known):

A partition logic is a collection $\mathbb{D}$ of partitions of $\mathfrak{S}$ into two pieces. Intuitively, a partition in $\mathbb{D}$ is a "question," and the two sides represent the two possible answers, yes and no. I'll suggestively call these the "expressible partitions."

(Note that a partition logic has bivalence built in: every structure lies on exactly one side of each "question.") Negation-blindness is due to the fact that we're looking at unordered partitions: we don't distinguish $\mathfrak{S}=X\sqcup Y$ from $\mathfrak{S}=Y\sqcup X$.

Every assertion logic induces a partition logic in an obvious way. This passage is only one-way: two inequivalent assertion logics can yield the same partition logic (it's a good exercise to formalize and prove this claim). That said, it's easy to (formalize and) check that first-order logic is "maximal" with respect to the partition logic it generates, as a consequence of Tarski's definition of negation. So already we have a weak kind of symmetry here.
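Here is a toy illustration of both the passage and the information loss (everything in the snippet is made up, and an assertion logic is represented simply by the family of classes its sentences carve out):

```python
# Two inequivalent assertion logics (represented by their families of definable
# classes) inducing the same partition logic -- illustrative toy example only.
structures = frozenset({'M1', 'M2', 'M3'})   # a toy class of "structures"

def induced_partition_logic(definable_classes):
    """Each definable class X induces the unordered partition {X, complement of X}."""
    return {frozenset({X, structures - X}) for X in definable_classes}

# Logic A: two "sentences", carving out {M1} and its complement {M2, M3}.
logic_A = {frozenset({'M1'}), frozenset({'M2', 'M3'})}
# Logic B: a single "sentence", carving out {M1} only.
logic_B = {frozenset({'M1'})}

print(logic_A == logic_B)                                                     # False
print(induced_partition_logic(logic_A) == induced_partition_logic(logic_B))  # True
```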

But even more can be said. Despite the loss of information when we pass from assertions to partitions, partition logics are still content-ful: properties of logical systems can be approached from this standpoint. For example,

$(*)\quad$ "our logic has conjunctions and disjunctions"

would be expressed in this context by saying

$(**)\quad$ "whenever we have expressible partitions $$\mathfrak{S}=A_0\sqcup A_1=B_0\sqcup B_1,$$ the partition $$\mathfrak{S}=(A_0\cap B_0)\sqcup (A_1\cup B_1)$$ is also expressible."

(But see below ...)

The (downward) Löwenheim–Skolem property (for single sentences) is

$(\dagger)\quad$ If $\mathfrak{S}=X\sqcup Y$ is expressible and $X$ is nonempty, then $X$ contains a countable structure.

And the compactness property is

$(\star)\quad$ Whenever we have a family of expressible partitions $$\mathfrak{S}=X_i\sqcup Y_i\quad\quad (i\in I)$$ such that $\bigcap_{i\in I}X_i=\emptyset$, there is some finite $F\subseteq I$ such that $\bigcap_{i\in F}X_i=\emptyset$.
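Before turning to the symmetry issue, here is a toy sketch of how a property like $(**)$ above can be checked purely at the level of partition logics, over a four-element class of "structures" (all names made up):

```python
# Checking the closure property (**) at the level of partition logics -- toy example.
from itertools import combinations

structures = frozenset(range(4))

def partition(X):
    """The unordered two-sided partition determined by a class X of structures."""
    X = frozenset(X)
    return frozenset({X, structures - X})

def satisfies_star_star(partition_logic):
    """(**): for any two expressible partitions and any choice of sides A0, B0,
    the partition {A0 intersect B0, its complement} is again expressible."""
    return all(partition(A0 & B0) in partition_logic
               for Pp in partition_logic for Qq in partition_logic
               for A0 in Pp for B0 in Qq)

# The "full" partition logic (every two-sided partition of the class) satisfies (**):
all_classes = [frozenset(s) for k in range(len(structures) + 1)
               for s in combinations(structures, k)]
full_logic = {partition(X) for X in all_classes}
print(satisfies_star_star(full_logic))   # True

# A logic with just two expressible partitions need not satisfy it:
small_logic = {partition({0, 1}), partition({0, 2})}
print(satisfies_star_star(small_logic))  # False: e.g. sides {0, 1} and {0, 2} would
                                         # need the partition {0} / {1, 2, 3} expressible
```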

Now we come to the symmetry issue. Consider first-order logic without negation. This possesses conjunctions and disjunctions, trivially. However, the principle $(**)$ above fails for this logic: take $A_0$ to correspond to $\exists x(P(x))$ and $B_1$ to correspond to $\exists x(Q(x))$. Then the partition $$\mathfrak{S}=(A_0\cap B_0)\sqcup (A_1\cup B_1)$$ is not expressible here, since neither side - $\exists x(P(x))\wedge\neg\exists y(Q(y))$ on the one hand, nor $\neg\exists x(P(x))\vee\exists y(Q(y))$ on the other - can be expressed in a negation-free way. Instead, negation-free first-order logic has only the much weaker version of $(**)$:

$(***)\quad$ "Whenever we have expressible partitions $$\mathfrak{S}=A_0\sqcup A_1=B_0\sqcup B_1,$$ at least one of the partitions $$\mathfrak{S}=(A_0\cap B_0)\sqcup (A_1\cup B_1)\quad\mbox{and}\quad\mathfrak{S}=(A_0\cap B_1)\sqcup (A_1\cup B_0)$$ is also expressible."

And ... $(***)$ is asymmetric at a semantic level, in that an expressible partition will in general have a "distinguished side." The Tarskian approach to negation prevents exactly this, since it says that whenever one side of a partition corresponds to a sentence, so does the other side (by virtue of passing from $\varphi$ to $\neg\varphi$).

Basically, Tarski's approach to negation tells us that we can always take a configuration occurring in our partition logic, "flip sides" of the partitions however we want, and still get the same configuration. And here we have a kind of symmetry which is really equivalent to the one coming from assertion logics, but may feel a bit more fundamental since it shows how even when we naively symmetrize everything, there are still asymmetries which can emerge, and that the Tarskian approach to negation is exactly what prevents them.

Noah Schweber