1

Please read the full question (and the linked answer) before responding.


This started out as a much longer question about extending complete theories by partial functions. That question was effectively answered here. In their answer, Noah Schweber presents an alternative logic that allows for the introduction of partial functions while preserving decidability.

Using this logic, "$\frac10\ne\frac10$" is provable in the theory of real closed fields (hence, provable in any theory able to express the algebra of the real numbers with division, up to elementary equivalence).

However, this result cannot be replicated in strict first-(or second)-order logic, which means that somewhere out there is a mathematician who will disagree despite using division in their daily life. So to that mathematician I ask:

Is the statement $\frac10=\frac10$ true or false? Explain.

Then, depending on the answer, I will hopefully be able to reverse-engineer "the standard" method of extending a theory by partial functions implied by the existence and frequent use of the symbols $\sqrt,\div,\det,\lim,$ etc.

Alternatively, choose up to three:

  1. Mathematics takes place within the context of first-(or second)-order logic ("formalizable in ZFC" means "within first-order logic"), and the distinction between "syntax" and "semantics" is meaningful.

  2. "$\forall x(x=x)$" is a theorem of first-(and second)-order logic.

  3. The theory of real closed fields is complete, and "$\frac11=1$" is provable in the theory of real closed fields.

  4. The correctness of a mathematical proof is not a matter of opinion, and $\frac10\ne\frac10$

Additional Notes

Many of the comments responding to this question seem to boil down to "$\frac10$ does not make sense, therefore '$\frac10=\frac10$' is meaningless." This completely misses the point. Using standard definitions, "1" is a constant symbol, "0" is a constant symbol, and "/" is a binary (partial) function symbol. If we take partial functions to be first-orderizable (which they must be to keep real closed fields [with division] inside of first-order logic), then "$\frac10$" must be a term - any parser for first-order logic will recognize $\frac10$ as a term. Ruling it out would demand a context-sensitive language, but the language of first-order logic is context free. You cannot have "is context free" and "is not context free" simultaneously, so either we are not working in first-order logic when we introduce "$/$", or $\frac10$ is a term. If $\frac10$ is a term, then "$\frac10=\frac10$" is well-formed - true, even, by instantiation on the logical axiom $\forall x(x=x)$. All of this, I believe, is encapsulated quite succinctly in my "choose three."
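
To make the parsing point concrete, here is a minimal sketch in Python (the grammar fragment and the names `Const`, `Div`, and `parse` are invented for this illustration, not taken from any particular formalization). A context-free term grammar has no way to ask whether "/" is actually defined at its arguments, so it accepts $\frac10$ exactly as readily as $\frac11$:

```python
from dataclasses import dataclass

# Toy fragment of a first-order term grammar:
#   Term ::= "1" | "0" | "(" Term "/" Term ")"
# Whether a string is a term depends only on its shape, not on whether the
# function symbol is "defined" at those arguments.

@dataclass
class Const:
    name: str

@dataclass
class Div:
    left: "Term"
    right: "Term"

Term = Const | Div  # requires Python 3.10+

def parse(tokens: list[str]) -> tuple[Term, list[str]]:
    """Recursive-descent parser for the toy grammar above."""
    head, *rest = tokens
    if head in ("0", "1"):
        return Const(head), rest
    if head == "(":
        left, rest = parse(rest)
        assert rest[0] == "/", "expected '/'"
        right, rest = parse(rest[1:])
        assert rest[0] == ")", "expected ')'"
        return Div(left, right), rest[1:]
    raise SyntaxError(f"unexpected token {head!r}")

# "( 1 / 0 )" is accepted exactly as readily as "( 1 / 1 )":
term, _ = parse("( 1 / 0 )".split())
print(term)  # Div(left=Const(name='1'), right=Const(name='0'))
```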

Also, should I tag this with something else? There's a lot going on here, but I'm not sure if this is properly within the scope of any particular field; like, would this still fall under "reverse mathematics," or maybe "formal proofs"?

R. Burton
    Is $1+2=3+cow?$ It is meaningless. Division is not a full binary operation, which is one reason we don’t tend to use the operation in formal mathematics. We only ascribe “true” or “false” when we have a meaningful statement, and this is not one. – Thomas Andrews May 02 '22 at 16:36
  • Essentially, we have for real or rational numbers, for any $x,y$ with $y\neq0$ exactly one $z$ such that $z*y=x.$ When you write $x/y$ you are referring to that one value $z.$ But $1/0$ has no such value, and $0/0$ has many such values. So neither is taken to be a meaningful expression. You can prove: “For all $z_1,z_2,$ if $0\cdot z_1=1$ and $0\cdot z_2=1,$ then $z_1=z_2.$” But you can also show “For all $z_1,z_2,$ if $0\cdot z_1=0\cdot z_2=1,$ then $z_1\neq z_2.$” This is because if $\forall x:\lnot P(x)$ is true, then $\forall x (P(x)\implies Q(x))$ is true for any statement $Q(x).$ – Thomas Andrews May 02 '22 at 16:46
  • 1
    This of course depends on context and conventions. But I think most people would agree that it only makes sense to ask about the truth value of $a = b$ when $a$ and $b$ are names of mathematical objects (or, if you prefer to take a more formalist approach, when $a$ and $b$ are well-formed terms). Now if you're in a context where you've defined a mathematical object called $\frac{1}{0}$ (or if you've set up some syntax such that $\frac{1}{0}$ is a well-formed term), then $\frac{1}{0} = \frac{1}{0}$ is true. – Alex Kruckman May 02 '22 at 17:02
  • 2
    But without this extra context, i.e. if we are relying on our standard shared understanding of the notation $\frac{n}{m}$, then $\frac{1}{0}$ is not a mathematical object (and not a well-formed term). So it doesn't make sense to ask about the truth value of $\frac{1}{0} = \frac{1}{0}$. – Alex Kruckman May 02 '22 at 17:04
  • Since $\frac{1}{0}$ is undefined, the statement is as nonsensical as the statement "the sky tastes sweet", although it is of the form $A=A$ which is true for every $A$, but only if $A$ is a defined object. – Peter May 02 '22 at 17:04
  • Yes the issue is: if we change the meaning of zero and of division, we may have a result. – Mauro ALLEGRANZA May 02 '22 at 17:31
  • @AlexKruckman Preventing $\frac10$ from being well-formed comes at the cost of context-freeness. Since the language of first-(and second)-order logic is context free, we must depart from first-(and second)-order logic in order to have "$\frac10$ is not well-formed" – R. Burton May 02 '22 at 18:02
  • 1
    @Peter "$A$ is a defined object" means "the symbol '$A$' denotes something." This is an assertion about the semantics of language, something which the language itself, and the proof system, is unaware of. Put differently: there is know way to know that $\frac10$ is not defined from inside first-order logic. – R. Burton May 02 '22 at 18:24
  • @R.Burton Yes, and we already have to depart from first-(and second)-order logic in order to implement the division symbol as a partial function. I'm trying (and failing) to understand what you're driving at here. Is your question something like this? "Fix some way of translating statements that mathematicians make into the first-order language of set theory. Is the translation of $\frac{1}{0} = \frac{1}{0}$ provable or disprovable from ZFC?" – Alex Kruckman May 02 '22 at 18:28
  • I mean, obviously when mathematicians do mathematics, they write down things which are not sentences of first-order logic. The set of meaningful statements of natural language mathematics is not context-free! This is not in conflict with the fact that we have a translation procedure that maps meaningful statements of natural language mathematics into the first-order language of set theory. – Alex Kruckman May 02 '22 at 18:32
  • @AlexKruckman It was more like "how do we translate partial functions into first-order logic in a way that both mathematicians and logicians will be happy with?" I chose $\frac10$ because it makes the discrepancy clear. If everyone can agree that mathematics does not happen in first-order set theory, that's fine. If everyone can agree that $\frac10=\frac10$, that's fine too. – R. Burton May 02 '22 at 18:39
  • @R.Burton Noah Schweber's answer gives you a few choices for how to resolve this issue. 1) ban partial functions 2) ban all functions 3) return 0 or the first argument or something arbitrarily 4) use a form of free logic that assigns the truth value "false" to atomic wff's containing an invalid subexpression. Of these, (1) is the most common. FOL does not handle partial functions out of the box. You can extend it to handle them, but if you do that you get a different system. – Greg Nisbet May 02 '22 at 18:52
  • 6
    I think you could improve this question by maybe changing the title to something like "how do you handle partial functions in FOL?" and maybe refer specifically to some of the solutions that Noah proposes. Based on reading the question, I can't tell whether you want clarification on some of the potential solutions or want to know which one of them is the most standard. – Greg Nisbet May 02 '22 at 18:57
  • IF we work in FOL, the definition will be: $y \ne 0 \to (x/y=z \leftrightarrow x=y \cdot z)$ – Mauro ALLEGRANZA May 03 '22 at 08:18
  • 3
    I feel silly commenting after writing such a long answer, but I just noticed you wrote "If we take partial functions to be first-orderizable (which they must be to keep real closed fields [with division] inside of first-order logic)". Have you ever looked at a first-order axiomatization of the theory of fields? The standard choice is to use the language of rings $\{0,1,+,-,\times\}$ (no division). We get multiplicative inverses with an axiom: $\forall x ((x \neq 0)\rightarrow \exists y (xy = 1))$. – Alex Kruckman May 03 '22 at 15:54
  • @AlexKruckman: Well, for the purpose of actually doing real analysis within a practical foundational system, we actually do want to be able to define division, which would make the 'field' part of the axioms satisfied by ℝ into a ∀-theory and hence nicer to work with. For instance, we can easily state and prove basic facts like "∀x,z∈ℝ ∀y,w∈ℝ[≠0] ( (x/y)·(z/w) = (x·z)/(y·w) )". – user21820 May 16 '22 at 18:46

3 Answers

6

I'll try to answer your question with an extended analogy.

Here is a simplified picture of how programming computers works. At a basic level, computers run machine code: a simple set of instructions that say things like "add the bits in these two positions in memory" or "move this bit from this position to this other position". It's extremely laborious to write programs in machine code, so people develop higher-level programming languages with more complicated instructions, data types, etc. When I write a program in a higher-level language, another program called a compiler takes my program as input, reads it, and translates it into machine code, which can be run on my computer.

In mathematics, first-order ZFC is like machine code (or maybe assembly code? but that's splitting hairs). Essentially no one actually writes proofs or definitions in the first-order language of set theory. Instead, they communicate using the whole apparatus of ordinary / natural language mathematics, which is like a higher-level programming language. The role of first-order ZFC as a foundation for mathematics is that concepts and proofs in ordinary mathematics can in principle be translated (compiled) into formal proofs from ZFC.

Asides: (1) Different computers with different hardware have different machine languages. But that's no problem (and this is one of the advantages of having a higher-level programming language). The same program can be run on different computers, as long as it compiles to appropriate machine code for each computer it might run on. The same is true in mathematics: we can swap out first-order ZFC for a different foundation system, e.g. another set theory or homotopy type theory or whatever, and at the expense of picking a new "compiler", we will still be able to translate our higher-level proofs to this new foundation - as long as they didn't use any features not present in the new foundation (e.g. the axiom of choice is not implemented in ZF). (2) We can also swap out the higher-level language. If my use of the phrase "natural language mathematics" bothers you, feel free to instead think about some formal language used by an automated proof assistant like Coq or Agda or Lean. These languages are more explicitly like higher-level programming languages that can be compiled to formal proofs. A growing number of people think that mathematicians should "program" in these languages instead of (or rather, in addition to) natural language.

Ok, now one feature of higher-level programming languages and their compilers is that they catch bugs automatically: they detect syntax errors and type errors when they read your code, before trying to compile it. For example, if you try to write something like $5+\texttt{"banana"}$, most programming languages will complain that it doesn't make sense to add a number to a string. Such code will never make it to the stage of running on a computer, because it won't get compiled: the compiler doesn't know what to do with it. Of course, the same programming language can have different compilers, and some compiler might make a weird choice like treating every string as $0$ when you add it to a number, so that $5+\texttt{"banana"} = 0$. But in general, it's helpful if your compiler has strong type checking and error checking capabilities so that bugs are caught before buggy code gets run on the computer.
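
For what it's worth, the same complaint shows up even in a dynamically checked language. A tiny Python illustration (purely for the analogy, nothing here is part of any foundational claim):

```python
# Python refuses to invent a value for a number-plus-string expression; a
# static checker such as mypy would flag the same line before it ever runs.
try:
    5 + "banana"
except TypeError as err:
    print(err)  # unsupported operand type(s) for +: 'int' and 'str'
```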

Similarly, there are lots of ways to decide on a translation procedure from higher-level mathematics to first-order ZFC, and it's helpful if our translation procedure refuses to translate statements that are obviously buggy (meaningless). Some simple translation procedures may happily accept the "sentence" $\frac{1}{0} = \frac{1}{0}$ and translate it into a sentence of ZFC that can be proved or disproved - but since this sentence is actually meaningless, whether it is provable or disprovable will depend heavily on the implementation of the translation procedure! But it's more reasonable to expect the notation $\frac{a}{b}$ to require $a$ to be a number and $b$ to be a non-zero number. If $b = 0$, we should get a "type error". That is, our translation procedure should be smart enough that before it translates a statement involving the notation $\frac{a}{b}$, we have to give it proofs that $a$ is a number and $b$ is a non-zero number (and this is something mathematicians are used to doing - when I write $\frac{a}{b}$ on the board in the course of a proof or calculation in class, I always justify to my students why $b\neq 0$, if it's not obvious). In this case, it doesn't make sense to ask about the truth value of $\frac{1}{0} = \frac{1}{0}$ because it is rejected out of hand by our "mathematics compiler".
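
One way to picture such a "smart" translation procedure is as a signature that demands the non-zeroness certificate up front. Here is a minimal Python sketch (the names `NonZero` and `div` are hypothetical, invented for this illustration; a proof assistant would discharge the obligation statically rather than at run time):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonZero:
    """A number packaged with a (run-time) certificate that it is non-zero --
    a crude stand-in for the proof obligation a 'mathematics compiler' would
    demand before accepting the notation a/b."""
    value: float

    def __post_init__(self):
        if self.value == 0:
            raise ValueError("no certificate exists: the value is zero")

def div(a: float, b: NonZero) -> float:
    # The signature itself refuses 1/0: the second argument cannot be built.
    return a / b.value

print(div(1.0, NonZero(2.0)))   # 0.5
# div(1.0, NonZero(0.0))        # rejected before any division is attempted
```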

Let me now respond to your points 1-4.

  1. Mathematics takes place within the context of first-(or second)-order logic ("formalizable in ZFC" means "within first-order logic"), and the distinction between "syntax" and "semantics" is meaningful.

I believe that mathematics is not solely first-order ZFC. Rather, it's a complex, strongly typed, natural language system (that includes things like partial functions) but can, in principle, be "compiled" to first-order ZFC. This is part of my personal philosophy of mathematics.

But the discussion above is consistent with the worldview that mathematics is solely first-order ZFC. Under such a view, we should call all the higher-level mathematics that people actually do "pre-mathematics". It is only "mathematics" once it is translated to ZFC. This view does not require that the translation procedure is total (defined on all possible input). It's perfectly consistent to view the sentence $\frac{1}{0} = \frac{1}{0}$ as "pre-mathematics" that has a type error and hence cannot be turned into "mathematics".

  1. "$\forall x(x=x)$" is a theorem of first-(and second)-order logic.

Clearly true.

  3. The theory of real closed fields is complete, and "$\frac11=1$" is provable in the theory of real closed fields.

I'm not sure of the relevance of this point. The first-order theory of real closed fields is indeed complete. $\frac{1}{1} = 1$ is not provable in the theory of real closed fields, because the syntax $\frac{1}{1}$ is not part of the first-order language of real closed fields! But we would usually take this as shorthand for $1 = 1\cdot 1$, which is provable.

  4. The correctness of a mathematical proof is not a matter of opinion, and $\frac10\ne\frac10$

In the ideal world, the correctness of a mathematical proof should not be a matter of opinion. In the real world, of course, reasonable people do disagree all the time about the correctness of proofs! But yes, I believe that in such a situation, either the two parties disagree about the standards for proof, or at least one of them is wrong.

$\frac{1}{0}\neq\frac{1}{0}$ is meaningless and has no truth value.

Alex Kruckman
  • I don't like this answer. It's a great answer, but now rather than having a technical problem which I might actually be able to solve, my new problem is "experts sometimes say and/or believe things which are not actually true," and "the literature of mathematics is riddled with inconsistencies, because authors are human." I don't know how to deal with that! – R. Burton May 13 '22 at 17:56
  • @R.Burton I can assure you that experts frequently say and believe things that are not true, and the mathematical literature is chock full of mistakes. Fortunately most (but not all!) of these mistakes are minor and easily fixable and will not propagate to other work. But if you think these mistakes and inconsistencies are about issues like the truth value of $\frac{1}{0}=\frac{1}{0}$, you are looking at totally the wrong scale. – Alex Kruckman May 13 '22 at 18:32
  • I'm not sure whether you saw my answer, but it sketches one approach that goes beyond FOL and uses guarded conjunction and implication in order to capture correctly what are meaningful statements. In that approach, "$1/0$" is indeed an invalid expression (in the global context) because we will typically define $/$ to have domain $ℂ×ℂ_{≠0}$ and we will never be able to prove $0∈ℂ_{≠0}$. There is still a catch though. In a subcontext in which we prove a contradiction (e.g. in a proof by contradiction), we can prove $0∈ℂ_{≠0}$, so we would be able to prove things like $2 = 1/0$. Just too bad. =P – user21820 May 16 '22 at 18:38
2

If you want to axiomatize real analysis in standard FOL, and be able to use the division symbol, you have no choice but to accept that the following is a true sentence:

$∀x{∈}ℝ\ ( \ x ≠ 0 ⇒ 1/(1/x) = x \ )$.

I'm sure you agree with that. Notice that when you read the sentence it makes sense and (seems to) only involve division by non-zero reals. But the sentence is trivially equivalent to:

$∀x{∈}ℝ\ ( \ 1/(1/x) ≠ x ⇒ x = 0 \ )$.

Do you like this? No? Then you cannot stay within standard FOL. However, that doesn't mean that you should just give up standard FOL completely. In fact, you would have great trouble with all of mathematics if you refuse to accept the standard approach taken by most logicians in formalizing mathematics in FOL. Consider that you do not only want to talk about reals. How about:

$∀k{∈}ℕ\ ∃x{∈}ℝ\ ( \ 0 < x·(k+1) < 1 \ )$.

There is no division here, but there is actually the same issue hiding in the restricted quantifiers. Do you want "$∀x{∈}S\ ( \ Q(x) \ )$" to be equivalent to "$∀x\ ( \ x{∈}S ⇒ Q(x) \ )$" or not? This is standard, but if you say "yes" then you are forced to accept that the above sentence is equivalent to:

$∀k\ ∃x\ ( \ ( \ 0 < x·(k+1) < 1 ∨ k{∉}ℕ \ ) ∧ x{∈}ℝ \ )$.

There are two main approaches you can take to 'resolve' this. The first approach is to still stick to standard FOL and simply view each predicate/function-symbol as producing arbitrary output on objects outside of its intended domain. This is completely compatible with many-sorted FOL and on-the-fly definitorial expansion (see this post for the details), both of which are expedient (in some form or another) for any practically usable foundational system. Yes, you will be able to prove all the above sentences, but there is nothing surprising anymore once you understand that it is the nature of having simple syntax rules.
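
A rough model of this first approach, sketched in Python (the name `tdiv` and the choice of $0$ as the junk value are just for illustration; some proof assistants make the analogous choice for real division):

```python
def tdiv(x: float, y: float) -> float:
    """Total division in the spirit of the first approach: outside the
    intended domain (y = 0) we return an arbitrary fixed value, here 0."""
    return x / y if y != 0 else 0.0

# Under this reading the sentence  ∀x∈ℝ ( 1/(1/x) ≠ x ⇒ x = 0 )  is harmless:
# with 1/0 := 0 we even get 1/(1/0) = 0, so the antecedent never holds.
assert all(tdiv(1.0, tdiv(1.0, x)) == x for x in [-2.0, 0.0, 0.25, 4.0])
```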

The second approach is to restrict the well-formed terms and formulae in some manner, so that the second and fourth sentences above are invalid. All variants of this approach are complicated to the point that I will not give any details here. Note that mere syntactic restrictions are not enough. For instance, you generally want to be able to define a new predicate/function-symbol $F$ on the fly with intended domain being some set $S$, and you want to be able to write "$F(x)$" in any context where you have proven "$x{∈}S$". So such an approach is best carried out in a Fitch-style system where you can have relatively clean context-based syntax rules. $ \def\cimp{\mathbin{?{⇒}}} \def\cand{\mathbin{?∧}} $

In the second approach, if you do not want to mangle the standard meaning of "$⇒$", and want to be able to use PL tautologies in deduction, then you can instead add guarded implication/conjunction, which I shall denote by "$\cimp$" and "$\cand$" respectively. (These are related to short-circuit evaluation.) Then the first sentence must be written as:

$∀x{∈}ℝ\ ( \ x ≠ 0 \cimp 1/(1/x) = x \ )$.

The second and fourth sentences above are simply invalid. Of course, you need to have suitable deductive rules to govern the guarded operations. And in such a system "$∀x{∈}S\ ( \ Q(x) \ )$" would be equivalent to "$∀x{∈}obj\ ( \ x{∈}S \cimp Q(x) \ )$".
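
Since the guarded operations are related to short-circuit evaluation, a quick Python analogy may help (illustrative only): the guard is evaluated first, and the guarded part only has to be meaningful when the guard holds.

```python
xs = [-2.0, 0.0, 0.25, 4.0]

# Guarded version of  x ≠ 0 ?⇒ 1/(1/x) = x :  Python's `or` short-circuits,
# so the division is never attempted when the guard x ≠ 0 fails.
assert all(x == 0 or 1 / (1 / x) == x for x in xs)

# The unguarded formula is simply not usable here:
# all(1 / (1 / x) == x for x in xs)   # ZeroDivisionError at x = 0.0
```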

user21820
  • Note that logicians mostly use the first approach, and therefore if you want to know what they mean when they say division is definable over the theory of fields, you must understand it essentially as per the first approach. – user21820 May 09 '22 at 15:55
-1

The usual definition of division on $\mathbb{R}$ is simply not applicable when the divisor is $0$. The numerical values of such constructs are thus said to be undefined (not to be confused with false).

For all $x,y,z \in \mathbb{R}$, if $y\neq 0$, then $x\div y = z$ if and only if $x=yz$.

Examples

  1. Regardless of the values of $x$ and $z$, if $y=0$, we cannot apply this definition to determine whether or not $x\div 0=z$ is true. We can say that $x\div 0$ is undefined.

  2. If $x=0$ and $y=z=1$, then we can apply this definition to say that $0\div 1 = 1$ is false. We cannot say from this that $0\div 1$ is undefined.
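
Read this way, $x\div y$ is a partial function. A minimal Python sketch of that reading (the name `pdiv` and the use of `None` for "undefined" are just illustrative conventions):

```python
from typing import Optional

def pdiv(x: float, y: float) -> Optional[float]:
    """x ÷ y as a partial function: the unique z with x = y*z, which exists
    exactly when y != 0.  Returns None ('undefined') otherwise."""
    return x / y if y != 0 else None

print(pdiv(0.0, 1.0))  # 0.0  -- defined, so "0 ÷ 1 = 1" is a (false) statement
print(pdiv(1.0, 0.0))  # None -- undefined, so no truth value is assigned
```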