
Definition 2.2.1 (Addition of natural numbers).

Let $m$ be a natural number. To add zero to $m$, we define $0 + m := m$. Now suppose inductively that we have defined how to add $n$ to $m$. Then we can add $n++$ to $m$ by defining $(n++) + m := (n + m)++$.
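For example, unwinding the definition step by step: $$2+1 = (1++)+1 = (1+1)++ = ((0++)+1)++ = ((0+1)++)++ = (1++)++ = 2++ = 3\,.$$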

My question is: Is addition of natural numbers defined just out of nowhere, so that we don't need to prove whether this definition is true or false? I mean, why can't we just define $(n++) + m := ((n+m)++)++$ instead?

Blue
  • 75,673
Andrew Li
  • 431

3 Answers

2

In principle you can define any notion however you want, as long as the definition meets the formal criteria for making sense (for instance, every symbol used must have been introduced beforehand, and must be used in accordance with its introduction), and as long as you are willing to live with the consequences.

However, in the case of addition of natural numbers, most people will recall from primary school (or before) that this operation was introduced for a particular purpose in the context of counting. Natural numbers serve as a property of certain (finite) sets, namely their number of elements (or cardinality). Addition of natural numbers is then supposed to say something about the number of elements in disjoint unions, namely:

Whenever $n$ and $m$ are natural numbers, $X$ is a set with $n$ elements and $Y$ is a set with $m$ elements, and the intersection $X\cap Y$ is the empty set, then the union $X\cup Y$ has $n+m$ elements.

So anyone proposing a definition of addition in $\Bbb N$ is under the moral obligation to prove that this property holds with their definition of $n+m$. If this cannot be proved, then their operation will most likely not have the properties of addition that everybody knows and loves, and will therefore only cause considerable confusion.
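For instance, take $X = \{a, b\}$ with $2$ elements and $Y = \{c\}$ with $1$ element. Then $X \cap Y = \varnothing$, and $X \cup Y = \{a, b, c\}$ has $3$ elements, so any acceptable definition of $+$ must return $3$ for $2+1$.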

0

You could define addition by $(n++)+m := ((n+m)++)++$, but it wouldn't capture the notion you are trying to define (because then $1+0 = ((0+0)++)++$ would be $2$, for example).

When "inventing" a definition you're trying to give a precise representation of an idea of something you already know what it is.

jjagmath
  • 18,214
0

The answer to your question depends on the formal language and axiom system you are using.

But most of the time, no, you don't need to verify that a definition is true or valid. There are two things that can happen:

  1. Your definition does not do what you expect.
  2. Your definition cannot be formalized in your formal system.

Your alternative "wrong" definition of $+$ falls under case 1. There is no mathematical problem; you've just defined an operation that does not correspond to our intuitive understanding of addition. But there is no way to formally verify that something corresponds to our intuitive understanding. Mathematicians simply agree that the first definition of addition you gave is the one we want.

In case 2 there would be a genuine mathematical problem: what needs to be verified, in principle, is that the thing you are defining can indeed be formally defined.

Say you want to define $a$ to be $\{1, 2\}$. Then there is nothing to be verified. Formally, what is happening is that you add a new symbol "$a$" to the formal language, together with the axiom that $a = \{1, 2\}$. This is called an "extension by definitions", and it is a (meta-)theorem that an extension by definitions does not enable you to prove anything new; it just makes the language more expressive. The extension is "conservative": https://en.wikipedia.org/wiki/Conservative_extension

Now, the example you give is a little more complicated. We are not just defining something to be something else; we are defining many things at the same time, and it is not obvious whether all those definitions are consistent with each other. For them to be consistent (and even just to be able to define many things at once) we need some properties of $\mathbb N$, and here things depend very much on the formal system you use.

Let's say your formal system is ZFC. Then we look at https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers: $\mathbb N$ is defined as the "smallest" set containing $\varnothing$ and closed under the operation $n \mapsto n \cup \{n\}$. Making "smallest" precise is complicated, but it can be done using the axiom of infinity, the axiom of power set and the axiom schema of separation: we start with a set $X$ containing $\varnothing$ and closed under that operation, and then take the intersection of all elements of $P(X)$ with the same property: $$\mathbb N = \{ n \in X : \left( \forall S \in P(X) : \left( (\varnothing \in S) \wedge (\forall m \in S : (m \cup \{m\}) \in S ) \right) \implies n \in S\right) \} \,.$$

The next question is how to define the function $+ : \mathbb N \times \mathbb N \to \mathbb N$. This depends on the implementation of functions, but according to one implementation it should be a subset of $(\mathbb N \times \mathbb N) \times \mathbb N$. You can then run a complicated proof by induction (induction itself being a theorem about $\mathbb N$ that needs to be proven) to show that there exists a unique subset $A$ of $(\mathbb N \times \mathbb N) \times \mathbb N$ with the following properties:

  • $\forall n, m \in \mathbb N: \exists! s \in \mathbb N : ((n, m), s) \in A$.
  • $\forall m \in \mathbb N: ((0, m), m) \in A$.
  • $\forall n, m, s \in \mathbb N: ((n, m), s) \in A \implies ((n++, m), s++) \in A$.

Finally, you can make an extension by definitions that allows you to write $n+m$ with the axiom that $n+m = s \iff ((n, m), s) \in A$.
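For instance, the three clauses together force $1 + m = m++$ for every $m$: the second clause gives $((0, m), m) \in A$, the third then gives $((0++, m), m++) \in A$, and the defining axiom turns this into $(0++) + m = m++$.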

In other formal systems the definition is much easier, such as in the calculus of inductive constructions: https://en.wikipedia.org/wiki/Calculus_of_constructions
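For example, Lean 4, whose foundation is a variant of the calculus of inductive constructions, accepts the two recursive clauses essentially verbatim; the recursion principle that justifies them comes for free with the inductive type. A sketch (the name `add` is chosen here to avoid clashing with the built-in addition):

```lean
-- Lean's Nat is an inductive type with constructors Nat.zero and Nat.succ.
-- Definition 2.2.1 can be transcribed directly:
def add : Nat → Nat → Nat
  | Nat.zero,   m => m                   -- 0 + m := m
  | Nat.succ n, m => Nat.succ (add n m)  -- (n++) + m := (n + m)++

example : add 2 3 = 5 := rfl  -- sanity check, computed by the kernel
```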

Bart Michels
  • 26,355