There exists a Theory of Identity in mathematical logic. I first encountered
it in
Principia Mathematica
by Alfred North Whitehead and Bertrand Russell (1910).
Quote: "This definition states that $x$ and $y$ are to be called identical when
every predicative function satisfied by $x$ is also satisfied by $y$".
Many contemporary philosophers call the principle which expresses this view
"Leibniz' Law".
One particularly explicit statement can be found in
Introduction to Logic and to the Methodology of Deductive Sciences by
Alfred Tarski.
In Chapter III, On the Theory of Identity, we read:
"Among logical laws which involve the concept of identity, the most fundamental
is the following: $x = y$ if, and only if, $x$ and $y$ have every property in
common. This law was first stated by Leibniz (although in somewhat different terms)."
Tarski does not provide a reference to the place where, according to
him, Leibniz stated that law. Further refinements can be found on the
Internet.
But for our purposes it is sufficient to stick to the original definition,
as given in the Theory of Identity by Tarski and by Russell and Whitehead:
$$
(x = y) :\Longleftrightarrow
\left[\;\forall P : P(x) \Longleftrightarrow P(y) \;\right]
$$
where $:\Longleftrightarrow$ means "logically equivalent by definition".
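To get a feel for how the definition is used, note that one direction already follows by instantiating the universally quantified $P$ with a property of our own choosing, say $P(z) :\Longleftrightarrow (x = z)$:
$$
\left[\;\forall P : P(x) \Longleftrightarrow P(y) \;\right]
\;\Longrightarrow\;
\left[\; (x = x) \Longleftrightarrow (x = y) \;\right]
\;\Longrightarrow\;
(x = y) ,
$$
since $x = x$ holds trivially. The converse direction is just substitution: if $x$ and $y$ are the same object, any property satisfied by one is satisfied by the other. The whole weight of the definition therefore lies in which properties $P$ we are willing to admit.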
Let's try something with that definition. Every property
in common, they said. We take that quite literally and define, for example:
$$
P(x) :\Longleftrightarrow
\left( x \ \mbox{is on the left of the "$=$" sign} \right)
$$
With this property in mind, consider the expression:
$$
1 = 1
$$
Then we see that the $1$ on the right in $1 = 1$ is not on the left, hence the
property $P$ as defined does not hold for that occurrence of $1$. Consequently: $1 \ne 1$.
We have run into a paradox.
Oh, you might say, but self-referential properties are of course not allowed.
Sure, I am the last one to disagree with you. This highly artificial example
stresses an important point, though:
- With Leibniz's Law, almost any property may be taken into account, but not literally all of them.
Therefore consider the decimal representation of numbers and define the following properties:
$$
P_{c,k}(x) \; :\Longleftrightarrow \;
\mbox{"the digit at position $k$ in the decimal representation of $x$ is $c$"}
\\ \mbox{where} \quad c \in \{0,1,2,3,4,5,6,7,8,9\}
$$
We have the two (in)famous numbers, as announced in the header:
$$
1.000\ldots \quad \mbox{and} \quad 0.999\ldots
$$
Indeed, there exist numerous proofs of the following statement (e.g. on Wikipedia):
$$
1.000\ldots = 0.999\ldots
$$
However, with the properties $P_{c,k}$ at hand, the following statement is now easy to prove as well, where $\stackrel{A}{=}$ denotes identity in the Leibniz sense defined above (agreement on all properties). So we have run into some sort of a paradox:
$$
\neg \left[\, 1.000\ldots \stackrel{A}{=} 0.999\ldots \,\right]
$$
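For completeness, the classical identity follows, for instance, from the geometric series,
$$
0.999\ldots \;=\; \sum_{k=1}^{\infty} 9 \cdot 10^{-k}
\;=\; 9 \cdot \frac{10^{-1}}{1 - 10^{-1}} \;=\; 1 ,
$$
while the negation is witnessed by a single distinguishing property. Reading "position $k$" as the $k$-th digit after the decimal point and picking $c = 9$, $k = 1$ for concreteness: the first decimal of $0.999\ldots$ is $9$, whereas the first decimal of $1.000\ldots$ is $0$, so
$$
P_{9,1}(0.999\ldots) \;\wedge\; \neg\, P_{9,1}(1.000\ldots)
\;\Longrightarrow\;
\neg \left[\;\forall P : P(1.000\ldots) \Longleftrightarrow P(0.999\ldots)\;\right]
\;\Longrightarrow\;
\neg \left[\, 1.000\ldots \stackrel{A}{=} 0.999\ldots \,\right] .
$$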
Maybe "common" equality in mathematics is not Leibniz' equality ? But how can that be?
Hasn't equality been rigorously defined with Russel's / Tarski's Theory of Identity ?
Or maybe, is there a difference between identity and equality in mathematics ?
Should $\equiv$ and $\stackrel{A}{=}$ be identified perhaps ? And is the following statement true then: $$ 1.000... = 0.999... \qquad \mbox{but} \qquad 1.000... \not \equiv 0.999... $$