I'm working my way through an introduction to lambda calculus, and it seems to start from the premise that TRUE and FALSE (as well as everything else) can be encoded as functions. From this, they define these two functions as:
\begin{align} T & = \lambda a b.a\\ F & = \lambda a b.b \end{align}
Now, I'm not a genius, but it seems like this looks less like 'booleans' and more like 'first' and 'second' (or left and right, if looking at the equation).
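To make the "first"/"second" reading concrete, here is a small sketch of the two definitions in Haskell (the names true, false, and ifThenElse are mine, purely for illustration):

```haskell
-- Church booleans as two-argument selectors,
-- transcribed directly from T = \a b. a and F = \a b. b.
true :: a -> a -> a
true a _ = a    -- always returns its first argument

false :: a -> a -> a
false _ b = b   -- always returns its second argument

-- "if-then-else" is then just application: the boolean itself
-- chooses which branch to return.
ifThenElse :: (a -> a -> a) -> a -> a -> a
ifThenElse cond thenBranch elseBranch = cond thenBranch elseBranch

main :: IO ()
main = do
  print (true  50 12)                 -- 50, i.e. "first"
  print (false 50 12)                 -- 12, i.e. "second"
  putStrLn (ifThenElse true "yes" "no")  -- "yes"
```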
Is this just an agreed-upon convention, or how is that allowed? For example, what if I had a function that passed in the value 0 or False (or low voltage, or whatever represents zero in the physical world)? Some examples:
\begin{align} & False=0 \\ & True=1 \\ & T(False)(False) \\ & T(0)(0) \\ & T(50)(12) \end{align}
Or, when passing more than two values to the function, how could one tell whether it is True or False?
$$T(2)(3)(4)(5)(6)(7)$$
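As far as I can tell, β-reducing these (with application associating to the left) just selects the first argument and then applies it to whatever is left over:
\begin{align} T(50)(12) & = (\lambda a b.a)(50)(12) \to 50 \\ T(2)(3)(4)(5)(6)(7) & = \big((\lambda a b.a)(2)(3)\big)(4)(5)(6)(7) \to 2(4)(5)(6)(7) \end{align}
so the result in the last case is just 2 applied to the remaining arguments, which isn't obviously a boolean at all.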
Maybe I'm missing the boat, but basically I'm curious why what seems to me like such an arbitrary definition of booleans would be allowed in lambda calculus, other than as a convenience where nothing else exists.
The same scheme encodes other data types: for the list type with constructors cons and nil, cons x (cons y nil) gets encoded as $\lambda a b.a~x~(a~y~b)$; for the natural type (in unary) with constructors S (successor) and O (zero), $3 = S (S (S O))$ gets encoded as $\lambda a b.a~(a~(a~b))$; for the option or Maybe type with constructors Some and None, Some x gets encoded as $\lambda a b.a~x$ and None gets encoded as $\lambda a b.b$; etc. – Daniel Schepler May 17 '21 at 22:06
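To see the pattern in that comment more concretely, here is a rough Haskell transcription (the names listXY, three, some, and none are mine, chosen only for illustration):

```haskell
-- Church encodings from the comment: a value of a data type becomes a
-- function taking one argument per constructor.

-- The two-element list  cons x (cons y nil):
--   a plays cons, b plays nil.
listXY :: x -> x -> (x -> b -> b) -> b -> b
listXY x y a b = a x (a y b)

-- The unary natural  3 = S (S (S O)):
--   a plays S (successor), b plays O (zero).
three :: (b -> b) -> b -> b
three a b = a (a (a b))

-- The option/Maybe type:
--   a plays Some, b plays None.
some :: x -> (x -> b) -> b -> b
some x a b = a x

none :: (x -> b) -> b -> b
none a b = b

main :: IO ()
main = do
  print (listXY 1 2 (+) 0)          -- 3: fold the list [1,2] with (+) and 0
  print (three (+1) 0)              -- 3: apply the successor (+1) three times to 0
  putStrLn (some (5 :: Int) show "none")  -- "5"
  putStrLn (none show "none")             -- "none"
```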