
In this discussion I asked people about applications of intuitionistic logic, and one of the participants of this forum, HallaSurvivor, told me that there are applications in programming.

I am a mathematician, and I have only a very vague impression of programming, so it's difficult for me to understand the details. Can anybody explain to non-specialists, like me, the idea behind the use of intuitionistic logic in programming? Why is this logic useful there? (As far as I understand, programmers consider intuitionistic logic even more useful than classical logic, so I suppose there must be more or less simple explanations of why they think so.)

I have a conjecture that if we restrict ourselves to computations, i.e. to propositions like $f(a)=b$, then negation throws us out of the class of propositions we are interested in, and that is why the law of the excluded middle is not considered acceptable by programmers. Am I right? Or are there more precise explanations?

1 Answer


I think most of this confusion is coming from a deeper confusion about how programs-as-proofs works, so I'll discuss that. At the end, it will hopefully be clear why we get intuitionistic logics in this way, unless we're actively trying to get something classical.

The idea is that our types should correspond to "propositions" of interest, and then programs inhabiting those types are "proofs" that that proposition is true. Let's start small and work our way up. As a simple example, let's see a proof that $A \to A$.

We literally want to write a function which takes in a proof $a$ of $A$ (which we write as $a : A$) and then outputs a proof of $A$. If we use syntax from lambda calculus, we have

$$ \lambda (a : A) . a \ \ : A \to A$$

This is the function which takes in $a$ as input, and just... outputs $a$. So we've successfully turned a proof of $A$ into a proof of $A$ (which wasn't very hard). But notice that this proof itself has a type. So it is a proof that $A \to A$, and we know that $A \to A$ is a true proposition.
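As a concrete sketch, the same term can be written in Haskell, where polymorphic type variables play the role of propositions (the name `identityProof` is my own):

```haskell
-- A proof of A -> A: a program inhabiting the type a -> a.
-- This is just the polymorphic identity function.
identityProof :: a -> a
identityProof x = x
```

The fact that this definition typechecks is exactly the fact that $A \to A$ is provable.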

Let's get a little bit fancier. Can we show $A \land B \to A$? In programming languages we typically write $\times$ instead of $\land$ for types, but they're the same thing.

We take in a pair of proofs $(a,b) : A \times B$. We want to output a proof of $A$. I think it should be clear how to do this:

$$ \lambda \left ( (a,b) : A \times B \right ) . a \ \ : A \times B \to A.$$
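In Haskell this proof is one line (a sketch; `projectFirst` is my own name for what the standard library calls `fst`):

```haskell
-- A proof of A ∧ B -> A: project out the first component of the pair.
projectFirst :: (a, b) -> a
projectFirst (x, _) = x
```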

Let's try De Morgan's laws. If $\mathbb{0}$ is the empty type, we associate it with "false", because there's nothing inside it, so it is unprovable. Then we think of $\lnot A$ as an abbreviation for $A \to \mathbb{0}$, since if $A$ can prove false, we have a problem. Notice that this idea of defining $\lnot A$ in terms of $A$ and false is already an intuitionistic thing to do.

But now, let's prove $\lnot (A \lor B) \to \lnot A \land \lnot B$. Again, we typically write $+$ instead of $\lor$ (it's a sum type) and $\times$ instead of $\land$.

We take in a proof (or a program) $f : (A + B) \to \mathbb{0}$. We want to spit out a pair of proofs $g_A : A \to \mathbb{0}$ and $g_B : B \to \mathbb{0}$. But that's easy to do! Since our input $f$ can take either an $A$ or a $B$ as input, we're golden!

$$ \lambda \left ( f : (A + B) \to \mathbb{0} \right ) . \left ( \lambda (a : A) . f (\mathtt{inl}\ a), \ \lambda (b : B) . f ( \mathtt{inr}\ b) \right ) \ \ : ((A + B) \to \mathbb{0}) \to (A \to \mathbb{0}) \times (B \to \mathbb{0}) $$

With our abbreviation, this program has type $\lnot(A + B) \to \lnot A \times \lnot B$.
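This proof transcribes almost verbatim into Haskell (a sketch; the names `Not` and `deMorgan` are my own, using `Data.Void` for the empty type and `Either` for the sum type, with `Left`/`Right` playing the role of $\mathtt{inl}$/$\mathtt{inr}$):

```haskell
import Data.Void (Void)

-- The empty type Void plays the role of "false";
-- negation is a function into it.
type Not a = a -> Void

-- A proof of ¬(A ∨ B) -> ¬A ∧ ¬B: feed each injection to f.
deMorgan :: Not (Either a b) -> (Not a, Not b)
deMorgan f = (\a -> f (Left a), \b -> f (Right b))
```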

I'm mainly talking about how programs can be interpreted as proofs, since you mentioned you're more a mathematician than a programmer. But the correspondence goes both ways. Say you want to prove that your programming language has nice properties. It's helpful to mathematically formalize how your programming language behaves, and we do this with the language of (intuitionistic) logic, with operational semantics.


Now! Notice the running theme in each of these cases. How are we able to prove a proposition $A$ is true? We have to build a program which inhabits its type. In fact, this is the only way to prove a proposition. But there's a reason to care about this! Now all of our proofs have "computational content". You can imagine having a proof of some complicated implication $A \to B$. If we do this constructively (that is, intuitionistically), then if you give me an $a : A$, I know how to actually convert it into a $b : B$. That's really cool!

If you think about Brouwer's fixed point theorem, it tells us we have an implication

$$ \{ f : D^2 \to D^2 \} \to \{ x : D^2 \mid f(x) = x \} $$

but, rather aggravatingly, the classical proof of this fact doesn't tell you how to find the $x$. It tells you that something exists, but gives you no idea how to get your hands on it. Intuitionistically, this kind of non-constructive proof is impossible. Every proof of an existence claim has to actually witness the object in question by constructing it. This is great, because an intuitionistic proof of Brouwer's fixed point theorem is basically an algorithm: it takes in a continuous function $f : D^2 \to D^2$ and spits out a point $x$ fixed by $f$. So if we ever have a real-life function $f$ we're interested in, we can evaluate the proof at that input and it will actually hand us a fixed point in return.


Now. Why is this all intuitionistic? What does this have to do with $\mathsf{LEM}$ or $\mathsf{DNE}$?

I think it's slightly clearer with $\mathsf{DNE}$, so let's use that. $\mathsf{DNE}$ tells us that, for every $A$, there's something inhabiting $\lnot \lnot A \to A$.

But what does that mean? It means there's a function

$$\mathtt{dne}_A : ((A \to \mathbb{0}) \to \mathbb{0}) \to A$$

and I encourage you to try to write one. The problem is that knowing $\lnot A$ is false doesn't furnish us with a proof of $A$! There's no way to build a term $a : A$ from our input $f : (A \to \mathbb{0}) \to \mathbb{0}$, and if you use Heyting algebras you can prove that no such term can possibly exist.
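For contrast, here is a sketch in Haskell (names my own, using `Data.Void`) of the direction that *is* intuitionistically provable, double negation introduction, next to the direction that gets stuck:

```haskell
import Data.Void (Void)

type Not a = a -> Void

-- A -> ¬¬A is easy: given a : A and a refutation f : ¬A,
-- apply f to a to reach the empty type.
dni :: a -> Not (Not a)
dni a = \f -> f a

-- ¬¬A -> A has no implementation: from f :: Not (Not a) alone
-- there is no way to manufacture a value of type a.
-- dne :: Not (Not a) -> a
-- dne f = ...   -- stuck, and provably so
```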

Now, there's nothing stopping you from saying "oh, I actually want to put a term $\mathtt{dne}_A : \lnot \lnot A \to A$ into my programming language as a primitive!" You can totally do that, and plenty of people have. The difficulty is that it doesn't compute anything. How do you interpret a term of that type if you're a programmer? What does it do?

In my mind, every answer to this question is a bit contrived. The best we have is continuations, which you can read more about here, for instance.

If you want to read more about this stuff as a mathematician, you should try the lecture notes from CMU's class on constructive logic (15-317). That's where I learned a lot of this, and the notes are available online.

If you want to read more about this stuff as a programmer, or rather someone who builds programming languages, you should read Harper's Practical Foundations for Programming Languages. Since I'm linking old classes, I learned a lot of this stuff in 15-312 (which is taught by Harper), and you can find lecture notes here.


I hope this helps ^_^

HallaSurvivor
  • This is a wonderful answer! – Shaun Jun 07 '21 at 18:05
  • @Shaun <3 – HallaSurvivor Jun 07 '21 at 18:07
  • Nice answer. I would strongly recommend Sørensen and Urzyczyn's book Lectures on the Curry Howard Isomorphism. – Rob Arthan Jun 07 '21 at 22:28
  • Halla, I must confess that I don't see in this picture any objective obstacles to things like LEM and DNE in programming. If, as you say, I can add terms as primitives, in particular this $\operatorname{\sf dne}$, then the impression is that this is entirely a question of interpretation, not a problem in programming. Is that correct? – Sergei Akbarov Jun 08 '21 at 07:21
  • I have seen programs that construct deductions (as Gentzen trees) for formulas in classical propositional calculus (and verify the deducibility of these formulas), and there were no problems for them with LEM and DNE. So, as far as I understand, the choice of whether we use LEM and DNE or not is a voluntary decision of the user; it is not dictated by internal problems of programming. – Sergei Akbarov Jun 08 '21 at 07:30
  • Actually, the very idea of using $\lambda$-calculus looks strange to me. I would expect that Gentzen's sequent calculus would be much simpler and more useful for this class of problems. – Sergei Akbarov Jun 08 '21 at 08:02
  • @SergeiAkbarov your last comment I think gets at the heart of the disconnect. We're not using λ-calc for theorem proving (as you mention sequent calculus is better for that). We're just trying to describe some set of λ terms as functions (on its own the untyped λ-calculus doesn't necessarily have anything to do with functions, it's just a term algebra modulo some equivalences). The arrow in a type is not implication, it's a generalization of implication to functions between arbitrary domains. DNE doesn't generalize to arbitrary domains because we can't write that $\mathtt{dne}_A$ function. – GeoffChurch Mar 22 '22 at 19:48
  • We can interpret the usual propositional calculus type-theoretically in a universe with exactly 2 types: () (true) and Void (false), where () is the unit type with a single eponymous inhabitant (), and Void is the empty type. Then DNE holds because we can implement $\mathtt{dne}_A$ (in e.g. Haskell) like this. – GeoffChurch Mar 22 '22 at 20:26
  • Basically, there's of course no obstacle to writing a theorem prover for e.g. classical propositional calculus with DNE/LEM. It becomes a problem in the propositions-as-types interpretation, where we want negation and implication to mean something way more general when we're proving theorems about programs. – GeoffChurch Mar 22 '22 at 20:43