Exploring the Curry-Howard correspondence
This topic has been treated by many great sources such as this and that, but I’d like to give my take on it. The Curry-Howard correspondence is a powerful idea that states that there exists an equivalence between programs and proofs. In other words, there is an isomorphism between programs and proofs, so we can read off a proof from a program and vice versa. This raises the question: how do we go about proving theorems with programs? We can start by finding the connection between propositional logic and functions.
Types are formulas and programs are proofs.
Therefore, a formula is valid iff there exists a function (program) with the corresponding type.
So how do we construct the correspondence between formulas and types? It’s not immediately apparent, but the big idea is that there exists a connection between validity and the inhabitants of a type: the values it describes. Thus, we can use the following rule to define the relationship.
A type p represents a valid formula iff p is inhabited.
We can then define the correspondence between formulas constructed with logical connectives and types.
- Falsities are ⊥, corresponding to the uninhabited type.
- Negation is a function p → ⊥. Given an inhabitant of p (a valid formula), such a function would produce an inhabitant of ⊥, a falsity; so (¬)p is inhabited exactly when p is not.
- Disjunctions are sum types/tagged unions. p ∨ q can be represented by Either p q, which can take on values/inhabitants of either Left p or Right q, capturing the behavior of disjunctions.
- Conjunctions are product types. Since the tuple (p, q) requires the presence of values of both types p and q, tuples describe conjunctions.
- Implications are abstractions. A function p → q requires the presence of p for the presence of q, adequately describing conditionals.
Note that some semantics of natural deduction, namely the rules of introduction that allow us to derive valid formulas, are encoded in the construction of these types. For example, introducing a conjunction between p and q requires both formulas to be valid, which is evident in the construction of (p, q): it requires the presence of both types p and q. The rules of elimination, however, still need to be proven manually, but they are trivial. For example, we can perform left and right elimination on a conjunction by extracting either element of the tuple and noting that it is present. Now we can go straight to writing proofs of basic and derived argument forms. But first, let’s define some useful type synonyms corresponding to the five rules above.
```haskell
{-# LANGUAGE TypeOperators #-}

module NaturalDeduction where

data (⊥) -- no possible inhabitants

type (¬) p = p -> (⊥)
type (∧)   = (,)
type (∨)   = Either
type (→)   = (->)
```
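To see why the elimination rules are trivial, here is a sketch of conjunction and disjunction elimination as ordinary projections and case analysis. The function names are mine, and the synonym definitions are repeated from the module above so the snippet compiles on its own.

```haskell
{-# LANGUAGE TypeOperators #-}

-- Repeated from the NaturalDeduction module so the snippet stands alone.
type (∧) = (,)
type (∨) = Either
type (→) = (->)

-- Left and right conjunction elimination: project out either component.
conjElimL :: p ∧ q → p
conjElimL = fst

conjElimR :: p ∧ q → q
conjElimR = snd

-- Disjunction elimination: if r follows from p and from q, it follows from p ∨ q.
disjElim :: ((p → r) ∧ (q → r)) → (p ∨ q → r)
disjElim (φ, ψ) = either φ ψ
```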
Let’s prove a few theorems. Consider the trivial modus tollens, which states that (p → q) ∧ ¬q → ¬p. The corresponding program in Haskell is the following.
```haskell
modusTollens :: (p → q) ∧ (¬)q → (¬)p
modusTollens (φ, ψ) = ψ . φ
```
This reads a bit strangely, but we’re essentially saying that given an inhabitant of p, we can derive a contradiction: applying φ produces an inhabitant of q, while ψ states not q at the same time. Since this is a well-typed, proper function, we have proven modus tollens.
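For comparison, modus ponens, (p → q) ∧ p → q, has an even shorter proof: it is function application itself. A sketch (the name modusPonens is mine; the synonyms are repeated so the snippet stands alone):

```haskell
{-# LANGUAGE TypeOperators #-}

-- Repeated from the NaturalDeduction module so the snippet stands alone.
type (∧) = (,)
type (→) = (->)

-- Modus ponens: given a proof of p → q and a proof of p, apply one to the other.
modusPonens :: (p → q) ∧ p → q
modusPonens (φ, x) = φ x
```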
But what about something more fundamental, like the law of the excluded middle? It turns out that it is impossible to prove axioms, because they are, by definition, assumed to be true; indeed, the excluded middle has no proof in the constructive logic that Haskell’s types model. To get around this, we can define an infinitely recursive function true that can be assigned any type. As a result, we can define laws and axioms as true so that their type signatures are proper, and therefore the formulas to which they correspond are valid.
```haskell
true :: truth
true = true

exclMid :: p ∨ (¬)p
exclMid = true
```
This is of course not a carte blanche to mark theorems you are too lazy to prove as true, but a nice way of introducing axioms into the system.
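The same trick introduces any other classically valid formula that lacks a constructive proof. Peirce’s law is a standard example; a sketch (the name peirce is mine, and true is repeated from above so the snippet stands alone):

```haskell
{-# LANGUAGE TypeOperators #-}

type (→) = (->)

-- Repeated from above so the snippet stands alone.
true :: truth
true = true

-- Peirce's law: ((p → q) → p) → p is classically valid but constructively
-- unprovable, so we admit it as an axiom via the non-terminating true.
peirce :: ((p → q) → p) → p
peirce = true
```

Note that, like exclMid, this "proof" diverges if evaluated; it only lends its type signature to the system.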
So let’s try something harder, like the transposition theorem: (p → q) ↔ (¬q → ¬p). Since this involves a biconditional, we can introduce this as a type synonym. Furthermore, the proof requires applications of modus tollens and double negation, allowing us to compose proofs based on previous proofs of theorems.
```haskell
type p ↔ q = (p → q) ∧ (q → p)

dblNeg :: p ↔ (¬)((¬) p)
dblNeg = (flip ($), true) -- ($) is actually modus ponens

transposition :: (p → q) ↔ ((¬)q → (¬)p)
transposition =
  (curry modusTollens, \c p -> -- c = contrapositive
    undoDblNegate $ curry modusTollens c $ dblNegate p)
  where (dblNegate, undoDblNegate) = dblNeg
```
Proving double negation works in the forward direction by applying negation and later undoing it via function application, but the backward direction cannot be constructed, since we can’t “undo” lambda abstraction; we have to accept it as an axiom via true. To prove transposition in the forward direction, we simply apply modus tollens. In the backward direction, we apply modus tollens to (¬q → ¬p) and the double negative of p, then undo the double negation to get the correct result.
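Composing proofs like this scales to other derived argument forms as well. For instance, hypothetical syllogism, (p → q) ∧ (q → r) → (p → r), is simply function composition (a sketch; the name is mine, and the synonyms are repeated so the snippet stands alone):

```haskell
{-# LANGUAGE TypeOperators #-}

-- Repeated from the NaturalDeduction module so the snippet stands alone.
type (∧) = (,)
type (→) = (->)

-- Hypothetical syllogism: from p → q and q → r, derive p → r by composing.
hypSyllogism :: (p → q) ∧ (q → r) → (p → r)
hypSyllogism (φ, ψ) = ψ . φ
```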
In short, we’ve effectively created a system to prove theorems (automated theorem provers, anyone?) by exploiting the Curry-Howard correspondence. The big takeaway is that we can prove theorems by encoding their logic in the type signatures of functions and can actually go about proving them by manipulating inhabitants. A gist of the source code can be found here.
P.S. A great exercise would be to extend this system to accommodate universal quantification.