Disclaimer: I do eventually remark about unsoundness.

Colors blunderingly plucked from Ethan Schoonover’s style.css .

Full code in this gist.

Edit 10:52 Central 11 Oct 2012: added mention of kamatsu’s proof.

I’m going to do some proving in Haskell in this post using singleton types. There are no new fundamentals in here, but I’ve not seen much demonstration of these techniques. As I understand it, there’s a research agenda (with a strong UPenn contingent) to enrich Haskell’s type system to the point where translation from a vanilla Agda development to Haskell would be mechanical. I’m guessing that the code I’m sharing here is a very human-readable version of what such a tool from the future might output from a student’s first Agda homework assignment of the semester.

This post is self-contained, but it relies heavily on concepts that also live in at least a couple of Hackage packages that I’m aware of. Richard Eisenberg’s singletons package and his Haskell 2012 paper with Stephanie Weirich provide tremendously convenient support for dependently-typed programming with singleton types. In this post, I’m going to focus more on proving things, and I use singleton types to do so. I remark below on how this sort of proving can help with programming.

The other package that has a lot of related content is Edward Kmett’s constraints package. In particular, my evidence for implications is largely isomorphic to Kmett’s \\ function.

OK, on with the show.

A Framework for Constructive Type Theory in Haskell

One of the fundamental lessons from Harper’s lectures at this year’s Oregon Programming Language Summer School was the coarse difference between classical and constructive logic. Classical logicians only care about provability, whereas constructive logicians are also interested in evidence. In this post, I’m modeling each proposition as a Constraint and its proof as a dictionary. Actually, I’m treating a proof as an implication with a trivial antecedent: the () Constraint. The evidence for an implication is a function that squeezes a dictionary for the consequent’s Constraint out of a dictionary for the antecedent’s Constraint. For ease of use, I’m modeling dictionaries via rank-2 continuations instead of the Dict GADT.

type p :=> q = forall a. p => (q => a) -> a

type Proof p = () :=> p

That’s really all of the CTT fundamentals we’ll need; GHC’s type rules for constraints take care of the rest.
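To make the encoding concrete, here is a small example of my own (not from the post) that exercises :=> and Proof with standard classes: Show Int holds trivially, and Ord b entails Eq b via its superclass.

```haskell
{-# LANGUAGE ConstraintKinds, RankNTypes, TypeOperators #-}

type p :=> q = forall a. p => (q => a) -> a
type Proof p = () :=> p

-- A trivially provable proposition: the Show Int dictionary is
-- globally available, so the continuation's context is satisfied.
showInt :: Proof (Show Int)
showInt k = k

-- Evidence for a genuine implication: every Ord dictionary carries
-- an Eq dictionary, so the consequent's context is satisfiable.
ordImpliesEq :: Ord b :=> Eq b
ordImpliesEq k = k
```

For instance, showInt (show (42 :: Int)) evaluates to "42", with the Show Int dictionary flowing through to the continuation.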

Just to be clear, this is just one methodology of “proving” in Haskell. Here’s a different proof by kamatsu of the ultimate theorem discussed in this post. My motivation here is to show some techniques for “thinly” veiled dependent type theory in Haskell.

Naturals

We’re ultimately going to prove that addition on naturals is commutative, so we need to put some familiar pieces on the board.

data Nat = Z | S Nat

data Nat1 :: Nat -> * where
  Z1 :: Nat1 Z
  S1 :: Nat1 n -> Nat1 (S n)

Nat is a datatype, but we’ll be using it as a type via promotion. Nat1 is the singleton type for Nat. Its key property as a singleton type is that a value of type Nat1 n fully determines the type n (and vice versa). Since the value determines the type, this gives us leverage for dependent typing. Again, see the Eisenberg and Weirich paper for more. Also note that the singletons package could generate this Nat1 declaration, as well as many other kinds of declarations.
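To illustrate the “value determines the type” direction, here is a sketch (forget is my name for it; it is not in the post) that erases the index:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Kind (Type)

data Nat = Z | S Nat deriving (Eq, Show)

data Nat1 :: Nat -> Type where
  Z1 :: Nat1 Z
  S1 :: Nat1 n -> Nat1 (S n)

-- Because a value of type Nat1 n fully determines n, this function
-- can only ever return the one Nat that the index n denotes.
forget :: Nat1 n -> Nat
forget Z1     = Z
forget (S1 n) = S (forget n)
```

Each Nat1 n value maps to exactly one Nat, e.g. forget (S1 (S1 Z1)) is S (S Z).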

Induction

Naturals are an inductive datatype; they define an induction principle. In dependent programming and CTT, we can understand an induction principle as a dependently-typed fold operator, called a “dependent eliminator.” In Haskell with singleton types, it looks like this for Nat1.

type BaseCase      c = Proof (c Z)
type InductiveStep c = forall n. Nat1 n -> c n :=> c (S n)

ind_Nat :: Predicate c -> BaseCase c -> InductiveStep c -> Nat1 n -> Proof (c n)

This is familiar from undergrad: a predicate c over the naturals is an inductive property if it has a proof for zero and a proof for any n implies a proof for 1 + n. Induction says that given those two proofs, I can build a proof for any n, which I think is a pretty simple reading of the type of ind_Nat. The definition of ind_Nat just puts this familiar notion of induction to work.
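One caveat: the Predicate argument in ind_Nat’s signature is never defined in the post. Judging by its uses, the full code presumably declares it as a proxy whose only job is to pin down the predicate c during inference. The following reconstruction is hypothetical, including the made-up Trivial example class:

```haskell
{-# LANGUAGE ConstraintKinds, DataKinds, KindSignatures, FlexibleInstances #-}

import Data.Kind (Constraint)

data Nat = Z | S Nat

-- Hypothetical reconstruction: a proxy carrying no data, used only
-- to tell the type checker which predicate c we are inducting over.
data Predicate (c :: Nat -> Constraint) = Predicate

-- Example of fixing a predicate (Trivial is invented for this demo).
class Trivial (n :: Nat)
instance Trivial n

trivialPred :: Predicate Trivial
trivialPred = Predicate
```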

ind_Nat _  base _    Z1      = \k -> base k
ind_Nat pc base step (S1 n1) = \k ->
  ind_Nat pc base step n1 $ -- provides the inductive hypothesis
  step n1 k

In the above definitions, it probably helps to unfold the definition of Proof to see that k has a qualified type and that the code between the lambda binding of k and its occurrence is just building up a suitable context for that qualified type from the base and step arguments and the recursive call to ind_Nat.

Now that we have induction, let’s prove an inductive property!

Another common ingredient in the singleton type recipe is the class for building Nat1 from a Nat — this is the realizer of the “vice versa” part I slipped in when grossly explaining singleton types. This class effectively characterizes what it means for a type to be a natural, so I certainly hope it’s inductive…

class Natural (n :: Nat) where nat1 :: Nat1 n

inductive_Natural :: Nat1 n -> Proof (Natural n)
inductive_Natural =
  ind_Nat (Predicate :: Predicate Natural)
    (\k -> k)   -- base case
    (\_ k -> k) -- inductive step

In fact, it is inductive, and in the most trivial way possible, as can be seen from those two arguments to ind_Nat. In a sense, “naturalness” is inductive by definition, which reminds me that we haven’t actually defined Natural yet, we’ve just declared it. We need instances.

In this case, I’m confident the instances are pretty self-explanatory, but I want to show a slightly odd way of deriving their shape. In Agda, you can put placeholders (called sheds, I think) in your incomplete proofs, and Agda will tell you what property each placeholder actually needs to prove. We can get somewhat similar behavior here by inspecting the type errors generated by the most naive “placeholders”: \k -> k for the base case and \_ k -> k for the inductive step. Since we haven’t declared the necessary instances of Natural, we get the following errors, in which GHC specifies what instances we need in order for Natural to be an inductive property of Nats.

{-
InductiveConstraints.hs:56:10:
    Could not deduce (Natural 'Z) ...
    ...
    Possible fix: add an instance declaration for (Natural 'Z)

InductiveConstraints.hs:57:12:
    Could not deduce (Natural ('S m)) ...
    ... or from (Natural m) ...
    Possible fix: add an instance declaration for (Natural ('S m))
-}

To determine what implications are necessary for a more complicated property, this technique might be helpful. We’ll try it out again for the commutativity proof later.

The requisite instances are the usual ones for a singleton type’s builder class.

instance Natural Z where nat1 = Z1
instance Natural m => Natural (S m) where nat1 = S1 nat1

Now the inductive_Natural definition type checks; we’ve “proved” it. “Proved” because Haskell wasn’t designed for proving things: the pervasive laziness bites us here insomuch as the GHC Haskell type system is not a sound realization of the Curry-Howard isomorphism. For example, here’s another definition of inductive_Natural that GHC happily accepts as well-typed.

inductive_Natural = let x = x in x

Just because we have a well-typed term of type Proof doesn’t mean that Proof is actually true. Note well that RHSs can be accidentally equivalent to this one in any number of insidious ways. If you want soundness, use a proof assistant. That is, extract your Haskell code from proofs in Coq, in Isabelle, or in Agda. CRUCIAL DISCLAIMER: I never said the extraction would be easy…

So why do this?

Extracting Haskell via proof assistants is one end of the spectrum. But common Haskell practice has been creeping in that direction from the other end for quite some time now. Just because our type system isn’t sound doesn’t mean we can’t write valid proofs in it, if it would behoove our program. … And when would it do that?

I’ll be honest: the majority of the times I’ve applied these techniques, I’ve later realized I didn’t actually need them; there was a better simplification to be made. However, not always. One lesson I’ve learned while enjoying Conor McBride’s online lectures is that Curry-Howard is vastly easier to use when the recursion schemes of your type definitions match the recursion schemes of the relevant value-level definitions that involve those types. When they don’t match up, however, you might be able to overcome the mismatch via the techniques sketched in this post. For example, if you define addition as I do in the next section, you might have to do a bit of extra work in order to get the accumulator-based definition of reverse for vectors (i.e. fixed-length lists encoded with a GADT) to type-check. Note that I said might; I’m just estimating based on my experiences…
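To make the recursion-scheme point concrete, here is a sketch of my own (the post does not contain it): append recurses on its first argument exactly as Plus does and type-checks as is, while the accumulator-based rev shifts the S onto the other index and gets stuck on precisely the sort of lemma proved at the end of this post.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

type family Plus (n :: Nat) (m :: Nat) :: Nat
type instance Plus Z     m = m
type instance Plus (S n) m = S (Plus n m)

-- Length-indexed vectors.
data Vec (n :: Nat) a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

-- This recursion scheme matches Plus exactly (both recur on the
-- first argument), so GHC accepts it without any extra reasoning.
append :: Vec n a -> Vec m a -> Vec (Plus n m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- The accumulator-based reverse does NOT match Plus's recursion:
--
--   rev :: Vec n a -> Vec m a -> Vec (Plus n m) a
--   rev Nil         acc = acc
--   rev (Cons x xs) acc = rev xs (Cons x acc)
--
-- GHC rejects the second clause, wanting
-- Plus n ('S m) ~ 'S (Plus n m).
```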

Finally, the proof

Recall that we’re proving that Plus is commutative; here’s the set up.

type family Plus (n :: Nat) (m :: Nat) :: Nat
type instance Plus Z     m = m
type instance Plus (S n) m = S (Plus n m)

class    (Plus n m ~ Plus m n) => Plus_Comm (n :: Nat) (m :: Nat)
instance (Plus n m ~ Plus m n) => Plus_Comm n m

plus_Comm :: forall n. Nat1 n -> forall m. Nat1 m -> Proof (Plus_Comm n m)

The property we’re really after is Plus n m ~ Plus m n, for any n and m. Note that this is precisely the constraint in the declared type of plus_Comm. The Plus_Comm class just organizes the bindings of n and m (in an arbitrary way) so that we can use the ind_Nat function.

Since Plus recurs on its left argument, we’ll induct on that one.

-- ill-typed first guess
plus_Comm n1 = ind_Nat (Predicate :: Predicate (Plus_Comm n))
  (\k -> k)   -- the most naive first guess at base case
  (\_ k -> k) -- the most naive first guess at inductive step

This doesn’t type-check, but again the type errors are telling.

{-
InductiveConstraints.hs:74:10:
    Couldn't match type `m' with `Plus m 'Z'
      `m' is a rigid type variable bound by
        the type signature for
          plus_Comm :: Nat1 n -> Nat1 m -> Proof (Plus_Comm m n)
        at InductiveConstraints.hs:72:22
    ...

InductiveConstraints.hs:95:12:
    Could not deduce (Plus n ('S m1) ~ 'S (Plus n m1))
    ...
      or from (Plus_Comm n m1)
-}

The first one comes from our base case. It says we need to explain to GHC how Plus Z m ~ Plus m Z. Recall that Plus inspects its first argument, and in this property that’s a universally quantified type variable of kind Nat. We just need to use induction again, but on m this time.

-- well-typed actual proof
plus_Comm n1 = ind_Nat (Predicate :: Predicate (Plus_Comm n))
  -- base case
  (let plus0 :: forall n. Nat1 n -> Proof (Plus_Comm Z n)
       plus0 = ind_Nat (Predicate :: Predicate (Plus_Comm Z))
                 (\x -> x)   -- base case
                 (\_ x -> x) -- inductive step
   in \x -> plus0 n1 x)
  -- inductive step below

The second error comes from the naive guess for the inductive step. It says that GHC doesn’t know how Plus_Comm n m1 should imply Plus n ('S m1) ~ 'S (Plus n m1). Again, Plus is blocked on a variable, this time n. So let’s induct on n in another lemma.

  -- the well-typed inductive step
  (let lemma :: forall m. Nat1 m -> Nat1 n
             -> Plus_Comm n m :=> Plus_Comm_Lemma m n
       lemma _ = ind_Nat (Predicate :: Predicate (Plus_Comm_Lemma m))
                   (\x -> x)
                   (\_ x -> x)
   in \m1 x -> lemma m1 n1 x)

class    (Plus n (S m) ~ S (Plus n m)) => Plus_Comm_Lemma m n
instance (Plus n (S m) ~ S (Plus n m)) => Plus_Comm_Lemma m n

Again, the class just imposes some binding order on the ~ constraint so that we can use our ind_Nat function.

Q.E.D. … sort of, as I qualified before.

That’s very similar to one way to prove commutativity in Coq, by the way. In that sense, the above is an actual proof, regardless of Haskell’s type system’s unsoundness. With the appropriate totality analysis (and maybe something else, I don’t know), I’m pretty sure we could verify that we used only sound inference in the above. In other words, the Core coercions that the proof reduces to when GHC compiles it will be safe at run-time.
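For reference, the fragments above do assemble into one compilable file. The sketch below is that assembly with two caveats: the Predicate definition is assumed (the post never shows it), and I have eta-expanded lemma, which newer GHCs with simplified subsumption require.

```haskell
{-# LANGUAGE ConstraintKinds, RankNTypes, TypeOperators, DataKinds, GADTs,
             KindSignatures, TypeFamilies, ScopedTypeVariables,
             MultiParamTypeClasses, FlexibleContexts, FlexibleInstances,
             UndecidableInstances, UndecidableSuperClasses #-}

import Data.Kind (Constraint, Type)

type p :=> q = forall a. p => (q => a) -> a
type Proof p = () :=> p

data Nat = Z | S Nat

data Nat1 :: Nat -> Type where
  Z1 :: Nat1 Z
  S1 :: Nat1 n -> Nat1 (S n)

-- Assumed: the post never shows Predicate's definition.
data Predicate (c :: Nat -> Constraint) = Predicate

type BaseCase      c = Proof (c Z)
type InductiveStep c = forall n. Nat1 n -> c n :=> c (S n)

ind_Nat :: Predicate c -> BaseCase c -> InductiveStep c -> Nat1 n -> Proof (c n)
ind_Nat _  base _    Z1      = \k -> base k
ind_Nat pc base step (S1 n1) = \k -> ind_Nat pc base step n1 (step n1 k)

type family Plus (n :: Nat) (m :: Nat) :: Nat
type instance Plus Z     m = m
type instance Plus (S n) m = S (Plus n m)

class    (Plus n m ~ Plus m n) => Plus_Comm (n :: Nat) (m :: Nat)
instance (Plus n m ~ Plus m n) => Plus_Comm n m

class    (Plus n (S m) ~ S (Plus n m)) => Plus_Comm_Lemma m n
instance (Plus n (S m) ~ S (Plus n m)) => Plus_Comm_Lemma m n

plus_Comm :: forall n. Nat1 n -> forall m. Nat1 m -> Proof (Plus_Comm n m)
plus_Comm n1 = ind_Nat (Predicate :: Predicate (Plus_Comm n))
  -- base case: induct on the other argument
  (let plus0 :: forall o. Nat1 o -> Proof (Plus_Comm Z o)
       plus0 = ind_Nat (Predicate :: Predicate (Plus_Comm Z))
                 (\x -> x) (\_ x -> x)
   in \x -> plus0 n1 x)
  -- inductive step: the lemma, again by induction on the other argument
  (let lemma :: forall o. Nat1 o -> Nat1 n
             -> Plus_Comm n o :=> Plus_Comm_Lemma o n
       lemma _ nn x = ind_Nat (Predicate :: Predicate (Plus_Comm_Lemma o))
                        (\y -> y) (\_ y -> y) nn x
   in \m1 x -> lemma m1 n1 x)

-- A consumer that requires the commutativity constraint.
useComm :: Plus_Comm n m => Nat1 n -> Nat1 m -> Bool
useComm _ _ = True
```

With the proof in hand, plus_Comm two three (useComm two three) discharges the Plus_Comm constraint and evaluates to True.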

Final Remarks

I didn’t show any interesting inductive type classes; Natural is a pretty cheesy example, but there are other inductive classes. However, proving ~ properties is a particularly interesting thing to do, because there’s no other way to establish them. At least not for something like Plus commutativity, as far as I know.

That’s also the reason I used ConstraintKinds as properties: because I wanted to reason about ~ type equalities. There are probably other possible models for properties in the Haskell type system, but I can’t think of anything as interesting at the moment.

Thanks for reading. I hope you enjoyed it.