A while back I decided to try my hand at using Agda and I wrote a proof of the Church-Rosser theorem in it. It was a fun exercise. I took all the knowledge I have picked up over the years about using dependent types to represent binders, to define well-typed terms, and what I have learned about the Category of Contexts and applied it to my definition of Terms for a simply typed lambda calculus. I am proud of the result.

They say you do not understand a topic in mathematics until you can teach it. And you do not really understand it until you can prove it in Coq. And you do not really, really understand it until you can prove it in Agda. What really struck me was how my exercise in Agda affected my understanding of “Propositions as Types”.

I view types as being divided into roughly two categories. Some types are propositions. They are the types that have at most one inhabitant, which is to say types for which you can prove that all their inhabitants are equal. Other types are data types. They are types with potentially more than one inhabitant. As such you can distinguish between values by (possibly indirectly) doing case analysis on them. Indeed, HoTT defines propositions in exactly this way.

This classification of types is not fundamental in the theory of types, which treats propositions and data types uniformly. I simply find it a useful way of characterizing types when programming and reasoning about programs in dependent type theory. The void and unit types, ⊥ and ⊤ respectively, are both propositions. We can define a function from the Boolean type to the universe of types that maps the two Boolean values to these two types. In this way we can convert any Boolean-valued (computable) predicate into a logical (type-theoretical) predicate.
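This Boolean-to-type conversion can be sketched in Agda; it mirrors the standard library's `T` function:

```agda
open import Data.Bool using (Bool; true; false)
open import Data.Empty using (⊥)
open import Data.Unit using (⊤)

-- Map each Boolean to a type: true becomes the unit type ⊤
-- (trivially provable) and false becomes the void type ⊥
-- (unprovable). Both results are propositions.
T : Bool → Set
T true  = ⊤
T false = ⊥
```

A Boolean predicate `p : A → Bool` then lifts to the logical predicate `λ a → T (p a)`.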

To me the phrase “Propositions as Types” just meant the embedding of logical propositions as types with at most one inhabitant. For example, given a type A with decidable equality, we can compute whether a given value of type A is a member of a given list of As. This computable predicate can be lifted to a logical predicate to define a logical membership relation. Expressions using this logical membership relation are propositions according to the above definition of proposition.
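As a sketch, a computable membership test and its lifting might look like this (`member` and `MemberOf` are illustrative names, assuming a Boolean equality test on A and the standard `T : Bool → Set` that sends true to ⊤ and false to ⊥):

```agda
open import Data.Bool using (Bool; false; _∨_; T)
open import Data.List using (List; []; _∷_)

-- Computable membership, parameterized by a Boolean equality test.
member : {A : Set} → (A → A → Bool) → A → List A → Bool
member _==_ a []       = false
member _==_ a (x ∷ xs) = (a == x) ∨ member _==_ a xs

-- Lifting with T turns the Boolean test into a logical predicate.
-- The result is ⊤ or ⊥, each with at most one inhabitant,
-- so this membership relation is a proposition.
MemberOf : {A : Set} → (A → A → Bool) → A → List A → Set
MemberOf _==_ a l = T (member _==_ a l)
```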

This is a fine way to do formal reasoning. However, this is not the way that membership is typically defined in Agda. Instead, Agda defines the membership relation as an inductive family whose constructors witness that an element is either the head of the list or a member of the tail of the list. (Coq also defines the membership relation this way; however, it is marked as non-extractable for computation by virtue of being a proposition.) The type (a ∈ l) is, in general, a data type rather than a proposition. When the value a occurs multiple times in l, the type (a ∈ l) has multiple different “proofs” corresponding to the different occurrences of a within l. Values of this type act as “the type of indexes where a occurs in l”, and one can write programs that operate over this type.
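The inductive definition can be sketched as follows (the Agda standard library phrases membership in terms of `Any`, but this direct version conveys the same idea):

```agda
open import Data.List using (List; _∷_)

-- a ∈ l witnesses an occurrence of a in l: either a is the head,
-- or it occurs somewhere in the tail.
data _∈_ {A : Set} (a : A) : List A → Set where
  here  : ∀ {xs}            → a ∈ (a ∷ xs)
  there : ∀ {x xs} → a ∈ xs → a ∈ (x ∷ xs)

-- In x ∷ y ∷ x ∷ [], the value x has two distinct membership
-- proofs, here and there (there here), one for each occurrence.
```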

Given two lists, l₁ and l₂, the “proposition” that one list is a subset of the other states that any element of l₁ is also an element of l₂:

l₁ ⊆ l₂ ≔ ∀ a. a ∈ l₁ → a ∈ l₂

In dependent type theory this implication is represented as a function. Because our membership relation is a data type, this subset relation is represented by a real program. Specifically, it is a program that, for any value, maps each index where it occurs in l₁ to some index where it occurs in l₂; you can really evaluate this function. This subset type is also, in general, a data type, because there can be multiple such functions, which represent all the possible ways, permutations among them, of mapping occurrences in l₁ onto occurrences in l₂.
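For example, with the membership relation given inductively by `here` and `there` constructors, a subset witness that exchanges the first two elements can be written by case analysis on the membership proof (a sketch):

```agda
_⊆_ : {A : Set} → List A → List A → Set
l₁ ⊆ l₂ = ∀ {a} → a ∈ l₁ → a ∈ l₂

-- Swap the first two indexes; leave deeper indexes untouched.
swap : {A : Set} {x y : A} {l : List A} → (x ∷ y ∷ l) ⊆ (y ∷ x ∷ l)
swap here              = there here
swap (there here)      = here
swap (there (there i)) = there (there i)
```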

The consequences of this are fantastic. For example, what you normally think of as a theorem for weakening,

weaken : ∀ {Γ₁ Γ₂ A} → Γ₁ ⊆ Γ₂ → Term Γ₁ A → Term Γ₂ A

also captures contraction and exchange due to this definition of subset. All the structural rules of the lambda calculus are subsumed within a single theorem!
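For instance, exchange falls out as an immediate instance (a sketch, assuming a subset witness `swap : (A ∷ B ∷ Γ) ⊆ (B ∷ A ∷ Γ)` defined by case analysis on membership proofs):

```agda
-- Exchange the first two entries of the context, for free.
exchange : ∀ {Γ A B C} → Term (A ∷ B ∷ Γ) C → Term (B ∷ A ∷ Γ) C
exchange = weaken swap
```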

It took me several attempts before I learned to take full advantage of this sort of logic. For example, I originally defined a module as an inductive family. However, I eventually settled on a functional representation for modules:

-- A Module is a list of terms of types Δ all in the same context Γ.
-- A Module is a block of terms for simultaneous substitution.
-- A Module is an arrow in the category of contexts.
-- A Module is a profunctor.
Module : List Type → List Type → Set
Module Γ Δ = ∀ {A} → A ∈ Δ → Term Γ A

This definition means a module can be evaluated. In particular, modules can be partially evaluated during type-checking, which greatly simplified many of my proofs compared to the inductive-family way of defining modules.

Notice how we make use of values of type A ∈ Δ as data defining an index into the context Δ. If Δ has multiple occurrences of A, then the term generated by the module can differ depending on which occurrence the index references.
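Modules really do behave like arrows in the category of contexts (a sketch, assuming a variable constructor `var : A ∈ Γ → Term Γ A` and a simultaneous-substitution operation `substTerm : Module Γ Δ → Term Δ A → Term Γ A`; both names are illustrative):

```agda
-- The identity module sends each index to the corresponding variable.
idM : ∀ {Γ} → Module Γ Γ
idM i = var i

-- Composition: look up the index in n, then substitute through m.
_∘M_ : ∀ {Γ Δ Θ} → Module Γ Δ → Module Δ Θ → Module Γ Θ
(m ∘M n) i = substTerm m (n i)
```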

In conclusion, although I do like thinking of types as divided between propositional types and data types, it can be fruitful to view expressions that mathematics traditionally treats as propositions instead as real data types, and really program with them. This is what I mean by taking “Propositions as Types” seriously.