Introduction to introduction

Agda doesn’t lack tutorials and introductions: there is a whole page of them on the Agda wiki [1] (for a general documentation list see [2]). Personally, I recommend:

Anton Setzer’s introduction (works especially well for those with a logical background, but is easy enough to follow for everyone else too) [3].

Ana Bove’s and Peter Dybjer’s introduction [4].

Ulf Norell’s introduction for functional programmers [5].

Thorsten Altenkirch’s lectures [6].

(This list is not ordered; the best practice is to read them (and this page) simultaneously. By the way, this document is far from finished, but should be pretty useful in its current state.)

The same holds for Coq [7], Idris [8] and, to a lesser extent, Epigram [9].

For a general introduction to the type theory field look no further than:

Morten Heine B. Sørensen’s and Paweł Urzyczyn’s lectures [10].

Simon Thompson’s book [11].

There’s also a number of theoretical books strongly related to the languages listed above:

The notes of Per Martin-Löf’s (the author of the core type theory used by all of Agda, Coq, Idris and Epigram) lectures [12], [13].

A bit more practically oriented book by Bengt Nordström et al. [14].

And a number of tutorials which show how to implement a dependently typed language yourself:

“Simpler Easier” [15].

A tutorial by Andrej Bauer [16]–[18].

There’s a lot to read already, so why another introduction? Because there is a gap. The theory is huge and full of subtle details that are mostly ignored in tutorial implementations and hidden in language tutorials (so that the unprepared are not scared away). Which is hardly surprising, since the current state of the art takes years to implement correctly, and even then some (considerable) problems remain.

Nevertheless, I think it is the hard parts that matter, and I always wanted a tutorial that at least mentioned their existence (obviously, there is a set of dependently typed problems most people appreciate, e.g. undecidable type inference, but there are still a lot of issues that are not so well understood). Moreover, after I stumbled upon some of these lesser-known parts of dependently typed programming, I started to suspect that hiding them behind the language niceties actually makes things harder to understand. “Dotted patterns” and the “unification stuck” error in Agda are perfect examples. I claim that:

People find it hard to understand “dotted patterns” exactly because it’s hard to explain them above the language abstraction level.

Explicit access to the unification engine is a useful interactive program construction tool. I did a dozen proofs I probably couldn’t have done otherwise by unifying expressions by hand. As far as I’m aware, no proof checker automates this in a usable way yet. There is a proposal with my implementation ideas for agda2-mode.

Having said that, this article serves somewhat controversial purposes:

It is an introduction to Agda, starting as a very basic one, written for those with a half-way-through-undergrad-course (read “basic”) background in discrete math, number theory, set theory and Haskell. I actually taught this to undergrad students [19] and it works.

But it aims not to teach Agda, but to show how dependently typed languages work behind the scenes without actually going behind the scenes (because, as noted above, going behind the scenes takes years). I’m pretty sure it is possible to write a very similar introduction to Coq, Idris, Epigram or whatever, but Agda works perfectly because of its syntax and heavy unification usage.

There is also a number of [italicized comments in brackets] lying around, usually for those with a type theory background. Don’t be scared, you can completely ignore them. They give deeper insights if you research them, though.

The last two sections contain completely type theoretic stuff. They are the reason I started writing this, but, still, you may ignore them completely if you wish.

You are expected to understand everything else. Do exercises and switch to other tutorials when stuck.

Finally, before we start, a disclaimer: I verified my core thoughts about how all this stuff works by reading (parts of) Agda’s source code, but still, as Plato’s Socrates stated, “I know that I know nothing”.

Slow start

You want to use Emacs, trust me

There is agda2-mode for Emacs. It allows you to:

input funny UNICODE symbols like ℕ or Σ,

interactively interact with Agda (more on that below).

Installation:

install Emacs,

install everything with the agda substring from your package manager, or Agda and Agda-executable with cabal,

run agda-mode setup.

Running:

run emacs,

press C-x C-f FileName RET (Control+x, Control+f, type “FileName”, press the Return/Enter key).

Note that you can load this article in Literate Agda format directly into Emacs. This is actually the recommended way to use this text; you can’t do the exercises in the HTML version.

Syntax

In Agda a module definition always goes first:

```agda
module BrutalDepTypes where
```

Nested modules and modules with parameters are supported. One of the most common usages of nested modules is to hide some definitions from the top level namespace:

```agda
module ThrowAwayIntroduction where
```

Datatypes are written in GADTs-style:

```agda
data Bool : Set where
  true false : Bool
  -- Note, we can list constructors of a same type
  -- by interspersing them with spaces.

-- input for ℕ is \bn,
-- input for → is \to, but -> is fine too
-- Naturals.
data ℕ : Set where
  zero : ℕ
  succ : ℕ → ℕ

-- Identity container
data Id (A : Set) : Set where
  pack : A → Id A

-- input for ⊥ is \bot
-- Empty type. Absurd. False proposition.
data ⊥ : Set where
```

Set here means the same thing as kind * in Haskell, i.e. a type of types (more on that below).

Agda is a total language. There is no undefined , all functions are guaranteed to terminate on all possible inputs (if not explicitly stated otherwise by a compiler flag or a function definition itself), which means that ⊥ type is really empty.
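To illustrate (this example is mine, not part of the original module; `double` and `loop` are made-up names): structural recursion passes the termination checker, while an obviously looping definition would be rejected.

```agda
-- Accepted: the recursive call is on a structurally smaller argument.
double : ℕ → ℕ
double zero     = zero
double (succ n) = succ (succ (double n))

-- Rejected by the termination checker if uncommented:
-- loop : ℕ → ℕ
-- loop n = loop n
```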

Function declarations are very much like in Haskell:

```agda
-- input for ₀ is \_0, ₁ is \_1 and so on
idℕ₀ : ℕ → ℕ
idℕ₀ x = x
```

except function arguments have their names even in type expressions:

```agda
-- Note, argument's name in a type might differ from a name used in pattern-matching
idℕ₁ : (n : ℕ) → ℕ
idℕ₁ x = x -- this `x` refers to the same argument as `n` in the type
```

with idℕ₀’s definition being syntax sugar for:

```agda
idℕ₂ : (_ : ℕ) → ℕ
idℕ₂ x = x
```

where the underscore means “I don’t care about the name”, just like in Haskell.

Dependent types allow type expressions after an arrow to depend on expressions before the arrow; this is used to type polymorphic functions:

```agda
id₀ : (A : Set) → A → A
id₀ _ a = a
```

Note that this time A in the type cannot be changed into an underscore, but it’s fine to ignore this name in pattern matching.

Pattern matching looks as usual:

```agda
not : Bool → Bool
not true  = false
not false = true
```

except if you make an error in a constructor name:

```agda
not₀ : Bool → Bool
not₀ true = false
not₀ fals = true
```

Agda will say nothing. This might be critical sometimes:

```agda
data Three : Set where
  COne CTwo CThree : Three

three2ℕ : Three → ℕ
three2ℕ COne = zero
three2ℕ Ctwo = succ zero
three2ℕ _    = succ (succ zero) -- intersects with the previous clause
```

Finally, Agda supports implicit arguments:

```agda
id : {A : Set} → A → A
id a = a

idTest₀ : ℕ → ℕ
idTest₀ = id
```

Values of implicit arguments are derived from other arguments’ values and types by solving type equations (more on them below). You don’t have to apply them or pattern match on them explicitly, but you still can if you wish:

```agda
-- positional:
id₁ : {A : Set} → A → A
id₁ {A} a = a

idTest₁ : ℕ → ℕ
idTest₁ = id {ℕ}

-- named:
const₀ : {A : Set} {B : Set} → A → B → A
const₀ {B = _} a _ = a

constTest₀ : ℕ → ℕ → ℕ
constTest₀ = const₀ {A = ℕ} {B = ℕ}
```

[It’s important to note that no proof search is ever done. Implicit arguments are completely orthogonal to the computational aspect of a program; being implicit doesn’t imply anything else. Implicit variables are not treated in any special way, and they are not type-erased any differently than other variables. They are just a kind of syntax sugar assisted by equation solving.]

It’s allowed to skip arrows between arguments in parentheses or braces:

```agda
id₃ : {A : Set} (a : A) → A
id₃ a = a
```

and to intersperse names of values of the same type by spaces inside parentheses and braces:

```agda
const : {A B : Set} → A → B → A
const a _ = a
```

What makes Agda’s syntax so confusing is the usage of the underscore. In Haskell “I don’t care about the name” is its only meaning; in Agda there are another two and a half. The first one is “guess the value yourself”:

```agda
idTest₃ : ℕ → ℕ
idTest₃ = id₀ _
```

which works exactly the same way as implicit arguments.

Or, to be more precise, it is the implicit arguments that work like arguments implicitly applied with underscores, except Agda does this once for each function definition, not for each call.

The other half is “guess the type yourself”:

```agda
unpack₀ : {A : _} → Id A → A
unpack₀ (pack a) = a
```

which has a special ∀ syntax sugar:

```agda
-- input for ∀ is \all or \forall
unpack : ∀ {A} → Id A → A
unpack (pack a) = a

-- explicit argument version:
unpack₁ : ∀ A → Id A → A
unpack₁ _ (pack a) = a
```

∀ extends to the right up to the first arrow:

```agda
unpack₂ : ∀ {A B} → Id A → Id B → A
unpack₂ (pack a) _ = a

unpack₃ : ∀ {A} (_ : Id A) {B} → Id B → A
unpack₃ (pack a) _ = a
```

Datatype syntax assumes an implicit ∀ when there is no type specified:

```agda
data ForAllId A (B : Id A) : Set where
```

It is important to note that Agda’s ∀ is quite different from Haskell’s ∀ (forall). When we say ∀ n in Agda it’s perfectly normal for n : ℕ to be inferred, but in Haskell ∀ n always means {n : Set}, [i.e. Haskell’s ∀ is an implicit (Hindley–Milner) version of the second-order universal quantifier, while in Agda it’s just syntax sugar].

Syntax misinterpretation becomes a huge problem when working with more than one universe level (more on that below). It is important to train yourself to desugar type expressions subconsciously (by doing it consciously at first). It will save hours of your time later. For instance, ∀ {A} → Id A → A means {A : _} → (_ : Id A) → A (where the last → A should be interpreted as → (_ : A)), i.e. the first A is a variable name, while the other expressions are types.

MixFix

Finally, the last meaning of an underscore is to mark arguments’ places in function names for the mixfix parser, i.e. an underscore in a function name marks the place where an argument goes:

```agda
if_then_else_ : {A : Set} → Bool → A → A → A
if true  then a else _ = a
if false then _ else b = b

-- Are two ℕs equal?
_=ℕ?_ : ℕ → ℕ → Bool
zero   =ℕ? zero   = true
zero   =ℕ? succ m = false
succ n =ℕ? zero   = false
succ n =ℕ? succ m = n =ℕ? m

-- Sum for ℕ.
infix 6 _+_
_+_ : ℕ → ℕ → ℕ
zero   + n = n
succ n + m = succ (n + m)

ifthenelseTest₀ : ℕ
ifthenelseTest₀ = if (zero + succ zero) =ℕ? zero
                    then zero
                    else succ (succ zero)

-- Lists
data List (A : Set) : Set where
  []  : List A
  _∷_ : A → List A → List A

[_] : {A : Set} → A → List A
[ a ] = a ∷ []

listTest₁ : List ℕ
listTest₁ = []

listTest₂ : List ℕ
listTest₂ = zero ∷ (zero ∷ (succ zero ∷ []))
```

Note the fixity declaration infix, which has the same meaning as in Haskell. We didn’t write infixl for a reason. With declared associativity Agda would not print redundant parentheses, which is good in general, but would somewhat complicate the explanation of several things below.

There is a where construct, just like in Haskell:

```agda
ifthenelseTest₁ : ℕ
ifthenelseTest₁ = if (zero + succ zero) =ℕ? zero
                    then zero
                    else x
  where x = succ (succ zero)
```

While pattern matching, there is a special case when a type we are trying to pattern match on is obviously ([the type inhabitance problem is undecidable in the general case]) empty. This special case is called an “absurd pattern”:

```agda
-- ⊥ implies anything.
⊥-elim : {A : Set} → ⊥ → A
⊥-elim ()
```

which allows you to skip the right-hand side of a definition.

You can still bind variables like that:

```agda
-- Absurd implies anything, take two.
⊥-elim₀ : {A : Set} → ⊥ → A
⊥-elim₀ x = ⊥-elim x
```

Agda has records, which work very much like newtype declarations in Haskell, i.e. they are datatypes with a single constructor that is not stored.

```agda
record Pair (A B : Set) : Set where
  field
    first  : A
    second : B

getFirst : ∀ {A B} → Pair A B → A
getFirst = Pair.first
```

Note, however, that to prevent name clashes record definition generates a module with field extractors inside.

There is a convention to define a type with one element as a record with no fields:

```agda
-- input for ⊤ is \top
-- One element type. Record without fields. True proposition.
record ⊤ : Set where

tt : ⊤
tt = record {}
```

A special thing about this convention is that an argument of an empty record type automatically gets the value record {} when applied implicitly or with an underscore.
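For instance (a sketch of mine; trivialImplicit and useTrivialImplicit are made-up names), a function with an implicit ⊤ argument can be applied without ever supplying it:

```agda
-- The implicit argument of the empty record type ⊤
-- is automatically solved to `record {}`.
trivialImplicit : {_ : ⊤} → ℕ
trivialImplicit = zero

useTrivialImplicit : ℕ
useTrivialImplicit = trivialImplicit
```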

Lastly, Agda uses an oversimplified lexer that splits tokens by spaces, parentheses, and braces. For instance (note the name of the variable binding):

```agda
-- input for ‵ is \`
-- input for ′ is \'
⊥-elim‵′ : {A : Set} → ⊥ → A
⊥-elim‵′ ∀x:⊥→-- = ⊥-elim ∀x:⊥→--
```

is totally fine. Also note that -- doesn’t generate a comment here.

The magic of dependent types

Let’s define division by two:

```agda
div2 : ℕ → ℕ
div2 zero = zero
div2 (succ (succ n)) = succ (div2 n)
```

The problem with this definition is that Agda is total, so we have to extend this function for odd numbers:

```agda
div2 (succ zero) = {!check me!}
```

by changing {!check me!} into some term, the most common choice being zero.

Suppose now we know that inputs to div2 are always even and we don’t want to extend div2 for the succ zero case. How do we constrain div2 to even naturals only? With a predicate! That is, the even predicate:

```agda
even : ℕ → Set
even zero            = ⊤
even (succ zero)     = ⊥
even (succ (succ n)) = even n
```

which returns ⊤ with a trivial proof tt when the argument is even, and the empty ⊥ when the argument is odd.

Now the definition of div2 constrained to even naturals only becomes:

```agda
div2e : (n : ℕ) → even n → ℕ
-- Note, we have to give a name `n` to the first argument here
div2e zero p = zero
div2e (succ zero) ()
div2e (succ (succ y)) p = succ (div2e y p)
-- Note, a proof of `even (succ (succ n))` translates
-- to a proof of `even n` by the definition of `even`.
```

When programming with dependent types, a predicate on A becomes a function from A to types, i.e. A → Set. If a : A satisfies the predicate P : A → Set, then the function P returns a type with each element being a proof of P a; in case a doesn’t satisfy P, it returns an empty type.

The magic of dependent types makes the type of the second argument of div2e change every time we pattern match on the first argument n . From the callee side, if the first argument is odd then the second argument would get ⊥ type sometime (after a number of recursive calls) enabling the use of an absurd pattern. From the caller side, we are not able to call the function with an odd n , since we have no means to construct a value for the second argument in this case.
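From the caller side this looks as follows (div2eTest is a made-up name of mine): for an even argument the proof type reduces to ⊤, so tt suffices.

```agda
-- `even (succ (succ zero))` reduces to `even zero`, i.e. to `⊤`,
-- so `tt` proves it. For `succ zero` we would need a value of `⊥`,
-- which doesn't exist.
div2eTest : ℕ
div2eTest = div2e (succ (succ zero)) tt
```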

Type families and Unification

There is another way to define the “even” predicate, this time with a datatype indexed by ℕ:

```agda
data Even : ℕ → Set where
  ezero  : Even zero
  e2succ : {n : ℕ} → Even n → Even (succ (succ n))

twoIsEven : Even (succ (succ zero))
twoIsEven = e2succ ezero
```

Even : ℕ → Set is a family of types indexed by ℕ and obeying the following rules:

Even zero has one element ezero.

For any given n, the type Even (succ (succ n)) has one element if Even n is nonempty.

There are no other elements.

Compare this to the translation of the even : ℕ → Set definition:

There is a trivial proof that zero has property even.

There is no proof that succ zero has property even.

If n has property even then so has succ (succ n).

In other words, the difference is that Even : ℕ → Set constructs a type whereas even : ℕ → Set returns a type when applied to an element of ℕ .

The proof that two is even, even (succ (succ zero)), literally says “two is even because it has a trivial proof”, whereas the proof that two is even, twoIsEven, says “two is even because zero is even and two is the successor of the successor of zero”.
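For example (fourIsEven is a name of mine), the inductive style lets larger proofs reuse smaller ones:

```agda
-- Four is even because two is even.
fourIsEven : Even (succ (succ (succ (succ zero))))
fourIsEven = e2succ twoIsEven
```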

The Even datatype allows us to define another non-extended division by two for ℕ:

```agda
div2E : (n : ℕ) → Even n → ℕ
div2E zero ezero = zero
div2E (succ zero) ()
div2E (succ (succ n)) (e2succ stilleven) = succ (div2E n stilleven)
-- Compare this case to div2e.
```

Note, there is no case for div2E zero (e2succ x) since e2succ x has the wrong type, there is no such constructor in Even zero . For the succ zero case the type of the second argument is not ⊥ , but is empty. How do we know that? Unification!

Unification is the most important (at least with pattern matching on inductive datatypes involved) and most easily forgotten aspect of dependently typed programming. Given two terms M and N, unification tries to find a substitution s such that applying s to M gives the same result as applying s to N. The precise algorithm definition is pretty long, but the idea is simple: to decide whether two expressions can be unified we

reduce them as much as possible,

then traverse their spines until we hit an obvious difference between them, find a place where we can not decide for sure, or successfully finish the traversal generating a substitution s .



For instance:

To unify (succ a) + b with succ (c + d) we first reduce both of them; now we need to unify succ (a + b) with succ (c + d), which means that we need to unify a + b with c + d, which means that we need to unify a with c and b with d, which gives [a = c, b = d].

On the other hand, succ a cannot be unified with zero for any a, and succ b cannot be unified with b for any b.

We don’t know if it’s possible to unify foo n with zero for some unknown function foo (it might or might not reduce to zero for some n).

In the code above succ zero doesn’t unify with any of the Even constructors’ indexes [ zero , succ (succ n) ] which makes this type obviously empty by its definition.

[Refer to “The view from the left” paper by McBride and McKinna [20] for more details on pattern matching with type families.]

More type families and less Unification

In datatype declarations things before a : are called “parameters”, things after the colon but before a Set are called “indexes”.

There is a famous datatype involving both of them:

```agda
data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : ∀ {n} → A → Vec A n → Vec A (succ n)
```

Vec A n is a vector of values of type A and length n , Vec has a parameter of type Set and is indexed by values of type ℕ . Compare this definition to the definition of List and Even . Note also, that Agda tolerates different datatypes with constructors of the same name (see below for how this is resolved).
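For instance (vecTest is a made-up name), a two-element vector records its length in its type:

```agda
-- Each `_∷_` bumps the length index by one:
-- the vector below has type `Vec ℕ 2`.
vecTest : Vec ℕ (succ (succ zero))
vecTest = zero ∷ (succ zero ∷ [])
```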

We cannot omit the clause for the [] case in a function which takes the head of a List:

```agda
head₀ : ∀ {A} → List A → A
head₀ []       = {!check me!}
head₀ (a ∷ as) = a
```

but we have nothing to write in place of {!check me!} there (if we want to stay total).

On the other hand, there is no [] constructor in the Vec A (succ n) type:

```agda
head : ∀ {A n} → Vec A (succ n) → A
head (a ∷ as) = a
```

Note that there are no absurd patterns here, Vec A (succ n) is inhabited, it just happens that there is no [] in there.

By the way, the Vec type is famous for its concatenation function:

```agda
-- Concatenation for `List`s
_++_ : ∀ {A} → List A → List A → List A
[]       ++ bs = bs
(a ∷ as) ++ bs = a ∷ (as ++ bs)

-- Concatenation for `Vec`tors
-- The length of a concatenation is the sum of the lengths
-- of the arguments and is available in types.
_++v_ : ∀ {A n m} → Vec A n → Vec A m → Vec A (n + m)
[]       ++v bs = bs
(a ∷ as) ++v bs = a ∷ (as ++v bs)
```

Compare _+_ , _++_ , and _++v_ definitions.

Why does the definition of _++v_ work? Because we defined _+_ this way! In the first clause of _++v_ the type of [] gives n = zero by unification, zero + m = m by the _+_ definition, and bs : Vec A m. Similarly, in the second clause n = succ n0, as : Vec A n0, (succ n0) + m = succ (n0 + m) by the _+_ definition, and a ∷ (as ++v bs) : Vec A (succ (n0 + m)).
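A quick sketch of this in action (++vTest is a name of mine): concatenating vectors of lengths one and two yields, by the _+_ definition, a vector of length three, with no extra proof needed.

```agda
-- `succ zero + succ (succ zero)` normalizes to `succ (succ (succ zero))`,
-- so the stated return type checks by computation alone.
++vTest : Vec ℕ (succ (succ (succ zero)))
++vTest = (zero ∷ []) ++v (zero ∷ (succ zero ∷ []))
```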

Dotted patterns and Unification

Let’s define subtraction:

```agda
infix 6 _-_
_-_ : ℕ → ℕ → ℕ
zero   - _      = zero
succ n - zero   = succ n
succ n - succ m = n - m
```

Note that n - m = zero for m > n.

Let us get rid of this case with the _≤_ relation:

```agda
data _≤_ : ℕ → ℕ → Set where
  z≤n : ∀ {n} → zero ≤ n
  s≤s : ∀ {n m} → n ≤ m → succ n ≤ succ m
```

We are now able to write a subtraction that is not extended for m > n:

```agda
sub₀ : (n m : ℕ) → m ≤ n → ℕ
sub₀ n zero (z≤n .{n}) = n
sub₀ .(succ n) .(succ m) (s≤s {m} {n} y) = sub₀ n m y
```

Note the dots, these are called “dotted patterns”. Ignore them for a second.

Consider the case sub₀ n zero (z≤n {k}). The type of the third argument is zero ≤ n. The type of z≤n {k} is zero ≤ k. Unification of these two types gives [k = n, m = zero]. After the substitution we get sub₀ n zero (z≤n {n}). Which of the ns do we want to bind/match on? In the code above we say “on the first” and place a dot before the second occurrence to mark this intention. A dotted pattern tells the compiler: “do not match on this, it is the only possible value”.

The second clause is sub₀ n m (s≤s {n'} {m'} y) . The type of the third argument is m ≤ n . The type of s≤s {n'} {m'} y is succ n' ≤ succ m' . This gives [ n = succ n' , m = succ m' ]. This time we decided to match on n' and m' .
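For concreteness (two≤three and sub₀Test are names of mine), here is a proof of 2 ≤ 3 built from the constructors, and a call that recurses on it:

```agda
-- z≤n gives 0 ≤ 1, and each s≤s bumps both sides by one.
two≤three : succ (succ zero) ≤ succ (succ (succ zero))
two≤three = s≤s (s≤s z≤n)

-- Computes 3 - 2 by recursion on the proof.
sub₀Test : ℕ
sub₀Test = sub₀ (succ (succ (succ zero))) (succ (succ zero)) two≤three
```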

Rewritten with a case construct from Haskell (Agda doesn’t have one, see below), the code above becomes (in pseudo-Haskell):

```haskell
sub₀ n m even = case even of
  z≤n {k} -> case m of          -- [`k = n`, `m = zero`]
    zero    -> n
    succ m' -> __IMPOSSIBLE__   -- since `m = zero` doesn't merge with `m = succ m'`
  s≤s n' m' y -> sub₀ n' m' y   -- [`n = succ n'`, `m = succ m'`]
```

Where __IMPOSSIBLE__ is just like an undefined but is never executed.

Note that we have [k = n, m = zero] in the first case for even. This means we can dot the zero pattern to optimize the match on m away:

```agda
sub₁ : (n m : ℕ) → m ≤ n → ℕ
sub₁ n .zero (z≤n .{n}) = n
sub₁ .(succ n) .(succ m) (s≤s {m} {n} y) = sub₁ n m y
```

which translates to:

```haskell
sub₁ n m even = case even of
  z≤n {k}     -> n
  s≤s n' m' y -> sub₁ n' m' y
```

Finally, we can also rewrite sub to match on the first two arguments (the usual, common-sense definition):

```agda
sub : (n m : ℕ) → m ≤ n → ℕ
sub n zero (z≤n .{n}) = n
sub (succ n) (succ m) (s≤s .{m} .{n} y) = sub n m y
```

which translates into the following:

```haskell
sub n m even = case m of
  zero -> case even of
    z≤n {k}       -> n
    s≤s {k} {l} y -> __IMPOSSIBLE__ -- since `zero` (`m`) can't be unified
                                    -- with `succ k`
  succ m' -> case n of
    zero -> case even of
      z≤n {k}       -> __IMPOSSIBLE__ -- since `succ m'` (`m`) can't be unified
                                      -- with `zero`
      s≤s {k} {l} y -> __IMPOSSIBLE__ -- since `zero` (`n`) can't be unified
                                      -- with `succ l`
    succ n' -> case even of
      z≤n {k}       -> __IMPOSSIBLE__ -- since `succ n'` (`n`) can't be unified
                                      -- with `zero`
      s≤s {k} {l} y -> sub n' m' y
```

Exercise. Write out the unification constraints for the pseudo-Haskell translation above.

Note that sub n m p computes the difference between n and m, while sub₀ and sub₁ extract it from the proof p. Note also that for sub n zero the third argument is always z≤n {n}, so we would like to write:

```agda
sub₂ : (n m : ℕ) → m ≤ n → ℕ
sub₂ n zero .(z≤n {n}) = n
sub₂ (succ n) (succ m) (s≤s .{m} .{n} y) = sub₂ n m y
```

but Agda doesn’t allow this. See below for why.

We can still write:

```agda
sub₃ : (n m : ℕ) → m ≤ n → ℕ
sub₃ n zero _ = n
sub₃ (succ n) (succ m) (s≤s .{m} .{n} y) = sub₃ n m y
```

Exercise. Translate the following definition into pseudo-Haskell with unification constraints:

```agda
sub₄ : (n m : ℕ) → m ≤ n → ℕ
sub₄ n zero (z≤n .{n}) = n
sub₄ (succ .n) (succ .m) (s≤s {m} {n} y) = sub₄ n m y
```

The moral is that dotted patterns are inlined unification constraints. This is why we couldn’t dot z≤n {n} in the first clause of sub₂: Agda didn’t generate such a constraint (it could have, had it tried a bit harder).

Propositional equality and Unification

We shall now define the most useful type family, that is, Martin-Löf’s equivalence (values-only version, though):

```agda
-- ≡ is \==
infix 4 _≡_
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x
```

For x y : A the type x ≡ y has exactly one constructor refl if x and y are convertible, i.e. there exists a z such that z →β✴ x and z →β✴ y, where →β✴ means “β-reduces in zero or more steps”. By a consequence of the Church–Rosser theorem and strong normalization, convertibility can be decided by normalization. This means that unification will both check convertibility and fill in any missing parts. In other words, for x y : A the type x ≡ y has exactly one constructor refl if x and y unify with each other.
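For instance (convTest is a made-up name), refl proves any equation whose sides normalize to the same term:

```agda
-- Both sides normalize to `succ zero`, so `refl` type checks.
convTest : (succ zero + zero) ≡ succ zero
convTest = refl
```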

Let’s prove some of _≡_’s properties:

```agda
-- _≡_ is symmetric
sym : {A : Set} {a b : A} → a ≡ b → b ≡ a
sym refl = refl

-- transitive
trans : {A : Set} {a b c : A} → a ≡ b → b ≡ c → a ≡ c
trans refl refl = refl

-- and congruent
cong : {A B : Set} {a b : A} → (f : A → B) → a ≡ b → f a ≡ f b
cong f refl = refl
```

Consider the case sym {A} {a} {b} (refl {x = a}). Matching on refl gives the equation [b = a], i.e. the clause actually is sym {A} {a} .{a} (refl {x = a}), which allows us to write refl on the right-hand side. The other proofs are similar.
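These combinators compose; e.g. (symCong is a made-up name of mine) flipping an equation under succ:

```agda
-- From a proof of `a ≡ b` we get `succ b ≡ succ a`:
-- `sym` flips the equation, `cong succ` applies succ to both sides.
symCong : {a b : ℕ} → a ≡ b → succ b ≡ succ a
symCong p = cong succ (sym p)
```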

Note, we can prove sym the other way:

```agda
sym′ : {A : Set} {a b : A} → a ≡ b → b ≡ a
sym′ {A} .{b} {b} refl = refl
```

sym packs a into refl . sym′ packs b . “Are these two definitions equal?” is an interesting philosophical question. (From the Agda’s point of view they are.)

Since dotted patterns are just unification constraints, you don’t have to dot implicit arguments when you don’t bind or match on them.

_≡_ type family is called “propositional equality”. In Agda’s standard library it has a bit more general definition, see below.

Proving things interactively

With _≡_ we can finally prove something from basic number theory. Let’s do this interactively.

Our first victim is the associativity of _+_:

```agda
+-assoc₀ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₀ a b c = {!!}
```

Note the mark {!!}. Anything of the form {!expr!}, with expr being any string (including the empty one), becomes a goal after the buffer is loaded by agda2-mode. Typing {!!} is quite tedious, so there is a shortcut: ?. All ? symbols are automatically transformed into {!!} when a buffer is loaded.

Goals appear as green holes in a buffer, pressing special key sequences while in a goal allows to ask Agda questions about and perform actions on a code. In this document “check me” in a goal means that this goal is not expected to be filled, it’s just an example.

Press C-c C-l (load) to load and typecheck the buffer.

Placing the cursor in the goal above (the green hole in the text) and pressing C-c C-c a RET (case split on a) gives (ignore the changes to the name of the function and the “check me”s everywhere):

```agda
+-assoc₁ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₁ zero b c = {!check me!}
+-assoc₁ (succ a) b c = {!check me!}
```

Press C-c C-, (goal type and context) while in the first goal. This will show the goal type and the context inside the hole. Write refl in there and press C-c C-r (refine), which gives:

```agda
+-assoc₂ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₂ zero b c = refl
+-assoc₂ (succ a) b c = {!check me!}
```

Press C-c C-f (next goal), write cong succ there, press C-c C-r (refine):

```agda
+-assoc₃ : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc₃ zero b c = refl
+-assoc₃ (succ a) b c = cong succ {!check me!}
```

Next goal, goal type and context, press C-c C-a (auto proof search):

```agda
+-assoc : ∀ a b c → (a + b) + c ≡ a + (b + c)
+-assoc zero b c = refl
+-assoc (succ a) b c = cong succ (+-assoc a b c)
```

Done.

(In Agda 2.3.2 you have to reload a buffer for proof search to work, it’s probably a bug.)

Similarly, we prove:

```agda
lemma-+zero : ∀ a → a + zero ≡ a
lemma-+zero zero = refl
lemma-+zero (succ a) = cong succ (lemma-+zero a)

lemma-+succ : ∀ a b → succ a + b ≡ a + succ b
lemma-+succ zero b = refl
lemma-+succ (succ a) b = cong succ (lemma-+succ a b)
```

The commutativity of _+_ is not hard to follow either:

```agda
-- A fun way to write transitivity
infixr 5 _~_
_~_ = trans

+-comm : ∀ a b → a + b ≡ b + a
+-comm zero b = sym (lemma-+zero b)
+-comm (succ a) b = cong succ (+-comm a b) ~ lemma-+succ b a
```

A nice way to “step” through a proof is to wrap some subexpression with {! !}, e.g.:

```agda
+-comm (succ a) b = cong succ {!(+-comm a b)!} ~ lemma-+succ b a
```

and ask for a type, context and inferred type of a goal with C-c C-l followed by C-c C-. , refine, wrap another subexpression, repeat. I dream of a better interface for this.

Solving type equations

The second case of +-comm is a pretty fun example for inferring implicit arguments by hand. Let’s do that. The algorithm is as follows:

First, expand all implicit arguments and all explicit arguments applied with _ in a term into “metavariables”, that is, special meta-level variables not bound anywhere in the program.

Look at the types of all symbols and construct a system of equations. For instance, if you see term1 term2 : D with term1 : A → B and term2 : C, add the equations A == C and B == D to the system.

Solve the system with help from unification. Two possible results: either all the metavariables got their values from the solution (success), or some didn’t. The latter is reported to the user as an “unsolved metas” type checking result. These act like warnings while you are not trying to compile or to type check in “safe mode”; in those two cases unsolved metas turn into errors.

Substitute the values of the metavariables back into the term.

Applying the first step of the algorithm to the term

```agda
trans (cong succ (+-comm a b)) (lemma-+succ b a)
```

gives:

```agda
trans {ma} {mb} {mc} {md} (cong {me} {mf} {mg} {mh} succ (+-comm a b)) (lemma-+succ b a)
```

with m* being metavariables.

Since a b : ℕ and _+_ : ℕ → ℕ → ℕ in the type of +-comm, this gives the following system (with duplicated equations and metavariable applications skipped):

```agda
trans (cong succ (+-comm a b)) (lemma-+succ b a)
  : _≡_ {ℕ} (succ a + b) (b + succ a)
trans (cong succ (+-comm a b)) (lemma-+succ b a)
  : _≡_ {ℕ} (succ (a + b)) (b + succ a) -- after normalization
ma = ℕ
mb = succ (a + b)
md = b + succ a

+-comm a b : _≡_ {ℕ} (a + b) (b + a)
mg = (a + b)
me = ℕ
mh = (b + a)
mf = ℕ

cong succ (+-comm a b) : _≡_ {ℕ} (succ (a + b)) (succ (b + a))
mc = succ (b + a)

lemma-+succ b a : _≡_ {ℕ} (succ b + a) (b + succ a)
lemma-+succ b a : _≡_ {ℕ} (succ (b + a)) (b + succ a) -- after normalization
```

The most awesome thing about this is that, from Agda’s point of view, a goal is just a metavariable of a special kind. When you ask for the type of a goal with C-c C-t or C-c C-, Agda prints everything it has for the corresponding metavariable. Funny things like ?0, ?1, etc. in agda2-mode outputs are references to these metavariables. For instance, in the following:

```agda
metaVarTest : Vec ℕ (div2 (succ zero)) → ℕ
metaVarTest = {!check me!}
```

the type of the goal mentions the name of the very first goal metavariable from this article.

By the way, to resolve datatype constructor overloading Agda infers the type a constructor call is expected to have at the call site, and unifies the inferred type with the types of all possible constructors of the same name. If no match is found, an error is reported. If more than one alternative is available, an unsolved meta (for the return type metavariable) is produced.
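As a small illustration (overloadTest is a name of mine): both List and Vec declare [] and _∷_, and the expected type disambiguates them.

```agda
-- The expected type `List ℕ` resolves `_∷_` and `[]`
-- to `List`'s constructors rather than `Vec`'s.
overloadTest : List ℕ
overloadTest = zero ∷ []
```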

Termination checking, well-founded induction

Work in progress.

Propositional equality exercises

Define multiplication by induction on the first argument:

module Exercise where
  infix 7 _*_
  _*_ : ℕ → ℕ → ℕ
  n * m = {!!}

so that the following proof works:

-- Distributivity.
*+-dist : ∀ a b c → (a + b) * c ≡ a * c + b * c
*+-dist zero b c = refl
-- λ is \lambda
*+-dist (succ a) b c = cong (λ x → c + x) (*+-dist a b c) ~ sym (+-assoc c (a * c) (b * c))

Now, fill in the following goals:

*-assoc : ∀ a b c → (a * b) * c ≡ a * (b * c)
*-assoc zero b c = refl
*-assoc (succ a) b c = *+-dist b (a * b) c ~ cong {!!} (*-assoc a b c)

lemma-*zero : ∀ a → a * zero ≡ zero
lemma-*zero a = {!!}

lemma-+swap : ∀ a b c → a + (b + c) ≡ b + (a + c)
lemma-+swap a b c = sym (+-assoc a b c) ~ {!!} ~ +-assoc b a c

lemma-*succ : ∀ a b → a + a * b ≡ a * succ b
lemma-*succ a b = {!!}

*-comm : ∀ a b → a * b ≡ b * a
*-comm a b = {!!}

Pressing C-c C-. while there is a term in a hole shows a goal type, context and the term’s inferred type. Incredibly useful key sequence for interactive proof editing.

Pattern matching with with

Consider the following implementation of a filter function in Haskell:

filter :: (a → Bool) → [a] → [a]
filter p [] = []
filter p (a : as) = case p a of
  True  -> a : (filter p as)
  False -> filter p as

It could be rewritten into Agda like this:

filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (a ∷ as) with p a
... | true  = a ∷ (filter p as)
... | false = filter p as

which doesn’t look very different. But desugaring ... by the rules of Agda syntax makes it a bit less similar:

filter₀ : {A : Set} → (A → Bool) → List A → List A
filter₀ p [] = []
filter₀ p (a ∷ as) with p a
filter₀ p (a ∷ as) | true  = a ∷ (filter₀ p as)
filter₀ p (a ∷ as) | false = filter₀ p as

There’s no direct analogue to case in Agda; the with construction allows pattern matching on intermediate expressions (just like Haskell’s case), but (unlike case) only at the top level of a clause. Thus with effectively just adds a “derived” argument to a function. Just like with normal arguments, pattern matching on a derived argument might change some types in the context.

The top level restriction simplifies all the dependently typed stuff (mainly related to dotted patterns), but makes some things a little more awkward (in most cases you can emulate case with a helper placed into a where block). Syntactically, vertical bars separate normal arguments from derived ones, and derived ones from each other.
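A sketch of the where-block emulation of case (the function and helper names are made up):

```agda
filterC : {A : Set} → (A → Bool) → List A → List A
filterC p [] = []
filterC p (a ∷ as) = go (p a)
  where
    -- `go` plays the role of the case expression on `p a`;
    -- `a`, `as` and `p` are in scope from the clause above
    go : Bool → List A
    go true  = a ∷ filterC p as
    go false = filterC p as
```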

Obfuscating the function above gives:

filterN : {A : Set} → (A → Bool) → List A → List A
filterN p [] = []
filterN p (a ∷ as) with p a
filterN p (a ∷ as) | true with as
filterN p (a ∷ as) | true | [] = a ∷ []
filterN p (a ∷ as) | true | b ∷ bs with p b
filterN p (a ∷ as) | true | b ∷ bs | true = a ∷ (b ∷ filterN p bs)
filterN p (a ∷ as) | true | b ∷ bs | false = a ∷ filterN p bs
filterN p (a ∷ as) | false = filterN p as

-- or alternatively

filterP : {A : Set} → (A → Bool) → List A → List A
filterP p [] = []
filterP p (a ∷ []) with p a
filterP p (a ∷ []) | true = a ∷ []
filterP p (a ∷ []) | false = []
filterP p (a ∷ (b ∷ bs)) with p a | p b
filterP p (a ∷ (b ∷ bs)) | true | true = a ∷ (b ∷ filterP p bs)
filterP p (a ∷ (b ∷ bs)) | true | false = a ∷ filterP p bs
filterP p (a ∷ (b ∷ bs)) | false | true = b ∷ filterP p bs
filterP p (a ∷ (b ∷ bs)) | false | false = filterP p bs

which shows that with can be nested and that multiple matches can be done in parallel.

Let us prove that all these functions produce equal results when applied to equal arguments:

filter≡filterN₀ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₀ p [] = refl
filter≡filterN₀ p (a ∷ as) = {! check me !}

Note the goal type (filter p (a ∷ as) | p a) ≡ (filterN p (a ∷ as) | p a), which shows p a as a derived argument to the filter function.

Remember that to reduce a + b we had to match on a in the proofs above; matching on b gave nothing interesting because _+_ was defined by induction on the first argument. Similarly, to finish the filter≡filterN proof we have to match on p a, as and p b, essentially duplicating the form of the filterN term:

filter≡filterN : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN p [] = refl
filter≡filterN p (a ∷ as) with p a
filter≡filterN p (a ∷ as) | true with as
filter≡filterN p (a ∷ as) | true | [] = refl
filter≡filterN p (a ∷ as) | true | b ∷ bs with p b
filter≡filterN p (a ∷ as) | true | b ∷ bs | true = cong (λ x → a ∷ (b ∷ x)) (filter≡filterN p bs)
filter≡filterN p (a ∷ as) | true | b ∷ bs | false = cong (_∷_ a) (filter≡filterN p bs)
filter≡filterN p (a ∷ as) | false = filter≡filterN p as

Exercise. Guess the types of filter≡filterP and filterN≡filterP. Decide which of these is easier to prove, prove it (and get the other one almost for free by transitivity).

Rewriting with with and Unification

When playing with the proofs about filters you might have noticed that with does something interesting with the goal.

In the following hole:

filter≡filterN₁ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₁ p [] = refl
filter≡filterN₁ p (a ∷ as) = {! check me !}

the type of the goal is (filter p (a ∷ as) | p a) ≡ (filterN p (a ∷ as) | p a). But after the following with:

filter≡filterN₂ : {A : Set} → (p : A → Bool) → (as : List A) → filter p as ≡ filterN p as
filter≡filterN₂ p [] = refl
filter≡filterN₂ p (a ∷ as) with p a | as
... | r | rs = {! check me !}

it becomes (filter p (a ∷ rs) | r) ≡ (filterN p (a ∷ rs) | r).

The same might happen not only to the goal but to the context as a whole:

strange-id : {A : Set} {B : A → Set} → (a : A) → (b : B a) → B a
strange-id {A} {B} a ba with B a
... | r = {! check me !}

In the hole, both the type of ba and the goal’s type are r.

From these observations we conclude that with expr creates a new variable, say w, and “backwards-substitutes” expr to w in the context, changing all occurrences of expr in the types of the context to w. This means that in the resulting context every type that had expr as a subterm starts depending on w.

This property allows using with for rewriting:

lemma-+zero′ : ∀ a → a + zero ≡ a
lemma-+zero′ zero = refl
lemma-+zero′ (succ a) with a + zero | lemma-+zero′ a
lemma-+zero′ (succ a) | ._ | refl = refl

-- same expression with expanded underscore:
lemma-+zero′₀ : ∀ a → a + zero ≡ a
lemma-+zero′₀ zero = refl
lemma-+zero′₀ (succ a) with a + zero | lemma-+zero′₀ a
lemma-+zero′₀ (succ a) | .a | refl = refl

In these terms a + zero is replaced by a new variable, say w, which gives lemma-+zero′ a : w ≡ a. Pattern matching on refl gives [ w = a ] and so the dotted pattern appears. After that the goal type becomes succ a ≡ succ a.

This pattern:

f ps with a | eqn
... | ._ | refl = rhs

is so common that it has its own shorthand:

f ps rewrite eqn = rhs
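For instance (a sketch; the name is made up, but the library code below uses exactly this style), lemma-+zero′ shrinks to:

```agda
lemma-+zero-r : ∀ a → a + zero ≡ a
lemma-+zero-r zero = refl
-- rewriting the goal succ (a + zero) ≡ succ a with the induction
-- hypothesis a + zero ≡ a leaves succ a ≡ succ a
lemma-+zero-r (succ a) rewrite lemma-+zero-r a = refl
```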

Exercise. Prove (on paper) that rewriting a goal type with with and propositional equality is syntactic sugar for expressions built from refl, sym, trans and cong.

Universes and postulates

When moving from Haskell to Agda expression “every type is of kind * , i.e. for any type X , X : * ” transforms into “every ground type is of type Set , i.e. for any ground type X , X : Set ”. If we are willing to be consistent, we can’t afford Set : Set because it leads to a number of paradoxes (more on them below). Still, we might want to construct things like “a list of types” and our current implementation of List can not express this.

To solve this problem Agda introduces an infinite tower of Set s, i.e. Set0 : Set1 , Set1 : Set2 , and so on with Set being an alias for Set0 . Agda is also a predicative system which means that Set0 → Set0 : Set1 , Set0 → Set1 : Set2 , and so on, but not Set0 → Set1 : Set1 . Note, however, that this tower is not cumulative, e.g. Set0 : Set2 and Set0 → Set1 : Set3 are false typing judgments.
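A sketch of these judgments as Agda code (the names are made up; the commented-out definition is the one the type checker rejects):

```agda
levelTest₀ : Set1
levelTest₀ = Set            -- Set (= Set0) : Set1

levelTest₁ : Set2
levelTest₁ = Set → Set1     -- predicativity: Set0 → Set1 : Set2

-- levelTest₂ : Set2
-- levelTest₂ = Set         -- rejected: the tower is not cumulative
```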

[As far as I know, in theory nothing prevents us from making the tower cumulative, it’s just so happened that Agda selected this route and not another. Predicativity is a much more subtle matter (more on that below).]

A list of types now becomes:

data List1 (A : Set1) : Set1 where
  [] : List1 A
  _∷_ : A → List1 A → List1 A

which looks very much like the usual List definition.

To prevent code duplication like that, Agda allows universe polymorphic definitions. To define the type Level of universe levels we need a bit of postulate magic:

postulate Level : Set
postulate lzero : Level
postulate lsucc : Level → Level
postulate _⊔_ : Level → Level → Level

Postulates essentially define propositions without proofs, i.e. they say “trust me, I know this to be true”. Obviously, this can be exploited to infer contradictions:

postulate undefined : {A : Set} → A

but for every postulate Agda expects to be safe there is a BUILTIN pragma that checks the postulated definition, promoting it from a simple postulate to an axiom. For Level there are the following:

{-# BUILTIN LEVEL Level #-}
{-# BUILTIN LEVELZERO lzero #-}
{-# BUILTIN LEVELSUC lsucc #-}
{-# BUILTIN LEVELMAX _⊔_ #-}

The Level type works very much like ℕ. It has two constructors lzero and lsucc that signify zero and successor; there is also an operator _⊔_ which returns the maximum of its arguments. The difference between ℕ and Level is that we are not allowed to pattern match on elements of the latter.

Given the definition above, expression Set α for α : Level means “the Set of level α ”.
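For instance (a made-up example in the spirit of the definitions below), a pair whose components live in different universes must itself live in the maximum of the two levels:

```agda
data PPair {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
  _,_ : A → B → PPair A B
```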

We are now able to define a universe polymorphic list in the following way:

data PList₀ {α : Level} (A : Set α) : Set α where
  [] : PList₀ A
  _∷_ : A → PList₀ A → PList₀ A

-- or a bit nicer:
data PList₁ {α} (A : Set α) : Set α where
  [] : PList₁ A
  _∷_ : A → PList₁ A → PList₁ A

It is interesting to note that Agda could have gone another route, say “GHC route”, by defining all the builtin things inside with fixed names. Instead, BUILTIN pragma allows us to redefine the names of the builtins, which is very helpful when we want to write our own standard library. This is exactly what we are now going to do.

Library

Modules and the end of throw away code

Note that we have been writing everything above inside a module called ThrowAwayIntroduction. From here on we are going to (mostly) forget about it and write a small standard library for Agda from scratch. The idea is to remove any module with a name prefixed by “ThrowAway” from this file to produce the library code. To make the implementation of this idea as simple as possible we place markers like:

{- end of ThrowAwayIntroduction -}

at the ends of throw away code. This allows generating the library with a simple shell command:

cat BrutalDepTypes.lagda | sed '/^\\begin{code}/,/^\\end{code}/ ! d; /^\\begin{code}/ d; /^\\end{code}/ c \
' | sed '/^ *module ThrowAway/,/^ *.- end of ThrowAway/ d;'

We are now going to redefine everything useful from above in a universe polymorphic way (when applicable), starting with Levels:

module Level where
  -- Universe levels
  postulate Level : Set
  postulate lzero : Level
  postulate lsucc : Level → Level
  -- input for ⊔ is \sqcup
  postulate _⊔_ : Level → Level → Level

  infixl 5 _⊔_

  -- Make them work
  {-# BUILTIN LEVEL Level #-}
  {-# BUILTIN LEVELZERO lzero #-}
  {-# BUILTIN LEVELSUC lsucc #-}
  {-# BUILTIN LEVELMAX _⊔_ #-}

Each module in Agda has an export list. Everything defined in a module gets appended to it. To place things defined for export in another module into the current context there is an open construct:

open ModuleName

This doesn’t append ModuleName’s export list to the current module’s export list. To do that we need to add the public keyword at the end:

open Level public

Understand what is going on in the types of the following functions:

module Function where
  -- Dependent application
  infixl 0 _$_
  _$_ : ∀ {α β} → {A : Set α} {B : A → Set β} →
        (f : (x : A) → B x) → ((x : A) → B x)
  f $ x = f x

  -- Simple application
  infixl 0 _$′_
  _$′_ : ∀ {α β} → {A : Set α} {B : Set β} →
         (A → B) → (A → B)
  f $′ x = f $ x

  -- input for ∘ is \o
  -- Dependent composition
  _∘_ : ∀ {α β γ} →
        {A : Set α} {B : A → Set β} {C : {x : A} → B x → Set γ} →
        (f : {x : A} → (y : B x) → C y) → (g : (x : A) → B x) →
        ((x : A) → C (g x))
  f ∘ g = λ x → f (g x)

  -- Simple composition
  _∘′_ : ∀ {α β γ} → {A : Set α} {B : Set β} {C : Set γ} →
         (B → C) → (A → B) → (A → C)
  f ∘′ g = f ∘ g

  -- Flip
  flip : ∀ {α β γ} → {A : Set α} {B : Set β} {C : A → B → Set γ} →
         ((x : A) → (y : B) → C x y) → ((y : B) → (x : A) → C x y)
  flip f x y = f y x

  -- Identity
  id : ∀ {α} {A : Set α} → A → A
  id x = x

  -- Constant function
  const : ∀ {α β} → {A : Set α} {B : Set β} → (A → B → A)
  const x y = x

open Function public

Especially note the scopes of variable bindings in types.

Logic

Intuitionistic Logic module:

module Logic where
  -- input for ⊥ is \bot
  -- False proposition
  data ⊥ : Set where

  -- input for ⊤ is \top
  -- True proposition
  record ⊤ : Set where

  -- ⊥ implies anything at any universe level
  ⊥-elim : ∀ {α} {A : Set α} → ⊥ → A
  ⊥-elim ()

Propositional negation is defined as follows:

  -- input for ¬ is \lnot
  ¬ : ∀ {α} → Set α → Set α
  ¬ P = P → ⊥

The technical part of the idea of this definition is that the principle of explosion (“from a contradiction, anything follows”) gets a pretty straightforward proof.

Prove the following propositions:

module ThrowAwayExercise where
  contradiction : ∀ {α β} {A : Set α} {B : Set β} → A → ¬ A → B
  contradiction = {!!}

  contraposition : ∀ {α β} {A : Set α} {B : Set β} → (A → B) → (¬ B → ¬ A)
  contraposition = {!!}

  contraposition¬ : ∀ {α β} {A : Set α} {B : Set β} → (A → ¬ B) → (B → ¬ A)
  contraposition¬ = {!!}

  →¬² : ∀ {α} {A : Set α} → A → ¬ (¬ A)
  →¬² a = {!!}

  ¬³→¬ : ∀ {α} {A : Set α} → ¬ (¬ (¬ A)) → ¬ A
  ¬³→¬ = {!!}

Hint. Use C-c C-, here to see the goal type in its normal form.

From a more logical standpoint the idea of ¬ is that a false proposition P should be isomorphic to ⊥ (i.e. they should imply each other: (⊥ → P) ∧ (P → ⊥)). Since ⊥ → P is true for all P there is only P → ⊥ left for us to prove.

From a computational point of view, having a variable of type ⊥ in a context means that there is no way execution of a program could reach this point. This means we can match on the variable with an absurd pattern; ⊥-elim does exactly that.
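As a small sketch (the name is made up): since zero ≡ succ n has no constructors, an absurd pattern proves the negation directly — a value of the false proposition can never reach us:

```agda
zero≢succ : ∀ {n} → ¬ (zero ≡ succ n)
zero≢succ ()
```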

Note that, being an intuitionistic system, Agda has no means to prove the “double negation” rule. See for yourself:

  ¬²→ : ∀ {α} {A : Set α} → ¬ (¬ A) → A
  ¬²→ ¬¬a = {! check me !}
{- end of ThrowAwayExercise -}

By the way, the proofs in the exercise above amounted to a serious scientific paper at the start of the 20th century.

Solution for the exercise:

private module DummyAB {α β} {A : Set α} {B : Set β} where
  contradiction : A → ¬ A → B
  contradiction a ¬a = ⊥-elim (¬a a)

  contraposition : (A → B) → (¬ B → ¬ A)
  contraposition = flip _∘′_

  contraposition¬ : (A → ¬ B) → (B → ¬ A)
  contraposition¬ = flip
open DummyAB public

private module DummyA {α} {A : Set α} where
  →¬² : A → ¬ (¬ A)
  →¬² = contradiction

  ¬³→¬ : ¬ (¬ (¬ A)) → ¬ A
  ¬³→¬ ¬³a = ¬³a ∘′ →¬²
open DummyA public

Exercise. Understand this solution.

Note clever module usage. Opening a module with parameters prefixes types of all the things defined there with these parameters. We will use this trick a lot.
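A minimal sketch of the trick (module and function names are made up): opening a parametrized module generalizes everything inside over its parameters:

```agda
private module Shared {α} {A : Set α} where
  -- inside the module A is fixed
  twice : (A → A) → (A → A)
  twice f = f ∘ f
open Shared public
-- outside: twice : ∀ {α} {A : Set α} → (A → A) → (A → A)
```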

Let us define conjunction, disjunction, and logical equivalence:

  -- input for ∧ is \and
  record _∧_ {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
    constructor _,′_
    field
      fst : A
      snd : B
  open _∧_ public

  -- input for ∨ is \or
  data _∨_ {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
    inl : A → A ∨ B
    inr : B → A ∨ B

  -- input for ↔ is \<->
  _↔_ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
  A ↔ B = (A → B) ∧ (B → A)

Make all this goodness available:

open Logic public

MLTT: types and properties

Some definitions from Per Martin-Löf’s type theory [12]:

module MLTT where
  -- input for ≡ is \==
  -- Propositional equality
  infix 4 _≡_
  data _≡_ {α} {A : Set α} (x : A) : A → Set α where
    refl : x ≡ x

  -- input for Σ is \Sigma
  -- Dependent pair
  record Σ {α β} (A : Set α) (B : A → Set β) : Set (α ⊔ β) where
    constructor _,_
    field
      projl : A
      projr : B projl
  open Σ public

  -- Make rewrite syntax work
  {-# BUILTIN EQUALITY _≡_ #-}
  {-# BUILTIN REFL refl #-}

The Σ type is a dependent version of _∧_ (the second field depends on the first), i.e. _∧_ is a specific case of Σ:

  -- input for × is \x
  _×_ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
  A × B = Σ A (λ _ → B)

  ×↔∧ : ∀ {α β} {A : Set α} {B : Set β} → (A × B) ↔ (A ∧ B)
  ×↔∧ = (λ z → projl z ,′ projr z) ,′ (λ z → fst z , snd z)

Personally, I use both _∧_ and _×_ occasionally since _×_ looks ugly in the normal form and makes goal types hard to read.

Some properties:

  module ≡-Prop where
    private module DummyA {α} {A : Set α} where
      -- _≡_ is symmetric
      sym : {x y : A} → x ≡ y → y ≡ x
      sym refl = refl

      -- _≡_ is transitive
      trans : {x y z : A} → x ≡ y → y ≡ z → x ≡ z
      trans refl refl = refl

      -- _≡_ is substitutive
      subst : ∀ {γ} {P : A → Set γ} {x y} → x ≡ y → P x → P y
      subst refl p = p

    private module DummyAB {α β} {A : Set α} {B : Set β} where
      -- _≡_ is congruent
      cong : ∀ (f : A → B) {x y} → x ≡ y → f x ≡ f y
      cong f refl = refl

      subst₂ : ∀ {ℓ} {P : A → B → Set ℓ} {x y u v} → x ≡ y → u ≡ v → P x u → P y v
      subst₂ refl refl p = p

    private module DummyABC {α β γ} {A : Set α} {B : Set β} {C : Set γ} where
      cong₂ : ∀ (f : A → B → C) {x y u v} → x ≡ y → u ≡ v → f x u ≡ f y v
      cong₂ f refl refl = refl

    open DummyA public
    open DummyAB public
    open DummyABC public

Make all this goodness available:

open MLTT public

Decidable propositions

module Decidable where

A decidable proposition is a proposition that has an explicit proof or disproof:

  data Dec {α} (A : Set α) : Set α where
    yes : ( a :   A) → Dec A
    no  : (¬a : ¬ A) → Dec A

This datatype is very much like Bool , except it also explains why the proposition holds or why it must not.

Decidable propositions are the glue that make your program work with the real world.

Suppose we want to write a program that reads a natural number, say n, from stdin and divides it by two with div2E. To do that we need a proof that n is Even. The easiest way to get one is to define a function that decides if a given natural is Even:

module ThrowAwayExample₁ where
  open ThrowAwayIntroduction

  ¬Even+2 : ∀ {n} → ¬ (Even n) → ¬ (Even (succ (succ n)))
  ¬Even+2 ¬en (e2succ en) = contradiction en ¬en

  Even? : ∀ n → Dec (Even n)
  Even? zero = yes ezero
  Even? (succ zero) = no (λ ())  -- note an absurd pattern in
                                 -- an anonymous lambda expression
  Even? (succ (succ n)) with Even? n
  ... | yes a  = yes (e2succ a)
  ... | no  a¬ = no (¬Even+2 a¬)
{- end of ThrowAwayExample₁ -}

then read n from stdin, feed it to Even?, match on the result and call div2E if n is Even.

The same idea applies to almost everything:

Want to write a parser? Parser is a procedure that decides if a string conforms to a syntax.

Want to type check a program? Type checker is a procedure that decides if the program conforms to a given set of typing rules.

Want an optimizing compiler? Parse, match on yes , type check, match on yes , optimize typed representation, generate output.

And so on.

Using the same idea we can define decidable dichotomous and trichotomous propositions:

  data Di {α β} (A : Set α) (B : Set β) : Set (α ⊔ β) where
    diyes : ( a :   A) (¬b : ¬ B) → Di A B
    dino  : (¬a : ¬ A) ( b :   B) → Di A B

  data Tri {α β γ} (A : Set α) (B : Set β) (C : Set γ) : Set (α ⊔ (β ⊔ γ)) where
    tri< : ( a :   A) (¬b : ¬ B) (¬c : ¬ C) → Tri A B C
    tri≈ : (¬a : ¬ A) ( b :   B) (¬c : ¬ C) → Tri A B C
    tri> : (¬a : ¬ A) (¬b : ¬ B) ( c :   C) → Tri A B C

Make all this goodness available:

open Decidable public

Natural numbers: operations, properties and relations

Consider this to be the answer (encrypted with rewrites) for the exercise way above:

module Data-ℕ where
  -- Natural numbers (non-negative integers)
  data ℕ : Set where
    zero : ℕ
    succ : ℕ → ℕ

  module ℕ-Rel where
    infix 4 _≤_ _<_ _>_

    data _≤_ : ℕ → ℕ → Set where
      z≤n : ∀ {n} → zero ≤ n
      s≤s : ∀ {n m} → n ≤ m → succ n ≤ succ m

    _<_ : ℕ → ℕ → Set
    n < m = succ n ≤ m

    _>_ : ℕ → ℕ → Set
    n > m = m < n

    ≤-unsucc : ∀ {n m} → succ n ≤ succ m → n ≤ m
    ≤-unsucc (s≤s a) = a

    <-¬refl : ∀ n → ¬ (n < n)
    <-¬refl zero ()
    <-¬refl (succ n) (s≤s p) = <-¬refl n p

    ≡→≤ : ∀ {n m} → n ≡ m → n ≤ m
    ≡→≤ {zero} refl = z≤n
    ≡→≤ {succ n} refl = s≤s (≡→≤ {n} refl) -- Note this

    ≡→¬< : ∀ {n m} → n ≡ m → ¬ (n < m)
    ≡→¬< refl = <-¬refl _

    ≡→¬> : ∀ {n m} → n ≡ m → ¬ (n > m)
    ≡→¬> refl = <-¬refl _

    <→¬≡ : ∀ {n m} → n < m → ¬ (n ≡ m)
    <→¬≡ = contraposition¬ ≡→¬<

    >→¬≡ : ∀ {n m} → n > m → ¬ (n ≡ m)
    >→¬≡ = contraposition¬ ≡→¬>

    <→¬> : ∀ {n m} → n < m → ¬ (n > m)
    <→¬> {zero} (s≤s z≤n) ()
    <→¬> {succ n} (s≤s p<) p> = <→¬> p< (≤-unsucc p>)

    >→¬< : ∀ {n m} → n > m → ¬ (n < m)
    >→¬< = contraposition¬ <→¬>

  module ℕ-Op where
    open ≡-Prop

    pred : ℕ → ℕ
    pred zero = zero
    pred (succ n) = n

    infixl 6 _+_
    _+_ : ℕ → ℕ → ℕ
    zero + n = n
    succ n + m = succ (n + m)

    infixr 7 _*_
    _*_ : ℕ → ℕ → ℕ
    zero * m = zero
    succ n * m = m + (n * m)

    private module Dummy₀ where
      lemma-+zero : ∀ a → a + zero ≡ a
      lemma-+zero zero = refl
      lemma-+zero (succ a) rewrite lemma-+zero a = refl

      lemma-+succ : ∀ a b → succ a + b ≡ a + succ b
      lemma-+succ zero b = refl
      lemma-+succ (succ a) b rewrite lemma-+succ a b = refl
    open Dummy₀

    -- + is associative
    +-assoc : ∀ a b c → (a + b) + c ≡ a + (b + c)
    +-assoc zero b c = refl
    +-assoc (succ a) b c rewrite (+-assoc a b c) = refl

    -- + is commutative
    +-comm : ∀ a b → a + b ≡ b + a
    +-comm zero b = sym $ lemma-+zero b
    +-comm (succ a) b rewrite +-comm a b | lemma-+succ b a = refl

    -- * is distributive by +
    *+-dist : ∀ a b c → (a + b) * c ≡ a * c + b * c
    *+-dist zero b c = refl
    *+-dist (succ a) b c rewrite *+-dist a b c | +-assoc c (a * c) (b * c) = refl

    -- * is associative
    *-assoc : ∀ a b c → (a * b) * c ≡ a * (b * c)
    *-assoc zero b c = refl
    *-assoc (succ a) b c rewrite *+-dist b (a * b) c | *-assoc a b c = refl

    private module Dummy₁ where
      lemma-*zero : ∀ a → a * zero ≡ zero
      lemma-*zero zero = refl
      lemma-*zero (succ a) = lemma-*zero a

      lemma-+swap : ∀ a b c → a + (b + c) ≡ b + (a + c)
      lemma-+swap a b c rewrite sym (+-assoc a b c) | +-comm a b | +-assoc b a c = refl

      lemma-*succ : ∀ a b → a + a * b ≡ a * succ b
      lemma-*succ zero b = refl
      lemma-*succ (succ a) b rewrite lemma-+swap a b (a * b) | lemma-*succ a b = refl
    open Dummy₁

    -- * is commutative
    *-comm : ∀ a b → a * b ≡ b * a
    *-comm zero b = sym $ lemma-*zero b
    *-comm (succ a) b rewrite *-comm a b | lemma-*succ b a = refl

  module ℕ-RelOp where
    open ℕ-Rel
    open ℕ-Op
    open ≡-Prop

    infix 4 _≡?_ _≤?_ _<?_

    _≡?_ : (n m : ℕ) → Dec (n ≡ m)
    zero ≡? zero = yes refl
    zero ≡? succ m = no (λ ())
    succ n ≡? zero = no (λ ())
    succ n ≡? succ m with n ≡? m
    succ .m ≡? succ m | yes refl = yes refl
    succ n  ≡? succ m | no ¬a = no (¬a ∘ cong pred) -- Note this

    _≤?_ : (n m : ℕ) → Dec (n ≤ m)
    zero ≤? m = yes z≤n
    succ n ≤? zero = no (λ ())
    succ n ≤? succ m with n ≤? m
    ... | yes a = yes (s≤s a)
    ... | no ¬a = no (¬a ∘ ≤-unsucc)

    _<?_ : (n m : ℕ) → Dec (n < m)
    n <? m = succ n ≤? m

    cmp : (n m : ℕ) → Tri (n < m) (n ≡ m) (n > m)
    cmp zero zero = tri≈ (λ ()) refl (λ ())
    cmp zero (succ m) = tri< (s≤s z≤n) (λ ()) (λ ())
    cmp (succ n) zero = tri> (λ ()) (λ ()) (s≤s z≤n)
    cmp (succ n) (succ m) with cmp n m
    cmp (succ n) (succ m) | tri< a ¬b ¬c = tri< (s≤s a) (¬b ∘ cong pred) (¬c ∘ ≤-unsucc)
    cmp (succ n) (succ m) | tri≈ ¬a b ¬c = tri≈ (¬a ∘ ≤-unsucc) (cong succ b) (¬c ∘ ≤-unsucc)
    cmp (succ n) (succ m) | tri> ¬a ¬b c = tri> (¬a ∘ ≤-unsucc) (¬b ∘ cong pred) (s≤s c)

open Data-ℕ public

Exercise. Understand this. Now, remove all term bodies from ℕ-Rel and ℕ-RelOp and reimplement everything yourself.

Lists and Vectors

module Data-List where
  -- List
  infixr 5 _∷_
  data List {α} (A : Set α) : Set α where
    [] : List A
    _∷_ : A → List A → List A

  module List-Op where
    private module DummyA {α} {A : Set α} where
      -- Singleton `List`
      [_] : A → List A
      [ a ] = a ∷ []

      -- Concatenation for `List`s
      infixr 5 _++_
      _++_ : List A → List A → List A
      [] ++ bs = bs
      (a ∷ as) ++ bs = a ∷ (as ++ bs)

      -- Filtering with decidable propositions
      filter : ∀ {β} {P : A → Set β} → (∀ a → Dec (P a)) → List A → List A
      filter p [] = []
      filter p (a ∷ as) with p a
      ... | yes _ = a ∷ (filter p as)
      ... | no  _ = filter p as
    open DummyA public

module Data-Vec where
  -- Vector
  infixr 5 _∷_
  data Vec {α} (A : Set α) : ℕ → Set α where
    [] : Vec A zero
    _∷_ : ∀ {n} → A → Vec A n → Vec A (succ n)

  module Vec-Op where
    open ℕ-Op
    private module DummyA {α} {A : Set α} where
      -- Singleton `Vec`
      [_] : A → Vec A (succ zero)
      [ a ] = a ∷ []

      -- Concatenation for `Vec`s
      infixr 5 _++_
      _++_ : ∀ {n m} → Vec A n → Vec A m → Vec A (n + m)
      [] ++ bs = bs
      (a ∷ as) ++ bs = a ∷ (as ++ bs)

      head : ∀ {n} → Vec A (succ n) → A
      head (a ∷ as) = a

      tail : ∀ {n} → Vec A (succ n) → Vec A n
      tail (a ∷ as) = as
    open DummyA public
{- Work in progress. TODO.
I find the following definition quite amusing:

module VecLists where
  open Data-Vec
  private module DummyA {α} {A : Set α} where
    VecList = Σ ℕ (Vec A)
-}

Being in a List

Indexing allows defining pretty fun things:

module ThrowAwayMore₁ where
  open Data-List
  open List-Op

  -- input for ∈ is \in
  -- `a` is in `List`
  data _∈_ {α} {A : Set α} (a : A) : List A → Set α where
    here  : ∀ {as} → a ∈ (a ∷ as)
    there : ∀ {b as} → a ∈ as → a ∈ (b ∷ as)

  -- input for ⊆ is \sub=
  -- `xs` is a sublist of `ys`
  _⊆_ : ∀ {α} {A : Set α} → List A → List A → Set α
  as ⊆ bs = ∀ {x} → x ∈ as → x ∈ bs

The _∈_ relation says that “being in a List” for an element a : A means that a is in the head of the List or in its tail. For some a and as, a value of type a ∈ as, that is “a is in the list as”, is a position of the element a in as (there might be any number of elements of this type). The relation _⊆_, that is “being a sublist”, carries a function that for each x in as gives its position in bs.

Examples:

listTest₁ = zero ∷ zero ∷ succ zero ∷ []
listTest₂ = zero ∷ succ zero ∷ []

∈Test₀ : zero ∈ listTest₁
∈Test₀ = here

∈Test₁ : zero ∈ listTest₁
∈Test₁ = there here

⊆Test : listTest₂ ⊆ listTest₁
⊆Test here = here
⊆Test (there here) = there (there here)
⊆Test (there (there ()))

Let us prove some properties for the _⊆_ relation:

⊆-++-left : ∀ {A : Set} (as bs : List A) → as ⊆ (bs ++ as)
⊆-++-left as [] n = n
⊆-++-left as (b ∷ bs) n = there (⊆-++-left as bs n)

⊆-++-right : ∀ {A : Set} (as bs : List A) → as ⊆ (as ++ bs)
⊆-++-right [] bs ()
⊆-++-right (a ∷ as) bs here = here
⊆-++-right (a ∷ as) bs (there n) = there (⊆-++-right as bs n)
{- end of ThrowAwayMore₁ -}

Note how these proofs renumber elements of a given list.

Being in a List generalized: Any

By generalizing the _∈_ relation from propositional equality (in x ∈ (x ∷ xs) both occurrences of x are propositionally equal) to arbitrary predicates we arrive at:

module Data-Any where
  open Data-List
  open List-Op

  -- Some element of a `List` satisfies `P`
  data Any {α γ} {A : Set α} (P : A → Set γ) : List A → Set (α ⊔ γ) where
    here  : ∀ {a as} → (pa : P a) → Any P (a ∷ as)
    there : ∀ {a as} → (pas : Any P as) → Any P (a ∷ as)

  module Membership {α β γ} {A : Set α} {B : Set β} (P : B → A → Set γ) where
    -- input for ∈ is \in
    -- `P b a` holds for some element `a` from the `List`
    -- when P is `_≡_` this becomes the usual "is in" relation
    _∈_ : B → List A → Set (α ⊔ γ)
    b ∈ as = Any (P b) as

    -- input for ∉ is \notin
    _∉_ : B → List A → Set (α ⊔ γ)
    b ∉ as = ¬ (b ∈ as)

    -- input for ⊆ is \sub=
    _⊆_ : List A → List A → Set (α ⊔ β ⊔ γ)
    as ⊆ bs = ∀ {x} → x ∈ as → x ∈ bs

    -- input for ⊈ is \sub=n
    _⊈_ : List A → List A → Set (α ⊔ β ⊔ γ)
    as ⊈ bs = ¬ (as ⊆ bs)

    -- input for ⊇ is \sup=
    _⊆⊇_ : List A → List A → Set (α ⊔ β ⊔ γ)
    as ⊆⊇ bs = (as ⊆ bs) ∧ (bs ⊆ as)

    ⊆-refl : ∀ {as} → as ⊆ as
    ⊆-refl = id

    ⊆-trans : ∀ {as bs cs} → as ⊆ bs → bs ⊆ cs → as ⊆ cs
    ⊆-trans f g = g ∘ f

    ⊆⊇-refl : ∀ {as} → as ⊆⊇ as
    ⊆⊇-refl = id ,′ id

    ⊆⊇-sym : ∀ {as bs} → as ⊆⊇ bs → bs ⊆⊇ as
    ⊆⊇-sym (f ,′ g) = g ,′ f

    ⊆⊇-trans : ∀ {as bs cs} → as ⊆⊇ bs → bs ⊆⊇ cs → as ⊆⊇ cs
    ⊆⊇-trans f g = (fst g ∘ fst f) ,′ (snd f ∘ snd g)

    ∉[] : ∀ {b} → b ∉ []
    ∉[] ()

    -- When P is `_≡_` this becomes `b ∈ [ a ] → b ≡ a`
    ∈singleton→P : ∀ {a b} → b ∈ [ a ] → P b a
    ∈singleton→P (here pba) = pba
    ∈singleton→P (there ())

    P→∈singleton : ∀ {a b} → P b a → b ∈ [ a ]
    P→∈singleton pba = here pba

    ⊆-++-left : (as bs : List A) → as ⊆ (bs ++ as)
    ⊆-++-left as [] n = n
    ⊆-++-left as (b ∷ bs) n = there (⊆-++-left as bs n)

    ⊆-++-right : (as : List A) (bs : List A) → as ⊆ (as ++ bs)
    ⊆-++-right [] bs ()
    ⊆-++-right (x ∷ as) bs (here pa) = here pa
    ⊆-++-right (x ∷ as) bs (there n) = there (⊆-++-right as bs n)

    ⊆-filter : ∀ {σ} {Q : A → Set σ} → (q : ∀ x → Dec (Q x)) →
               (as : List A) → filter q as ⊆ as
    ⊆-filter q [] ()
    ⊆-filter q (a ∷ as) n with q a
    ⊆-filter q (a ∷ as) (here pa) | yes qa = here pa
    ⊆-filter q (a ∷ as) (there n) | yes qa = there (⊆-filter q as n)
    ⊆-filter q (a ∷ as) n         | no ¬qa = there (⊆-filter q as n)

Note how general this code is. ⊆-filter covers a broad set of propositions, with “filtered list is a sublist (in the usual sense) of the original list” being a special case. Do C-c C-. in the following goal and explain the type:

module ThrowAwayMore₂ where
  goal = {! Data-Any.Membership.⊆-filter !}
{- end of ThrowAwayMore₂ -}

Explain the types of all the terms in Membership module.

Dual predicate: All

{- Work in progress. TODO.
I didn't have a chance to use `All` yet (and I'm too lazy to implement
this module right now), but here is the definition:

module Data-All where
  open Data-List

  -- All elements of a `List` satisfy `P`
  data All {α β} {A : Set α} (P : A → Set β) : List A → Set (α ⊔ β) where
    []∀  : All P []
    _∷∀_ : ∀ {a as} → P a → All P as → All P (a ∷ as)
-}

Booleans

Are not that needed with Dec , actually.

module Data-Bool where
  -- Booleans
  data Bool : Set where
    true false : Bool

  module Bool-Op where
    if_then_else_ : ∀ {α} {A : Set α} → Bool → A → A → A
    if true  then a else _ = a
    if false then _ else b = b

    not : Bool → Bool
    not true  = false
    not false = true

    and : Bool → Bool → Bool
    and true  x = x
    and false _ = false

    or : Bool → Bool → Bool
    or false x = x
    or true  x = true

open Data-Bool public

Other stuff

Work in progress. TODO. We need to prove something from A to Z. Quicksort maybe.

Pre-theoretical corner

This section discusses interesting things about Agda which are somewhere in between practice and pure theory.

module ThrowAwayPreTheory where
  open ≡-Prop
  open ℕ-Op

Equality and unification

Rewriting with equality hides a couple of catches.

Remember the term of lemma-+zero′ from above:

lemma-+zero′ : ∀ a → a + zero ≡ a
lemma-+zero′ zero = refl
lemma-+zero′ (succ a) with a + zero | lemma-+zero′ a
lemma-+zero′ (succ a) | ._ | refl = refl

It typechecks, but the following proof doesn’t:

lemma-+zero′′ : ∀ a → a + zero ≡ a
lemma-+zero′′ zero = refl
lemma-+zero′′ (succ a) with a | lemma-+zero′′ a
lemma-+zero′′ (succ a) | ._ | refl = refl

The problem here is that to pattern match on refl : A ≡ B for arbitrary terms A and B, these A and B must unify. In the lemma-+zero′ case we have a + zero backward-substituted into a new variable w; when we match on refl we get w ≡ a. On the other hand, in the lemma-+zero′′ case we have a changed into w, and refl gets the type w + zero ≡ w, which is a malformed (recursive) unification constraint.

There is another catch. Our current definition of _≡_ allows expressing equality between types, e.g. Bool ≡ ℕ.

This enables us to write the following term:

lemma-unsafe-eq : (P : Bool ≡ ℕ) → Bool → ℕ
lemma-unsafe-eq P b with Bool | P
lemma-unsafe-eq P b | .ℕ | refl = b + succ zero

which type checks without errors.

lemma-unsafe-eq

P

lemma - unsafe - eq₀ : ( P : Bool ≡ ℕ) → Bool → ℕ lemma - unsafe - eq₀ refl b = b

{- end of ThrowAwayPreTheory -}

Note, however, thatcannot be proven by simply pattern matching on

Exercise. lemma-unsafe-eq is food for thought about computation safety under false assumptions.

Theoretical corner

In this section we shall talk about some theoretical stuff like datatype encodings and paradoxes. You might want to read some of the theoretical references like [10], [12] first.

module ThrowAwayTheory where

In the literature, Agda’s arrow type (x : X) → Y (where Y may mention x) is called the dependent product type, or Π-type (“Pi-type”) for short. The dependent pair Σ is called the dependent sum type, or Σ-type (“Sigma-type”) for short.
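To make the terminology concrete, the Π-type itself can be given a name inside Agda; a minimal sketch (the name Π is ours, not the standard library's):

```agda
-- Π-type: the type of dependent functions from A to B
Π : ∀ {α β} (A : Set α) → (A → Set β) → Set (α ⊔ β)
Π A B = (x : A) → B x
```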

Finite types

Given ⊥, ⊤, and Bool it is possible to define any finite type, that is, a type with a finite number of elements.

module FiniteTypes where
  open Bool-Op

  _∨′_ : (A B : Set) → Set
  A ∨′ B = Σ Bool (λ x → if x then A else B)

  zero′  = ⊥
  one′   = ⊤
  two′   = Bool
  three′ = one′ ∨′ two′
  four′  = two′ ∨′ two′
  -- and so on
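As a sanity check, the three inhabitants of three′ can be listed explicitly; a sketch assuming Σ's pair constructor _,_ and ⊤'s inhabitant tt as defined earlier in this document:

```agda
-- three′ = Σ Bool (λ x → if x then ⊤ else Bool),
-- so its inhabitants are exactly these three pairs:
three′-1 three′-2 three′-3 : three′
three′-1 = true  , tt
three′-2 = false , true
three′-3 = false , false
```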

TODO. Say something about extensional setting and ⊤ = ⊥ → ⊥ .

Simple datatypes

module ΠΣ-Datatypes where

Given finite types, Π-types, and Σ-types it is possible to define non-inductive datatypes using the same scheme the definition of _∨′_ uses.

A non-inductive datatype without indices has the following scheme:

data DataTypeName (Param1 : Param1Type) (Param2 : Param2Type) ... : Set whatever where
  Cons1 : (Cons1Arg1 : Cons1Arg1Type) (Cons1Arg2 : Cons1Arg2Type) ... → DataTypeName Param1 Param2 ...
  Cons2 : (Cons2Arg1 : Cons2Arg1Type) ... → DataTypeName Param1 Param2 ...
  ...
  ConsN : (ConsNArg1 : ConsNArg1Type) ... → DataTypeName Param1 Param2 ...

Re-encoded into Π-types, Σ-types, and finite types it becomes:

DataTypeName : (Param1 : Param1Type) (Param2 : Param2Type) ... → Set whatever
DataTypeName Param1 Param2 ... = Σ FiniteTypeWithNElements choice where
  choice : FiniteTypeWithNElements → Set whatever
  choice element1 = Σ Cons1Arg1Type (λ Cons1Arg1 → Σ Cons1Arg2Type (λ Cons1Arg2 → ...))
  choice element2 = Σ Cons2Arg1Type (λ Cons2Arg1 → ...)
  ...
  choice elementN = Σ ConsNArg1Type (λ ConsNArg1 → ...)

For instance, the Di type from above translates into:

Di′ : ∀ {α β} (A : Set α) (B : Set β) → Set (α ⊔ β)
Di′ {α} {β} A B = Σ Bool choice where
  choice : Bool → Set (α ⊔ β)
  choice true  = A × ¬ B
  choice false = ¬ A × B

Datatypes with indices

Work in progress. TODO. The general idea: turn the indices into parameters and place an equality proof inside each constructor.
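A minimal sketch of that idea with a hypothetical Is-zero type (assuming ℕ and _≡_ from earlier): the index becomes a parameter, and the constructor's job of fixing the index is done by an equality proof instead:

```agda
-- indexed version:
data Is-zero : ℕ → Set where
  is-zero : Is-zero zero

-- encoded version: the index constraint turns into
-- an equality proof stored inside the type
Is-zero′ : ℕ → Set
Is-zero′ n = n ≡ zero
```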

Recursive datatypes

Work in progress. TODO. General ideas: W-types and μ.
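For reference, the usual definition of W-types, which this section is going to build on; a sketch under the universe conventions used in the rest of this document:

```agda
-- W-types: well-founded trees with node shapes drawn from A
-- and, for a node of shape a, children indexed by B a
data W {α β} (A : Set α) (B : A → Set β) : Set (α ⊔ β) where
  sup : (a : A) → (B a → W A B) → W A B

-- e.g. ℕ can be encoded as W Bool (λ b → if b then ⊤ else ⊥):
-- a `true` node has one child (succ), a `false` node has none (zero)
```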

Curry’s paradox

Negative occurrences of a datatype in its own constructors make the system inconsistent.

{-# OPTIONS --no-positivity-check #-}
module CurrysParadox where
  data CS (C : Set) : Set where
    cs : (CS C → C) → CS C

  paradox : ∀ {C} → CS C → C
  paradox (cs b) = b (cs b)

  loop : ∀ {C} → C
  loop = paradox (cs paradox)

  contr : ⊥
  contr = loop

Universes and impredicativity

Copy this to a separate file and typecheck:

Work in progress. TODO.
* Russell’s paradox
* Hurkens’ paradox

{- end of ThrowAwayTheory -}

References

[1] Wiki, “Agda Tutorials list.” [Online]. Available: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Main.Othertutorials

[2] Wiki, “Agda: Documentation.” [Online]. Available: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Main.Documentation

[3] A. Setzer, “Interactive Theorem Proving for Agda Users.” [Online]. Available: http://www.cs.swan.ac.uk/~csetzer/lectures/intertheo/07/interactiveTheoremProvingForAgdaUsers.html

[4] A. Bove and P. Dybjer, “Dependent Types at Work.” [Online]. Available: http://www.cse.chalmers.se/~peterd/papers/DependentTypesAtWork.pdf

[5] U. Norell, “Dependently Typed Programming in Agda.” [Online]. Available: http://www.cse.chalmers.se/~ulfn/papers/afp08/tutorial.pdf

[6] T. Altenkirch, “Computer Aided Formal Reasoning.” [Online]. Available: http://www.cs.nott.ac.uk/~txa/g53cfr/

[7] “Coq: Documentation.” [Online]. Available: http://coq.inria.fr/documentation

[8] “Idris: Documentation.” [Online]. Available: http://idris-lang.org/documentation

[9] “Epigram.” [Online]. Available: http://www.e-pig.org/

[10] M.H.B. Sørensen and P. Urzyczyn, “Lectures on the Curry-Howard Isomorphism.” 1998 [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.7385

[11] S. Thompson, “Type Theory and Functional Programming.” [Online]. Available: https://www.cs.kent.ac.uk/people/staff/sjt/TTFP/

[12] P. Martin-Löf, “Intuitionistic type theory. Notes by Giovanni Sambin.” [Online]. Available: http://www.csie.ntu.edu.tw/~b94087/ITT.pdf

[13] P. Martin-Löf, “Intuitionistic type theory.” [Online]. Available: http://intuitionistic.files.wordpress.com/2010/07/martin-lof-tt.pdf

[14] B. Nordström, K. Petersson, and J.M. Smith, “Programming in Martin-Löf’s Type Theory. An Introduction.” [Online]. Available: http://www.cse.chalmers.se/research/group/logic/book/

[15] “Simpler Easier.” [Online]. Available: http://augustss.blogspot.ru/2007/10/simpler-easier-in-recent-paper-simply.html

[16] A. Bauer, “How to implement dependent type theory I.” [Online]. Available: http://math.andrej.com/2012/11/08/how-to-implement-dependent-type-theory-i/

[17] A. Bauer, “How to implement dependent type theory II.” [Online]. Available: http://math.andrej.com/2012/11/11/how-to-implement-dependent-type-theory-ii/

[18] A. Bauer, “How to implement dependent type theory III.” [Online]. Available: http://math.andrej.com/2012/11/29/how-to-implement-dependent-type-theory-iii/

[19] J. Malakhovski, “Functional Programming and Proof Checking Course.” [Online]. Available: http://oxij.org/activity/itmo/fp/

[20] C. McBride and J. McKinna, “The view from the left.” [Online]. Available: http://strictlypositive.org/view.ps.gz

Related notes