First of all, before I start on the actual blog post, let me put this in context. I remember a couple of years ago when I developed an interest in functional programming languages, and Haskell in particular. There was a phase when I was able to use Haskell to solve problems in the small. I understood most of the basics of pure functional programming; then there were things I regarded as magic; and of course there were a lot of things I didn't even know that I didn't know about. But none of it did I grok.

I feel like I'm starting to get to the same level with Agda now. So this is going to be one of those "look at this cool thing I made" posts where the result is probably going to be trivial for actual experts in the field; but it's an important milestone for my own understanding of the subject.

I wanted to play around with simple but Turing-complete languages, and I started implementing an interpreter for a counter machine. More on that in a later post; this present post describes just the representation of register values. In the model that I implemented, the values of registers are byte counters, meaning they have 256 different values, and two operations +1 and -1 that are the inverses of each other. Incrementing/decrementing should roll over: 255 +1 = 0 and 0 -1 = 255.

My first approach was to just use the Fin type from the standard library. However, the structure of Fin is nothing like the structure imposed by +1 and -1, so while one can define these functions, proving properties like -1 ∘ +1 = id is unwieldy, and the resulting proofs are not easy to reuse in other proofs.
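To illustrate, a wrap-around increment/decrement on Fin can be defined by going through ℕ and back; a minimal sketch, using `_mod_` from `Data.Nat.DivMod` (exact names and the handling of the non-zero side condition depend on the standard library version):

```agda
open import Data.Nat using (ℕ; suc; _+_)
open import Data.Fin using (Fin; toℕ)
open import Data.Nat.DivMod using (_mod_)

-- Wrap-around increment/decrement on Fin (suc n), via arithmetic on ℕ.
inc : ∀ {n} → Fin (suc n) → Fin (suc n)
inc {n} i = suc (toℕ i) mod suc n

dec : ∀ {n} → Fin (suc n) → Fin (suc n)
dec {n} i = (toℕ i + n) mod suc n   -- adding n modulo suc n is subtracting 1
```

These definitions work, but proving dec (inc i) ≡ i now requires reasoning about modular arithmetic on ℕ, which has nothing to do with the inductive structure of Fin itself.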

So I eventually settled on a zipper-like representation. The intuition behind it is to think of the possible values of Counter (suc n) as points on the discrete number line from 0 to n. You have a vector of numbers behind you and a vector of numbers in front of you, with the invariant that the combined length of the two vectors is always n. For example, if n = 3, you can be at positions ([], [1, 2, 3]), ([1], [2, 3]), ([1, 2], [3]) and ([1, 2, 3], []). To increase the value, just move the leftmost item of the second vector to the end of the first one; rollover is handled by the simple syntactic rule (xs, []) ↦ ([], xs).
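Ignoring the length invariant for a moment, this move can be sketched on plain lists (this is just an illustration, not part of the final code):

```agda
open import Data.List using (List; []; _∷_; _++_; [_])
open import Data.Product using (_×_; _,_)

-- Increment: move the head of the second list to the end of the first;
-- on rollover, the whole first list becomes the second one again.
inc : ∀ {A : Set} → List A × List A → List A × List A
inc (xs , [])     = ([] , xs)
inc (xs , y ∷ ys) = (xs ++ [ y ] , ys)
```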

Of course, there is no point in actually storing the numbers, so we can use vectors of units instead; but why store those if we only care about their length?

So the eventual representation I came up with was:

```agda
data Counter : ℕ → Set where
  cut : (i j : ℕ) → Counter (suc i + j)
```

I was hoping that I could write +1 and -1 like this:

```agda
_+1 : ∀ {n} → Counter n → Counter n
cut i zero    +1 = cut zero i
cut i (suc j) +1 = cut (suc i) j

_-1 : ∀ {n} → Counter n → Counter n
cut zero    j -1 = cut j zero
cut (suc i) j -1 = cut i (suc j)
```

But life with indexed types is not that simple: for example, in the first clause, the left-hand side has, by definition, the type Counter (suc i + 0), and the right-hand side Counter (suc 0 + i). So we also need to inject proofs that the types actually match (with the actual proofs p₁ and p₂ omitted here for brevity):

```agda
_+1 : ∀ {n} → Counter n → Counter n
cut i zero    +1 = subst Counter p₁ (cut zero i)
cut i (suc j) +1 = subst Counter p₂ (cut (suc i) j)

_-1 : ∀ {n} → Counter n → Counter n
cut zero    j -1 = subst Counter (sym p₁) (cut j zero)
cut (suc i) j -1 = subst Counter (sym p₂) (cut i (suc j))
```
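For completeness, the omitted proofs can be discharged with lemmas from Data.Nat.Properties; something along these lines (lemma names vary between standard library versions):

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; cong; sym)
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Data.Nat.Properties using (+-identityʳ; +-suc)

-- Rollover case of _+1: Counter (suc (0 + i)) vs. Counter (suc (i + 0))
p₁ : ∀ i → suc (0 + i) ≡ suc (i + 0)
p₁ i = cong suc (sym (+-identityʳ i))

-- Stepping case of _+1: Counter (suc (suc i + j)) vs. Counter (suc (i + suc j))
p₂ : ∀ i j → suc (suc i + j) ≡ suc (i + suc j)
p₂ i j = cong suc (sym (+-suc i j))
```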

However, this leads to more problems further down the line: you can't get rid of that subst later on, thus forcing you to use heterogeneous equality for the rest of your proofs. While I was able to prove the property

```agda
+1-1 : ∀ {n} → {k : Counter n} → k +1 -1 ≡ k
```

using heterogeneous equality, it broke down on me further down the road, when actually trying to use these counters in the semantics of my register machines.

So instead of storing the size of the counter in a type index, I used a type parameter. This requires carrying around an explicit proof that the sizes match up; but we needed those proofs in the subst calls of the indexed case anyway, and now we can simply invoke proof irrelevance in the proof of +1-1:

```agda
data Counter (n : ℕ) : Set where
  cut : (i j : ℕ) → (i+j+1=n : suc (i + j) ≡ n) → Counter n

_+1 : ∀ {n} → Counter n → Counter n
(cut i zero    i+1=n)   +1 = cut zero i p₁
(cut i (suc j) i+j+2=n) +1 = cut (suc i) j p₂

_-1 : ∀ {n} → Counter n → Counter n
(cut zero    j j+1=n)   -1 = cut j zero p₃
(cut (suc i) j i+j+2=n) -1 = cut i (suc j) p₄

+1-1 : ∀ {n} → {k : Counter n} → k +1 -1 ≡ k
+1-1 {k = cut i zero _}    = cong (cut i zero) (proof-irrelevance _ _)
+1-1 {k = cut i (suc j) _} = cong (cut i (suc j)) (proof-irrelevance _ _)

-1+1 : ∀ {n} → {k : Counter n} → k -1 +1 ≡ k
-1+1 {k = cut zero j _}    = cong (cut zero j) (proof-irrelevance _ _)
-1+1 {k = cut (suc i) j _} = cong (cut (suc i) j) (proof-irrelevance _ _)
```
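Here, too, I've omitted the proof terms. They just transport the stored equation across the same two arithmetic facts as before; stated as standalone lemmas, they could look something like this (in the clauses above, each would be applied to the equality proof in scope, e.g. cut zero i (p₁ i+1=n)):

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; cong; sym; trans)
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Data.Nat.Properties using (+-identityʳ; +-suc)

-- Rollover case of _+1: from suc (i + 0) ≡ n conclude suc (0 + i) ≡ n
p₁ : ∀ {i n} → suc (i + 0) ≡ n → suc (0 + i) ≡ n
p₁ {i} eq = trans (cong suc (sym (+-identityʳ i))) eq

-- Stepping case of _+1: from suc (i + suc j) ≡ n conclude suc (suc i + j) ≡ n
p₂ : ∀ {i j n} → suc (i + suc j) ≡ n → suc (suc i + j) ≡ n
p₂ {i} {j} eq = trans (cong suc (sym (+-suc i j))) eq

-- Rollover case of _-1: from suc (0 + j) ≡ n conclude suc (j + 0) ≡ n
p₃ : ∀ {j n} → suc (0 + j) ≡ n → suc (j + 0) ≡ n
p₃ {j} eq = trans (cong suc (+-identityʳ j)) eq

-- Stepping case of _-1: from suc (suc i + j) ≡ n conclude suc (i + suc j) ≡ n
p₄ : ∀ {i j n} → suc (suc i + j) ≡ n → suc (i + suc j) ≡ n
p₄ {i} {j} eq = trans (cong suc (+-suc i j)) eq
```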

With this approach, lifting these theorems to be about whole states, not just individual register values, is a breeze, e.g.:

```agda
+1-1Σ : ∀ {Σ x y} → (getVar y ∘ decVar x ∘ incVar x) Σ ≡ getVar y Σ
+1-1Σ {x = x} {y = y} with toℕ x ≟ toℕ y
... | yes x=y = +1-1
... | no  x≠y = refl
```

But this takes us to my actual application for these counters; and that will be the topic of the next post.

Here are the complete sources of the two counter implementations: