basics of bidirectionalism

I like my type systems to be bidirectional. Different people mean different things by the word “bidirectional”, and I’m one of them. Some people relate the notion to “polarity”, and I’m not one of them.

I’m also a fan of small-step computation, because I work with dependent types, and I don’t like presuming awfully strong properties of computation while I’m still in the process of writing the rules down.

setup

I separate my term languages into two syntactic categories.

checked s,t,S,T ::= c | [e]

synthed e,f ::= x | t : T | e d

The c stands for constructor and the d for destructor: more on them shortly.
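To fix intuitions, here is one way the two categories might be encoded, as a sketch in Python. The tuple tags and helper names are my own invention, not notation from this post.

```python
# Checked terms t, T: constructors c, or an embedded synthesizable term [e].
def star():       return ('star',)            # the constructor *
def pi(S, x, T):  return ('pi', S, x, T)      # the constructor Π(S, x. T)
def lam(x, t):    return ('lam', x, t)        # the constructor λ(x. t)
def embed(e):     return ('embed', e)         # [e]

# Synthesizable terms e, f: variables, radicals t : T, and eliminations e d.
def var(x):       return ('var', x)           # x
def ann(t, T):    return ('ann', t, T)        # t : T, a radical
def app(f, s):    return ('app', f, s)        # f s

# The identity function at *, as a radical:
ident = ann(lam('x', embed(var('x'))), pi(star(), 'x', star()))
```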

Correspondingly, I have two typing judgments.

Γ |- T ∋ t

Γ |- e ∈ S

Contexts, Γ, assign types to variables.

context Γ,Δ ::= | Γ, x : S

If you bother me about variable freshness conditions, I shall respond by switching to a nameless notation: I’m perfectly comfortable living there, but I know I should use names when communicating with people who aren’t used to it.

There’s a small-step computation relation for each syntactic category, overloaded. Computation never changes syntactic category. It’s a little bit funny to include type ascriptions. Those t : T terms I refer to as radicals, because they’re the active things in a computation.

We have υ-contraction which notes that a radical no longer capable of computation needs no type. That’s how things stop.

[t : T] ↝υ t

We have another class of β-contractions which explain how things go.

(c : C) d ↝β e

That’s to say computation is always a reaction between a constructor and a destructor at a given type. The ↝ relation is the closure of the contractions under all contexts.
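A sketch of the two contraction schemes at the root of a term, in Python, using the tuple encoding ('star',), ('pi', S, x, T), ('lam', x, t), ('embed', e), ('var', x), ('ann', t, T), ('app', f, s) — an encoding I am assuming, not one fixed by the post. Substitution here is naive about capture, so all binder names are assumed distinct.

```python
def subst(term, x, e):
    """Replace variable x by the synthesizable term e; naive about capture."""
    tag = term[0]
    if tag == 'var':
        return e if term[1] == x else term
    if tag == 'star':
        return term
    if tag == 'embed':
        return ('embed', subst(term[1], x, e))
    if tag in ('ann', 'app'):
        return (tag, subst(term[1], x, e), subst(term[2], x, e))
    if tag == 'lam':
        _, y, t = term
        return term if y == x else ('lam', y, subst(t, x, e))
    if tag == 'pi':
        _, S, y, T = term
        return ('pi', subst(S, x, e), y, T if y == x else subst(T, x, e))

def contract(term):
    """One contraction at the root, or None if the term is not a redex."""
    # upsilon:  [t : T] ~> t   -- a stuck radical needs no type
    if term[0] == 'embed' and term[1][0] == 'ann':
        return term[1][1]
    # beta:  (λ(x. t) : Π(S, x. T)) s  ~>  t(s : S) : T(s : S)
    if term[0] == 'app' and term[1][0] == 'ann':
        _, (_, body, ty), s = term
        if body[0] == 'lam' and ty[0] == 'pi':
            _, x, t = body
            _, S, y, T = ty
            rad = ('ann', s, S)
            return ('ann', subst(t, x, rad), subst(T, y, rad))
    return None
```

The β-case produces exactly the radical-wrapped contractum of the rule: the reaction between constructor and destructor at the Π type.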

Discipline 1. Do not refer to the context explicitly in a typing rule. Premises may give a local context extension.

The assertion

-| x : S

means that the (implicit) context ascribes S to x. This is used in precisely one rule.

-| x : S

———– (var)

x ∈ S

Here’s another rule I enjoy, without even adding in any stuff to the type theory.

e ∈ S S = T

———————— (embed)

T ∋ [e]

We can and will worry about what = means, but it’s ok to think of it as α-equivalence, which those of us in the de Bruijnies just call syntactic identity.

I’m going to insist on the existence of one constructor, *, whose job is to classify anything (other than itself, possibly) which can be ascribed as a type.

With that, I add

* ∋ T T ∋ t

———————— (ascribe)

t : T ∈ T

There are but two other rules that are common to all my bidirectional systems, allowing types to compute before checking, or after synthesis.

T ↝ T’ T’ ∋ t

————————– (pre)

T ∋ t

e ∈ S S ↝ S’

————————– (post)

e ∈ S’
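Operationally, (pre) and (post) license the algorithm to run ↝ on a type whenever it needs to see the type's outermost structure: before matching against a checking rule, and after a type comes back from synthesis. A minimal sketch of that driver loop in Python, parametrised by some one-step root contraction function (returning None when stuck), since the contractions themselves depend on the signature:

```python
def reduce_head(T, step):
    """Iterate root contraction: the reflexive-transitive closure of step."""
    while (T2 := step(T)) is not None:
        T = T2
    return T
```

A checker would call this on the incoming type at the top of its checking function (pre) and on the outgoing type at the bottom of its synthesis function (post).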

syntax of patterns and expressions

I get ahead of myself. I should talk a little about how the syntax of these calculi works, and more pressingly, how the metasyntax of the formulae which show up in typing rules works.

Our calculus has syntactic sorts

sort i ::= chk | syn | dst

and from syntactic sorts we generate the metasyntactic kinds, which are higher-order and uncurried, indicating the syntactic binding structure of things.

kind k ::= (k, .. k)i

where we drop empty ().

A signature assigns kinds to constants. A scope assigns kinds to variables and a problem assigns kinds to metavariables. I indicate kinding by juxtaposition. E.g.,

Π(chk, (syn)chk)chk λ((syn)chk)chk

We may then construct expressions from constants, variables and metavariables: each must be fully applied, in accordance with their kinds, to spines which, in each position, abstract according to the required kind before providing a subexpression. That is, our metalanguage keeps everything η-long with respect to kinds.

Π(S, x. T(x)) λ(x. t(x))

Note that in the above, the problem is

S()chk T(syn)chk t(syn)chk

Expressions admit simultaneous hereditary substitution (scope morphisms) for variables and thus substitutions (problem morphisms) for metavariables.

But there’s more, or rather, less. A pattern allows constants and variables to take spines, but the metavariables are instead given only the subset of the variables M[x,..] upon which they may depend. We may write patterns

Π(S, x. T[x]) λ(x. t[x])

If you think of type ascription as a reserved infix constructor whose kind is (chk, chk)syn, then you will see that a β-redex is a pattern

(λ(x. t[x]) : Π(S, x. T[x])) s

but that its reduct is an expression, not a pattern.

t(s : S) : T(s : S)

Which brings me to

Discipline 2. Judgments and relations are moded.

There are three modes: input, subject and output.

We have

input |- input ∋ subject

input |- subject ∈ output

input -| x : output

input ↝ output

Discipline 3. Rule conclusions have patterns for inputs and subjects, but expressions for outputs. Rule premises have expressions for inputs and subjects, but patterns for outputs. Metavariable scope rotates clockwise, with patterns binding metavariables and expressions merely using them.

Observe that our β-rule exactly follows this discipline, as do the typing rules, so far.

One notationally helpful consequence of this discipline is that a metavariable always has one binding site t[x] in a pattern which identifies its dependencies, so at its use sites in expressions, we may write merely t(e), rather than t(e/x), because the binding site makes clear what is to be substituted. I apologize to anonymous reviewers I have confused in the past by following this convention without elucidating it.

Note that this discipline is not enough to make typing rules algorithmic in that inputs need not specify outputs. Indeed, one can imagine a judgment form which makes up an output from thin air, whose use in a premise allows us to bring a new metavariable into the problem. However, following this discipline takes us a great deal closer to an algorithm.

Pattern matching is the business of solving an equation between a pattern and an expression, yielding an instantiation of the metavariables in the pattern. It is stable under substitution in the sense that if a given expression matches a pattern, any substitution instance of the expression matches the pattern too, with the correspondingly substituted instantiation.
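A sketch of matching a pattern against an expression, under the same assumed tuple encoding, with metavariables represented as ('meta', name, allowed_vars). Binder names are assumed to line up (a nameless encoding would make that automatic); matching records instantiations and enforces the dependency restriction M[x,..].

```python
def fv(t):
    """Free variables of an expression in the tuple encoding."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'star':
        return set()
    if t[0] == 'embed':
        return fv(t[1])
    if t[0] in ('ann', 'app'):
        return fv(t[1]) | fv(t[2])
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}
    if t[0] == 'pi':
        return fv(t[1]) | (fv(t[3]) - {t[2]})

def match(pat, expr, out):
    """Solve pat = expr, extending out with metavariable instantiations.
    Returns False on mismatch or when expr depends on forbidden variables."""
    if pat[0] == 'meta':
        _, m, xs = pat
        if not fv(expr) <= set(xs):      # the dependency restriction M[x,..]
            return False
        if m in out:                      # repeated metavariable: must agree
            return out[m] == expr
        out[m] = expr
        return True
    if pat[0] != expr[0] or len(pat) != len(expr):
        return False
    return all(match(p, e, out) if isinstance(p, tuple) else p == e
               for p, e in zip(pat[1:], expr[1:]))
```

Matching the β-redex pattern against a concrete redex instantiates t, S, T and s exactly as the contraction scheme requires.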

what free variables?

Let me double down on discipline 1.

Discipline 4. With the exception of the variable rule, it is forbidden to mention variables in the context, apart from those bound in the context extension of premises.

The consequence of these disciplines is that we cannot help but achieve stability under substitution. Our rules, by discipline, characterize only those properties of terms which are invariant under substitution.

Lemma 5. Substitution is admissible for all judgments and relations.

x : S |- J[x] e ∈ S

————————————-

J(e)

The variable rule is constructed exactly to ensure that we know how to substitute for its deployment in derivations (which are, by the by, also stable under thinning, by construction — Lemma 5-ε).

the subject discipline

There is a helpful refinement of discipline 4 which I took too long to invent, partly because a great many rule systems out there in the wild do not respect it, resulting in the need for CLEVERNESS in situations where STUPIDITY is perfectly effective.

Discipline 6. Each premise subject must be a distinct conclusion subject metavariable applied to distinct variables bound in that premise’s context extension. A conclusion subject metavariable becomes available for use in other expressions only after it has been used in such a subject. Every conclusion subject metavariable must be such a premise subject.

That is to say, a typing rule takes responsibility for validating its subject by determining how its parts must be validated.

But, moreover, a rule must never revalidate anything. Our moded discipline allows us to work contractually.

A rule is a server for its conclusion and a client for its premises. It is for clients to make promises about inputs and servers to make promises about outputs. The purpose of a typing derivation is to justify the promise about its subject asserted by its conclusion.

constructors and destructors

The syntactic constructors/destructors are constants with kinds of the form

((syn, .. syn)chk, .. (syn, .. syn)chk)chk

or, respectively,

((syn, .. syn)chk, .. (syn, .. syn)chk)dst

The constructors and destructors of our object language are fully applied uses of syntactic constructors and destructors, respectively. The application destructor has an empty name and kind (chk)dst, along with some liberal conventions about when parentheses are strictly necessary.

The checking rules for constructors, c or C, are subject to the following discipline.

Discipline 7. The conclusion of the typing rule for a constructor must have an outermost constructor in the pattern for its type and no use of embedding.

Our example rules all follow this discipline.

——– (type)

* ∋ *

* ∋ S x : S |- * ∋ T(x)

————————————– (Π)

* ∋ Π(S, x. T[x])

* ∋ S x : S |- T(x) ∋ t(x)

—————————————– (λ)

Π(S, x. T[x]) ∋ λ(x. t[x])

Meanwhile, the elimination rules follow a corresponding discipline.

Discipline 8. The conclusion of the typing rule for eliminating with a destructor must have a metavariable in the subject pattern for the synthed thing to be eliminated; its first premise must demand an outermost constructor in the pattern for the type synthesized for that metavariable and no use of embedding.

The application rule follows discipline 8.

f ∈ Π(S, x. T[x]) S ∋ s

——————————————— (application)

f s ∈ T(s : S)
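Putting the rules so far together: a compact Python sketch of the whole bidirectional algorithm — var, embed, ascribe, type, Π, λ, application — with (pre) and (post) realized as head reduction. It assumes the tuple encoding from the earlier sketches, naive substitution with distinct binder names, and syntactic identity (after head reduction only) for the = of the embed rule; a toy, not the real thing.

```python
def subst(term, x, e):
    tag = term[0]
    if tag == 'var':   return e if term[1] == x else term
    if tag == 'star':  return term
    if tag == 'embed': return ('embed', subst(term[1], x, e))
    if tag in ('ann', 'app'):
        return (tag, subst(term[1], x, e), subst(term[2], x, e))
    if tag == 'lam':
        _, y, t = term
        return term if y == x else ('lam', y, subst(t, x, e))
    if tag == 'pi':
        _, S, y, T = term
        return ('pi', subst(S, x, e), y, T if y == x else subst(T, x, e))

def contract(term):
    if term[0] == 'embed' and term[1][0] == 'ann':          # upsilon
        return term[1][1]
    if term[0] == 'app' and term[1][0] == 'ann':            # beta
        _, (_, body, ty), s = term
        if body[0] == 'lam' and ty[0] == 'pi':
            rad = ('ann', s, ty[1])
            return ('ann', subst(body[2], body[1], rad),
                           subst(ty[3], ty[2], rad))
    return None

def whnf(T):
    """Head reduction: run root contractions until stuck."""
    while (T2 := contract(T)) is not None:
        T = T2
    return T

def check(ctx, T, t):
    """Gamma |- T ∋ t, with ctx a list of (name, type) pairs."""
    T = whnf(T)                                             # (pre)
    if t[0] == 'star':                                      # (type)
        return T == ('star',)
    if t[0] == 'pi':                                        # (Pi)
        _, S, x, U = t
        return (T == ('star',) and check(ctx, ('star',), S)
                and check(ctx + [(x, S)], ('star',), U))
    if t[0] == 'lam':                                       # (lambda)
        if T[0] != 'pi':
            return False
        _, S, y, U = T
        _, x, body = t
        return check(ctx + [(x, S)], subst(U, y, ('var', x)), body)
    if t[0] == 'embed':                                     # (embed)
        S = synth(ctx, t[1])
        return S is not None and whnf(S) == T
    return False

def synth(ctx, e):
    """Gamma |- e ∈ ?; returns the type, or None."""
    if e[0] == 'var':                                       # (var)
        for x, S in reversed(ctx):
            if x == e[1]:
                return S
        return None
    if e[0] == 'ann':                                       # (ascribe)
        _, t, T = e
        return T if check(ctx, ('star',), T) and check(ctx, T, t) else None
    if e[0] == 'app':                                       # (application)
        F = synth(ctx, e[1])
        if F is not None:
            F = whnf(F)                                     # (post)
            if F[0] == 'pi' and check(ctx, F[1], e[2]):
                return subst(F[3], F[2], ('ann', e[2], F[1]))
        return None
    return None
```

Note how the moding shows up directly in the signatures: check takes its type as an input and returns only success, while synth returns its type as an output.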

Note that the kinds of the constants allowed in constructors and destructors ensure that all of their conclusion metavariables are checked.

The typing rules thus insist on constructor patterns for the types of things being constructed or destructed.

what is a β-redex?

A β-redex is what you get when the constructor patterns required for the active types in a constructor checking rule and a destructor synthesis rule unify.

I forgot to mention, unification is the business of solving an equation between two patterns. Fortunately (thank you, Dale Miller; thank you, James McKinna, for telling me to read Dale Miller), this is a decidable problem which admits most general solutions. Note that the most general solution to m[x] = c(n[x,y]) is n[x,y] = n'[x], m[x] = c(n'[x]), i.e., in the presence of binding, solving a metavariable may require pruning permitted dependencies of other metavariables, but apart from that, everything is as in good old Robinson ’75.

If we follow

Discipline 9. Distinct constructor/destructor rules for a given type constructor have distinct constructor/destructor constants. There is exactly one β-rule for each unifying pair.

we obtain confluence by return of post.

Lemma 10. βυ-reduction is confluent by construction.

No redex is a redex for more than one contraction scheme. Every subexpression of a redex for pattern p with matching substitution σ which happens also to be a redex is a residual, i.e., deep enough within the outer redex to be left intact within σ. Takahashi’s proof goes through on the nod. That is to say, we may construct the notion of parallel reduction |> which is closed under all syntactic constructs including nullary ones, but allows each redex to reduce to its contractum after the parallel reduction of its schematic variables. In other words, do anything you can see as long as they don’t interfere.

x |- t(x) |> t'[x] S |> S’ x |- T(x) |> T'[x] s |> s’

————————————————————————— (β)

(λ(x. t[x]) : Π(S, x. T[x])) s |> t'(s’ : S’) : T’(s’ : S’)

But discipline 9 ensures that the notion of development is well defined. That’s the born to be wild operation of firing all of your guns at once. There is a function dev(t) such that

t |> dev(t) t |> t’ ⇒ t’ |> dev(t)

because |> lets you fire all of your guns at once, but if you don’t, you can always fire just the remaining guns on your next move. So |> has the diamond property.

Without difficulty, ↝ ⊆ |> ⊆ ↝*, which means ↝* has the diamond property, too.
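The complete development dev can be sketched directly: contract every redex already visible, leaving alone any redexes that the contractions themselves create. Again this assumes the tuple encoding and naive substitution of the earlier sketches.

```python
def subst(term, x, e):
    tag = term[0]
    if tag == 'var':   return e if term[1] == x else term
    if tag == 'star':  return term
    if tag == 'embed': return ('embed', subst(term[1], x, e))
    if tag in ('ann', 'app'):
        return (tag, subst(term[1], x, e), subst(term[2], x, e))
    if tag == 'lam':
        _, y, t = term
        return term if y == x else ('lam', y, subst(t, x, e))
    if tag == 'pi':
        _, S, y, T = term
        return ('pi', subst(S, x, e), y, T if y == x else subst(T, x, e))

def dev(t):
    """Fire all the guns at once: contract every redex visible in t."""
    if t[0] == 'embed' and t[1][0] == 'ann':                  # upsilon-redex
        return dev(t[1][1])
    if (t[0] == 'app' and t[1][0] == 'ann'
            and t[1][1][0] == 'lam' and t[1][2][0] == 'pi'):  # beta-redex
        _, (_, (_, x, body), (_, S, y, T)), s = t
        rad = ('ann', dev(s), dev(S))
        return ('ann', subst(dev(body), x, rad), subst(dev(T), y, rad))
    if t[0] in ('var', 'star'):
        return t
    # no redex at the root: develop the children
    return tuple(dev(u) if isinstance(u, tuple) else u for u in t)
```

Note that dev(t) need not be normal: substitution may create new redexes, which is exactly why dev is one parallel step and not normalization.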

what is a contractum?

If we forgot, for a moment, the rules for pre- and post-computation, we could readily identify, by inversion, what we must know about the metavariables, all yielding sort chk, in any redex. For application, that would be

* ∋ S

x : S |- * ∋ T(x)

x : S |- T(x) ∋ t(x)

S ∋ s

and we can similarly figure out the type R mandated by the elim rule, which in our example is

T(s : S)

We may reduce our redex to some r : R’ such that R ↝* R’ for which * ∋ R’ and R’ ∋ r are derivable from the above facts and substitution (which is known to be admissible), allowing r : R’ ∈ R’.

In our example, we choose r = t(s : S), and we can derive s : S ∈ S and hence, by stability, * ∋ T(s : S) and T(s : S) ∋ t(s : S).

subject reduction

Define optional reduction ↝? = ↝ ∪ =. By construction, if something optionally reduces, its subformulae optionally reduce also. If something that optionally reduces actually itself contracts, its subformulae stay put.

Lemma 11. We have the following.

Γ |- T ∋ t ∧ Γ ↝* Γ* ∧ T ↝* T* ∧ t ↝? t’ ⇒ Γ* |- T* ∋ t’

Γ |- e ∈ S ∧ Γ ↝* Γ* ∧ e ↝? e’ ⇒ ∃S*. S ↝* S* ∧ Γ* |- e’ ∈ S*

That is, if client computes inputs as much as they like and subjects by at most one step, server can compute outputs enough to recover the judgment.

Step 1. Transform the rules to syntax directed form by adding arbitrary pre-computation as the first premise of every checking rule and arbitrary post-computation as the last premise of every synthesis rule.

Step 2. Induction on derivations. Discipline 6 ensures that the induction hypotheses are strong enough to cover all the structural rules.

Step 3. For each contraction scheme, we deploy the induction hypotheses, then patch the derivation justifying that contraction by appeal to confluence. Crucially, matching constructor patterns is preserved by reduction: disciplines 7 and 8 ensure that computation does not destroy the applicability of rules.

conclusion

All my type systems enjoy confluence and subject reduction. It is not a thing to prove. It is a thing to not screw up, by knowing how to write down rules which follow disciplines ensuring that they’re not just any old rubbish. Andy Pitts once said “Type soundness proofs are two a penny”, but I think they’re cheaper than that.

exercise

Add dependent pairs, following the discipline.