guest post by Mike Stay

Last year, my son’s math teacher introduced the kids to the concept of a function. One of the major points of confusion in the class was the idea that it didn’t matter whether he wrote $f(x) = x^2$ or $f(y) = y^2$, but it did matter whether he wrote $f(x) = xy$ or $f(x) = xz$. The function declaration binds some of the variables appearing on the right to the ones appearing on the left; the ones that don’t appear on the left are “free”. In a few years, when he takes calculus, my son will learn about the quantifiers “for all” and “there exists” in the “epsilon-delta” definition of a limit; quantifiers also bind variables in expressions.

Reasoning formally about languages with binders is hard:

“The problem of representing and reasoning about inductively-defined structures with binders is central to the POPLmark challenges. Representing binders has been recognized as crucial by the theorem proving community, and many different solutions to this problem have been proposed. In our (still limited) experience, none emerge as clear winners.” – Aydemir, Bohannon, Fairbairn, Foster, Pierce, Sewell, Vytiniotis, Washburn, Weirich, and Zdancewic, Mechanized Metatheory for the Masses: The POPLmark Challenge (2005)

The paper quoted above reviews around a dozen approaches in section 2.3, and takes pains to point out that their review is incomplete. Recently, however, Andrew Pitts and his students (particularly Murdoch Gabbay) developed the notion of a nominal set (introductory slides, book) that has largely solved this problem. Bengtson and Parrow used a nominal datatype package in Isabelle/HOL to formalize the $\pi$-calculus, and Clouston defined nominal Lawvere theories. It’s my impression that pretty much everyone now agrees that using nominal sets to formally model binders is the way forward.

Sometimes, though, it’s useful to look backwards; old techniques can lead to new ways of looking at a problem. The earliest approach to the problem of formally modeling bound variables was to eliminate them.

Abstraction elimination

$\lambda$-calculus is named for the binder in that language. The language itself is very simple. We start with an infinite set of variables $x, y, z, \ldots$ and then define the terms to be

$$t, t' ::= \left\{ \begin{array}{lr} x & \textit{variable} \\ \lambda x.t & \textit{abstraction} \\ (t\; t') & \textit{application} \end{array}\right.$$

Schönfinkel’s idea was roughly to “sweep the binders under the rug”. We’ll allow binders, but only in the definition of a “combinator”, one of a finite set of predefined terms. We don’t allow binders in any expression using the combinators themselves; the binders will all be hidden “underneath” the name of the combinator.

To eliminate all the binders in a term, we start at the “lowest level”: a term of the form $t = \lambda x.u$, where $u$ contains only variables, combinators, and applications; no abstractions! Then we try to find a way of rewriting $t$ using combinators instead. Since the lambda term $\lambda x.(v\; x)$ behaves the same as $v$ itself, if we can find some $v$ such that $(v\; x) = u$, then the job’s done.

Suppose $u = x$. What term can we apply to $x$ and get $x$ itself back? The identity function, obviously, so our first combinator is

$$I = \lambda x.x.$$

What if $u$ doesn’t contain $x$ at all? We need a “konstant” term $K_u$ such that $(K_u\; x)$ just discards $x$ and returns $u$. At the same time, we don’t want to have to specify a different combinator for each $u$ that doesn’t contain $x$, so we define our second combinator $K$ to first read in which $u$ to return, then read in $x$, throw it away, and return $u$:

$$K = \lambda u x.u.$$

Finally, suppose $u$ is an application $u = (w\; w')$. The variable $x$ might occur in $w$, in $w'$, or in both. Note that if we recurse on each of the terms in the application, we’ll have terms $r, r'$ such that $(r\; x) = w$ and $(r'\; x) = w'$, so we can write $u = ((r\; x)\; (r'\; x))$. This suggests our final combinator should read in $r$, $r'$, and $x$ and “share” $x$ with them:

$$S = \lambda r r' x.((r\; x)\; (r'\; x)).$$
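The three cases above are the whole algorithm, so they translate directly into code. Here’s a minimal Python sketch (my own illustration, not from any paper): a variable or combinator is a string, and an application $(t\; t')$ is a pair.

```python
# Terms: a variable or a combinator ('S', 'K', 'I') is a string;
# an application (t t') is a 2-tuple.

def free_in(x, t):
    """Does variable x occur in term t?"""
    if isinstance(t, tuple):
        return free_in(x, t[0]) or free_in(x, t[1])
    return t == x

def eliminate(x, u):
    """Compute a combinator term r such that (r x) rewrites to u."""
    if u == x:
        return 'I'                          # [x]x = I
    if not free_in(x, u):
        return ('K', u)                     # [x]u = (K u) when x is not free in u
    w, w2 = u                               # u = (w w'): recurse on both halves
    return (('S', eliminate(x, w)), eliminate(x, w2))
```

For example, `eliminate('x', ('x', 'x'))` yields `(('S', 'I'), 'I')`, i.e. the self-application term $((S\; I)\; I)$.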

If we look at the types of the terms $S$, $K$, and $I$, we find something interesting:

$$\begin{array}{rl}I\colon & Z \to Z\\ K\colon & Z \to Y \to Z\\ S\colon & (Z \to Y \to X) \to (Z \to Y) \to Z \to X\end{array}$$

The types correspond exactly to the axiom schemata for positive implicational logic!
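Spelling the correspondence out (my gloss, not part of the original argument): positive implicational logic is usually axiomatized by just two schemata plus modus ponens, and they are exactly the types of $K$ and $S$, with modus ponens mirroring application.

```latex
% Axiom schemata of positive implicational logic:
%   K:  A \to (B \to A)
%   S:  (A \to (B \to C)) \to ((A \to B) \to (A \to C))
% Modus ponens (from A \to B and A, infer B) mirrors application.
% The identity A \to A is then derivable, just as I behaves like ((S K) K):
%   (((S K) K) x) \Rightarrow ((K x) (K x)) \Rightarrow x
```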

The $SKI$-calculus is a lot easier to model formally than the $\lambda$-calculus; we can use a tiny Gph-enriched Lawvere theory (see the appendix) to capture the operational semantics and then derive the denotational semantics from it.
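To make the operational semantics concrete, here’s a short Python sketch (my own, reusing the pair encoding from above) that implements the three rewrites $\sigma$, $\kappa$, $\iota$ from the appendix’s Gph-theory as leftmost-outermost reduction:

```python
def step(t):
    """Apply one SKI rewrite, leftmost-outermost. Returns (term, changed)."""
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                        # ι: (I z) ⇒ z
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':            # κ: ((K x) z) ⇒ x
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y = f[0][1], f[1]                            # σ: (((S x) y) z) ⇒ ((x z) (y z))
            return ((x, a), (y, a)), True
        f2, changed = step(f)                               # otherwise recurse
        if changed:
            return (f2, a), True
        a2, changed = step(a)
        if changed:
            return (f, a2), True
    return t, False

def normalize(t, limit=1000):
    """Rewrite until no redex remains (or the step budget runs out)."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t
```

For instance, `normalize(((('S', 'I'), 'I'), 'z'))` reduces $(((S\; I)\; I)\; z)$ to $(z\; z)$.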

$\pi$-Calculus

As computers got smaller and telephony became cheaper, people started to connect them to each other. ARPANET went live in 1969, grew dramatically over the next twenty years, and eventually gave rise to the internet. ARPANET was decommissioned in 1990; that same year, Robin Milner published a paper introducing the $\pi$-calculus. He designed it to model the new way computation was occurring in practice: instead of serially on a single computer, concurrently on many machines via the exchange of messages. Instead of applying one term to another, as in the lambda and SK calculi, terms (now called “processes”) get juxtaposed and then exchange messages. Also, whereas in the $\lambda$-calculus a variable can be replaced by an entire term, in the $\pi$-calculus names can only be replaced by other names.

Here’s a “good parts version” of the asynchronous $\pi$-calculus; see the appendix for a full description.

$$\begin{array}{rll}P, Q ::= \quad & 0 & \textit{do nothing} \\ \;|\; & P|Q & \textit{concurrency} \\ \;|\; & for(x \leftarrow y)\; P & \textit{input} \\ \;|\; & x!y & \textit{output} \\ \;|\; & \nu x.P & \textit{new name} \\ \;|\; & !P & \textit{replication}\end{array}$$

$$x!z \;|\; for(y \leftarrow x).P \Rightarrow P\{z/y\} \quad \textit{communication rule}$$

There are six term constructors for the $\pi$-calculus instead of the three in the $\lambda$-calculus. Concurrency is represented with a vertical bar $|$, which forms a commutative monoid with $0$ as the monoidal unit. There are two binders, one for input and one for introducing a new name into scope. The rewrite rule is reminiscent of a trace in a compact closed category: $x$ appears in an input term and an output term on the left-hand side, while on the right-hand side $x$ doesn’t appear at all. I’ll explore that relationship in another post.

The syntax we use for the input prefix is not Milner’s original syntax. Instead, we borrowed it from Scala, where the same syntax is syntactic sugar for $M(\lambda x.P)(y)$ for some monad $M$ that describes a collection. We read it as “for a message $x$ drawn from the set of messages sent on $y$, do $P$ with it”.
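The communication rule is easy to animate. Below is a small Python sketch (my own encoding, purely illustrative): a collection of concurrent processes is a list, an output $x!z$ is `('out', x, z)`, and an input $for(y \leftarrow x).P$ is `('in', x, y, P)`.

```python
def substitute(p, y, z):
    """P{z/y}: replace the name y with z everywhere in tuple-encoded process p.
    (A real implementation would avoid capture under binders; this sketch ignores that.)"""
    if p == y:
        return z
    if isinstance(p, tuple):
        return tuple(substitute(q, y, z) for q in p)
    return p

def communicate(soup):
    """Fire one communication: x!z | for(y ← x).P ⇒ P{z/y}.
    Returns the new soup, or None if no output/input pair shares a name."""
    for i, t in enumerate(soup):
        if t[0] != 'out':
            continue
        _, x, z = t
        for j, u in enumerate(soup):
            if u[0] == 'in' and u[1] == x:          # names must match
                _, _, y, body = u
                rest = [v for k, v in enumerate(soup) if k not in (i, j)]
                return rest + [substitute(body, y, z)]
    return None
```

For example, `communicate([('out', 'x', 'z'), ('in', 'x', 'y', ('out', 'y', 'w'))])` leaves the single process `('out', 'z', 'w')`, i.e. $z!w$.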

For many years after Milner proposed the $\pi$-calculus, researchers tried to come up with a way to eliminate the bound names from a $\pi$-calculus term. Yoshida was able to give an algorithm for eliminating the bound names that come from input prefixes, but not those from new names. Like the abstraction elimination algorithm above, Yoshida’s algorithm produces a set of concurrent combinators. There’s one combinator $m(a, x)$ for sending a message $x$ on the name $a$, and several others that interact with $m$’s to move the computation forward (see the appendix for details):

$$\begin{array}{rlr} d(a,b,c) \;|\; m(a,x) & \Rightarrow m(b,x) \;|\; m(c,x) & (\textit{fanout})\\ k(a) \;|\; m(a,x) & \Rightarrow 0 & (\textit{drop}) \\ fw(a,b) \;|\; m(a,x) & \Rightarrow m(b,x) & (\textit{forward}) \\ br(a,b) \;|\; m(a,x) & \Rightarrow fw(b,x) & (\textit{branch right})\\ bl(a,b) \;|\; m(a,x) & \Rightarrow fw(x,b) & (\textit{branch left}) \\ s(a,b,c) \;|\; m(a,x) & \Rightarrow fw(b,c) & (\textit{synchronize})\end{array}$$
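These rules are plain multiset rewriting, so a few lines of Python suffice to run them. This is my own sketch, not Yoshida’s presentation: each atom is a tuple tagged by its combinator name, and `react` fires the first rule whose atom can meet a message $m(a, x)$ on a matching name.

```python
def react(soup):
    """Fire one Yoshida rewrite on a list of atoms; None if no redex exists."""
    for i, atom in enumerate(soup):
        for j, msg in enumerate(soup):
            if i == j or msg[0] != 'm':
                continue
            kind, a = atom[0], atom[1]
            if a != msg[1]:
                continue                              # names must match
            x = msg[2]
            rest = [t for k, t in enumerate(soup) if k not in (i, j)]
            if kind == 'd':                           # fanout
                _, _, b, c = atom
                return rest + [('m', b, x), ('m', c, x)]
            if kind == 'k':                           # drop
                return rest
            if kind == 'fw':                          # forward
                return rest + [('m', atom[2], x)]
            if kind == 'br':                          # branch right
                return rest + [('fw', atom[2], x)]
            if kind == 'bl':                          # branch left
                return rest + [('fw', x, atom[2])]
            if kind == 's':                           # synchronize
                return rest + [('fw', atom[2], atom[3])]
    return None
```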

Unlike the $SKI$ combinators, no one has shown a clear connection between some notion of type for these combinators and a system of logic.

Reflection

Several years ago, Greg Meredith had the idea to combine the set of $\pi$-calculus names and the set of $\pi$-calculus terms recursively. In a paper with Radestock, he introduced a “quoting” operator I’ll write $\&$ that turns processes into names and a “dereference” operator I’ll write $*$ that turns names into processes. They also made the calculus higher-order: they send processes on a name and receive the quoted process on the other side.

$$x!\langle Q\rangle \;|\; for(y \leftarrow x).P \Rightarrow P\{\&Q/y\} \quad \textit{communication rule}$$

The smallest process is $0$, so the smallest name is $\&0$. The next smallest processes are

$$\&0!\langle 0 \rangle \quad \text{and} \quad for(\&0 \leftarrow \&0)\; 0,$$

which in turn can be quoted to produce more names, and so on.
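A tiny Python sketch (my own encoding, purely illustrative) makes this bootstrapping concrete: a name is just a tagged, wrapped process, so every new process yields a new name for free, and $*\&P = P$ holds by construction.

```python
def quote(p):
    """&P: reflect a process into a name."""
    return ('&', p)

def unquote(n):
    """*x: reify a name back into a process; satisfies *&P = P."""
    tag, p = n
    assert tag == '&'
    return p

ZERO = '0'                          # the process 0
n0 = quote(ZERO)                    # &0, the smallest name
out0 = ('out', n0, ZERO)            # &0!⟨0⟩
in0 = ('in', n0, n0, ZERO)          # for(&0 ← &0) 0
n1, n2 = quote(out0), quote(in0)    # quoting them yields two more names
```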

Together, these two changes let them demonstrate a $\nu$-elimination transformation from the $\pi$-calculus to their reflective higher-order (RHO) calculus: since a process never contains its own name ($\&P$ cannot occur in $P$), one can use that fact to generate names that are fresh with respect to a process.

Another benefit of reflection is the concept of “namespaces”: since the names have internal structure, we can ask whether they satisfy propositions. This lets Meredith and Radestock define a spatial-behavioral type system like that of Caires, but more powerful. Greg demonstrated that the type system is strong enough to prevent attacks on smart contracts like the $50M one last year that caused the fork in Ethereum.

In our most recent pair of papers, Greg and I consider two different reflective higher-order concurrent combinator calculi where we eliminate all the bound variables. In the first paper, we present a reflective higher-order version of Yoshida’s combinators. In the second, we note that we can think of each of the term constructors as combinators and apply them to each other. Then we can use the $SKI$ combinators to eliminate binders from input prefixes and Greg’s reflection idea to eliminate those from $\nu$. Both calculi can be expressed concisely using Gph-enriched Lawvere theories.

In future work, we intend to present a type system for the resulting combinators and show how the types give axiom schemata.

Appendix

Gph-theory of $SKI$-calculus

objects $T$

morphisms $S, K, I\colon 1 \to T$; $(-\; -)\colon T \times T \to T$

equations none

edges $\sigma\colon (((S\; x)\; y)\; z) \Rightarrow ((x\; z)\; (y\; z))$; $\kappa\colon ((K\; x)\; z) \Rightarrow x$; $\iota\colon (I\; z) \Rightarrow z$



$\pi$-calculus

grammar

$$\begin{array}{rll}P, Q ::= \quad & 0 & \textit{do nothing} \\ \;|\; & P|Q & \textit{concurrency} \\ \;|\; & for(x \leftarrow y).P & \textit{input} \\ \;|\; & x!y & \textit{output} \\ \;|\; & \nu x.P & \textit{new name} \\ \;|\; & !P & \textit{replication}\end{array}$$

structural equivalence

free names: $FN(0) = \{\}$; $FN(P|Q) = FN(P) \cup FN(Q)$; $FN(for(x \leftarrow y).P) = \{y\} \cup FN(P) - \{x\}$; $FN(x!y) = \{x,y\}$; $FN(\nu x.P) = FN(P) - \{x\}$; $FN(!P) = FN(P)$

$0$ and $|$ form a commutative monoid: $P|0 \equiv P$; $P|Q \equiv Q|P$; $(P|Q)|R \equiv P|(Q|R)$

replication: $!P \equiv P\;|\;!P$

$\alpha$-equivalence: $for(x \leftarrow y).P \equiv for(z \leftarrow y).P\{z/x\}$ where $z$ is not free in $P$

new names: $\nu x.\nu x.P \equiv \nu x.P$; $\nu x.\nu y.P \equiv \nu y.\nu x.P$; $\nu x.(P|Q) \equiv (\nu x.P)|Q$ when $x$ is not free in $Q$

rewrite rules: $x!z \;|\; for(y \leftarrow x).P \Rightarrow P\{z/y\}$; if $P \Rightarrow P'$, then $P\;|\;Q \Rightarrow P'\;|\;Q$; if $P \Rightarrow P'$, then $\nu x.P \Rightarrow \nu x.P'$



To be clear, the following is not an allowed reduction, because it occurs under an input prefix:

$$for(v \leftarrow u).(x!z \;|\; for(y \leftarrow x).P) \not\Rightarrow for(v \leftarrow u).P\{z/y\}$$

Yoshida’s combinators

grammar

atom: $Q ::= 0 \;|\; m(a,b) \;|\; d(a,b,c) \;|\; k(a) \;|\; fw(a,b) \;|\; br(a,b) \;|\; bl(a,b) \;|\; s(a,b,c)$

process: $P ::= Q \;|\; \nu a.P \;|\; P|P \;|\; !P$

structural congruence

free names: $FN(0) = \{\}$; $FN(k(a)) = \{a\}$; $FN(m(a,b)) = FN(fw(a,b)) = FN(br(a,b)) = FN(bl(a,b)) = \{a, b\}$; $FN(d(a,b,c)) = FN(s(a,b,c)) = \{a,b,c\}$; $FN(\nu a.P) = FN(P) - \{a\}$; $FN(P|Q) = FN(P) \cup FN(Q)$; $FN(!P) = FN(P)$

$0$ and $|$ form a commutative monoid: $P|0 \equiv P$; $P|Q \equiv Q|P$; $(P|Q)|R \equiv P|(Q|R)$

replication: $!P \equiv P\;|\;!P$

new names: $\nu x.\nu x.P \equiv \nu x.P$; $\nu x.\nu y.P \equiv \nu y.\nu x.P$; $\nu x.(P|Q) \equiv (\nu x.P)|Q$ when $x$ is not free in $Q$

rewrite rules: $d(a,b,c) \;|\; m(a,x) \Rightarrow m(b,x) \;|\; m(c,x)$ (fanout); $k(a) \;|\; m(a,x) \Rightarrow 0$ (drop); $fw(a,b) \;|\; m(a,x) \Rightarrow m(b,x)$ (forward); $br(a,b) \;|\; m(a,x) \Rightarrow fw(b,x)$ (branch right); $bl(a,b) \;|\; m(a,x) \Rightarrow fw(x,b)$ (branch left); $s(a,b,c) \;|\; m(a,x) \Rightarrow fw(b,c)$ (synchronize); $!P \Rightarrow P\;|\;!P$; if $P \Rightarrow P'$ then for any term context $C$, $C[P] \Rightarrow C[P']$.



Gph-theory of RHO Yoshida combinators

objects $N, T$

morphisms $0\colon 1 \to T$; $k\colon N \to T$; $m\colon N \times T \to T$; $fw, br, bl\colon N^2 \to T$; $d, s\colon N^3 \to T$; $|\colon T^2 \to T$; $*\colon N \to T$; $\&\colon T \to N$

equations $0$ and $|$ form a commutative monoid: $P|0 = P$; $P|Q = Q|P$; $(P|Q)|R = P|(Q|R)$; quoting: $*\&P = P$

edges $\delta\colon d(a,b,c) \;|\; m(a,P) \Rightarrow m(b,P) \;|\; m(c,P)$; $\kappa\colon k(a) \;|\; m(a,P) \Rightarrow 0$; $\phi\colon fw(a,b) \;|\; m(a,P) \Rightarrow m(b,P)$; $\rho\colon br(a,b) \;|\; m(a,P) \Rightarrow fw(b,\&P)$; $\lambda\colon bl(a,b) \;|\; m(a,P) \Rightarrow fw(\&P,b)$; $\sigma\colon s(a,b,c) \;|\; m(a,P) \Rightarrow fw(b,c)$



Gph-theory of RHO $\pi$-like combinators