12th September 2013, 03:20 pm

Since fall of last year, I’ve been working at Tabula, a Silicon Valley start-up developing an innovative programmable hardware architecture called “Spacetime”, somewhat similar to an FPGA, but much more flexible and efficient. I met the founder, Steve Teig, at a Bay Area Haskell Hackathon in February of 2011. He described his Spacetime architecture, which is based on the geometry of the same name, developed by Hermann Minkowski to elegantly capture Einstein’s theory of special relativity. Within the first 30 seconds or so of hearing what Steve was up to, I knew I wanted to help.

The vision Steve shared with me included not only a better alternative for hardware designers (programmed in hardware languages like Verilog and VHDL), but also a platform for massively parallel execution of software written in a purely functional language. Lately, I’ve been working mainly on this latter aspect, and specifically on the problem of how to compile Haskell. Our plan is to develop the Haskell compiler openly and encourage collaboration. If anything you see in this blog series interests you, and especially if you have advice or you’d like to collaborate on the project, please let me know.

In my next series of blog posts, I’ll describe some of the technical ideas I’ve been working with for compiling Haskell for massively parallel execution. For now, I want to introduce a central idea I’m using to approach the problem.

Lambda calculus and cartesian closed categories

I’m used to thinking of the typed lambda calculi as languages for describing functions and other mathematical values. For instance, if the type of an expression e is Bool → Bool, then the meaning of e is a function from Booleans to Booleans. (In non-strict pure languages like Haskell, both Boolean types include ⊥. In a hypothetical pure, strict language, the range is extended to include ⊥, but the domain isn’t.)
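As a tiny illustration of the non-strict case (my own example, not from the original), a function of type Bool → Bool need not inspect its argument at all, so it can map even ⊥ to a defined result:

```haskell
-- A Bool -> Bool function that never inspects its argument.
-- Under non-strict semantics it maps even ⊥ (here, undefined) to True.
constTrue :: Bool -> Bool
constTrue _ = True

main :: IO ()
main = print (constTrue undefined)  -- prints True: the argument is never forced
```

In a strict language the same application would diverge, which is exactly the domain/range asymmetry noted above.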

However, there are other ways to interpret typed lambda-calculi.

You may have heard of “cartesian closed categories” (CCCs). A CCC is an abstraction with a small vocabulary and associated laws:

- The “category” part means we have a notion of “morphisms” (or “arrows”), each having a domain and codomain “object”. There is an identity morphism and an associative composition operator. If this description of morphisms and objects sounds like functions and types (or sets), it’s because functions and types are one example, with id and (∘).

- The “cartesian” part means that we have products, with projection functions and an operator to combine two functions into a pair-producing function. For Haskell functions, these operations are fst and snd, together with (&&&) from Control.Arrow.

- The “closed” part means that we have a way to represent morphisms via objects, referred to as “exponentials”. The corresponding operations are curry, uncurry, and apply. Since Haskell is a higher-order language, these exponential objects are simply (first-class) functions.
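The vocabulary above can be sketched as Haskell type classes. The class and method names here are my own illustrative choices (not a fixed API), shown together with the usual instance for functions:

```haskell
module CCC where

import Prelude hiding (id, (.), fst, snd, curry, uncurry)
import qualified Prelude as P

-- The "category" part: identity and associative composition.
class Category k where
  id  :: k a a
  (.) :: k b c -> k a b -> k a c

-- The "cartesian" part: projections and pairing.
class Category k => Cartesian k where
  fst   :: k (a, b) a
  snd   :: k (a, b) b
  (&&&) :: k a b -> k a c -> k a (b, c)

-- The "closed" part: exponentials.  For simplicity this sketch uses
-- (->) itself as the exponential object; a general version would
-- abstract over it.
class Cartesian k => Closed k where
  curry   :: k (a, b) c -> k a (b -> c)
  uncurry :: k a (b -> c) -> k (a, b) c
  apply   :: k (b -> c, b) c

-- Functions and types are one example:
instance Category (->) where
  id  = P.id
  (.) = (P..)

instance Cartesian (->) where
  fst = P.fst
  snd = P.snd
  (f &&& g) a = (f a, g a)

instance Closed (->) where
  curry   = P.curry
  uncurry = P.uncurry
  apply (f, x) = f x
```

The laws (e.g. id ∘ f = f, fst ∘ (f &&& g) = f) are not expressible in the class declarations themselves; instances are expected to satisfy them.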

A wonderful thing about the CCC interface is that it suffices to translate any lambda expression, as discovered by Joachim Lambek. In other words, lambda expressions can be systematically translated into the CCC vocabulary. Any (law-abiding) interpretation of that vocabulary is thus an interpretation of the lambda calculus.
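For example (a tiny translation done by hand, not the full algorithm), the lambda expression \p → (snd p, fst p) becomes the variable-free CCC term snd &&& fst:

```haskell
import Control.Arrow ((&&&))

-- Lambda form, with an explicit bound variable:
swapL :: (a, b) -> (b, a)
swapL = \p -> (snd p, fst p)

-- The same morphism in CCC vocabulary, with no bound variables:
swapC :: (a, b) -> (b, a)
swapC = snd &&& fst

main :: IO ()
main = print (swapC (1 :: Int, 'x'))  -- prints ('x',1)
```

Because swapC mentions only the CCC vocabulary, any law-abiding interpretation of that vocabulary gives it a meaning, not just the function interpretation.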

Besides intellectual curiosity, why might one care about interpreting lambda expressions in terms of CCCs other than the one we usually think of for functional programs? I got interested because I’ve been thinking about how to compile Haskell programs to “circuits”, both the standard static kind and more dynamic variants. Since Haskell is a typed lambda calculus, if we can formulate circuits as a CCC, we’ll have our Haskell-to-circuit compiler. Other interpretations enable analysis of timing and demand propagation (including strictness).
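To give a flavor of what a non-function interpretation might look like (a toy of my own invention, not Tabula’s actual circuit representation), a “morphism” could carry only an estimated gate count, with phantom types doing the domain/codomain bookkeeping:

```haskell
-- Toy interpretation of the CCC vocabulary: a "morphism" is just a
-- gate-count estimate.  The type parameters are phantom; they track
-- domain and codomain without holding values.
newtype Cost a b = Cost Int deriving (Eq, Show)

idC :: Cost a a
idC = Cost 0                             -- identity is free

compC :: Cost b c -> Cost a b -> Cost a c
compC (Cost g) (Cost f) = Cost (g + f)   -- composition adds gate counts

exl :: Cost (a, b) a
exl = Cost 0                             -- projections are just wiring

fork :: Cost a b -> Cost a c -> Cost a (b, c)
fork (Cost f) (Cost g) = Cost (f + g)    -- run both sides in parallel

-- A primitive with nonzero cost:
notGate :: Cost Bool Bool
notGate = Cost 1

main :: IO ()
main = print (compC notGate notGate)  -- prints Cost 2
```

A real circuit interpretation would build a netlist rather than a number, and a timing interpretation would track depth rather than size, but the shape is the same: reinterpret the same small vocabulary.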

Some future topics

- Converting lambda expressions to CCC form.

- Optimizing CCC expressions.

- Plugging into GHC, to convert from Haskell source to CCC.

- Applications of this translation, including the following:
  - Circuits
  - Timing analysis
  - Strictness/demand analysis
  - Type simplification (normalization)

