In forthcoming posts I’m going to be making heavy use of constraint programming, so first I’m going to do an intro to constraint programming as I see it, so everyone knows what I’m talking about before I start making controversial statements. The code’s all in Haskell, which is my usual language of choice. If you’re feeling pedantic, assume I’ve imported the basics (Data.Map, Control.Monad etc.) and hidden anything that’d conflict, so that the snippets typecheck – there’s a full source file to play with at the bottom if you want to go further.

Some basic definitions

Constraint programming isn’t a paradigm itself, but it can be viewed as a "paradigm transformer" – there are such things as Constraint Logic Programming, Constraint Functional Programming and Constraint Imperative Programming. What distinguishes them from their underlying paradigms is the ability to work with constraints on some kind of data, usually to either find a solution to those constraints or check whether such a solution can exist. Time to define a few terms (warning: I may be using them a little idiosyncratically or insufficiently generally!)

Constraint Domain: a specific setting for constraint problems. A type or types of data that may be constrained and the constraints (possibly: primitive constraints) that are allowed on them.

Constraint: a predicate that may or may not be satisfied by the values it constrains.

Constraint Problem: a specific problem within a constraint domain, with specific constraints.

Constraint Space: an abstract space defined by the variables in a constraint problem. For example, given three real numbers x, y and z and inequalities on them, the canonical constraint space for them is ℝ³. Inequalities ‘cut’ the space in half, one side satisfiable and the other unsatisfiable – how’s that for intuition? In some constraint domains there may not be an easily-envisageable constraint space, in that the set or universe of potential solutions may not be particularly spatial, or there may not be a space that’s particularly intuitive to work with. Depending on your background, we’ll be dealing with one of those in this article.

There are further concepts to come, but I’m going to leave those until they come up – time for a worked example.

Behold my domain!

Okay, first let’s pick a constraint domain. I’m going to go for a classic: equality on the Herbrand universe of terms. Logic programming fans will recognise this as the implicit constraint domain of conventional logic programming à la Prolog. A datatype:

data Term = Term !Label ![Term] deriving Show

Note the strictness: we only want to deal with finite terms. Label can be anything that provides an infinite alphabet, supports equality and is a member of Show. Now we need terms with variables so we can start talking about non-trivial constraints. There are more typeful ways of doing this, but I’m going to go with modifying Term to this:

data Term = Term !Label ![Term] | Metavar Integer deriving Show

Why Metavar? Well, the language of constraints-on-Terms-with-variables is a metalanguage for talking about our original Terms – so those variables are metavariables, not variables of the Term language per se. This is also why there are more typeful ways to handle the problem: the code above no longer has a type of concrete Terms without metavariables in them. It’ll do for today though. If we have variables, we’re going to need some substitution machinery for them:

type Assignment = Map Integer Term

substTerm a (Term l cs)    = Term l $ map (substTerm a) cs
substTerm a m@(Metavar i)  = case lookup i a of
  Nothing -> m
  Just t  -> substTerm a t

Astute readers will notice that an assignment is equivalent to a special case of a set of equality constraints in which one side of each equation is a lone metavariable. It’s important that we don’t allow circular references into our Assignments.
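Since the full unification code is deferred to the source file, here’s a minimal sketch of the occurs check that keeps an Assignment acyclic – assuming the Term and Assignment definitions above, with Label specialised to String purely so the example is self-contained:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Label specialised to String purely for this standalone sketch
data Term = Term !String ![Term] | Metavar Integer deriving Show

type Assignment = Map Integer Term

-- Would binding metavariable i to t create a cycle? It would exactly
-- when i already occurs in t, chasing existing bindings as we go.
occurs :: Assignment -> Integer -> Term -> Bool
occurs a i (Term _ cs) = any (occurs a i) cs
occurs a i (Metavar j)
  | i == j    = True
  | otherwise = maybe False (occurs a i) (Map.lookup j a)
```

A unifier would consult occurs before extending the Assignment, rejecting bindings like x := S x that substTerm would otherwise loop on.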

It’s all Greek to the machine

So now we’ve got that much, let’s build ourselves an EDSL! You may ask where the constraints are: hold that thought for now. Instead we’re going to aim for an initial implementation that solves all constraints as it goes, à la Prolog. I’d also like the language to automatically substitute in solutions as it goes, so that when you run it you get a concrete solution straight away. That means we need this typeclass and the following instances:

class Embeddable a where
  subst :: Assignment -> a -> a

instance Embeddable () where
  subst _ v = v

instance Embeddable Term where
  subst = substTerm

instance (Embeddable a, Embeddable b) => Embeddable (a -> b) where
  subst = const id

Why Embeddable? These are the Haskell values we’re going to allow embedded into our own language, itself embedded back into Haskell – think Applicative’s pure or Monad’s return. Most of the EDSLs we think to grace with the name are actually mutually embedded with their host language to some extent – it’s just so convenient to steal from! I’m a fan of deep embeddings when I’m explaining and/or working on the semantics of an EDSL, so syntax next:

data Code a where
  Pure   :: Embeddable a => a -> Code a
  Ap     :: Embeddable b => Code (a -> b) -> Code a -> Code b
  Bind   :: (Embeddable a, Embeddable b) => Code a -> (a -> Code b) -> Code b
  NewVar :: Code Term
  Equate :: Term -> Term -> Code Term

If you didn’t see this coming, we’re effectively building an applicative functor and a monad – only on a subcategory of Haskell’s defined by Embeddable, instead of on the entirety of it. Why do I have both Ap and Bind, analogues of <*> and >>=? Mostly historical accident, but I also wanted to highlight a design decision – different levels of power are appropriate to different constraint problems, as we’ll see with one implementation below. NewVar is used to generate fresh metavariables, as we can’t do that automatically, whereas Equate is used to assert equality constraints. At this point we’re ready for multiple implementations of the language’s semantics. Here’s one for starters:

runOnFly :: Assignment -> Integer -> Code a
         -> Either UnificationError (a, Assignment, Integer)
runOnFly a v (Pure t) = return (subst a t, a, v)
runOnFly a v (Ap f p) = do
  (rf, a', v')   <- runOnFly a v f
  (rp, a'', v'') <- runOnFly a' v' p
  return (subst a'' (rf rp), a'', v'')
runOnFly a v (Bind p f) = do
  (rp, a', v') <- runOnFly a v p
  runOnFly a' v' (f $ subst a' rp)
runOnFly a v NewVar = return (Metavar v, a, v + 1)
runOnFly a v (Equate l r) = case unify a l r of
  Left e         -> Left e
  Right (u, a')  -> return (u, a', v)

The parameters a and v are the current Assignment containing the solution-to-date during runtime and the first free metavariable counting from 0, both of which are handled as if they’re state. The implementation uses the Either monad, as it’s entirely possible for a new constraint to be unsatisfiable given the current Assignment and make the problem the program describes impossible to solve. Arguably I should’ve newtyped up a couple of StateTs on top of Either – if only so I can’t mess up the state handling! NewVar’s case uses simple addition, whereas Equate calls a unification algorithm on the two Terms in the context of the current Assignment. Unification’s a standard way to solve this kind of constraint, but it’s pretty tedious so I won’t list the code here – see the bottom for a source file including it as well as everything else in this article. Oh yes, I almost missed it in amongst detailing the obvious: check out the calls to subst in the Pure, Ap and Bind cases! This is what we gave up our Monad and Applicative instances for, but it’s kinda neat, huh? GHC 7.4’s new ConstraintKinds feature gives us a good way to combine typeclass-defined subcategories with instances of other typeclasses – hopefully we’ll see appropriate generalisations of Functor and the rest standardised on Hackage soon so we can perform such tricks freely.
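For the impatient, here’s one possible shape for that unification algorithm – a standalone sketch rather than the article’s actual code, with Label specialised to String and a made-up error type:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

data Term = Term !String ![Term] | Metavar Integer deriving (Show, Eq)
type Assignment = Map Integer Term
data UnificationError = Mismatch Term Term | Occurs Integer Term deriving Show

-- Chase a metavariable through the assignment to its representative.
walk :: Assignment -> Term -> Term
walk a t@(Metavar i) = maybe t (walk a) (Map.lookup i a)
walk _ t             = t

-- Unify two terms under an assignment, extending it with new bindings.
unify :: Assignment -> Term -> Term -> Either UnificationError (Term, Assignment)
unify a l r = case (walk a l, walk a r) of
  (Metavar i, t) -> bindVar a i t
  (t, Metavar i) -> bindVar a i t
  (tl@(Term fl ls), tr@(Term fr rs))
    | fl == fr && length ls == length rs -> go a ls rs tl
    | otherwise                          -> Left (Mismatch tl tr)
  where
    go a' []     []     res = Right (res, a')
    go a' (x:xs) (y:ys) res = do (_, a'') <- unify a' x y
                                 go a'' xs ys res
    go _  _      _      _   = error "unreachable: lengths already checked"
    bindVar a' i t
      | t == Metavar i = Right (t, a')       -- same variable: nothing to do
      | occurs a' i t  = Left (Occurs i t)   -- reject circular bindings
      | otherwise      = Right (t, Map.insert i t a')
    occurs a' i (Term _ cs) = any (occurs a' i) cs
    occurs a' i (Metavar j) = i == j || maybe False (occurs a' i) (Map.lookup j a')
```

The occurs check in bindVar is what keeps the Assignment free of the circular references warned about earlier.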

The decomposing remains of a problem?

There’s another possible set of semantics for this language though: rather than solving constraints as we go, we could just use the language for constraint generation and let the user pass the resulting problem to satisfiability checkers, solvers and other analyses as appropriate. It’s a lot easier to "debug" the satisfiability of a constraint problem without having to deal with the program that generated it as well, so this is an idea with genuine merit. We’re going to have to talk about constraints explicitly in the code now:

data Constraint = Term := Term deriving Show

I’m going to be lazy and use lists as a representation for constraint sets. We don’t have to use sets at all though – if we treat equality constraints as predicates that are true or false for specific concrete assignments to all the metavariables, we can also treat the entire constraint problem as one big constraint – the conjunction of all the constraints that would otherwise form the constraint set. To go a step further, constraint domains are (or correspond to) intentionally-limited logics! This is a theme I’ll be using in later posts. Code time!
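To make the “one big conjunction” view concrete, here’s a standalone sketch – ground and satisfies are names invented for this example, not part of the article’s code. A concrete assignment satisfies the problem precisely when it satisfies every constraint at once:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

data Term = Term !String ![Term] | Metavar Integer deriving (Show, Eq)
data Constraint = Term := Term deriving Show
type Assignment = Map Integer Term

-- Substitute an (acyclic) assignment all the way through a term.
ground :: Assignment -> Term -> Term
ground a (Term l cs)   = Term l (map (ground a) cs)
ground a m@(Metavar i) = maybe m (ground a) (Map.lookup i a)

-- The constraint set viewed as a single predicate: the conjunction
-- of its members, each an equality between grounded terms.
satisfies :: Assignment -> [Constraint] -> Bool
satisfies a = all (\(l := r) -> ground a l == ground a r)
```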

runGathering :: Integer -> Code a -> (a, Integer, [Constraint])
runGathering v (Pure t) = (t, v, [])
runGathering v (Ap f p) =
  let (rf, v', csf)  = runGathering v f
      (rp, v'', csp) = runGathering v' p
  in (rf rp, v'', csf ++ csp)
runGathering v (Bind p f) =
  let (rp, v', cs)   = runGathering v p
      (rf, v'', cs') = runGathering v' (f rp)
  in (rf, v'', cs ++ cs')
runGathering v NewVar = (Metavar v, v + 1, [])
runGathering v (Equate l r) = (l, v, [l := r])

This time around I should’ve used a StateT for the metavariable counter and a WriterT to collect Constraints in. This isn’t just a new operational behaviour for the same language – arguably it’s a different language. Why? Because Bind and Ap see different results than they would with runOnFly – they don’t get to observe the results of solving.

Decision time!

Prolog fans will be wondering something by now. Our runOnFly implementation might smell a little bit familiar, but it can’t branch – where’s the disjunction? Well, there’s no reason we can’t have it and still be doing constraint programming, so here goes:

data Code a where
  ...
  Or :: Embeddable a => Code a -> Code a -> Code a
  ...

runOnFly a v (Or l r) = case runOnFly a v l of
  result@Right{} -> result
  Left{}         -> runOnFly a v r

Nothing special here – except that runGathering can’t handle Or because it doesn’t do any solving to branch on. This suggests a new language feature and a third language implementation – why not leave when to solve up to the program?

data Code a where
  ...
  Solve :: Code ()
  ...

data ConstraintSet = CS { assumptions :: Assignment
                        , constraints :: [Constraint]
                        } deriving Show

emptyConstraintSet = CS Data.Map.empty []

solveConstraints :: ConstraintSet -> Either UnificationError ConstraintSet
solveConstraints (CS a cs) = solveConstraints' a cs
  where solveConstraints' a [] = return (CS a [])
        solveConstraints' a ((l := r) : cs) = do
          (u, a') <- unify a l r
          solveConstraints' a' cs

What’s with ConstraintSet? Well, it’s partitioned into an Assignment of solved "constraints" and a list of unsolved ones. When we call solveConstraints, it attempts to ‘move’ the unsolved ones into the solved set by unifying terms and seeing how that changes the Assignment. Feel free to complain at my use of general recursion!

run :: ConstraintSet -> Integer -> Code a
    -> Either UnificationError (a, ConstraintSet, Integer)
run cs v (Pure t) = return (subst (assumptions cs) t, cs, v)
run cs v (Ap f p) = do
  (rf, cs', v')   <- run cs v f
  (rp, cs'', v'') <- run cs' v' p
  return (subst (assumptions cs'') (rf rp), cs'', v'')
run cs v (Bind p f) = do
  (rp, cs', v') <- run cs v p
  run cs' v' (f $ subst (assumptions cs') rp)
run cs v NewVar = return (Metavar v, cs, v + 1)
run cs v (Equate l r) =
  let cs' = CS (assumptions cs) (l := r : constraints cs)
  in return (subst (assumptions cs) l, cs', v)
run cs v (Or l r) = case run cs v l of
  result@Right{} -> result
  Left{}         -> run cs v r
run cs v Solve = do
  cs' <- solveConstraints cs
  return ((), cs', v)

Bet you’re wishing I was using monad transformers by now, huh? Now we pass along a ConstraintSet in a state-like fashion. The constraints half of it behaves in a State-like fashion throughout – this is a design decision: we could instead have made Solve take a computation as a parameter and only solved the constraints generated inside it. In fact, either way round you can find use cases that the decision breaks. I’m not going to implement further here, but clearly there are cases where you might want both to pass constraint problems around first-class (which we get for free) and to access the current constraint problem/set (another command for our language).
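For anyone wishing along with me: the plumbing run threads by hand is exactly StateT over Either. A hand-rolled sketch of that stack (RunM, get, put and failWith are names invented for this example – the state s would be the ConstraintSet/counter pair and e the UnificationError):

```haskell
-- StateT s (Either e) by hand: a state transformer that can fail.
newtype RunM e s a = RunM { runRunM :: s -> Either e (a, s) }

instance Functor (RunM e s) where
  fmap f (RunM g) = RunM $ \s -> do (a, s') <- g s
                                    return (f a, s')

instance Applicative (RunM e s) where
  pure a = RunM $ \s -> Right (a, s)
  RunM gf <*> RunM ga = RunM $ \s -> do (f, s')  <- gf s
                                        (a, s'') <- ga s'
                                        return (f a, s'')

instance Monad (RunM e s) where
  RunM g >>= f = RunM $ \s -> do (a, s') <- g s
                                 runRunM (f a) s'

-- read and replace the threaded state
get :: RunM e s s
get = RunM $ \s -> Right (s, s)

put :: s -> RunM e s ()
put s = RunM $ \_ -> Right ((), s)

-- abort the whole computation, as a failed unification does
failWith :: e -> RunM e s a
failWith e = RunM $ \_ -> Left e
```

With this, the NewVar case is just do { v <- get; put (v + 1); return (Metavar v) } and a failed unification is failWith e – no more threading a', v'' tuples by hand.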

A little test

Finally, we’d better find some problems to solve, hadn’t we?

{- Test code - let's do a little bit of boring Peano arithmetic!
   No logic programming here, the constraints are just pattern-matching.
   Feel free to write it the other way if you can stomach the awful syntax. -}
solve t = t `Bind` \result -> (Solve `Bind` (const $ Pure result))

{- Peano numbers, matching, addition -}
succ t = Term "S" [t]
zero = Term "Z" []

integerToNatTerm :: Integer -> Term
integerToNatTerm i | i > 0     = succ (integerToNatTerm (i - 1))
                   | i == 0    = zero
                   | otherwise = error "integerToNatTerm: negative parameter"

natTermToInteger :: Term -> Integer
natTermToInteger (Term "Z" [])  = 0
natTermToInteger (Term "S" [n]) = 1 + natTermToInteger n
natTermToInteger _ = error "natTermToInteger: parameter isn't a peano natural"

matchZero t = solve $ Equate t zero
matchSucc t = NewVar `Bind`
  (\n -> solve (Equate t (succ n)) `Bind` (const $ Pure n))

add l r = (matchZero l `Bind` (const $ Pure r))
          `Or` (matchSucc l `Bind` \l' -> add l' (succ r))

All that code just to do a little lousy addition?! I’m afraid so. Evidence that it works:

*Herbrand> let t = add (integerToNatTerm 3) (integerToNatTerm 4)
*Herbrand> run emptyConstraintSet 0 (solve t)
Right (Term "S" [Term "S" [Term "S" [Term "S" [Term "S" [Term "S" [Term "S" [Term "Z" []]]]]]]],
       CS {assumptions = fromList [(0, Term "S" [Term "S" [Term "Z" []]]),
                                   (1, Term "S" [Term "Z" []]),
                                   (2, Term "Z" [])],
           constraints = []},
       3)

You can thank me for the manually added layout later. Feel free to feed the result term into natTermToInteger to confirm that it is indeed 7. If we’d written our addition in a Prolog-like style, asserting a relation rather than solving a function, that would have arguably embedded something interesting: a non-primitive constraint. Admittedly not a particularly strong argument given that the constraint system itself not only doesn’t have an entity matching the addition but also doesn’t have the power to encode addition without the externally-imposed branching of our program. But it’s a nice idea for other constraint systems, no?
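As a sanity check on the encoding alone: the Peano helpers round-trip on ground terms without any of the constraint machinery. A stripped-down, standalone version (Label as String, metavariables dropped since none are needed for ground terms):

```haskell
import Prelude hiding (succ)

-- ground Peano terms only: no metavariables required here
data Term = Term !String ![Term] deriving Show

succ :: Term -> Term
succ t = Term "S" [t]

zero :: Term
zero = Term "Z" []

integerToNatTerm :: Integer -> Term
integerToNatTerm i
  | i > 0     = succ (integerToNatTerm (i - 1))
  | i == 0    = zero
  | otherwise = error "integerToNatTerm: negative parameter"

natTermToInteger :: Term -> Integer
natTermToInteger (Term "Z" [])  = 0
natTermToInteger (Term "S" [n]) = 1 + natTermToInteger n
natTermToInteger _ = error "natTermToInteger: parameter isn't a peano natural"
```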

Closing up

So that’s constraint programming for you – you take a metametalanguage to write programs about problems in a metalanguage about the data you actually care about, leaving you with a 200 line source file that can tell you 3+4=7! Which makes Haskell a metametametalanguage because we used it as a host – but don’t worry about that, it got infinitely meta the moment GHC existed. Incidentally, if that’s the first time this article made your head asplode then just take a breather – the world around us is far more complicated, I’m just being a little facetious about pointing it out and you’re doing just fine. Thankfully all this metaness doesn’t actually leave us with nothing but our heads forming a recursive knot. Constraint problems are genuine serious business – type systems might be my application, but many will’ve been solved in manufacturing pretty much everything you buy. And perhaps when you did your budget to buy it with too. Send more money?

So what did we learn? We learned about constraint domains (of which there are many and I covered exactly one), constraint problems, constraint spaces and constraints themselves. We learned that a problem may or may not be satisfiable. We didn’t really talk enough about solving them, but we know it’s about finding values for all those variables. We also learned that constraints can be treated as little logics – or not so little. Most importantly we learned that all this stuff exists and (check the links) has been worked on heavily – that there’s something worth learning about! Hopefully this’s been worth your time – catch you around.

Herbrand.hs – Haskell source file for everything in this article, plus all the bits I skipped

Monadic Constraint Programming package. I haven’t tried this myself, but I was impressed by the talk at Anglohaskell – well worth a look, and perhaps a quick search for the related papers.

Wikipedia articles: Constraint programming Unification (computer science)

Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi is an excellent book in general and has a good introduction to constraint programming. I don’t entirely agree with their notion of programming paradigms (it rules out half of why I’m interested in pure functional languages for example), but it’s certainly an advancement.

The Haskell implementation Helium is designed for beginners and aims to produce better error messages. The team’s research can be found here, and takes a constraint-based approach.

The HM(X) type system extends Hindley-Milner type inference by parameterising it on a constraint domain – examples include dimension analysis for physics and extensible records. There’s a chapter on it in ATTAPL, with an extended version available here.