
Haskell Communities and Activities Report

Thirteenth edition – December 22, 2007

Andres Löh (ed.)

Lloyd Allison

alpheccar

Tiago Miguel Laureano Alves

Krasimir Angelov

Apfelmus

Carlos Areces

Sengan Baring-Gould

Alistair Bayley

Clifford Beshers

Chris Brown

Bjorn Buckwalter

Andrew Butterfield

Manuel Chakravarty

Olaf Chitil

Duncan Coutts

Nils Anders Danielsson

Robert Dockins

Frederik Eaton

Keith Fahlgren

Jeroen Fokker

Simon Frankau

Leif Frenzel

Richard A. Frost

Clemens Fruhwirth

Andy Gill

George Giorgidze

Daniel Gorin

Martin Grabmüller

Murray Gross

Jurriaan Hage

Kevin Hammond

Bastiaan Heeren

Christopher Lane Hinson

Guillaume Hoffmann

Paul Hudak

Liyang Hu

Graham Hutton

Wolfgang Jeltsch

Antti-Juhani Kaijanaho

Oleg Kiselyov

Dirk Kleeblatt

Lennart Kolmodin

Slawomir Kolodynski

Eric Kow

Huiqing Li

Andres Löh

Rita Loogen

Salvador Lucas

Ian Lynagh

Ketil Malde

Christian Maeder

Simon Marlow

Steffen Mazanek

Conor McBride

Neil Mitchell

Andy Adams-Moran

Dino Morelli

Yann Morvan

Matthew Naylor

Rishiyur Nikhil

Stefan O’Rear

Simon Peyton-Jones

Dan Popa

Claus Reinke

David Roundy

Alberto Ruiz

David Sabel

Uwe Schmidt

Ganesh Sittampalam

Anthony Sloane

Dominic Steinitz

Don Stewart

Jennifer Streb

Martin Sulzmann

Doaitse Swierstra

Wouter Swierstra

Hans van Thiel

Henning Thielemann

Peter Thiemann

Simon Thompson

Phil Trinder

Andrea Vezzosi

Miguel Vilaca

Joost Visser

Janis Voigtländer

Edsko de Vries

Malcolm Wallace

Mark Wassell

Ashley Yakeley

Bulat Ziganshin

Preface

This is the 13th edition of the Haskell Communities and Activities Report, and it arrives just in time for the break between the years – if you are bored by all the free time you might suddenly have, why not sit down and study what other Haskellers have been up to during the past six months?

As always, entries that are completely new (or have been revived after disappearing temporarily from the report) are formatted using a blue background. Updated entries have a header with a blue background. In most cases, I have dropped entries that have not been changed for a year or longer.

Many thanks to all the contributors. A special “thank you” to the many contributors who have helped reduce my workload this year by sending their entries in the preferred LaTeX style – more than ever before: this has made assembling the report an even greater pleasure!

An interesting idea can be found in the Ansemond LLC entry (→7.1.1), where a screenshot is included. I would like the report to become more colourful and have more pictures. So, for future editions, if you would like to include a screenshot along with your Haskell-related tool or application, please send it along with your entry.

Many Haskell projects exist now, and most of them seem to be looking for developers. If you are an enthusiastic Haskell programmer, please consider supporting one of the existing projects by offering your help, and please don’t overlook some of the “older”, yet still very successful, projects such as Darcs (→6.13) and Cabal (→4.1.1) amid the continuous stream of new project and software announcements.

Despite the fun it has been, my time as editor of the Haskell Communities and Activities Report is coming to an end. I am therefore looking for a new editor who would like to take over and continue the report, possibly adapting it to her or his own vision. Please contact me if you are interested. A separate announcement will follow.

If a new editor can be found, we might prepare the next edition together, probably around May, so watch the mailing lists around this time for announcements – we continue to depend and rely on your contributions!

Feedback is of course very welcome <hcar at haskell.org>. Enjoy the Report!

Andres Löh, Universiteit Utrecht, The Netherlands

1 General

1.1 HaskellWiki and haskell.org
Report by: Ashley Yakeley

HaskellWiki is a MediaWiki installation running on haskell.org, including the haskell.org “front page”. Anyone can create an account and edit and create pages. Examples of content include:

- Documentation of the language and libraries
- Explanation of common idioms
- Suggestions and proposals for improvement of the language and libraries
- Descriptions of Haskell-related projects
- News and notices of upcoming events

We encourage people to create pages to describe and advertise their own Haskell projects, as well as to add to and improve the existing content. All content is submitted and available under a “simple permissive” license (except for a few legacy pages).

In addition to HaskellWiki, the haskell.org website hosts some ordinary HTTP directories. The machine also hosts mailing lists. There is plenty of space and processing power for just about anything that people would want to do there: if you have an idea for which HaskellWiki is insufficient, contact the maintainers, John Peterson and Olaf Chitil, to get access to this machine.

Further reading:
http://haskell.org/

http://haskell.org/haskellwiki/Mailing_Lists

1.2 The #haskell IRC channel

The #haskell IRC channel is a real-time text chat where anyone can join to discuss Haskell. The channel has continued to grow in the last six months, now averaging around 390 users, with a record of 436 users. It is one of the largest channels on Freenode. The channel is home to hpaste and lambdabot (→6.14), two useful Haskell bots. Point your IRC client to irc.freenode.net and join the #haskell conversation!

For non-English conversations about Haskell there are now:

- #haskell.de – German speakers
- #haskell.dut – Dutch speakers
- #haskell.es – Spanish speakers
- #haskell.fi – Finnish speakers
- #haskell.fr – French speakers
- #haskell.hr – Croatian speakers
- #haskell.it – Italian speakers
- #haskell.jp – Japanese speakers
- #haskell.no – Norwegian speakers
- #haskell_ru – Russian speakers
- #haskell.se – Swedish speakers

Related Haskell channels are now emerging, including:

- #haskell-overflow – Overflow conversations
- #haskell-blah – Haskell people talking about anything except Haskell itself
- #gentoo-haskell – Gentoo/Linux specific Haskell conversations (→7.4.3)
- #haskell-books – Authors organizing the collaborative writing of the Haskell wikibook
- #darcs – Darcs revision control channel (written in Haskell) (→6.13)
- #ghc – GHC developer discussion (→2.1)
- #happs – HAppS Haskell Application Server channel
- #xmonad – xmonad, a tiling window manager written in Haskell (→6.3)

Further reading:
More details at the #haskell home page: http://haskell.org/haskellwiki/IRC_channel

1.3 Planet Haskell

Planet Haskell is an aggregator of Haskell people’s blogs and other Haskell-related news sites. As of mid-November 2007, content from 78 blogs and other sites is being republished in a common format. A common misunderstanding about Planet Haskell is that it republishes only Haskell content. That is not its mission. A Planet shows what is happening in the community, what people are thinking about or doing. Thus Planets tend to contain a fair bit of “off-topic” material. Think of it as a feature, not a bug. For information on how to get added to Planet Haskell, please read http://planet.haskell.org/policy.html.

Further reading:
http://planet.haskell.org/

The Haskell Weekly News (HWN) is a weekly newsletter covering developments in Haskell. Content includes announcements of new projects, jobs, discussions from the various Haskell communities, notable project commit messages, Haskell in the blogspace, and more. The Haskell Weekly News also publishes the latest releases uploaded to Hackage. It is published in HTML form on The Haskell Sequence, via mail on the Haskell mailing list, on Planet Haskell (→1.3), and via RSS. Headlines are published on haskell.org (→1.1).

Further reading:
Archives and more information can be found at: http://www.haskell.org/haskellwiki/Haskell_Weekly_News

There are plenty of academic papers about Haskell and plenty of informative pages on the HaskellWiki. Unfortunately, there’s not much between the two extremes. That’s where The Monad.Reader tries to fit in: more formal than a wiki page, but more casual than a journal article. There are plenty of interesting ideas that maybe don’t warrant an academic publication – but that doesn’t mean these ideas aren’t worth writing about! Communicating ideas to a wide audience is much more important than concealing them in some esoteric journal. Even if it’s all been done before in the Journal of Impossibly Complicated Theoretical Stuff, explaining a neat idea about ‘warm fuzzy things’ to the rest of us can still be plain fun. The Monad.Reader is also a great place to write about a tool or application that deserves more attention. Most programmers don’t enjoy writing manuals; writing a tutorial for The Monad.Reader, however, is an excellent way to put your code in the limelight and reach hundreds of potential users.

Since the last HCAR there have been two new issues, including a special issue about this year’s Summer of Code. I’m always interested in new submissions, whether you’re an established researcher or a fledgling Haskell programmer. Check out the Monad.Reader homepage for all the information you need to start writing your article.

Further reading:
All the recent issues and the information you need to start writing an article are available from: http://www.haskell.org/haskellwiki/The_Monad.Reader.

1.5 Books and tutorials

1.5.1 Programming in Haskell

Haskell is one of the leading languages for teaching functional programming, enabling students to write simpler and cleaner code, and to learn how to structure and reason about programs. This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles via carefully chosen examples. Each chapter includes exercises that range from the straightforward to extended projects, plus suggestions for further reading on more advanced topics. The presentation is clear and simple, and benefits from having been refined and class-tested over several years. Features include: freely accessible PowerPoint slides for each chapter; solutions to exercises and examination questions (with solutions) available to instructors; downloadable code that is fully compliant with the latest Haskell release.

Publication details: Published by Cambridge University Press, 2007. Paperback: ISBN 0521692695; Hardback: ISBN 0521871727; eBook: ISBN 051129218X.

In-depth review: Duncan Coutts, The Monad.Reader, http://www.haskell.org/sitewiki/images/0/03/TMR-Issue7.pdf

Further information: http://www.cs.nott.ac.uk/~gmh/book.html

1.5.2 Haskell Wikibook

The goal of the Haskell wikibook project is to build a community textbook about Haskell that is at once free (as in freedom and in beer), gentle, and comprehensive. We think that the many marvelous ideas of lazy functional programming can, and thus should, be accessible to everyone in a central place. Since the last report, the wikibook has been progressing slowly but steadily. A chapter about applicative functors has been added, the module about monads is being rewritten, and comprehensive material about graph reduction and lazy evaluation is beginning to emerge. Thanks to the authors and to the many contributors who spot mistakes and ask those questions we’d never thought of!

Further reading:
http://en.wikibooks.org/wiki/Haskell

Mailing list: <wikibook at haskell.org>

1.5.3 Gtk2Hs tutorial
Report by: Hans van Thiel

Part of the original GTK+ 2.0 tutorial by Tony Gale and Ian Main has been adapted to Gtk2Hs (→4.8.3), the Haskell binding to the GTK GUI library. The Gtk2Hs tutorial assumes intermediate-level Haskell programming skills, but no prior GUI programming experience. See: http://darcs.haskell.org/gtk2hs/docs/tutorial/Tutorial_Port/

Available, at the time of writing (November 2007):

2. Getting Started
3. Packing
  3.1 Packing Widgets
  3.2 Packing Demonstration Program
  3.3 Packing Using Tables
4. Miscellaneous Widgets
  4.1 The Button Widget
  4.2 Adjustments, Scale and Range
  4.3 Labels
  4.4 Arrows and Tooltips
  4.5 Dialogs, Stock Items and Progress Bars
  4.6 Text Entries and Status Bars
  4.7 Spin Buttons
5. Aggregated Widgets
  5.1 Calendar
  5.2 File Selection
  5.3 Font and Color Selection
  5.4 Notebook
6. Supporting Widgets
  6.1 Scrolled Windows
  6.2 EventBoxes and ButtonBoxes
  6.3 The Layout Container
  6.4 Paned Windows and Aspect Frames

The completed tutorial will consist of ten or more chapters and will also build on “Programming with gtkmm” by Murray Cumming et al. and the Inti (Integrated Foundation Classes) tutorial by the Inti team. Completion is expected in Q2 2008. The Glade tutorial, an introduction to visual Gtk2Hs programming, has been updated to Glade 3 by Alex Tarkovsky. It is available at: http://haskell.org/gtk2hs/docs/tutorial/glade/

2 Implementations

2.1 GHC

Lots has happened on the GHC front over the last few months. We released GHC 6.8.1 on 3 November 2007. GHC now has so many users, and such a large feature “surface area”, that simply getting to the point where we can make a release is becoming quite a challenge. Indeed, a good deal of our effort in the last six months has been in the form of consolidation: fixing bugs and solidifying what we have. These graphs show “tickets”, which include bugs, feature requests, and tasks. Of the “open tickets”, about half are bugs. Notice the big spike in “closed tickets” just before the 6.8.1 release! The major new features of 6.8.1 were described in the last issue of the Haskell Communities Newsletter, so we won’t repeat them here. Instead, here are some of the highlights of what we are working on now.

Syntactic and front-end enhancements

Several people have developed syntactic innovations, which are (or will shortly be) in the HEAD.

Three improvements to records:

Wild-card patterns for records. If you have

data T = MkT {x, y :: Int, z :: Bool}

then you can say

f :: T -> Int
f (MkT {..}) = x + y

g :: Int -> Int -> T
g x y = MkT {..}
  where
    z = x > y

The “..” in a pattern brings into scope all the fields of the record, while in a record construction it uses variables with those names to initialise the record fields. Here’s the user manual entry: http://www.haskell.org/ghc/dist/current/docs/users_guide/syntax-extns.html#record-wildcards.

Record puns is a slightly less abbreviated approach. You can write f like this:

f (MkT {x, y}) = x + y

whereas Haskell 98 requires you to write x=x, y=y in the pattern. Similarly in record construction.

Record field disambiguation is useful when there are several types in scope, all with the same field name. For example, suppose another data type S had an x field. Then if you write

h (MkT {x = p, y = q}) = ...

there is no doubt which x you mean, but Haskell 98 will complain that there are two xs in scope. Record field disambiguation just uses the constructor to decide which x you must mean.

None of these changes tackle the deeper issue of whether or not Haskell’s current approach to records is the Right Way; rather, the changes just make the current approach work a bit better. Furthermore, they are all somewhat controversial, because they make it harder to see where something comes into scope. Let’s see what you think!

View patterns

View patterns are implemented, by Dan Licata. Here’s a simple example:

polar :: Complex -> (Float, Float)

polar = ...

f :: Complex -> Bool

f (polar -> (r,theta)) = r <= 1

Here polar is an ordinary function, used to transform the Complex to polar form. The view pattern is the argument pattern for f. Many details here: http://hackage.haskell.org/trac/ghc/wiki/ViewPatterns

Generalised list comprehensions

Generalised list comprehensions (see “Comprehensive comprehensions: comprehensions with ‘Order by’ and ‘Group by’”, Phil Wadler and Simon Peyton Jones, Haskell Workshop 2007) have been implemented by Max Bolingbroke. Example:

[ (the dept, sum salary)

| (name, dept, salary) <- employees

, then group by dept
, then sortWith by (sum salary)
, then take 5 ]

More details here: http://hackage.haskell.org/trac/ghc/wiki/SQLLikeComprehensions

Quasi-quoting

We are keen to get Geoff Mainland’s quasi-quoting mechanism into GHC (see “Why It’s Nice to be Quoted: Quasiquoting for Haskell”, Geoffrey Mainland, Haskell Workshop 2007). Geoff is working on polishing it up.

Type system stuff

The big innovation in GHC’s type system has been the gradual introduction of indexed type families in the surface syntax, and of type equalities in the internal machinery. Indexed data families (called “associated data types” when declared in type classes) are fairly simple, and they work fine in GHC 6.8.1. Indexed type families (aka “associated type synonyms”) are a different kettle of fish, especially when combined with the ability to mention type equalities in overloaded types, thus:

f :: forall a b. (a ~ [b]) => ...
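To make the “associated type synonym” idea concrete, here is a minimal sketch of an indexed type family declared in a class. The Container class and the Elem family are illustrative names chosen for this example, not part of GHC’s libraries:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A class with an associated type synonym: each container type c
-- determines its own element type, Elem c.
class Container c where
  type Elem c
  empty  :: c
  insert :: Elem c -> c -> c
  toList :: c -> [Elem c]

-- Lists are one possible instance: the element type of [e] is e.
instance Container [e] where
  type Elem [e] = e
  empty  = []
  insert = (:)
  toList = id

main :: IO ()
main = print (toList (insert 1 (insert 2 empty) :: [Int]))  -- prints [1,2]
```

Note the annotation on the container: because type families are not injective, Elem c ~ Int alone does not determine c, so the instance must be pinned down at the use site.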

Tom Schrijvers spent three months at Cambridge, working on the theory and implementation of a type inference algorithm. As a result we have a partially-working implementation, and we understand the problem much better, but there is still much to do, on both the theoretical and practical fronts. It’s trickier than we thought! We have a short paper, “Towards open type functions for Haskell”, which describes some of the issues, and a wiki page (http://hackage.haskell.org/trac/ghc/wiki/TypeFunctions) that we keep up to date; it has a link to details of the implementation status. This is all joint work with Martin Sulzmann, Manuel Chakravarty, and Tom Schrijvers.

Parallel GC

Since 6.6, GHC has had support for running parallel Haskell on a multi-processor out of the box. However, the main drawback has been that the garbage collector is still single-threaded and stop-the-world. Since GC can commonly account for 30% of runtime (depending on the GC settings), this can seriously put a crimp in your parallel speedup. Roshan James did an internship at MSR in 2006 during which he and Simon M worked on parallelising the major collections in GHC’s generational garbage collector. We had a working algorithm, but didn’t observe much speedup on a multi-processor. Since then, Simon rewrote the implementation and spent a large amount of time with various profiling tools, which uncovered some cache-unfriendly behaviour. We are now seeing some speedup, but there is more tweaking and measuring still to be done. This parallel GC is likely to be in GHC 6.10.
Note that parallel GC is independent of whether the Haskell program itself is parallel – so even single-threaded Haskell programs (e.g. GHC itself) should benefit from it. The other side of the coin is to parallelise the minor collections. These are normally too small and quick to apply the full-scale parallel GC to, and yet the whole system still has to stop to perform a minor GC. The solution is almost certainly to allow each CPU to GC its own nursery independently. There is existing research describing how to do this, and we plan to try applying it in the context of GHC.

Data parallel Haskell

After many months of designing, re-designing, and finally implementing a vectorisation pass operating on GHC’s Core intermediate language, we finally have a complete path from nested data parallel array programs to the low-level, multi-threaded array library in package ndp. We are very excited about having reached this milestone, but the path is currently very thin, completely unoptimised, and requires a special Prelude mockup. More work is required before vectorisation is ready for end-users, but now that the core infrastructure is in place, we expect more rapid progress on user-visible features. Besides working on optimisations and completing the backend library, we still need to implement “Partial Vectorisation of Haskell Programs” (http://www.cse.unsw.edu.au/~chak/papers/CLPK07.html) and the treatment of unboxed types, which is crucial to vectorise the standard Prelude. Most of the code was written by Roman Leshchinskiy.

Back end stuff

GHC’s back end code generator has long been known to generate poor code, particularly for tight loops of the kind that are cropping up more and more in highly optimised Haskell code. So in typical GHC style, rather than patch the immediate problem, we’re redesigning the entire back end.
What we want to do:

- Split the STG-to-C-- code generator (codeGen) into two: one pass generating C-- with functions and calls, and a second pass (“CPS”) to manifest the stack and calling/return conventions.
- Redesign the calling and return conventions, so that we can use more registers for parameter passing (this will entail decommissioning the via-C code generator, but the native code generator will outperform it).
- Give the back end more opportunity to do low-level transformation and optimisation, e.g. by exposing loops at the C-- level.
- Implement more optimisations over C--.
- Plug in a better register allocator.

What we’ve done so far:

- Michael Adams came for an internship and built a CPS converter for GHC’s internal C-- data type.
- He had barely left when Norman Ramsey arrived for a short sabbatical. Based on his experience of building back ends for the Quick C-- compiler, he worked on a new zipper-based data structure to represent C-- code, and a sophisticated dataflow framework so that you can write new dataflow analyses in 30 minutes.
- Ben Lippmeier spent his internship building a graph-colouring, coalescing register allocator for GHC’s native code generator.

As a result, we now have lots of new code. Some of it is working; much of it is as yet un-integrated and un-tested. However, once we have it all glued back together, GHC will become a place where you can do Real Work on low-level optimisations and code generation. Indeed, John Dias (one of Norman’s graduate students) will spend six months here in 2008 to work on code generation. In short, GHC’s back end, which has long been a poor relation, is getting a lot of very sophisticated attention. Expect good things.

Libraries

GHC ships with a big bunch of libraries. That is good for users, but it has two bad consequences, both of which are getting worse with time. First, it makes it much harder to get a release together, because we have to test more and more libraries too. Second, it’s harder (or perhaps impossible) to upgrade the libraries independently from GHC. There’s a meta-issue too: it forces us into a gate-keeper role in which a library gets a big boost by being in the “blessed set” shipped with GHC. Increasingly, therefore, we are trying to decouple GHC from big libraries. We ship GHC with a set of “boot” libraries, without which GHC will not function at all, and “extra” libraries, which just happen to come with some binary distributions of GHC, and which can be upgraded separately at any time. To further that end, we’ve split the “base” package into a bunch of smaller packages, and expect to split it up further for GHC 6.10. This has led to lots of pain, because old programs that depended on ‘base’ now need to depend on other packages too; see upgrading packages (http://www.haskell.org/haskellwiki/Upgrading_packages) for details.
But it’s good pain, and matters should improve too as Cabal matures. We have been exploring possibilities for lessening the pain in 6.10: http://hackage.haskell.org/trac/ghc/wiki/PackageCompatibility. We have also devised a package versioning policy which will help future library upgrades: http://www.haskell.org/haskellwiki/Package_versioning_policy.

2.2 yhc

The York Haskell Compiler (yhc) is a fork of the nhc98 compiler, with goals such as increased portability, platform-independent bytecode, integrated Hat support, and generally being a cleaner code base to work with. Yhc now compiles and runs almost all Haskell 98 programs and has basic FFI support – the main thing missing is the haskell.org base libraries, which are being worked on. Since the last HCAR we have continued to improve our Yhc.Core library, making use of it in a number of projects (optimisers, analysis tools) to be made available shortly. The JavaScript back end has undergone lots of improvements, with new libraries for writing dynamic web pages.

Further reading:
Homepage: http://www.haskell.org/haskellwiki/Yhc

Darcs repository: http://darcs.haskell.org/yhc

2.3 The Helium compiler
Report by: Jurriaan Hage
Participants: Jurriaan Hage, Bastiaan Heeren

Helium is a compiler that supports only a subset of Haskell (e.g., n+k patterns are missing). Moreover, type classes are restricted to a number of built-in type classes, and all instances are derived. The advantage of Helium is that it generates novice-friendly error feedback. The Helium compiler is still available for download from http://www.cs.uu.nl/helium/. At this moment, we are working on making version 1.7 available. Internally little will change, except that the interface to Helium will be generalized so that multiple versions of Helium can run side by side (motivated by the development of Neon), and that the logging facility can be more easily used outside our own environment. The logs obtained in courses outside our university may help to improve the external validity of studies performed using Neon (→4.2.2).

3 Language

3.1 Variations of Haskell

3.1.1 Liskell
Report by: Clemens Fruhwirth
Status: experimental

Where Haskell consists of Haskell semantics plus Haskell syntax, Liskell consists of Haskell semantics plus Lisp syntax. Liskell is Haskell on the inside but looks like Lisp on the outside: its source code uses the typical Lisp syntax forms, namely symbolic expressions, which are distinguished by their fully parenthesized prefix notation. Liskell captures most Haskell syntax forms in this prefix notation; for instance, if x then y else z becomes (if x y z), while a + b becomes (+ a b). Aesthetics aside, there is another argument for Lisp syntax: meta-programming becomes easy. Liskell features a different meta-programming facility from the one found in Haskell with Template Haskell. Before turning the stream of lexed tokens into an abstract Haskell syntax tree, Liskell adds an intermediate processing data structure: the parse tree. The parse tree is essentially a string tree capturing the nesting of lists, with their enclosed symbols stored as the string leaves. The programmer can implement arbitrary code expansion and transformation strategies before the parse tree is seen by the compilation stage. After the meta-programming stage, Liskell turns the parse tree into a Haskell syntax tree before it is sent to the compilation stage. Thereafter the compiler treats it as regular Haskell code and produces Haskell-calling-convention-compatible output. You can use Haskell libraries from Liskell code and vice versa. Liskell is implemented as an extension to GHC and its darcs branch is freely available from the project’s website. The Liskell Prelude features a set of these parse tree transformations that enables traditional Lisp-styled meta-programming as with defmacro and backquoting.
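The parse-tree stage described above can be modelled with an ordinary Haskell rose tree. The types and the expandWhen transformation below are an illustrative sketch in the spirit of Liskell’s macro stage, not Liskell’s actual API:

```haskell
-- Illustrative model of a Lisp-style parse tree: leaves are symbols,
-- nodes capture parenthesized nesting. Not Liskell's actual types.
data ParseTree = Symbol String | List [ParseTree]
  deriving (Eq, Show)

-- A toy pre-compilation transformation: rewrite (when c e)
-- into (if c e ()), recursing into subtrees.
expandWhen :: ParseTree -> ParseTree
expandWhen (List [Symbol "when", c, e]) =
  List [Symbol "if", expandWhen c, expandWhen e, List []]
expandWhen (List ts) = List (map expandWhen ts)
expandWhen t         = t

main :: IO ()
main = print (expandWhen (List [Symbol "when", Symbol "p", Symbol "x"]))
```

Running main prints the rewritten tree, corresponding to (if p x ()).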
The project’s website demonstrates meta-programming application such as proof-of-concept versions of embedding Prolog inference, a minimalistic Scheme compiler and type-inference in meta-programming. The future development roadmap includes stabilization of its design, improving the user experience for daily programming – especially error reporting – and improving interaction with Emacs. Further reading http://liskell.org

3.1.2 Haskell on handheld devices
Report by: Anthony Sloane
Participants: Michael Olney
Status: unreleased

The project at Macquarie University (→7.3.6) to run Haskell on handheld devices based on Palm OS has a running implementation for small tests, but, like most ports of languages to Palm OS, we are dealing with memory allocation issues. Also, other higher-priority projects have now intervened, so this project is going into the background for a while.

3.2 Non-sequential Programming

3.2.1 GpH (Glasgow Parallel Haskell)

Status:
A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime system and language to improve performance and support new platforms are under development.

System Evaluation and Enhancement

A major revision of the parallel runtime environment for GHC 6.8 is currently under development. The GpH and Eden parallel Haskells share much of the implementation technology, and both are being used for parallel language research and in the SCIEnce project (see below).
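For readers unfamiliar with GpH: its coordination primitives, par and pseq, are exported by GHC’s GHC.Conc module, so the flavour of the programming model can be shown with plain GHC. The naive parallel Fibonacci below is a standard textbook illustration, not code from the GpH project itself:

```haskell
import GHC.Conc (par, pseq)

-- GpH-style parallelism: `x `par` e` sparks x for possible evaluation
-- on another core, then evaluates e; `pseq` fixes evaluation order so
-- y is forced before the sum demands x.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 20)  -- prints 6765
```

Sparks that are never picked up simply fizzle, so the program computes the same result whether or not it runs on multiple cores (compile with -threaded and run with +RTS -N to actually exploit parallelism).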

We have developed an adaptive runtime environment (GRID-GUM) for GpH on computational grids. GRID-GUM incorporates new load management mechanisms that cheaply and effectively combine static and dynamic information to adapt to the heterogeneous and high-latency environment of a multi-cluster computational grid. We have made comparative measurements of GRID-GUM’s performance on high/low latency grids and heterogeneous/homogeneous grids using clusters located in Edinburgh, Munich and Galashiels. Results are published in:

- A. Al Zain. Implementing High-Level Parallelism on Computational Grids. PhD thesis, Heriot-Watt University, 2006.
- A. Al Zain, P.W. Trinder, H.W. Loidl, G.J. Michaelson. Evaluating a High-Level Parallel Language (GpH) for Computational Grids. IEEE Transactions on Parallel and Distributed Systems (February 2008).

SMP-GHC, an implementation of GpH for multi-core machines, has been developed by Tim Harris, Satnam Singh, Simon Marlow and Simon Peyton Jones.

We are teaching parallelism to undergraduates using GpH at Heriot-Watt and Philipps-Universität Marburg.

GpH Applications

As part of the SCIEnce EU FP6 I3 project (026133) (→7.3.10) (April 2006 – April 2011) we use GpH and Eden to provide access to computational grids from Computer Algebra (CA) systems, including GAP, Maple, MuPAD, and KANT. We have implemented an interface, GCA, which orchestrates computational algebra components into a high-performance parallel application. GCA is capable of exploiting a variety of modern parallel/multicore architectures without any change to the underlying code. GCA is also capable of orchestrating heterogeneous computations across a high-performance computational Grid.

Implementations

The GUM implementation of GpH is available in two main development branches. The focus of development has switched to versions tracking GHC releases, currently GHC 6.8, and the development version is available upon request to the GpH mailing list (see the GpH web site).

The stable branch (GUM-4.06, based on GHC-4.06) is available for RedHat-based Linux machines, from the GHC CVS repository via the tag gum-4-06. Our main hardware platforms are Intel-based Beowulf clusters. Ports to other architectures are also progressing (and available on request): a port to a Mosix cluster has been built in the Metis project at Brooklyn College, with a first version available on request from Murray Gross.

Further reading

GpH Home Page: http://www.macs.hw.ac.uk/~dsg/gph/

Stable branch binary snapshot: ftp://ftp.macs.hw.ac.uk/pub/gph/gum-4.06-snap-i386-unknown-linux.tar

Stable branch installation instructions: ftp://ftp.macs.hw.ac.uk/pub/gph/README.GUM

Contact

<gph at macs.hw.ac.uk>, <mgross at dorsai.org>

Description

Eden has been jointly developed by two groups at Philipps-Universität Marburg, Germany, and Universidad Complutense de Madrid, Spain. The project has been ongoing since 1996. Currently, the team consists of the following people: in Madrid: Ricardo Peña, Yolanda Ortega-Mallén, Mercedes Hidalgo, Fernando Rubio, Clara Segura, Alberto Verdejo; in Marburg: Rita Loogen, Jost Berthold, Steffen Priebe, Mischa Dieterle.

Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling. Eden's main constructs are process abstractions and process instantiations. The function process :: (a -> b) -> Process a b embeds a function of type a -> b into a process abstraction of type Process a b which, when instantiated, will be executed in parallel. Process instantiation is expressed by the predefined infix operator ( # ) :: Process a b -> a -> b. Higher-level coordination is achieved by defining skeletons, ranging from a simple parallel map to sophisticated replicated-worker schemes. They have been used to parallelise a set of non-trivial benchmark programs.

Survey and standard reference

Rita Loogen, Yolanda Ortega-Mallén, and Ricardo Peña: Parallel Functional Programming in Eden, Journal of Functional Programming 15(3), 2005, pages 431–475.

Implementation

A major revision of the parallel Eden runtime environment for GHC 6.8.1 is available on request. Support for Glasgow parallel Haskell (GpH) is currently being added to this version of the runtime environment. The plan for the future is to maintain a common parallel runtime environment for Eden, GpH, and other parallel Haskells.
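To show how the two primitives compose, here is a hedged sketch of the simplest skeleton, a parallel map. The definitions of Process, process, and ( # ) below are sequential stand-ins so the code compiles with plain GHC; under the Eden compiler the primitives are built in and each instantiation creates a real parallel process. The name parMapEden is invented here for illustration.

```haskell
-- Sequential stand-ins for Eden's primitives, so this compiles with
-- plain GHC; under Eden, each instantiation creates a parallel process.
newtype Process a b = Process (a -> b)

-- Embed a function into a process abstraction.
process :: (a -> b) -> Process a b
process = Process

-- Process instantiation: apply a process abstraction to an argument.
( # ) :: Process a b -> a -> b
(Process f) # x = f x

-- The simplest skeleton: one process per list element.
parMapEden :: (a -> b) -> [a] -> [b]
parMapEden f = map (process f #)
```

Under Eden, evaluating parMapEden f xs creates one child process per element, with inputs and results communicated automatically.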
Recent and Forthcoming Publications

Mischa Dieterle: Parallel functional implementation of Master-Worker-Skeletons, Diploma Thesis, Philipps-Universität Marburg, October 2007 (in German).

Jost Berthold, Mischa Dieterle, Rita Loogen: Functional Implementation of a Distributed Work Pool Skeleton, submitted.

Jost Berthold, Mischa Dieterle, Rita Loogen, Steffen Priebe: Hierarchical Master-Worker Skeletons, Practical Aspects of Declarative Languages (PADL) 2008, San Francisco, USA, January 2008, LNCS, Springer, to appear.

Jost Berthold, Abyd Al-Zain, and Hans-Wolfgang Loidl: Adaptive High-Level Scheduling in a Generic Parallel Runtime Environment, Practical Aspects of Declarative Languages (PADL) 2008, San Francisco, USA, January 2008, LNCS, Springer, to appear.

Jost Berthold and Rita Loogen: Visualising Parallel Functional Program Runs – Case Studies with the Eden Trace Viewer, Parallel Computing: Architectures, Algorithms and Applications, Proceedings of the International Conference ParCo 2007, NIC Series, to appear.

Mercedes Hidalgo-Herrero and Yolanda Ortega-Mallén: To be or not to be … lazy, in Draft Proceedings of the 19th Intl. Symposium on the Implementation of Functional Languages (IFL 2007), University of Kent, Canterbury (UK), 2007.

A. de la Encina, L. Llana, F. Rubio, M. Hidalgo-Herrero: Observing Intermediate Structures in a Parallel Lazy Functional Language, 9th International ACM-SIGPLAN Symposium on Principles and Practice of Declarative Programming (PPDP'07), ACM Press, 2007, pages 111–120.

Mercedes Hidalgo-Herrero, Alberto Verdejo, Yolanda Ortega-Mallén: Using Maude and its strategies for defining a framework for analyzing Eden semantics, WRS 2006 (6th International Workshop on Reduction Strategies in Rewriting and Programming), Electronic Notes in Theoretical Computer Science, Volume 174, Issue 10, pages 119–137 (July 2007).

A. de la Encina, I. Rodriguez, F. Rubio: Testing Speculative Work in a Lazy/Eager Parallel Functional Language, LCPC'05, LNCS 4339, Springer, 2007.

Further reading

http://www.mathematik.uni-marburg.de/~eden

3.3 Type System/Program Analysis

3.3.1 Free Theorems for Haskell

Report by: Janis Voigtländer
Participants: Sascha Böhme and Florian Stenger

Free theorems are statements about program behavior derived from (polymorphic) types. Their origin is the polymorphic lambda calculus, but they have also been applied to programs in more realistic languages like Haskell. Since there is a semantic gap between the original calculus and modern functional languages, the underlying theory (of relational parametricity) needs to be refined and extended. We aim to provide such new theoretical foundations, as well as to apply the theoretical results to practical problems. For recent application papers, see "Proving Correctness via Free Theorems: The Case of the destroy/build-Rule" (PEPM'08) and "Much Ado about Two: A Pearl on Parallel Prefix Computation" (POPL'08).

Also on the practical side, Sascha Böhme implemented a library and tool for generating free theorems from Haskell types. Downloadable source and a web interface are accessible at http://linux.tcs.inf.tu-dresden.de/~voigt/ft. Features include:

three different language subsets to choose from

equational as well as inequational free theorems

relational free theorems as well as specializations down to function level

support for algebraic data types, type synonyms and renamings, and type classes

While the web interface is restricted to algebraic data types, type synonyms, and type classes from Haskell standard libraries, a shell-based application contained in the source package also enables the user to declare their own algebraic data types and so on, and then to derive free theorems from types involving those.

Further reading

http://wwwtcs.inf.tu-dresden.de/~voigt/project/
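As a small illustration of the kind of statement involved (a standard textbook instance, not output copied from the tool): the free theorem for reverse :: [a] -> [a], specialised down to the function level, says that reverse commutes with map, for every function f and list xs. This can be checked directly:

```haskell
-- The free theorem for reverse :: [a] -> [a], specialised to functions:
-- for every f, map f . reverse = reverse . map f.
-- The theorem follows from the type alone, for any function of that type.
freeTheoremReverse :: Eq b => (a -> b) -> [a] -> Bool
freeTheoremReverse f xs = map f (reverse xs) == reverse (map f xs)
```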

3.3.2 Agda

Report by: Nils Anders Danielsson
Status: actively developed by a number of people

Do you crave highly expressive types, but do not want to resort to type-class hackery? Then Agda might provide a view of what the future has in store for you. Agda is a dependently typed functional programming language (developed using Haskell). The language has inductive families, i.e. GADTs which can be indexed by values and not just types. Other goodies include parameterised modules, mixfix operators, and an interactive Emacs interface (the type checker can assist you in the development of your code). A lot of work remains in order for Agda to become a full-fledged programming language (effects, good libraries, mature compilers, documentation, …), but already in its current state it can provide lots of fun as a platform for experiments in dependently typed programming.

Further reading

The Agda Wiki: http://www.cs.chalmers.se/~ulfn/Agda/
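To relate inductive families to Haskell's GADTs: a GADT can only be indexed by types, so a value index must first be reflected at the type level. A hedged sketch of the standard length-indexed vector example (the type-level naturals Z and S are an encoding introduced here for illustration; in Agda the index would simply be a natural number value):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level naturals, reflecting values as types; Agda's inductive
-- families index by actual values, removing the need for this encoding.
data Z
data S n

-- Vectors indexed by their length.
data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- Safe head: the index rules out the empty vector at compile time.
vhead :: Vec (S n) a -> a
vhead (Cons x _) = x
```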

3.3.3 Epigram

Epigram is a prototype dependently typed functional programming language, equipped with an interactive editing and typechecking environment. High-level Epigram source code elaborates into a dependent type theory based on Zhaohui Luo's UTT. The definition of Epigram, together with its elaboration rules, may be found in 'The view from the left' by Conor McBride and James McKinna (JFP 14 (1)). A new version, Epigram 2, based on Observational Type Theory (see 'Observational Equality, Now!' by Thorsten Altenkirch, Conor McBride, and Wouter Swierstra) is in preparation.

Motivation

Simply typed languages have the property that any subexpression of a well-typed program may be replaced by another of the same type. Such type systems may guarantee that your program won't crash your computer, but the simple fact that True and False are always interchangeable inhibits the expression of stronger guarantees. Epigram is an experiment in freedom from this compulsory ignorance.

Specifically, Epigram is designed to support programming with inductive datatype families indexed by data. Examples include matrices indexed by their dimensions, expressions indexed by their types, and search trees indexed by their bounds. In many ways, these datatype families are the progenitors of Haskell's GADTs, but indexing by data provides both a conceptual simplification – the dimensions of a matrix are numbers – and a new way to allow data to stand as evidence for the properties of other data. It is no good representing sorted lists if comparison does not produce evidence of ordering. It is no good writing a type-safe interpreter if one's typechecking algorithm cannot produce well-typed terms. Programming with evidence lies at the heart of Epigram's design. Epigram generalises constructor pattern matching by allowing types resembling induction principles to express how the inspection of data may affect both the flow of control at run time and the text and type of the program in the editor.
Epigram extracts patterns from induction principles, and induction principles from inductive datatype families.

History

James McKinna and Conor McBride designed Epigram in 2001, whilst based at Durham, working with Zhaohui Luo and Paul Callaghan. McBride's prototype implementation of the language, 'Epigram 1', emerged in 2004: it is implemented in Haskell, interfacing with the xemacs editor. This implementation effort involved inventing a number of new programming techniques which have found their way into the Haskell community at large: central components of Control.Applicative and Data.Traversable started life in the source code for Epigram. Following the Durham diaspora, James McKinna and Edwin Brady went to St Andrews, where they continued their work on phase analysis and efficient compilation of dependently typed programs. More recently, with Kevin Hammond, they have been studying applications of dependent types to resource-aware computation in general, and network protocols in particular. Meanwhile, Conor McBride went to Nottingham to work with Thorsten Altenkirch. They set about redesigning Epigram's underlying type theory, radically changing its treatment of logical propositions in general, and equality in particular, making significant progress on problems which have beset dependent type theories for decades. The Nottingham duo grew into a strong team of enthusiastic researchers. Peter Morris successfully completed a PhD on generic programming in Epigram and is now a research assistant: his work has led to the redesign of Epigram's datatype language. Nicolas Oury joined from Paris as a postdoctoral research fellow, and is now deeply involved in all aspects of design and implementation. PhD students James Chapman and Wouter Swierstra are working on Epigram-related topics, studying formalized metatheory and effectful programming, respectively.
Meanwhile, Nottingham research on containers, involving Neil Ghani, Peter Hancock, and Rawle Prince, together with the Epigram team, continues to inform design choices as the language evolves. Epigram 1 was used successfully by Thorsten Altenkirch, Conor McBride, and Peter Hancock in an undergraduate course on Computer Aided Formal Reasoning (http://www.e-pig.org/darcs/g5bcfr/). It has also been used in a number of graduate-level courses. James McKinna is now at Radboud University, Nijmegen; Edwin Brady is still at St Andrews; Thorsten Altenkirch, Peter Morris, Nicolas Oury, James Chapman, and Wouter Swierstra are still in Nottingham; Conor McBride has left academia. All are still contributing to the Epigram project.

Current Status

Epigram 2 is based on a radical redesign of our underlying type theory. The main novelties are:

a bidirectional approach to typechecking, separating syntactically the terms whose types are inferred from those for which types are pushed in – with stronger guarantees of prior type information, we can reduce clutter in terms and support greater overloading;

explicit separation of propositions and sets, ensuring that proofs never influence control flow and can be erased at run time;

a type-directed approach to propositional equality, comparing functions extensionally, records componentwise, data by construction, and proofs trivially – we shall soon support equality for codata by bisimulation and for quotients by whatever you want;

three closed universes of data structures – finite enumerations, record types, and inductive datatypes – each with its datatype of type descriptions; this supports generic programming over all of Epigram 2's data structures and removes the need for any means of 'making new stuff' other than definition.

Nicolas Oury, Peter Morris, and Conor McBride have implemented this theory, together with a system supporting interactive construction (and destruction) within it. This is the engine which will drive Epigram 2: we plan to equip it with human-accessible controls and release it for the benefit of the curious, shortly. With this in place, we shall reconstruct the Epigram source language and its elaboration mechanism: constructs in source become constructions in the core. There is still a great deal of work to do. We need to incorporate the work from Edwin Brady and James McKinna on type erasure and efficient compilation; we need to bring out and exploit the container structure of data; we need to support programming with effects (including non-termination); and we need a declarative proof language, as well as a functional programming language. The Epigram project relies on Haskell, its libraries, and tools such as alex (→5.2.1), happy (→5.2.2), bnfc, cabal (→4.1.1), and darcs (→6.13). We have recently developed tools for assembling the modules corresponding to each component of the Epigram system from files corresponding to each feature of the Epigram language: this may prove useful to others, so we hope to clean them up and release them.
Meanwhile, as Haskell itself edges ever closer to dependent types, the Epigram project has ever more to contribute, in exploration of the design space, in the development of implementation technique, and in experimentation with the pragmatics of programming with such power and precision.

Epigram source code and related research papers can be found on the web at http://www.e-pig.org, and its community of experimental users communicates via the mailing list <epigram at durham.ac.uk>. The current, rapidly evolving state of Epigram 2 can be found at http://www.e-pig.org/epilogue/.

3.3.4 Chameleon project

Report by: Martin Sulzmann

Chameleon is a Haskell-style language which integrates sophisticated reasoning capabilities into a programming language via its CHR-programmable type system. Thus, we can program novel type system applications in terms of CHRs which previously required special-purpose systems. Chameleon, including examples and documentation, is available via http://taichi.ddns.comp.nus.edu.sg/taichiwiki/ChameleonHomePage.

3.3.5 XHaskell

XHaskell is an extension of Haskell which combines parametric polymorphism, algebraic data types, and type classes with XDuce-style regular expression types, subtyping, and regular expression pattern matching. The latest version can be downloaded via http://taichi.ddns.comp.nus.edu.sg/taichiwiki/XhaskellHomePage.

Latest developments

We have fully implemented the system, which can be used in combination with the Glasgow Haskell Compiler. We have taken care to provide meaningful type error messages in case the static checking of programs fails. Our system also allows deferring some static checks until run time. We make use of GHC-as-a-library so that the XHaskell programmer can easily integrate her programs into existing applications and take advantage of the many libraries available for GHC. We also provide a convenient interface to the HaXml parser.

3.3.6 HaskellJoin

Report by: Martin Sulzmann
Participants: Edmund S. L. Lam and Martin Sulzmann

HaskellJoin extends Haskell with Join-calculus-style concurrency primitives. The novelty lies in the addition of guards and propagated join patterns. These additional features prove to be highly useful. See for details: http://taichi.ddns.comp.nus.edu.sg/taichiwiki/HaskellJoinRules.

Latest developments

We have implemented a prototype in STM Haskell. Experimental results show that we can achieve significant speed-ups on multi-core architectures (more cores = programs run faster). HaskellJoin subsumes in expressive power "ADOM: Agent Domain of Monads", which is no longer supported.

3.3.7 Uniqueness Typing

Report by: Edsko de Vries
Participants: Rinus Plasmeijer, David M. Abrahamson
Status: ongoing

An important feature of pure functional programming languages is referential transparency. A consequence of referential transparency is that functions cannot be allowed to modify their arguments, unless it can be guaranteed that they have the sole reference to that argument. This is the basis of uniqueness typing. We have been developing a uniqueness type system based on that of the language Clean, but with various improvements: no subtyping is required, and the type language does not include constraints (types in Clean often involve implications between uniqueness attributes). This makes the type system sufficiently similar to standard Hindley/Milner type systems that (1) standard inference algorithms can be applied, and (2) modern extensions such as arbitrary-rank types and generalized algebraic data types (GADTs) can easily be incorporated. Although our type system is developed in the context of the language Clean, it is also relevant to Haskell because the core uniqueness type system we propose is very similar to Haskell's core type system. Moreover, we are currently working on defining syntactic conventions which programmers can use to write type annotations, and compilers can use to report types, without mentioning uniqueness at all.

Further reading

Edsko de Vries, Rinus Plasmeijer, and David Abrahamson: "Equality-Based Uniqueness Typing". Presented at TFP 2007, submitted for the post-proceedings.

Edsko de Vries, Rinus Plasmeijer and David Abrahamson, “Uniqueness Typing Redefined”, in Z. Horvath, V. Zsok, and Andrew Butterfield (Eds.): IFL 2006, LNCS 4449 (to appear).

3.4 Backend

3.4.1 The Reduceron

Report by: Matthew Naylor
Participants: Colin Runciman, Neil Mitchell
Status: experimental

The Reduceron is a prototype of a special-purpose graph reduction machine, built using an FPGA, featuring:

parallel, dual-port, quad-word, stack, heap, and combinator memories

two-cycle n-ary application node unwinding (where n <= 8)

octo-instantiation (8 words per cycle) of supercombinator bodies

parallel instantiation of combinator spine to heap and stack

The Reduceron is an extremely simple machine, containing just four instructions, and executes core Haskell almost directly. The translator from Yhc.Core to Reduceron bytecode and the FPGA machine are both implemented in Haskell, the latter using Lava. Other notable differences since the initial release of the Reduceron are:

Performs supercombinator (not SK) reduction, with data types encoded as functions, inspired by Jan Martin Jansen's SAPL interpreter

Uses entirely on-chip memories on a Xilinx Virtex-II FPGA

Has a garbage collector (in hardware)

Includes a basic bytecode interpreter written in C, which is competitive with the nhc98 compiler on a small set of examples

Includes Lava support for multi-output primitives and Xilinx block RAMs

Includes three new Lava modules: CircLib.hs, a prelude of common circuits; CascadedRam.hs, for constructing RAMs of various widths and sizes; and RTL.hs, for writing register-transfer-level descriptions.

The URL below links to the latest code, details, and results of the Reduceron experiment.

Further reading

http://www.cs.york.ac.uk/~mfn/reduceron2/

4 Libraries

4.1 Packaging and Distribution

4.1.1 Cabal and HackageDB

Report by: Duncan Coutts

Background

The Haskell Cabal is a Common Architecture for Building Applications and Libraries. It is an API distributed with GHC (→2.1), nhc98, and Hugs which allows a developer to easily build and distribute packages. HackageDB (Haskell Package Database) is an online database of packages which can be interactively queried via the website and by client-side software such as cabal-install. From HackageDB, an end user can download and install Cabal packages.

Recent progress

The last year has seen HackageDB take off. It has grown from a handful of packages to over 300. It has also seen the release of a major new version of the Cabal library – the 1.2.x series – which is bundled with recent GHC versions. This release was a big step forward in terms of new features, fewer rough edges, and improved internal design.

Growing pains

The rapid growth of the HackageDB collection has highlighted some problems. There is now a lot of choice in packages but relatively little information to help users decide which package they want or whether it is likely to build on their platform. Another problem is having to manually download and build packages and their dependencies. Fortunately this problem has a solution in the form of the command-line tool cabal-install, which has become increasingly usable in the last few months. The plan is for cabal-install to be the primary command-line interface to Cabal and HackageDB, replacing runhaskell Setup.lhs and other cabal-* wrappers you may have heard of. Everyone is encouraged to preview this bright new future by trying the latest development versions of the Cabal library and the cabal-install tool.

Looking forward

There is a great deal to do. The Cabal library needs a proper dependency framework. There are many good ideas for technical and social solutions to the current problems with HackageDB.
Unfortunately, for something that is now a vital piece of community infrastructure, there are relatively few people working on the solutions. We would like to encourage people to get involved: join the development mailing list, get the code, and check the bug tracker for what needs to be done. Even if you do not have time for hacking, you probably have a favourite Cabal bug or limitation. Do not just assume it is well known. Make sure it is properly described on the bug tracker and add yourself to the cc list so Cabal hackers can get some impression of priorities.

People

Cabal has seen contributions from 39 people in the three and a half years since Isaac Jones started the project. By simplistically counting patches we see that 90% of the code is by the top 8 contributors, who have 50 or more patches each. 5% is by the next 5 most active contributors, with 10 or more patches each. Contributions from a further 26 people make up the remaining 5%.

Further reading

Cabal homepage http://www.haskell.org/cabal

HackageDB package collection http://hackage.haskell.org/

Bug tracker http://hackage.haskell.org/trac/hackage/

4.2 General libraries

4.2.1 HPDF

Report by: alpheccar
Status: continuous development

HPDF is a Haskell library for generating PDF documents. It supports several features of the PDF standard, such as outlines, multiple pages, annotations, actions, image embedding, shapes, patterns, and text. In addition to the standard PDF features, HPDF provides some typesetting features built on top of the PDF core: it is possible to define complex styles for sentences and paragraphs. HPDF implements an optimum-fit line-breaking algorithm somewhat like the TeX one, and uses the standard Liang hyphenation algorithm. HPDF is at version 1.3 and is progressing continuously. It is available on Hackage (→4.1.1). Several features are still missing: the only supported fonts are the standard PDF ones. A next version should support TrueType fonts and different character encodings. For support of Asian languages, I will ask for help in the Haskell community. I also plan to define an API easing the definition of complex layouts (slides, books); currently the layout has to be coded by hand, but it is already possible to build complex things. The documentation is a bit weak and will have to be improved.

Further reading

http://www.alpheccar.org

4.2.2 The Neon Library

Report by: Jurriaan Hage

As part of his master's thesis work, Peter van Keeken implemented a library to data-mine logged Helium (→2.3) programs, in order to investigate aspects of how students program Haskell, how they learn to program, and how good Helium is at generating understandable feedback and hints. The software can be downloaded from http://www.cs.uu.nl/wiki/bin/view/Hage/Neon, which also gives some examples of output generated by the system. The downloads only contain a small sample of loggings, but this will allow programmers to play with it.

4.2.3 IOSpec

The Test.IOSpec library provides a pure specification of several functions in the IO monad. This may be of interest to anyone who wants to debug, reason about, analyse, or test impure code. The Test.IOSpec library is essentially a drop-in replacement for several other modules, most notably Data.IORef and (most of) Control.Concurrent. Once you are satisfied that your functions are reasonably well behaved with respect to the pure specification, you can drop the Test.IOSpec import in favour of the "real" IO modules. The current release is described in a recent Haskell Workshop paper. The development version in the darcs repository, however, supports several exciting new features, including a modular way to combine specifications and a specification of STM. I have used Test.IOSpec to test and debug several substantial programs, such as a distributed Sudoku solver. If you use Test.IOSpec for anything useful at all, I'd love to hear from you.

Further reading

http://www.cs.nott.ac.uk/~wss/repos/IOSpec/
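The drop-in idea can be illustrated with a small program written against the IORef interface (the program below is an invented example, not taken from the library's documentation): because Test.IOSpec mirrors the Data.IORef interface, the same code can be checked against the pure specification by changing only the import.

```haskell
import Data.IORef  -- for testing, swap this import for Test.IOSpec's pure model

-- A small stateful computation using only the IORef interface.
tick :: IORef Int -> IO Int
tick r = do
  modifyIORef r (+1)
  readIORef r

-- Create a counter and tick it twice.
runTwice :: IO Int
runTwice = do
  r <- newIORef 0
  _ <- tick r
  tick r
```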

4.2.4 GSLHaskell

GSLHaskell is a simple library for linear algebra and numerical computation, internally implemented using GSL, BLAS, and LAPACK. A new version with important changes has recently been released. The internal code has been rewritten, based on an improved matrix representation. The interface is now simpler and more generic. It works on Linux, Windows, and Mac OS X. The library is available from HackageDB (→4.1.1) under the new name "hmatrix" (because only a small part of GSL is currently available, and matrix computations are based on LAPACK). Most linear algebra functions mentioned in GNU Octave's Quick Reference are already available, both for real and complex matrices: eig, svd, chol, qr, hess, schur, inv, pinv, expm, norm, and det. There are also functions for numeric integration and differentiation, nonlinear minimization, polynomial root finding, and more than 200 GSL special functions. A brief manual is available at the URL below. This library is used in the easyVision project (→6.21).

Further reading

http://alberrto.googlepages.com/gslhaskell

4.2.5 An Index-Aware Linear Algebra Library

Report by: Frederik Eaton
Status: unstable; actively maintained

The index-aware linear algebra library is a Haskell interface to a set of common vector and matrix operations. The interface exposes index types to the type system so that operand conformability can be statically guaranteed. For instance, an attempt to add or multiply two incompatibly sized matrices is a static error. The library should still be considered alpha quality. A backend for sparse vector types is near completion, which allows low-overhead "views" of tensors as arbitrarily nested vectors. For instance, a matrix, which we represent as a tuple-indexed vector, could also be seen as a (rank-1) vector of (rank-1) vectors. These different views usually produce different behaviours under common vector operations, thus increasing the expressive power of the interface.

Further reading

Original announcement: http://article.gmane.org/gmane.comp.lang.haskell.general/13561

Library: http://ofb.net/~frederik/stla/
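The idea of exposing index types can be sketched with phantom types; all names below are hypothetical illustrations, not the library's actual API. A vector carries its size as a type parameter, so adding incompatibly sized vectors is rejected at compile time.

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Hypothetical sketch (not the library's API): sizes as phantom types.
data Two
data Three

newtype Vec n = Vec [Double] deriving Show

vec2 :: Double -> Double -> Vec Two
vec2 x y = Vec [x, y]

-- Addition requires both operands to share the same size index n;
-- adding a Vec Two to a Vec Three is a compile-time type error.
addV :: Vec n -> Vec n -> Vec n
addV (Vec xs) (Vec ys) = Vec (zipWith (+) xs ys)
```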

4.3 Parsing and Transforming

4.3.1 Graph Parser Combinators

Report by: Steffen Mazanek
Status: research prototype

A graph language can be described by a graph grammar, in a manner similar to a string grammar known from the theory of formal languages. Unfortunately, graph parsing is known to be computationally expensive in general. There are even context-free graph languages whose parsing is NP-complete. Therefore we have developed the Haskell library graph parser combinators, a new approach to graph parsing inspired by the well-known string parser combinators. The basic idea is to define primitive graph parsers for elementary graph components and a set of combinators for the construction of more advanced graph parsers. Using graph parser combinators, efficient special-purpose graph parsers can be composed conveniently, in a manner familiar to Haskell programmers. The following features are already implemented:

a module PolyStateSet, an extension of PolyState from the polyparse library that can deal with sets of tokens

graph type declarations for several purposes

graph parser combinators for important graph patterns

parsers for several example graph languages

for comparison, a general-purpose parser for hyperedge replacement graph grammars

The library will soon be provided via Hackage (→4.1.1).
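For readers unfamiliar with the string-combinator model being generalised here, a minimal sketch (independent of the graph library itself): a parser is a function from remaining input to a list of results, and combinators build larger parsers from smaller ones. The graph combinators transfer this idea from token sequences to sets of graph components.

```haskell
-- A minimal model of string parser combinators; the graph parser
-- combinators generalise this from token sequences to graphs.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

-- Consume a single character.
item :: Parser Char
item = Parser (\s -> case s of { [] -> []; (c:cs) -> [(c, cs)] })

-- Succeed without consuming input.
ret :: a -> Parser a
ret x = Parser (\s -> [(x, s)])

-- Try both alternatives, collecting all parses (ambiguity is kept).
orElse :: Parser a -> Parser a -> Parser a
orElse p q = Parser (\s -> runParser p s ++ runParser q s)

-- Sequencing: run p, then feed each remaining input into f's parser.
andThen :: Parser a -> (a -> Parser b) -> Parser b
andThen p f =
  Parser (\s -> concat [runParser (f x) s' | (x, s') <- runParser p s])
```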

4.3.2 uniplate

Report by: Neil Mitchell

Uniplate is a boilerplate removal library, with similar goals to the original Scrap Your Boilerplate work. It requires fewer language extensions, and allows more succinct traversals with higher performance than SYB. A paper including many examples was presented at the Haskell Workshop 2007. If you are writing a compiler, or any program that operates over values with many constructors and nested types, you should be using a boilerplate removal library. This library provides a gentle introduction to the field, and can be used practically to achieve substantial savings in code size and maintainability.

Further reading

Homepage: http://www-users.cs.york.ac.uk/~ndm/uniplate

4.3.3 InterpreterLib Report by: Jennifer Streb Participants: Garrin Kimmell, Nicolas Frisby, Mark Snyder, Philip Weaver, Jennifer Streb, Perry Alexander Maintainer: Garrin Kimmell, Nicolas Frisby Status: beta, actively developed The InterpreterLib library is a collection of modules for constructing composable, monadic interpreters in Haskell. The library provides a collection of functions and type classes that implement semantic algebras in the style of Hutton and Duponcheel. Datatypes for related language constructs are defined as non-recursive functors and composed using a higher-order sum functor. The full AST for a language is the least fixed point of the sum of its constructs’ functors. To denote a term in the language, a sum algebra combinator composes algebras for each construct functor into a semantic algebra suitable for the full language and the catamorphism introduces recursion. Another piece of InterpreterLib is a novel suite of algebra combinators conducive to monadic encapsulation and semantic re-use. The Algebra Compiler, an ancillary preprocessor derived from polytypic programming principles, generates functorial boilerplate Haskell code from minimal specifications of language constructs. As a whole, the InterpreterLib library enables rapid prototyping and simplified maintenance of language processors. InterpreterLib is available for download at the link provided below. Version 1.0 of InterpreterLib was released in April 2007. Further reading http://www.ittc.ku.edu/Projects/SLDG/projects/project-InterpreterLib.htm Contact <nfrisby at ittc.ku.edu>
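The underlying construction, non-recursive functors combined by a sum functor, with algebras combined and run by the catamorphism, can be sketched in a few lines (illustrative names, not the library's actual API):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Language constructs as non-recursive functors; the full AST is the
-- fixed point of their sum.
newtype Fix f = In (f (Fix f))

data ValF e = ValF Int    deriving Functor
data AddF e = AddF e e    deriving Functor
data Sum f g e = L (f e) | R (g e) deriving Functor

type Expr = Fix (Sum ValF AddF)

-- The catamorphism introduces recursion over the fixed point.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In t) = alg (fmap (cata alg) t)

-- One semantic algebra per construct ...
valAlg :: ValF Int -> Int
valAlg (ValF n) = n

addAlg :: AddF Int -> Int
addAlg (AddF x y) = x + y

-- ... combined by a sum-algebra combinator into an algebra for the
-- whole language.
(\/) :: (f a -> a) -> (g a -> a) -> Sum f g a -> a
(fAlg \/ _) (L x) = fAlg x
(_ \/ gAlg) (R y) = gAlg y

eval :: Expr -> Int
eval = cata (valAlg \/ addAlg)

-- Smart constructors hiding the injections.
val :: Int -> Expr
val n = In (L (ValF n))

add :: Expr -> Expr -> Expr
add x y = In (R (AddF x y))
```

Adding a new construct means adding a functor and an algebra, without touching the existing ones; this is the boilerplate that the library's Algebra Compiler generates from minimal specifications.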

4.3.4 hscolour Report by: Malcolm Wallace Status: stable, maintained HsColour is a small command-line tool (and Haskell library) that syntax-colorises Haskell source code for multiple output formats. It consists of a token lexer, classification engine, and multiple separate pretty-printers for the different formats. Currently supported output formats are ANSI terminal codes, HTML (with or without CSS), and LaTeX. In all cases, the colours and highlight styles (bold, underline, etc.) are configurable. It can additionally place HTML anchors in front of declarations, to be used as the target of links you generate in Haddock documentation. HsColour is widely used to make source code in blog entries prettier, to generate library documentation on the web, and to improve the readability of GHC’s intermediate-code debugging output. Further reading http://www.cs.york.ac.uk/fp/darcs/hscolour

4.3.5 Utrecht Parsing Library and Attribute Grammar System Report by: Doaitse Swierstra and Jeroen Fokker Status: released as Cabal packages The Utrecht attribute grammar system has been extended: the attribute flow analysis has been completely implemented by Joost Verhoog, and it is now possible to generate visit-function based evaluators, which are much faster and use less space. We assume that such functions are strict in all their arguments, and generate the appropriate `seq` calls to make GHC aware of this. As a result, `case`s are also generated instead of `let`s wherever possible. Several improvements were made: better error reporting of cyclic dependencies, and large speed improvements in the overall flow analysis. The first versions of the EHC now compile without circularities, neither direct nor induced by fixing the attribute evaluation orders

we are adding better support for higher-order attribute grammars and forwarding rules

The error correcting strategies of the parser combinators are now being used as a basis for providing automatic feedback in systems for training strategies (Johan Jeuring, Arthur van Leeuwen)

a start has been made with providing Haddock information with the code of the parser combinators

we plan to enhance the parser combinators with a second basic parsing engine, in order to support monadic uses of the combinators while keeping the error correcting capabilities. The software is again available through the Haskell Utrecht Tools page (http://www.cs.uu.nl/wiki/HUT/WebHome).

4.3.6 X-SAIGA The goal of the X-SAIGA project is to create algorithms and implementations which enable language processors (recognizers, parsers, interpreters, translators, etc.) to be constructed as modular and efficient embedded eXecutable SpecificAtIons of GrAmmars. To achieve modularity, we have chosen to base our algorithms on top-down parsing. To accommodate ambiguity, we implement inclusive choice through backtracking search. To achieve polynomial complexity, we use memoization. We have developed an algorithm which accommodates direct left recursion using curtailment of search. Indirect left recursion is also accommodated using curtailment together with a test to determine whether previously computed and memoized results may be reused, depending on the context in which they were created and the context in which they are being considered for reuse. The algorithm is described more fully in Frost, R., Hafiz, R. and Callaghan, P. (2007) Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars. Proceedings of the 10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE, pages 109–120, June 2007, Prague. http://cs.uwindsor.ca/~hafiz/iwpt-07.pdf We have implemented our algorithms, at various stages of their development, in Miranda (up to 2006) and in Haskell (from 2006 onwards). A description of a Haskell implementation of our 2007 algorithm can be found in Frost, R., Hafiz, R. and Callaghan, P. (2008) Parser Combinators for Ambiguous Left-Recursive Grammars. Proceedings of the 10th International Symposium on Practical Aspects of Declarative Languages (PADL), to be published in LNCS, January 2008, San Francisco, USA. http://cs.uwindsor.ca/~hafiz/PADL_PAPER_FINAL.pdf The X-SAIGA website contains more information, links to other publications, proofs of termination and complexity, and Haskell code of the development version.
http://cs.uwindsor.ca/~hafiz/proHome.html We are currently extending our algorithm and implementation to accommodate executable specifications of fully general attribute grammars.
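The starting point, inclusive choice via the list-of-successes technique, can be sketched as follows. This is a plain unmemoized version; the project's contribution is the memoization and curtailment that make parsing polynomial and left-recursion-safe, which this sketch omits:

```haskell
-- A parser returns *all* ways of consuming a prefix of the input.
type Parser a = String -> [(a, String)]

term :: Char -> Parser Char
term c (x:xs) | x == c = [(c, xs)]
term _ _               = []

succeedP :: a -> Parser a
succeedP v inp = [(v, inp)]

-- Inclusive choice: keep the results of *both* alternatives.
(<+>) :: Parser a -> Parser a -> Parser a
(p <+> q) inp = p inp ++ q inp

-- Sequencing in applicative style.
(<.>) :: Parser (a -> b) -> Parser a -> Parser b
(p <.> q) inp = [ (f a, rest') | (f, rest) <- p inp, (a, rest') <- q rest ]

fmapP :: (a -> b) -> Parser a -> Parser b
fmapP f p inp = [ (f a, r) | (a, r) <- p inp ]

-- Ambiguity in action: s ::= 'a' s | empty.  Every prefix of a run of
-- 'a's is a valid parse, so multiple results come back.  (This grammar
-- is right-recursive; a left-recursive one would need the curtailment
-- machinery described above.)
s :: Parser Int          -- result: number of 'a's consumed
s = (fmapP (\_ n -> n + 1) (term 'a') <.> s) <+> succeedP 0
```

Without memoization the number of explored alternatives can explode; tabulating each nonterminal's results per input position is what brings the search down to polynomial time.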

4.4 System

4.4.1 hspread Report by: Andrea Vezzosi Participants: Andrea Vezzosi, Jeff Muller Status: active hspread is a client library for the Spread toolkit. It is fully implemented in Haskell, using the binary package (→4.7.1) for fast parsing of network packets. Its aim is to make it easier to implement correct distributed applications by taking advantage of the guarantees granted by Spread, such as reliable, totally ordered messages, and it supports the most recent version of the protocol. There is interest in further developing a higher-level framework for Haskell distributed programming, extending the protocol if necessary. Further reading Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hspread

Development version: darcs get http://happs.org/repo/hspread

Spread homepage: http://www.spread.org

4.4.2 Harpy Harpy is a library for run-time code generation of IA-32 machine code. It provides not only a low level interface to code generation operations, but also a convenient domain specific language for machine code fragments, a collection of code generation combinators and a disassembler. We use it in two independent (unpublished) projects: on the one hand, we are implementing a just-in-time compiler for functional programs; on the other hand, we use it to implement an efficient type checker for a dependently typed language. It might be useful in other domains where specialized code generated at run time can improve performance. Harpy’s implementation makes use of the foreign function interface, but only contains functions written in Haskell. Moreover, it makes use of other interesting Haskell extensions, for example multi-parameter type classes to provide an in-line assembly language, and Template Haskell to generate stub functions to call run-time generated code. The disassembler uses Parsec to parse the instruction stream. We intend to implement supporting operations for garbage collectors cooperating with run-time generated code. A second release is forthcoming, featuring improvements in the memory management, better floating point instruction support, and named labels that are shown in the disassembler output. Further reading http://uebb.cs.tu-berlin.de/harpy/

4.4.3 hs-plugins Report by: Don Stewart Status: maintained hs-plugins is a library for dynamic loading and runtime compilation of Haskell modules, for Haskell and foreign language applications. It can be used to implement application plugins, hot swapping of modules in running applications, runtime evaluation of Haskell, and enables the use of Haskell as an application extension language. hs-plugins has been ported to GHC 6.6. Further reading Source and documentation can be found at: http://www.cse.unsw.edu.au/~dons/hs-plugins/

The source repository is available: darcs get http://www.cse.unsw.edu.au/~dons/code/hs-plugins/

4.4.4 The libpcap Binding Report by: Dominic Steinitz Participants: Greg Wright, Dominic Steinitz, Nicholas Burlett Nicholas Burlett has now created a cabalized version and made it available on hackage. However, beware that this doesn’t use autoconf to check that your system supports sa_len, and it doesn’t check which version of libpcap is installed. It will probably work but may not. If it doesn’t, then try this:

darcs get http://www.haskell.org/networktools/src/pcap

Install libpcap (I used 0.9.4), then:

autoheader
autoconf
./configure
hsc2hs Pcap.hsc
ghc -o test test.hs --make -lpcap -fglasgow-exts

All contributions are welcome, especially if you know how to get Cabal to run autoconf and check for versions of non-Haskell libraries.

4.5 Databases and data storage

4.5.1 Takusen Takusen is a library for accessing DBMSs. Like HSQL, we support arbitrary SQL statements (currently strings, extensible to anything that can be converted to a string). Takusen’s ‘unique selling point’ is safety and efficiency. We statically ensure that all acquired database resources, such as cursors, connection and statement handles, are released, exactly once, at predictable times. Takusen can avoid loading the whole result set into memory and so can handle queries returning millions of rows in constant space. Takusen also supports automatic marshalling and unmarshalling of results and query parameters. These benefits come from the design of query result processing around a left-fold enumerator. Currently we fully support Oracle, Sqlite, and PostgreSQL; ODBC support exists but is not fully tested. Since the last report we have:

- added an ODBC backend

- improved the installation process so that we can build Haddock docs with Cabal

A new release to promote the ODBC code should be forthcoming; until then interested souls can get the latest from the darcs repo. Future plans:

- complete the ODBC interface

- large object support

- MS SQL Server and Sybase interfaces, via FreeTDS

Further reading darcs get http://darcs.haskell.org/takusen/

browse docs: http://darcs.haskell.org/takusen/doc/html (see Database.Enumerator for Usage instructions and examples)
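The left-fold idea can be illustrated with a small self-contained sketch; the names and types here are invented for the example and are far simpler than Takusen's actual API:

```haskell
-- Sketch of the left-fold enumerator design: the database layer
-- drives the iteration and feeds each row to a user-supplied
-- accumulating function, so the full result set is never resident in
-- memory, and resource release happens at one predictable place (the
-- end of the fold).
type Row = [String]

-- A stand-in "query" producing rows one at a time; in a real system
-- these would come from a DBMS cursor.
fakeQuery :: [Row]
fakeQuery = [ [show i, "name" ++ show i] | i <- [1 :: Int .. 3] ]

-- The enumerator: a strict left fold over rows, with early
-- termination.  The iteratee returns Left acc to stop, Right acc to
-- continue.
foldRows :: (acc -> Row -> Either acc acc) -> acc -> [Row] -> acc
foldRows _ acc []       = acc
foldRows f acc (r : rs) = case f acc r of
  Left  done -> done
  Right acc' -> acc' `seq` foldRows f acc' rs

-- Example iteratee: count rows, stopping after the second one.
countTwo :: Int -> Row -> Either Int Int
countTwo n _ | n >= 2    = Left n
             | otherwise = Right (n + 1)
```

Because the accumulator is forced at every step, the fold runs in constant space even over a result set of millions of rows.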

4.6 Data types and data structures

4.6.1 Data.Record Report by: Claus Reinke Status: library sketch Extensible records, or the lack thereof, continue to be a popular subject of discussion. There is no lack of proposals, some implemented, some not, but there seems to be no obvious winner that would justify substantial implementation efforts, not to mention upsetting Haskell’s already featureful type system. Ever since seeing the Trex in Hugs development in Nottingham, I had been wondering about whether there were smaller aspects of extensible record systems that might be added as general features to the current type system, so that a record system could be defined on top of the extended system. The latter turned out to be almost possible with the then prevailing Hugs type system, but for commutative row constructors (label ordering should not matter) and negated type predicates (record ‘has’ label vs. record ‘lacks’ label). Since then, extensions to the HList library (→4.6.7) have demonstrated that one can abuse GHC’s type system implementation to get just enough expressiveness for defining a record system (an impressive feat), provided one is prepared to define a global ordering on record field labels. Separately, Daan Leijen’s scoped labels proposal suggested that accepting the absence of negative predicates leads to a different, but not necessarily worse record system. With all these ideas in the air, I found myself needing an extensible record system for a modular attribute grammar problem and was surprised to find an implementation of such a system within the limitations of GHC’s type system – Data.Record was born! It was based on scoped labels, but went further in providing record concatenation as well. I first posted the module Data.Record as an attachment to a Haskell’ ticket on type sharing, and to the Haskell’ list as an example of how code using only language extensions nominally supported in both GHC and Hugs would nevertheless only work in GHC, not in Hugs. 
One of the recent revivals of the extensible records discussion made me dust off that old code and add some of the features requested for alternative systems. In particular, there is now support for unscoped operations (negative predicates, no duplicate labels) and for record label permutation. The former means that this code could grow into a library supporting all the major extensible record system styles, the latter means that record code can be label-order independent without needing a global ordering on labels (a prerequisite in most other type-class-based extensible record systems). The code is not currently in a release state, being an insufficiently systematic collection of features from several systems, but usable, with examples, and if there was sufficient interest, I could try getting it more organised. Please let me know if you like what is there sufficiently to warrant such an effort. Further reading http://www.cs.kent.ac.uk/~cr3/toolbox/haskell/#records

4.6.2 Data.ByteString Data.ByteString provides packed strings (byte arrays held by a ForeignPtr), along with a list interface to these strings. It lets you do extremely fast IO in Haskell; in some cases, even faster than typical C implementations, and much faster than [Char]. It uses a flexible “foreign pointer” representation, allowing the transparent use of Haskell or C code to manipulate the strings. Data.ByteString is written in Haskell98 plus the foreign function interface and cpp. It has been tested successfully with GHC 6.4, 6.6 and 6.8, Hugs 2005–2006, and the head version of nhc98. Work on Data.ByteString continues. In particular, a new fusion mechanism, stream fusion, has been developed, which should further improve the performance of ByteStrings. This work is described in the recent “Stream Fusion: From Lists to Streams to Nothing at All” paper. Data.ByteString has recently been ported to nhc98. Further reading Source and documentation can be found at http://www.cse.unsw.edu.au/~dons/fps.html

The source repository is available: darcs get http://darcs.haskell.org/bytestring

4.6.3 Data.List.Stream Data.List.Stream provides the standard Haskell list data type and API, with an improved fusion system, as described in the papers “Stream Fusion” and “Rewriting Haskell Strings”. Code written to use the Data.List.Stream library should run faster than (or at worst, as fast as) existing list code. A precise, correct reimplementation is a major goal of this project, and Data.List.Stream comes bundled with around 1000 QuickCheck properties, testing against the Haskell98 specification and the standard library. This library is under active development, and we expect to port the ndp and bytestring libraries to use it. Further reading Source and documentation can be found at: http://www.cse.unsw.edu.au/~dons/streams.html
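The representation at the heart of the fusion system can be sketched as follows, simplified from the paper: the real Stream type also has a Skip step, and the library relies on GHC rewrite rules to cancel adjacent stream/unstream pairs so that composed traversals run in a single pass with no intermediate lists.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A stream is a step function over some hidden seed state.
data Step s a = Done | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

-- Convert a list to a stream: the remaining list is the state.
stream :: [a] -> Stream a
stream xs0 = Stream next xs0
  where next []       = Done
        next (x : xs) = Yield x xs

-- Convert back: unfold the step function.
unstream :: Stream a -> [a]
unstream (Stream next s0) = go s0
  where go s = case next s of
          Done       -> []
          Yield x s' -> x : go s'

-- Stream operations are non-recursive state transformers, which is
-- what makes them easy for the compiler to optimize.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Yield x s' -> Yield (f x) s'

-- A list map written in fusible form.
map' :: (a -> b) -> [a] -> [b]
map' f = unstream . mapS f . stream
```

In `map' g . map' f`, the inner `stream . unstream` pair is removed by a rewrite rule, leaving a single `mapS g . mapS f` pipeline over one stream.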

4.6.4 Edison Report by: Rob Dockins Status: stable, maintained Edison is a library of purely functional data structures for Haskell, originally written by Chris Okasaki. Conceptually, it consists of two things: a set of type classes defining the following data structure abstractions: “sequences”, “collections” and “associative collections”; and multiple concrete implementations of each of the abstractions. In theory, either component may be used independently of the other. I took over maintenance of Edison about 18 months ago in order to update Edison to use the most current Haskell tools. The following major changes have been made since version 1.1, which was released in 1999. Typeclasses updated to use fundeps (by Andrew Bromage)

Implementation of ternary search tries (by Andrew Bromage)

Modules renamed to use the hierarchical module extension

Documentation haddockized

Source moved to a darcs repository

Build system cabalized

Unit tests integrated into a single driver program which exercises all the concrete implementations shipped with Edison

Multiple additions to the APIs (mostly the associative collection API) Edison is currently in maintain-only mode. I don’t have the time required to enhance Edison in the ways I would like. If you are interested in working on Edison, don’t hesitate to contact me. The biggest thing that Edison needs is a benchmarking suite. Although Edison currently has an extensive unit test suite for testing correctness, and many of the data structures have proven time bounds, I have no way to evaluate or compare the quantitative performance of data structure implementations in a principled way. Unfortunately, benchmarking data structures in a non-strict language is difficult to do well. If you have an interest or experience in this area, your help would be very much appreciated. Further reading http://www.cs.princeton.edu/~rdockins/edison/home/
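Edison's split between abstraction and implementation can be sketched as follows (a toy version for illustration; the real Sequence class is far richer and uses functional dependencies):

```haskell
-- One abstraction, several interchangeable concrete structures.
class Sequence seq where
  empty  :: seq a
  lcons  :: a -> seq a -> seq a   -- prepend on the left
  toList :: seq a -> [a]

-- Implementation 1: plain lists.
newtype ListSeq a = ListSeq [a]
instance Sequence ListSeq where
  empty = ListSeq []
  lcons x (ListSeq xs) = ListSeq (x : xs)
  toList (ListSeq xs)  = xs

-- Implementation 2: join lists, with O(1) append, flattened on demand.
data JoinSeq a = Empty | Single a | Append (JoinSeq a) (JoinSeq a)
instance Sequence JoinSeq where
  empty       = Empty
  lcons x s   = Append (Single x) s
  toList s    = go s []
    where go Empty        acc = acc
          go (Single x)   acc = x : acc
          go (Append l r) acc = go l (go r acc)

-- Code written against the class runs with either structure, so an
-- application can swap implementations without changing its logic.
fromList :: Sequence seq => [a] -> seq a
fromList = foldr lcons empty
```

This is exactly the property a benchmarking suite would exploit: the same client code, timed against each concrete implementation.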

4.6.5 Dimensional Dimensional is a library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types, and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division with units. The library is designed to, as far as is practical, enforce/encourage best practices of unit usage. Following a reorganization of the module hierarchy, the core of dimensional is now mostly stable, while additional units are being added on an as-needed basis. In addition to the SI system of units, dimensional has experimental support for user-defined dimensions and a proof-of-concept implementation of the CGS system of units. The most recent release is compatible with GHC 6.6.x and above and can be downloaded from Hackage or the project web site. The primary documentation is the literate Haskell source code, but the wiki on the project web site has a few usage examples to help with getting started. Further reading http://dimensional.googlecode.com
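The principle of carrying dimensions in types can be shown with a vastly simplified sketch. This toy version tags a value with a single phantom unit; the real library tracks all seven SI base dimensions and supports multiplication and division of quantities, none of which is attempted here:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- A value tagged with a phantom unit type (illustrative only).
newtype Quantity unit a = Quantity a deriving (Eq, Show)

data Metre
data Second

-- "Boxing" a number as a quantity, analogous to multiplying by a unit.
metres :: Double -> Quantity Metre Double
metres = Quantity

seconds :: Double -> Quantity Second Double
seconds = Quantity

-- Addition is only defined for quantities of the *same* unit, so
-- mixing units is rejected by the type checker:
--   qAdd (metres 1) (seconds 2)   -- type error: Metre /= Second
qAdd :: Num a => Quantity u a -> Quantity u a -> Quantity u a
qAdd (Quantity x) (Quantity y) = Quantity (x + y)
```

The payoff is that unit errors become compile-time errors, with no runtime cost: the newtype wrapper is erased.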

4.6.6 Numeric prelude Report by: Henning Thielemann Participants: Dylan Thurston, Henning Thielemann, Mikael Johansson Status: experimental, active development The hierarchy of numerical type classes is revised and oriented towards algebraic structures. Axioms for the fundamental operations are given as QuickCheck properties, superfluous superclasses like Show are removed, semantic and representation-specific operations are separated, the hierarchy of type classes is more fine-grained, and identifiers are adapted to mathematical terms. There are both new type classes representing algebraic structures and new types of mathematical objects. Currently supported algebraic structures are group (additive),

ring,

principal ideal domain,

field,

algebraic closures,

transcendental closures,

module and vector space,

normed space,

lattice,

differential algebra,

monoid. There is also a collection of mathematical object types, which is useful both for applications and testing the class hierarchy. The types are lazy Peano number

complex number, quaternion,

residue class,

fraction,

partial fraction,

numbers equipped with physical units (dynamic checks only),

fixed point arithmetic with respect to arbitrary bases and numbers of fraction digits,

infinite-precision numbers in an arbitrary positional system, represented as lazy lists of digits, also supporting numbers with terminating representations,

polynomial, power series, Laurent series

root set of a polynomial,

matrix (basics only),

algebra, e.g. multi-variate polynomial (basics only),

permutation group. Due to Haskell’s flexible type system, you can combine all these types, e.g. fractions of polynomials, residue classes of polynomials, complex numbers with physical units, power series with real numbers as coefficients. Using the revised system requires hiding some of the standard functions provided by the Prelude, which is fortunately supported by GHC (→2.1). The library has basic Cabal support and a growing test-suite of QuickCheck tests for the implemented mathematical objects. Future plans Collect more Haskell code related to mathematics, e.g. for linear algebra. Study of alternative numeric type class proposals and common computer algebra systems. Ideally, each data type resides in a separate module. However, this leads to mutually recursive dependencies, which cannot be resolved if type classes are mutually recursive. We have started to resolve this by fixing the types of some parameters of type class methods. E.g., power exponents become simply Integer instead of Integral, which also has the advantage of reduced type defaulting. A still unsolved problem arises for residue classes, matrix computations, infinite precision numbers, fixed point numbers and others. It should be possible to assert statically that the arguments of a function are residue classes with respect to the same divisor, or that they are vectors of the same size. Possible ways out are encoding values in types or local type class instances. The latter is still neither proposed nor implemented in any Haskell compiler. The modules are implemented in a way that keeps all options open. That is, for each number type there is one module implementing the necessary operations which expect the context as a parameter.
Then there are several modules which provide different interfaces through type class instances to these operations. Further reading http://darcs.haskell.org/numericprelude/

4.6.7 HList HList is a comprehensive, general purpose Haskell library for typed heterogeneous collections, including extensible polymorphic records and variants. HList is analogous to the standard list library, providing a host of various construction, look-up, filtering, and iteration primitives. In contrast to regular lists, elements of heterogeneous lists do not have to have the same type. HList lets the user formulate statically checkable constraints: for example, no two elements of a collection may have the same type (so the elements can be unambiguously indexed by their type). An immediate application of HLists is the implementation of open, extensible records with first-class, reusable, and compile-time-only labels. The dual application is extensible polymorphic variants (open unions). HList contains several implementations of open records, including records as sequences of field values, where the type of each field is annotated with its phantom label. We, and now others (Alexandra Silva, Joost Visser: PURe.CoddFish project), have also used HList for type-safe database access in Haskell. HList-based records form the basis of OOHaskell http://darcs.haskell.org/OOHaskell. The HList library relies on common extensions of Haskell 98. The HList repository is available via darcs (→6.13): http://darcs.haskell.org/HList The library is being optimized and extended. Since the last report, we have added ConsUnion.hs to build homogeneous lists of heterogeneous components by constructing the union on the fly. We added Template Haskell code to eliminate the annoying boilerplate when defining record ‘labels’. We optimized record projection, which should be especially noticeable for record narrowing. We added equivR, record equivalence modulo field order, with witnessing conversions. ConsUnion.hs checks for record types and treats them as equivalent modulo the order of fields. This gives optimized, shallower unions. Further reading HList: http://homepages.cwi.nl/~ralf/HList/

OOHaskell: http://homepages.cwi.nl/~ralf/OOHaskell/
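The core idea of a list whose element types are tracked in its own type can be condensed into a few lines of modern GADT/DataKinds syntax. This is a sketch only: the actual library predates these extensions and builds the same structure from plain HCons/HNil data types and multi-parameter type classes.

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeOperators #-}

-- A heterogeneous list: the type-level list ts records the type of
-- every element.
data HList ts where
  HNil  :: HList '[]
  HCons :: t -> HList ts -> HList (t ': ts)

-- Statically safe head and tail: calling hHead on HNil is a type
-- error, not a runtime crash.
hHead :: HList (t ': ts) -> t
hHead (HCons x _) = x

hTail :: HList (t ': ts) -> HList ts
hTail (HCons _ xs) = xs

-- An example "record": each field has its own type.
person :: HList '[String, Int, Bool]
person = HCons "Alice" (HCons 30 (HCons True HNil))
```

Look-up by type or by phantom label, as in the real library, is then layered on top with type classes.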

4.7 Data processing

4.7.1 binary Report by: Lennart Kolmodin Participants: Duncan Coutts, Don Stewart, Binary Strike Team Status: active The Binary Strike Team is pleased to announce the release of a new, pure, efficient binary serialisation library. The ‘binary’ package provides efficient serialisation of Haskell values to and from lazy ByteStrings. ByteStrings constructed this way may then be written to disk, written to the network, or further processed (e.g. stored in memory directly, or compressed in memory with zlib or bzlib). The binary library has been heavily tuned for performance, particularly for writing speed. Throughput of up to 160M/s has been achieved in practice, and in general speed is on par with or better than NewBinary, with the advantage of a pure interface. Efforts are underway to improve performance still further. Plans are also taking shape for a parser combinator library on top of binary, for bit parsing and foreign structure parsing (e.g. network protocols). Data.Derive (→5.3.1) has support for automatically generating Binary instances, allowing you to read and write your data structures with little fuss. Binary was developed by a team of 8 during the Haskell Hackathon, and since then 15 people in total have contributed code, with many more giving feedback and cheerleading on #haskell. The underlying code is currently being rewritten to give even better performance – both reading and writing – while still exposing the same API. The package is available through Hackage (→4.1.1). Further reading Homepage http://www.cse.unsw.edu.au/~dons/binary.html

Hackage http://hackage.haskell.org/cgi-bin/hackage-scripts/package/binary

Development version: darcs get --partial http://darcs.haskell.org/binary

4.7.2 binarydefer Report by: Neil Mitchell The Binary Defer library provides a framework for doing binary serialisation, with support for deferred loading. Deferred loading is for when a large data structure exists, but typically only a small fraction of this data structure will be required. By using deferred loading, some of the data structure can be read quickly, and the rest can be read on demand, in a pure manner. This library is at the heart of Hoogle 4 (→5.5.6), but has already found uses outside that application, including to do offline sorts etc. Further reading Homepage: http://www-users.cs.york.ac.uk/~ndm/binarydefer

4.7.3 Crypto The current version is still 4.0.3. This means no dependency on NewBinary, which had been requested by several people. The interface to SHA-1 is still different from MD5, and the whole library needs a rethink. Unfortunately, I don’t have the time to undertake much work on it at the moment, and it is not clear when I will have more time. I’m therefore looking for someone to help keep the repository up to date with contributions, re-structure the library and manage releases. I have restructured SHA-1 to be more Haskell-like, and it is now obvious how it mirrors the specification. However, this has led to rather poor performance, and it is not obvious (to me at least) what can be done without sacrificing clarity. Several people have posted more efficient versions of SHA-1, but not as patches. Given my limited time, I haven’t been able to do anything with these. This release contains: DES

Blowfish

AES

Cipher Block Chaining (CBC)

PKCS#5 and nulls padding

SHA-1

MD5

RSA

OAEP-based encryption (Bellare-Rogaway) Further reading http://www.haskell.org/crypto http://hackage.haskell.org/trac/crypto.

4.7.4 ASN.1 The current release is 0.0.11, which contains functions to handle ASN.1, X.509, PKCS#8 and PKCS#1.5. This still has a dependency on NewBinary. The current version handles the Basic Encoding Rules (BER). In addition, a significant amount of work has been undertaken on handling the Packed Encoding Rules (PER), using a GADT to represent the Abstract Syntax Tree (we’ll probably move the BER to use the same AST at some point). You can download the current working version and try the unit and QuickCheck property tests for PER. These are not yet built by Cabal. This release supports: X.509 identity certificates

X.509 attribute certificates

PKCS#8 private keys

PKCS#1 version 1.5 Further reading http://haskell.org/asn1.

A two-level data transformation consists of a type-level transformation of a data format coupled with value-level transformations of data instances corresponding to that format. Examples of two-level data transformations include XML schema evolution coupled with document migration, and data mappings used for interoperability and persistence. In the 2LT project, support for two-level transformations is being developed using Haskell, relying in particular on generalized abstract data types (GADTs). Currently, the 2LT package offers: A library of two-level transformation combinators. These combinators are used to compose trans formation systems which, when applied to an input type, produce an output type, together with the conversion functions that mediate between input and out types.

Front-ends for VDM-SL, XML and SQL. These front-ends support (i) reading a schema, (ii) applying a two-level transformation system to produce a new schema, (iii) convert a document/database corresponding to the input schema to a document/database corresponding to the output schema, and vice versa .

. A combinator library for transformation of point-free and structure-shy functions. These combinators are used to compose transformation systems for optimization of conversion functions, and for migration of queries through two-level transformations. Independent of two-level transformation, the combinators can be used to specializes structure-shy programs (such as XPath queries and strategic functions) to structure-sensitive point-free from, and vice versa .

Support for schema constraints using point-free expressions. Constraints present in the initial schema are preserved during the transformation process, and new constraints are added in specific transformations to ensure semantics preservation. Constraints can be simplified using the already existing library for transformation of point-free functions.

The various sets of transformation combinators are reminiscent of the combinators of Strafunski and the Scrap-your-Boilerplate approach to generic functional programming.

A release of 2LT is available from the project URL. Recently, the 2LT project has been migrated to Google Code. New functionality is planned, such as elaboration of the front-ends and the creation of a web interface.

Further reading

Project URL: http://2lt.googlecode.com

Alcino Cunha, José Nuno Oliveira, Joost Visser. Type-safe Two-level Data Transformation. Formal Methods 2006.

Alcino Cunha, Joost Visser. Strongly Typed Rewriting For Coupled Software Transformation. RULE 2006.

Pablo Berdaguer, Alcino Cunha, Hugo Pacheco, Joost Visser. Coupled Schema Transformation and Data Conversion For XML and SQL. PADL 2007.

Alcino Cunha, Joost Visser. Transformation of Structure-Shy Programs, Applied to XPath Queries and Strategic Functions. PEPM 2007.

Tiago L. Alves, Paulo Silva, Joost Visser. Constraint-aware Schema Transformation. Draft, 2007.
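The central idea of two-level transformation described above — that a transformation on schemas induces value-level conversion functions in both directions — can be sketched in a few lines of Haskell. This is only an illustrative sketch in the spirit of the Formal Methods 2006 paper; the names and types are not the actual 2LT API.

```haskell
-- A pair of conversion functions between the old representation 'a'
-- and the new representation 'b', as induced by a schema transformation.
data Rep a b = Rep { to :: a -> b, from :: b -> a }

-- A toy transformation rule: swapping the components of a pair.
swapRule :: Rep (a, b) (b, a)
swapRule = Rep { to = \(x, y) -> (y, x), from = \(y, x) -> (x, y) }

-- Rules compose like functions, so a larger schema transformation is
-- built from small steps while both conversion directions stay in sync.
compose :: Rep a b -> Rep b c -> Rep a c
compose r s = Rep { to = to s . to r, from = from r . from s }

main :: IO ()
main = do
  print (to swapRule (1 :: Int, "x"))            -- ("x",1)
  let r = compose swapRule swapRule              -- identity overall
  print (to r (1 :: Int, "x"))                   -- (1,"x")
```

Note how `compose` keeps the `to` and `from` directions coupled: this is what makes migration of data (and, in 2LT, of queries and constraints) through a chain of schema transformations type-safe.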

4.8 User interfaces

4.8.1 Shellac

Report by: Rob Dockins
Status: beta, maintained

Shellac is a framework for building read-eval-print style shells. Shells are created by declaratively defining a set of shell commands and an evaluation function. Shellac supports multiple shell backends, including a ‘basic’ backend, which uses only Haskell IO primitives, and a full-featured ‘readline’ backend based on the Haskell readline bindings found in the standard libraries. This library attempts to allow users to write shells in a declarative way while still enjoying the advanced features that may be available from a powerful line-editing package like readline.

Shellac is available from Hackage, as is the related Shellac-readline package. Shellac has been successfully used by several independent projects, and the API is now fairly stable. I will likely release an officially “stable” version in the not-too-distant future. I anticipate few changes from the current version.

Further reading

http://www.cs.princeton.edu/~rdockins/shellac/home
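To see what Shellac automates, here is the kind of hand-rolled read-eval-print loop over plain Haskell IO that the ‘basic’ backend replaces. This sketch deliberately does not use the Shellac API; with Shellac, one would instead declare the commands and the evaluator and let the framework supply the loop and line editing.

```haskell
import System.IO (hFlush, stdout)

-- A toy evaluation function: the "shell commands" are hard-coded.
-- 'Nothing' signals that the shell should exit.
eval :: String -> Maybe String
eval "quit" = Nothing
eval ""     = Just ""
eval input  = Just ("you said: " ++ input)

-- The hand-written loop: prompt, read, evaluate, print, repeat.
repl :: IO ()
repl = do
  putStr "> " >> hFlush stdout
  line <- eval <$> getLine
  case line of
    Nothing  -> putStrLn "bye"
    Just out -> putStrLn out >> repl

main :: IO ()
main = repl
```

Everything except `eval` is boilerplate, and it offers no history or line editing; a framework like Shellac lets the author write only the declarative part and pick a backend for the rest.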

4.8.2 Grapefruit

Report by: Wolfgang Jeltsch

Grapefruit is a library for creating graphical user interfaces and animated graphics in a declarative way. Fundamental to Grapefruit is the notion of a signal. A signal denotes either a time-varying value (the continuous case) or a sequence of values assigned to discrete points in time (the discrete case). Signals can be constructed in a purely functional manner.

User interfaces are described as systems of interconnected components which communicate via signals. To build such systems, the methods of the Arrow and ArrowLoop classes are used. For describing animated graphics, a special signal type exists.

Grapefruit also provides list signals. A list signal is a list-valued signal which can be updated incrementally and thus efficiently. In addition, a list signal associates an identity with each element, so that moving an element within the list can be distinguished from removing the element and adding it again. List signals can be used to describe dynamic user interfaces, i.e., user interfaces with a changing set of components and a changing order of components.

Grapefruit descriptions of user interfaces and animations always cover their complete lifetime. No explicit event handler registrations and no explicit recalculations of values are necessary. This is in line with the declarative nature of Haskell, because it stresses the behavior of GUIs and animations instead of how this behavior is achieved. Internally, though, Grapefruit is implemented efficiently using a common event dispatching and handling mechanism.

Grapefruit is currently based on Gtk2Hs (→4.8.3) and HOpenGL, but implementations on top of other GUI and graphics libraries are possible. The aim is to provide alternative implementations based on different GUI toolkits, so that a single application is able to integrate itself into multiple desktop environments.

Further reading

http://haskell.org/haskellwiki/Grapefruit
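The two signal cases mentioned above can be illustrated with a naive conceptual model. Grapefruit's real signal types are abstract and backed by an efficient event-dispatching implementation, so the types below are purely illustrative, not Grapefruit's API.

```haskell
type Time = Double

-- Continuous case: a time-varying value, modeled as a function of time.
newtype CSignal a = CSignal (Time -> a)

-- Discrete case: values occurring at discrete points in time,
-- modeled as a timestamped sequence.
newtype DSignal a = DSignal [(Time, a)]

-- Signals are constructed purely functionally, e.g. by mapping a
-- function over a continuous signal.
mapC :: (a -> b) -> CSignal a -> CSignal b
mapC f (CSignal g) = CSignal (f . g)

-- Observe a continuous signal at a point in time.
sample :: CSignal a -> Time -> a
sample (CSignal g) = g

main :: IO ()
main = print (sample (mapC (* 2) (CSignal id)) 21)  -- prints 42.0
```

In this model, a real implementation would have to recompute `sample` on every use; the point of Grapefruit's internal event-dispatching mechanism is to get the same declarative interface without that cost.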

4.8.3 Gtk2Hs

Gtk2Hs is a GUI library for Haskell based on Gtk+. Gtk+ is an extensive and mature multi-platform toolkit for creating graphical user interfaces. GUIs written using Gtk2Hs use themes to resemble the native look on Windows and, of course, the various desktops on Linux, Solaris and FreeBSD. Gtk+ and Gtk2Hs also support Mac OS X (currently via the X11 server, but a native port is in progress – see below). Gtk2Hs f