Index

This is the 21st edition of the Haskell Communities and Activities Report. As usual, fresh entries are formatted using a blue background, while updated entries have a header with a blue background. Entries for which I received a liveness ping, but which have seen no essential update for a while, have been replaced with online pointers to previous versions. Other entries on which no new activity has been reported for a year or longer have been dropped completely. Please do revive such entries next time if you do have news on them.

A call for new entries and updates to existing ones will be issued on the usual mailing lists in April. Now enjoy the current report and see what other Haskellers have been up to lately. Any feedback is very welcome, as always.

Janis Voigtländer, University of Bonn, Germany, <hcar at haskell.org>

Unfortunately we cannot provide a full statement of haskell.org’s accounts with this report; we are doing our best to track down the necessary information and will produce them as soon as possible. Better control and visibility of our finances and assets is of course one of the benefits we are seeking by affiliating with SFC or SPI.

The haskell.org infrastructure as a whole is still in a rather tenuous state. While the extreme unreliability we saw for a while has improved with the reorganisation, the level of sysadmin resource/involvement is still inadequate. The committee is open to ideas on how to improve the situation.

The involvement of the committee as a whole was only to approve the change — the sysadmin team did all the actual work.

For many years, www.haskell.org was generously hosted by Paul Hudak at Yale. This was becoming increasingly expensive for him, so in late 2010 we moved to a new dedicated host (lambda.haskell.org). At the same time we put in place a policy that lambda would host only “meta” community resources, thus limiting the number of people who need to have accounts on it. For some time before this, new project content had been created on community.haskell.org anyway, and the move gave us the opportunity to migrate “legacy” sites such as Gtk2Hs over to community as well. In addition, community.haskell.org is now a VM running on the same machine.

Clearly the line between services and content, and indeed the precise definitions of each, is something of a grey area, and we are certainly happy to be flexible particularly if there are technical or other reasons for doing things one way. Our overall goal is to minimise unnecessary proliferation of subdomains and to try to keep the haskell.org domain reasonably well organised, while still helping people do useful things with it.

In contrast, during the year, we did add revdeps.hackage.haskell.org for a hackage reverse-dependency lookup service, and of course hackage.haskell.org already exists.

So for example a Haskell graphics related website should normally go at http://www.haskell.org/graphics, rather than http://graphics.haskell.org.

In response to various requests for subdomains of haskell.org, we have formulated the following policy, now (belatedly!) documented at http://www.haskell.org/haskellwiki/Haskell.org_domain#Policy_on_adding_new_subdomains

The committee would like to thank Jason Dagit who has been helping us to make progress on this issue over the last few months, with the support of his employer Galois.

In the meantime we are also investigating joining an alternative, Software in the Public Interest ( http://www.spi-inc.org ). Discussions about this option are still ongoing.

The main option we have been exploring is joining the Software Freedom Conservancy ( http://www.sfconservancy.org ). After seeking the community’s consent, we have contacted them to begin the application process. Unfortunately they are currently rather overworked and as they prioritise work for existing projects over accepting new ones, we do not yet know when there will be progress with this.

At the moment, Galois is kindly holding funds on behalf of haskell.org. However, this causes them administrative difficulties and it would also be better for haskell.org for them to be held separately in a vehicle with tax-free status (at least in the US) that can also accept donations.

The most important work for the year has been trying to get the ownership of haskell.org resources — principally some money from our GSoC participation, and various machines — on a sounder footing.

In our first year of operation, the following has happened:

The haskell.org committee was formed a year ago to formalise the previously ad-hoc arrangements around managing the haskell.org infrastructure and money. The committee’s “home page” is at http://www.haskell.org/haskellwiki/Haskell.org_committee , and occasional publicity is via a blog ( http://haskellorg.wordpress.com ) and twitter account ( http://twitter.com/#!/haskellorg ) as well as the Haskell mailing list.

Since the November 2010 HCAR, Haskellers has added job postings, strike forces, and the ever important bling, as well as a brand new, community-developed site design. Haskellers is quickly approaching 800 active accounts. To be clear, the site is intended for all members of the Haskell community, from professionals with 15 years’ experience to people just getting into the language.

Haskellers is a site designed to promote Haskell as a language for use in the real world by being a central meeting place for the myriad talented Haskell developers out there. It allows users to create profiles complete with skill sets and packages authored and gives employers a central place to find Haskell professionals.

The book uses GHCi, the interactive version of the Glasgow Haskell Compiler, as its implementation of choice. It has also been revised to include material about the Haskell Platform, and the Hackage online database of Haskell libraries. In particular, readers are given detailed guidance about how to find their way around what is available in these systems.

Existing material has been expanded and re-ordered, so that some concepts — such as simple data types and input/output — are presented at an earlier stage. The running example of Pictures is now implemented using web browser graphics as well as lists of strings.

The third edition of one of the leading textbooks for beginning Haskell programmers is thoroughly revised throughout. New material includes thorough coverage of property-based testing using QuickCheck and an additional chapter on domain-specific languages as well as a variety of new examples and case studies, including simple games.

Since the last HCAR there have been two new issues. Issue 18, published in July 2011, featured articles on a monadic formulation of MapReduce, parallel monad comprehensions, and attributed variables. Issue 19, published in October 2011, was a special issue on parallelism and concurrency, featuring an article on the Mighttpd web server, a tutorial on the use of MPI from Haskell, and an article about pipelines of coroutine-based processes.

The Monad.Reader is also a great place to write about a tool or application that deserves more attention. Most programmers do not enjoy writing manuals; writing a tutorial for The Monad.Reader, however, is an excellent way to put your code in the limelight and reach hundreds of potential users.

There are plenty of interesting ideas that might not warrant an academic publication—but that does not mean these ideas are not worth writing about! Communicating ideas to a wide audience is much more important than concealing them in some esoteric journal. Even if it has all been done before in the Journal of Impossibly Complicated Theoretical Stuff, explaining a neat idea about “warm fuzzy things” to the rest of us can still be plain fun.

There are many academic papers about Haskell and many informative pages on the HaskellWiki. Unfortunately, there is not much between the two extremes. That is where The Monad.Reader tries to fit in: more formal than a wiki page, but more casual than a journal article.

Our solution relies on the observation that each node in a finite tree can be identified by its path – a sequence of integers – with a decidable identity and order. To annotate tree nodes we build a separate finite map data structure associating nodes’ paths with their annotations. To add annotations of a different type, we build another map.

A common problem is annotating nodes of an already constructed tree or other such data structure with arbitrary new data. The original tree had been defined with no provision for node attributes, and we are not at liberty to change the data type definition. We should not even require rebuilding of the tree as we add annotations to its nodes. Our code must be purely functional; in particular, the tree to annotate should remain as it was. Finally, our solution should be expressible in a typed language without resorting to the Universal type.
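The path-keyed approach described above might be sketched as follows (a minimal illustration using Data.Map; the names are hypothetical, not the authors’ actual code):

```haskell
import qualified Data.Map as Map

-- An ordinary tree, defined with no provision for annotations.
data Tree a = Node a [Tree a]

-- A node is identified by its path: the sequence of child indices
-- taken from the root.  Paths have decidable equality and order,
-- so they can serve as keys of a finite map.
type Path = [Int]

-- Annotations of type ann live in a separate map, keyed by path;
-- the tree itself is never rebuilt.  Annotations of a different
-- type simply go in another map.
type Annots ann = Map.Map Path ann

annotate :: Path -> ann -> Annots ann -> Annots ann
annotate = Map.insert

lookupAnn :: Path -> Annots ann -> Maybe ann
lookupAnn = Map.lookup

-- Enumerate every node of a tree together with its path.
paths :: Tree a -> [(Path, a)]
paths = go []
  where
    go p (Node x kids) =
      (reverse p, x) : concat (zipWith (\i t -> go (i : p) t) [0 ..] kids)
```

For example, `annotate [1,0] "visited" Map.empty` attaches an annotation to the first child of the root’s second child, while the tree itself remains exactly as it was.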

Finally, we implement a printf that takes a C-like format string and a variable number of other arguments. Unlike C’s or Haskell’s printf, ours is total: if the types or the number of the other arguments do not match the format string, a type error is reported. Likewise, we build a type-safe scanf that takes a C-like format string and a polyvariadic consumer function. We use Template Haskell to translate the format string into a term in the DSL of format descriptors.

The DSL of formatting patterns can be embedded into Haskell as a (generalized) algebraic data type, or as a family of overloaded functions (a type class). To the end user, the difference is hardly noticeable. However, whereas the first embedding requires GADTs, the second is entirely in Haskell98 and is extensible.

Our implementations of type-safe printf and scanf all share the same insight of a simple embedded domain-specific language (DSL) of formatting patterns. The functions printf and scanf are two interpreters of the language, building or parsing a string according to the given pattern. The format descriptor, a term in our DSL, can be interpreted in far more than two ways, producing a family of printf/scanf-like functions.

A series of articles describes various type-safe implementations of printf and scanf sharing the same format descriptor. A type-safe printf converts a sequence of heterogeneous arguments to a string according to a given format descriptor; the number and the types of the arguments must agree with the descriptor. Haskell’s Text.Printf.printf is not type-safe by this definition, since the type checker does not stop the programmer from passing to Text.Printf.printf more or fewer arguments than required by the formatting string. The dual type-safe scanf extracts a sequence of heterogeneous arguments from a string by interpreting the same format descriptor as a heterogeneous sequence of patterns binding zero or more variables. Although type-safe printf has received a lot of attention (from Danvy, Hinze, Asai), type-safe scanf is often neglected. Apparently it was previously unknown whether type-safe printf and scanf could share the same format descriptor.
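The printf side of this idea can be sketched with Danvy-style functional unparsing, in which a format descriptor is a continuation transformer and descriptors compose with ordinary function composition (an illustrative toy, not the code of the papers cited):

```haskell
-- A descriptor of type ((String -> a) -> String -> b) turns a
-- continuation expecting the finished string into a function that
-- demands exactly the arguments the descriptor calls for.

lit :: String -> (String -> a) -> String -> a
lit s k out = k (out ++ s)        -- literal text: no argument

str :: (String -> a) -> String -> String -> a
str k out x = k (out ++ x)        -- "%s": demands a String

int :: (String -> a) -> String -> Int -> a
int k out n = k (out ++ show n)   -- "%d": demands an Int

-- Run a descriptor: start with the identity continuation and an
-- empty accumulator.
printf :: ((String -> String) -> String -> a) -> a
printf fmt = fmt id ""
```

Then `printf (lit "Hi " . str . lit ", age " . int) "Oleg" 33` yields `"Hi Oleg, age 33"`, while passing the wrong number or types of arguments is rejected at compile time.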

Enumerator/Iteratee (EI), developed by Oleg Kiselyov, is an API enabling modular programming in the IO monad. A popular implementation of EI is the enumerator library developed by John Millikin. This tutorial is a gentle introduction to the background of EI and to the use of the enumerator library.
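The core idea behind EI, a stream consumer (iteratee) fed chunk by chunk by a producer (enumerator), can be modelled in a few lines (a toy model for intuition only; the enumerator library’s real types are monadic and considerably richer):

```haskell
-- An iteratee is either finished with a result, or a function
-- awaiting the next input (Nothing signals end of stream).
data Iter a b = Done b | Cont (Maybe a -> Iter a b)

-- An enumerator feeds a list to an iteratee, chunk by chunk,
-- stopping early if the iteratee is already done.
enumList :: [a] -> Iter a b -> Iter a b
enumList (x : xs) (Cont k) = enumList xs (k (Just x))
enumList _        i        = i

-- Run an iteratee: send end-of-stream and extract the result.
run :: Iter a b -> Maybe b
run (Done b) = Just b
run (Cont k) = case k Nothing of
                 Done b -> Just b
                 _      -> Nothing   -- iteratee refused to finish

-- Example iteratee: sum every Int in the stream.
sumI :: Iter Int Int
sumI = go 0
  where
    go acc = Cont step
      where
        step (Just x) = go (acc + x)
        step Nothing  = Done acc
```

`run (enumList [1 .. 10] sumI)` evaluates to `Just 55`; because the consumer never performs input itself, the same `sumI` could equally be fed from a file, a socket, or another enumerator.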

The magazine attempts to keep a bi-monthly release schedule, with Issue #7 leaving the press at the end of April 2011. Full contents of current and past issues are available in PDF from the official site of the magazine free of charge. Articles are in Russian, with English annotations.

“Practice of Functional Programming” is a Russian electronic magazine promoting functional programming. The magazine features articles that cover both theoretical and practical aspects of the craft. A significant amount of the already published material is directly related to Haskell.

The platform steering committee will be proposing some modifications to the community review process for accepting new packages into the platform, with the aim of reducing the burden on package authors and keeping the review discussions productive. Though we will be making some modifications, we would still like to invite package authors to propose new packages. This can be initiated at any time. We also invite the rest of the community to take part in the review process on the libraries mailing list <libraries at haskell.org>. The procedure involves writing a package proposal and discussing it on the mailing list with the aim of reaching a consensus. Details of the procedure are on the development wiki.

Our systems for coordinating and testing new releases remain too time-consuming, involving too much manual work. Help from the community on this issue would be very valuable.

Major releases are supposed to take place on a 6 month cycle. There will be a major release in Spring 2012 which will be based on the GHC-7.4.x series.

There has not been a release in the last six months. While the plan calls for major releases every six months, this has not happened for a number of reasons. We took the decision not to base a major release on GHC-7.2.1, and no new release in the 7.2.x series is expected. We ran into some problems trying to prepare a release using GHC-7.0.4; however, we may yet do a release based on it.

Historically, GHC shipped with a collection of packages under the name extralibs. Since GHC 6.12, the task of shipping an entire platform has been transferred to the Haskell Platform.

The Haskell Platform (HP) is the name of the “blessed” set of libraries and tools on which to build further Haskell libraries and applications. It takes a core selection of packages from the more than 3500 on Hackage (→ 6.8.1 ). It is intended to provide a comprehensive, stable, and quality tested base for Haskell projects to work from.

Moreover, we are working on the third revision of the regular parallel array library [Repa]. It uses indexed types to distinguish multiple array representations, which helps guide users towards writing high-performance code. To see it in action, check out Ben Lippmeier’s recent demo [Quasicrystals].

Binary distributions of GHC 7.x require the installation of separate Data Parallel Haskell libraries from Hackage — follow the instructions in the wiki documentation [DPH].

As ever, there is a lot still to do, and if you wait for us to do something then you may have to wait a long time. So do not wait; join in!

We continue to receive some fantastic help from a number of members of the Haskell community. Amongst those who have rolled up their sleeves recently are:

For further details and usage examples, see the paper "Bringing back monad comprehensions" [MonadComp] at the 2011 Haskell Symposium.

Rebindable syntax is fully supported for standard monad comprehensions with generators and filters. We also plan to allow rebinding of the parallel/zip and SQL-like monad comprehension notations.

Since we do not give a definition for T in the instance declaration, it is filled in with the default given in the class declaration, just as if we had written type T Int = [Int] .
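A class and instance of the kind described might look like this (an illustrative reconstruction using the TypeFamilies extension, not the text’s original example):

```haskell
{-# LANGUAGE TypeFamilies #-}

class C a where
  type T a
  type T a = [a]     -- default definition of the associated type
  f :: a -> T a

-- No definition of T is given here, so the default kicks in,
-- just as if we had written: type T Int = [Int]
instance C Int where
  f n = replicate n n
```

Here `f (3 :: Int)` has type `[Int]` and evaluates to `[3,3,3]`.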

Here, X is an associated constraint synonym of the class Coll . The key point is that different instances can give different definitions to X . The GHC wiki page describes the design [WikiConstraint], and Max’s blog posts give more examples [ConstraintFamilies, ConstraintKind].

Here, the constraint (Stringy a) is a synonym for (Show a, Read a) . More importantly, by combining constraint synonyms with associated types, we can write some fundamentally new kinds of programs.
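A sketch of the sort of program meant (assuming the ConstraintKinds and TypeFamilies extensions; the class and names follow the surrounding text but are reconstructed, not the original code):

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances #-}
import           GHC.Exts (Constraint)
import qualified Data.Set as Set

-- A constraint synonym: (Stringy a) abbreviates (Show a, Read a).
type Stringy a = (Show a, Read a)

-- An always-satisfied constraint, for instances that need nothing.
class NoConstraint a
instance NoConstraint a

-- A collection class with an associated constraint synonym X:
-- each instance chooses what its elements must satisfy.
class Coll c where
  type X c :: * -> Constraint
  empty  :: c a
  insert :: X c a => a -> c a -> c a

-- Lists place no constraint on their elements ...
instance Coll [] where
  type X [] = NoConstraint
  empty  = []
  insert = (:)

-- ... but sets require an ordering.
instance Coll Set.Set where
  type X Set.Set = Ord
  empty  = Set.empty
  insert = Set.insert
```

The key point is visible in the two instances: X is chosen per instance, something that could not be expressed before constraints became first-class.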

GHC will now infer a polymorphic kind signature for such declarations, rather than “defaulting” to T :: (*->*) -> * -> * as Haskell98 does.
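The kind of declaration in question can be illustrated as follows (a standard example, assuming the PolyKinds extension; not necessarily the exact code the text refers to):

```haskell
{-# LANGUAGE PolyKinds #-}

-- With kind polymorphism GHC infers
--   T :: forall k. (k -> *) -> k -> *
-- rather than defaulting to T :: (* -> *) -> * -> *.
data T m a = MkT (m a)

-- Hence T can now be applied at other kinds too:
newtype Wrap f = Wrap (f Int)   -- Wrap :: (* -> *) -> *

atStar :: T Maybe Int           -- m at kind * -> *
atStar = MkT (Just 5)

atHigher :: T Wrap Maybe        -- m at kind (* -> *) -> *
atHigher = MkT (Wrap (Just 7))
```

The second use, `T Wrap Maybe`, is exactly what the Haskell98 defaulting rules out.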

This has already been merged, so will definitely be in 7.4.

We advertised 7.2 as a technology preview, expecting 7.4 to merely consolidate the substantial new features in 7.2. But as it turns out GHC 7.4 will have a further wave of new features, especially in the type system. Significant changes planned for the 7.4 branch are:

GHC is still humming along, with the 7.2.1 release (more of a "technology preview" than a stable release) having been made in August, and attention now focused on the upcoming 7.4 branch. By the time you read this, the 7.4 branch will have been created, and will be in "feature freeze". We will then be trying to fix as many bugs as possible before releasing later in the year.

Background. UHC is actually a series of compilers, of which the last is UHC itself, plus infrastructure for facilitating experimentation and extension. The distinguishing features for dealing with the complexity of the compiler and for experimentation are (1) its stepwise organisation as a series of increasingly more complex standalone compilers, (2) its aspectwise organisation (called Shuffle), built on a DSL and tools, and (3) tree-oriented programming via attribute grammars, by way of the Utrecht University Attribute Grammar (UUAG) system (→ 5.4.1 ).

What do we currently do and/or has recently been completed? As part of the UHC project, the following (student) projects and other activities are underway (in arbitrary order):

What is new? UHC is the Utrecht Haskell Compiler, supporting almost all Haskell98 features and most of Haskell2010, plus experimental extensions. The current focus is on the Javascript backend.

If you find yourself interested in helping us or simply want to use the latest versions of Haskell programs on FreeBSD, check out our page at the FreeBSD wiki (see below) where you can find all important pointers and information required for use, contact, or contribution.

We have a developer repository for Haskell ports that features around 255 ports of many popular Cabal packages. Updates committed to this repository are regularly merged into the official ports tree. The FreeBSD Ports Collection already contains much important Haskell software: GHC 7.0.3, Haskell Platform 2011.2.0.1, Gtk2Hs 0.12, XMonad 0.10, Pandoc 1.8, Darcs 2.5, and Snap 0.5.2 – all of which will also be incorporated into the upcoming FreeBSD 9.0-RELEASE.

The FreeBSD Haskell Team is a small group of contributors who maintain Haskell software on all actively supported versions of FreeBSD. The primarily supported implementation is the Glasgow Haskell Compiler together with Haskell Cabal, although one may also find Hugs and NHC98 in the ports tree. FreeBSD is a Tier-1 platform for GHC (on both i386 and amd64) starting from GHC 6.12.1, hence one can always download vanilla binary distributions for each recent release.

The transition to GHC 7, which involved renaming all packages, is finished. The stable Debian release (“squeeze”) provides the Haskell Platform 2010.1.0.0, Debian testing contains 2011.2.0.1, and in unstable we are currently staging the to-be-released 2011.3.0.0. Other noteworthy additions to Haskell on Debian are the yesod packages and a port of GHC to the 64-bit mainframe architecture “s390x”.

A system of virtual package names and dependencies, based on the ABI hashes, guarantees that a system upgrade will leave all installed libraries usable. Most libraries are also optionally available with profiling data, and the documentation packages register with the system-wide index.

The Debian Haskell Group aims to provide an optimal Haskell experience to users of the Debian GNU/Linux distribution and derived distributions such as Ubuntu. We try to follow the Haskell Platform versions for the core packages and package a wide range of other useful libraries and programs. In total, we maintain 390 source packages, an increase of 80% over the number from the last report.

As always we are more than happy for (and in fact encourage) Gentoo users to get involved and help us maintain our tools and packages, even if it is as simple as reporting packages that do not always work or need updating: with such a wide range of GHC and package versions to co-ordinate, it is hard to keep up! Please contact us on IRC or email if you are interested!

More information about the Gentoo Haskell Overlay can be found at http://haskell.org/haskellwiki/Gentoo . It is available via the Gentoo overlay manager “layman”. If you choose to use the overlay, then any problems should be reported on IRC ( #gentoo-haskell on freenode), where we coordinate development, or via email <haskell at gentoo.org>, as more people able to fix the overlay packages can be reached via the IRC channel than via the bug tracker.

The team has made a considerable effort to port many popular packages to ghc-7.2. A lot of patches are still sitting in the overlay, however, waiting for upstream inclusion.

There is also an overlay which contains almost 700 extra unofficial and testing packages. Thanks to the Haskell developers using Cabal and Hackage (→ 6.8.1 ), we have been able to write a tool called “hackport” (initiated by Henning Günther) to generate Gentoo packages with minimal user intervention. Notable packages in the overlay include the latest version of the Haskell Platform (→ 3.1 ) and the latest 7.2.1 release of GHC, as well as popular Haskell packages such as pandoc (→ 8.2.2 ), gitit ( http://www.haskell.org/communities/11-2010/html/report.html#sect5.2.5 ), yesod (→ 5.2.6 ), and others.

The full list of packages available through the official repository can be viewed at http://packages.gentoo.org/category/dev-haskell?full_cat .

Feedback from users and packaging contributions to Fedora Haskell are always welcome: join us on #fedora-haskell on Freenode IRC and our mailing-list.

In the Fedora 17 development cycle it is planned to update ghc to 7.4 and continue work on packaging the Snap and Yesod web frameworks.

There are currently 139 Haskell source packages in Fedora. Note that the Fedora package version numbers listed on the Hackage website refer to the packages for the latest stable Fedora release.

These changes have also been partially backported to Fedora 14 and 15.

Fedora 16 is shipping early in November with ghc-7.0.4 and haskell-platform-2011.2.0.1, and updates to many of the packages. Newly added packages this time include leksah, cabal-dev, cab, and over 25 new libraries.

The Fibon tools and benchmark suite are ready for public consumption. They can be found on GitHub at the URL indicated below. People are invited to use the included benchmark suite, or just use the tools and build a suite of their own creation. Any improvements to the tools or additional benchmarks are most welcome. Benchmarks have been used to tell lies about performance for many years, so join in the fun and keep on fibbing with Fibon.

This year, the Fibon benchmark suite has been updated to include a Train problem size that can be used for feedback directed optimization work. The Ref problem size has been increased so that the running time of a benchmark program is comparable to the running time when using the ref size of the SPEC benchmarks. With this update a single benchmark will typically take 10-30 minutes to run depending on the power of the computer hardware. See the README file for more information on benchmark size and configuring the benchmarks to finish in an acceptable amount of time.

As a real-life example of a complete benchmark suite, Fibon comes with its own set of benchmarks for testing the effectiveness of compiler optimizations in GHC. The benchmark programs come from Hackage, the Computer Language Shootout, Data Parallel Haskell, and Repa. The benchmarks were selected to have minimal external dependencies so they could be easily used with a version of GHC compiled from the latest sources. The following figure shows the performance improvement of GHC’s optimizations on the Fibon benchmark suite.

Benchmarks are built using the standard cabal tool. Any program that has been cabalized can be added as a benchmark simply by specifying some meta-information about the program inputs and expected outputs. Fibon will automatically collect execution times for benchmarks and can optionally read the statistics output by the GHC runtime. The program outputs are checked to ensure correct results, making Fibon a good option for testing the safety and performance of program optimizations. The Fibon tools are not tied to any one benchmark suite. As long as the correct meta-information has been supplied, the tools will work with any set of programs.

The Fibon benchmark tools draw inspiration from both the venerable nofib Haskell benchmark suite and the industry standard SPEC benchmark suite. The tools automate the tedious parts of benchmarking: building the benchmark in a sand-boxed directory, running the benchmark multiple times, verifying correctness, collecting statistics, and summarizing results.

Fibon is a set of tools for running and analyzing benchmark programs in Haskell. It contains an optional set of benchmarks from various sources including several programs from the Hackage repository.

At the time of writing version 2.3.0 is about to be released, with the following new features (among others):

A lot of work remains in order for Agda to become a full-fledged programming language (good libraries, mature compilers, documentation, etc.), but already in its current state it can provide lots of fun as a platform for experiments in dependently typed programming.

Agda is a dependently typed functional programming language (developed using Haskell). A central feature of Agda is inductive families, i.e. GADTs which can be indexed by values and not just types. The language also supports coinductive types, parameterized modules, and mixfix operators, and comes with an interactive interface—the type checker can assist you in the development of your code.

Recent features include bounded size quantification and destructor patterns for a more general handling of coinduction. In the long run, I plan to evolve MiniAgda into a core language for Agda with termination certificates.

MiniAgda is a tiny dependently-typed programming language in the style of Agda (→ 4.1 ). It serves as a laboratory to test potential additions to the language and type system of Agda. MiniAgda’s termination checker is a fusion of sized types and size-change termination and supports coinduction. Equality incorporates eta-expansion at record and singleton types. Function arguments can be declared as static; such arguments are discarded during equality checking and compilation.

Over the last six months we continued working towards mechanising the metatheory of the DDC core language in Coq. We’ve finished Progress and Preservation for System-F2 with mutable algebraic data, and are now looking into proving contextual equivalence of rewrites in the presence of effects. Based on this experience, we’ve also started on an interpreter for a cleaned up version of the DDC core language. We’ve taken the advice of previous paper reviewers and removed dependent kinds, moving witness expressions down to level 0 next to value expressions. In the resulting language, types classify both witness and value expressions, and kinds classify types. We’re also removing more-than constraints on effect and closure variables, along with dangerous type variables (which never really worked). Overall, it is being pruned back to the parts we understand properly, and the removal of dependent kinds will make mechanising the metatheory easier. Writing an interpreter for the core language also gets us a parser for it, which we will need for performing cross module inlining in the compiler proper.

Our compiler (DDC) is still in the “research prototype” stage, meaning that it will compile programs if you are nice to it, but expect compiler panics and missing features. You will get panics due to ungraceful handling of errors in the source code, but valid programs should compile ok. The test suite includes a few thousand-line graphical demos, like a ray-tracer and an n-body collision simulation, so it is definitely hackable.

Disciple is a dialect of Haskell that uses strict evaluation as the default and supports destructive update of arbitrary data. Many Haskell programs are also Disciple programs, or will run with minor changes. In addition, Disciple includes region, effect, and closure typing, and this extra information provides a handle on the operational behaviour of code that is not available in other languages. Our target applications are the ones that you always find yourself writing C programs for, because existing functional languages are too slow, use too much memory, or do not let you update the data that you need to.

The Eden skeleton library is under constant development. Currently it contains various skeletons for parallel maps, workpools, divide-and-conquer, topologies, and many more. Take a look at the Eden pages.

The Eden trace viewer tool EdenTV provides a visualisation of Eden program runs on various levels. Activity profiles are produced for processing elements (machines), Eden processes, and threads. In addition, message transfer can be shown between processes and machines. EdenTV has been written in Haskell and is freely available on the Eden web pages.

The current release of the Eden compiler based on GHC 6.12.3 is available on our web pages, see http://www.mathematik.uni-marburg.de/~eden . A release based on GHC 7 is in preparation. It will include a shared memory mode which does not depend on a middleware like MPI but which nevertheless uses multiple independent heaps (in contrast to GHC’s threaded runtime system) connected by Eden’s parallel runtime system. An Eden variant of GHC’s head version is available in a repository on github, see https://github.com/jberthold/ghc .

Eden’s main constructs are process abstractions and process instantiations. The Eden logo consists of four λ’s turned in such a way that they form the Eden instantiation operator ( # ). Higher-level coordination is achieved by defining skeletons, ranging from a simple parallel map to sophisticated master-worker schemes. They have been used to parallelize a set of non-trivial programs.

Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronization, and process handling.

The latest GUM implementation of GpH is built on GHC 6.12, using either PVM or MPI as communications library. It implements a virtual shared memory abstraction over a collection of physically distributed machines. At the moment our main hardware platforms are Intel-based Beowulf clusters of multicores. We plan to connect several of these clusters into a wide-area, hierarchical, heterogeneous parallel architecture.

In recent work we have developed and released a GHCi-based computer algebra shell (CASH) that gives direct access to computer algebra functionality provided by an SCSCP server, and enables easy parallelism on the Haskell side.

As part of the SCIEnce EU FP6 I3 project (026133) (April 2006 – December 2011) and the HPC-GAP project (October 2009 – September 2013) we use Eden and GpH as middleware to provide access to computational Grids from Computer Algebra (CA) systems, including GAP, Maple, MuPAD, and KANT. We have developed and released SymGrid-Par, a Haskell-side infrastructure for orchestrating heterogeneous computations across high-performance computational Grids. Based on this infrastructure we have developed a range of domain-specific parallel skeletons for parallelising representative symbolic computation applications. We are currently extending SymGrid-Par with support for fault-tolerance, targeting massively parallel high-performance architectures.

A distributed-memory, GHC-based implementation of the parallel Haskell extension GpH and of a fundamentally revised version of the evaluation strategies abstraction is available in a prototype version. In current research an extended set of primitives, supporting hierarchical architectures of parallel machines, and extensions of the runtime-system for supporting these architectures are being developed.

Finally, we have completed a pure Haskell implementation of the "Modified Additive Lagged Fibonacci" random number generator. This generator is attractive for use in Monte Carlo simulations because it is splittable and has good statistical quality, while providing high performance. The LFG implementation will be released on Hackage when it has undergone more extensive quality testing.
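The recurrence behind an additive lagged Fibonacci generator, x_n = (x_{n-j} + x_{n-k}) mod m, is easy to express as a lazy stream in Haskell. The following sketch uses illustrative small lags (5, 17); the lags and the "modified" initialisation used by the actual release differ:

```haskell
-- Additive lagged Fibonacci generator, as a lazy stream.
-- The seed list must contain the first k values.
lags :: (Int, Int)
lags = (5, 17)  -- hypothetical small lags, for illustration only

lfg :: [Integer] -> [Integer]
lfg seed = stream
  where
    (j, k)   = lags
    m        = 2 ^ (32 :: Int) :: Integer
    -- x_n = (x_{n-j} + x_{n-k}) mod m, for n >= k
    stream   = seed ++ zipWith step (drop (k - j) stream) stream
    step a b = (a + b) `mod` m

main :: IO ()
main = print (take 3 (drop 17 (lfg [1 .. 17])))  -- [14,16,18]
```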

The project has also inspired some interesting write-ups from project members working on the side. Bernie Pope and Dmitry Astapov wrote an article for the recent Monad.Reader special edition on parallelism and concurrency; in it they discuss the Haskell MPI binding developed within the context of this project. Kazu Yamamoto wrote an article discussing the latest version of the high-performance web server Mighttpd, particularly how it takes advantage of the new IO manager in GHC 7.

Meanwhile, there has been a lot of work on documenting and presenting the work done in the project to the world. The team at Los Alamos National Laboratory have presented their work on high-performance Monte Carlo simulations using parallel Haskell to their colleagues, now published as report LA-UR 11-0341. Duncan Coutts presented our recent work on ThreadScope at the Haskell Implementors Workshop in Tokyo (23 Sep). He talked about the new spark visualisation feature, which shows a graphical representation of spark creation and conversion statistics.

Since the last report, the Parallel GHC project has been joined by two new industrial partners: the research and development group of Spanish telecoms company Telefonica, and VETT, a UK-based payment processing company. We are excited to be working with the teams at Telefonica I+D and VETT. We hope to make good use of Cloud Haskell with these partners.

Microsoft Research is funding a 2-year project to promote the real-world use of parallel Haskell. The project started in November 2010, with four industrial partners, and consulting and engineering support from Well-Typed (→ 9.1 ). Each organisation is working on its own particular project making use of parallel Haskell. The overall goal is to demonstrate successful serious use of parallel Haskell, and along the way to apply engineering effort to any problems with the tools that the organisations might run into.

The Web Application Interface (WAI) is an interface between Haskell web applications and Haskell web servers. By targeting the WAI, a web framework or web application gets access to multiple deployment platforms. Platforms in use include CGI, the Warp web server, and desktop webkit.

WAI is also a platform for sharing code between web applications and web frameworks through WAI middleware and WAI applications. WAI middleware can inspect and transform a request, for example by automatically gzipping a response or logging a request. WAI applications can send a response themselves. For example, wai-app-static is used by Yesod to serve static files. By targeting WAI, every web framework can share WAI code instead of wasting effort re-implementing the same functionality.

WAI is most often used in conjunction with the Yesod web framework (→ 5.2.6 ), but it is designed in a framework-independent way. There are some plain WAI users such as Hoogle (→ 6.2.2 ). There are also some new web frameworks built on WAI that take a completely different approach to web development, such as webwire (FRP) and dingo (GUI).

The WAI standard has proven itself for a variety of users, and there are no major plans for changes or improvements. Future ideas include allowing middleware to pass along arbitrary data.
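The middleware idea can be pictured with a small, self-contained model (illustrative types only, not the actual wai package API, which uses richer Request/Response types and runs in IO):

```haskell
import Data.Char (toUpper)

-- toy stand-ins for WAI's Request and Response types
type Request     = String
type Response    = String
type Application = Request -> Response
type Middleware  = Application -> Application

-- an application, and a middleware that transforms its response,
-- in the way a gzip or logging middleware wraps a real application
hello :: Application
hello name = "hello, " ++ name

shout :: Middleware
shout app req = map toUpper (app req)

main :: IO ()
main = putStrLn (shout hello "world")  -- HELLO, WORLD
```

Because Middleware is just Application -> Application, middleware composes with ordinary function composition.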

Warp is a high performance, easy to deploy HTTP server backend for WAI (→ 5.2.1 ). Since the last HCAR, Warp has become more battle-tested and can be considered a stable, production-ready web server. Due to the combined use of ByteStrings, Blaze-Builder, Enumerators, and GHC’s improved I/O manager, WAI+Warp has consistently proven to be Haskell’s most performant web deployment option. Its performance is better than dynamic-language alternatives and appears to be in the same league as industry standards such as Nginx (benchmarks forthcoming). Warp currently serves Hoogle (→ 6.2.2 ), hums, and several production Yesod web sites (→ 5.2.6 ).

The Holumbus framework consists of a set of modules and tools for creating fast, flexible, and highly customizable search engines with Haskell. The framework consists of two main parts. The first part is the indexer, which extracts the data of a given type of documents, e.g., the documents of a web site, and stores it in an appropriate index. The second part is the search engine for querying the index.

The framework supports distributed computations for building and searching indexes. This is done with a MapReduce-like framework. The MapReduce framework is independent of the index and search components, so it can be used to develop other distributed systems with Haskell.

The framework is now separated into four packages, all available on Hackage.

The search engine package includes the indexer and search modules; the MapReduce package bundles the distributed MapReduce system. The latter is based on two further packages, which may be useful on their own: the distributed library with a message-passing communication layer, and a distributed storage system.

There are two running projects. The first, a master’s thesis by Sebastian Schröder, deals with the development of a framework for news systems. The functionality will be similar to that of Google News, but the target is to build news systems for specialized topics. We expect to finish this project at the end of 2011.

In the second project, a specialized search engine for our university web site has been built. The new aspect of this application is a specialized free-text search for appointments, deadlines, announcements, meetings, and other dates. A prototype of this search engine is running. We expect to finish this work in November 2011 and then to use this engine as the official search engine of our university web site.

The Holumbus web page ( http://holumbus.fh-wedel.de/ ) includes downloads, a Git web interface, current status, requirements, and documentation. Timo Hübel’s master thesis describing the Holumbus index structure and the search engine is available at http://holumbus.fh-wedel.de/branches/develop/doc/thesis-searching.pdf . Sebastian Gauck’s thesis dealing with the crawler component is available at http://holumbus.fh-wedel.de/src/doc/thesis-indexing.pdf . The thesis of Stefan Schmidt describing the Holumbus MapReduce is available via http://holumbus.fh-wedel.de/src/doc/thesis-mapreduce.pdf .
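The MapReduce scheme itself is small; a sequential, in-memory model of it (an illustration only, not the Holumbus API) looks like this:

```haskell
import qualified Data.Map as M

-- map each input to key/value pairs, group the pairs by key,
-- then reduce each group to a single result
mapReduce :: Ord k
          => (a -> [(k, v)])   -- the map phase
          -> (k -> [v] -> r)   -- the reduce phase
          -> [a] -> [(k, r)]
mapReduce mp rd xs =
    M.toList (M.mapWithKey rd groups)
  where
    groups = M.fromListWith (++) [ (k, [v]) | x <- xs, (k, v) <- mp x ]

-- the classic word-count example
main :: IO ()
main = print (mapReduce (\s -> [ (w, 1 :: Int) | w <- words s ])
                        (\_ vs -> sum vs)
                        ["to be or not to be"])
```

The distributed version keeps the same two-phase shape, but runs the map and reduce workers on different nodes.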

The Happstack project is focused on leveraging the unique characteristics of Haskell to create a highly scalable, robust, and expressive web framework.

While Happstack is over 7 years old, it is still undergoing active development and new innovation. It is used in a number of commercial projects as well as the new Hackage 2 server.

At the core of Happstack is the happstack-server package, which provides a fast, powerful, and easy-to-use HTTP server with built-in support for templating (via blaze-html), request routing, form decoding, cookies, file uploads, etc. happstack-server is all you need to create a simple website.

Happstack can also be extended using a wide range of libraries which add support for alternative HTML templating systems, JavaScript templating and generation, type-safe URLs, type-safe form generation and validation, RAM-cloud database persistence, OpenID authentication, and more.

Mighttpd (called mighty) version 2 is a simple but practical web server in Haskell. It is now running on Mew.org, providing basic web features and CGI (mailman and contents search).

Mighttpd version 1 was implemented with two libraries, c10k and webserver. Since GHC 6 uses select(), more than 1,024 connections cannot be handled at the same time; the c10k library gets over this barrier with the pre-fork technique. The webserver library provides HTTP transfer and file/CGI handling.

Mighttpd 2 no longer uses the c10k library, because GHC 7 uses epoll()/kqueue(). The file/CGI-handling part of the webserver library has been re-implemented as a web application on the wai library (→ 5.2.1 ). For HTTP transfer, Mighttpd 2 links against the warp library (→ 5.2.2 ), which can send a file in a zero-copy manner thanks to sendfile().

The performance of Mighttpd 2 is now comparable to that of highly tuned web servers written in C. Please read “The Monad.Reader” Issue 19 for more information.

Performance scalability comes from the amazing GHC compiler and runtime. GHC provides fast code and built-in evented asynchronous IO. The standard Warp web server utilizes this to serve more simultaneous requests than any other web application server we know of.

But Yesod is even more focused on scalable development. A developer should be able to continue to productively write code as their application grows and more team members join, including designers. The key to achieving this is applying Haskell’s type-safety to an otherwise traditional MVC REST web framework.

Of course, type-safety guards against typos or the wrong type in a function. But Yesod cranks this up a notch to guarantee that common web application errors won’t occur.

When type safety conflicts with programmer productivity, Yesod is not afraid to use Haskell’s most advanced features, Template Haskell and quasi-quoting, to provide easier development for its users. In particular, these are used for declarative routing, declarative schemas, and compile-time templates.

MVC stands for model-view-controller. The preferred library for models is Persistent (→ 7.4.2 ). Views are handled by the Shakespeare family of compile-time template languages. This includes Hamlet, which takes the tedium out of HTML. Controllers are invoked through declarative routing. Their return type shows which response types are allowed for the request.

Yesod is broken up into many smaller projects and uses WAI (→ 5.2.1 ) to communicate with the server. This means that many of the powerful features of Yesod can be used in different web development stacks. Recently a continuation-based FRP web framework called webwire was released. It uses WAI and many other libraries that have been produced under Yesod.

Yesod is currently at version 0.9; the last HCAR entry was for the 0.8 version, and a number of features have been added since then.

We are excited to be near a 1.0 release. 1.0 to us means API stability and a web framework that gives developers all the tools they need for productive web development. But we already have a productive framework in use by the Haskell community, including commercial users.

To see an example site with source code available, you can view the Haskellers (→ 1.2 ) source code ( https://github.com/snoyberg/haskellers ).

The Yesod site ( http://www.yesodweb.com/ ) is a great place for information. It has code examples, screencasts, the Yesod blog and, most importantly, a book on Yesod.

The Snap Framework is a web application framework built from the ground up for speed, reliability, and ease of use. The project’s goal is to be a cohesive high-level platform for web development that leverages the power and expressiveness of Haskell to make building websites quick and easy.

The Snap Framework has seen two major releases (0.5 and 0.6) since the last HCAR with a development team that continues to grow. Snap 0.6 introduces composable web application components called snaplets, which allow you to build self-contained pieces of your web site in a structured way. The snaplet API simplifies distribution, installation, and configuration, allowing you to easily add new functionality to your application in a safe, clean way with very little boilerplate. Snap 0.6 also ships with built-in snaplets for templating, sessions, and authentication.

In September, Gregory Collins gave a CUFP tutorial on building web applications with Snap. The tutorial demonstrated how to use long polling JSON calls to implement a simple web-based chat room. Slides and source code for his presentation are in the links below.

Since the 0.6 release an independently written snaplet for accessing HDBC databases has already been published. We expect to see more of this kind of third-party development and hope to eventually have a vibrant ecosystem of snaplets providing a deep body of pluggable functionality.

ghci> url (Blog 2011 9 19)

"/blog/2011-9-19"
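The idea behind such type-safe URLs can be sketched in a few lines of plain Haskell (a hypothetical Sitemap type for illustration; the real libraries also derive the inverse parsing direction, so that routes and renderers cannot drift apart):

```haskell
-- routes as an ordinary data type: an invalid route cannot be constructed
data Sitemap = Home | Blog Int Int Int

-- rendering a route to a URL string
url :: Sitemap -> String
url Home         = "/"
url (Blog y m d) = "/blog/" ++ show y ++ "-" ++ show m ++ "-" ++ show d

main :: IO ()
main = putStrLn (url (Blog 2011 9 19))  -- /blog/2011-9-19
```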



Recent developments

I have ported ivy-web from the wai backend to the snap-server backend, and also wrote a sample project corresponding to the Snap starter project. When everything is fine and I find the time, I will upload the code and bump the version to 0.2.

Further reading

rss2irc is an IRC bot that polls a single RSS or Atom feed and announces new items to an IRC channel, with options for customizing output and behavior. It aims to be an easy to use, dependable bot that does its job and creates no problems. rss2irc was published in 2008 by Don Stewart. Simon Michael took over maintainership in 2009, with the goal of making a robust low-maintenance bot to stimulate development in various free/open-source software communities. It is currently used for several full-time bots including: hackagebot — announces new hackage releases in #haskell

hledgerbot — announces hledger commits in #ledger

zwikicommitbot — announces Zwiki commits in #zwiki

squeaksobot — announces Squeak and Smalltalk-related Stack Overflow questions in #squeak

squeakquorabot — announces Squeak/Smalltalk-related Quora questions in #squeak

etoystrackerbot — announces new Etoys bugs in #etoys

etoysupdatesbot — announces Etoys commits in #etoys

planetzopebot — announces new planet.zope.org posts in #zope

The project is available under the BSD license from its home page at http://hackage.haskell.org/package/rss2irc. Since the last report there has been a great deal of cleanup and enhancement, but no new release on Hackage yet due to an XML-related memory leak.

Further reading http://hackage.haskell.org/package/rss2irc

5.3 Haskell and Games

FunGEn (Functional Game Engine) is a BSD-licensed cross-platform 2D game engine implemented in and for Haskell, using OpenGL and GLUT. It was created in 2002 by Andre Furtado, updated in 2008 by Simon Michael and Miloslav Raus, and revived again in 2011, with a GHC 6.12-tested 0.3 release on Hackage, preliminary haddockification and a new home repo. FunGEn remains the quickest path to building cross-platform graphical games in Haskell, due to its convenient game framework and widely-available dependencies. It comes with several working examples that are quite easy to read and build (pong, worms). In the last six months there has been little activity and a new maintainer would be welcome. FunGEn-related discussions most often appear in the #haskell-game channel on irc.freenode.net. Further reading http://darcsden.com/simon/fungen

5.3.2 Nikki and the Robots
Report by: Sönke Hahn
Participants: Joyride Laboratories GbR
Status: alpha, active

Nikki and the Robots is a 2D platformer written in Haskell and produced by Joyride Laboratories. Nikki, the protagonist, walks and jumps around the levels wearing a cute ninja/cat costume. Nikki refrains from using any tools or weapons, with one exception: the Robots. These come in various types with different abilities and can be used by Nikki to solve puzzles, overcome obstacles, and complete the level tasks. The game features an integrated level editor. We made our first binary release of Nikki and the Robots in April 2011.

Publishing

We are releasing the game and the level editor under an open source license (LGPL). The included graphics are published under a permissive Creative Commons license (cc-by-sa). We are also planning to create a server that will allow players to upload the levels they created and download levels from other players. We hope that a community of coders, level creators, and players will emerge around the game. Simultaneously, we are working on episodes that we plan to sell via the game. These will include new graphics, more robots, a story line, other characters, and other surprises. (Just to clarify: The licensing is very permissive. It allows others to create their own episodes and distribute them freely or sell them. This would be very welcome. If anybody is interested in this, we propose to join forces and sell all our episodes through one system.)

Technologies Used

Qt for user input and rendering.

OpenGL as an efficient rendering backend for Qt. Everything will remain 2D, though - we promise!

Hipmunk, the Haskell bindings to the chipmunk physics engine.

Getting Involved

The project is still in alpha stage, so there are some features that are not yet implemented. For some, we have a clear vision on how to implement them; for others, we do not. If you want to get involved, check out our darcs repo, our launchpad site, and do not hesitate to contact us.

Further reading http://joyridelabs.de

http://joyridelabs.de/game/code/

http://joyridelabs.de/game/download/

5.4 Haskell and Compiler Writing

UUAG is the Utrecht University Attribute Grammar system. It is a preprocessor for Haskell that makes it easy to write catamorphisms, i.e., functions that do to any data type what foldr does to lists. Tree walks are defined using the intuitive concepts of inherited and synthesized attributes, while keeping the full expressive power of Haskell. The generated tree walks are efficient in both space and time.

An AG program is a collection of rules, which are pure Haskell functions between attributes. Idiomatic tree computations are neatly expressed in terms of copy, default, and collection rules. Attributes themselves can masquerade as subtrees and be analyzed accordingly (higher-order attributes). The order in which to visit the tree is derived automatically from the attribute computations. The tree walk is a single traversal from the perspective of the programmer. Nonterminals (data types), productions (data constructors), attributes, and rules for attributes can be specified separately, and are woven and ordered automatically. These aspect-oriented programming features make AGs convenient to use in large projects.

The system is in use by a variety of large and small projects, such as the Utrecht Haskell Compiler UHC (→3.3), the editor Proxima for structured documents (http://www.haskell.org/communities/05-2010/html/report.html#sect6.4.5), the Helium compiler (http://www.haskell.org/communities/05-2009/html/report.html#sect2.3), the Generic Haskell compiler, UUAG itself, and many master student projects. The current version is 0.9.39 (October 2011), is extensively tested, and is available on Hackage. Recently, we improved the Cabal support and ensured compatibility with GHC 7.

We are working on the following enhancements of the UUAG system:

First-class AGs — We provide a translation from UUAG to AspectAG (→5.4.2). AspectAG is a library of strongly typed Attribute Grammars implemented using type-level programming. With this extension, we can write the main part of an AG conveniently with UUAG, and use AspectAG for (dynamic) extensions. Our goal is to have an extensible version of the UHC.

Ordered evaluation — We have implemented a variant of Kennedy and Warren (1976) for ordered AGs. For any absolutely non-circular AG, this algorithm finds a static evaluation order, which solves some of the problems we had with an earlier approach for ordered AGs. A static evaluation order allows the generated code to be strict, which is important to reduce memory usage when dealing with large ASTs. The generated code is purely functional, does not require type annotations for local attributes, and the Haskell compiler proves that the static evaluation order is correct.

Multi-core evaluation — Our algorithm for ordered AGs identifies statically which subcomputations of children of a production are independent and suitable for parallel evaluation. Together with the strict evaluation mentioned above, which is important when evaluating in parallel, the generated code can automatically exploit multi-core CPUs. We are currently evaluating the effectiveness of this approach.

Stepwise evaluation — In the recent past we worked on a stepwise evaluation scheme for AGs. Using this scheme, the evaluation of a node may yield user-defined progress reports, and the evaluation up to the next report is considered to be an evaluation step. By asking nodes to yield reports, we can encode the parallel exploration of trees and encode breadth-first search strategies.

We are currently also running a Ph.D. project that investigates incremental evaluation.

Further reading http://www.cs.uu.nl/wiki/bin/view/HUT/AttributeGrammarSystem

http://hackage.haskell.org/package/uuagc
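In plain Haskell terms, an AG describes a tree walk in which inherited attributes flow down as function arguments and synthesized attributes flow up as results; UUAG generates such catamorphisms from the rule declarations. A hand-written sketch of the same pattern (not UUAG output):

```haskell
data Tree = Leaf Int | Node Tree Tree

-- inherited attribute: the current depth;
-- synthesized attributes: the sum of the leaves and the maximum depth
walk :: Int -> Tree -> (Int, Int)
walk depth (Leaf v)   = (v, depth)
walk depth (Node l r) =
  let (sl, dl) = walk (depth + 1) l
      (sr, dr) = walk (depth + 1) r
  in (sl + sr, max dl dr)

main :: IO ()
main = print (walk 0 (Node (Leaf 1) (Node (Leaf 2) (Leaf 3))))  -- (6,2)
```

In an AG, each attribute is declared once and the threading of arguments and results is generated automatically.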

AspectAG is a library of strongly typed Attribute Grammars implemented using type-level programming.

Introduction

Attribute Grammars (AGs), a general-purpose formalism for describing recursive computations over data types, avoid the trade-off which arises when building software incrementally: should it be easy to add new data types and data type alternatives or to add new operations on existing data types? However, AGs are usually implemented as a pre-processor, leaving e.g. type checking to later processing phases and making interactive development, proper error reporting, and debugging difficult. Embedding AG into Haskell as a combinator library solves these problems. Previous attempts at embedding AGs as a domain-specific language were based on extensible records, thus exploiting Haskell’s type system to check the well-formedness of the AG, but fell short in compactness and the possibility to abstract over often occurring AG patterns. Other attempts used a very generic mapping for which the AG well-formedness could not be statically checked. We present a typed embedding of AG in Haskell satisfying all these requirements. The key lies in using HList-like typed heterogeneous collections (extensible polymorphic records) and expressing AG well-formedness conditions as type-level predicates (i.e., type-class constraints). By further type-level programming we can also express common programming patterns, corresponding to the typical use cases of monads such as Reader, Writer, and State. The paper presents a realistic example of type-class-based type-level programming in Haskell. We have included support for local and higher-order attributes. Furthermore, a translation from UUAG to AspectAG has been added to UUAGC as an experimental feature.

Current Status

We have recently added a combinator agMacro to provide support for “attribute grammar macros”: a mechanism that makes it easy to define attribute computations in terms of already existing attribute computations.

Background

The approach taken in AspectAG was proposed by Marcos Viera, Doaitse Swierstra, and Wouter Swierstra in the ICFP 2009 paper “Attribute Grammars Fly First-Class: How to do aspect oriented programming in Haskell”. The attribute grammar macros combinator is described in a technical report: UU-CS-2011-028.

Further reading http://www.cs.uu.nl/wiki/bin/view/Center/AspectAG

5.4.3 LQPL — A Quantum Programming Language Compiler and Emulator
Report by: Brett G. Giles
Participants: Dr. J.R.B. Cockett
Status: v 0.8.4 experimental released

See: http://www.haskell.org/communities/11-2010/html/report.html#sect5.4.4.

6 Development Tools

6.1 Environments

6.1.1 EclipseFP
Report by: JP Moresmau
Participants: B. Scott Michel, Alejandro Serrano, building on code from Thiago Arrais, Leif Frenzel, Thomas ten Cate, and others
Status: stable, maintained

EclipseFP is a set of Eclipse plugins for working on Haskell code projects. It features Cabal integration (.cabal file editor, uses Cabal settings for compilation) and GHC integration. Compilation is done via the GHC API, and syntax coloring uses the GHC lexer. Other standard Eclipse features like code outline, folding, and quick fixes for common errors are also provided. EclipseFP also allows launching GHCi sessions on any module, including extensive debugging facilities. It uses Scion to bridge between the Java code for Eclipse and the Haskell APIs. The source code is fully open source (Eclipse License) and anyone can contribute. The current version is 2.1.0, released in September 2011 and supporting GHC 6.12 and 7.0; more versions with additional features are planned. Feedback on what is needed is welcome! The website has information on downloading binary releases and getting a copy of the source code. Support and bug tracking is handled through the SourceForge forums.

Further reading http://eclipsefp.sourceforge.net/

6.1.2 ghc-mod — Happy Haskell Programming on Emacs
Report by: Kazu Yamamoto
Status: open source, actively developed

ghc-mod is an enhancement of the Haskell mode on Emacs. It provides the following features:

Completion — You can complete the name of a keyword, module, class, function, type, language extension, etc.

Code template — You can insert a code template according to the position of the cursor. For instance, “module Foo where” is inserted at the beginning of a buffer.

Syntax check — Code lines with error messages are automatically highlighted thanks to flymake. You can display the error message of the current line in another window. hlint (→6.3.2) can be used instead of GHC to check Haskell syntax.

Document browsing — You can browse the module documentation of the current line either locally or on Hackage.

Function type — You can display the type/information of the function under the cursor. (new)

ghc-mod consists of code in Emacs Lisp and a sub-command in Haskell. The Emacs code executes the sub-command to obtain information about your Haskell environment. The sub-command makes use of the GHC API for that purpose. ghc-mod now supports “hs-source-dirs” in a cabal file and GHC 7.2.

Further reading http://www.mew.org/~kazu/proj/ghc-mod/en/

6.1.3 Leksah — The Haskell IDE in Haskell
Report by: Jürgen Nicklisch-Franken
Participants: Hamish Mackenzie

Leksah is a Haskell IDE written in Haskell. It is still beta quality, but we hope we can publish the 1.0 release this year. The project has its focus on providing a practical tool for Haskell development. Leksah has already proved its usefulness in industrial projects. We have had positive feedback and are pleased to see that a large number of people are downloading Leksah; we hope you are finding it useful. Leksah is at a critical point in its development, as it is difficult to bring a project of this size to success, considering that we are just two developers working on it in our rare spare time. If you can spare some time to work on part of the project, please get in touch by mailing the Leksah group or logging onto IRC #leksah. If there is something you do not like about Leksah, let us know and we can probably show you where to get started fixing it. We believe that Leksah can be an important contribution for Haskell, to make its way from an academic language to a valuable tool in industry.

Further reading http://leksah.org/

6.1.4 HEAT: The Haskell Educational Advancement Tool
Report by: Olaf Chitil
Status: active

See: http://www.haskell.org/communities/11-2010/html/report.html#sect6.1.4.

6.1.5 HaRe — The Haskell Refactorer
Report by: Simon Thompson
Participants: Huiqing Li, Chris Brown, Claus Reinke

Refactorings are source-to-source program transformations which change program structure and organization, but not program functionality. Documented in catalogs and supported by tools, refactoring provides the means to adapt and improve the design of existing code, and has thus enabled the trend towards modern agile software development processes.

Our project, Refactoring Functional Programs, has as its major goal to build a tool to support refactorings in Haskell. The HaRe tool is now in its sixth major release. HaRe supports full Haskell 98, and is integrated with (X)Emacs and Vim. All the refactorings that HaRe supports, including renaming, scope change, generalization and a number of others, are module-aware, so that a change will be reflected in all the modules in a project, rather than just in the module where the change is initiated. The system also contains a set of data-oriented refactorings which together transform a concrete data type and associated uses of pattern matching into an abstract type and calls to assorted functions. The latest snapshots support the hierarchical modules extension, but only small parts of the hierarchical libraries, unfortunately.

In order to allow users to extend HaRe themselves, HaRe includes an API for users to define their own program transformations, together with Haddock documentation. Please let us know if you are using the API.

Snapshots of HaRe are available from our webpage, as are related presentations and publications from the group (including LDTA’05, TFP’05, SCAM’06, PEPM’08, PEPM’10, TFP’10, Huiqing’s PhD thesis and Chris’s PhD thesis). The final report for the project appears there, too.

Recent developments

HaRe 0.6, which is compatible with GHC-6.12.1, has been released; HaRe 0.6 is available on Hackage, and also downloadable from our project webpage.

HaRe 0.6 comes with a number of new refactorings, including adding and removing fields and constructors to data-type definitions, folding and unfolding against as-patterns, merging and splitting function definitions, converting between let and where constructs, introducing pattern matching and generative folding.
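As a flavour of what such a refactoring does to the source, the conversion between let and where rewrites code along these lines (a hand-written illustration, not actual HaRe output):

```haskell
-- before the refactoring: a let binding
circumference :: Double -> Double
circumference r = let d = 2 * r in pi * d

-- after the let-to-where conversion: the same definition, same behavior
circumference' :: Double -> Double
circumference' r = pi * d
  where
    d = 2 * r

main :: IO ()
main = print (circumference 1 == circumference' 1)  -- True
```

The point of tool support is that such transformations are applied mechanically, across all modules of a project, without changing program behavior.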

Support for automatic detection and semi-automatic elimination of duplicated code in Haskell programs is also available from HaRe 0.6.

Support for a number of new refactorings for parallel Haskell have recently been added to HaRe. These include support to introduce simple divide and conquer parallelism, using the new Strategies module. The refactorings are designed to issue warnings to the user when ill-defined evaluation degrees are set, together with support for adding a threshold value. Further reading http://www.cs.kent.ac.uk/projects/refactor-fp/

6.2 Documentation

Haddock is a widely used documentation-generation tool for Haskell library code. Haddock generates documentation by parsing and typechecking Haskell source code directly and including documentation supplied by the programmer in the form of specially formatted comments in the source code itself. Haddock has direct support in Cabal (→6.8.1), and is used to generate the documentation for the hierarchical libraries that come with GHC, Hugs, and nhc98 (http://www.haskell.org/ghc/docs/latest/html/libraries) as well as the documentation on Hackage.

The latest release is version 2.9.4, released October 3, 2011.

Recent changes:

Support for GHC 7.2 and Alex 3.x

New --qual flag for qualification of names

Print doc coverage information to stdout

Speed up generation of index

Various bug fixes

Future plans

Although Haddock understands many GHC language extensions, we would like it to understand all of them. Currently there are some constructs you cannot comment, like GADTs and associated type synonyms.

Error messages are an area with room for improvement. We would like Haddock to include accurate line numbers in markup syntax errors.

On the HTML rendering side we want to make more use of Javascript in order to make the viewing experience better. The frames-mode could be improved this way, for example.

Finally, the long-term plan is to split Haddock into one program that creates data from sources, and separate backend programs that use that data via the Haddock API. This will scale better, as it does not require adding a new backend to Haddock for every tool that needs its own format. Further reading Haddock’s homepage: http://www.haskell.org/haddock/

Haddock’s developer Wiki and Trac: http://trac.haskell.org/haddock

Haddock’s mailing list: haddock@projects.haskell.org

6.2.2 Hoogle
Report by: Neil Mitchell
Status: stable

Hoogle is an online Haskell API search engine. It searches the functions in the various libraries, both by name and by type signature. When searching by name, the search just finds functions which contain that name as a substring. However, when searching by types it attempts to find any functions that might be appropriate, including argument reordering and missing arguments. The tool is written in Haskell, and the source code is available online. Hoogle is available as a web interface, a command line tool, and a lambdabot plugin. Hoogle has seen significant revisions in the last few months. Hoogle can now search all of Hackage (→6.8.1), and has a brand new look and feel, including instant results as you type. Work continues on improving the performance and quality of the results.

Further reading http://haskell.org/hoogle

This tool by Ralf Hinze and Andres Löh is a preprocessor that transforms literate Haskell or Agda code into LaTeX documents. The output is highly customizable by means of formatting directives that are interpreted by lhs2TeX. Other directives allow the selective inclusion of program fragments, so that multiple versions of a program and/or document can be produced from a common source. The input is parsed using a liberal parser that can interpret many languages with a Haskell-like syntax. The program is stable and can take on large documents. The current version is 1.17, so there has not been a new release since the last report. Development repository and bug tracker are on GitHub. There are still plans for a rewrite of lhs2TeX with the goal of cleaning up the internals and making the functionality of lhs2TeX available as a library.

Further reading

http://www.andres-loeh.de/lhs2tex

https://github.com/kosmikus/lhs2tex

6.3 Testing and Analysis

shelltestrunner was first released in 2009, inspired by the test suite in John Wiegley’s ledger project. It is a command-line tool for doing repeatable functional testing of command-line programs or shell commands. It reads simple declarative tests specifying a command, some input, and the expected output, error output and exit status. Tests can be run selectively, in parallel, with a timeout, in color, and/or with differences highlighted. In the last six months, shelltestrunner has had three releases (1.0, 1.1, 1.2) and acquired a home page. Projects using it include hledger, yesod, berp, and eddie. shelltestrunner is free software released under GPLv3+ from Hackage or http://joyful.com/shelltestrunner.

Further reading

http://joyful.com/repos/shelltestrunner

6.3.2 HLint

Report by: Neil Mitchell
Status: stable

HLint is a tool that reads Haskell code and suggests changes to make it simpler. For example, if you call maybe foo id it will suggest using fromMaybe foo instead. HLint is compatible with almost all Haskell extensions, and can be easily extended with additional hints. There have been numerous feature improvements since the last HCAR, including features to detect duplicated code within a module. HLint can be tried online within hpaste.org.

Further reading

http://community.haskell.org/~ndm/hlint/
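The maybe foo id example above can be checked by hand; the two definitions below (the names verbose and simpler are invented for illustration) are interchangeable, which is exactly why HLint suggests the shorter one:

```haskell
import Data.Maybe (fromMaybe)

-- The pattern HLint flags: eliminating a Maybe with the identity
-- function on the Just case.
verbose :: Int -> Maybe Int -> Int
verbose def = maybe def id

-- The simplification it suggests; the two agree on all inputs.
simpler :: Int -> Maybe Int -> Int
simpler def = fromMaybe def

main :: IO ()
main = print [verbose 0 (Just 3), simpler 0 (Just 3), verbose 7 Nothing, simpler 7 Nothing]
-- prints [3,3,7,7]
```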

This project was born during the 2009 Google Summer of Code under the name “Improving space profiling experience”. The name hp2any covers a set of tools and libraries to deal with heap profiles of Haskell programs. At the present moment, the project consists of three packages:

hp2any-core: a library offering functions to read heap profiles during and after run, and to perform queries on them.

hp2any-graph: an OpenGL-based live grapher that can show the memory usage of local and remote processes (the latter using a relay server included in the package), and a library exposing the graphing functionality to other applications.

hp2any-manager: a GTK application that can display graphs of several heap profiles from earlier runs.

The project also aims at replacing hp2ps by reimplementing it in Haskell and possibly adding new output formats. The manager application shall be extended to display and compare the graphs in more ways, to export them in other formats and also to support live profiling right away instead of delegating that task to hp2any-graph. Recently, the hp2any project joined forces with hp2pretty, which resulted in increased performance in the core library.

Further reading

http://www.haskell.org/haskellwiki/Hp2any

http://code.google.com/p/hp2any/

http://gitorious.org/hp2pretty

6.4 Optimization

HFusion is an experimental tool for optimizing Haskell programs. The tool performs source to source transformations by the application of a program transformation technique called fusion. The aim of fusion is to reduce memory management effort by eliminating the intermediate data structures produced in function compositions. It is based on an algebraic approach where functions are internally represented in terms of a recursive program scheme known as hylomorphism. We offer a web interface to test the technique on user-supplied recursive definitions, and HFusion is also available as a library on Hackage. The last improvement to HFusion has been to accept as input an expression containing any number of compositions, returning the expression which results from applying fusion to all of them. Compositions which cannot be handled by HFusion are left unmodified. In its current state, HFusion is able to fuse compositions of general recursive functions, including primitive recursive functions (like dropWhile or insertions in binary search trees), functions that make recursion over multiple arguments like zip, zipWith or equality predicates, mutually recursive functions, and (with some limitations) functions with accumulators like foldl. In general, HFusion is able to eliminate intermediate data structures of regular data types (sum-of-product types plus different forms of generalized trees).

Further reading

HFusion publications: http://www.fing.edu.uy/inco/proyectos/fusion

HFusion web interface: http://www.fing.edu.uy/inco/proyectos/fusion/tool

HFusion on Hackage: http://hackage.haskell.org/package/hfusion
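To see what fusion buys in the simplest case, here is a hand-worked instance of the transformation (not HFusion's actual output, just the idea): a composition that builds an intermediate list, and the single traversal it fuses into:

```haskell
-- Unfused: sum . map (* 2) first builds an intermediate list of
-- doubled elements, then consumes it.
unfused :: [Int] -> Int
unfused = sum . map (* 2)

-- Fused: one traversal, no intermediate list, same result.
fused :: [Int] -> Int
fused = foldr (\x acc -> 2 * x + acc) 0

main :: IO ()
main = print (unfused [1 .. 10], fused [1 .. 10])
-- prints (110,110)
```

HFusion derives such combined definitions automatically, for a much larger class of recursive functions, via their hylomorphism representation.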

6.4.2 Optimizing Generic Functions

Report by: José Pedro Magalhães
Participants: Johan Jeuring, Andres Löh
Status: actively developed

Datatype-generic programming increases program reliability by reducing code duplication and enhancing reusability and modularity. Several generic programming libraries for Haskell have been developed in the past few years. These libraries have been compared in detail with respect to expressiveness, extensibility, typing issues, etc., but performance comparisons have been brief, limited, and preliminary. It is widely believed that generic programs run slower than hand-written code. At Utrecht University we are looking into the performance of different generic programming libraries and how to optimize them. We have confirmed that generic programs, when compiled with the standard optimization flags of the Glasgow Haskell Compiler (GHC), are substantially slower than their hand-written counterparts. However, we have also found that advanced optimization capabilities of GHC, such as inline pragmas and rewrite rules, can be used to further optimize generic functions, often achieving the same efficiency as hand-written code. We are continuing our research in this topic and hope to provide more information in the near future.

Further reading

http://dreixel.net/research/pdf/ogie.pdf

6.5 Boilerplate Removal

Haskell’s deriving mechanism supports the automatic generation of instances for a number of functions. The Haskell 98 Report only specifies how to generate instances for the Eq, Ord, Enum, Bounded, Show, and Read classes. The description of how to generate instances is largely informal. As a consequence, the portability of instances across different compilers is not guaranteed. Additionally, the generation of instances imposes restrictions on the shape of datatypes, depending on the particular class to derive. We have developed a new approach to Haskell’s deriving mechanism, which allows users to specify how to derive arbitrary class instances using standard datatype-generic programming techniques. Generic functions, including the methods from six standard Haskell 98 derivable classes, can be specified entirely within Haskell, making them more lightweight and portable. We have implemented our deriving mechanism together with many new derivable classes in UHC (→3.3) and GHC. The implementation in GHC has a more convenient syntax; consider enumeration:

  class GEnum a where
    genum :: [a]

    default genum :: (Representable a, Enum' (Rep a)) => [a]
    genum = map to enum'

  instance (GEnum a) => GEnum (Maybe a)
  instance (GEnum a) => GEnum [a]

These instances are empty, and therefore use the (generic) default implementation. This is as convenient as writing deriving clauses, but allows defining more generic classes. This implementation relies on the new functionality of default signatures, like genum above, which are like standard default methods but allow for a different type signature.
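The effect of such generic enumeration can be mimicked by hand with a plain class (MyEnum below is an invented stand-in, written without the generic machinery, just to show what the default implementation computes):

```haskell
-- MyEnum is a hand-written analogue of GEnum: each instance spells
-- out the (possibly infinite) enumeration that the generic default
-- would derive from the datatype's structure.
class MyEnum a where
  myEnum :: [a]

instance MyEnum Bool where
  myEnum = [False, True]

-- Enumerations compose: Nothing first, then every enumerated value
-- of the underlying type wrapped in Just.
instance MyEnum a => MyEnum (Maybe a) where
  myEnum = Nothing : map Just myEnum

main :: IO ()
main = print (take 3 (myEnum :: [Maybe Bool]))
-- prints [Nothing,Just False,Just True]
```

The point of the generic approach is that such instance bodies need not be written at all: the empty instance picks up the default.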

Further reading

http://www.haskell.org/haskellwiki/Generics

6.6 Code Management

Darcs is a distributed revision control system written in Haskell. In Darcs, every copy of your source code is a full repository, which allows for full operation in a disconnected environment, and also allows anyone with read access to a Darcs repository to easily create their own branch and modify it with the full power of Darcs’ revision control. Darcs is based on an underlying theory of patches, which allows for safe reordering and merging of patches even in complex scenarios. For all its power, Darcs remains a very easy to use tool for every day use because it follows the principle of keeping simple things simple.

Our most recent release, Darcs 2.5.2, was in March 2011. The Darcs 2.5.x line provides faster repository-local operations, and faster record with long patch histories, among other bug fixes and features. The most recent version adds compatibility with Haskell Platform 2011.2.0.0. We are currently working on releasing Darcs 2.8, which will include Alexey Levan’s 2010 Google Summer of Code work on optimised darcs get (using the “optimize –http” command) and a few refinements to Adolfo Builes’ cache reliability work. The Darcs 2.8 release is planned to include a faster and more human-readable annotate command.

Meanwhile, we are happy to have been able to participate in the Google Summer of Code 2011 (as part of Haskell.org). We had two projects this year, one to develop a bidirectional bridge between Darcs and Git (and potentially other VCSs), and the other to do some new exploratory work on primitive patch types for a future Darcs 3. The bridge project will improve collaboration between Darcs and Git users, allowing each to contribute to projects hosted in the other’s VCS of choice. The primitive patches work will allow us to implement some ideas we have been discussing in the Darcs team in recent months, in particular the separation of file identifiers from file names and the separation of on-disk patch contents from their in-memory representation. Making a prototype implementation of these ideas will give us a better idea how feasible they are in practice and help us to identify the technical difficulties that may be lurking around the corner. Both projects were successful; see below for their respective wrap-ups and prototypes.

Darcs is free software licensed under the GNU GPL (version 2 or greater). Darcs is a proud member of the Software Freedom Conservancy, a US tax-exempt 501(c)(3) organization. We accept donations at http://darcs.net/donations.html.

Further reading

http://darcs.net

http://web.mornfall.net/blog/soc_reloaded:_outcomes.html

6.6.2 DarcsWatch

Report by: Joachim Breitner
Status: working

See: http://www.haskell.org/communities/05-2011/html/report.html#sect5.6.3.

http://darcsden.com is a free Darcs (→6.6.1) repository hosting service, similar to patch-tag.com or (in essence) GitHub. The darcsden software is also available (on darcsden) so that anyone can set up a similar service. darcsden is available under the BSD license and was created by Alex Suraci. Alex keeps the service running and fixes bugs, but is mostly focussed on other projects. darcsden has a clean UI and codebase and is a viable hosting option for smaller projects, despite occasional glitches. The last Hackage release was in 2010. Other committers have been submitting patches, and the darcsden software is close to becoming a just-works installable Darcs web UI for general use.

Further reading

http://darcsden.com

6.6.4 darcsum

Report by: Simon Michael
Status: occasional development; suitable for daily use

darcsum is an emacs add-on providing an efficient, pcl-cvs-like interface for the Darcs revision control system (→6.6.1). It is especially useful for reviewing and recording pending changes. Simon Michael took over maintainership in 2010, and tried to make it more robust with current Darcs. The tool remains slightly fragile, as it depends on Darcs’ exact command-line output, and needs updating when that changes. Dave Love has contributed a large number of cleanups. darcsum is available under the GPL version 2 or later from http://joyful.com/darcsum. In the last six months darcsum acquired a home page, but there has been little other activity. We are looking for a new maintainer for this useful tool.

Further reading

http://joyful.com/darcsum/

6.6.5 Improvements to Cabal’s Test Support

Report by: Thomas Tuegel
Participants: Johan Tibell (Mentor)
Status: active development

See: http://www.haskell.org/communities/11-2010/html/report.html#sect7.2.

6.6.6 cab — A Maintenance Command of Haskell Cabal Packages

Report by: Kazu Yamamoto
Status: open source, actively developed

cab is a MacPorts-like maintenance command for Haskell cabal packages. Some parts of this program are wrappers around ghc-pkg, cabal, and cabal-dev. If you are confused by the inconsistencies between ghc-pkg and cabal, or if you want a way to check all outdated packages, or a way to remove outdated packages recursively, this command helps you. cab now supports GHC 7.2.

Further reading

http://www.mew.org/~kazu/proj/cab/en/

6.6.7 Hackage-Debian

Report by: Marco Gontijo
Status: unconcluded

Hackage-Debian is a tool for creating a Debian repository with all, or almost all, of the packages in Hackage. It is largely based on the debian package available at http://hackage.haskell.org/package/debian. It should build a snapshot of the Hackage database and then track each new package added, building it on demand. It is still under development, but the first release should be announced soon. A limitation of the first version being developed is that it only builds the latest version of each library. So, if a library depends on an older version of another library, it will not be built. This is the reason why it does not build all packages, but almost all of them. Also, the first version will only deal with libraries, but there are plans to also build programs. The darcs repositories for both hackage-debian and the modified version of the debian package that it uses are available at http://marcot.eti.br/darcs/hackage-debian and http://marcot.eti.br/darcs/haskell-debian.

6.7 Interfacing to other Languages

6.7.1 HSFFIG

Report by: Dmitry Golubovsky
Status: release

See: http://www.haskell.org/communities/11-2010/html/report.html#sect6.6.1.

6.8 Deployment

Background

Cabal is the standard packaging system for Haskell software. It specifies a standard way in which Haskell libraries and applications can be packaged so that it is easy for consumers to use them, or re-package them, regardless of the Haskell implementation or installation platform.

Hackage is a distribution point for Cabal packages. It is an online archive of Cabal packages which can be used via the website and client-side software such as cabal-install. Hackage enables users to find, browse and download Cabal packages, plus view their API documentation.

cabal-install is the command line interface for the Cabal and Hackage system. It provides a command line program cabal which has sub-commands for installing and managing Haskell packages.

Recent progress

We have had two successful Google Summer of Code projects on Cabal this year. Sam Anklesaria worked on a “cabal repl” feature to launch an interactive GHCi session with all the appropriate pre-processing and context from the project’s .cabal file. Mikhail Glushenkov worked on a feature so that “cabal install” can build independent packages in parallel (not to be confused with building modules within a package in parallel). The code from both projects is available, and they are awaiting integration into the main Cabal repository, which we expect to happen over the course of the next few months.

The “cabal test” feature which was developed as a GSoC project last summer has matured significantly in the last 6 months, thanks to continuing effort from Thomas Tuegel and Johan Tibell. The basic test interface will be ready to use in the next release, and there has been some progress on the “detailed” test interface.

The IHG is currently sponsoring some work on cabal-install. The first fruit of this work is a new dependency solver for cabal-install, which is now included in the development version. The new solver can find solutions in more cases and produces more detailed error messages when it cannot find a solution. In addition, it is better about avoiding, and warning about, breaking existing installed packages. We also expect it to be a better basis for other features in future. For more details see the presentation by Andres Löh: http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011/Loeh

The last 6 months have seen significant progress on the new hackage-server implementation with help from many new volunteers, in particular Max Bolingbroke, but also several other people who helped at hackathons and subsequently. The IHG funded Well-Typed to improve package mirroring so that continuous nearly-live mirroring is now possible. We are also grateful to factis research GmbH, who have kindly donated a VM to help the hackage developers test the new server code. We expect to do live mirroring and public beta testing using this server during the next few months.

Looking forward

Users are increasingly relying on hackage and cabal-install and are increasingly frustrated by dependency problems. Solutions to the variety of problems do exist; it will however take sustained effort to implement them. The good news is that there is the realistic prospect of the new hackage-server being ready in the not too distant future, with features to help monitor and encourage package quality, and the recent work on cabal-install should reduce the frustration level somewhat.

The last 6 months have seen a good upswing in the number of volunteers spending their time on cabal and hackage, so much so that a clear bottleneck is patch review and integration bandwidth. A similar issue is that many of the long-standing bugs and feature requests require significant refactoring work, which many volunteers feel reluctant or unable to do. Assistance in these areas would be very valuable indeed.

We would like to encourage people considering contributing to join the cabal-devel mailing list so that we can increase development discussion and improve collaboration. The bug tracker is reasonably well maintained and it should be relatively clear to new contributors what is in need of attention and which tasks are considered relatively easy.

Further reading

Cabal homepage: http://www.haskell.org/cabal

Hackage package collection: http://hackage.haskell.org/

Bug tracker: http://hackage.haskell.org/trac/hackage/

6.8.2 Capri

Report by: Dmitry Golubovsky
Status: experimental

See: http://www.haskell.org/communities/11-2010/html/report.html#sect6.7.2.

7 Libraries

7.1 Processing Haskell

7.1.1 The Neon Library

Report by: Jurriaan Hage

See: http://www.haskell.org/communities/11-2010/html/report.html#sect8.1.1.

7.2 Parsing and Transforming

7.2.1 The grammar-combinators Parser Library

Report by: Dominique Devriese
Status: partly functional

See: http://www.haskell.org/communities/11-2010/html/report.html#sect8.2.1.

7.2.2 epub-metadata

Report by: Dino Morelli
Status: stable, actively developed

A library for parsing and manipulating ePub files and OPF package data. An attempt has been made here to very thoroughly implement the OPF Package Document specification. epub-metadata is available from Hackage, the Darcs repository below, and also in binary form for Arch Linux through the AUR. See also epub-tools (→8.8.10).

Further reading

Project page: http://ui3.info/d/proj/epub-metadata.html

Source repository: darcs get http://ui3.info/darcs/epub-metadata

The previous extension for recognizing merging parsers was generalized, so now any kind of applicative and monadic parsers can be merged in an interleaved way. As an example take the situation where many different programs write log entries into a log file, and where each log entry is uniquely identified by a transaction number (or process number) which can be used to distinguish them. E.g., assume that each transaction consists of an a, a b and a c action, and that a digit is used to identify the individual actions belonging to the same transaction; the individual transactions can now be recognized by the parser:

  pABC :: Grammar String
  pABC = (\a d -> d:a) <$> pA <*> (pDigit' >>=
           \d -> pB *> mkGram (pSym d) *>
                 pC *> mkGram (pSym d))

  run (pmMany pABC) "a2a1b1b2c2a3b3c1c3"
  Result: ["2a","1a","3a"]

Furthermore the library was provided with many more examples in two modules in the Demo directory.

Features

Much simpler internals than the old library (http://haskell.org/communities/05-2009/html/report.html#sect5.5.8).

Combinators for easily describing parsers which produce their results online, do not hang on to the input and provide excellent error messages. As such they are “surprise free” when used by people not fully aware of their internal workings.

Parsers “correct” the input such that parsing can proceed when an erroneous input is encountered.

The library provides the preferred applicative interface, and a monadic interface where this is really needed (which is hardly ever).

No need for try-like constructs, which make writing Parsec-based parsers tricky.

Scanners can be switched dynamically, so several different languages can occur intertwined in a single input file.

Parsers can be run in an interleaved way, thus generalizing the merging and permuting parsers into a single applicative interface. This makes it e.g. possible to deal with white space or comments in the input in a completely separate way, without having to think about this in the parser for the language at hand (provided of course that white space is not syntactically relevant).

Future plans

Since the part dealing with merging is relatively independent of the underlying parsing machinery, we may split this off into a separate package. This will also enable us to make use of different parsing engines when combining parsers in a much more dynamic way. In such cases we want to avoid too many static analyses.

Future versions will contain a check that grammars are not left-recursive, thus taking away the only remaining source of surprises when using parser combinator libraries. This makes the library even more attractive for use in teaching environments. Future versions of the library, using even more abstract interpretation, will make use of computed look-ahead information to speed up the parsing process further.

The old library in the uulib package stays stable and can continue to be used. A few changes were needed in order to make it compile with GHC 7.2.

Contact

If you are interested in using the current version of the library in order to provide feedback on the provided interface, contact <doaitse at swierstra.net>. There is a low volume, moderated mailing list which was moved to <parsing at lists.science.uu.nl> (see also http://www.cs.uu.nl/wiki/bin/view/HUT/ParserCombinators).

7.2.4 Regular Expression Matching with Partial Derivatives

Report by: Martin Sulzmann
Participants: Kenny Zhuo Ming Lu
Status: stable

We are still improving the performance of our matching algorithms. The latest implementation can be downloaded via Hackage.

Further reading

http://hackage.haskell.org/package/regex-pderiv

http://sulzmann.blogspot.com/2010/04/regular-expression-matching-using.html

7.2.5 regex-applicative

Report by: Roman Cheplyaka
Status: active development

regex-applicative is aimed to be an efficient and easy to use parsing combinator library for Haskell based on regular expressions. Regular expressions have Perl-like (left-biased) semantics to satisfy most of the daily regex needs, but also allow longest matching prefix search useful for lexical analysis. For example, the following code finds filename extensions:

  import Text.Regex.Applicative

  getExtension :: String -> Maybe String
  getExtension str =
    str =~
      many anySym *>
      sym '.' *>
      many anySym
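Under the library's greedy, left-biased semantics this yields the text after the last dot. A base-only rendering of that behaviour (not using regex-applicative, just to make the expected results concrete; extensionOf is an invented name):

```haskell
-- What getExtension computes under greedy matching: the suffix
-- after the last '.', or Nothing if the string contains no dot.
extensionOf :: String -> Maybe String
extensionOf str =
  case break (== '.') (reverse str) of
    (revExt, '.' : _) -> Just (reverse revExt)
    _                 -> Nothing

main :: IO ()
main = print (extensionOf "archive.tar.gz", extensionOf "README")
-- prints (Just "gz",Nothing)
```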

More examples can be found on the wiki.

Further reading

http://hackage.haskell.org/package/regex-applicative

http://github.com/feuerbach/regex-applicative

7.3 Mathematical Objects

7.3.1 normaldistribution: Minimum Fuss Normally Distributed Random Values

Report by: Björn Buckwalter
Status: stable

Normaldistribution is a new package that lets you produce normally distributed random values with a minimum of fuss. The API builds upon, and is largely analogous to, that of the Haskell 98 Random module (more recently System.Random). Usage can be as simple as: sample <- normalIO. For more information and examples see the package description on Hackage.

Further reading

http://hackage.haskell.org/package/normaldistribution
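Generators like this typically derive normal samples from uniform ones via a transform such as Box–Muller; a pure sketch of that step (illustrative only, the package's actual implementation may differ):

```haskell
-- Box–Muller transform: two independent uniform samples u1, u2
-- drawn from (0,1] yield one standard normal sample.
boxMuller :: Double -> Double -> Double
boxMuller u1 u2 = sqrt (-2 * log u1) * cos (2 * pi * u2)

main :: IO ()
main = print (boxMuller (exp (-0.5)) 0)
-- sqrt (-2 * (-0.5)) * cos 0, i.e. approximately 1.0
```

Feeding it uniform values from any source (e.g. System.Random) then gives normally distributed values.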

7.3.2 dimensional: Statically Checked Physical Dimensions

Report by: Björn Buckwalter
Status: active, stable core with experimental extras

Dimensional is a library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types, and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division with units. The library is designed to, as far as is practical, enforce/encourage best practices of unit usage within the frame of the SI. Example:

  d :: Fractional a => Time a -> Length a
  d t = a / _2 * t ^ pos2
    where a = 9.82 *~ (meter / second ^ pos2)



Ongoing experimental work includes:

Support for user-defined dimensions and a proof-of-concept implementation of the CGS system of units.

dimensional-vectors — a rudimentary linear algebra library which statically tracks the sizes of vectors and matrices as well as the physical dimensions of their elements on a per-element basis, disallowing non-sensical operations. This library makes it very difficult to accidentally implement, e.g., a Kalman filter incorrectly. My work on dimensional-vectors is need-driven and tends to occur in spurts.

dimensional-experimental — a library in heavy flux, of which the most interesting feature is probably automatic differentiation of functions involving physical quantities. Example:

  v :: Fractional a => Time a -> Velocity a
  v t = diff d t

The core library, dimensional, can be installed off Hackage using cabal. The experimental packages can be cloned off of GitHub.

Dimensional relies on numtype for type-level integers (e.g., pos2 in the above example), ad for automatic differentiation, and HList (→7.4.1) for type-level vector and matrix representations.

Further reading

AERN stands for Approximating Exact Real Numbers. We are developing a family of libraries that will provide:

a reliable and fast arbitrary precision correctly rounded interval arithmetic, including both standard and inverted intervals with Kaucher arithmetic

arbitrary precision arithmetic of interval polynomials and polynomial intervals, to automatically reduce overestimations in interval computations, efficiently support validated numerical integration, and automatically decide many inequalities and interval inclusions with non-linear and elementary functions that occur in numerical theorem proving and specifically in the verification of numerical programs

a type class hierarchy for validated and exact computation, featuring standard mathematical structures such as posets and lattices extended to take account of rounding errors and partially decided relations such as equality; separate treatment of numerical order and interval refinement order; the ability to increase computational effort to reduce the effect of rounding and partiality, converging to no rounding and total relations with infinite effort; and an extensive set of QuickCheck properties for each type class, enabling automatic checking of, e.g., algebraic properties such as associativity extended to take account of rounding

a framework for distributed query-driven lazy dataflow exact numerical computation with tidy exact semantics based on Domain Theory

There are stable older versions of the libraries on Hackage, but these lack the type classes described above. We are currently in the process of redesigning and rewriting the libraries from scratch. Out of the newly designed code we recently released libraries featuring:

the type classes for approximate real number operations

correctly rounded real interval arithmetic with Double endpoints

A release of interval arithmetic with MPFR endpoints is planned as soon as a solution is found for an easier installation of the hmpfr package. (Currently one has to compile a GHC without gmp to use hmpfr.) We have made progress on implementing polynomial intervals with a core written in C, but have suspended that development until we finish a Haskell-only implementation of an arithmetic of interval polynomials (i.e., polynomials with interval coefficients). We are likely to use interval polynomials as endpoints for polynomial intervals when the work on polynomial intervals is resumed. The development files now include demos that apply interval polynomials to validated simulation of selected ODE IVPs and hybrid systems. All AERN development is open and we welcome contributions and new developers.

Further reading

http://code.google.com/p/aern/

7.3.4 Paraiso

Report by: Takayuki Muranushi
Status: active development

Paraiso is a domain-specific language (DSL) embedded in Haskell, aimed at generating explicit-type partial differential equation solvers for accelerated and/or distributed computers. Equations for fluids, plasma, general relativity, and many more fall into this category. This is still a tiny domain for a computer scientist, but large enough that an astrophysicist (which I am) might spend even his entire life in it. In Paraiso we can describe equation-solving algorithms in a simple, mathematical notation using builder monads. At the moment it can generate programs for multicore CPUs as well as a single GPU, and tune their performance via automated benchmarking and genetic algorithms. The experiment is under way; the fluid simulator I am using is 464 lines of Haskell. So far, Paraiso has tried more than 117,000 different implementations of this single algorithm, each being about 10,000 lines of CUDA code. The best one found so far is 33.4 times faster than the initial guess, and twice as fast as the hand-tuned implementation. Anyone can get Paraiso from Hackage (http://hackage.haskell.org/package/Paraiso) or GitHub (https://github.com/nushio3/Paraiso). The next big challenge is to make Paraiso generate distributed computations.

Further reading

http://paraiso-lang.org/wiki/

7.4 Data Types and Data Structures

HList is a comprehensive, general purpose Haskell library for typed heterogeneous collections, including extensible polymorphic records and variants. HList is analogous to the standard list library, providing a host of various construction, look-up, filtering, and iteration primitives. In contrast to regular lists, elements of heterogeneous lists do not have to have the same type. HList lets the user formulate statically checkable constraints: for example, no two elements of a collection may have the same type (so the elements can be unambiguously indexed by their type). An immediate application of HLists is the implementation of open, extensible records with first-class, reusable, and compile-time only labels. The dual application is extensible polymorphic variants (open unions). HList contains several implementations of open records, including records as sequences of field values, where the type of each field is annotated with its phantom label. We and others have also used HList for type-safe database access in Haskell. HList-based records form the basis of OOHaskell. The HList library relies on common extensions of Haskell 2010. HList is being used in AspectAG (→5.4.2), a typed EDSL of attribute grammars, and in HaskellDB.

The October 2011 version of the HList library has many changes, mainly related to deprecating TypeCast (in favor of ~) and getting rid of overlapping instances. The only use of OverlappingInstances is in the implementation of the generic type equality predicate TypeEq. We plan to remove even that remaining single occurrence. The code works with GHC 7.0.4. Future plans include the implementation of TypeEq without resorting to overlapping instances (so HList will be overlapping-free), and moving towards type functions and expressive kinds.

Further reading

HList: http://okmij.org/ftp/Haskell/types.html#HList

OOHaskell: http://homepages.cwi.nl/~ralf/OOHaskell/
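The core idea of a list whose element types are tracked statically can be sketched in a few lines of modern GHC Haskell (HList itself predates DataKinds and is far more general; HL, :::, and hHead below are invented names for illustration):

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeOperators #-}

-- HL carries the types of all its elements in its own type, so
-- hHead is total and returns exactly the type of the first element.
data HL ts where
  HNil  :: HL '[]
  (:::) :: a -> HL ts -> HL (a ': ts)
infixr 5 :::

hHead :: HL (a ': ts) -> a
hHead (x ::: _) = x

-- A three-element list with three different element types.
example :: HL '[Int, String, Bool]
example = 1 ::: "two" ::: True ::: HNil

main :: IO ()
main = print (hHead example)
-- prints 1
```

Calling hHead on HNil is rejected at compile time, which is the kind of static guarantee the library provides throughout.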

Persistent is a type-safe data store interface for Haskell. Haskell has many different database bindings available. However, most of these have little knowledge of a schema and therefore do not provide useful static guarantees. They also force database-dependent interfaces and data structures on the programmer. There are Haskell-specific data stores such as acid-state that get around these flaws. They allow one to easily store any Haskell type and have type-safe interactions with data. However, the use case is limited to in-memory storage without replication, and they are not designed to interface with other programming languages. Persistent maintains much of the advantage of using native Haskell data types — you store and retrieve normal Haskell records, and your queries are also type-safe — they must match the schema. However, Persistent lets you persist your data to a battle-tested database of your choice that is well optimized for your problem domain. Persistent is backend agnostic, and there are currently interfaces to Sqlite, PostgreSQL, and MongoDB. Since the last report, Persistent has undergone an internal re-write and major API changes. The MongoDB backend has been polished and works out of the box with the Yesod web framework. Here is a quick example of the new Persistent query language:

  selectList [ PersonFirstName ==. "Simon",
               PersonLastName ==. "Jones" ] []
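The type-safe flavour of such queries can be imitated with a toy in-memory analogue (all names below are invented, not Persistent's API): a filter pairs a typed field accessor with a value, so a query can only mention fields that exist, at matching types:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Person = Person { firstName :: String, lastName :: String }
  deriving (Eq, Show)

-- (:==.) pairs a field accessor with a value of the same type; the
-- type checker rejects filters on non-existent or ill-typed fields.
data Filter r = forall a. Eq a => (r -> a) :==. a

matches :: r -> Filter r -> Bool
matches r (field :==. value) = field r == value

-- In-memory stand-in for a database query: keep records satisfying
-- every filter.
selectWhere :: [Filter r] -> [r] -> [r]
selectWhere fs = filter (\r -> all (matches r) fs)

people :: [Person]
people = [Person "Simon" "Jones", Person "Simon" "Marlow"]

main :: IO ()
main = print (selectWhere [firstName :==. "Simon", lastName :==. "Jones"] people)
```

Persistent's real filters are generated from the schema and compile down to queries on the chosen backend, but the static-checking idea is the same.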



Future pl