Index

This is the 22nd edition of the Haskell Communities and Activities Report. As usual, fresh entries are formatted using a blue background, while updated entries have a header with a blue background. Entries for which I received a liveness ping, but which have seen no essential update for a while, have been replaced with online pointers to previous versions. Other entries on which no new activity has been reported for a year or longer have been dropped completely. Please revive such entries next time if you have news on them.

A call for new entries and updates to existing ones will be issued on the usual mailing lists in October. Now enjoy the current report and see what other Haskellers have been up to lately. Any feedback is very welcome, as always.

Janis Voigtländer, University of Bonn, Germany, <hcar at haskell.org>

With the task of incorporation behind us, the haskell.org committee can now focus on establishing guidelines around donations, fund raising, and appropriate uses of funds.

The haskell.org infrastructure is becoming more stable, but still suffers from occasional hiccups. While the extreme unreliability we saw for a while has improved with the reorganisation, the level of sysadmin resource/involvement is still inadequate. The committee is open to ideas on how to improve the situation.

Note that the participant reimbursement paid by haskell.org matches the reimbursement given to haskell.org by Google. The haskell.org credits for 2011 include only GSoC payments of $9,316.41, leaving us with a balance of $13,056.32 at the end of 2011.

At the start of 2011 the haskell.org account had $7,261.73 USD, and by the end of the year the account balance was $13,056.32. The haskell.org expenses for 2011 include:

We are currently in the process of establishing guidelines for fund raising and appropriate ways to spend funds. The main expense of haskell.org at this time is server hosting. The GSoC participant reimbursement is actually paid by Google and we do not consider this a normal expense as Google reimburses us for the full amount.

Haskell.org has now joined Software in the Public Interest ( http://www.spi-inc.org ). This allows haskell.org to accept donations as a US-based non-profit as well as pay for services with these donations. Currently, most of the money in the haskell.org account comes from GSoC participation.

The haskell.org committee is in its second year of operation managing the haskell.org infrastructure and money. The committee’s “home page” is at http://www.haskell.org/haskellwiki/Haskell.org_committee , and occasional publicity is via a blog ( http://haskellorg.wordpress.com ) and twitter account ( http://twitter.com/#!/haskellorg ) as well as the Haskell mailing list.

Haskellers remains a site intended for all members of the Haskell community, from professionals with 15 years' experience to people just getting into the language.

Since the May 2011 HCAR, Haskellers has added polls, which provide a convenient means of surveying a large cross-section of the active Haskell community. There are now over 1300 active accounts, versus 800 one year ago.

Haskellers is a site designed to promote Haskell as a language for use in the real world by being a central meeting place for the myriad talented Haskell developers out there. It allows users to create profiles complete with skill sets and packages authored and gives employers a central place to find Haskell professionals.

The book uses GHCi, the interactive version of the Glasgow Haskell Compiler, as its implementation of choice. It has also been revised to include material about the Haskell Platform, and the Hackage online database of Haskell libraries. In particular, readers are given detailed guidance about how to find their way around what is available in these systems.

Existing material has been expanded and re-ordered, so that some concepts — such as simple data types and input/output — are presented at an earlier stage. The running example of Pictures is now implemented using web browser graphics as well as lists of strings.

The third edition of one of the leading textbooks for beginning Haskell programmers is thoroughly revised throughout. New material includes thorough coverage of property-based testing using QuickCheck and an additional chapter on domain-specific languages as well as a variety of new examples and case studies, including simple games.

We are grateful to everyone whose work made this wonderful book available in Japanese, including the publisher, our kind reviewers, and the original author Miran. We wish for prosperity of the Haskell community in Japan and in many other countries, and for those who don’t read Japanese, we’d just like to let you know that we’re doing fine in Japan!

The original book is an elaborate and popular introduction to the programming language Haskell. The reader will walk through the playland of Haskell decorated with funky examples and illustrations, and without noticing any difficulties, will become one with the core concepts of Haskell, such as types, type classes, lazy evaluation, functors, applicatives, and monads. The translators have added a short article on handling multi-byte strings in Haskell.

An official translation of the book “Learn You a Haskell for Great Good!” by Miran Lipovaca ( http://learnyouahaskell.com/ ) to Japanese is now available in stores.

Since the last HCAR, editorship of The Monad.Reader has passed from Brent Yorgey to Edward Z. Yang. A mini-issue is currently in the works.

The Monad.Reader is also a great place to write about a tool or application that deserves more attention. Most programmers do not enjoy writing manuals; writing a tutorial for The Monad.Reader, however, is an excellent way to put your code in the limelight and reach hundreds of potential users.

There are plenty of interesting ideas that might not warrant an academic publication—but that does not mean these ideas are not worth writing about! Communicating ideas to a wide audience is much more important than concealing them in some esoteric journal. Even if it has all been done before in the Journal of Impossibly Complicated Theoretical Stuff, explaining a neat idea about “warm fuzzy things” to the rest of us can still be plain fun.

There are many academic papers about Haskell and many informative pages on the HaskellWiki. Unfortunately, there is not much between the two extremes. That is where The Monad.Reader tries to fit in: more formal than a wiki page, but more casual than a journal article.

The article describes a type-level interpreter for the call-by-value lambda-calculus with booleans, natural numbers, and case discrimination. Its terms are Haskell types. Using functional dependencies for type-level reductions is well-known. Missing before was the encoding of abstractions with named arguments, and of closures.

The article shows many examples, including the fixpoint combinator, the Fibonacci function, and the S and K combinators.

http://okmij.org/ftp/Computation/lambda-calc.html#haskell-type-level
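As a flavour of the well-known functional-dependencies technique that the article builds on, here is a minimal, self-contained sketch of type-level reduction: Peano addition computed by the type checker and reified back to an Int. The names Z, S, Add, and toInt are illustrative, not taken from the article.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, ScopedTypeVariables, UndecidableInstances #-}

-- Type-level naturals: Z is zero, S n is the successor of n.
data Z   = Z
data S n = S n

-- Addition as a type-level relation; the functional dependency
-- "a b -> c" makes the type checker compute c from a and b.
class Add a b c | a b -> c where
  add :: a -> b -> c

instance Add Z b b where
  add Z b = b
instance Add a b c => Add (S a) b (S c) where
  add (S a) b = S (add a b)

-- Reify a type-level natural back to an Int, to observe the result.
class Nat n where
  toInt :: n -> Int
instance Nat Z where
  toInt _ = 0
instance Nat n => Nat (S n) where
  toInt _ = 1 + toInt (undefined :: n)
```

Here `toInt (add (S Z) (S (S Z)))` evaluates to 3, with the addition performed entirely during type checking. The article goes well beyond this pattern by encoding abstractions and closures.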

The follow-up article describes several applications of computable types, to ascribe signatures to terms and to drive the selection of overloaded functions. One example computes a complex XML type and instantiates the read function to read the trees of only that shape.

A telling example of the power of the approach is the ability to use not only (a->) but also (->a) as a unary type function. The former is just (->) a. The latter was considered impossible. The type-level lambda-calculus interpreter helps, letting us write (->a) almost literally as (flip (->) a). For example, we can express the type (((Int -> Bool) -> Bool) ... -> Bool) -> Bool, with n nested arrows, as E (F Ntimes :< (F Flip :< (F (ATC2 (->))) :< Bool) :< Int) n, where the higher-order type function NTimes is the right fold on type-level numerals.

http://okmij.org/ftp/Haskell/types.html#computable-types

Yet Another Lambda Blog is a new blog about functional programming aimed at beginners. It focuses on practical aspects of programming in Haskell, but there are other topics as well: book reviews, links to interesting internet resources and Scheme programming. New posts appear once or twice a week.

The platform steering committee will be proposing some modifications to the community review process for accepting new packages into the platform, with the aim of reducing the burden for package authors and keeping the review discussions productive. Though we will be making some modifications, we would still like to invite package authors to propose new packages. This can be initiated at any time. We also invite the rest of the community to take part in the review process on the libraries mailing list <libraries at haskell.org>. The procedure involves writing a package proposal and discussing it on the mailing list with the aim of reaching a consensus. Details of the procedure are on the development wiki.

Our systems for coordinating and testing new releases remain too time-consuming, involving too much manual work. Help from the community on this issue would be very valuable.

Major releases are supposed to take place on a 6 month cycle. There will be a major release in Spring 2012 which will be based on the GHC-7.4.x series.

There has not been a release in the last 6 months. While the plan calls for major releases every 6 months, this has not happened for a number of reasons. We took the decision not to base a major release on GHC-7.2.1, and no new release in the 7.2.x series is expected. We ran into some problems trying to prepare a release using GHC-7.0.4; however, we may yet do a release based on it.

Historically, GHC shipped with a collection of packages under the name extralibs. Since GHC 6.12, the task of shipping an entire platform has been transferred to the Haskell Platform.

The Haskell Platform (HP) is the name of the “blessed” set of libraries and tools on which to build further Haskell libraries and applications. It takes a core selection of packages from the more than 3500 on Hackage (→ 6.6.1 ). It is intended to provide a comprehensive, stable, and quality tested base for Haskell projects to work from.

This is all still very much experimental, and it is not clear whether it will ever be in GHC proper. It depends on whether we can achieve good enough performance, amongst other things. All we can say for now is that the approach is promising. You can find KC’s work on the ghc-lwc branch of the git repo.

Firstly, KC found a way to enable concurrency abstractions to be defined without depending on a particular scheduler. This means for example that we can provide MVars that work with any user-defined scheduler, rather than needing one MVar implementation per scheduler. Secondly, we found ways to coexist with some of the existing RTS machinery for handling blackholes and asynchronous exceptions in particular, which means that these facilities will continue to work as before (with the same performance), and writers of user-defined schedulers do not need to worry about them. Furthermore this significantly lowers the barrier for writing a new scheduler.

Finally, we are about to release (it may be out by the time you read this) a stable, end-user ready version of the Repa-like array library Accelerate for GPU computing on Hackage. It integrates with Repa, so you can mix GPU and CPU multicore computing, and via the new meta-par package you can share workload between CPUs and GPUs [13]. This new version 0.12 is already available on GitHub [14]. You need a CUDA-capable NVIDIA GPU to use it.

In addition, we released Repa 3 [12], which uses type-indices to control array representations. This leads to more predictable performance. You can install Repa 3, which requires GHC 7.4.1, from Hackage. We are currently writing a paper describing the new design in detail.
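As a hedged sketch of what these type indices look like in use (assuming the Repa 3 API; exact function names may differ between 3.x releases, and this requires the repa package, so it is not runnable with plain GHC):

```haskell
import Data.Array.Repa as R

-- U indexes a manifest, unboxed representation:
xs :: Array U DIM1 Double
xs = fromListUnboxed (Z :. 5) [1, 2, 3, 4, 5]

-- D indexes a delayed representation: R.map builds no data yet.
ys :: Array D DIM1 Double
ys = R.map (* 2) xs

-- Forcing the computation makes the representation change explicit
-- in the type, which is what makes performance more predictable.
zs :: Array U DIM1 Double
zs = computeS ys
```

Because delayed and manifest arrays now have different types, the programmer can see from the signatures where fusion happens and where data is actually materialised.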

7.4.1 included a few major improvements. For more details on these, see the previous status report [2].

We have a new member of the team! Please welcome Paolo Capriotti who is assuming some of the GHC maintenance duties for Well-Typed.

GHC 7.4.1 was released at the beginning of February, and has been by and large a successful release. Nevertheless the tickets keep pouring in, and a large collection of bug fixes [1] have been made since the 7.4.1 release. We plan to put out a 7.4.2 release candidate very soon (it may be out by the time you read this), followed shortly by the release.

Background. UHC actually is a series of compilers, of which the last is UHC, plus infrastructure for facilitating experimentation and extension. The distinguishing features for dealing with the complexity of the compiler and for experimentation are (1) its stepwise organisation as a series of increasingly complex standalone compilers, (2) its aspectwise organisation, supported by a DSL and tools (called Shuffle), and (3) tree-oriented programming (Attribute Grammars, by way of the Utrecht University Attribute Grammar (UUAG) system (→ 5.3.1 )).

What are we currently doing, and what has recently been completed? As part of the UHC project, the following (student) projects and other activities are underway (in arbitrary order):

What is new? UHC is the Utrecht Haskell Compiler, supporting almost all Haskell98 features and most of Haskell2010, plus experimental extensions. The current focus is on the Javascript backend.

If you find yourself interested in helping us or simply want to use the latest versions of Haskell programs on FreeBSD, check out our page at the FreeBSD wiki (see below) where you can find all important pointers and information required for use, contact, or contribution.

We have a developer repository for Haskell ports that features around 350 ports of many popular Cabal packages. The updates committed to this repository are regularly integrated into the official ports tree. The FreeBSD Ports Collection already has much popular and important Haskell software — GHC 7.0.4, Haskell Platform 2011.4.0.0, Gtk2Hs, wxHaskell, XMonad, Pandoc, Gitit, Yesod, Happstack, and Snap — all of which has been incorporated into the recently published FreeBSD 8.3-RELEASE.

The FreeBSD Haskell Team is a small group of contributors who maintain Haskell software on all actively supported versions of FreeBSD. The primarily supported implementation is the Glasgow Haskell Compiler together with Haskell Cabal, although one may also find Hugs and NHC98 in the ports tree. FreeBSD is a Tier-1 platform for GHC (on both i386 and amd64) starting from GHC 6.12.1, hence one can always download vanilla binary distributions for each recent release.

The stable Debian release (“squeeze”) provides the Haskell Platform 2010.1.0.0 and GHC 6.12; Debian testing (“wheezy”) contains Platform version 2011.4.0.0 with GHC 7.0.4; and in unstable we are currently ahead of the Platform and ship GHC 7.4.1. We plan to get GHC 7.4.2 and Platform version 2012.2.0.0 into wheezy in time for the stable release, expected this year.

A system of virtual package names and dependencies, based on the ABI hashes, guarantees that a system upgrade will leave all installed libraries usable. Most libraries are also optionally available with profiling enabled and the documentation packages register with the system-wide index.

The Debian Haskell Group aims to provide an optimal Haskell experience to users of the Debian GNU/Linux distribution and derived distributions such as Ubuntu. We try to follow the Haskell Platform versions for the core package and package a wide range of other useful libraries and programs. At the time of writing, we maintain 500 source packages.

As always we are more than happy for (and in fact encourage) Gentoo users to get involved and help us maintain our tools and packages, even if it is as simple as reporting packages that do not always work or need updating: with such a wide range of GHC and package versions to co-ordinate, it is hard to keep up! Please contact us on IRC or email if you are interested!

More information about the Gentoo Haskell Overlay can be found at http://haskell.org/haskellwiki/Gentoo . It is available via the Gentoo overlay manager “layman”. If you choose to use the overlay, then any problems should be reported on IRC ( #gentoo-haskell on freenode), where we coordinate development, or via email <haskell at gentoo.org> (as we have more people with the ability to fix the overlay packages that are contactable in the IRC channel than via the bug tracker).

Over time, more and more people have become involved in the gentoo-haskell project, which reflects positively on the health of the Haskell ecosystem.

As usual, the GHC 7.4 branch required some packages to be patched. Over the last six months we have accumulated about 150 patches waiting for upstream inclusion.

There is also an overlay which contains almost 800 extra unofficial and testing packages. Thanks to the Haskell developers using Cabal and Hackage (→ 6.6.1 ), we have been able to write a tool called “hackport” (initiated by Henning Günther) to generate Gentoo packages with minimal user intervention. Notable packages in the overlay include the latest version of the Haskell Platform (→ 3.1 ) and the latest 7.4.1 release of GHC, as well as popular Haskell packages such as pandoc, gitit, yesod (→ 5.2.6 ), and others.

The full list of packages available through the official repository can be viewed at http://packages.gentoo.org/category/dev-haskell?full_cat .

Feedback from users and packaging contributions to Fedora Haskell are always welcome: please join us on #fedora-haskell on Freenode IRC and our new low-traffic mailing-list.

Fedora 18 development work has started, and we have already updated to ghc-7.4.1 and continue work on packaging, including web frameworks.

At the time of writing there are now 165 Haskell source packages in Fedora. The Fedora package version numbers listed on the Hackage website now refer to the latest branched version of Fedora (currently 17).

On the packaging side, for Fedora 16 profiling subpackages were merged into the development subpackages to reduce installation overhead. For Fedora 17 the packaging macros have been simplified and made closer to generic Fedora packaging.

Fedora 17 is shipping in May with ghc-7.0.4 and haskell-platform-2011.4.0.0, and version updates to many of the packages. This also includes Fedora 17 Secondary architectures: ppc, ppc64, and the exciting new armv5tel and armv7hp builds (ghc has also been built for Fedora 17 s390 and s390x for the first time). 30 new packages have been added since the release of Fedora 16, including aeson, conduit, hakyll, lifted-base, snap-core, warp, etc.

The Fibon tools and benchmark suite are ready for public consumption. They can be found on github at the url indicated below. People are invited to use the included benchmark suite or just use the tools and build a suite of their own creation. Any improvements to the tools or additional benchmarks are most welcome. Benchmarks have been used to tell lies about performance for many years, so join in the fun and keep on fibbing with Fibon.

This year, the Fibon benchmark suite has been updated to include a Train problem size that can be used for feedback directed optimization work. The Ref problem size has been increased so that the running time of a benchmark program is comparable to the running time when using the ref size of the SPEC benchmarks. With this update a single benchmark will typically take 10-30 minutes to run depending on the power of the computer hardware. See the README file for more information on benchmark size and configuring the benchmarks to finish in an acceptable amount of time.

As a real-life example of a complete benchmark suite, Fibon comes with its own set of benchmarks for testing the effectiveness of compiler optimizations in GHC. The benchmark programs come from Hackage, the Computer Language Shootout, Data Parallel Haskell, and Repa. The benchmarks were selected to have minimal external dependencies so they could be easily used with a version of GHC compiled from the latest sources. The following figure shows the performance improvement of GHC’s optimizations on the Fibon benchmark suite.

Benchmarks are built using the standard cabal tool. Any program that has been cabalized can be added as a benchmark simply by specifying some meta-information about the program inputs and expected outputs. Fibon will automatically collect execution times for benchmarks and can optionally read the statistics output by the GHC runtime. The program outputs are checked to ensure correct results, making Fibon a good option for testing the safety and performance of program optimizations. The Fibon tools are not tied to any one benchmark suite. As long as the correct meta-information has been supplied, the tools will work with any set of programs.

The Fibon benchmark tools draw inspiration from both the venerable nofib Haskell benchmark suite and the industry standard SPEC benchmark suite. The tools automate the tedious parts of benchmarking: building the benchmark in a sand-boxed directory, running the benchmark multiple times, verifying correctness, collecting statistics, and summarizing results.

Fibon is a set of tools for running and analyzing benchmark programs in Haskell. It contains an optional set of benchmarks from various sources including several programs from the Hackage repository.

The next version of Agda is under development. The most interesting changes to the language may be the addition of pattern synonyms, contributed by Stevan Andjelkovic and Adam Gundry, and modifications of the constraint solver, implemented by Andreas Abel. Other work has targeted the Emacs mode. Peter Divianszky has removed the prior dependency on GHCi and haskell-mode, and Guilhem Moulin and I have made the Emacs mode more interactive: type-checking no longer blocks Emacs, and the expression that is currently being type-checked is highlighted.

A lot of work remains in order for Agda to become a full-fledged programming language (good libraries, mature compilers, documentation, etc.), but already in its current state it can provide lots of fun as a platform for experiments in dependently typed programming.

Agda is a dependently typed functional programming language (developed using Haskell). A central feature of Agda is inductive families, i.e. GADTs which can be indexed by values and not just types. The language also supports coinductive types, parameterized modules, and mixfix operators, and comes with an interactive interface—the type checker can assist you in the development of your code.
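For readers coming from Haskell, the correspondence with GADTs can be sketched as follows (a hedged illustration in GHC Haskell, not Agda code; Vec, vhead, and vsum are illustrative names): a vector type indexed by its length, so that taking the head of an empty vector is a type error rather than a runtime one.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Type-level naturals, promoted by DataKinds.
data Nat = Zero | Succ Nat

-- A vector indexed by its length, in the spirit of an
-- Agda inductive family.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Zero a
  VCons :: a -> Vec n a -> Vec ('Succ n) a

-- Total head: the empty case is ruled out by the type checker.
vhead :: Vec ('Succ n) a -> a
vhead (VCons x _) = x

vsum :: Vec n Int -> Int
vsum VNil         = 0
vsum (VCons x xs) = x + vsum xs
```

In full Agda the index can be an arbitrary value and the type checker can also assist interactively in filling in such definitions.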

Recently, I have added more comfortable syntax for data type declarations and let-definitions. Data and codata types can now also be defined recursively. In the long run, I plan to evolve MiniAgda into a core language for Agda with termination certificates.

MiniAgda is a tiny dependently-typed programming language in the style of Agda (→ 4.1 ). It serves as a laboratory to test potential additions to the language and type system of Agda. MiniAgda’s termination checker is a fusion of sized types and size-change termination and supports coinduction, with bounded size quantification and destructor patterns for a more general handling of coinduction. Equality incorporates eta-expansion at record and singleton types. Function arguments can be declared as static; such arguments are discarded during equality checking and compilation.

Over the last six months we continued working towards mechanising the metatheory of the DDC core language in Coq. We’ve finished Progress and Preservation for System-F2 with mutable algebraic data, and are now looking into proving contextual equivalence of rewrites in the presence of effects. Based on this experience, we’ve also started on an interpreter for a cleaned-up version of the DDC core language. We’ve taken the advice of previous paper reviewers and removed dependent kinds, moving witness expressions down to level 0 next to value expressions. In the resulting language, types classify both witness and value expressions, and kinds classify types. We’re also removing more-than constraints on effect and closure variables, along with dangerous type variables (which never really worked). Overall, it is being pruned back to the parts we understand properly, and the removal of dependent kinds will make mechanising the metatheory easier. Writing an interpreter for the core language also gets us a parser for it, which we will need for performing cross-module inlining in the compiler proper.

Our compiler (DDC) is still in the “research prototype” stage, meaning that it will compile programs if you are nice to it, but expect compiler panics and missing features. You will get panics due to ungraceful handling of errors in the source code, but valid programs should compile ok. The test suite includes a few thousand-line graphical demos, like a ray-tracer and an n-body collision simulation, so it is definitely hackable.

Disciple is a dialect of Haskell that uses strict evaluation as the default and supports destructive update of arbitrary data. Many Haskell programs are also Disciple programs, or will run with minor changes. In addition, Disciple includes region, effect, and closure typing, and this extra information provides a handle on the operational behaviour of code that is not available in other languages. Our target applications are the ones that you always find yourself writing C programs for, because existing functional languages are too slow, use too much memory, or do not let you update the data that you need to.

The Eden skeleton library is under constant development. Currently it contains various skeletons for parallel maps, workpools, divide-and-conquer, topologies, and many more. Take a look at the Eden pages.

The Eden trace viewer tool EdenTV provides a visualisation of Eden program runs on various levels. Activity profiles are produced for processing elements (machines), Eden processes, and threads. In addition, message transfer between processes and machines can be shown. EdenTV has been written in Haskell and is freely available on the Eden web pages.

A new release of the Eden compiler based on GHC 7.4 will soon be available on our web pages, see http://www.mathematik.uni-marburg.de/~eden , and via Hackage. It will include a shared memory mode which does not depend on a middleware like MPI but which nevertheless uses multiple independent heaps (in contrast to GHC’s threaded runtime system) connected by Eden’s parallel runtime system. An Eden variant of GHC-7.4 and the Eden libraries are already available via git repositories at http://james.mathematik.uni-marburg.de:8080 .

Eden’s primitive constructs are process abstractions and process instantiations. The Eden logo consists of four λ’s, turned in such a way that they form the Eden instantiation operator ( # ). Higher-level coordination is achieved by defining skeletons, ranging from a simple parallel map to sophisticated master-worker schemes. They have been used to parallelize a set of non-trivial programs.

Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronization, and process handling.
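As a flavour of these two constructs, a hedged sketch of a process abstraction and its instantiation (this requires the Eden-enabled GHC and its libraries, so plain GHC will not compile it; squareP is an illustrative name, not from the report):

```haskell
import Control.Parallel.Eden

-- A process abstraction: a function packaged for remote execution.
squareP :: Process Int Int
squareP = process (\x -> x * x)

-- Process instantiation with ( # ) spawns the process on another
-- processing element and handles argument and result communication.
result :: Int
result = squareP # 7
```

Skeletons such as a parallel map are then ordinary Haskell functions built from these two primitives.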

The latest GUM implementation of GpH is built on GHC 6.12, using either PVM or MPI as communications library. It implements a virtual shared memory abstraction over a collection of physically distributed machines. At the moment our main hardware platforms are Intel-based Beowulf clusters of multicores. We plan to connect several of these clusters into a wide-area, hierarchical, heterogeneous parallel architecture.

As part of the SCIEnce EU FP6 I3 project (026133) (April 2006 – December 2011) and the HPC-GAP project (October 2009 – September 2013) we use Eden, GpH and HdpH as middleware to provide access to computational Grids from Computer Algebra (CA) systems, in particular GAP. We have developed and released SymGrid-Par, a Haskell-side infrastructure for orchestrating heterogeneous computations across high-performance computational Grids. Based on this infrastructure we have developed a range of domain-specific parallel skeletons for parallelising representative symbolic computation applications. A Haskell-side interface to this infrastructure is available in the form of the Computer Algebra Shell CASH, which is downloadable from Hackage. We are currently extending SymGrid-Par with support for fault-tolerance, targeting massively parallel high-performance architectures.

Another strand of development is the improvement of the GUM runtime-system to better deal with hierarchical and heterogeneous architectures, which are becoming increasingly important. We are revisiting basic resource policies, such as those for load distribution, and are exploring modifications that provide enhanced, adaptive behaviour for these target platforms.

New work has been launched in the direction of inherently parallel data structures for Haskell and the use of such data structures in symbolic applications. This work aims to develop foundational building blocks in composing parallel Haskell applications, taking a data-centric point of view. Current work focuses on data structures such as append-trees to represent lists and quad-trees in an implementation of the n-body problem.

In the context of the SICSA MultiCore Challenge, we are comparing the performance of several parallel Haskell implementations (in GpH and Eden) with other functional implementations (F#, Scala and SAC) and with implementations produced by colleagues in a wide range of other parallel languages. The latest challenge application was the n-body problem. A summary of this effort is available on the following web page, and sources of several parallel versions will be uploaded shortly: http://www.macs.hw.ac.uk/sicsawiki/index.php/MultiCoreChallenge .

We have been extending the set of primitives for parallelism in GpH, to provide enhanced control of data locality in GpH applications. Results from applications running on up to 256 cores of our Beowulf cluster demonstrate significant improvements in performance when using these extensions.

A distributed-memory, GHC-based implementation of the parallel Haskell extension GpH and of a fundamentally revised version of the evaluation strategies abstraction is available in a prototype version. In current research an extended set of primitives, supporting hierarchical architectures of parallel machines, and extensions of the runtime-system for supporting these architectures are being developed.
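As a reminder of what GpH-style code looks like, here is a minimal sketch using the evaluation strategies API (this assumes the Control.Parallel.Strategies module from the parallel package; parSquares is an illustrative name):

```haskell
import Control.Parallel.Strategies

-- Evaluate all list elements in parallel, each to normal form.
-- The algorithm (map) stays separate from the coordination
-- (the strategy applied with `using`).
parSquares :: [Int] -> [Int]
parSquares xs = map (^ 2) xs `using` parList rdeepseq

-- parMap is the common shorthand for the same pattern:
parSquares' :: [Int] -> [Int]
parSquares' = parMap rdeepseq (^ 2)
```

The separation of algorithm and strategy is exactly what the revised evaluation strategies abstraction mentioned above refines.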

Cloud Haskell. We have been working on Cloud Haskell for distributed parallelism. In particular, we are developing a new implementation that is intended to be robust, flexible, and have good performance. The resulting “distributed-process” package will build on an internal design which includes a swappable network transport layer. As we flesh out this implementation, we are also working on further developing and validating the new design. These ongoing efforts are visible from the GitHub page listed below.

The release is also accompanied by a new tutorial on the Haskell wiki, the ThreadScope Tour. The tour provides a series of self-contained miniature walkthroughs focusing on various aspects of ThreadScope usage, for example, observing the need to consolidate sequential evaluation in order to make ThreadScope output easier to interpret.

Much of the ThreadScope work leading up to this release consists of backend investments: improvements to the ghc-events package (a new state-machine representation of the meaning of events) and the GHC runtime system (adding a new startup wall-clock time and Haskell thread labels to the event log). These changes will enable more useful improvements to ThreadScope in the future.

ThreadScopeThe latest release of ThreadScope (version 0.2.1) provides new visualisations that allow the user to observe the creation and conversion of sparks into actual work. These visualisations are aimed at giving users of ThreadScope more insight into the performance of their programs, not just what programs are doing performance-wise, but why.

The two main areas of focus in the project recently have been ThreadScope and Cloud Haskell.

Microsoft Research is funding a 2-year project to promote the real-world use of parallel Haskell. The project started in November 2010, with four industrial partners, and consulting and engineering support from Well-Typed (→ 8.1 ). Each organisation is working on its own particular project making use of parallel Haskell. The overall goal is to demonstrate successful serious use of parallel Haskell, and along the way to apply engineering effort to any problems with the tools that the organisations might run into.

This PhD project targets the detection of concurrency bugs in STM Haskell. We focus on static analysis, i.e., we try to find errors by analyzing the source code of the program without executing it. Specifically, we target what we call application-level bugs, i.e., cases where the shared memory becomes inconsistent with respect to the design of the application because of an unexpected interleaving of the threads that access it. Our approach is to check that each transaction of the program preserves a given user-defined consistency property.

We have already defined, formalized, and developed a verification framework, and are now evaluating which range of concurrency bugs we are able to detect. Ongoing work includes the implementation of a prototype and research into reducing the number of annotations the programmer has to provide to run the analysis.

The Web Application Interface (WAI) is an interface between Haskell web applications and Haskell web servers. By targeting the WAI, a web framework or web application gets access to multiple deployment platforms. Platforms in use include CGI, the Warp web server, and desktop webkit.

Since the last HCAR, WAI has switched to conduits (→ 7.1.1 ). WAI has also added a vault parameter to the request type to allow middleware to store arbitrary data.

WAI is also a platform for re-using code between web applications and web frameworks through WAI middleware and WAI applications. WAI middleware can inspect and transform a request, for example by automatically gzipping a response or logging a request.

By targeting WAI, every web framework can share WAI code instead of wasting effort re-implementing the same functionality. There are also some new web frameworks that use WAI while taking a completely different approach to web development, such as webwire (FRP) and dingo (GUI). Since the last HCAR, another web framework called Scotty was released. WAI applications can also send a response themselves: for example, wai-app-static is used by Yesod to serve static files. However, one does not need to use a web framework at all, but can simply build a web application using the WAI interface alone; the Hoogle web service targets WAI directly.

The WAI standard has proven itself capable for different users, and there are no outstanding plans for changes or improvements.
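To make the interface concrete, here is a minimal sketch of a WAI application served by Warp, written against the wai-1.x API of this report's era (the exact `Application` type has changed across wai versions, so treat the signatures as illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)
import Network.HTTP.Types (status200)

-- The simplest possible WAI application: ignore the request and
-- answer every path with a plain-text greeting.
app :: Application
app _request =
  return (responseLBS status200 [("Content-Type", "text/plain")] "Hello, WAI!")

main :: IO ()
main = run 3000 app  -- serve on http://localhost:3000/
```

Because `app` depends only on the `wai` package, the same application can be deployed behind any WAI handler, not just Warp.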

Warp is a high performance, easy to deploy HTTP server backend for WAI (→ 5.2.1 ). Since the last HCAR, Warp has switched from enumerators to conduits (→ 7.1.1 ), and added SSL support and websockets integration.

Due to the combined use of ByteStrings, blaze-builder, conduit, and GHC's improved I/O manager, WAI+Warp has consistently proven to be Haskell's most performant web deployment option.

Warp is actively used to serve most users of WAI (and Yesod).

The Holumbus framework consists of a set of modules and tools for creating fast, flexible, and highly customizable search engines with Haskell. The framework consists of two main parts. The first is the indexer, which extracts the data of a given type of documents, e.g., the documents of a web site, and stores it in an appropriate index. The second is the search engine for querying the index.

The framework supports distributed computations for building and searching indexes. This is done with a MapReduce-like framework. The MapReduce framework is independent of the index and search components, so it can be used to develop distributed systems with Haskell.

The framework is now separated into four packages, all available on Hackage.

The search engine package includes the indexer and search modules; the MapReduce package bundles the distributed MapReduce system. The latter is based on two other packages, which may be useful on their own: the Distributed Library, with a message-passing communication layer, and a distributed storage system.

Currently there are activities to optimize the index structures of the framework. In the past there have been problems with the space requirements during indexing. The data structures and evaluation strategies have been optimized to prevent space leaks. A second index structure, working with cryptographic keys as document identifiers, is under construction. This will further simplify partial indexing and the merging of indexes.

The second project, a specialized search engine for the FH-Wedel web site, has been finished: http://w3w.fh-wedel.de/ . The new aspect in this application is a specialized free-text search for appointments, deadlines, announcements, meetings, and other dates.

The Hayoo! and FH-Wedel search engines have been adapted to run on top of the Snap framework (→ 5.2.7 ).

The Holumbus web page ( http://holumbus.fh-wedel.de/ ) includes downloads, a Git web interface, current status, requirements, and documentation. Timo Kranz's master thesis describing the Holumbus index structure and the search engine is available at http://holumbus.fh-wedel.de/branches/develop/doc/thesis-searching.pdf . Sebastian Gauck's thesis dealing with the crawler component is available at http://holumbus.fh-wedel.de/src/doc/thesis-indexing.pdf . The thesis of Stefan Schmidt describing the Holumbus MapReduce is available via http://holumbus.fh-wedel.de/src/doc/thesis-mapreduce.pdf .

The Happstack project is focused on bringing the relentless, uncompromised power and beauty of Haskell to a web framework. We aim to leverage the unique characteristics of Haskell to create a highly scalable, robust, and expressive web framework.

While Happstack is over 7 years old, it is still undergoing active development and new innovation. It is used in a number of commercial projects as well as the new Hackage 2 server.

At the core of Happstack is the happstack-server package, which provides a fast, powerful, and easy-to-use HTTP server with built-in support for templating (via blaze-html), request routing, form decoding, cookies, file uploads, etc. happstack-server is all you need to create a simple website.

Happstack can also be extended using a wide range of libraries, which include support for alternative HTML templating systems, javascript templating and generation, type-safe URLs, type-safe form generation and validation, RAM-cloud database persistence, OpenId authentication, and more.

For more information check out the happstack.com website, especially the “Happstack Philosophy” and “Happstack 8 Roadmap”.
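For flavour, the classic minimal happstack-server program (a sketch of the package's documented hello-world idiom) starts the built-in HTTP server with its default configuration and serves a single page:

```haskell
import Happstack.Server (nullConf, ok, simpleHTTP, toResponse)

-- Start the built-in HTTP server with default configuration
-- and answer every request with a plain "Hello, World!" page.
main :: IO ()
main = simpleHTTP nullConf $ ok (toResponse "Hello, World!")
```

Request routing, form decoding, and the other features mentioned above are added by composing further ServerPart values in place of the single `ok` handler.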

Mighttpd (called mighty) version 2 is a simple but practical web server in Haskell. It is now working on Mew.org, providing basic web features and CGI (mailman and content search).

Mighttpd version 1 was implemented with two libraries, c10k and webserver. Since GHC 6 used select(), more than 1,024 connections could not be handled at the same time. The c10k library got over this barrier with the pre-fork technique. The webserver library provides HTTP transfer and file/CGI handling.

Mighttpd 2 stops using the c10k library because GHC 7 uses epoll()/kqueue(). The file/CGI-handling part of the webserver library has been re-implemented as a web application on the wai library (→ 5.2.1 ). For HTTP transfer, Mighttpd 2 links the warp library (→ 5.2.2 ), which can send a file in a zero-copy manner thanks to sendfile().

The performance of Mighttpd 2 is now comparable to highly tuned web servers written in C. Please read “The Monad.Reader” Issue 19 for more information.

Mighttpd 2 is now based on Conduit version 0.4 and provides reverse proxy functionality. You can install Mighttpd 2 ( mighttpd2 ) from HackageDB.

Yesod is a traditional MVC RESTful framework. By applying Haskell's strengths to this paradigm, we have created a web framework that helps users create highly scalable web applications.

But Yesod is even more focused on scalable development. The key to achieving this is applying Haskell's type-safety to an otherwise traditional MVC REST web framework.

Of course type-safety guarantees against typos or using the wrong type in a function. But Yesod cranks this up a notch to guarantee that common web application errors won't occur.

When type-safety conflicts with programmer productivity, Yesod is not afraid to use Haskell's most advanced features of Template Haskell and quasi-quoting to provide easier development for its users. In particular, these are used for declarative routing, declarative schemas, and compile-time templates.

MVC stands for model-view-controller. The preferred library for models is Persistent (→ 7.7.2 ). Views can be handled by the Shakespeare family of compile-time template languages. This includes Hamlet, which takes the tedium out of HTML. Both of these libraries are optional, and you can use any Haskell alternative. Controllers are invoked through declarative routing, and their return type shows which response types are allowed for the request.

Yesod is broken up into many smaller projects and leverages WAI (→ 5.2.1 ) to communicate with the server. This means that many of the powerful features of Yesod can be used in different web development stacks.

Yesod finally reached its 1.0 version; the last HCAR entry was for the 0.8 version.

We are excited to have achieved a 1.0 release. This signifies maturity, API stability, and a web framework that gives developers all the tools they need for productive web development. Future directions for Yesod are now largely driven by community input and patches. Easier client-side interaction is definitely one concern that Yesod is working on going forward; the 1.0 release features better coffeescript support and even roy.js support.

The Yesod site ( http://www.yesodweb.com/ ) is a great place for information. It has code examples, screencasts, the Yesod blog and, most importantly, a book on Yesod.

To see an example site with source code available, you can view the Haskellers (→ 1.2 ) source code: https://github.com/snoyberg/haskellers .

The Snap Framework is a web application framework built from the ground up for speed, reliability, and ease of use. The project's goal is to be a cohesive high-level platform for web development that leverages the power and expressiveness of Haskell to make building websites quick and easy.

The Snap Framework has seen two major releases (0.7 and 0.8) since the last HCAR. Some of the major features added are better awareness of proxy servers and address translation, more powerful timeout handling, more control over buffering semantics, improvements to the test infrastructure, and a number of other bug fixes and minor improvements.

We are starting to see more high-level functionality developed by third parties being made available as snaplets. A complete list of the third-party snaplets we are aware of can be found on the snaplet directory page of our website. So far this includes seven different snaplets providing support for various data stores, support for different build environments, ReCAPTCHA support, and a snaplet providing functionality similar to “rake tasks” from Ruby on Rails.

ghci> url (Blog 2011 9 19)

== "/blog/2011-9-19"
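The guarantee behind the example above can be sketched in plain Haskell (the `Sitemap` type and `url` function here are illustrative stand-ins, not ivy-web's actual API): because routes are ordinary data constructors, a URL with the wrong shape simply does not type-check.

```haskell
-- Illustrative stand-in for a type-safe routing scheme: URLs are values
-- of a data type, and rendering is a total function over that type.
data Sitemap = Home | Blog Int Int Int   -- year, month, day

url :: Sitemap -> String
url Home         = "/"
url (Blog y m d) = "/blog/" ++ show y ++ "-" ++ show m ++ "-" ++ show d

main :: IO ()
main = putStrLn (url (Blog 2011 9 19))
```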



Recent developments

I have ported ivy-web from the wai backend to the snap-server backend, and also wrote a sample project corresponding to the starter project of Snap. Once everything is stable and I have time, I will upload the code and bump the version to 0.2.

Further reading

5.2.9 rss2irc

Report by: Simon Michael
Status: beta

rss2irc is an IRC bot that polls a single RSS or Atom feed and announces new items to an IRC channel, with options for customizing output and behavior. It aims to be an easy-to-use, dependable bot that does its job and creates no problems.

rss2irc was published in 2008 by Don Stewart. Simon Michael took over maintainership in 2009, with the goal of making a robust low-maintenance bot to stimulate development in various free/open-source software communities. It is currently used for several full-time bots, including:

hackagebot — announces new hackage releases in #haskell

hledgerbot — announces hledger commits in #ledger

zwikicommitbot — announces Zwiki commits in #zwiki

squeaksobot — announces Squeak and Smalltalk-related Stack Overflow questions in #squeak

squeakquorabot — announces Squeak/Smalltalk-related Quora questions in #squeak

etoystrackerbot — announces new Etoys bugs in #etoys

etoysupdatesbot — announces Etoys commits in #etoys

planetzopebot — announces new planet.zope.org posts in #zope

The project is available under the BSD license from its home page at http://hackage.haskell.org/package/rss2irc . Since the last report there has been a great deal of cleanup and enhancement, but no new release on Hackage yet, due to an xml-related memory leak.

Further reading

http://hackage.haskell.org/package/rss2irc

5.3 Haskell and Compiler Writing

5.3.1 UUAG

Report by: Arie Middelkoop
Participants: ST Group of Utrecht University
Status: stable, maintained

UUAG is the Utrecht University Attribute Grammar system. It is a preprocessor for Haskell that makes it easy to write catamorphisms, i.e., functions that do to any data type what foldr does to lists. Tree walks are defined using the intuitive concepts of inherited and synthesized attributes, while keeping the full expressive power of Haskell. The generated tree walks are efficient in both space and time.

An AG program is a collection of rules, which are pure Haskell functions between attributes. Idiomatic tree computations are neatly expressed in terms of copy, default, and collection rules. Attributes themselves can masquerade as subtrees and be analyzed accordingly (higher-order attributes). The order in which to visit the tree is derived automatically from the attribute computations. The tree walk is a single traversal from the perspective of the programmer.

Nonterminals (data types), productions (data constructors), attributes, and rules for attributes can be specified separately, and are woven and ordered automatically. These aspect-oriented programming features make AGs convenient to use in large projects.

The system is in use by a variety of large and small projects, such as the Utrecht Haskell Compiler UHC (→ 3.3), the editor Proxima for structured documents (http://www.haskell.org/communities/05-2010/html/report.html#sect6.4.5), the Helium compiler (http://www.haskell.org/communities/05-2009/html/report.html#sect2.3), the Generic Haskell compiler, UUAG itself, and many master student projects. The current version is 0.9.39 (October 2011), is extensively tested, and is available on Hackage. Recently, we improved the Cabal support and ensured compatibility with GHC 7.

We are working on the following enhancements of the UUAG system:

First-class AGs We provide a translation from UUAG to AspectAG (→ 5.3.2). AspectAG is a library of strongly typed Attribute Grammars implemented using type-level programming. With this extension, we can write the main part of an AG conveniently with UUAG and use AspectAG for (dynamic) extensions. Our goal is to have an extensible version of the UHC.

Ordered evaluation We have implemented a variant of Kennedy and Warren (1976) for ordered AGs. For any absolutely non-circular AG, this algorithm finds a static evaluation order, which solves some of the problems we had with an earlier approach for ordered AGs. A static evaluation order allows the generated code to be strict, which is important to reduce memory usage when dealing with large ASTs. The generated code is purely functional, does not require type annotations for local attributes, and the Haskell compiler proves that the static evaluation order is correct.

Multi-core evaluation Our algorithm for ordered AGs identifies statically which subcomputations of the children of a production are independent and suitable for parallel evaluation. Together with the strict evaluation mentioned above, which is important when evaluating in parallel, the generated code can automatically exploit multi-core CPUs. We are currently evaluating the effectiveness of this approach.

Stepwise evaluation In the recent past we worked on a stepwise evaluation scheme for AGs. Using this scheme, the evaluation of a node may yield user-defined progress reports, and the evaluation up to the next report is considered an evaluation step. By asking nodes to yield reports, we can encode the parallel exploration of trees and encode breadth-first search strategies.

We are currently also running a Ph.D. project that investigates incremental evaluation.

Further reading

http://www.cs.uu.nl/wiki/bin/view/HUT/AttributeGrammarSystem

http://hackage.haskell.org/package/uuagc
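To illustrate the attribute-grammar terminology in plain Haskell (a hand-written sketch, not actual UUAG output): an inherited attribute flows down the tree as a function argument, a synthesized attribute flows back up as the result, and both are computed in a single traversal.

```haskell
-- A tree whose semantics we define attribute-grammar style.
data Tree = Leaf Int | Node Tree Tree

-- The "semantic function" for Tree: it receives the inherited attribute
-- (the depth of the node) and returns the synthesized attribute
-- (each leaf value paired with its depth).
semTree :: Tree -> Int -> [(Int, Int)]
semTree (Leaf x)   depth = [(x, depth)]
semTree (Node l r) depth = semTree l (depth + 1) ++ semTree r (depth + 1)

main :: IO ()
main = print (semTree (Node (Leaf 1) (Node (Leaf 2) (Leaf 3))) 0)
```

UUAG generates this plumbing from separate attribute declarations and rules, so adding a second attribute does not require rewriting the traversal.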

5.3.2 AspectAG

Report by: Marcos Viera
Participants: Doaitse Swierstra, Wouter Swierstra
Status: experimental

AspectAG is a library of strongly typed Attribute Grammars implemented using type-level programming.

Introduction

Attribute Grammars (AGs), a general-purpose formalism for describing recursive computations over data types, avoid the trade-off which arises when building software incrementally: should it be easy to add new data types and data type alternatives, or to add new operations on existing data types? However, AGs are usually implemented as a pre-processor, leaving e.g. type checking to later processing phases and making interactive development, proper error reporting, and debugging difficult. Embedding AGs into Haskell as a combinator library solves these problems.

Previous attempts at embedding AGs as a domain-specific language were based on extensible records, and thus exploited Haskell's type system to check the well-formedness of the AG, but fell short in compactness and in the possibility to abstract over often-occurring AG patterns. Other attempts used a very generic mapping for which AG well-formedness could not be statically checked. We present a typed embedding of AGs in Haskell satisfying all these requirements. The key lies in using HList-like typed heterogeneous collections (extensible polymorphic records) and expressing AG well-formedness conditions as type-level predicates (i.e., type-class constraints). By further type-level programming we can also express common programming patterns, corresponding to the typical use cases of monads such as Reader, Writer, and State. The paper presents a realistic example of type-class-based type-level programming in Haskell.

We have included support for local and higher-order attributes. Furthermore, a translation from UUAG to AspectAG has been added to UUAGC as an experimental feature.

Current Status

We have recently added a combinator agMacro to provide support for “attribute grammar macros”: a mechanism that makes it easy to define attribute computations in terms of already existing attribute computations.

Background

The approach taken in AspectAG was proposed by Marcos Viera, Doaitse Swierstra, and Wouter Swierstra in the ICFP 2009 paper “Attribute Grammars Fly First-Class: How to do aspect oriented programming in Haskell”. The Attribute Grammar Macros combinator is described in a technical report: UU-CS-2011-028.

Further reading

http://www.cs.uu.nl/wiki/bin/view/Center/AspectAG

LQPL (Linear Quantum Programming Language) is a functional quantum programming language inspired by Peter Selinger's paper “Towards a Quantum Programming Language”. The LQPL system consists of a compiler, a GUI-based front end, and an emulator. Compiled programs are loaded into the emulator by the front end. LQPL incorporates a simple module/include system (more like C's include than Haskell's import), predefined unitary transforms, quantum control and classical control, algebraic data types, and operations on purely classical data.

The largest difference since the previous release of the package is that LQPL is now split into separate modules. These consist of:

The compiler — available at the command line and via a TCP/IP interface.

The emulator — available as a server via a TCP/IP interface.

The front end — with version 0.9, the front end is written as a Java/Swing application, which connects to both the compiler and the emulator via TCP/IP. Further front ends are being contemplated.

During the modification to create these separate modules, Hspec was used to verify that the interfaces worked as designed.

Quantum programming allows us to provide a fair coin toss, as shown in the code example below:

qdata Coin = {Heads | Tails}

toss ::( ; c:Coin) =
{ q = |0>;
  Had q;
  measure q of
    |0> => {c = Heads}
    |1> => {c = Tails}
}

This allows programming of probabilistic algorithms, such as leader election. Separation into modules is a preparatory step for improving the performance of the emulator and adding optimization features to the language.

Further reading

http://pll.cpsc.ucalgary.ca/lqpl/index.html

6 Development Tools

6.1 Environments

EclipseFP is a set of Eclipse plugins for working on Haskell code projects. It features Cabal integration (a .cabal file editor, use of Cabal settings for compilation, and installation of Cabal packages from within the IDE) and GHC integration. Compilation is done via the GHC API, and syntax coloring uses the GHC lexer. Other standard Eclipse features like code outline, folding, and quick fixes for common errors are also provided. HLint suggestions can be applied in one click. EclipseFP also allows launching GHCi sessions on any module, including extensive debugging facilities. It uses BuildWrapper to bridge between the Java code for Eclipse and the Haskell APIs. It also provides a full package and module browser to navigate the Haskell packages installed on your system, integrated with Hackage. The source code is fully open source (Eclipse License) on github, and anyone can contribute.

The current version is 2.2.4, released in March 2012 and supporting GHC 7.0 and above; further versions with additional features are planned and actively worked on. Feedback on what is needed is welcome! The website has information on downloading binary releases and getting a copy of the source code. Support and bug tracking is handled through the Sourceforge forums.

Further reading

http://eclipsefp.github.com/

ghc-mod is a backend command to enrich Haskell programming in editors such as Emacs and Vim. The ghc-mod package on Hackage includes the ghc-mod command and the Emacs front-end. The Emacs front-end provides the following features:

Completion You can complete the name of a keyword, module, class, function, type, language extension, etc.

Code template You can insert a code template according to the position of the cursor. For instance, “module Foo where” is inserted at the beginning of a buffer.

Syntax check Code lines with error messages are automatically highlighted thanks to flymake. You can display the error message of the current line in another window. hlint can be used instead of GHC to check Haskell syntax.

Document browsing You can browse the module documentation of the current line either locally or on Hackage.

Expression type You can display the type/information of the expression under the cursor. (new)

There are two Vim plugins: ghcmod-vim and syntastic.

Further reading

http://www.mew.org/~kazu/proj/ghc-mod/en/

A new major version of Heat has appeared, which

works on top of GHCi instead of Hugs,

supports automatic QuickCheck property testing,

uses a simple model of updating Haskell files in place,

is distributed as a single jar file.

Heat is an interactive development environment (IDE) for learning and teaching Haskell. Heat was designed for novice students learning the functional programming language Haskell. It provides a small number of supporting features and is easy to use. Heat is portable, small, and works on top of a Haskell interpreter. Heat provides the following features:

An editor for a single module with syntax highlighting and matching brackets.

Display of the status of compilation: non-compiled; compiled with or without error.

An interpreter console that highlights the prompt and error messages.

If compilation yields an error, the relevant source line is highlighted, and no further expressions can be evaluated in the console until the source has been changed and successfully recompiled.

A tree structure providing a program summary, giving definitions of types and the types of functions.

Automatic checking of either Boolean or QuickCheck properties of a program, with results shown in the summary.

Further reading

http://www.cs.kent.ac.uk/projects/heat/

6.1.4 HaRe — The Haskell Refactorer Report by: Simon Thompson Participants: Huiqing Li, Chris Brown, Claus Reinke See: http://www.haskell.org/communities/05-2011/html/report.html#sect5.1.5.

6.2 Documentation

6.2.1 Haddock

Report by: David Waern
Status: experimental, maintained

Haddock is a widely used documentation-generation tool for Haskell library code. Haddock generates documentation by parsing and typechecking Haskell source code directly, including documentation supplied by the programmer in the form of specially formatted comments in the source code itself. Haddock has direct support in Cabal (→ 6.6.1), and is used to generate the documentation for the hierarchical libraries that come with GHC, Hugs, and nhc98 (http://www.haskell.org/ghc/docs/latest/html/libraries) as well as the documentation on Hackage.

The latest release is version 2.9.4, released October 3, 2011. Recent changes:

Support for GHC 7.2 and Alex 3.x

New --qual flag for qualification of names

Print doc coverage information to stdout

Speed up generation of the index

Various bug fixes

Future plans

Although Haddock understands many GHC language extensions, we would like it to understand all of them. Currently there are some constructs you cannot comment, like GADTs and associated type synonyms.

Error messages are an area with room for improvement. We would like Haddock to include accurate line numbers in markup syntax errors.

On the HTML rendering side we want to make more use of Javascript in order to make the viewing experience better. The frames mode could be improved this way, for example.

Finally, the long-term plan is to split Haddock into one program that creates data from sources, and separate backend programs that use that data via the Haddock API. This will scale better, not requiring new backends to be added to Haddock for every tool that needs its own format.

Further reading

Haddock's homepage: http://www.haskell.org/haddock/

Haddock's developer Wiki and Trac: http://trac.haskell.org/haddock

Haddock's mailing list: haddock@projects.haskell.org

6.2.2 lhs2TeX

Report by: Andres Löh
Status: stable, maintained

This tool by Ralf Hinze and Andres Löh is a preprocessor that transforms literate Haskell or Agda code into LaTeX documents. The output is highly customizable by means of formatting directives that are interpreted by lhs2TeX. Other directives allow the selective inclusion of program fragments, so that multiple versions of a program and/or document can be produced from a common source. The input is parsed using a liberal parser that can interpret many languages with a Haskell-like syntax.

The program is stable and can take on large documents. The current version is 1.17; there has not been a new release since the last report. The development repository and bug tracker are on GitHub. There are still plans for a rewrite of lhs2TeX with the goal of cleaning up the internals and making the functionality of lhs2TeX available as a library.

Further reading

http://www.andres-loeh.de/lhs2tex

https://github.com/kosmikus/lhs2tex
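A small example of the formatting directives mentioned above (a sketch of common lhs2TeX usage; the exact TeX produced depends on the style files you include):

```
%include polycode.fmt

%format alpha = "\alpha "

\begin{code}
identity :: alpha -> alpha
identity x = x
\end{code}
```

Here every occurrence of the identifier alpha in the code block is typeset as the Greek letter, while the code itself remains valid literate Haskell.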

6.3 Testing and Analysis

6.3.1 shelltestrunner

Report by: Simon Michael
Status: stable

shelltestrunner was first released in 2009, inspired by the test suite in John Wiegley's ledger project. It is a command-line tool for repeatable functional testing of command-line programs or shell commands. It reads simple declarative tests specifying a command, some input, and the expected output, error output, and exit status. Tests can be run selectively, in parallel, with a timeout, in color, and/or with differences highlighted.

In the last six months, shelltestrunner has had three releases (1.0, 1.1, 1.2) and acquired a home page. Projects using it include hledger, yesod, berp, and eddie. shelltestrunner is free software released under GPLv3+, available from Hackage or http://joyful.com/shelltestrunner .

Further reading

http://joyful.com/repos/shelltestrunner
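For illustration, a test file in the declarative format described above might look like this (a sketch of the version-1 syntax: a command line, optional `<<<` input, expected `>>>` output, and a `>>>=` exit status):

```
# upper-case the input
tr a-z A-Z
<<<
hello
>>>
HELLO
>>>= 0
```

Running `shelltest` on a directory of such files executes each command and compares its actual output and exit status against the expectations.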

6.3.2 hp2any

Report by: Patai Gergely
Status: experimental

This project was born during the 2009 Google Summer of Code under the name “Improving space profiling experience”. The name hp2any covers a set of tools and libraries to deal with heap profiles of Haskell programs. At the present moment, the project consists of three packages:

hp2any-core: a library offering functions to read heap profiles during and after a run, and to perform queries on them.

hp2any-graph: an OpenGL-based live grapher that can show the memory usage of local and remote processes (the latter using a relay server included in the package), and a library exposing the graphing functionality to other applications.

hp2any-manager: a GTK application that can display graphs of several heap profiles from earlier runs.

The project also aims at replacing hp2ps by reimplementing it in Haskell and possibly adding new output formats. The manager application shall be extended to display and compare the graphs in more ways, to export them in other formats, and also to support live profiling directly instead of delegating that task to hp2any-graph. Recently, the hp2any project joined forces with hp2pretty, which resulted in increased performance in the core library.

Further reading

http://www.haskell.org/haskellwiki/Hp2any

http://code.google.com/p/hp2any/

http://gitorious.org/hp2pretty

6.4 Optimization

6.4.1 HFusion Report by: Facundo Dominguez Participants: Alberto Pardo Status: experimental HFusion is an experimental tool for optimizing Haskell programs. The tool performs source to source transformations by the application of a program transformation technique called fusion. The aim of fusion is to reduce memory management effort by eliminating the intermediate data structures produced in function compositions. It is based on an algebraic approach where functions are internally represented in terms of a recursive program scheme known as hylomorphism. We offer a web interface to test the technique on user-supplied recursive definitions and HFusion is also available as a library on Hackage. The last improvement to HFusion has been to accept as input an expression containing any number of compositions, returning the expression which results from applying fusion to all of them. Compositions which cannot be handled by HFusion are left unmodified. In its current state, HFusion is able to fuse compositions of general recursive functions, including primitive recursive functions (like dropWhile or insertions in binary search trees), functions that make recursion over multiple arguments like zip, zipWith or equality predicates, mutually recursive functions, and (with some limitations) functions with accumulators like foldl. In general, HFusion is able to eliminate intermediate data structures of regular data types (sum-of-product types plus different forms of generalized trees). Further reading HFusion publications: http://www.fing.edu.uy/inco/proyectos/fusion

HFusion web interface: http://www.fing.edu.uy/inco/proyectos/fusion/tool

HFusion on Hackage: http://hackage.haskell.org/package/hfusion
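The flavor of the optimization can be seen in a hand-fused example. This is plain Haskell written for illustration, not HFusion's API or output (HFusion derives such results via hylomorphisms):

```haskell
-- The composed pipeline builds an intermediate list; the fused
-- version consumes the input in a single pass with no intermediate
-- data structure.
composed :: [Int] -> Int
composed = sum . map (* 2)      -- 'map' produces an intermediate list

fused :: [Int] -> Int
fused = go 0                    -- single traversal, accumulator in 'go'
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + 2 * x) xs
```

Both definitions compute the same result; the point of fusion is that the second form avoids allocating the list produced by map.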

6.4.2 Optimizing Generic Functions
Report by: José Pedro Magalhães
Participants: Johan Jeuring, Andres Löh
Status: actively developed

See: http://www.haskell.org/communities/11-2010/html/report.html#sect8.5.4.

6.5 Code Management

6.5.1 Darcs

Darcs is a distributed revision control system written in Haskell. In Darcs, every copy of your source code is a full repository, which allows for full operation in a disconnected environment, and also allows anyone with read access to a Darcs repository to easily create their own branch and modify it with the full power of Darcs’ revision control. Darcs is based on an underlying theory of patches, which allows for safe reordering and merging of patches even in complex scenarios. For all its power, Darcs remains a very easy to use tool for every day use because it follows the principle of keeping simple things simple. Our most recent release, Darcs 2.5.2, was in March 2011. We are very close to releasing Darcs 2.8 (the second release candidate is out). Some key changes include support for GHC 7, a faster and more readable darcs annotate, a darcs obliterate -O which can be used to conveniently “stash” patches, and hunk editing for the darcs revert command. Over the longer term, Darcs will emphasise three development priorities:

Improving code quality: this ranges from surface-level improvements such as switching to a uniform coding style, to deeper refactors and a move towards a more principled separation of Darcs subsystems.

Supporting Darcs hosting and GUIs: we aim to provide library code that makes it easier to write hosting sites such as Darcsden and Patch-Tag, or graphical interfaces to Darcs. This work may potentially involve writing prototype hosting code to test our library.

Developing the Darcs 3 theory of patches: we aim specifically to address the conflict-resolution issues that Darcs suffers from.

Darcs is free software licensed under the GNU GPL (version 2 or greater). Darcs is a proud member of the Software Freedom Conservancy, a US tax-exempt 501(c)(3) organization. We accept donations at http://darcs.net/donations.html.

Further reading

http://darcs.net

http://wiki.darcs.net/Development/Priorities

6.5.2 DarcsWatch
Report by: Joachim Breitner
Status: working

DarcsWatch is a tool to track the state of Darcs (→6.5.1) patches that have been submitted to some project, usually by using the darcs send command. It allows both submitters and project maintainers to get an overview of patches that have been submitted but not yet applied. DarcsWatch continues to be used by the xmonad project (→7.8.2), the Darcs project itself, and a few developers. At the time of writing, it was tracking 39 repositories and 4288 patches submitted by 234 users.

Further reading

http://darcswatch.nomeata.de/

http://darcs.nomeata.de/darcswatch/documentation.html

6.5.3 darcsden
Report by: Simon Michael
Participants: Alex Suraci, Simon Michael, Scott Lawrence, Daniel Patterson, Daniel Goran
Status: beta, low activity

http://darcsden.com is a free Darcs (→6.5.1) repository hosting service, similar to patch-tag.com or (in essence) github. The darcsden software is also available (on darcsden) so that anyone can set up a similar service. darcsden is available under the BSD license and was created by Alex Suraci. Alex keeps the service running and fixes bugs, but is mostly focussed on other projects. darcsden has a clean UI and codebase and is a viable hosting option for smaller projects despite occasional glitches. The last Hackage release was in 2010. Other committers have been submitting patches, and the darcsden software is close to becoming a just-works installable darcs web UI for general use.

Further reading

http://darcsden.com

6.5.4 darcsum
Report by: Simon Michael
Status: occasional development; suitable for daily use

darcsum is an Emacs add-on providing an efficient, pcl-cvs-like interface for the Darcs revision control system (→6.5.1). It is especially useful for reviewing and recording pending changes. Simon Michael took over maintainership in 2010, and tried to make it more robust with current Darcs. The tool remains slightly fragile, as it depends on Darcs’ exact command-line output, and needs updating when that changes. Dave Love has contributed a large number of cleanups. darcsum is available under the GPL version 2 or later from http://joyful.com/darcsum. In the last six months darcsum acquired a home page, but there has been little other activity. We are looking for a new maintainer for this useful tool.

Further reading

http://joyful.com/darcsum/

cab is a MacPorts-like maintenance command for Haskell Cabal packages. Parts of this program are wrappers around ghc-pkg, cabal, and cabal-dev. If you are confused by the inconsistencies between ghc-pkg and cabal, or if you want a way to check all outdated packages, or a way to remove outdated packages recursively, this command helps you. cab now provides the “test”, “up”, “genpaths”, and “doc” subcommands.

Further reading

http://www.mew.org/~kazu/proj/cab/en/

6.6 Deployment

6.6.1 Cabal and Hackage
Report by: Duncan Coutts

Background

Cabal is the standard packaging system for Haskell software. It specifies a standard way in which Haskell libraries and applications can be packaged so that it is easy for consumers to use them, or re-package them, regardless of the Haskell implementation or installation platform. Hackage is a distribution point for Cabal packages. It is an online archive of Cabal packages which can be used via the website and client-side software such as cabal-install. Hackage enables users to find, browse and download Cabal packages, plus view their API documentation. cabal-install is the command line interface for the Cabal and Hackage system. It provides a command line program cabal which has sub-commands for installing and managing Haskell packages.

Recent progress

We have had two successful Google Summer of Code projects on Cabal this year. Sam Anklesaria worked on a “cabal repl” feature to launch an interactive GHCi session with all the appropriate pre-processing and context from the project’s .cabal file. Mikhail Glushenkov worked on a feature so that “cabal install” can build independent packages in parallel (not to be confused with building modules within a package in parallel). The code from both projects is available and they are awaiting integration into the main Cabal repository, which we expect to happen over the course of the next few months. The “cabal test” feature which was developed as a GSoC project last summer has matured significantly in the last 6 months, thanks to continuing effort from Thomas Tuegel and Johan Tibell. The basic test interface will be ready to use in the next release, and there has been some progress on the “detailed” test interface. The IHG is currently sponsoring some work on cabal-install. The first fruit of this work is a new dependency solver for cabal-install, which is now included in the development version.
The new solver can find solutions in more cases and produces more detailed error messages when it cannot find a solution. In addition, it is better about avoiding and warning about breaking existing installed packages. We also expect it to be a better basis for other features in future. For more details see the presentation by Andres Löh: http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011/Loeh

The last 6 months have seen significant progress on the new hackage-server implementation with help from many new volunteers, in particular Max Bolingbroke, but also several other people who helped at hackathons and subsequently. The IHG funded Well-Typed to improve package mirroring so that continuous nearly-live mirroring is now possible. We are also grateful to factis research GmbH, who have kindly donated a VM to help the hackage developers test the new server code. We expect to do live mirroring and public beta testing using this server during the next few months.

Looking forward

Users are increasingly relying on Hackage and cabal-install and are increasingly frustrated by dependency problems. Solutions to the variety of problems do exist. It will however take sustained effort to solve them. The good news is that there is the realistic prospect of the new hackage-server being ready in the not too distant future, with features to help monitor and encourage package quality, and the recent work on cabal-install should reduce the frustration level somewhat. The last 6 months have seen a good upswing in the number of volunteers spending their time on Cabal and Hackage, so much so that a clear bottleneck is patch review and integration bandwidth. A similar issue is that many of the long-standing bugs and feature requests require significant refactoring work which many volunteers feel reluctant or unable to do. Assistance in these areas would be very valuable indeed.
We would like to encourage people considering contributing to join the cabal-devel mailing list so that we can increase development discussion and improve collaboration. The bug tracker is reasonably well maintained and it should be relatively clear to new contributors what is in need of attention and which tasks are considered relatively easy.

Further reading

Cabal homepage: http://www.haskell.org/cabal

Hackage package collection: http://hackage.haskell.org/

Bug tracker: http://hackage.haskell.org/trac/hackage/

6.6.2 Portackage — A Hackage Portal
Report by: Andrew G. Seniuk

Portackage (fremissant.net/portackage) is a web interface to all of hackage.haskell.org, which at the time of writing includes some 4000 packages exposing over 17000 modules. There are package and module views, as seen in the screenshots. The package view includes links to the package, homepage, and bug tracker when available. Each name in the module tree view links to the Haddock API page. Control-hovering will show the fully-qualified name in a tooltip. Portackage is only a few days old; imminent further work includes:

Tree branches will be collapsed by default.

Cookies (as well as server DB) will maintain persistent state of which nodes you have open, since this information carries value, both in terms of cost to reconstruct manually, and of personal mnemonics — if nodes were collapsed, you would forget where things were, instead of having them right there filtered out.

A flat list of modules with the filtering text input field would be good, but the full list of modules is too large for the present naive JavaScript. The code itself is mostly Haskell, but is still too green to expose on Hackage.

7 Libraries, Applications, Projects

7.1 Language Features

7.1.1 Conduit
Report by: Michael Snoyman
Status: experimental

While lazy I/O has served the Haskell community well for many purposes in the past, it is not a panacea. The inherent non-determinism with regard to resource management can cause problems in such situations as file serving from a high-traffic web server, where the bottleneck is the number of file descriptors available to a process. Left-fold enumerators have been the most common approach to dealing with streaming data without using lazy I/O. While it is certainly a workable solution, it requires a certain inversion of control to be applied to code. Additionally, many people have found the concept daunting. Most importantly for our purposes, certain kinds of operations, such as interleaving data sources and sinks, are prohibitively difficult under that model. The conduit package was designed as an alternate approach to the same problem. It is based around the concept of a cursor. In particular, we have sources that can be pulled from and sinks that can be pushed to. There’s nothing revolutionary there: this is the same concept powering such low-level approaches as file descriptor I/O. However, we have a few higher-level facilities that make for simpler usage:

Monadic composition allows us to combine simpler components into more complicated actions.

We also have conduits (the namesake of the package), which allow transformations of data. For example, it’s trivial to combine a source which reads from a file and a conduit that decompresses data.

Combined with the resourcet package, we have fully deterministic and exception safe resource handling. The design space is still not fully resolved. The enumerator approach continues to be used and thrive, and alternatives like pipes are in development as well. The community is currently having a very healthy and lively debate about the merits of each approach. It is likely that we will continue to see improvements and refinements. Meanwhile, the team behind conduit feels it is ready to be used today. The Web Application Interface (WAI) and Yesod have both moved over to conduit, and have experienced drastic simplification of the code bases. Conduit has also allowed a much simplified HTTP API in the form of http-conduit. In other words, while the package is relatively young, it has already proven vital for our daily workflow, and we believe that many in the community can benefit from it already. Further reading http://www.yesodweb.com/book/conduits

https://github.com/mezzohaskell/mezzohaskell/blob/master/chapters/libraries/conduit.md
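The pull-based cursor idea behind sources, conduits, and sinks can be sketched in a few lines of plain Haskell. This is a toy model for illustration only, not the actual conduit API, which also handles monadic effects and deterministic resource finalization:

```haskell
-- A toy pull-based stream: a source is either exhausted or yields
-- one value plus the rest of the stream.
newtype Source a = Source (Maybe (a, Source a))

sourceList :: [a] -> Source a                 -- a source
sourceList []       = Source Nothing
sourceList (x : xs) = Source (Just (x, sourceList xs))

mapC :: (a -> b) -> Source a -> Source b      -- a "conduit": transforms a stream
mapC _ (Source Nothing)       = Source Nothing
mapC f (Source (Just (x, k))) = Source (Just (f x, mapC f k))

sinkList :: Source a -> [a]                   -- a sink: consumes the stream
sinkList (Source Nothing)       = []
sinkList (Source (Just (x, k))) = x : sinkList k
```

Composing the three stages, `sinkList (mapC (*2) (sourceList [1,2,3]))`, pulls one element at a time through the transformation, which is what lets the real library bound resource usage.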

7.1.2 Free Sections
Report by: Andrew G. Seniuk

Free sections (package freesect) extend Haskell (or other languages) to better support partial function application. The package can be installed from Hackage and runs as a preprocessor. Free sections can be explicitly bracketed, or usually the groupings can be inferred automatically.

zipWith ( f _ $ g _ z ) xs ys -- context inferred

= zipWith _[ f _ $ g _ z ]_ xs ys -- explicit bracketing

= zipWith (\ x y -> f x $ g y z ) xs ys -- after the rewrite
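The rewritten form in the last line is ordinary Haskell, so its behavior can be checked directly. The definitions of f, g, and z below are arbitrary stand-ins chosen for this sketch; they are not part of the freesect package:

```haskell
-- The lambda that the free section  zipWith ( f _ $ g _ z ) xs ys
-- rewrites to, instantiated with some arbitrary example functions.
rewritten :: [Int] -> [Int] -> [Int]
rewritten xs ys = zipWith (\x y -> f x $ g y z) xs ys
  where
    f = (+ 1)   -- arbitrary example definitions
    g = (*)
    z = 3
```

Here `rewritten [1,2] [10,20]` computes `[f 1 (g 10 3), f 2 (g 20 3)]`, i.e. the wildcards are filled in lexical order.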

Free sections can be understood by their place in a tower of generalisations, ranging from simple function application, through usual partial application, to free sections, and to named free sections. The latter (where _ wildcards include identifier suffixes) have the same expressivity as a lambda function wrapper, but the syntax is more compact and semiotic. Although the rewrite provided by the extension is simple, there are advantages of free sections relative to explicitly written lambdas: lambda forces the programmer to invent fresh names for the wildcards

lambda forces the programmer to repeat those names, and place them correctly

freesect wildcards stand out vividly, indicating where the awaited expressions will go

reading the lambda requires visual pattern-matching between left and right sides

lambda is longer overall, and prefaces the expression of interest with boilerplate. On the other hand, the lambda (or named free section) is more powerful than the anonymous free section: it can achieve arbitrary permutations without further ado; but anonymous wildcards preserve their lexical order

it is more expressive when nesting is involved, because the variables are not anonymous. Free sections (like function wrappers generally) are especially useful in refactoring and retrofitting existing code, although once familiar they can also be useful from the ground up. Philosophically, use of this sort of syntax promotes “higher-order programming”, since any expression can so easily be made into a function, in numerous ways, simply by replacing parts of it with freesect wildcards. That this is worthwhile is demonstrated by the frequent usefulness of sections. The notion of free sections emanated from an encompassing research agenda around vagaries of lexical syntax. Immediate plans specific to free sections include: possibly something could be prepared for academic publication

implementing the named free sections extension-extension for completeness

attempting to get it accepted into some project (maybe some Haskell compiler) which handles parsing (my code uses a fork of HSE, and divergence is accruing). Otherwise, it is pretty much a one-off which will be deemed stable in a few months. Maybe I’ll try extending some language which lacks lambdas (or where its lambda syntax is especially unpleasant).

Further reading

fremissant.net/freesect

7.2 Education

7.2.1 Holmes, Plagiarism Detection for Haskell
Report by: Jurriaan Hage
Participants: Brian Vermeer, Gerben Verburg

Holmes is a tool for detecting plagiarism in Haskell programs. A prototype implementation was made by Brian Vermeer under supervision of Jurriaan Hage, in order to determine which heuristics work well. This implementation could deal only with Helium programs. We found that a token stream based comparison and Moss-style fingerprinting work well enough, if you remove template code and dead code before the comparison. Since we compute the control flow graphs anyway, we decided to also keep some form of similarity checking of control-flow graphs (particularly, to be able to deal with certain refactorings). In November 2010, Gerben Verburg started to reimplement Holmes keeping only the heuristics we figured were useful, basing that implementation on haskell-src-exts. A large scale empirical validation has been made, and the results are good. We have found quite a bit of plagiarism in a collection of about 2200 submissions, including a substantial number in which refactoring was used to mask the plagiarism. A paper has been written, but is currently unpublished. The tool will not be made available through Hackage, but will be available free of use to lecturers on request. Please contact J.Hage@uu.nl for more information. We have also implemented a graph-based comparison that computes a near graph-isomorphism, which seems to work really well for comparing control-flow graphs in an inexact fashion. However, it does not scale well enough computationally to be included in the comparison, and is not mature enough to deal with certain easy refactorings. Future work includes a Hare-against-Holmes bash in which Hare users will do their utmost to fool Holmes.
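Moss-style fingerprinting, one of the heuristics mentioned above, can be sketched in plain Haskell. This is a simplified illustration, not the Holmes implementation; the hash function, k-gram size, and window size are arbitrary choices for this example:

```haskell
import Data.Char (ord)
import Data.List (tails)

-- Hashes of all k-grams of a token stream (simple polynomial hash).
kgramHashes :: Int -> String -> [Int]
kgramHashes k s = [ hash (take k t) | t <- tails s, length t >= k ]
  where hash = foldl (\h c -> h * 31 + ord c) 0

-- Winnowing: slide a window of w hashes over the stream and keep each
-- window's minimum; long shared passages then yield shared fingerprints.
winnow :: Int -> [Int] -> [Int]
winnow w hs = dedup [ minimum (take w t) | t <- tails hs, length t >= w ]
  where dedup (x : y : ys) | x == y = dedup (y : ys)
        dedup (x : ys)              = x : dedup ys
        dedup []                    = []

fingerprints :: Int -> Int -> String -> [Int]
fingerprints k w = winnow w . kgramHashes k
```

Two submissions that share a sufficiently long passage share some fingerprints, even when the passage is surrounded by different code.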

The Ideas project (at Open Universiteit Nederland and Utrecht University) aims at developing interactive domain reasoners on various topics. These reasoners assist students in solving exercises incrementally by checking intermediate steps, providing feedback on how to continue, and detecting common mistakes. The reasoners are based on a strategy language, from which feedback is derived automatically. The calculation of feedback is offered as a set of web services, enabling external (mathematical) learning environments to use our work. We currently have a binding with the Digital Mathematics Environment of the Freudenthal Institute (first/left screenshot), the ActiveMath learning system of the DFKI and Saarland University (second/right screenshot), and our own online exercise assistant that supports rewriting logical expressions into disjunctive normal form. We are adding support for more exercise types, mainly at the level of high school mathematics. For example, our domain reasoner now covers simplifying expressions with exponents, rational equations, and derivatives. We have investigated how users can interleave solving different parts of exercises. We have extended our strategy language with different combinators for interleaving, and have shown how the interleaving combinators are implemented in the parsing framework we use for recognizing student behavior and providing hints. Recently, we have focused on designing the Ask-Elle functional programming tutor. This tool lets you practice introductory functional programming exercises in Haskell. The tutor can both guide a student towards developing a correct program, as well as analyse intermediate, incomplete, programs to check whether or not certain properties are satisfied. We are planning to include checking of program properties using QuickCheck, for instance for the generation of counterexamples. 
We have to guide the test-generation process to generate test cases that do not use the part of the program that has yet to be developed. We also want to make it as easy as possible for teachers to add programming exercises to the tutor, and to adapt the behavior of the tutor by disallowing or enforcing particular solutions, and by changing the feedback. Teachers can adapt feedback by annotating the model solutions of an exercise. The tutor has an improved web interface and is used in an introductory FP course at Utrecht University. The feedback services are available as a Cabal source package. The latest release is version 1.0 from September 1, 2011.

Further reading

Online exercise assistant (for logic), accessible from our project page.

Bastiaan Heeren, Johan Jeuring, and Alex Gerdes. Specifying Rewrite Strategies for Interactive Exercises. Mathematics in Computer Science, 3(3):349–370, 2010.

Bastiaan Heeren and Johan Jeuring. Interleaving Strategies. Conference on Intelligent Computer Mathematics, Mathematical Knowledge Management (MKM 2011).

Johan Jeuring, Alex Gerdes, and Bastiaan Heeren. A Programming Tutor for Haskell. To appear in Lecture Notes Central European School on Functional Programming, (CEFP 2011). Try our tutor at http://ideas.cs.uu.nl/ProgTutor/.
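The idea of a strategy language from which feedback is derived can be sketched with a tiny rewriting DSL in plain Haskell. This illustrates the concept only; the types and combinator names are hypothetical and are not the Ideas framework API:

```haskell
-- A rule either applies to a term or fails; strategies combine rules.
type Rule a = a -> Maybe a

data Strategy a
  = Apply (Rule a)                 -- one rewrite step
  | Strategy a :>: Strategy a      -- sequence
  | Strategy a :|: Strategy a      -- choice
  | Many (Strategy a)              -- zero or more repetitions

-- All terms reachable by following a strategy; a tutor can check
-- whether a student's intermediate term occurs along some path,
-- and suggest a next step from the current term.
run :: Strategy a -> a -> [a]
run (Apply r) x = maybe [] (: []) (r x)
run (s :>: t) x = concatMap (run t) (run s x)
run (s :|: t) x = run s x ++ run t x
run (Many s)  x = x : concatMap (run (Many s)) (run s x)
```

For example, with a rule that decrements a positive number, `run (Many (Apply step)) 3` enumerates the intermediate states 3, 2, 1, 0.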

7.3 Parsing and Transforming

7.3.1 The grammar-combinators Parser Library
Report by: Dominique Devriese
Status: partly functional

The grammar-combinators library is an experimental parser library written in Haskell (LGPL license). The library features much of the power of a parser generator like Happy or ANTLR, but with the library approach and most of the benefits of a parser combinator library. The project’s initial release was in September 2010. A paper about the main idea was presented at the PADL’11 conference, and an accompanying technical report with more implementation details is available online. The library is published on Hackage under the name grammar-combinators. The library works with an explicit, typed representation of non-terminals, allowing fundamentally more powerful grammar algorithms, including various grammar analysis, transformation, and pretty-printing algorithms. A disadvantage is that higher-order combinators modelling recursive concepts like many and some require more work to write. The library is currently not yet suited for mainstream use. Performance is not ideal and many real-world features are missing. People interested to work on these topics are very welcome to contact us!

Further reading

http://projects.haskell.org/grammar-combinators/

7.3.2 epub-metadata
Report by: Dino Morelli
Status: stable, actively developed

See: http://www.haskell.org/communities/05-2011/html/report.html#sect6.2.4.

The previous extension for recognizing merging parsers was generalized, so now any kind of applicative and monadic parsers can be merged in an interleaved way. As an example take the situation where many different programs write log entries into a log file, and where each log entry is uniquely identified by a transaction number (or process number) which can be used to distinguish them. E.g., assume that each transaction consists of an a, a b and a c action, and that a digit is used to identify the individual actions belonging to the same transaction; the individual transactions can now be recognized by the parser:

pABC = do d <- mkGram (pa *> pDigit)
          mkGram (pb *> pSym d) *> mkGram (pc *> pSym d)

run (pmMany pABC) "a2a1b1b2c2a3b3c1c3"
Result: "213"
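The expected result can be checked without the library: in this input each action is a letter followed by its transaction digit, and a transaction completes at its c action, so the result is the digit sequence of the c actions in order of appearance (plain Haskell, independent of the parsing library above):

```haskell
-- Digits of the 'c' actions, in order of appearance: the order in
-- which the interleaved transactions complete.
completionOrder :: String -> String
completionOrder (action : d : rest)
  | action == 'c' = d : completionOrder rest
  | otherwise     = completionOrder rest
completionOrder _ = []
```

For the input above the c actions appear as c2, c1, c3, matching the parser's result "213".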



Furthermore, the library now comes with many more examples, in two modules in the Demo directory.

Features

Much simpler internals than the old library (http://haskell.org/communities/05-2009/html/report.html#sect5.5.8).

Combinators for easily describing parsers which produce their results online, do not hang on to the input and provide excellent error messages. As such they are “surprise free” when used by people not fully aware of their internal workings.

Parsers “correct” the input such that parsing can proceed when an erroneous input is encountered.

The library primarily provides the preferred applicative interface, and a monadic interface where this is really needed (which is hardly ever).

No need for the try-like constructs which make writing Parsec-based parsers tricky.

Scanners can be switched dynamically, so several different languages can occur intertwined in a single input file.

Parsers can be run in an interleaved way, thus generalizing the merging and permuting parsers into a single applicative interface. This makes it e.g. possible to deal with white space or comments in the input in a completely separate way, without having to think about this in the parser for the language at hand (provided of course that white space is not syntactically relevant).

Future plans

Since the part dealing with merging is relatively independent of the underlying parsing machinery, we may split it off into a separate package. This will also enable us to make use of different parsing engines when combining parsers in a much more dynamic way. In such cases we want to avoid too many static analyses.

Future versions will contain a check for grammars being not left-recursive, thus taking away the only remaining source of surprises when using parser combinator libraries. This makes the library even better suited for use in teaching environments. Future versions of the library, using even more abstract interpretation, will make use of computed look-ahead information to speed up the parsing process further.

Students are working on a package for processing options which makes use of the merging parsers, so that the various options can be set in a flexible but typeful way.

Contact

If you are interested in using the current version of the library in order to provide feedback on the provided interface, contact <doaitse at swierstra.net>. There is a low volume, moderated mailing list which was moved to <parsing at lists.science.uu.nl> (see also http://www.cs.uu.nl/wiki/bin/view/HUT/ParserCombinators).

7.3.4 Regular Expression Matching with Partial Derivatives
Report by: Martin Sulzmann
Participants: Kenny Zhuo Ming Lu
Status: stable

We are still improving the performance of our matching algorithms. The latest implementation can be downloaded via Hackage.

Further reading

http://hackage.haskell.org/package/regex-pderiv

http://sulzmann.blogspot.com/2010/04/regular-expression-matching-using.html
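The flavor of derivative-based matching can be shown in a few lines of plain Haskell. This sketch uses Brzozowski derivatives rather than the partial (Antimirov) derivatives that regex-pderiv is built on, but the idea is the same: differentiate the regex by each input character and check nullability at the end.

```haskell
-- Regular expressions and Brzozowski-derivative matching.
data Re = Empty          -- matches nothing
        | Eps            -- matches the empty string
        | Chr Char
        | Alt Re Re
        | Seq Re Re
        | Star Re

-- Does the regex accept the empty string?
nullable :: Re -> Bool
nullable Empty     = False
nullable Eps       = True
nullable (Chr _)   = False
nullable (Alt r s) = nullable r || nullable s
nullable (Seq r s) = nullable r && nullable s
nullable (Star _)  = True

-- Derivative: the language of 'deriv c r' is { w | c:w in L(r) }.
deriv :: Char -> Re -> Re
deriv _ Empty     = Empty
deriv _ Eps       = Empty
deriv c (Chr d)   = if c == d then Eps else Empty
deriv c (Alt r s) = Alt (deriv c r) (deriv c s)
deriv c (Seq r s)
  | nullable r    = Alt (Seq (deriv c r) s) (deriv c s)
  | otherwise     = Seq (deriv c r) s
deriv c (Star r)  = Seq (deriv c r) (Star r)

match :: Re -> String -> Bool
match r = nullable . foldl (flip deriv) r
```

For example, `match (Seq (Star (Chr 'a')) (Chr 'b')) "aaab"` holds, since a*b accepts "aaab".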

regex-applicative aims to be an efficient and easy-to-use parsing combinator library for Haskell based on regular expressions. There are several ways in which one can specify what part of the string should be matched: the whole string, a prefix, or an arbitrary part (“leftmost infix”) of the string. Additionally, for prefix and infix modes, one can demand either the longest part, the shortest part, or the first (in the left-biased ordering) part. Finally, other things being equal, submatches are chosen using left bias. Recently the performance has been improved by using a more efficient algorithm for the parts of the regular expression whose result is not used. Example code can be found on the wiki.

Further reading

http://hackage.haskell.org/package/regex-applicative

http://github.com/feuerbach/regex-applicative

7.4 Generic and Type-Level Programming

7.4.1 Unbound
Report by: Brent Yorgey
Participants: Stephanie Weirich, Tim Sheard
Status: actively maintained

Unbound is a domain-specific language and library for working with binding structure. Implemented on top of the RepLib generic programming framework, it automatically provides operations such as alpha equivalence, capture-avoiding substitution, and free variable calculation for user-defined data types (including GADTs), requiring only a tiny bit of boilerplate on the part of the user. It features a simple yet rich combinator language for binding specifications, including support for pattern binding, type annotations, recursive binding, nested binding, set-like (unordered) binders, and multiple atom types.

Further reading

http://byorgey.wordpress.com/2011/08/24/unbound-now-supports-set-binders-and-gadts/

http://byorgey.wordpress.com/2011/03/28/binders-unbound/

http://hackage.haskell.org/package/unbound

http://code.google.com/p/replib/
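The kind of boilerplate Unbound generates can be seen by writing it by hand for a minimal untyped lambda calculus. This is a plain-Haskell sketch with string names, not Unbound's API; the library derives these operations generically and handles freshness more robustly:

```haskell
import Data.List (nub, delete)

data Term = Var String
          | App Term Term
          | Lam String Term
  deriving (Eq, Show)

-- Free variable calculation.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (App s t) = nub (freeVars s ++ freeVars t)
freeVars (Lam x t) = delete x (freeVars t)

-- Naive capture-avoiding substitution: rename the bound variable when
-- it would capture a free variable of the substituted term.
subst :: String -> Term -> Term -> Term
subst x u (Var y)   = if x == y then u else Var y
subst x u (App s t) = App (subst x u s) (subst x u t)
subst x u (Lam y t)
  | y == x              = Lam y t
  | y `elem` freeVars u =
      let y' = y ++ "'"   -- fresh enough for this sketch only
      in Lam y' (subst x u (subst y (Var y') t))
  | otherwise           = Lam y (subst x u t)
```

With Unbound, the `freeVars` and `subst` above (and alpha equivalence) come for free from the binding specification of `Term`.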

7.4.2 FlexiWrap
Report by: Iain Alexander
Status: experimental

A library of flexible newtype wrappers which simplify the process of selecting appropriate typeclass instances, which is particularly useful for composed types. Version 0.1.0 has been released on Hackage, providing support for a more comprehensive range of typeclasses when wrapping simple values, and some documentation. Work is still ongoing to flesh out the typeclass instances available and improve the documentation.
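The general technique FlexiWrap builds on, selecting a typeclass instance by wrapping in a newtype, is already familiar from base's Data.Monoid. This example shows the base wrappers, not FlexiWrap's own API:

```haskell
import Data.Monoid (Sum (..), Product (..))

-- Int has no single canonical Monoid instance; the newtype wrapper
-- chooses which instance is used.
sumOf, productOf :: Int
sumOf     = getSum     (foldMap Sum     [1, 2, 3, 4])  -- additive instance
productOf = getProduct (foldMap Product [1, 2, 3, 4])  -- multiplicative instance
```

FlexiWrap generalizes this pattern so that the appropriate instance can also be selected for composed types, where writing the newtype by hand becomes unwieldy.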

7.4.3 Generic Programming at Utrecht University
Report by: José Pedro Magalhães
Participants: Johan Jeuring, Sean Leather
Status: actively developed

See: http://www.haskell.org/communities/11-2010/html/report.html#sect8.5.3.

7.4.4 A Generic Deriving Mechanism for Haskell
Report by: José Pedro Magalhães
Participants: Atze Dijkstra, Johan Jeuring, Andres Löh, Simon Peyton Jones
Status: actively developed

Haskell’s deriving mechanism supports the automatic generation of instances for a number of functions. The Haskell 98 Report only specifies how to generate instances for the Eq, Ord, Enum, Bounded, Show, and Read classes. The description of how to generate instances is largely informal. As a consequence, the portability of instances across different compilers is not guaranteed. Additionally, the generation of instances imposes restrictions on the shape of datatypes, depending on the particular class to derive. We have developed a new approach to Haskell’s deriving mechanism, which allows users to specify how to derive arbitrary class instances using standard datatype-generic programming techniques. Generic functions, including the methods from six standard Haskell 98 derivable classes, can be specified entirely within Haskell, making them more lightweight and portable. We have implemented our deriving mechanism together with many new derivable classes in UHC (→3.3) and GHC. The implementation in GHC has a more convenient syntax; consider enumeration:

class GEnum a where
  genum :: [a]
  default genum :: (Representable a, Enum' (Rep a)) => [a]
  genum = map to enum'

instance (GEnum a) => GEnum (Maybe a)
instance (GEnum a) => GEnum [a]

These instances are empty, and therefore use the (generic) default implementation. This is as convenient as writing deriving clauses, but allows defining more generic classes. This implementation relies on the new functionality of default signatures, as in genum above, which are like standard default methods but allow for a different type signature.

Further reading

http://www.haskell.org/haskellwiki/Generics
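A self-contained version of this generic enumeration can be written against GHC.Generics as shipped with recent GHC. Note the class is called Generic there rather than the Representable of the text; this sketch covers enumeration types without constructor fields only:

```haskell
{-# LANGUAGE DefaultSignatures, DeriveGeneric, TypeOperators,
             FlexibleContexts #-}
import GHC.Generics

class GEnum a where
  genum :: [a]
  default genum :: (Generic a, Enum' (Rep a)) => [a]
  genum = map to enum'

-- Enumerate the generic representation: one case per constructor.
class Enum' f where
  enum' :: [f p]

instance Enum' U1 where                               -- nullary constructor
  enum' = [U1]
instance (Enum' f, Enum' g) => Enum' (f :+: g) where  -- constructor choice
  enum' = map L1 enum' ++ map R1 enum'
instance Enum' f => Enum' (M1 i c f) where            -- metadata wrapper
  enum' = map M1 enum'

data Color = Red | Green | Blue
  deriving (Show, Eq, Generic)

instance GEnum Color   -- empty: uses the generic default
```

Evaluating `genum :: [Color]` produces the constructors in declaration order, just as a hand-written Enum-style instance would.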

7.5 Proof Assistants and Reasoning

The Haskell Equational Reasoning Model-to-Implementation Tunnel (HERMIT) is an NSF-funded project being run at KU (→9.11), which aims to improve the applicability of Haskell-hosted semi-formal models to high-assurance development. Specifically, HERMIT will use: a Haskell-hosted DSL; the worker/wrapper transformation; and a new refinement user interface to perform rewrites directly on Haskell Core, the GHC internal representation. This project is a substantial case study of the application of worker/wrapper on larger examples. In particular, we want to demonstrate the equivalences between efficient Haskell programs and their clear specification-style Haskell counterparts. In doing so there are several open problems, including refinement scripting and management scaling issues, data representation and presentation challenges, and understanding the theoretical boundaries of the worker/wrapper transformation. The project started in Spring 2012 and is expected to run for two years. Neil Sculthorpe, who got his PhD from the University of Nottingham in 2011, has joined as a senior member of the project, and Andrew Farmer and Ed Komp round out the team. We have already reworked the KURE DSL (http://www.haskell.org/communities/11-2008/html/report.html#sect5.5.7) as the basis of our rewrite capabilities, and constructed the rewrite kernel. The entire system uses the GHC plugin architecture, and we have small examples successfully being transformed through a simple REPL. A web-based API is being constructed, and an Android version is planned. We aim to write up a detailed introduction of our architecture and implementation for the Haskell Symposium.

Further reading

http://www.ittc.ku.edu/csdl/fpg/Tools/HERMIT
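The worker/wrapper transformation at the heart of the project can be illustrated on a textbook example. This is plain Haskell written for illustration, not HERMIT output; establishing the equivalence of two such definitions is exactly the kind of fact HERMIT is designed to help with:

```haskell
-- Specification: clear but quadratic, because (++) re-traverses
-- the partial result at every step.
revSpec :: [a] -> [a]
revSpec []       = []
revSpec (x : xs) = revSpec xs ++ [x]

-- After worker/wrapper: the wrapper 'revFast' calls a worker that
-- carries an accumulator, giving a linear-time program with the
-- same meaning as the specification.
revFast :: [a] -> [a]
revFast xs = work xs []              -- wrapper
  where
    work []       acc = acc          -- worker
    work (y : ys) acc = work ys (y : acc)
```

The transformation factors the efficient program into a wrapper with the original type and a worker with a richer type (here, the extra accumulator argument).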

7.5.2 Automated Termination Analyzer for Haskell

Report by: Jürgen Giesl
Participants: Matthias Raffelsieper, Peter Schneider-Kamp, Stephan Swiderski, René Thiemann
Status: actively developed

See: http://www.haskell.org/communities/05-2011/html/report.html#sect7.6.1.

7.5.3 HTab

HTab is an automated theorem prover for hybrid logics based on a tableau calculus. It handles hybrid logic with nominals, satisfaction operators, converse modalities, universal modalities, the down-arrow binder, and role inclusion.

The main changes in version 1.6.0 are the switch to a better blocking mechanism, called pattern-based blocking, and a general effort to reduce and clean up the source code (removing some features in the process) to facilitate further experiments. HTab is available on Hackage and comes with sample formulas to illustrate its input format.

Further reading

http://code.google.com/p/intohylo/
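As a rough sketch of the connectives listed above (this datatype is hypothetical and not HTab's actual internal representation), a Haskell AST for hybrid-logic formulas might look like:

```haskell
-- Hypothetical formula type covering the connectives mentioned above;
-- HTab's real representation may differ.
data Form
  = Prop String        -- propositional symbol
  | Nom  String        -- nominal: true at exactly one world
  | Neg  Form
  | Conj Form Form
  | Dia  String Form   -- <r> f : diamond over relation r
  | IDia String Form   -- <r-> f : converse modality
  | At   String Form   -- @i f : satisfaction operator
  | Down String Form   -- down-arrow binder
  | E    Form          -- existential dual of the universal modality
  deriving (Eq, Show)

-- Example: @i <r> i, "from world i, relation r can reach i itself".
example :: Form
example = At "i" (Dia "r" (Nom "i"))
```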

7.5.4 Free Theorems for Haskell

Report by: Janis Voigtländer
Participants: Daniel Seidel

Free theorems are statements about program behavior derived from (polymorphic) types. Their origin is the polymorphic lambda calculus, but they have also been applied to programs in more realistic languages like Haskell. Since there is a semantic gap between the original calculus and modern functional languages, the underlying theory (of relational parametricity) needs to be refined and extended. We aim to provide such new theoretical foundations, as well as to apply the theoretical results to practical problems.

The research grant that sponsored Daniel's position has been extended for another round of funding. However, we are currently both consumed by teaching the (by local definition, imperative) introductory programming course here at U Bonn, in C (yes, in C), plus an advanced functional programming course, in Haskell.

On the practical side, we maintain a library and tools for generating free theorems from Haskell types, originally implemented by Sascha Böhme and with contributions from Joachim Breitner and now Matthias Bartsch. Both the library and a shell-based tool are available from Hackage (as free-theorems and ftshell, respectively). There is also a web-based tool at http://www-ps.iai.uni-bonn.de/ft/. Features include:

three different language subsets to choose from

equational as well as inequational free theorems

relational free theorems as well as specializations down to function level

support for algebraic data types, type synonyms and renamings, type classes

plain text, LaTeX source, PDF, and inline graphics output with nicely typeset theorems

Further reading

http://www.iai.u
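As a concrete illustration (not generated by the tools above), the free theorem for any function of type [a] -> [a] states that map g . f = f . map g for every g; this instance can be checked on sample data:

```haskell
-- Any f :: [a] -> [a] satisfies the free theorem: map g . f = f . map g.
-- A sample such f, chosen arbitrarily for illustration:
f :: [a] -> [a]
f = reverse . take 3

-- Check the theorem for g = (* n) on a given input list.
check :: Int -> [Int] -> Bool
check n xs = map g (f xs) == f (map g xs)
  where g = (* n)
```

The free-theorems library derives such statements automatically from the type alone, without inspecting the definition of f.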