Testing is important, and it’s hard to get people to do as much of it as they should. Testing tools matter because the smoother the process is, the more tests people will write.

Especially in the functional programming world, most of the talk about testing tools is focused on tools for property-based testing, like the various and sundry quickcheck-style systems. These are great, but sometimes, you don’t want to write down properties – what you want is to write your tests in terms of simple, concrete scenarios.

We’ve recently added support for what we call expect tests, a style of test designed for exactly this kind of concrete-scenario testing. Expect tests let you write test scenarios without manually writing out the output generated by the code you’re testing. Instead, that output is captured and recorded automatically for you, in a way that makes it easy to integrate into the source of the test.

Our expect tests were inspired by Mercurial’s unified test format. Unified tests are designed for testing command-line tools like hg, and so are specialized to the shell.

Here’s an example using cram, an implementation of the unified test idea that is independent of Mercurial. Let’s say we want to test the UNIX sort command. We might start by writing a test file, simple.t, that looks like this:

    Dump some lines into a file

      $ cat > colors << HERE
      > red
      > yellow
      > green
      > HERE

    sort the file and dump to stdout

      $ sort colors

If you then run cram on this, it will report that the test failed, presenting the failure as a diff.

    expect-test $ cram simple.t
    !
    --- simple.t
    +++ simple.t.err
    @@ -10,5 +10,11 @@
     sort the file and dump to stdout
       $ sort colors
    +  green
    +  red
    +  yellow

It also creates a new file, simple.t.err, which contains the output of running the script, intermixed with the script itself. You can accept the new version just by moving the .err file over the original.

    mv simple.t.err simple.t

If you run cram now, you’ll see that the tests pass.

    $ cram simple.t
    .
    # Ran 1 tests, 0 skipped, 0 failed.

If we break the test somehow, the diff will show us exactly what failed. For example, if we replace sort with cat, here’s what cram will show us:

    bash-3.2$ cram simple.t
    !
    --- simple.t
    +++ simple.t.err
    @@ -10,7 +10,7 @@
     sort the file and dump to stdout
       $ cat colors
    -  green
       red
       yellow
    +  green
    # Ran 1 tests, 0 skipped, 1 failed.

Note how easy the diff makes it to see exactly how your test failed.

With the development of Iron last year, we began using cram tests pretty extensively for command-line programs. We found them to be a very productive idiom, but one that’s awkward to apply outside of the command-line domain. That’s why we started thinking about how to get the benefits of cram in OCaml.

Breaking out of the shell

Unified tests are great for three reasons:

- they let you combine the scenario, the output of that scenario, and explanatory comments into one readable file

- they help you construct that file automatically

- they display test failures as easy-to-interpret diffs.

None of these advantages is tied to using the shell. To bring this to OCaml, though, we needed to figure out a reasonable way of embedding these tests in an OCaml program, without breaking all of the tooling. We did this by leveraging OCaml’s annotations, which let us get the data we need in place without breaking from the ordinary syntax of an OCaml program. That means that tools like merlin and ocp-indent and editor modes like tuareg will work without incident.

We can write the OCaml analogue of our cram test by creating the following file, named simple.ml:

    open Core.Std

    let%expect_test "simple sort" =
      let sorted =
        List.sort ~cmp:String.compare ["red"; "yellow"; "green"]
      in
      [%sexp_of: string list] sorted
      |> Sexp.to_string_hum
      |> print_endline;
      [%expect {| |}]

Here, let%expect_test introduces a new test and registers it with our inline test framework. [%expect {| |}] marks a section where output is captured, and multiple such declarations can go in a single test.
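Since capture is incremental, each [%expect] is checked against whatever was printed since the previous capture point. As a sketch of what a multi-capture test might look like (the test name, strings, and expected outputs here are invented rather than taken from the example above, and assume the same Core.Std setup):

```ocaml
open Core.Std

(* Hypothetical test with two capture points.  Each [%expect] block is
   compared against the output printed since the previous one. *)
let%expect_test "sort in two steps" =
  print_endline "unsorted: b a";
  [%expect {| unsorted: b a |}];
  List.sort ~cmp:String.compare ["b"; "a"]
  |> String.concat ~sep:" "
  |> print_endline;
  [%expect {| a b |}]
```

Running this behaves just like the single-capture case: any mismatch at either capture point is reported as a diff in the generated corrected file.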

Since we haven’t actually filled in the output, running the test will fail. Here’s the diff it would show.

    open Core.Std

    let%expect_test "simple sort" =
      let sorted =
        List.sort ~cmp:String.compare ["red"; "yellow"; "green"]
      in
      [%sexp_of: string list] sorted
      |> Sexp.to_string_hum
      |> print_endline;
    -  [%expect {| |}]
    +  [%expect {| (green red yellow) |}]

As with cram, a new file will have been generated, in this case called simple.ml.corrected, containing the updated test. And as with cram, you can accept the new results by copying the generated file over the original.

The above example is simple, but expect tests really shine when you start doing bigger and more complicated scenarios. And the ability to do this in ordinary OCaml code means you can use it for a much wider set of applications.

The source hasn’t been released yet, but it will come out as part of our ordinary public release process, and we hope others will give it a spin when it does come out.