Testing naturality

It’s not obvious that the code for µ and η will accurately model the commutativity of the diagrams in Fig. 3 and Fig. 4. Let’s test the naturality:

{-
          I h
   I a -------> I b
    |            |
    | η a        | η b
    |            |
    v            v
   F a -------> F b
          F h
-}

testη :: (Monad' m, Eq (m HV)) => MonadImpl m -> Fun HaskellValue HaskellValue -> [HaskellValue] -> Bool
testη fArr' (Fun _ h) = all
  (\a -> let b = h a in
    (η b . iArr h $ iObj a) == (fArr' h . η a $ iObj a))

and

{-
           F^2 h
   F^2 a ------> F^2 b
     |             |
     | µ a         | µ b
     |             |
     v             v
    F a --------> F b
           F h
-}

testµ :: (Monad' m, Eq (m HV)) => MonadImpl m -> Fun HV HV -> [m (m HV)] -> Bool
testµ _ (Fun _ h) = all
  (\mma ->
    (µ undefined . f2Arr h $ mma) == (fArr h . µ undefined $ mma))

OK. In each test, we take in an arbitrary function h and a list of possible inputs to it (the as, taken point-free), and verify that a property holds for each of them. We also take in a MonadImpl parameter — this is a function (fArr’) that is hardcoded to a specific functor — so that we can run the tests for Lists or Maybes or whatever functor we choose. Don’t worry too much about this parameter.
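To make the squares concrete before testing them abstractly, here’s a hand-rolled spot check of both naturality conditions at the list functor. This is a hypothetical standalone sketch — etaList and muList stand in for η and µ, and fmap h stands in for F h; none of these names come from the article’s code:

```haskell
-- Hypothetical sketch: η and µ instantiated at the list functor.
etaList :: a -> [a]
etaList x = [x]

muList :: [[a]] -> [a]
muList = concat

main :: IO ()
main = do
  let h = show :: Int -> String
  -- η naturality square: η b . h  ==  fmap h . η a
  print ((etaList . h) 42 == (fmap h . etaList) 42)
  -- µ naturality square: µ . fmap (fmap h)  ==  fmap h . µ
  let mma = [[1, 2], [3]] :: [[Int]]
  print ((muList . fmap (fmap h)) mma == (fmap h . muList) mma)
```

Both checks print True: wrapping then mapping agrees with mapping then wrapping, and flattening then mapping agrees with mapping (twice) then flattening.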

The Fun data type is also new. It comes from Test.QuickCheck.Function, and can be used to generate arbitrary functions between types, which is awesome! For example, it might generate the following arbitrary function from String to Int:

{"elephant"->1, "monkey"->1, _->0}

Fig. 2 (reproduced)

Here we use it to generate the function h (see Fig. 2, reproduced above). The natural transformation conditions (that this diagram is commutative) apply for *any* h and a, so the best we can do is generate as many possible hs and as as we can with QuickCheck and verify that the condition holds for each a for each h.

Third, we’re passing undefined to µ. This is because our implementation of µ ignores its first argument (see the section on coding µ); undefined (bottom) inhabits every type, so it passes the typechecker here. We could also pass undefined to η, but since in that test we have actual values for a and b, we use those instead to better mirror the math. (In the µ test we might be passed the empty list, so we wouldn’t have an a or a b — and the empty list is polymorphic over all element types, so we couldn’t just pick one anyway.)
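To see why passing undefined is safe, here’s a tiny standalone sketch (the names are made up for illustration): undefined typechecks at any type, and only diverges if it’s actually evaluated, so a function that never forces its argument runs fine when handed undefined.

```haskell
-- undefined inhabits every type, so this typechecks; and since
-- ignoreFirst never forces its first argument, it never diverges.
ignoreFirst :: a -> Int -> Int
ignoreFirst _ n = n + 1

main :: IO ()
main = print (ignoreFirst undefined 41)  -- prints 42
```

This is exactly the situation in testµ: our µ discards its first argument, so the undefined we pass is never evaluated.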

Fourth, we’ve introduced the type HaskellValue. Again, objects in the category Hask are Haskell *types*, and arrows are functions between those types. Why, then, are we testing h on things called Haskell*Value*s, and not Haskell*Type*s? h might be an arrow from Int to String, but you feed it values of type Int, not the type Int itself. It might seem like we’re splitting hairs here, but levels of indirection are important.

It’d be great if we could have instead written a function of type

testη :: Fun a b -> [a] -> Bool

but I couldn’t find a way to have QuickCheck generate arbitrary values of arbitrary *types*, so we approximate the possible values of types a and b with HaskellValue. Let’s define it as a sum type as follows (see here if you’re not comfortable with Haskell algebraic data types):

data HaskellValue
  = A String
  | B Int
  | C Bool
  | D Char
  | E (Either (Bool, Char, [Bool]) [Maybe (Maybe Int)])
  deriving (Show, Eq)

QuickCheck might generate for us - for example - the following function:

{D 'x'->C True, D 'a'->C True, D 'n'->C False, _->C False}

We’ll need to write Arbitrary, CoArbitrary, and Function instances for HaskellValue in order for QuickCheck to generate the random functions and values. This isn’t as interesting, so we’ll skip over it (see here for implementations of these).
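For a flavor of what’s being skipped, here’s a minimal sketch of just the Arbitrary instance, assuming the standard Test.QuickCheck API (oneof, generate, vectorOf); the CoArbitrary and Function instances have a similar shape. The data type is repeated here so the sketch is self-contained:

```haskell
import Test.QuickCheck

-- (HaskellValue repeated from above so this sketch compiles on its own)
data HaskellValue
  = A String
  | B Int
  | C Bool
  | D Char
  | E (Either (Bool, Char, [Bool]) [Maybe (Maybe Int)])
  deriving (Show, Eq)

instance Arbitrary HaskellValue where
  -- Pick a constructor at random, then fill it with an arbitrary payload;
  -- QuickCheck already supplies Arbitrary for String, Int, Either, etc.
  arbitrary = oneof
    [ A <$> arbitrary
    , B <$> arbitrary
    , C <$> arbitrary
    , D <$> arbitrary
    , E <$> arbitrary
    ]

main :: IO ()
main = do
  vs <- generate (vectorOf 3 (arbitrary :: Gen HaskellValue))
  print (length vs)
```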

We didn’t write tests to ensure that our functors are really functors, though we did write tests for our natural transformations. This is because any Haskell data type that implements the Functor and Applicative typeclasses is already supposed to obey the functor laws.
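Still, if we wanted such a test, a spot check of the two functor laws for lists might look like this (a hypothetical sketch at fixed inputs; a real test would quantify over inputs and functions with QuickCheck, just as the naturality tests do):

```haskell
import Data.Char (toUpper)

-- Law 1: fmap id == id
identityLaw :: Bool
identityLaw = fmap id [1, 2, 3 :: Int] == [1, 2, 3]

-- Law 2: fmap (g . f) == fmap g . fmap f
compositionLaw :: Bool
compositionLaw =
  fmap (toUpper . head) ["ant", "bee"]
    == (fmap toUpper . fmap head) ["ant", "bee"]

main :: IO ()
main = print (identityLaw && compositionLaw)  -- prints True
```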