Reading the re-engineering for testability literature and sitting in on presentations on the topic reveals two interesting points. First, only a small minority of developers currently work on applications that have adequate testability built in, and second (and even more interesting) engineering OO applications for testability is a challenging proposition that consumes much of the time of the best people on the project.

OO/imperative computer languages have no inherent qualities that make applications testable. Building in testability requires adhering to conventions: methods that do only one “thing”, at most one side effect per method (and preferably none), no global variables, dependency injection, inversion of control, and so forth. Dependency injection and inversion of control introduce their own kind of complexity into the application, and the first three practices can only be enforced by convention in OO code, yet they come built in with functional language development.
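As a minimal sketch (module and function names here are hypothetical, not from any project mentioned in this article), dependency injection in F# can be nothing more than passing a function value as a parameter:

```fsharp
// Hypothetical sketch: in F#, a "dependency" can be an ordinary function
// parameter, so no IoC container or mocking library is required.
module OrderProcessing =

    // saveOrder is the injected dependency; the only side effect
    // lives behind that parameter.
    let processOrder (saveOrder: string -> unit) (order: string) =
        saveOrder order
        sprintf "processed %s" order

// A test can inject a pure stand-in instead of a real database writer:
let result = OrderProcessing.processOrder ignore "order-42"
```

Swapping the real writer for `ignore` is the whole of the “mocking” story here.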

C# code written for testability in .NET development starts looking a lot like F#, but looks can be deceiving. In large OO systems, C# testability involves thousands of methods held together by careful planning, developer education, an IoC container, and a mocking library, but ultimately it all depends on the humans writing code staying within the conventions of the testability design. There may even be further layers of software and process complexity just to ensure programmers adhere to those conventions.

So why not build your app in F# to begin with and let the F# compiler enforce testability? In F# functional development pretty much everything is already a function (either a let binding or member binding of a type) or computation expression. If you are re-engineering for testability, instead of incrementally migrating processes to re-engineered OO code written (by convention) to mimic functional code, try incrementally migrating to an F# project with compiler-enforced testability.
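To illustrate (with made-up names), both kinds of bindings are directly callable from test code, with no seams or interfaces required:

```fsharp
// Hypothetical example: the two shapes of F# bindings mentioned above.
module MyMath =
    // A let binding: a plain function, testable by direct call.
    let double x = x * 2

type Counter(start: int) =
    // A member binding of a type: equally a direct call from a test.
    member this.Next = start + 1
```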

Exploiting the built-in testability of F# has been largely overlooked. Perhaps the built-in correctness of functional F# makes testability appear less significant in comparison to the other advantages it has to offer. While much work has been done in developing F# testing tools, much of it has focused on using F# as the test-build language for the other .NET languages. That is very useful, but F# testability spans the unit-integration-regression testing continuum. Let’s go straight to the heart of the matter and advance tools that exploit F#’s testability.

Here’s a place to start: create a Visual Studio add-in to generate function test stubs. I like NUnit and FsUnit, so I’m working them into the design. I’ve also come to prefer fsi signature files as the means of exposing an API to other files and the public, but currently you either have to hand-build the signature file or use the --sig compiler option. There needs to be a way to conveniently generate, properly place, and maintain signature files from within the IDE.
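As a sketch of what such tooling would maintain, here is a signature file matching the MyLib example used later in this article; only the bindings listed in it are exposed to other files:

```fsharp
// MyLib.fsi -- the signature file paired with MyLib.fs.
// Bindings omitted from this file stay private to MyLib.fs.
namespace MyProject

type MyType =
    new : myInt:int * myString:string * myBool:bool -> MyType
    member BoolMember : bool
    member IntMember : int
    member StringMember : string

module MyLib =
    val addToMyType : x:MyType -> MyType
```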

Once you have good signature file maintenance it opens up the next possibility, a “shadow” project containing your core project’s code files, but not the signature files. Why would you want to do this? To reference the shadow project in your test project so it has access to all function bindings, not just the ones exposed by the signature file. (There is likely a more clever way to accomplish the same thing, probably incorporating reflection, but this method follows the principle of least astonishment.)

Of course I’m thinking the shadow project and test project should have as much automated maintenance as possible built in. Stub test methods should be built for every let and member binding, and there should be a mechanism for remembering an exclude list of test methods you choose not to implement, so when you add new bindings you can “update” the test project with new stubs and not recreate the ones you don’t want.

Naming standards are important to get right from the beginning, and coherent test naming is definitely an area for more R&D! (But that would be a whole other article, if not a book.) To begin with, I would like at least one of the available standards to implement the default stub names as “[let module name (.) binding name | type name (.) member name] : test 1”. It’s easy to manually change “test 1” to something more descriptive, if desired. The system should maintain the test methods in sorted order, in keeping with the external NUnit test runner.

The stub code should consist of two FsUnit equality statements: one stubbing the function result to assert equality with default values (zero-length string, zero, false, etc., and higher-order types built from default values), and a second asserting true |> should equal false. (The second assert ensures the developer does something useful with the stub or removes it.)

```fsharp
module NUnitFsUnitLibTest.MyLibTest

open System
open NUnit.Framework
open FsUnit
open MyProject
open MyProject.MyLib

[<Test>]
let ``MyLib.addToMyType: Test 1`` () =
    addToMyType (MyType(0, "", false))
    |> should equal (MyType(0, "", false))
    true |> should equal false

let myType = MyType(0, "", false)

[<Test>]
let ``MyType.BoolMember: Test 1`` () =
    myType.BoolMember |> should equal false
    true |> should equal false

[<Test>]
let ``MyType.IntMember: Test 1`` () =
    myType.IntMember |> should equal 0
    true |> should equal false

[<Test>]
let ``MyType.StringMember: Test 1`` () =
    myType.StringMember |> should equal ""
    true |> should equal false
```

This is just a starting point for exploiting F# testability. I’m sure more sophisticated test-generation systems can (and should!) follow on, perhaps inferring and generating stubs for edge cases, or generating FsCheck stubs.
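For a flavor of what a generated FsCheck stub might look like (the property below is a stock example, not the output of any existing generator):

```fsharp
open FsCheck

// Placeholder property: reversing a list twice yields the original list.
// A stub generator would emit properties like this over the project's
// own bindings, for the developer to refine into real laws.
let ``List.rev: involutive`` (xs: int list) =
    List.rev (List.rev xs) = xs

Check.Quick ``List.rev: involutive``
```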

Testability is a huge hook for promoting F# as a general-purpose language. This is an area project managers and enterprise executives can get interested in. What is a better allocation of their expensive programmer resources? Learning and maintaining umpteen design patterns all held together by convention, or writing naturally maintainable and testable code from the beginning?

Those tending more toward functional purism should also find automating functional test code generation interesting. Because of inherent correctness, the immediate payoff of finding more bugs faster is not as great as it would be for OO/imperative languages, but near-ready-made regression unit tests make the testability truly built in and go a long way toward protecting against software fragility.
