Often our web applications need to make their own web requests to other third-party applications. These requests present many opportunities for failure, so we'd like to test that our application returns the right messages and failure values (in addition to success values).

With Servant’s type-level API definitions, assuming you’ve already defined the API you want to mock, it’s relatively trivial to create a simple server for the purposes of running tests. For instance, consider an API server that needs to get data out of Elasticsearch. Let’s first define the Elasticsearch server and client using Servant API descriptions:
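A minimal sketch of what these definitions might look like follows; the index and document-type path segments (`myIndex`, `myDocType`) and the helper names (`clientEnv`, `runSearchClient`) are illustrative choices, not fixed by anything above:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TypeOperators #-}

import Data.Aeson (Value)
import Data.Proxy (Proxy (..))
import Data.Text (Text, unpack)
import Network.HTTP.Client (defaultManagerSettings, newManager)
import Servant
import Servant.Client

-- We're using Aeson's generic JSON `Value` to make things easier on
-- ourselves, and we're representing only one Elasticsearch endpoint:
type SearchAPI =
  "myIndex" :> "myDocType" :> Capture "docId" Integer :> Get '[JSON] Value

-- Our Servant client function; run with `runClientM`, it returns
-- `Either ClientError Value`:
getDocument :: Integer -> ClientM Value
getDocument = client (Proxy :: Proxy SearchAPI)

-- We can use these helpers when we want to make requests ourselves:
clientEnv :: Text -> Text -> IO ClientEnv
clientEnv esHost esPort = do
  baseUrl <- parseBaseUrl $ unpack $ esHost <> ":" <> esPort
  manager <- newManager defaultManagerSettings
  pure $ mkClientEnv manager baseUrl

runSearchClient :: Text -> Text -> ClientM a -> IO (Either ClientError a)
runSearchClient esHost esPort = (clientEnv esHost esPort >>=) . runClientM
```

Taking the host and port as arguments is what lets us point the same client at a test server later.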

So we’ve got an Elasticsearch server and a client to talk to it. Let’s now build a simple app server that uses this client to retrieve documents. This is somewhat contrived, but hopefully it illustrates the typical three-tier application architecture.

Imagine, then, that this is our real server implementation.

One note: we’re also going to take advantage of lens-aeson here, which may look a bit foreign. The gist of it is that we’re going to traverse a JSON Value from Elasticsearch and try to extract some kind of document to return.
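A sketch of that application tier might look like the following. The `Doc` wrapper and `DocsAPI` type are illustrative names, and we assume client helpers `getDocument :: Integer -> ClientM Value` and `runSearchClient :: Text -> Text -> ClientM a -> IO (Either ClientError a)` built from the Servant client above; `docsApp` takes the Elasticsearch host and port so tests can point it at a local server:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TypeOperators #-}

import Control.Lens ((^?))
import Control.Monad.IO.Class (liftIO)
import Data.Aeson (ToJSON, Value)
import Data.Aeson.Lens (key)
import Data.Proxy (Proxy (..))
import Data.Text (Text)
import GHC.Generics (Generic)
import Servant

-- The document our application returns to its own clients
newtype Doc = Doc { docContent :: Value }
  deriving (Eq, Show, Generic)

instance ToJSON Doc

type DocsAPI = "docs" :> Capture "docId" Integer :> Get '[JSON] Doc

-- Our Handler tries to get a doc from Elasticsearch and then tries to parse
-- it. Unfortunately, there's a lot of opportunity for failure in these
-- actions: we'll either fail to reach the server, fail to parse our
-- document, or we'll return it.
getDocument' :: Text -> Text -> Integer -> Handler Doc
getDocument' esHost esPort docId = do
  docRes <- liftIO $ runSearchClient esHost esPort (getDocument docId)
  case docRes of
    Left _err -> throwError err404 { errBody = "Failed looking up document" }
    Right esValue ->
      -- lens-aeson: traverse the Elasticsearch response and try to
      -- extract the document under its `_source` key
      case esValue ^? key "_source" of
        Nothing     -> throwError err400 { errBody = "Failed parsing document" }
        Just source -> pure $ Doc source

docsApp :: Text -> Text -> Application
docsApp esHost esPort = serve (Proxy :: Proxy DocsAPI) (docServer esHost esPort)

docServer :: Text -> Text -> Server DocsAPI
docServer = getDocument'
```

Note how the two failure modes map onto distinct HTTP errors: a failed client request becomes a 404, and a response we can't extract a document from becomes a 400.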

Testing Our Backend

So the above represents our application and is close to a server we may actually deploy. How then shall we test this application?

Ideally, we’d like it to make requests of a real Elasticsearch server, but we certainly don’t want our tests to trigger requests to a live, production database. In addition, we don’t want to depend on our real Elasticsearch server having specific, consistent results for us to test against, because that would make our tests flaky (and flaky tests are sometimes described as worse than not having tests at all).

One solution to this is to create a trivial Elasticsearch server as part of our testing code. We can do this relatively easily because we already have an API definition for it above. With a real server, we can then let our own application make requests of it and we’ll simulate different scenarios in order to make sure our application responds the way we expect it to.

Let’s start with some helpers which will allow us to run a testing version of our Elasticsearch server in another thread:

```haskell
-- | We'll run the Elasticsearch server so we can test behaviors
withElasticsearch :: IO () -> IO ()
withElasticsearch action =
  bracket
    (liftIO $ C.forkIO $ Warp.run 9999 esTestApp)
    C.killThread
    (const action)

esTestApp :: Application
esTestApp = serve (Proxy :: Proxy SearchAPI) esTestServer

esTestServer :: Server SearchAPI
esTestServer = getESDocument

-- This is the *mock* handler we're going to use. We create it
-- here specifically to trigger different behavior in our tests.
getESDocument :: Integer -> Handler Value
getESDocument docId
  -- arbitrary things we can use in our tests to simulate failure:
  -- we want to trigger different code paths.
  | docId > 1000 = throwError err500
  | docId > 500  = pure . Object $ HM.fromList [("bad", String "data")]
  | otherwise    = pure . Object $ HM.fromList
      [("_source", Object $ HM.fromList [("a", String "b")])]
```

Now, we should be ready to write some tests.

In this case, we’re going to use hspec-wai, which will give us a simple way to run our application, make requests, and make assertions against the responses we receive.

Hopefully, this will simplify our testing code:

```haskell
thirdPartyResourcesSpec :: Spec
thirdPartyResourcesSpec = around_ withElasticsearch $ do
  -- we call `with` from `hspec-wai` and pass it a *real* `Application`
  with (pure $ docsApp "localhost" "9999") $ do
    describe "GET /docs" $ do
      it "should be able to get a document" $
        -- `get` is a function from `hspec-wai`.
        get "/docs/1" `shouldRespondWith` 200
      it "should be able to handle connection failures" $
        get "/docs/1001" `shouldRespondWith` 404
      it "should be able to handle parsing failures" $
        get "/docs/501" `shouldRespondWith` 400
      it "should be able to handle odd HTTP requests" $
        -- we can also make all kinds of arbitrary custom requests to see how
        -- our server responds, using the `request` function:
        --   request :: Method -> ByteString -> [Header]
        --           -> LB.ByteString -> WaiSession SResponse
        request methodPost "/docs/501" [] "{" `shouldRespondWith` 405
      it "we can also do more with the Response using hspec-wai's matchers" $
        -- see also `MatchHeader` and JSON-matching tools as well...
        get "/docs/1" `shouldRespondWith` 200 { matchBody = MatchBody bodyMatcher }

bodyMatcher :: [Network.HTTP.Types.Header] -> Body -> Maybe String
bodyMatcher _ body = case (decode body :: Maybe Value) of
  -- success in this case means we return `Nothing`
  Just val | val == Object (HM.fromList [("a", String "b")]) -> Nothing
  _ -> Just "This is how we represent failure: this message will be printed"
```

Out of the box, hspec-wai provides a lot of useful tools for us to run tests against our application. What happens when we run these tests?

```
$ cabal new-test all
...
GET /docs
  should be able to get a document
  should be able to handle connection failures
  should be able to handle parsing failures
  should be able to handle odd HTTP requests
  we can also do more with the Response using hspec-wai's matchers
```

Fortunately, they all passed! Let’s move to another strategy: whole-API testing.