When writing automated tests, such as unit tests, many of us have a tendency to focus mostly on verifying that our code behaves correctly under ideal conditions — what’s commonly referred to as “the happy path”. In order to avoid flakiness, we might mock the network to always return a successful response, disable any caching mechanisms requiring disk access, and remove other variables that can cause our tests to yield unpredictable results.

While avoiding flakiness and uncertainty is a great thing, ideally we should also test how our code behaves when something goes wrong. How does our networking code deal with the user suddenly going offline, how do our functions respond to being passed invalid data, and what kind of errors do our APIs throw?

This week, let’s take a look at how unit tests can be used not only to verify correct outcomes — but to verify the correctness of the errors that our code can produce as well.

Let’s say that we’re working on an app that provides translations between different languages, and that we have a core class called Translator — which acts as the top-level API for all translation functionality. To translate a piece of text, we pass it as a string, and our translator will either return a translated string or throw an error:

```swift
class Translator {
    func translate(_ text: String) throws -> String {
        ...
    }
}
```

Since we’re in complete control of the above Translator class and all of its functionality, we can quite easily create an exhaustive list of errors that it can throw — and model those using a nested Error enum — like this:

```swift
extension Translator {
    enum Error: Swift.Error, Equatable {
        case emptyText
        case tooManyWords(count: Int)
        case unknownWords([String])
    }
}
```

As you can see above, we’ve made our error type Equatable, and the compiler has auto-synthesized the conformance — since our enum only contains associated values that are equatable as well (Int and [String]). Equatable errors might not be super useful in production code, but they’ll come very much in handy when writing a test that verifies that the correct error is thrown for a given failing condition.
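To illustrate what that synthesized conformance gives us, here’s a quick standalone sketch (using a top-level copy of the enum above, so it can run on its own) showing that two values are only considered equal when both the case and any associated values match:

```swift
// Standalone copy of the article's error enum, to show
// what the compiler-synthesized Equatable conformance does.
enum TranslatorError: Swift.Error, Equatable {
    case emptyText
    case tooManyWords(count: Int)
    case unknownWords([String])
}

// Same case, no associated value:
print(TranslatorError.emptyText == .emptyText) // true

// Same case, equal associated values:
print(TranslatorError.unknownWords(["hej"]) == .unknownWords(["hej"])) // true

// Same case, different associated values:
print(TranslatorError.tooManyWords(count: 10) == .tooManyWords(count: 11)) // false
```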

To do just that, let’s write a test called testEmptyTextError , which will verify that our translator will throw the .emptyText error when passed an empty string. In our test, we’ll call our translate API by wrapping it in a call to XCTAssertThrowsError , which’ll both assert that an error was thrown — and also give us access to that error in a closure — allowing us to capture it, and then later verify it, like this:

```swift
class TranslatorTests: XCTestCase {
    func testEmptyTextError() {
        let translator = Translator()
        var thrownError: Error?

        XCTAssertThrowsError(try translator.translate("")) {
            thrownError = $0
        }

        XCTAssertTrue(
            thrownError is Translator.Error,
            "Unexpected error type: \(type(of: thrownError))"
        )

        XCTAssertEqual(thrownError as? Translator.Error, .emptyText)
    }
}
```

Thanks to Translator.Error being equatable, we can use XCTAssertEqual to match the thrown error against the one that we expect to be thrown — but before we can do so, we need to perform a series of capturing, checking, and type-casting. That can quickly get repetitive, especially if we want to write many more of the above kind of tests.

To make things easier, let’s introduce a new “flavor” of XCTAssert that will perform the above series of operations — given a throwing expression and an equatable error to match against. We’ll implement it as an extension on XCTestCase, to make it easy to use from within our test functions, while preventing us from accidentally exposing it to any other code:

```swift
extension XCTestCase {
    func assert<T, E: Error & Equatable>(
        _ expression: @autoclosure () throws -> T,
        throws error: E,
        in file: StaticString = #file,
        line: UInt = #line
    ) {
        var thrownError: Error?

        XCTAssertThrowsError(try expression(),
                             file: file, line: line) {
            thrownError = $0
        }

        XCTAssertTrue(
            thrownError is E,
            "Unexpected error type: \(type(of: thrownError))",
            file: file, line: line
        )

        XCTAssertEqual(
            thrownError as? E, error,
            file: file, line: line
        )
    }
}
```

As you can see above, when performing each verification we forward the current file name and line number to the underlying assertions. We do that so that we’ll get the file and line number information from the actual call sites within our test functions, which’ll give us better diagnostics in case of a failure.

Using our new assertion, we can now update our test from before to be a lot simpler. It now only has to create an instance of Translator and then invoke the API we wish to test, wrapped in a call to our new assert function — like this:

```swift
class TranslatorTests: XCTestCase {
    func testEmptyTextCausingError() {
        let translator = Translator()

        assert(try translator.translate(""),
               throws: Translator.Error.emptyText)
    }
}
```

In general, writing custom utilities — like the one we made above — can be a great way to make working with tests more fun and productive, while still fundamentally basing everything on the standard XCTAssert family of functions 👍.

While it would be pretty great if all values and errors conformed to Equatable (at least from a testing perspective), that’s often far from how things are in reality. Especially when we’re dealing with nested errors, it can be really hard to set up a structure that even guarantees what type of error will be thrown — let alone that it’s an equatable one.

Let’s take a look at another example. Here we’re building a ModelStorage class that’ll enable us to store and load models to and from disk, looking like this:

```swift
class ModelStorage<Model: Codable & Identifiable> {
    func store(_ model: Model) throws {
        ...
    }

    func load(forID id: Identifier<Model>) throws -> Model {
        ...
    }
}
```

Just like our Translator from before, our ModelStorage class also defines a nested enum that describes what kind of errors it can throw — however, some of those errors are simply wrappers around an error that was thrown by the system, making it hard for the enum to conform to Equatable:

```swift
extension ModelStorage {
    enum LoadingError: Error {
        case missing
        case readingFileFailed(Error)
        case decodingFailed(Error)
    }
}
```

We could of course choose to discard those nested errors, but that could really hurt the debugging experience of any code using ModelStorage — since the underlying cause of some of our errors would be lost. That might not be a tradeoff we’re willing to make to enable easier testing — but thankfully, there’s another way! 😀

Arguably one of the most important aspects of any error is its description — which we’ll often even use as part of our UI code (when, for example, displaying an error to the user through a label) — and since descriptions are equatable strings, we could use them as an easy way to verify errors that aren’t fully equatable.
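As an aside, one way to give our errors such descriptions (a hypothetical addition, not part of the article’s ModelStorage code — shown here on a standalone, top-level copy of the enum) is to conform to Foundation’s LocalizedError protocol, whose errorDescription is what localizedDescription ends up returning:

```swift
import Foundation

// Standalone copy of the article's nested LoadingError enum:
enum LoadingError: Error {
    case missing
    case readingFileFailed(Error)
    case decodingFailed(Error)
}

// Hypothetical conformance giving each case a distinct,
// user-facing description. For LocalizedError types,
// localizedDescription picks up errorDescription automatically.
extension LoadingError: LocalizedError {
    var errorDescription: String? {
        switch self {
        case .missing:
            return "No stored model was found"
        case .readingFileFailed(let error):
            return "Failed to read file: \(error.localizedDescription)"
        case .decodingFailed(let error):
            return "Failed to decode model: \(error.localizedDescription)"
        }
    }
}

print(LoadingError.missing.localizedDescription)
// "No stored model was found"
```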

Let’s try doing that for a test that’ll verify that our ModelStorage class throws the correct error when failing to decode a model instance. The first thing we’ll need is a Codable model mock that’ll always throw a specific error when decoded — like this:

```swift
struct FailingModelMock: Codable, Identifiable {
    let id: Identifier<FailingModelMock>
}

extension FailingModelMock {
    struct Error: Swift.Error {}

    init(from decoder: Decoder) throws {
        throw Error()
    }
}
```

The reason we’ll use a mock for this task is to be able to fully control the original error that was thrown — since a real Decodable implementation would throw an error generated by the Swift standard library. For more on mocking, check out “Mocking in Swift”.

Next, let’s write our test. We’ll start by storing an instance of our FailingModelMock type from above using our ModelStorage class, and then verify that loading that same instance produces the expected error — by comparing the localized descriptions of the actual error and the one we’re expecting:

```swift
class ModelStorageTests: XCTestCase {
    func testDecodingError() throws {
        let storage = ModelStorage<FailingModelMock>()
        let model = FailingModelMock(id: "id")
        try storage.store(model)

        var thrownError: Error?

        XCTAssertThrowsError(try storage.load(forID: model.id)) {
            thrownError = $0
        }

        let expectedError = ModelStorage<FailingModelMock>
            .LoadingError
            .decodingFailed(FailingModelMock.Error())

        XCTAssertEqual(expectedError.localizedDescription,
                       thrownError?.localizedDescription)
    }
}
```

While not quite as type-safe and robust as using proper equatable errors, comparing descriptions can be a great way to easily write tests that verify any kind of error — especially when nested errors are being used.

To gain some additional robustness, however, we might also want to add another test that verifies that not all error cases produce the same description — which could happen if we were to hard-code a single value for all cases in one of our error enums.
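A sketch of such a test (assuming our error enum describes itself through localizedDescription, and using a hypothetical DummyError for the wrapped cases) could collect every case’s description into a Set and check that none of them collide:

```swift
class LoadingErrorDescriptionTests: XCTestCase {
    func testErrorDescriptionsAreDistinct() {
        struct DummyError: Error {}

        // Hypothetical list covering every case of
        // ModelStorage's LoadingError enum:
        let errors: [ModelStorage<FailingModelMock>.LoadingError] = [
            .missing,
            .readingFileFailed(DummyError()),
            .decodingFailed(DummyError())
        ]

        // If any two cases shared a hard-coded description,
        // the set would end up smaller than the array:
        let descriptions = Set(errors.map { $0.localizedDescription })
        XCTAssertEqual(descriptions.count, errors.count)
    }
}
```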

In general, testing asynchronous code is often more difficult than writing an equivalent test for a synchronous API — and the same can be true for verifying asynchronous errors as well.

For example, here we’re verifying that an ArticleLoader produces the correct error result when asked to load an article while there’s no Internet access. Although we’ve already turned our asynchronous API synchronous (like in “Unit testing asynchronous Swift code”), in order to write our test we first need to capture the loaded result, then switch on it, and finally perform an assertion against the wrapped error — like this:

class ArticleLoaderTests: XCTestCase { func testOfflineError() { let session = NetworkSessionMock (error: . offline ) let loader = ArticleLoader (session: session) var result: Result < Article >? loader. loadArticle (withID: "id" ) { result = $0 } switch result { case . success ?: XCTFail ( "No error thrown" ) case . failure ( let error)?: XCTAssertEqual ( error. localizedDescription , NetworkError . offline . localizedDescription ) case nil : XCTFail ( "No result loaded" ) } } }

The above code is using a Result type containing two cases — success and failure . To learn more about such types, check out “The power of Result types in Swift”.

In general it’s a good idea to avoid conditionals, such as switch statements, in our test cases — since we’d ideally want just a single, predictable code path for each test. However, we’ll still need to verify our result somewhere, so let’s write another custom assert function that’ll do just that:

```swift
extension XCTestCase {
    func assert<T>(
        _ result: Result<T>?,
        containsError expectedError: Error,
        in file: StaticString = #file,
        line: UInt = #line
    ) {
        switch result {
        case .success?:
            XCTFail("No error thrown", file: file, line: line)
        case .failure(let error)?:
            XCTAssertEqual(
                error.localizedDescription,
                expectedError.localizedDescription,
                file: file, line: line
            )
        case nil:
            XCTFail("Result was nil", file: file, line: line)
        }
    }
}
```

Just like how we earlier used an assertion to reduce the boilerplate needed to verify synchronous errors, we can now simply use our new assert function to easily verify asynchronous results as well:

class ArticleLoaderTests: XCTestCase { func testOfflineError() { let session = NetworkSessionMock (error: . offline ) let loader = ArticleLoader (session: session) var result: Result < Article >? loader. loadArticle (withID: "id" ) { result = $0 } assert (result, containsError: NetworkError . offline ) } }

Reducing boilerplate can sometimes not only lead to code being easier to write and work with, but also — especially in the case of testing — make our code more robust as well, when we’re able to reduce the number of paths of execution our code contains, like we did with our latest assert function above.

While constantly chasing a 100% test coverage can often yield diminishing returns once we reach a certain level, adding tests not only for when our code executes successfully — but also for when it fails — can really increase our confidence in the code we write, especially when there are multiple conditions that can cause a failure.

Building dedicated test utilities, such as custom assertions, can also be a great way to make tests verifying error paths easier to write — which most often increases the likelihood that we’ll keep writing and maintaining them.

What do you think? Do you currently unit test your error code paths, or is it something you’ll try out? Let me know — along with your questions, comments or feedback — on Twitter @johnsundell.

Thanks for reading! 🚀