Tests can be a bummer to write, but they're an even bigger nightmare to maintain. When we noticed we were putting off simple tasks just because we were afraid to update some monster test case, we started looking for more creative ways to simplify the process of writing and maintaining tests.

In this article I will describe a class-based approach to writing tests.

Before we start writing code let’s set some goals:

Extensive — We want our tests to cover as many scenarios as possible. We hope a solid platform for writing tests will make it easier for us to adapt to changes and cover more ground.

Expressive — Good tests tell a story. Issues become irrelevant and documents get lost, but tests must always pass — this is why we treat our tests as specs. Writing good tests can help newcomers (and our future selves) understand all the edge cases and micro-decisions made during development.

Maintainable — As requirements and implementations change, we want to adapt quickly with as little effort as possible.

Enter Class Based Tests

Articles and tutorials about testing always give simple examples such as add and sub. I rarely have the pleasure of testing such simple functions, so I'll take a more realistic example and test an API endpoint that handles login:

POST /api/account/login

{
    username: <str>,
    password: <str>
}

The scenarios we want to test are:

User logs in successfully.

User does not exist.

Incorrect password.

Missing or malformed data.

User already authenticated.

The input to our test is:

A payload — username and password.

The client performing the action (anonymous or authenticated).

The output we want to test is:

The return value (error or payload).

The response status code.

Side effects (for example, last login date after successful login).

After properly defining the input and output we wish to test we write this base class:

import requests
from unittest import TestCase


class TestLogin:
    """Base class for testing the login endpoint."""

    expected_status_code = 200
    expected_return_payload = {}

    @property
    def client(self):
        return requests.Session()

    @property
    def username(self):
        raise NotImplementedError()

    @property
    def password(self):
        raise NotImplementedError()

    @property
    def payload(self):
        return {
            'username': self.username,
            'password': self.password,
        }

    def setUp(self):
        self.response = self.client.post(
            '/api/account/login',
            json=self.payload,
        )

    def test_should_return_expected_status_code(self):
        self.assertEqual(
            self.response.status_code,
            self.expected_status_code,
        )

    def test_should_return_expected_payload(self):
        self.assertEqual(
            self.response.json(),
            self.expected_return_payload,
        )

We define the input (client and payload) and the expected output (expected_*). The actual login action is performed during test setUp and the response is kept as a member of the class. We added two common test cases — one for the expected status code and one for the expected return value.

The observant reader might notice we raise a NotImplementedError exception from the properties. This way, if the test author forgets to set one of the required values for the test, they get a useful exception.
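To see how that failure surfaces, here is a minimal sketch (the class names are illustrative, not from the real test suite):

```python
class TestBase:
    """Illustrative base class with a required property."""

    @property
    def username(self):
        # Subclasses are expected to provide a username.
        raise NotImplementedError()


class TestForgotUsername(TestBase):
    # The test author forgot to set `username`.
    pass


try:
    TestForgotUsername().username
except NotImplementedError:
    print('Test author forgot to set username')
```

The moment any test touches the missing value, the exception points straight at what was left out.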

Let's use our TestLogin class to write a test for a successful login:

class TestSuccessfulLogin(TestLogin, TestCase):
    username = 'Haki'
    password = 'correct-password'
    expected_status_code = 200
    expected_return_payload = {
        'id': 1,
        'username': 'Haki',
        'full_name': 'Haki Benita',
    }

    def test_should_update_last_login_date_in_user_model(self):
        user = User.objects.get(id=self.response.json()['id'])
        self.assertIsNotNone(user.last_login_date)

By just reading the code we can tell that a username and password are sent. We expect a response with a 200 status and additional data about the user.

We extended the test to also check the last_login_date in our user model. This specific test might not be relevant to all test cases so we add it only to the successful test case.

Let's write some tests for when the login should fail:

class TestInvalidPassword(TestLogin, TestCase):
    username = 'Haki'
    password = 'wrong-password'
    expected_status_code = 401


class TestMissingPassword(TestLogin, TestCase):
    payload = {'username': 'Haki'}
    expected_status_code = 400


class TestMalformedData(TestLogin, TestCase):
    payload = {'username': [1, 2, 3]}
    expected_status_code = 400

A developer who stumbles upon this piece of code will be able to tell exactly what should happen for any type of input. The name of the class describes the scenario and the names of the attributes describe the input. Together, each class tells a story that is easy to read and understand.

The last two tests set the payload directly (without setting username and password). This won’t raise a NotImplementedError because we override the payload property directly, which is the one calling username and password.
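This works because a plain class attribute defined on the subclass shadows the inherited property during attribute lookup. A minimal sketch of the mechanism (the class names here are illustrative):

```python
class LoginTestBase:
    """Illustrative base class mirroring the payload/username properties."""

    @property
    def username(self):
        raise NotImplementedError()

    @property
    def payload(self):
        # `username` is only accessed when the payload is built here.
        return {'username': self.username}


class MissingPasswordCase(LoginTestBase):
    # The class attribute shadows the inherited `payload` property,
    # so the `username` property is never touched.
    payload = {'username': 'Haki'}


print(MissingPasswordCase().payload)  # {'username': 'Haki'}
```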

A good test should help you find where the problem is. Let’s see what a test looks like when it fails:

FAIL: test_should_return_expected_status_code (tests.test_login.TestInvalidPassword)

------------------------------------------------------

Traceback (most recent call last):

File "../tests/test_login.py", line 28, in test_should_return_expected_status_code

self.assertEqual(self.response.status_code, self.expected_status_code)

AssertionError: 400 != 401

------------------------------------------------------

Looking at the failed test report it is clear what went wrong — when the password is invalid we expect status code 401 but we received 400.

Let’s make things a bit harder and test an authenticated user attempting to login:

class TestAuthenticatedUserLogin(TestLogin, TestCase):
    username = 'Haki'
    password = 'correct-password'
    expected_status_code = 400

    @property
    def client(self):
        session = requests.Session()
        session.auth = ('Haki', 'correct-password')
        return session

This time we had to override the client property to authenticate the session.

Putting Our Test To The Test

To illustrate how resilient our new test cases are, let's see how we can modify the base class as we introduce new requirements and changes:

We have made some refactoring and the endpoint changed to /api/user/login:

class TestLogin:

    ...

    def setUp(self):
        self.response = self.client.post(
            '/api/user/login',
            json=self.payload,
        )

Someone decided we could speed things up by using a different serialization format (msgpack, xml, yaml):

class TestLogin:

    ...

    def setUp(self):
        self.response = self.client.post(
            '/api/account/login',
            data=encode(self.payload),
        )

The product guys want to go global and now we need to test different languages:

class TestLogin:
    language = 'en'

    ...

    def setUp(self):
        self.response = self.client.post(
            '/' + self.language + '/api/account/login',
            json=self.payload,
        )

None of the above will break our existing tests.

Profit!

Taking it a step further

A few things to consider when employing this technique.

Speed Things Up

The setUp function is executed once for each test case in the class (test cases are the methods whose names begin with test_*). To speed things up it is better to perform the action once in setUpClass. This changes a few things — for example, the properties we used should be set as attributes on the class or turned into classmethods.

Using Fixtures

When using Django with fixtures the action should go in setUpTestData:

class TestLogin:
    fixtures = ('test/users',)

    @classmethod
    def setUpTestData(cls):
        super().setUpTestData()
        cls.response = cls.get_client().post(
            '/api/account/login',
            json=cls.payload,
        )

Django loads fixtures in setUpTestData, so by calling super the action is executed after the fixtures have been loaded.

Another quick note about Django — I've used the requests package, but Django (and the popular Django REST framework, for that matter) provide their own test clients — django.test.Client and rest_framework.test.APIClient.

Testing Exceptions

When a function can raise an exception we can extend the base class and wrap the action with try … except:

class TestLoginFailure(TestLogin):

    @property
    def expected_exception(self):
        raise NotImplementedError()

    def setUp(self):
        try:
            super().setUp()
        except Exception as e:
            self.exception = e

    def test_should_raise_expected_exception(self):
        self.assertIsInstance(
            self.exception,
            self.expected_exception,
        )

If you are familiar with the assertRaises context manager, you might wonder why I didn't use it here. The reason is that the test should not fail during setUp.

Create Mixins

Test cases are repetitive by nature. With mixins we can abstract parts of common test cases and compose new ones. For example:

TestAnonymousUserMixin — populates the test with an anonymous API client.

TestRemoteResponseMixin — mocks the response from a remote service. This usually looks something like this:

from unittest import mock


class TestRemoteServiceXResponseMixin:
    mock_response_data = None

    @classmethod
    @mock.patch('path.to.function.making.remote.request')
    def setUpTestData(cls, mock_remote):
        mock_remote.return_value = cls.mock_response_data
        super().setUpTestData()
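Mixins like these compose through cooperative super() calls, each one layering its own setup on top of the base class. A minimal, framework-free sketch of the mechanism (all names here are illustrative):

```python
import unittest


class UppercaseMixin:
    """Hypothetical mixin that post-processes the value under test."""

    def setUp(self):
        super().setUp()  # let the base class build the value first
        self.value = self.value.upper()


class ValueTestBase:
    """Hypothetical base class preparing a value and asserting on it."""

    def setUp(self):
        super().setUp()
        self.value = self.raw_value

    def test_should_match_expected(self):
        self.assertEqual(self.value, self.expected)


class TestUppercasedValue(UppercaseMixin, ValueTestBase, unittest.TestCase):
    raw_value = 'haki'
    expected = 'HAKI'
```

Because each setUp defers to super() before doing its own work, the setup steps run base-first and mixins can be stacked in any combination.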

Conclusion

Someone once said that duplication is cheaper than the wrong abstraction, and I couldn't agree more. If your tests do not fit easily into a pattern, this solution is probably not the right one. It's important to carefully decide what to abstract — the more you abstract, the more flexible the tests become. But as parameters pile up in the base class, tests become harder to write and we are back to square one.

Having said that, we found this technique useful in various situations and with different frameworks (such as Tornado and Django). Over time it has proven resilient to change and easy to maintain. This is what we set out to achieve, and we consider it a success!