I feel there is a tendency to forget that we, or somebody else, will have to maintain our code weeks, months or even years after its creation. Sooner or later a bug or a new feature will force you to adapt your codebase long after you have forgotten the details of what it does.

Testing is like leaving live documentation of what your code is expected to do.

Types of tests

I will be focusing on unit tests in particular, but just to give you an idea:

Unit — Testing the functionality of a single function, usually using fake data when the code depends on external sources like databases;

Integration — Let us say you have a connection to a database; integration tests verify the data you are actually getting through that connection;

End-to-end — Used for applications with a UI; the test basically simulates what a user would do.

The distribution usually follows the well-known test pyramid: end-to-end tests are the fewest you write and unit tests the most.

Following that distribution, let us say you have implemented all three types of tests and have 10 tests in total: 1 would be end to end, 2 would be integration and the remaining 7 would be unit tests.

You don’t have to follow this metric faithfully; it is just a good guideline to have.

You can find an extensive read about testing in general in an article by Martin Fowler.

Why test

It’s important to test your project’s code for reasons like:

Logic is documented — Self-explanatory: just by reading the tests you get an overview of what the code is expected to do;

Faster debugging — If something goes wrong you get instant feedback by seeing the tests fail;

Less to worry about in new features — When developing new features that can impact existing code, you can change the logic around knowing that if you break any part of your code the tests will tell you by failing.

There are some arguable cons, like:

It takes more time to develop — I refute this by saying that the time you save developing and debugging with tests outweighs the time you would otherwise spend;

One more thing for a team to learn — There is usually quite a big tech stack in projects and testing can end up being “yet one more thing I have to learn” *rolls eyes*.

Jargon

Some of the terms that you usually see associated with tests are:

Assertions — This is what you are trying to prove in your test. If you are testing for equality and for the type of a returned value, one would say that test has two assertions. Each test can have as many assertions as you like, but try to keep each test to a few assertions for simplicity and readability;

Spy — A spy wraps a real method that your code under test depends on, recording how it was called (arguments, number of calls) while still letting the original implementation run;

Fixtures — Code or files that simulate a certain state of your application, so that you always have a fixed environment in which to repeat your tests whenever you want to evaluate certain states;

Stubs — You want to limit the amount of external influence on your test, so to have more control we replace a method that the code under test requires with a simulated one, instead of using the original. For example, if we were using a third-party library to concatenate, we would replace that library’s method with one of ours;

Mocks — These are the hardcoded values you create yourself for the test to work; one example would be a sample of a server response to be used by the test.
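To make these terms concrete, here is a minimal hand-rolled sketch (the greet/fetchUser names are made up for illustration; in a real project you would typically get spies and stubs from a library such as sinon):

```javascript
// Code under test: formats a greeting for a user fetched elsewhere.
function greet(fetchUser, id) {
  const user = fetchUser(id)
  return 'Hello, ' + user.name + '!'
}

// Mock: a hardcoded value standing in for a real server response.
const mockUser = { id: 1, name: 'Ada' }

// Stub: a simulated method used instead of the real one,
// so the test does not depend on a network or database.
function fetchUserStub(id) {
  return mockUser
}

// Spy: wraps a function and records how it was called,
// while still delegating to the wrapped implementation.
function spyOn(fn) {
  function spy(...args) {
    spy.calls.push(args)
    return fn(...args)
  }
  spy.calls = []
  return spy
}

const spiedFetch = spyOn(fetchUserStub)
console.log(greet(spiedFetch, 1))    // Hello, Ada!
console.log(spiedFetch.calls.length) // 1 — greet called the fetcher once
```

Note how the three pieces compose: the stub removes the external dependency, the mock gives it a predictable return value, and the spy lets the test assert that the dependency was actually used.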

You can have a more extensive explanation with examples here.

Example

For the example I will be using tape, a lightweight library with modules to extend it for unit tests, but you have alternatives like jest, jasmine and mocha.

Ideally you should try a TDD (test-driven development) or similar approach, where you start by creating the test and only then write your code. When this is not possible and the methods you are testing are already implemented, make sure you change your method so that you see your test fail. If the test never fails then you are testing nothing.

Let us assume we want to create a utility file named strings that will have a method for lowercase and another to concatenate. We create our files.

I prefer the tests-near-the-file-being-tested approach, but some people keep all the tests in a separate folder; I leave it up to you to decide which approach to take.

In our test file we create our first test with two assertions: one to check that everything ran “ok” and the other to call the lowercase function and compare the result with the expected hardcoded value.

const test = require('tape')
const stringUtils = require('./strings.js')

test('lowercase should make everything in the original string lower case', function (t) {
  const result = stringUtils.lowercase('Testing lowerCase')
  const expected = 'testing lowercase'

  t.ok(result)
  t.deepEqual(result, expected)
  t.end()
});

This is the point where we run our test and see it fail miserably, since lowercase is defined nowhere. So we go to our strings.js and export a lowercase method.

function lowercase(value) {
  return value.toLowerCase()
}

module.exports = {
  lowercase,
}

And if we run our test again it should show a success! Hurray! A first test passing!

Now we think “what if I send a number to that method?”. In that case we create another test and implement according to our expected behaviour; here I want an error to be thrown when an invalid type is sent to our method.

The test

test('if we send a number it should throw an error from lowercase method', function (t) {
  const result = stringUtils.lowercase.bind(null, 1)

  t.throws(result)
  t.end()
})

The changed function after we see the test fail

function lowercase(value) {
  if (typeof value !== 'string') {
    throw new TypeError('lowercase expects a string')
  }

  return value.toLowerCase()
}

A good thing to also do is change a random part of the code being tested, so that you can make sure your tests are actually useful. In this scenario you could change the throw into a return, for example.
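For instance, here is a sketch of a deliberately broken variant of lowercase that returns the value untouched instead of producing an error — with this in place, the t.throws assertion above should go red, proving the test is exercising the error path:

```javascript
// Sabotaged on purpose: invalid input is returned untouched
// instead of producing an error, so the t.throws test should fail.
function lowercase(value) {
  if (typeof value !== 'string') {
    return value // was an error path — the test should now go red
  }
  return value.toLowerCase()
}
```

Once you have watched the test fail, revert the sabotage and confirm it passes again.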

Note that tests in tape must call either t.plan([assertionsNumber]) or t.end(), otherwise the tests will hang.

For our method concatenate we want to test that it concatenates and that, when you send an object, it is concatenated as {} and not as [object Object]. You can try it yourself or just see a possible solution below.

The test

test('concatenate should return the two strings concatenated', function (t) {
  const result = stringUtils.concatenate('test', 'concat')
  const expected = 'test concat'

  t.deepEqual(result, expected)
  t.end()
})

test('concatenate should consider {} as a string', function (t) {
  const result = stringUtils.concatenate('test', {})
  const expected = 'test {}'

  t.deepEqual(result, expected)
  t.end()
});

The function

function concatenate(start, end) {
  start = typeof start === 'string'
    ? start
    : JSON.stringify(start)
  end = typeof end === 'string'
    ? end
    : JSON.stringify(end)

  return `${start} ${end}`
}

With the exports now with the new method

module.exports = {
  lowercase,
  concatenate,
}

These are “trivial” tests made for simplicity, and you should add as many as you feel necessary for each method. “The more the merrier”, as long as they are useful (meaning they test something core to the method and are not just there to “have more assertions”).

As the application grows we can add multiple test files, and we can run them all by using the following:

tape ./utils/*.test.js
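In practice you would typically wire that glob into an npm script (this sketch assumes tape is installed as a devDependency), so the whole suite runs with npm test:

```json
{
  "scripts": {
    "test": "tape './utils/*.test.js'"
  }
}
```

Quoting the glob lets npm pass it through to tape unexpanded, which behaves more consistently across shells.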

Demo of this example here

For existing projects you can start by adding tests for new features, and for older features refactor the code while adding tests as you go; eventually the tests will propagate to most of your codebase, if not all.

And remember to always see the test fail

Code coverage

There are tools like istanbul to check how much of your code is being tested. I don’t like to focus too much on that metric, since it can drive you to change your code not because you need to but just because you want 100% coverage; you want to develop new, tested features without spending time adapting the code just to reach 100%.

It’s better to have 10% of your code tested than to have nothing. Nonetheless, if you want to impose a minimum coverage, I would say 70% is a good place to start.
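If you do want to enforce such a threshold, coverage runners built on istanbul support it. A sketch using nyc (the istanbul command-line client), assuming it is installed as a devDependency; the "nyc" key in package.json configures it:

```json
{
  "scripts": {
    "test": "nyc tape './utils/*.test.js'"
  },
  "nyc": {
    "check-coverage": true,
    "lines": 70
  }
}
```

With check-coverage enabled, the test run exits with a non-zero status whenever line coverage drops below 70%, which is handy in CI.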