It’s a truth that ought to be more widely acknowledged: if your job involves buying or building software, sooner or later you’re going to have to get involved in testing.

You might be a designer who needs to check the app works exactly as you specified it. You might be a project manager checking that what goes out the door won’t have you blushing for shame in front of your customer. You might be a client needing to check that you got what you paid for. There are hundreds of reasons and occasions when a basic knowledge of how to test can come in handy.

I was fortunate in starting my career with a big consultancy firm. They considered that testing was an essential tool in our kit bag, and included plenty of it in their graduate training programme.

The hyper-formal testing methods I used back in the nineties don’t get much of an outing these days. But the basic principles that underpin them inform my current testing tactics, and have prevented many an embarrassing bug getting in front of users.

So if you’re new to testing, here are some tips to get you started.

Don’t check things work. Check you can’t break them.

Most development teams will be able to deliver software that works under the most common scenarios. But users are brilliant at doing the unexpected: having unusual names, trying to load the wrong file format, typing things wrong or too fast.

And what most dev teams don’t do, in my experience, is test all these different scenarios. And that's where most of the bugs will be.

So if you want to find those problems before your users do, you need to sit down and think of all the ways you might possibly make the system go wrong. Then you need to try them out.

Test for common problems

Even though we've been building software for decades, we still get the same things wrong, time and time again. A checklist can help you catch these issues in your testing.

Zero, one and many.

Designers usually design, and coders usually code, for a few rows. They don’t always plan for the zero and one scenarios. So for every table, list, and import file, check what the system does with no rows, one row, and many rows.
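If your team does automate checks, those three cases take seconds to try. A minimal sketch, where render_table() is a hypothetical stand-in for whatever actually draws your list or table:

```python
# A minimal sketch of 'zero, one and many' checks. render_table()
# is invented here for illustration - not any real framework's API.

def render_table(rows):
    if not rows:
        return "<p>No items found</p>"  # the zero case needs its own design
    cells = "".join(f"<tr><td>{r}</td></tr>" for r in rows)
    return f"<table>{cells}</table>"

# Try the boundaries, not just the happy path.
for rows in ([], ["only item"], ["a", "b", "c", "d", "e"]):
    print(len(rows), "->", render_table(rows)[:40])
```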

Loads of stuff.

Your designers might have planned for this - adding pagination or ‘lazy loads’ or other features that do something different when there’s too much information to fit on a screen. Or they might not. So check.

Crash the fields.

For any field that has a size limit, check you can put in data up to that limit, and no more. Tip: create data that looks like “Axxx xxxxx xxxxx xxxxx xxxxxZ”, where the whole piece of text is exactly the maximum allowed length. If the Z is chopped off, you have a problem.
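Building that probe string by hand is fiddly and easy to get off by one, so it’s worth generating it. A quick sketch (the 30-character limit is just an example):

```python
def boundary_probe(max_len):
    """Build an 'Axx...xZ' string exactly max_len characters long."""
    if max_len < 2:
        raise ValueError("need at least 2 characters for the A and Z markers")
    return "A" + "x" * (max_len - 2) + "Z"

probe = boundary_probe(30)  # for a field with a 30-character limit
print(probe, len(probe))    # paste it in, then check the Z survived
```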

For fields that don’t have a size limit, throw in a massive piece of text and see what happens to the display. (Google for ‘lorem ipsum generator’ to create big blocks of text quickly.)

Unusual characters.

Apostrophes in names are frequently a problem (and not exactly unusual). But try Scandinavian and Asian names too. (Google for ‘test data name generator’ to make up names easily.)
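A hand-picked starter list, if you want something quicker than a generator (the names here are chosen for illustration):

```python
# Names that routinely expose encoding, escaping and validation bugs.
tricky_names = [
    "O'Brien",               # apostrophe - trips up naive SQL and HTML escaping
    "Anne-Marie",            # hyphen
    "Søren Åberg",           # Scandinavian letters
    "Nguyễn Thị Minh Khai",  # Vietnamese diacritics
    "李小龙",                 # Chinese characters
    "van der Berg",          # multi-word surname
]
for name in tricky_names:
    print(repr(name))  # paste each one into every name field you have
```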

Validation.

Structured data of any kind - from names and address formats to business-specific data - is usually checked on input. This is to make sure that only good data reaches your database.

Some developers get a bit carried away with this. For instance, demanding that all addresses have a city, or that surnames be only one word. Other developers don't bother validating, so total crap hits your database. And nearly all of them write really terrible error messages: 'validation error' might be accurate, but it's not helpful to the user.
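To make that concrete, here's a sketch of a surname check that stays permissive and returns a message the user can act on. (validate_surname() and its rules are invented for illustration, not a standard.)

```python
import re

def validate_surname(surname):
    """Return an error message the user can act on, or None if valid."""
    surname = surname.strip()
    if not surname:
        return "Please enter a surname."
    if len(surname) > 100:
        return "Surnames can be at most 100 characters long."
    # Letters (including accented ones), spaces, hyphens and apostrophes:
    # 'van der Berg' and "O'Brien" are real names too.
    if not re.fullmatch(r"[^\W\d_]+(?:[ '\-][^\W\d_]+)*", surname):
        return "Surnames can only contain letters, spaces, hyphens and apostrophes."
    return None

print(validate_surname("O'Brien"))       # passes
print(validate_surname("van der Berg"))  # passes
print(validate_surname(""))              # a helpful message, not 'validation error'
```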

File formats.

If you’re importing data - e.g. from a CSV file - check it works with all the formats you expect, created on all the machines you’d expect. (Macs don’t do their CSVs the same as Windows, for instance.)
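Line endings are one of the classic differences: Windows CSVs end lines with \r\n, while older Mac software wrote bare \r. A quick sketch of checking both, using Python's csv module (which copes with either, provided the stream is opened with newline='' as its docs require):

```python
import csv
import io

windows_csv = "name,qty\r\nwidget,3\r\n"   # \r\n line endings (Windows)
old_mac_csv = "name,qty\rwidget,3\r"       # bare \r (classic Mac exports)

for raw in (windows_csv, old_mac_csv):
    # newline='' leaves the line endings untranslated for csv to handle
    rows = list(csv.reader(io.StringIO(raw, newline='')))
    print(rows)  # both variants should parse to the same two rows
```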

As you test your own systems, add to this list, to speed your testing up next time. Because the ways you can break software are pretty much endless. So…

Plan your testing in advance

Write scripts for yourself, describing what you will do and what data you will use. I use a simple document for this, rather than a spreadsheet or more formal template.

I start by listing all the scenarios as headings (e.g., customer buys one item, three items, adds one item and abandons basket, adds fourth item after starting checkout, etc…)

Then under each heading, I write the testing actions, using a rough ‘given, when, then’ format. E.g. "given I have one particular item in my basket, when I hit checkout, I go to the checkout page, and these buttons are enabled, and these ones aren’t, and the totals are right, and the fields displayed are…"
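If you later automate some of these scenarios, the same ‘given, when, then’ shape carries straight over into code. A toy sketch (the Basket class here is invented for illustration, not a real API):

```python
class Basket:
    """Toy stand-in for a real shopping basket."""
    def __init__(self):
        self.items = []
    def add(self, name, pence):
        # Prices held in pence to avoid floating-point rounding surprises
        self.items.append((name, pence))
    def total_pence(self):
        return sum(p for _, p in self.items)

# Given I have one particular item in my basket...
basket = Basket()
basket.add("blue widget", 999)
# ...when I check the totals, then they are right.
assert basket.total_pence() == 999
print("scenario passed")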

I also add visual checklists to remind myself to check things like screen copy, images, links, etc.

Planning in advance means you don’t have to stop and think when you’re testing. You often don’t get much time to test (software is always delivered late) so make the most of your preparation time.

Prepare your data in advance

As you write your scripts, research the data you need. Either add it to the script or save it as test data files. This speeds up testing enormously because you don’t need to keep stopping to find or make up the right kind of data.

Planning your tests also means you can repeat them quickly to check problems are fixed.

Take a lot of notes and screenshots.

As I test, I make notes on the test plan document. I record what worked and what didn't. When I hit a problem, I try to write down exactly what I did, repeating the scenario several times to check I've noted it correctly. I also take lots of screenshots. These notes and images help your developers recreate the issue and fix it faster.

Leave time for the random factor too

A lot of software is tested by other software these days. This is a beautiful thing and means that software is tested more often and more thoroughly than it was when I was a developer in the nineties.

However, while automated testing is great at checking individual bits of code work, it isn't perfect. The testing code can itself have bugs. And it’s useless at reproducing human creativity (or stupidity). So just try stuff and see what happens.

Create a central place for issues.

Even if it’s just you and your WordPress developer, life will be simpler if you create an issue log. You don’t need to invest in a fancy bug tracker: Trello or Asana are fine.

When you add new issues, include the notes and screenshots from your testing so the developers can see exactly what you did.

Create columns or sections to organise your log. It's usual to keep new issues separate from things being worked on by the developer, things they want you to retest, and things that have been fixed. Use labels or ordering to prioritise issues, so the developers work on the most important things first.

Know your limits

An informal approach to testing works fine for most situations. But it wouldn’t work for something like a financial trading system or an emergency services call centre.

If faults in your software could result in people losing unaffordable amounts of money, or suffering life-changing injuries, get help from a professional testing consultant.

Accept no software will be bug free.

You can't test everything, however hard you try. Do the best you can in the time you have. Focus your effort on making the software as good as it can be to do its job, whatever that is.

You can’t fix all the bugs you find, either. Prioritise what has to be fixed before launch and deal with the rest as and when you can.

And keep your issue tracker open. Customers find bugs too.

If you found this post useful, you might also like 4 Types of Software Testing and When You Should Use Them, a useful overview from the chaps at Process.St.