In this article, I look at unit tests using two unit test harnesses that work in C. Along the way, I will also discuss some of the common terminology of automated unit testing. Let me start by discussing the fundamental tool, the test harness.

What Is A Unit Test Harness?

A unit test harness is a software package that allows a programmer to express how production code should behave. A unit test harness's job is to provide these capabilities:

A common language to express test cases

A common language to express expected results

Access to the features of the production code programming language

A place to collect all the unit test cases for the project, system, or subsystem

A mechanism to run the test cases, either in full or in partial batches

A concise report of the test suite success or failure

A detailed report of any test failures

I'll shortly look at two popular harnesses for testing embedded C. They are both easy to use and are descendants of the xUnit family of unit test harnesses.

First, I'll employ Unity, a C-only test harness. Later, I will use CppUTest, a unit test harness written in C++, but not requiring C++ knowledge to use. You'll find that the bulk of the material in this article can be applied using any test harness.

Here are a few terms that will come in handy while reading this explanation:

Code under test is just like it sounds; it is the code being tested.

Production code is code that is (or will be) part of the released product.

Test code is code that is used for testing the production code and is not part of the released product.

A test case is test code that describes the behavior of code under test. It establishes the preconditions and checks that significant post conditions are met.

A test fixture is code that provides the proper environment for a series of test cases that exercise the code under test. A test fixture will assist in establishing a common setup and environment for exercising the production code.

To take the mystery out of these terms, let's look at a few tests for something we've all used: sprintf. For this first example, sprintf is the code under test; it is production code.

sprintf is good for a first example because it is a standalone function, which is the most straightforward kind of function to test. The output of a standalone function is fully determined by the parameters passed immediately to the function. There are no visible external interactions and no stored state to get in the way. Each call to the function is independent of all previous calls.
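Before introducing a test harness, the claim above can be checked with nothing more than assert.h: sprintf's output depends only on its arguments, so every call is repeatable. This is a minimal plain-C sketch, and the helper name format_greeting is mine, not part of any library.

```c
/* Plain-C illustration (assert.h, no test harness) of why sprintf is
   easy to test: output is fully determined by the parameters, and each
   call is independent of all previous calls. */
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: format a greeting into buf, return the length
   that sprintf reports. */
static int format_greeting(char *buf, const char *name)
{
    return sprintf(buf, "Hello %s\n", name);
}
```

Calling format_greeting twice with the same argument must produce identical results; that repeatability is exactly what makes a standalone function the easiest kind of code to test.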

Unity: A C-Only Test Harness

Unity is a straightforward, small unit test harness. It comprises just a few files. Let's get familiar with Unity and unit tests by looking at a couple of example test cases. If you are a long-time Unity user, you'll notice some additional macros that are helpful when you are not using Unity's scripts to generate a test runner.

A test should be short and focused. Think of it as an experiment that silently does its work when it passes, but makes some noise when it fails. This test checks that sprintf handles a format spec with no format operations.

TEST(sprintf, NoFormatOperations)
{
    char output[5];
    TEST_ASSERT_EQUAL(3, sprintf(output, "hey"));
    TEST_ASSERT_EQUAL_STRING("hey", output);
}

The TEST macro defines a function that is called when all tests are run. The first parameter is the name of a group of tests. The second parameter is the name of the test. We'll look at TEST in more detail later.

The TEST_ASSERT_EQUAL macro compares two integers. sprintf should report that it formatted a string of length three, and if it does, the TEST_ASSERT_EQUAL check succeeds. As is the case with most unit test harnesses, the first parameter is the expected value.

TEST_ASSERT_EQUAL_STRING compares two null-terminated strings. This statement declares that output should contain the string "hey". Following convention, the first parameter is the expected value.

If either of the checked conditions is not met, the test will fail. The checks are performed in order, and the TEST will terminate on the first failure.

Notice that TEST_ASSERT_EQUAL_STRING could pass by accident: if output just happened to hold the string "hey", the test would pass without sprintf doing a thing. Yes, this is unlikely, but we had better improve the test and initialize output to the empty string.

TEST(sprintf, NoFormatOperations)
{
    char output[5] = "";
    TEST_ASSERT_EQUAL(3, sprintf(output, "hey"));
    TEST_ASSERT_EQUAL_STRING("hey", output);
}

The next TEST challenges sprintf to format a string with %s.

TEST(sprintf, InsertString)
{
    char output[20] = "";
    TEST_ASSERT_EQUAL(12, sprintf(output, "Hello %s\n", "World"));
    TEST_ASSERT_EQUAL_STRING("Hello World\n", output);
}

A weakness in both the preceding tests is that they do not guard against sprintf writing past the string terminator. The following tests watch for output buffer overruns by filling the output with a known value and checking that the character after the terminating null is not changed.

TEST(sprintf, NoFormatOperations)
{
    char output[5];
    memset(output, 0xaa, sizeof output);
    TEST_ASSERT_EQUAL(3, sprintf(output, "hey"));
    TEST_ASSERT_EQUAL_STRING("hey", output);
    TEST_ASSERT_BYTES_EQUAL(0xaa, output[4]);
}

TEST(sprintf, InsertString)
{
    char output[20];
    memset(output, 0xaa, sizeof output);
    TEST_ASSERT_EQUAL(12, sprintf(output, "Hello %s\n", "World"));
    TEST_ASSERT_EQUAL_STRING("Hello World\n", output);
    TEST_ASSERT_BYTES_EQUAL(0xaa, output[13]);
}

If you're worried about sprintf corrupting memory in front of output, we could make output one character bigger and pass &output[1] to sprintf. Checking that output[0] is still 0xaa would be a good sign that sprintf is behaving itself.
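The front-and-back guard idea can be sketched in plain C without the harness. This is a minimal sketch using assert.h; the buffer layout and the function name guarded_sprintf_stays_in_bounds are my own illustration, not Unity code.

```c
/* Guard-byte sketch: surround the target area with bytes holding a
   known pattern (0xaa); if sprintf stays in bounds, both guards
   survive untouched. */
#include <stdio.h>
#include <string.h>

static int guarded_sprintf_stays_in_bounds(void)
{
    char buffer[22];                      /* front guard + 20 chars + back guard */
    memset(buffer, 0xaa, sizeof buffer);
    char *output = &buffer[1];            /* sprintf writes starting here */

    int written = sprintf(output, "Hello %s\n", "World");

    return written == 12
        && strcmp(output, "Hello World\n") == 0
        && (unsigned char)buffer[0] == 0xaa     /* front guard intact */
        && (unsigned char)output[13] == 0xaa;   /* byte after the null intact */
}
```

The same pattern generalizes to any function that fills a caller-supplied buffer: fill, call, then inspect the bytes just outside the region the function was allowed to touch.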

In C, it is hard to make tests totally fool-proof. Errant or malicious code can go way beyond the end or way in front of the beginning of output . It's a judgment call on how far to take the tests. You will see when we get into TDD how to decide which tests to write.

With those tests in place, some subtle duplication is creeping in: duplicate output declarations, duplicate initializations, and duplicate overrun checks. With just two tests, this is no big deal, but if you happen to be sprintf's maintainer, there will be many more tests. With every test added, the duplication will crowd out and obscure the code that is essential to understanding each test case. Let's see how a test fixture can help.

Test Fixtures in Unity

Duplication reduction is the motivation for a test fixture. A test fixture helps organize the common facilities needed by all the tests in one place. Notice how TEST_SETUP and TEST_TEAR_DOWN keep duplication out of the sprintf tests.

TEST_GROUP(sprintf);

static char output[100];
static const char * expected;

TEST_SETUP(sprintf)
{
    memset(output, 0xaa, sizeof output);
    expected = "";
}

TEST_TEAR_DOWN(sprintf)
{
}

static void expect(const char * s)
{
    expected = s;
}

static void given(int charsWritten)
{
    TEST_ASSERT_EQUAL(strlen(expected), charsWritten);
    TEST_ASSERT_EQUAL_STRING(expected, output);
    TEST_ASSERT_BYTES_EQUAL(0xaa, output[strlen(expected) + 1]);
}

The shared data items defined after TEST_GROUP are initialized by TEST_SETUP before each TEST body runs. The data items have file scope, accessible by each TEST and all the helper functions. For this TEST_GROUP, there is no cleanup work for TEST_TEAR_DOWN to do.

The file scope helper functions, expect and given , help keep the sprintf tests clean and low on duplication.

In the end, it's just plain C, so you can do what you want as far as shared data and helper functions. I'm showing the typical way to structure a group of tests with common data and condition checks.

Now these tests are focused, lean, mean, and to the point.

TEST(sprintf, NoFormatOperations)
{
    expect("hey");
    given(sprintf(output, "hey"));
}

TEST(sprintf, InsertString)
{
    expect("Hello World\n");
    given(sprintf(output, "Hello %s\n", "World"));
}

Notice that once you understand a specific TEST_GROUP and have seen a couple examples, writing the next test case is much less work. When there is a common pattern within a TEST_GROUP , each test case is easier to read, understand, and evolve, as change becomes necessary.

Installing Unity Tests

It is not evident from the example how the test cases get run with the necessary pre- and post-processing. It's done with another macro: the TEST_GROUP_RUNNER . The TEST_GROUP_RUNNER can go in the file with the tests or a separate file. To avoid scrolling through the file, I use a separate file. For the two sprintf tests written, the TEST_GROUP_RUNNER looks like this:

#include "unity_fixture.h"

TEST_GROUP_RUNNER(sprintf)
{
    RUN_TEST_CASE(sprintf, NoFormatOperations);
    RUN_TEST_CASE(sprintf, InsertString);
}

Each test case is called through the RUN_TEST_CASE macro. Essentially, this TEST_GROUP_RUNNER calls the function bodies associated with each of these macros:

TEST_SETUP(sprintf);
TEST(sprintf, NoFormatOperations);
TEST_TEAR_DOWN(sprintf);
TEST_SETUP(sprintf);
TEST(sprintf, InsertString);
TEST_TEAR_DOWN(sprintf);

Invoking TEST_SETUP before each TEST means that each test starts out fresh, with no accumulated state. TEST_TEAR_DOWN is called to clean up after each test.
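Under the hood, this is a simple loop. The following is not Unity's actual implementation, just a plain-C sketch (names are mine) of the setup/test/teardown cycle described above, showing why each test starts fresh.

```c
/* Sketch of the xUnit run cycle: setup before, teardown after, for
   every test, so no state accumulates between test cases. */
typedef void (*test_fn)(void);

static int counter;                        /* shared "fixture" state  */

static void setup(void)    { counter = 0; } /* fresh state every time */
static void teardown(void) { /* nothing to clean up here */ }

static void test_one(void) { counter++; }
static void test_two(void) { counter++; }   /* also sees counter == 0 */

static int run_all(const test_fn *tests, int n)
{
    for (int i = 0; i < n; i++) {
        setup();        /* runs before each test */
        tests[i]();
        teardown();     /* runs after each test  */
    }
    return counter;     /* state left by the final test */
}
```

Because setup re-initializes the shared state, test_two observes counter == 0 at entry even though test_one incremented it; the final value is 1, not 2.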

Now that the tests are wired into a TEST_GROUP_RUNNER , let's see how the TEST_GROUP_RUNNER s are called. For this last step, we have to look at main . You will have a main for your production code and one, or more, for your test code. The Unity test main looks like this:

#include "unity_fixture.h"

static void RunAllTests(void)
{
    RUN_TEST_GROUP(sprintf);
}

int main(int argc, char * argv[])
{
    return UnityMain(argc, argv, RunAllTests);
}

RUN_TEST_GROUP(GroupName) calls the function defined by TEST_GROUP_RUNNER. Each TEST_GROUP_RUNNER you want to run as part of your test main must be mentioned in a RUN_TEST_GROUP. Notice that RunAllTests is passed to UnityMain.

One unfortunate side effect of using a C-only test harness is that you have to remember to install each TEST into a TEST_GROUP_RUNNER, and each runner must be invoked through UnityMain. If you forget, tests will compile but not run, potentially giving a false positive.

Because of this opportunity for error, the designers of Unity created a system of code generators that read your test files and produce the needed test-runner code. To keep the dependencies low for getting started with Unity, I've opted not to use the code-generating scripts and instead wire up all the test code manually.

When I discuss CppUTest later in this article, you will see another solution to that problem. But before doing that, let's look at Unity's output.