Coming from Ruby, which has excellent testing tools and libraries, the notion of table-driven tests was new to me. The popular testing libraries in Ruby, such as RSpec, push the programmer to approach testing from a BDD standpoint. So coming to Go and learning about table-driven tests was definitely a new way of looking at tests for me.


Looking back, Dave Cheney's 2013 seminal blog post “Writing table driven tests in Go” was very likely my gateway to table-driven tests. In it, he points to the tests of the math [source] and time [source] packages, where the Go authors use table-driven tests. I encourage you to visit these two links; they offer a good perspective on testing in Go.

I remember that, at the beginning, the idea of table-driven tests was quite provocative. The Rubyist in me was screaming “What is this blasphemy?!”, “These weird for loops don't seem right” and “What are these data structures that I have to define to run a simple spec!?” Those were some of the first questions that came to my mind.

In fact, the approach is far from bad. Go's philosophy of testing is different from Ruby's, yet it has the same goal: make sure that our code works as expected, so we can sleep tight at night.

Let's explore table-driven tests, understand their background, the approach and their pros and cons.


What are table-driven tests?

As the name suggests, these are tests that are driven by tables. You might be wondering “what kind of tables?!” - an excellent question. Hold on though!

Here’s the general idea: every function under test has inputs and expected outputs. For example, the function Max [docs] from the math package takes two arguments and has one return value. Both arguments are numbers of type float64, and the returned value is also a float64 number. When invoked, Max returns the bigger of the two arguments.

Following the same idea, Max has two inputs and one expected output. In fact, the output is actually one of the inputs.

What would a test look like for Max? We would probably test its basic sanity, e.g. that between 1 and 2 it will return 2. Also, we will probably test with negative numbers, e.g. that between -100 and -200 it will return -100. We will probably throw in a test that uses 0 or some arbitrary floating-point number. Lastly, we can try the extremes: very, very big and very, very small numbers. Who knows, maybe we can hit some sort of edge case.

Looking at the paragraph above, the input values and the expected outcomes change. Still, the number of values in play is always the same: three, two arguments and one expected return value. Given that the number of values is constant, we can put them in a table:

| Argument 1 | Argument 2 | Code representation | Expected return |
|---|---|---|---|
| 1 | 2 | Max(1, 2) | 2 |
| -100 | -200 | Max(-100, -200) | -100 |
| 0 | -200 | Max(0, -200) | 0 |
| -100 | 0 | Max(-100, 0) | 0 |
| 100 | 0 | Max(100, 0) | 100 |
| 0 | 200 | Max(0, 200) | 200 |
| -8.31373e-02 | 1.84273e-02 | Max(-8.31373e-02, 1.84273e-02) | 1.84273e-02 |

Following this idea, what if we would try to express this table in a very simple Go structure?

```go
type TestCase struct {
	arg1     float64
	arg2     float64
	expected float64
}
```

That should do the trick: it has three attributes of type float64: arg1, arg2 and expected. We are going to skip the table's third column ("Code representation"), as it is only there for clarity.

What about the data? Could we next add the data to a slice of TestCase ? Let's give it a shot:

```go
func TestMax(t *testing.T) {
	cases := []TestCase{
		{arg1: 1.0, arg2: 2.0, expected: 2.0},
		{arg1: -100, arg2: -200, expected: -100},
		{arg1: 0, arg2: -200, expected: 0},
		{arg1: -8.31373e-02, arg2: 1.84273e-02, expected: 1.84273e-02},
	}
	_ = cases // not used yet; the loop comes in the next step
}
```

We intentionally omitted some of the cases for brevity; what we have above paints the picture clearly enough. We now have a test function and cases of type []TestCase. The last piece of the puzzle is to iterate over the slice, invoke the Max function with the two arguments from each TestCase, and compare the expected attribute of the TestCase with the actual result of the invocation of Max.

```go
func TestMax(t *testing.T) {
	cases := []TestCase{
		{arg1: 1.0, arg2: 2.0, expected: 2.0},
		{arg1: -100, arg2: -200, expected: -100},
		{arg1: 0, arg2: -200, expected: 0},
		{arg1: -8.31373e-02, arg2: 1.84273e-02, expected: 1.84273e-02},
	}

	for _, tc := range cases {
		got := math.Max(tc.arg1, tc.arg2)
		if got != tc.expected {
			t.Errorf("Max(%f, %f): Expected %f, got %f", tc.arg1, tc.arg2, tc.expected, got)
		}
	}
}
```

Let's dissect the for loop:

For each of the cases, we invoke the math.Max function with tc.arg1 and tc.arg2 as arguments. Then, we compare what the invocation returned with the expected value in tc.expected. This tells us whether math.Max returned what we expected; if not, the test is marked as failed. If any of the cases fail, the error message will look like this:

```
› go test math_test.go -v
=== RUN   TestMax
--- FAIL: TestMax (0.00s)
    math_test.go:41: Max(-0.083137, 0.018427): Expected 0.000000, got 0.018427
FAIL
FAIL	command-line-arguments	0.004s
```

This is the magic behind table-driven tests and the reason for the name: a TestCase represents a row from a table. With the for loop, we evaluate each of the rows, using its cells as arguments and expected values.

Convert ordinary to table-driven tests

As always, talking about code is better if we have some code to talk about. In this section, we will first add some simple and straightforward tests. After that, we will convert them to table-driven tests.

Consider this type Person, which comes with two functions: older and NewPerson. The latter is a constructor, while the former decides which of two Person values is older:

```go
package person

import "errors"

var (
	AgeTooLowError  = errors.New("A person must be at least 1 year old")
	AgeTooHighError = errors.New("A person cannot be older than 130 years")
)

type Person struct {
	age int
}

func NewPerson(age int) (error, *Person) {
	if age < 1 {
		return AgeTooLowError, nil
	}
	if age > 130 {
		return AgeTooHighError, nil
	}
	return nil, &Person{
		age: age,
	}
}

func (p *Person) older(other *Person) bool {
	return p.age > other.age
}
```

Next, let's add some tests for these two functions:

```go
package person

import (
	"testing"
)

func TestNewPersonPositiveAge(t *testing.T) {
	err, _ := NewPerson(1)
	if err != nil {
		t.Errorf("Expected person, received %v", err)
	}
}

func TestNewPersonNegativeAge(t *testing.T) {
	err, p := NewPerson(-1)
	if err == nil {
		t.Errorf("Expected error, received %v", p)
	}
}

func TestNewPersonHugeAge(t *testing.T) {
	err, p := NewPerson(150)
	if err == nil {
		t.Errorf("Expected error, received %v", p)
	}
}

func TestOlderFirstOlderThanSecond(t *testing.T) {
	_, p1 := NewPerson(1)
	_, p2 := NewPerson(2)

	if p1.older(p2) {
		t.Errorf("Expected p1 with age %d to be younger than p2 with age %d", p1.age, p2.age)
	}
}

func TestOlderSecondOlderThanFirst(t *testing.T) {
	_, p1 := NewPerson(2)
	_, p2 := NewPerson(1)

	if !p1.older(p2) {
		t.Errorf("Expected p1 with age %d to be older than p2 with age %d", p1.age, p2.age)
	}
}
```

These tests are fairly conventional. Notice also that the tests covering the same function tend to look alike, sharing the same structure of setup, assertion and error reporting. This is another reason why table-driven tests are good: they eliminate repetitive boilerplate code and substitute it with a simple for loop.

Let's refactor the tests into table-driven tests. We will begin with a TestOlder function:

```go
func TestOlder(t *testing.T) {
	cases := []struct {
		age1     int
		age2     int
		expected bool
	}{
		{age1: 1, age2: 2, expected: false},
		{age1: 2, age2: 1, expected: true},
	}

	for _, c := range cases {
		_, p1 := NewPerson(c.age1)
		_, p2 := NewPerson(c.age2)

		got := p1.older(p2)
		if got != c.expected {
			t.Errorf("Expected %v > %v, got %v", p1.age, p2.age, got)
		}
	}
}
```

There isn't much happening here. The only difference compared to the tests we saw before is the inline definition and initialization of the cases slice: we define the anonymous struct type with its attributes and add values to it right away, instead of first defining a named type and initializing a slice of it afterwards.

Next, we will create a TestNewPerson function:

```go
func TestNewPerson(t *testing.T) {
	cases := []struct {
		age int
		err error
	}{
		{age: 1, err: nil},
		{age: -1, err: AgeTooLowError},
		{age: 150, err: AgeTooHighError},
	}

	for _, c := range cases {
		err, _ := NewPerson(c.age)
		if err != c.err {
			t.Errorf("Expected %v, got %v", c.err, err)
		}
	}
}
```

This test follows the same structure: we define and initialize the cases slice inline. Then, in the loop, we assert that the errors we expect are the same as the ones returned by invoking the NewPerson function.

If you have a test file that you would like to refactor to use a table-driven approach, follow these steps:

1. Group all tests that focus on one function one after another in the test file
2. Identify the inputs/arguments to the function under test in each of the test functions
3. Identify the expected output of each of the tests
4. Extract the inputs and the expected outputs into a new test, wrapping them in a type (struct) that accommodates all inputs and the expected output
5. Create a slice of the new type, populate it with all inputs and expected outputs, and introduce a loop where you create the assertion between the expected and the actual output
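As an illustration of the last two steps, here is a sketch for a hypothetical Abs function (the function and its cases are invented for this example):

```go
package main

import "fmt"

// Abs is a hypothetical function under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// Steps 4 and 5: inputs and expected outputs wrapped in a struct,
	// collected in a slice, and asserted in a single loop.
	cases := []struct {
		arg      int
		expected int
	}{
		{arg: 5, expected: 5},
		{arg: -5, expected: 5},
		{arg: 0, expected: 0},
	}
	for _, c := range cases {
		if got := Abs(c.arg); got != c.expected {
			// In a real test this would be t.Errorf.
			fmt.Printf("Abs(%d): expected %d, got %d\n", c.arg, c.expected, got)
		}
	}
}
```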


Why should you use them?

One of the reasons I like the table-driven approach to testing is how effortless it is to add different test cases: it boils down to adding another entry in the cases slice. Compared to the classic style of writing a test function where you have to figure out a name for the function, then set up the state and lastly execute the assertion, table-driven tests make this a breeze.

In most cases, table-driven tests centralize the test of a function to a single test function. This is because the classical approach to testing has only one set of inputs and expected outputs, compared to table-driven tests where we can add virtually unlimited test cases within a single test function.

Lastly, having all cases centralized in a single slice gives more transparency to the quality of our test inputs. For example, are we trying to use arbitrary big or small numbers as inputs, or very long and very short strings, etc? You get the idea.

Let's take a quick look at the TestOlder test function again:

```go
func TestOlder(t *testing.T) {
	cases := []struct {
		age1     int
		age2     int
		expected bool
	}{
		{age1: 1, age2: 2, expected: false},
		{age1: 2, age2: 1, expected: true},
	}

	for _, c := range cases {
		_, p1 := NewPerson(c.age1)
		_, p2 := NewPerson(c.age2)

		got := p1.older(p2)
		if got != c.expected {
			t.Errorf("Expected %v > %v, got %v", p1.age, p2.age, got)
		}
	}
}
```

If I asked you, only by looking at the cases slice, what other test cases you could come up with, what would you answer? One case that immediately comes to mind is testing when the two ages are the same. There are more cases we can add, but I'll let you think that one through; let me know in the comments. (Hint: think about edge cases 😉)


It's not all rainbows and unicorns, though: this approach has some downsides. For example, running a specific test case (using go test -run foo) is problematic here; we cannot target a single case, we have to run the whole function. But there's a trick to achieve both: it's called subtests, and we will look into them in the next article.

Until then, let me know in the comments how you use table-driven tests and whether you use any specific technique for producing good test cases.