Coming from Ruby, which has excellent testing tools and libraries, I found the notion of table-driven tests unusual. The popular testing libraries in Ruby, like RSpec, steer the programmer toward approaching testing from a BDD standpoint. Thus, coming to Go and learning about the table-driven test was definitely a new way of looking at tests for me.
Looking back, Dave Cheney’s seminal 2013 blog post “Writing table driven tests”
was very likely my gateway to table-driven tests. In it, he points to the
tests of the math [source] and time [source] packages, where
the Go authors have used table-driven tests. I encourage you to visit these
two links; they offer a good perspective on testing in Go.
I remember that at the beginning, the idea of table-driven tests was quite
provocative. The Rubyist in me was screaming “What is this blasphemy?!”, “These
for loops don’t seem right” and “What are these data structures that I
have to define to run a simple spec!?” These were some of the first
questions that came to my mind.
In fact, the approach is very far from bad. Go’s philosophy of testing is different from Ruby’s, yet it has the same goal: make sure that our code works as expected, so we can sleep tight at night.
Let’s explore table-driven tests, understand their background, the approach and their pros and cons.
What are table-driven tests?
As the name suggests, these are tests that are driven by tables. You might be wondering “what kind of tables?!” - an excellent question. Hold on though!
Here’s the general idea: every function under test has inputs and expected
outputs. For example, the function
Max [docs] from the
math package takes two
arguments and has one return value. Both arguments are numbers of type
float64, and the returned value is also a
float64 number. When invoked,
Max will return the bigger of the two arguments.
Following the same idea,
Max has two inputs and one expected output. In fact,
the output is actually one of the inputs.
What would a test look like for
Max? We would probably test its basic sanity,
e.g. that between
1 and
2 it will return
2. Also, we would probably test
with negative numbers, e.g. that between
-100 and
-200 it will return
-100. We would probably throw in a test that uses
0 or some arbitrary
floating point number. Lastly, we can try the edge cases - very, very big and
very, very small numbers. Who knows, maybe we can hit some sort of an edge
case.
Looking at the above paragraph, the input values and the expected outcomes change. Still, the number of values in play is always the same - three: two arguments and one expected return value. Given that the number of values is constant, we can put them in a table:
| Argument 1 | Argument 2 | Code representation | Expected return |
|------------|------------|---------------------|-----------------|
| 1          | 2          | Max(1, 2)           | 2               |
| -100       | -200       | Max(-100, -200)     | -100            |
| 0          | -1.5       | Max(0, -1.5)        | 0               |
Following this idea, what if we tried to express this table in a very simple Go structure?
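A minimal sketch of such a structure - one attribute per table column we care about, using the attribute names that appear in the test later on:

```go
// TestCase describes one row of the table: two arguments
// for Max and the expected return value.
type TestCase struct {
	arg1     float64
	arg2     float64
	expected float64
}
```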
That should do the trick: it has three attributes of type
float64 - arg1, arg2 and
expected. We are going to skip the third column of the table, as it is only
there for clarity.
What about the data? Could we next add the data to a slice of
TestCase values? Let’s give it a shot:
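One possible slice, populated with the sample values discussed above (the TestCase type is repeated so the snippet stands on its own):

```go
// TestCase as defined above.
type TestCase struct {
	arg1     float64
	arg2     float64
	expected float64
}

// cases holds the rows of our table.
var cases = []TestCase{
	{arg1: 1, arg2: 2, expected: 2},
	{arg1: -100, arg2: -200, expected: -100},
	{arg1: 0, arg2: -1.5, expected: 0},
}
```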
We intentionally omitted some of the cases for brevity, and because what we have
above clearly paints the picture. Now we have a test function and
cases of type
TestCase. The last piece of the puzzle is to iterate over
the slice and, for each of the
TestCase structs, invoke the
Max function with
the two arguments. Then, we compare the
expected attribute of the
TestCase
with the actual result of the invocation of
Max.
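A sketch of what the complete test function could look like, with the error message format modeled on the failure output shown below:

```go
package main

import (
	"math"
	"testing"
)

// TestCase describes one row of the table.
type TestCase struct {
	arg1     float64
	arg2     float64
	expected float64
}

func TestMax(t *testing.T) {
	cases := []TestCase{
		{arg1: 1, arg2: 2, expected: 2},
		{arg1: -100, arg2: -200, expected: -100},
		{arg1: 0, arg2: -1.5, expected: 0},
	}

	for _, tc := range cases {
		// Invoke the function under test with the arguments
		// from the current row...
		got := math.Max(tc.arg1, tc.arg2)

		// ...and compare the result with the row's expected value.
		if got != tc.expected {
			t.Errorf("Max(%f, %f): Expected %f, got %f",
				tc.arg1, tc.arg2, tc.expected, got)
		}
	}
}
```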
Let’s dissect the test:
For each of the
cases, we invoke the
math.Max function, with
tc.arg1 and
tc.arg2 as arguments. Then, we compare what the invocation returned with the
expected value in
tc.expected. This tells us whether
math.Max returned what we
expected; if that’s not the case, it marks the test as failed. If any of
the tests fail, the error message will look like this:
```
› go test math_test.go -v
=== RUN   TestMax
--- FAIL: TestMax (0.00s)
    math_test.go: Max(-0.083137, 0.018427): Expected 0.000000, got 0.018427
FAIL
FAIL    command-line-arguments  0.004s
```
This is the magic behind table-driven tests and the reason for the name: a
TestCase represents a row from a table. With the
for loop we evaluate each
of the rows, using its cells as arguments and expected values.
Converting ordinary tests to table-driven tests
As always, talking about code is better if we have some code to talk about. In this section, we will first add some simple and straightforward tests. After that, we will convert them to table-driven tests.
Consider this type
Person, which comes with two functions: a constructor, and a function that
can decide which of two
Persons is older:
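A minimal sketch of what Person and its two functions could look like; the constructor name NewPerson and its negative-age validation are assumptions, inferred from the tests described below that assert on the constructor's errors and compare two age ints:

```go
package main

import "errors"

// Person holds the data we want to compare.
type Person struct {
	age int
}

// NewPerson is the constructor; it rejects negative
// ages with an error. (Assumed name and validation.)
func NewPerson(age int) (*Person, error) {
	if age < 0 {
		return nil, errors.New("a person cannot have a negative age")
	}
	return &Person{age: age}, nil
}

// Older reports whether p is older than other.
func (p *Person) Older(other *Person) bool {
	return p.age > other.age
}
```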
Next, let’s add some tests for these two functions:
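A sketch of conventional, one-case-per-function tests for the two functions; the test names are illustrative, and the Person definitions are repeated so the snippet compiles on its own:

```go
package main

import (
	"errors"
	"testing"
)

// Minimal Person definitions so the snippet compiles on its own.
type Person struct{ age int }

func NewPerson(age int) (*Person, error) {
	if age < 0 {
		return nil, errors.New("a person cannot have a negative age")
	}
	return &Person{age: age}, nil
}

func (p *Person) Older(other *Person) bool { return p.age > other.age }

func TestNewPersonNegativeAge(t *testing.T) {
	_, err := NewPerson(-1)
	if err == nil {
		t.Errorf("Expected an error for a negative age, got none")
	}
}

func TestOlderFirstOlderThanSecond(t *testing.T) {
	first, _ := NewPerson(30)
	second, _ := NewPerson(20)
	if !first.Older(second) {
		t.Errorf("Expected first to be older than second")
	}
}

func TestOlderSecondOlderThanFirst(t *testing.T) {
	first, _ := NewPerson(20)
	second, _ := NewPerson(30)
	if first.Older(second) {
		t.Errorf("Expected first not to be older than second")
	}
}
```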
These tests are fairly conventional. Also, tests covering the same function
usually look alike, sharing the same structure of setup, assertion and error
reporting. This is another reason why table-driven tests are good: they
eliminate the repetition of boilerplate code and substitute it with a simple loop.
Let’s refactor the tests into table-driven tests. We will begin with a table-driven version of the test for the Older function:
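A sketch of what the table-driven TestOlder could look like, with the case struct defined and initialized inline (the attribute names are illustrative, and the Person definitions are repeated so the snippet compiles on its own):

```go
package main

import (
	"errors"
	"testing"
)

// Minimal Person definitions so the snippet compiles on its own.
type Person struct{ age int }

func NewPerson(age int) (*Person, error) {
	if age < 0 {
		return nil, errors.New("a person cannot have a negative age")
	}
	return &Person{age: age}, nil
}

func (p *Person) Older(other *Person) bool { return p.age > other.age }

func TestOlder(t *testing.T) {
	// The case type is defined and initialized inline.
	cases := []struct {
		age1     int
		age2     int
		expected bool
	}{
		{age1: 30, age2: 20, expected: true},
		{age1: 20, age2: 30, expected: false},
	}

	for _, tc := range cases {
		first, _ := NewPerson(tc.age1)
		second, _ := NewPerson(tc.age2)
		if got := first.Older(second); got != tc.expected {
			t.Errorf("Older with ages (%d, %d): Expected %t, got %t",
				tc.age1, tc.age2, tc.expected, got)
		}
	}
}
```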
There isn’t much happening here. The only difference compared to the tests
we saw before is the inline definition and initialization of the struct type:
we define the type with its attributes and add values to it right away, instead
of first defining the type and initializing a slice of it afterwards.
Next, we will create a table-driven test for the constructor:
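A sketch of a table-driven test for the constructor, asserting on the returned errors; the expectError attribute name is illustrative, and the Person definitions are repeated so the snippet compiles on its own:

```go
package main

import (
	"errors"
	"testing"
)

// Minimal Person constructor so the snippet compiles on its own.
type Person struct{ age int }

func NewPerson(age int) (*Person, error) {
	if age < 0 {
		return nil, errors.New("a person cannot have a negative age")
	}
	return &Person{age: age}, nil
}

func TestNewPerson(t *testing.T) {
	cases := []struct {
		age         int
		expectError bool
	}{
		{age: 25, expectError: false},
		{age: 0, expectError: false},
		{age: -1, expectError: true},
	}

	for _, tc := range cases {
		// Assert that an error is returned exactly when we expect one.
		_, err := NewPerson(tc.age)
		if gotError := err != nil; gotError != tc.expectError {
			t.Errorf("NewPerson(%d): expected error %t, got %t",
				tc.age, tc.expectError, gotError)
		}
	}
}
```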
This test follows the same structure: we define the
cases slice by defining
and initializing it inline. Then, in the loop, we assert that the errors
that we expect are the same as the ones returned by the invocation of the
constructor.
If you have a test file that you would like to refactor to use a table-driven approach, follow these steps:
- Group all tests that focus on one function one after another in the test file
- Identify the inputs/arguments to the function under test in each of the test functions
- Identify the expected output of each of the tests
- Extract the inputs and the expected outputs into a table, wrapping them
in a type (a
struct) that will accommodate all inputs and the expected output
- Create a slice of the new type, populate it with all inputs and expected outputs and introduce a loop where you will create the assertion between the expected and the actual output
Why should you use them?
One of the reasons I like the table-driven approach to testing is how
effortless it is to add different test cases: it boils down to adding another
entry in the
cases slice. Compared to the classic style of writing a test
function where you have to figure out a name for the function, then set up the
state and lastly execute the assertion, table-driven tests make this a breeze.
In most cases, table-driven tests centralize the testing of a function in a single test function. This is because the classical approach to testing has only one set of inputs and expected outputs per test function, while with table-driven tests we can add virtually unlimited test cases within a single test function.
Lastly, having all cases centralized in a single slice gives more transparency to the quality of our test inputs. For example, are we trying to use arbitrary big or small numbers as inputs, or very long and very short strings, etc? You get the idea.
Let’s take a quick look at the
TestOlder test function again:
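Since the discussion here focuses on the cases, a cases slice for TestOlder might look like this (the values are illustrative):

```go
// An illustrative cases slice for TestOlder: each row holds
// two ages and whether the first Person is expected to be older.
var cases = []struct {
	age1     int
	age2     int
	expected bool
}{
	{age1: 30, age2: 20, expected: true},
	{age1: 20, age2: 30, expected: false},
}
```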
If I asked you, just by looking at the
cases slice, what other test cases
you could come up with, what would you answer? One case that immediately
comes to mind is testing when the two age
ints are the same. There are more
cases we can add, but I’ll let you think that one through and let me know in
the comments. (Hint: think about edge cases 😉)
It’s not all rainbows and unicorns, though - this approach has some downsides. For
example, running a specific test case (using
go test -run) is problematic here: we cannot target a single case, we have
to run the whole test function. But there’s a trick to achieve both: it’s called
subtests, and we will look into them in the next article.
Until then, let me know in the comments how you use table-driven tests, and whether you use any specific techniques for producing good test cases.