In the previous article on testing in Golang, titled Testing in Go: First Principles, we looked at what testing looks like in Golang. In this article, we will take a deeper dive into the `testing` package. More specifically, we will look at the different ways we can fail tests in our test suites, and when and how we should use these techniques. Without further ado, let’s take a look under the covers of the `testing` package.
A short overview of the `testing` package
The `testing` package from Golang’s standard library provides support for automated testing of Go packages. It is intended to be used in combination with the `go test` command. This package, in combination with `go test`, expects certain naming conventions that we will cover in another article, but for the purpose of this article we have to know that:
- Every test file ends with `*_test.go`
- Every test function has the format `TestXxx`, where `Xxx` must not start with a lowercase letter
The `testing` package exposes two different modes: testing and benchmarking. While what we will discuss here is transferable to benchmarking as well, we will focus on how to fail our tests in a good way and how to provide meaningful error messages.

Logging with `Log` and `Logf`
If we look at the index of the `T` type in the `testing` package, there’s a list of functions that represent errors or failures: `Fail`, `FailNow`, `Error`, `Errorf`, `Fatal` and `Fatalf`. Quite some options, right? But before we dive into these different functions, let’s look at another one: `Logf` (docs).
You might be thinking: “I came here to read about signaling test failures, and for some reason, I am reading about logging”. Trust me, there’s a connection here.
`Log` formats its arguments using default formatting, analogous to `Println`, while `Logf` takes a format string and additional arguments, and formats the arguments according to that format string (analogous to `Printf`). Both of these functions save the text in the error log. Compared to `Println` and `Printf`, the difference is that `Log` and `Logf` write their output to the error log (instead of `os.Stdout`), and they also add a final newline if one is missing.
When a test fails, or when the `-test.v` flag is present, the recorded text is printed out. Basically, `Log` and `Logf` are the `testing` package’s `Println` and `Printf` with a twist: they print only in verbose mode or when a test fails.
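A simple example, sketched from context (the package name and log message here are illustrative):

```go
package main

import "testing"

func TestLogf(t *testing.T) {
	t.Logf("Something worth logging: %d", 42)
}
```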
If we run this using `go test`, there won’t be any log output. But if we run it using the `-test.v` flag, we’ll see something like this:
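(The module name, file name, and timings below are illustrative, and the exact layout varies between Go versions.)

```
$ go test -test.v
=== RUN   TestLogf
--- PASS: TestLogf (0.00s)
    main_test.go:6: Something worth logging: 42
PASS
ok      example 0.002s
```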
As you can see above, the test returns `PASS` because no test was marked as failed. If we would like to mark a test as failed, we have to invoke `t.Fail()` inside the test. Let’s do that:
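A sketch of the updated test, laid out so that `t.Logf` sits on line 6 of the file and the new `t.Fail()` on line 7, which matters for the output discussed below:

```go
package main

import "testing"

func TestLogf(t *testing.T) {
	t.Logf("Something worth logging: %d", 42)
	t.Fail()
}
```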
Now, when we run `go test`, we should see the test marked as a failure:
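(Output approximate, with illustrative names:)

```
$ go test
--- FAIL: TestLogf (0.00s)
    main_test.go:6: Something worth logging: 42
FAIL
exit status 1
FAIL    example 0.002s
```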
What’s great about this type of logging is that although the test is marked as a failure on line 7, the `t.Logf` call is on line 6, and the output states that clearly. If we added any other logging below, we would see a separate line for each of the `Logf` calls:
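For example, with a second `t.Logf` added on line 8, after the `t.Fail()` call (again a sketch):

```go
package main

import "testing"

func TestLogf(t *testing.T) {
	t.Logf("Something worth logging: %d", 42)
	t.Fail()
	t.Logf("More logging, after the failure")
}
```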
And the output of the `go test` invocation:
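(Approximately:)

```
$ go test
--- FAIL: TestLogf (0.00s)
    main_test.go:6: Something worth logging: 42
    main_test.go:8: More logging, after the failure
FAIL
exit status 1
FAIL    example 0.002s
```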
This behaviour is due to the way `Logf` functions. With every invocation it adds the log lines to the error log, but the whole log is dumped to `STDOUT` only once the test finishes and turns out to have been marked as failed (or when verbose mode is on). That’s why in the above example we see that the failure is marked on line 7, yet all the logs are present (from line 6 and line 8).
So why are `Log` and `Logf` important? Because all of the functions that we will explore next rely on them to write their output to the error log. And log output is essential to showing what went wrong in failing tests.
Signaling test `Fail`ure
To be able to illustrate the different types of failing a test in Golang, we will first need to write a small function that we can test. Let’s use a simple function `Max` that takes a slice of `int`s and returns the largest integer of them all:
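The original implementation isn’t preserved, so here is a minimal sketch that starts the search from zero, which is good enough for the nonnegative inputs used throughout this article:

```go
// Max returns the largest integer in the slice.
func Max(numbers []int) int {
	max := 0
	for _, n := range numbers {
		if n > max {
			max = n
		}
	}
	return max
}
```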
Let’s write a small test function:
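A sketch based on the description that follows:

```go
func TestMax(t *testing.T) {
	input := []int{1, 2, 3, 4, 5}
	actual := Max(input)
	expected := 6

	if expected != actual {
		t.Logf("Expected %d, got %d", expected, actual)
	}
}
```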
The `TestMax` function defines an `input` slice of `int`s, the result of the invocation of `Max` with the `input` as argument (called `actual`), and the `expected` value of the invocation.
If the `expected` value does not match the `actual` result, we want to tell the person (or program) running this test that we received something we weren’t expecting. We do this by invoking `t.Logf`, providing a nice explanation for the human.
If you missed it – the `expected` value here is wrong. It expects `6`, while the maximum element of the `input` slice is `5`. That means this test should fail when we run it:
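But here is what we get (output approximate):

```
$ go test
ok      example 0.002s
```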
Huh? Here’s the first thing we have to understand about signaling failures in tests: we actually have to tell the `testing` package that this is a failed test. And we are the ones to define what failure means – usually it’s `actual != expected`, but other times it can be something else. All the `testing` package cares about is recording whether there was a test failure or not. There are multiple ways to do this, `t.Fail` being the easiest one.
Let’s add a call to `t.Fail` after the `t.Logf` call:
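Continuing the sketch from above:

```go
func TestMax(t *testing.T) {
	input := []int{1, 2, 3, 4, 5}
	actual := Max(input)
	expected := 6

	if expected != actual {
		t.Logf("Expected %d, got %d", expected, actual)
		t.Fail()
	}
}
```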
Let’s run it:
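(Output approximate; the reported line number depends on the file layout:)

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:11: Expected 6, got 5
FAIL
exit status 1
FAIL    example 0.002s
```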
Boom! Our test failed, as expected. You can notice that the log is now visible, including our call to `Logf` with our informative error message `Expected 6, got 5`.
Now that we have our first test failure signaled, let’s look at different ways we can signal test failures.

Delayed vs. immediate failure
Golang’s `testing` package promotes two types of failures: ones that immediately stop the test from running, and others that register the failure but let the test continue until it finishes. To reflect this behaviour, these functions are aptly named: `Fail` and `FailNow`.
Let’s explore their behaviour and usage by continuing to test the `Max` function we already built. One other test case that we could write is checking that the slice passed as an argument is not empty. If it is, we will return `-1`; otherwise, the maximum item we find. Let’s write the new test first:
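A sketch of the new test (this assumes the wrong `expected` value in `TestMax` has been corrected to `5` in the meantime, since the article treats `TestMax` as passing from here on):

```go
func TestMaxEmptySlice(t *testing.T) {
	input := []int{}
	actual := Max(input)
	expected := -1

	if expected != actual {
		t.Logf("Expected %d, got %d", expected, actual)
		t.Fail()
	}
}
```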
If we run it, without adding any new functionality to the `Max` function, it will fail:
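With the sketched `Max` above returning `0` for an empty slice, the output would be roughly:

```
$ go test
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:22: Expected -1, got 0
FAIL
exit status 1
FAIL    example 0.002s
```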
What we can see here is that the `TestMax` function is run and it passes, while `TestMaxEmptySlice` fails. We achieve that by marking the test as failed using `t.Fail()`. Let’s add the new functionality to `Max` which would make the test pass:
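One way is a guard clause at the top (a sketch, as before):

```go
// Max returns the largest integer in the slice,
// or -1 when the slice is empty.
func Max(numbers []int) int {
	if len(numbers) == 0 {
		return -1
	}

	max := numbers[0]
	for _, n := range numbers {
		if n > max {
			max = n
		}
	}
	return max
}
```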
While the change does not make much sense, for our purpose it will do the trick. Let’s run the test:
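Both tests now pass (output approximate):

```
$ go test
ok      example 0.002s
```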
Now, if we look at the two things that the `Max` function does, we could theoretically attach an importance level to each of them. For example, the guard clause which returns `-1` when the slice argument is empty has lower importance than the actual algorithm that detects the maximum element of the slice. That means that if someone is expanding that algorithm and it goes wrong, we want to be very loud and “stop the world”, as none of the other functionalities is more relevant than that one.
In cases like these, we can use the `FailNow()` function to mark the test as failed and to immediately stop any further execution of the particular test that tests the max detection. Let’s use it in our tests:
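For example, in `TestMax` (sketch):

```go
func TestMax(t *testing.T) {
	input := []int{1, 2, 3, 4, 5}
	actual := Max(input)
	expected := 5

	if expected != actual {
		t.Logf("Expected %d, got %d", expected, actual)
		t.FailNow()
	}
}
```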
Also, we need to make the `Max` function misbehave. To short-circuit this quickly, let’s make the `Max` function always return `-100` so we can see the difference. If we run this now, there would be no difference in how the tests run, even though we use `FailNow`:
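(Output approximate:)

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:11: Expected 5, got -100
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:22: Expected -1, got -100
FAIL
exit status 1
FAIL    example 0.002s
```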
That is because the `TestMax` function only does one assertion. Let’s rework our tests a bit to use table-driven tests:
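A sketch of a table-driven `TestMax` (the exact cases are illustrative); `TestMaxEmptySlice` stays as it was:

```go
func TestMax(t *testing.T) {
	cases := []struct {
		input    []int
		expected int
	}{
		{[]int{1, 2, 3, 4, 5}, 5},
		{[]int{5, 4, 3, 2, 1}, 5},
		{[]int{1, 3, 2}, 3},
	}

	for _, c := range cases {
		actual := Max(c.input)
		if c.expected != actual {
			t.Logf("Expected %d, got %d", c.expected, actual)
			t.Fail()
		}
	}
}
```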
What table-driven tests do is loop over multiple test cases (often structs with the expected input and output) and set assertions on each of them. If we run the tests now, using `t.Fail()` instead of `t.FailNow()`, we should see the following output:
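With the sabotaged `Max` returning `-100`, every case is reported (output approximate):

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:18: Expected 5, got -100
    max_test.go:18: Expected 5, got -100
    max_test.go:18: Expected 3, got -100
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:30: Expected -1, got -100
FAIL
exit status 1
FAIL    example 0.002s
```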
We actually get all failing cases for both tests. Let’s replace the `t.Fail()` invocation with `t.FailNow()` in the `TestMax` function and run the test:
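Now only the first failing case is reported (output approximate):

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:18: Expected 5, got -100
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:30: Expected -1, got -100
FAIL
exit status 1
FAIL    example 0.002s
```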
You can see that instead of running each of the test cases and marking them as failures, the test function stopped right after hitting the first failing case, without going through the remaining cases before reporting.
This is the key difference between the two, `t.Fail()` and `t.FailNow()`: by using the latter we are able to stop the test immediately and quickly tell the developer that something is really wrong. In comparison, imagine if we had to wait for all test cases to complete before we got any output – it would take much longer to get anything meaningful.
Having a clear difference and use cases for `Fail` and `FailNow` in mind, let’s look into how we can combine failure reporting (`Log` and `Logf`) with failure marking (`Fail` and `FailNow`) in a more structured approach.
Signal & report failures in one go
Understanding the basics and variations of logging and signaling errors is crucial to understanding what we will talk about now. If you look at all the tests we wrote so far, although we switched up the approaches by using table-driven tests, two steps were constant:
- In all tests we had to check for a mismatch between the expected and the actual values, and
- We had to report (`Log`) and signal failure (`Fail`) if there was a mismatch
Luckily, the `testing` package makes our lives a bit easier by implementing functions that combine both the logging and the test failure. And because there are two ways to signal test failures, there are two functions to do this: `Error` and `Fatal`, each with its own `f` variant.
Let’s reuse the tests that we wrote above, namely the `TestMax` and `TestMaxEmptySlice` functions. If we would like to quickly collapse the invocation of `t.FailNow()` and `t.Log()`, we can simply use `t.Fatal()`. We use `t.Logf()` instead of `t.Log()` in our test functions, but conveniently for us, the `testing` package also implements a `t.Fatalf()` function, which is a combination of `t.FailNow()` + `t.Logf()`.
Let’s see how we can use it in our `TestMax` test function:
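The loop body shrinks to a single call (sketch):

```go
func TestMax(t *testing.T) {
	cases := []struct {
		input    []int
		expected int
	}{
		{[]int{1, 2, 3, 4, 5}, 5},
		{[]int{5, 4, 3, 2, 1}, 5},
		{[]int{1, 3, 2}, 3},
	}

	for _, c := range cases {
		actual := Max(c.input)
		if c.expected != actual {
			t.Fatalf("Expected %d, got %d", c.expected, actual)
		}
	}
}
```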
The change is pretty simple. Basically, we keep the log format and content, but instead of passing it to `t.Logf()` we pass it to `t.Fatalf()`, while `t.FailNow()` can be completely removed. If we run our tests again, we will not see any difference in the behaviour or the output compared to the previous implementation:
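(Approximately:)

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:17: Expected 5, got -100
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:29: Expected -1, got -100
FAIL
exit status 1
FAIL    example 0.002s
```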
The style of invocation of `t.Fatalf()` we see here also applies to `t.Error()` and `t.Errorf()`. We can make a small change to our `TestMaxEmptySlice` function and use `t.Errorf()` instead of a combination of `t.Logf()` and `t.Fail()`:
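A sketch of the reworked test:

```go
func TestMaxEmptySlice(t *testing.T) {
	input := []int{}
	actual := Max(input)
	expected := -1

	if expected != actual {
		t.Errorf("Expected %d, got %d", expected, actual)
	}
}
```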
The usage of `t.Errorf()` is the same as that of `t.Fatalf()` in the previous example. As expected, the output and the behaviour of the test function also will not change:
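(Approximately:)

```
$ go test
--- FAIL: TestMax (0.00s)
    max_test.go:17: Expected 5, got -100
--- FAIL: TestMaxEmptySlice (0.00s)
    max_test.go:28: Expected -1, got -100
FAIL
exit status 1
FAIL    example 0.002s
```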
When to use `Error(f)` or `Fatal(f)`
So, we took a look at all the variations of signaling test failures, at logging such failures, and at the behaviour of the respective functions. For those of you wondering when you should use `Error` or `Fatal`, I think that’s a very important question that we should explore and find a good answer for.
Now, there’s no hard and fast answer here. In cases like these, we need to look at certain rules or guidelines on when to use these functions. If we look at the composition of these two functions, it would look something like this:
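This is not the literal source of the `testing` package, but per its documentation `Error` is equivalent to `Log` followed by `Fail`, and `Fatal` is equivalent to `Log` followed by `FailNow`:

```go
// Error is equivalent to Log followed by Fail.
func (t *T) Error(args ...interface{}) {
	t.Log(args...)
	t.Fail()
}

// Fatal is equivalent to Log followed by FailNow.
func (t *T) Fatal(args ...interface{}) {
	t.Log(args...)
	t.FailNow()
}
```

The same composition holds for `Errorf` (= `Logf` + `Fail`) and `Fatalf` (= `Logf` + `FailNow`).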
By looking at the composition of these functions, it is easier to get a clear indication of the intended usage. If marking the test function as failed but continuing its execution can provide the user (the programmer) with more information that can aid them in fixing the failure, it’s better to use `Error(f)` instead of `Fatal(f)`. If unsure, start with `Error(f)` and work your way towards a more definitive approach. There are multiple examples of this scenario: if your test function executes multiple queries against a test database, not failing on the error from the first query might give you more debugging information.
In cases where a test function cannot recover from the failure and continuing its execution is useless or doesn’t make sense, it’s better to use `Fatal(f)` and stop any further execution. Such cases include errors while loading fixtures or filesystem permission errors.
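To make the two guidelines concrete, here is a small sketch (the fixture path and the checks are hypothetical): a missing fixture makes everything after it pointless, so we use `Fatalf`; the individual assertions are independently informative, so each uses `Errorf` and lets the rest run:

```go
package example_test

import (
	"os"
	"testing"
)

func TestFixture(t *testing.T) {
	// Hypothetical fixture; without it, none of the checks below make sense.
	data, err := os.ReadFile("testdata/fixture.json")
	if err != nil {
		t.Fatalf("could not load fixture: %v", err) // unrecoverable: stop now
	}

	// Independent checks: report each failure and keep going.
	if len(data) == 0 {
		t.Errorf("fixture is empty")
	} else if data[0] != '{' {
		t.Errorf("fixture does not look like a JSON object (starts with %q)", data[0])
	}
}
```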
These are two guidelines you can use to choose what functions to use. Of course, there are exceptions to this so it’s up to us to use good judgement when writing our test functions.
In closing
In this article, we explored a lot of the details around failing tests. We went deep into understanding how failure logging is done using Go’s `testing` package, although at that moment it was unclear how it related to failing tests. Then we went on to explore signaling test failures and the different approaches we can take when marking our tests as failed.
We looked in depth at the difference between `Fail` and `FailNow`, and how to apply them better in our tests. In the end, we looked at some neat functions that the `testing` package has that make our lives a bit simpler, and we discussed two different strategies for using the `Error` and the `Fatal` functions in our tests.
If you would like to learn more about marking tests as failed in Go, you can check the documentation of the `testing` package.