When I was researching the topic of test fixtures, I couldn’t find much about their beginnings. My first search was about the name of the person who coined “test fixtures”. Unfortunately, that was not a fruitful endeavor. The next logical step was to look for the etymology of the phrase “test fixtures”, but the only search result that made sense was a Wikipedia page on the topic.

Judging by the Wiki page, it’s clear that Ruby on Rails heavily popularized test fixtures as a concept. Folks who have been in the industry longer will likely say, though, that the idea of test fixtures is older than Rails itself.

Test fixtures help set up the system for testing by providing it with all the data it needs for initialization. The setup using fixtures is done to satisfy any preconditions there may be for the code under test. For example, code that we want to test might require some configuration before it can be executed. In such cases, we would have to recreate these preconditions every single time we test that code.

More annoyingly, if the configuration of the tested code changed, we would have to update the configuration structure everywhere we test that particular code.

To avoid such scenarios, we use fixtures. Fixtures allow us to reliably and repeatably create the state our code relies upon, without worrying about the details. When the required state of the code under test changes, we only need to tweak a fixture instead of scouring all tests for the code that needs changing.

I know, I know. My introduction made you dizzy from all the praise of fixtures. So let’s stop the sales pitch here and move on to see how simple fixtures can be and how you can master them as another tool in your testing tool belt.

Making a simple grade book

As always, talking about code without having code to look at is not great. Let’s introduce an example: a grade book populated from a CSV file, using a builder function. After that, we will create a lookup method and add some tests for both functions.

type Record struct {
	student string
	subject string
	grade   string
}

type Gradebook []Record

The Record type will have three attributes: student, subject and grade, all three of type string. The Gradebook type is just a slice of Records, nothing more.

Next, let’s create a builder function for a Gradebook. We want the function to be simple - receive a reader over a CSV file as an argument and return a Gradebook with all of the records parsed from the CSV.

func NewGradebook(csvFile io.Reader) (Gradebook, error) {
	var gradebook Gradebook
	reader := csv.NewReader(csvFile)

	for {
		line, err := reader.Read()

		if err == io.EOF {
			break
		}

		if err != nil {
			return gradebook, err
		}

		if len(line) < 3 {
			return gradebook, fmt.Errorf("invalid file structure")
		}

		gradebook = append(gradebook, Record{
			student: line[0],
			subject: line[1],
			grade:   line[2],
		})
	}

	return gradebook, nil
}

Although a bit bloated, the function doesn’t do much. It receives an io.Reader as an argument (in our case, the file), wraps it in a CSV reader, and reads it line by line. For each line it reads, it creates a new Record struct and appends it to the gradebook collection of Records. After parsing the whole input, it exits the loop and returns the grade book.

Of course, in true Go fashion, we gracefully handle the errors at every step of reading and parsing the file. If there’s an error at any point, the function returns the error along with the gradebook built up to that point.

The last piece of the puzzle is the function that will find all records in the grade book for a particular student:

func (gb *Gradebook) FindByStudent(student string) []Record {
	var records []Record
	for _, record := range *gb {
		if student == record.student {
			records = append(records, record)
		}
	}
	return records
}

The FindByStudent function takes a student name as its argument. First, it loops through the Gradebook’s records and collects the records where the student name matches. Then, it returns the records found for that student.

To manually test the code, let’s create a small CSV file, called grades.csv:

Jane,Chemistry,A
John,Biology,A
Jane,Algebra,B
Jane,Biology,A
John,Algebra,B
John,Chemistry,C

In the main function of the file, we will parse it and then get all of Jane’s grades:

func main() {
	csvFile, err := os.Open("grades.csv")
	if err != nil {
		fmt.Println(fmt.Errorf("error opening file: %v", err))
		return
	}
	defer csvFile.Close()

	grades, err := NewGradebook(csvFile)
	if err != nil {
		fmt.Println(fmt.Errorf("error building gradebook: %v", err))
		return
	}

	fmt.Printf("%+v\n", grades.FindByStudent("Jane"))
}

The output of the function will be:

$ go run grades.go
[{student:Jane subject:Chemistry grade:A} {student:Jane subject:Algebra grade:B} {student:Jane subject:Biology grade:A}]

From the output, it is clear what Jane’s grades are in the grade book we have created. Having these two types and two functions is good enough to explain how we can use fixtures in the testing we’re about to do.

Testing the builder function

Whenever we need to test a piece of code, we have to identify its key components. In other words, we have to understand the essential steps that the code takes to accomplish its mission. For example, to test the NewGradebook function, an overly simplified breakdown of what it does would look like:

  1. Read through each of the lines of the CSV
  2. When reading through each line, create a new struct from the data
  3. Put the new struct in the collection of structs
  4. Return the collection of structs

Now, there’s no need to test whether reading and parsing a CSV works - we trust Go’s standard library to take care of that. Instead, we are interested in two things: will our function handle invalid CSV files gracefully, and will it create the Gradebook that we expect from a valid file?

To test the error handling, we will introduce a test function:

func TestNewGradebook_ErrorHandling(t *testing.T) {
	cases := []struct {
		fixture   string
		returnErr bool
		name      string
	}{
		{
			fixture:   "testdata/grades/empty.csv",
			returnErr: false,
			name:      "EmptyFile",
		},
		{
			fixture:   "testdata/grades/invalid.csv",
			returnErr: true,
			name:      "InvalidFile",
		},
		{
			fixture:   "testdata/grades/valid.csv",
			returnErr: false,
			name:      "ValidFile",
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			f, err := os.Open(tc.fixture)
			if err != nil {
				t.Fatalf("Cannot open fixture: %v", err)
			}
			defer f.Close()

			_, err = NewGradebook(f)
			returnedErr := err != nil

			if returnedErr != tc.returnErr {
				t.Fatalf("Expected returnErr: %v, got: %v", tc.returnErr, returnedErr)
			}
		})
	}
}

To run these test cases, we will need three accompanying CSV files in the testdata/grades directory of our project: empty.csv, invalid.csv and valid.csv. An empty CSV, an invalid CSV and a valid CSV file, respectively.

Each of these files is a fixture - a file that goes together with the test suite, enabling us to assume the state of the system that we run our tests on. The content of these files should be evident from the file names: invalid.csv will contain just text, but not in a CSV format; empty.csv will be just an empty file; and valid.csv will be a real CSV that our function can parse and use. And this is the first thing we need to remember about fixtures: we can (and should) create as many fixture files as it makes sense, but not more.

Fixtures should always be placed in a directory named testdata (as in our example) at the root of our package, because go test will ignore that path when building our packages. Quoting the output of go help test:

The go tool will ignore a directory named “testdata”, making it available to hold ancillary data needed by the tests.

Placing it in the root of the package works great because when we run go test, for each package in the directory tree, go test will execute the test binary with its working directory set to the source directory of the package under test. (Read more about it in Dave Cheney’s article on the topic.)

In the example above, we used two nested directories, testdata and grades, because we want to logically group our fixtures and leave room for other kinds of fixtures within the same project if need be. Software is built to grow, so why not set some sane defaults from the start?

Testing the FindByStudent function

The functionality of the FindByStudent function is a linear search through a Gradebook (which is a slice of Records). It compares the student name from the argument with the name in each of the records in the Gradebook. When a match is found, the matching record is added to the records collection.

Testing this function can be based on a couple of state assumptions. The first one is that to test FindByStudent we have to have a Gradebook available. The Gradebook can be in three states: empty, without a matching Record, and with a Record that matches the student name from the argument. Flipped on its head, this means that to test the function, we will need three different Gradebooks: one empty, one without a matching Record, and one with a matching Record.

To create such Gradebooks, we can take two different approaches: define the Gradebooks directly in the test or use a fixture file. The first approach might even be preferable, but we will use the second one to see how we can work with fixtures. Since we already have the fixture files from the previous test, we can reuse them in the test of the FindByStudent function:

func TestFindByStudent(t *testing.T) {
	cases := []struct {
		fixture string
		student string
		want    Gradebook
		name    string
	}{
		{
			fixture: "testdata/grades/empty.csv",
			student: "Jane",
			want:    Gradebook{},
			name:    "EmptyFixture",
		},
		{
			fixture: "testdata/grades/valid.csv",
			student: "Jane",
			want: Gradebook{
				Record{
					student: "Jane",
					subject: "Chemistry",
					grade:   "A",
				},
				Record{
					student: "Jane",
					subject: "Algebra",
					grade:   "B",
				},
			},
			name: "ValidFixture",
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			f, err := os.Open(tc.fixture)
			if err != nil {
				t.Fatalf("Cannot open fixture: %v", err)
			}
			defer f.Close()

			gradebook, err := NewGradebook(f)
			if err != nil {
				t.Fatalf("Cannot create gradebook: %v", err)
			}

			got := gradebook.FindByStudent(tc.student)
			for idx, gotGrade := range got {
				wantedGrade := tc.want[idx]
				if gotGrade != wantedGrade {
					t.Errorf("Expected: %v, got: %v", wantedGrade, gotGrade)
				}
			}
		})
	}
}

In this test function, we have defined two test cases: the first one uses the empty.csv fixture, while the other uses the valid.csv fixture. By looking at the test cases, it is clear what we expect to get from each one. When working with the empty CSV, we expect to get a blank grade book - no grades, no grade book. On the other hand, when working with valid.csv we expect to get a Gradebook with all of the student’s grades.

The test function does not have any magic. It merely builds a Gradebook using the NewGradebook function and the fixture file. Then, we invoke the FindByStudent function on the Gradebook, and we make sure that all of the grades that we got are the ones we expected.

If we run the test, we’ll get an output looking like this:

$ go test -v -run=TestFindByStudent
=== RUN   TestFindByStudent
=== RUN   TestFindByStudent/EmptyFixture
=== RUN   TestFindByStudent/ValidFixture
--- PASS: TestFindByStudent (0.00s)
    --- PASS: TestFindByStudent/EmptyFixture (0.00s)
    --- PASS: TestFindByStudent/ValidFixture (0.00s)
PASS
ok  	_/Users/Ilija/Documents/fixtures	0.004s

The tests pass - building the Gradebooks with the fixtures worked well, so we could range over the test cases and test our expectations.

Tidying up our tests

Looking at both test functions that we wrote, we can notice that at the beginning of the t.Run blocks we have to create a new Gradebook using the NewGradebook builder function. In essence, this is the test setup in these two test functions - we have to have an instance of the Gradebook type to run our tests.

When we use fixtures, a failure to load a fixture means that we can’t run the tests - they depend on the fixture files being available and usable. If a fixture turns out to be unusable, we have to stop the tests and bail out with an error.

For such reasons, it is a quick win to extract a test helper for the test setup. We can move all of the error handling for loading the fixture out of the test functions. Let’s create a small function that will do just that:

func buildGradebook(t *testing.T, path string) *Gradebook {
	f, err := os.Open(path)
	if err != nil {
		t.Fatalf("Cannot open fixture: %v", err)
	}
	defer f.Close()

	gradebook, err := NewGradebook(f)
	if err != nil {
		t.Fatalf("Cannot create Gradebook: %v", err)
	}

	return &gradebook
}

The buildGradebook function is a small wrapper that opens the fixture file and passes it to NewGradebook, with one key difference: if a Gradebook cannot be produced, it marks the test as failed. Signaling the failure is done using t.Fatalf - instead of returning an error, we immediately make the test fail. In other words: being unable to create a Gradebook is an unrecoverable error. A nice side effect is that the caller of buildGradebook does not need to handle the errors that might be returned from os.Open or NewGradebook - they are all handled by buildGradebook.

If we revisit our TestFindByStudent function now, it will not have changed much. Still, it will contain the improvements coming from the buildGradebook function:

func TestFindByStudent(t *testing.T) {
	cases := []struct {
		fixture string
		student string
		want    Gradebook
		name    string
	}{
		{
			fixture: "testdata/grades/empty.csv",
			student: "Jane",
			want:    Gradebook{},
			name:    "EmptyFixture",
		},
		{
			fixture: "testdata/grades/valid.csv",
			student: "Jane",
			want: Gradebook{
				Record{
					student: "Jane",
					subject: "Chemistry",
					grade:   "A",
				},
				Record{
					student: "Jane",
					subject: "Algebra",
					grade:   "B",
				},
			},
			name: "ValidFixture",
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			gradebook := buildGradebook(t, tc.fixture)

			got := gradebook.FindByStudent(tc.student)
			for idx, gotGrade := range got {
				wantedGrade := tc.want[idx]
				if gotGrade != wantedGrade {
					t.Errorf("Expected: %v, got: %v", wantedGrade, gotGrade)
				}
			}
		})
	}
}

If we remove any of the fixture files, we will see how the test is marked as failed due to the t.Fatalf invocation:

$ rm testdata/grades/valid.csv # We remove the fixture

$ go test ./... -count=1 -v -run=TestFindByStudent
=== RUN   TestFindByStudent
=== RUN   TestFindByStudent/ValidFixture
=== RUN   TestFindByStudent/EmptyFixture
--- FAIL: TestFindByStudent (0.00s)
    --- FAIL: TestFindByStudent/ValidFixture (0.00s)
        grades_test.go:8: Cannot create Gradebook: open testdata/grades/valid.csv: no such file or directory
    --- PASS: TestFindByStudent/EmptyFixture (0.00s)
FAIL
FAIL	_/Users/Ilija/Documents/fixtures	0.004s

By having another function that takes care of building the Gradebook, we can offload the complexity of the missing fixtures outside of the tests themselves. While these concepts are simple, they’re powerful as they lead to cleaner tests with local test functions that are easy to maintain.

EDIT October 8, 2019: As Andreas Schröpfer suggested in the comments, it’s more idiomatic Go when the function receives a io.Reader instead of a file path. I have updated the example code and the article to reflect that. Thanks Andreas!