Ignoring Your Test Suite


If you practice any form of test-driven development or automated testing you’ve likely come across some kind of test framework. If you work with .NET you might have used nUnit, MSTest, or xUnit. These are all great tools that let you decorate classes and methods with attributes to define a test suite. These frameworks also integrate nicely with test runners that can run your tests as part of your normal “check-in dance” and/or continuous integration efforts. While unit testing frameworks are indispensable for nearly any kind of automated testing, they all (or at least nearly all) share one terrible feature: the ‘ignore’ attribute. We’ve been having a little discussion about this subject surrounding a pull request on the SpecEasy project (an OSS test framework that I occasionally contribute to), and I thought I’d jot down some thoughts. In short: I don’t think a test framework should provide a built-in means to ignore tests, because doing so encourages bad habits.

Needles In The Haystack

The ‘ignore’ attribute provided by a unit testing framework lets you take a single test (or spec or fact or whatever your framework calls the smallest unit of your test suite that can pass or fail) and tell the test runner to skip over it. The test runner will typically report the total number of tests that passed, failed, and were ignored/skipped when the test run is complete. Test runners with GUIs showing red and green status indicators for passed and failed tests might also have a separate indicator for tests that were ignored. Here’s how a test suite with ignored tests looks in the test runner GUI that comes with ReSharper:

[Screenshot: ReSharper test runner results showing an all-green run that includes ignored tests]

You might notice that, for all intents and purposes, this looks like a 100% successful test run. The status bar at the top is all green, and we have a nice little green check mark next to the document tab representing this test run. In a manner of speaking this actually was a 100% successful test run in that all of the tests that were run passed. There were, however, a number of tests that were ignored. You can see one such test in the “zoomed-in” view of the test results shown above, but in a sea of dozens, hundreds, or thousands of tests you’re not likely to notice or pay much mind to the handful being ignored.
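To make the mechanics concrete, here’s a minimal sketch of what an ignored test looks like in NUnit; the class, test names, and reason string are made up for illustration. xUnit expresses the same idea with a skip reason on the attribute, e.g. [Fact(Skip = "...")].

```csharp
using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Adds_line_items_correctly()
    {
        // Runs normally: it can pass or fail and show up as green or red.
        Assert.AreEqual(30, 10 + 20);
    }

    [Test]
    [Ignore("Broke after the tax refactoring -- will come back to this later")]
    public void Applies_tax_to_each_line_item()
    {
        // The runner skips this body entirely, so it can never fail a build.
        Assert.Fail("This assertion never executes while the test is ignored.");
    }
}
```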

Justification

Why do tests get marked with an ‘ignore’ attribute? In my experience, tests get ignored for one very simple reason: to make the test run pass. Something was probably refactored, and a test started failing. Maybe the failure is pointing out a legitimate bug that’s been introduced by recent changes, or maybe the test in question simply isn’t valid in its current form anymore. The bottom line is this: there’s not enough time to figure out the root cause of the failing test right now, so it gets marked as ignored with a silent promise to return and fix it up later. I don’t really need to tell you how the rest of that story goes.

You could argue that marking tests as ignored is better than just commenting them out (you don’t do that, do you?), but I think that’s like saying that stealing from a rich person isn’t as bad as stealing from a poor person: it’s still theft. You might also argue that marking tests as ignored is better than letting them fail, but I disagree. If you don’t have time to fix the test right away, letting it fail is a far better option than marking it with an ‘ignore’ attribute. Failure is an option.

Embrace Failure

A good development team notices failed tests because failed tests create failed continuous integration (CI) builds, and good development teams hate failed CI builds. I’m aware that most CI servers and test runners have options to fail a test run when it contains a certain number of ignored/skipped tests, but if you’re going to use the presence of ignored tests to fail a build, why not just let the tests fail and fail the build that way?
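To be clear about what that option usually amounts to: it’s a post-run check against the results file the runner produced. Here’s a hypothetical sketch in that spirit; the file name and the "skipped" attribute on the root element are assumptions, since result formats vary by runner and version.

```csharp
using System;
using System.Xml.Linq;

class CheckForIgnoredTests
{
    static int Main(string[] args)
    {
        // Assumption: the test runner wrote a results XML file whose root
        // element carries a "skipped" count. Adjust for your runner's schema.
        var resultsPath = args.Length > 0 ? args[0] : "TestResult.xml";
        var root = XDocument.Load(resultsPath).Root;
        var skipped = (int?)root.Attribute("skipped") ?? 0;

        if (skipped > 0)
        {
            Console.Error.WriteLine(skipped + " test(s) were ignored/skipped; failing the build.");
            return 1; // a non-zero exit code fails the build step on most CI servers
        }

        return 0;
    }
}
```

If you’re going to maintain a gate like that anyway, the simpler path is the one argued for here: let the test fail and let the red build do its job.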

Once a test is failing, it will need to get fixed. If the test is failing on a CI server that is visible to the entire team, everyone should have a vested interest in getting it fixed as soon as possible. There are two possible outcomes at this point:

1 – Fix the test: After examining the failing test you might determine that there’s value in keeping it and making it work. You might also discover that the failing test is actually indicative of a problem in the application code. Either way: fixing the code so that the test passes is always the best option.

2 – Delete the test: Alternatively, you might determine that the test in question isn’t relevant or valuable anymore, and should just be deleted.

The test is either worth spending the time to fix now, or it’s not worth keeping around. Keeping a test around, but marking it as ‘ignored’ is like keeping that old video game around because you think that you’re going to have time to re-play it some day and do all of the side missions.

Enabling Poor Habits

When a test framework has a built-in attribute for ignoring tests, it’s enabling a poor habit. Just like the xUnit test framework did away with the ‘ExpectedException’ attribute in favor of Assert.Throws, I think newer test frameworks should get rid of the ‘ignore’ attribute. If I’m successful in convincing the other contributors to the project, SpecEasy won’t have a built-in way to ignore tests.
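For comparison, here’s roughly what that shift looks like in code; the Account class and test are made up for illustration. The old attribute-based style (shown in comments, as it appeared in older NUnit/MSTest versions) pushes the expectation into metadata, while Assert.Throws states it at the exact call that should blow up.

```csharp
using System;
using Xunit;

public class AccountTests
{
    // Old attribute-based style, roughly:
    //
    //   [Test, ExpectedException(typeof(InvalidOperationException))]
    //   public void Withdraw_fails_when_balance_is_insufficient() { ... }
    //
    // Any InvalidOperationException thrown anywhere in the test would pass it.

    [Fact]
    public void Withdraw_fails_when_balance_is_insufficient()
    {
        var account = new Account(balance: 10m);

        // The expectation is an explicit assertion on one specific call.
        Assert.Throws<InvalidOperationException>(() => account.Withdraw(100m));
    }
}

// Hypothetical class under test, included only so the example compiles.
public class Account
{
    private readonly decimal balance;

    public Account(decimal balance)
    {
        this.balance = balance;
    }

    public void Withdraw(decimal amount)
    {
        if (amount > balance)
            throw new InvalidOperationException("Insufficient funds.");
    }
}
```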
