Wednesday, July 19, 2006

JUnit tests for known issues, part 2

A few days ago I considered different options for including known issue test cases (ones that you expect to fail) in a JUnit test suite in a way that wouldn't make the full test suite fail. I decided to adopt a solution that uses system properties to selectively enable such known issue test cases. Here's how I implemented it for Apache Jackrabbit using Maven 1 (we're currently working on migrating to Maven 2, so I'll probably post Maven 2 instructions later on).

The first thing to do is to make the known issue tests check for a system property that enables them. The example class below illustrates two ways of doing this: either making the full known issue test code conditional, or adding an early conditional return that skips the known issue. You can use either a single property like "test.known.issues" or separate properties that allow fine-grained control over which tests are run and which are skipped. I like to use the known issue identifier from the issue tracker as the controlling system property, so I can selectively enable the known issue tests for a single reported issue.
import junit.framework.TestCase;

public class ExampleTest extends TestCase {

    public void testFoo() {
        // run the known issue test code only when explicitly enabled
        if (Boolean.getBoolean("ISSUE-foo")) {
            // test code for "foo"
        }
    }

    public void testBar() {
        // skip the known issue test unless explicitly enabled
        if (!Boolean.getBoolean("ISSUE-bar")) {
            return;
        }
        // test code for "bar"
    }

}

Once this instrumentation is in place, the build system needs to be configured to pass the identified system properties to the code when requested. In Maven 1 this happens through the maven.junit.sysproperties setting in project.properties:
maven.junit.sysproperties=ISSUE-foo ISSUE-bar
ISSUE-foo=false
ISSUE-bar=false

This way the known issue tests are skipped during a normal "maven test" run, but can be selectively enabled either on the command line ("maven -DISSUE-foo=true test") or by modifying project.properties or build.properties.
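
Until then, I'd expect the Maven 2 equivalent to be a Surefire plugin configuration in the POM; something like the following untested sketch:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemProperties>
      <!-- known issue tests are disabled by default -->
      <property>
        <name>ISSUE-foo</name>
        <value>false</value>
      </property>
      <property>
        <name>ISSUE-bar</name>
        <value>false</value>
      </property>
    </systemProperties>
  </configuration>
</plugin>

The properties should then again be overridable on the command line with "mvn -DISSUE-foo=true test", but I haven't verified that yet.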

3 comments:

  1. That seems like a lot of extra work if you have a lot of tests - I wonder if it is better to put that into a suite() method and group them up that way?

    Even better - I'd suggest taking a look at using TestNG, where you can simply @group the tests :)
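
    For example, with the TestNG annotations something along these lines should do it (a quick sketch, not tested; names made up):

    import org.testng.annotations.Test;

    public class ExampleTest {

        // member of a "known-issues" group that is excluded by default
        // and only run when explicitly included in the TestNG configuration
        @Test(groups = { "known-issues" })
        public void testFoo() {
            // test code for "foo"
        }
    }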

  2. We are using a Very Large test suite for the DMS we've been working on, and of course the same problem has surfaced.

    In the end we settled with having a test runner compare the *output* of JUnit with an expected result. That has the advantage that the actual Java code doesn't need to be modified, and that the tests that are known to fail are actually executed (you want to know when they stop failing, right?).

    This is of particular importance when the system being tested behaves differently based on config/backend/supported features. Think different JCR implementations, or running Jackrabbit with a range of persistence layers.
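
    A rough sketch of the idea in JUnit terms (class and test names are made up here, and our actual runner reads the expected failures from a file):

    import java.util.Enumeration;
    import java.util.HashSet;
    import java.util.Set;

    import junit.framework.TestFailure;
    import junit.framework.TestResult;
    import junit.framework.TestSuite;

    public class ExpectedFailureRunner {

        public static void main(String[] args) {
            // tests currently known to fail
            Set expected = new HashSet();
            expected.add("testBar(ExampleTest)");

            // run the suite and collect the names of the failed tests
            TestResult result = new TestResult();
            new TestSuite(ExampleTest.class).run(result);
            Set actual = new HashSet();
            Enumeration failures = result.failures();
            while (failures.hasMoreElements()) {
                TestFailure failure = (TestFailure) failures.nextElement();
                actual.add(failure.failedTest().toString());
            }

            // fail only if the outcome differs from the expected one, so
            // we also notice when a known issue stops failing
            if (!actual.equals(expected)) {
                System.err.println("Unexpected results: " + actual);
                System.exit(1);
            }
        }
    }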

    Best regards, Julian

  3. Brett Porter:
    > That seems like a lot of extra work if you have a lot of tests - I wonder if it is better
    > to put that into a suite() method and group them up that way?

    Good point. For now the extra effort is minimal in Jackrabbit, as I've only used the above technique to skip two test methods out of the thousands of tests in the Jackrabbit test suite, and I only expect to use this approach when adding new tests that I expect to fail. But the suite() approach does sound like a good solution when there are a large number of such conditional test methods.
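
    Something like the following sketch, I suppose (class names hypothetical):

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class ExampleTestSuite {

        public static Test suite() {
            TestSuite suite = new TestSuite();
            suite.addTestSuite(StableTest.class); // the normal tests
            // known issue tests are grouped in a class of their own
            // and added to the suite only when explicitly requested
            if (Boolean.getBoolean("test.known.issues")) {
                suite.addTestSuite(KnownIssueTest.class);
            }
            return suite;
        }

    }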

    > Even better - I’d suggest taking a look at using TestNG, where you can simply
    > @group the tests :)

    Very true. I had a look at TestNG when evaluating the alternatives, and found it quite interesting, but for now we're stuck with JUnit in Jackrabbit. We might consider switching once we've migrated to Maven 2 and start thinking about upgrading to JUnit 4.

    Julian Reschke:
    > In the end we settled with having a test runner compare the *output* of JUnit
    > with an expected result.

    Oh, that's a clever approach! Thanks for sharing it. It probably doesn't fit the bill for me though, as I want to be able to selectively enable testing for individual known issues and still get a single SUCCESS/FAILURE status at the end of the entire test run, without having to dig into the actual test results to see whether a test I'm interested in fails or succeeds.
