Testing permutations of configurations


Permutations are an everyday part of VZ development.  Most obviously, we have three different platforms, each with different names, different base URLs, different wordings and of course different colours.  We also currently support two languages, English and German.  On top of that we often do AB testing, where we implement variants of the same feature in slightly different ways, present the different variants to different users, and gather metrics to see which the users prefer.  Finally, there are times where you have different features, but the features share much of their functionality.  You end up with a massive list of permutations of different ways the code can be executed: too many to ever test manually, and too many to write and maintain individual tests for.

The service that I’ve been spending a lot of time on at VZ is what we call the “notificator”.  It is responsible for generating all the HTML emails that the platform sends, from registration emails through to new message notifications, event invites, photo comments etc.  Each notification type shares a lot of its functionality with the other notification types, the emails all look very similar, and sometimes only differ by what resource keys are used to generate their wording.

There are many bugs that could be introduced in this system.  Here are some examples of things that I want to and can automatically test:

  • All generated HTML is valid markup
  • All keys that the templates use exist in our resource bundles.  When a key doesn’t exist, text like this ends up in the email: ???new.comment.action???
  • All URLs in links and images are absolute URLs and are to the right platform
  • Standard headers and titles in emails are correct

There are also many specific things for each notification type that I want to test.  The requirements I have mean that I can’t just run the tests for one notification type, or for one language, or for one platform, or for one AB testing variant.  I have to run the tests for every permutation of these.  Writing these tests manually would be a nightmare.  Fortunately, JUnit has a few features that can help us here.
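To get a feel for how quickly this explodes, here is a small sketch of the arithmetic. The enum values mirror the ones defined later in this post; the count of twenty notification types is hypothetical, purely for illustration.

```java
// Why manual testing doesn't scale: even with modest counts,
// the number of permutations multiplies out quickly.
public class PermutationCount {
  enum Language { GERMAN, ENGLISH }
  enum Variant { SMALL_PROFILE_PICS, LARGE_PROFILE_PICS }
  enum Platform { SCHUELERVZ, STUDIVZ, FREUNDEVZ }

  public static void main(String[] args) {
    int notificationTypes = 20; // hypothetical number of notification types
    int total = Language.values().length * Variant.values().length
        * Platform.values().length * notificationTypes;
    // 2 languages x 2 variants x 3 platforms x 20 types
    System.out.println(total);
  }
}
```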

Setting up configurations

Before we go into the details of how to use JUnit to help, we need to set up representations of our configurations. This can be most easily done using enums. For language, variants and platforms, we can use quite simple enums:

public enum Language {
  GERMAN("de"), ENGLISH("en");
  public final String abbreviation;
  Language(String abbreviation) {
    this.abbreviation = abbreviation;
  }
}
public enum Variant {
  SMALL_PROFILE_PICS, LARGE_PROFILE_PICS
}
public enum Platform {
  SCHUELERVZ("http://www.schuelervz.net"),
  STUDIVZ("http://www.studivz.net"),
  FREUNDEVZ("http://www.freundevz.net");
  public final String baseUrl;
  Platform(String baseUrl) {
    this.baseUrl = baseUrl;
  }
}

Sometimes these enums might already exist in some form in your code, or you'll have to create them specifically for the tests. Using ones specific to your tests has the advantage that you can add metadata that is important to the tests, as I've done above with the base URLs for the platforms.
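As an example of putting that metadata to work, here is a minimal sketch of the absolute-URL check. The extractLinks step is left out and the assertLinksAreAbsolute helper is hypothetical; in the real tests jsoup would extract the links from the generated document.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: the baseUrl metadata on the Platform enum lets a generic test
// assert that every link in a generated email points at the right platform.
public class PlatformUrlCheck {
  enum Platform {
    SCHUELERVZ("http://www.schuelervz.net"),
    STUDIVZ("http://www.studivz.net"),
    FREUNDEVZ("http://www.freundevz.net");
    final String baseUrl;
    Platform(String baseUrl) { this.baseUrl = baseUrl; }
  }

  // Hypothetical helper: fails if any link is relative or for another platform.
  static void assertLinksAreAbsolute(Platform platform, List<String> links) {
    for (String link : links) {
      if (!link.startsWith(platform.baseUrl)) {
        throw new AssertionError(link + " does not start with " + platform.baseUrl);
      }
    }
  }

  public static void main(String[] args) {
    assertLinksAreAbsolute(Platform.STUDIVZ,
        Arrays.asList("http://www.studivz.net/messages"));
    System.out.println("ok");
  }
}
```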

For the notification types, I wanted a bit more functionality, for example, code to run notification-type-specific assertions. There are many ways this could be implemented; I decided to do it using anonymous classes in an enum, implementing a method that accepts a jsoup Document to run assertions on:

public enum NotificationType {
  NEW_MESSAGE(new MessageData("Test Subject", "Test content")) {
    @Override
    public void runAssertions(Document body) {
      assertThat(body.getElementById("subject").text(), equalTo("Test Subject"));
      assertThat(body.getElementById("content").text(), equalTo("Test content"));
    }
  },
  GRUSCHEL(new GruschelData()),
  ...;
  public final Object testData;
  NotificationType(Object testData) {
    this.testData = testData;
  }
  public void runAssertions(Document body) {
  }
}

Using JUnit parameters

Now that I’ve got the different configurations, I can write a test that JUnit will run for every permutation of configurations. For my first attempt, I’m going to use JUnit parameters. This is by far the simplest way to do things. The first thing to do is declare the runner for the test class:

@RunWith(Parameterized.class)
public class EmailGenerationTest {

Now I can set up my permutations. The way the JUnit parameterized runner works is you annotate a method with @Parameterized.Parameters, and that method must return a collection of object arrays, each nested array being the set of arguments to pass to the test's constructor for one permutation. I'm going to implement this like so:

private final Variant variant;
private final NotificationType type;
private final Platform platform;
private final Language language;
public EmailGenerationTest(Variant variant, NotificationType type, Platform platform, Language language) {
  this.variant = variant;
  this.type = type;
  this.platform = platform;
  this.language = language;
}
@Parameterized.Parameters
public static Collection<Object[]> generateParameters() {
  Collection<Object[]> params = new ArrayList<Object[]>();
  for (Variant variant: Variant.values()) {
    for (NotificationType type: NotificationType.values()) {
      for (Platform platform: Platform.values()) {
        for (Language language: Language.values()) {
          params.add(new Object[] {variant, type, platform, language});
        }
      }
    }
  }
  return params;
}

Finally I can write my tests. Each test method that I write will be run once for each permutation of parameters that I have generated.

@Test
public void noResourceKeysShouldBeMissing() {
  String html = ...// code to generate email given the parameters
  assertThat(html, not(containsString("???")));
}
@Test
public void notificationSpecificAssertionsShouldPass() {
  Document body = ...// code to generate jsoup document of the email given the parameters
  type.runAssertions(body);
}
...

This works very nicely: I can add new notification types, variants, languages and platforms, and I only have to change my tests in the one place specific to that configuration.  I can also add new general tests in one place, and they get run for every permutation. However, there is one problem. JUnit names each set of parameters with a sequential number. Working out which number relates to which permutation can be difficult, especially considering that we are dynamically generating the parameters.  Here's an example of what such a test run looks like in IntelliJ IDEA:

[Image: parameterised test run in IntelliJ IDEA]

You can see that I don't get much information. Maven test runners are also similarly unhelpful. However, there is another strategy you can use to make sure you have the right information about failures.
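As an aside: if your JUnit version is 4.11 or newer, the Parameterized runner can do some of this naming for you. The @Parameters annotation accepts a name template whose {0}, {1}, … placeholders are filled in from each Object[] entry, so the sequential numbers become readable names. A sketch of how the method above would change:

```java
// JUnit 4.11+ only: the name template labels each permutation
// with its toString()'d parameter values instead of a bare index.
@Parameterized.Parameters(name = "{0}-{1}-{2}-{3}")
public static Collection<Object[]> generateParameters() {
  // ...same nested loops as above...
}
```

If you are stuck on an older JUnit, or want the results grouped as a tree rather than a flat list, read on.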

Custom suites

This method is quite involved; if you only have a handful of permutations, it's certainly not worth it. In my case I have many hundreds of permutations, and so it's invaluable. The idea is that for each configuration type, we have a custom test suite. These get nested together to form our permutations. We can give each suite a name according to which configuration parameter it's for, and so easily work out which permutation of configurations failed. To start off with, I'm going to write an abstract runner that simply has a name and a list of child runners.  This will be the building block for my tree of runners.

public static class NamedParentRunner extends ParentRunner<Runner> {
  private final List<Runner> runners;
  private final String name;
  protected NamedParentRunner(Class<?> klass, List<Runner> runners, String name) throws InitializationError {
    super(klass);
    this.runners = runners;
    this.name = name;
  }
  protected List<Runner> getChildren() {
    return runners;
  }
  protected Description describeChild(Runner child) {
    return child.getDescription();
  }
  protected void runChild(Runner child, RunNotifier notifier) {
    child.run(notifier);
  }
  protected String getName() {
    return name;
  }
}

Now I’m going to write a test runner that will instantiate each test and run the methods on it.  I’ll extend the existing JUnit class runner because I don’t want to reimplement all the logic to do with looking up methods:

private static class TestRunner extends BlockJUnit4ClassRunner {
  private final Variant variant;
  private final NotificationType type;
  private final Platform platform;
  private final Language language;
  private TestRunner(Class<?> klass, Variant variant, NotificationType type,
      Platform platform, Language language) throws InitializationError {
    super(klass);
    this.variant = variant;
    this.type = type;
    this.platform = platform;
    this.language = language;
  }
  public Object createTest() throws Exception {
    return new EmailGenerationTest(variant, type, platform, language);
  }
  protected String getName() {
    return language.name();
  }
  protected String testName(final FrameworkMethod method) {
    return String.format(method.getName() + "[%s-%s-%s-%s]",
        variant.name(), type.name(), platform.name(), language.name());
  }
  protected void validateConstructor(List<Throwable> errors) {
  }
  protected Statement classBlock(RunNotifier notifier) {
    return childrenInvoker(notifier);
  }
}

Note that the name of this runner is the language: it is going to be the innermost runner, and the language is going to be the innermost level of the tree.  The createTest() method is the most important to implement here; it actually instantiates the test class with the right config.  testName() is also very important: it should uniquely identify the test with its config, and it's what tools like Maven will display as the name of the test.  Naming it appropriately will allow you to easily see which config the test failed under.
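To make that concrete, here is a tiny standalone demo of the kind of name testName() produces, using the same format string with one example combination of the enum names plugged in (the test method name is taken from the earlier example):

```java
public class TestNameDemo {
  public static void main(String[] args) {
    // Mirrors testName() above: the method name plus the four config names.
    String name = String.format("noResourceKeysShouldBeMissing" + "[%s-%s-%s-%s]",
        "SMALL_PROFILE_PICS", "NEW_MESSAGE", "STUDIVZ", "GERMAN");
    System.out.println(name);
  }
}
```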

Now I'm going to write my custom runner that I will pass to the @RunWith annotation; it will build up a tree of nested NamedParentRunners.

public static class EmailGenerationRunner extends Suite {
  public EmailGenerationRunner(Class<?> klass) throws InitializationError {
    super(klass, createChildren(klass));
  }
  private static List<Runner> createChildren(Class<?> klass) throws InitializationError {
    List<Runner> variants = new ArrayList<Runner>();
    for (Variant variant : Variant.values()) {
      List<Runner> types = new ArrayList<Runner>();
      for (NotificationType type : NotificationType.values()) {
        List<Runner> platforms = new ArrayList<Runner>();
        for (Platform platform : Platform.values()) {
          List<Runner> languages = new ArrayList<Runner>();
          for (Language language : Language.values()) {
            languages.add(new TestRunner(klass, variant, type, platform, language));
          }
          platforms.add(new NamedParentRunner(klass, languages, platform.name()));
        }
        types.add(new NamedParentRunner(klass, platforms, type.name()));
      }
      variants.add(new NamedParentRunner(klass, types, variant.name()));
    }
    return variants;
  }
}
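For completeness, the only change needed in the test class itself is to swap the runner declaration over (assuming the runner classes above are visible to it); the constructor and @Test methods stay exactly as they were in the parameterized version:

```java
// The test class now declares the custom suite instead of Parameterized.
@RunWith(EmailGenerationRunner.class)
public class EmailGenerationTest {
  // constructor and @Test methods unchanged from the parameterized version
}
```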

This is a fair bit more code than our initial attempt, and it's a lot of code for a single test class. But when you consider that this single test runs hundreds of sub-tests covering the core functionality of my application, it's not so bad. And the results are really quite nice. This is now what it looks like in IDEA; I get a tree of permutations and can click to expand them to see what passed and what failed:

[Image: custom suite test run in IntelliJ IDEA]

So now we've seen some quite advanced methods for testing many permutations of configurations over the same code in JUnit. Since implementing this in the notificator, I've been able to make major refactorings of my templates much more confidently, as well as add new notification types, without having to worry about manually checking every platform, language and variant combination. I hope this will help you in the same way.

You can download the above example code from GitHub.
