Author Archives: James Roper

About James Roper

I'm a senior developer working at VZ on Java, PHP and Scala backend services. I also contribute to a number of open source projects, and am the lead developer of the MongoDB Jackson Mapper, and Pebble. I love writing and speaking about innovative ways of using technology.

Writing a Play 2.0 Module

The intro

Hot off the press, Play 2.0 has arrived, and has been welcomed into the arms of a fast paced community that loves new things.  The day it was released, we started a new project here, and on that particular day and on this particular project we felt particularly daring.  So we decided to use Play 2.0 for our project.  New is an understatement for Play 2.0: it’s not just an incremental improvement on Play 1.x, many parts of it have been completely rewritten.  There is still much work to do, and one of the glaring gaps that has yet to be filled is modules.  At the time of writing, there is no official listing or repository of modules for Play 2.0, in stark contrast to the rich ecosystem of modules for Play 1.x.  Furthermore, there is no documentation on how to write a module.  So, given that modules tend to be very useful, and we were starting a new project, we very quickly ran into the need to write our own module, which we did, the MongoDB Jackson Mapper Play 2.0 Module.  To help the rest of the community of early Play 2.0 adopters, I’ve decided to write a (very) short guide on writing Play 2.0 modules.

The disclaimer

So did I mention that there was no documentation on writing modules, and very little in the way of example code to copy from?  What I’ve written may well not be the right way to do things.  But with no documentation, how am I to know?  All I know is that it’s working for us, and that’s good enough for me.  So if you happen to know what the right way to write Play 2.0 modules is, don’t bother commenting on this to tell me that I’m wrong.  Just write the damn documentation!

The setup

In Play 1.x, writing a module usually starts with running play new-module. Slight problem here:

$ play new-module
       _            _
 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
|  __/|_|\____|\__ (_)
|_|            |__/ 

play! 2.0,

This is not a play application!

Use `play new` to create a new Play application in the current directory,
or go to an existing application and launch the development console using `play`.

You can also browse the complete documentation at

Ok, so that doesn’t work.  Looks like there’s no way to create a new Play module.  So, I decided to simply write a vanilla SBT project.  I won’t go into the details of how to set up a new SBT project, but here are the Play-specific bits that you’ll need:

resolvers ++= Seq(
    Resolver.url("Play", url(""))(Resolver.ivyStylePatterns),
    "Typesafe Repository" at "",
    "Typesafe Other Repository" at ""
)

libraryDependencies += "play" %% "play" % "2.0"

libraryDependencies += "play" %% "play-test" % "2.0" % "test"

Now do whatever you need to do to open it up in your favourite IDE/editor (I personally use IntelliJ IDEA, so I use the sbt-idea plugin).

The code

It’s worth first noting that the module that I wanted to write only had to load a MongoDB connection pool, according to configuration supplied in application.conf, and manage the lifecycle of that pool.  The trait that needed to be implemented to do this is play.api.Plugin.  It has three methods, onStart(), onStop() and enabled().  If you’re looking for information on how to do things like intercept HTTP calls and define custom routes, then I suspect it’s easy to do (probably by just defining a routes file in your plugin), but I didn’t need to do that so I’m not going to pretend that I know how.

There’s not really much to say now; my plugin looks something like this:

class MongoDBPlugin(val app: Application) extends Plugin {
  private lazy val (mongo, db, mapper) = {
    // insert code here to initialise from application config
  }

  override def onStart() {
    // trigger lazy loading of the mongo field
    mongo
  }

  override def onStop() {
    // insert code here to close the connection pool
  }

  override def enabled() = !app.configuration.getString("mongodb.jackson.mapper")
      .filter(_ == "disabled").isDefined
}

The actual code does a fair bit more than this, but none of that is specific to how to write a plugin.

The finishing touch

So I’ve written my plugin, there’s now one thing left to do… tell Play Framework about it!  This is done by defining a play.plugins file in the root of the classpath (in the resources folder):

1000:MongoDBPlugin
The leading integer defines the priority of the plugin to be loaded.  The example I saw used 1000, so I decided to use that too.

The resolution

Now all I have to do to use this module is add it as a dependency.  Play will automatically pick it up, and it will be automatically started and stopped as necessary.  Happy hacking, and if you’re starting a Play 2.0 project, and want to use MongoDB, why not try the MongoDB Jackson Mapper Module!

Extending Guice

Guice is a framework that I had been looking forward to trying out for a while, but until recently I never had the opportunity.  Previously I had mostly used Spring (with a dash of PicoContainer), so when I got the opportunity to start using Guice, I naturally had a number of my favourite Spring features in mind as I started using it.  Very quickly I found myself wanting an equivalent of Spring’s DisposableBean.  Guice is focussed on doing one thing and doing it well, and that thing is dependency injection.  Lifecycle management doesn’t really come into that, so I am not surprised that Guice doesn’t offer native support for disposing of beans.  There is one Guice extension out there, Guiceyfruit, that does offer reasonably complete per scope lifecycle support, however Guiceyfruit requires using a fork of Guice, which didn’t particularly appeal to me.  Besides, Guice is very simple, so I imagined that providing my own simple extensions to it would also be simple.  I was right.

Though, to be honest, while the extensions themselves are simple, it wasn’t that simple to work out how to write them.  On my first attempt, I gave up after Googling and trying things out myself for an hour.  On my second attempt, I almost gave up with this tweet.  But, I stuck with it, and eventually made my breakthrough. The answer was in InjectionListener. This listener is called on every component that Guice manages, including both components that Guice instantiates itself, and components that are provided as instances to Guice.

Supporting Disposables

So, I had my disposable interface:

public interface Disposable {
  void dispose();
}

and I wanted any component that implemented this interface to have its dispose() method called when my application shut down.  Naturally I had to maintain a list of components to dispose of:

final List<Disposable> disposables = Collections.synchronizedList(new ArrayList<Disposable>());

Thread safety must be taken into consideration, but since I only expected this list to be accessed when my application was starting up and shutting down, a simple synchronized list was sufficient; no need to worry about performant concurrent access.
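As an aside, the reason a plain synchronized wrapper suffices can be seen in a small JDK-only sketch (the class name is mine, not from the extension): each individual add is atomic, which is all this use case needs.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SynchronizedListDemo {
  public static void main(String[] args) throws InterruptedException {
    // A plain ArrayList wrapped so that individual operations are atomic
    final List<Integer> list = Collections.synchronizedList(new ArrayList<Integer>());
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 1000; i++) {
      final int n = i;
      pool.execute(new Runnable() {
        public void run() {
          list.add(n);
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    // All 1000 adds survive; a bare ArrayList could lose some under contention
    System.out.println(list.size());
  }
}
```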

My InjectionListener is very simple, it just adds disposables to this list after they’ve been injected:

final InjectionListener<Disposable> injectionListener = new InjectionListener<Disposable>() {
  public void afterInjection(Disposable injectee) {
    disposables.add(injectee);
  }
};

InjectionListeners are registered by registering a TypeListener that listens for events on types that Guice encounters.  My type listener checks if the type is Disposable (this actually isn’t necessary because we will register it using a matcher that matches only Disposable types, but it is defensive to do the check), and if so registers the InjectionListener:

TypeListener disposableListener = new TypeListener() {
  public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
    if (Disposable.class.isAssignableFrom(type.getRawType())) {
      @SuppressWarnings("unchecked")
      TypeEncounter<Disposable> disposableEncounter = (TypeEncounter<Disposable>) encounter;
      disposableEncounter.register(injectionListener);
    }
  }
};

Now I can register my TypeListener.  This is done from a module:

bindListener(new AbstractMatcher<TypeLiteral<?>>() {
      public boolean matches(TypeLiteral<?> typeLiteral) {
        return Disposable.class.isAssignableFrom(typeLiteral.getRawType());
      }
    }, disposableListener);

The last thing I need to do is bind my collection of disposables, so that when my app shuts down, I can dispose of them:

bind((TypeLiteral) TypeLiteral.get(Types.listOf(Disposable.class)))
    .toInstance(disposables);

So now when my app shuts down, I can look up the list of disposables and dispose of them:

for (Disposable disposable : (List<Disposable>) injector.getInstance(
    Key.get(Types.listOf(Disposable.class)))) {
  disposable.dispose();
}

If you decide to use this code in your own app, please be very wary of a potential memory leak. Any beans that are not singleton scoped will be added to the disposable list each time they are requested (per scope).  For my purposes, all my beans that required being disposed of were singleton scoped, so I didn’t have to worry about this.

Supporting annotation based method invocation scheduling

Happy that I now had a very simple extension with very little code for supporting automatic disposing of beans, I decided to try something a little more complex… scheduling. My app contains a number of simple scheduled tasks, and the amount of boilerplate for scheduling each of these was too much for my liking. My aim was to be able to do something like this:

@Schedule(delay = 5L, timeUnit = TimeUnit.MINUTES, initialDelay = 1L)
def cleanUpExpiredData() {
  // ...
}

(Yep, this app has a mixture of Scala and Java.) So, I started with my annotation:

// runtime retention so the scheduler can see the annotation via reflection
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Schedule {
    long delay();
    TimeUnit timeUnit() default TimeUnit.MILLISECONDS;
    long initialDelay() default 0;
}

The main difference this time is that I’m not listening for events on a particular type, but rather I want to check all types to see if they have a @Schedule annotated method. This is a little more involved, so I’m going to have a scheduler service that does this checking and the scheduling. Additionally it will make use of the disposable support that I just implemented:

public class SchedulerService implements Disposable {
  private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

  public boolean hasScheduledMethod(Class clazz) {
    for (Method method : clazz.getMethods()) {
      Schedule schedule = method.getAnnotation(Schedule.class);
      if (schedule != null) {
        return true;
      }
    }
    return false;
  }

  public void schedule(Object target) {
    for (final Method method : target.getClass().getMethods()) {
      Schedule schedule = method.getAnnotation(Schedule.class);
      if (schedule != null) {
        schedule(target, method, schedule);
      }
    }
  }

  private void schedule(final Object target, final Method method, Schedule schedule) {
    executor.scheduleWithFixedDelay(new Runnable() {
      public void run() {
        try {
          method.invoke(target);
        } catch (Exception e) {
          // log and carry on, otherwise the task is silently descheduled
        }
      }
    }, schedule.initialDelay(), schedule.delay(), schedule.timeUnit());
  }

  public void dispose() {
    executor.shutdown();
  }
}

Now in my module I instantiate one of these services:

final SchedulerService schedulerService = new SchedulerService();

I then implement my InjectionListener:

final InjectionListener<Object> injectionListener = new InjectionListener<Object>() {
  public void afterInjection(Object injectee) {
    schedulerService.schedule(injectee);
  }
};

and my TypeListener:

TypeListener typeListener = new TypeListener() {
  public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
    if (schedulerService.hasScheduledMethod(type.getRawType())) {
      encounter.register(injectionListener);
    }
  }
};

And then all I have to do is register my type listener, and also my scheduler service (so that it gets disposed properly):

  bindListener(Matchers.any(), typeListener);
  bind(SchedulerService.class).toInstance(schedulerService);

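Stripped of the Guice wiring, the whole reflection-plus-executor idea can be demonstrated with just the JDK. This sketch uses a simplified Schedule annotation of my own with only a delay attribute; the names here are mine, not from the app described above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
  // Runtime retention is required, otherwise getAnnotation() returns null
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  public @interface Schedule {
    long delay();
  }

  public static class Task {
    final CountDownLatch latch = new CountDownLatch(3);

    @Schedule(delay = 10)
    public void tick() {
      latch.countDown();
    }
  }

  public static void main(String[] args) throws Exception {
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    final Task task = new Task();
    // Reflect over the target's methods and schedule any @Schedule annotated ones
    for (final Method method : task.getClass().getMethods()) {
      Schedule schedule = method.getAnnotation(Schedule.class);
      if (schedule != null) {
        executor.scheduleWithFixedDelay(new Runnable() {
          public void run() {
            try {
              method.invoke(task);
            } catch (Exception e) {
              e.printStackTrace();
            }
          }
        }, 0, schedule.delay(), TimeUnit.MILLISECONDS);
      }
    }
    task.latch.await();   // wait until tick() has run three times
    executor.shutdown();
    System.out.println("ticked 3 times");
  }
}
```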
Although Guice doesn’t come with as much as Spring does out of the box, it is very simple to extend to meet your own requirements. If I needed many more container-like features, then maybe Spring would be a better tool for the job, but when I’m just after a dependency injection framework with a little sugar on top, Guice is a very nice and much lighter weight solution.

Jackson annotations with MongoDB

At VZ we are currently busy testing a new storage backend for the VZ feed.  We’ve pushed the existing backend to its limits and while so far it has served us well, we are finding that for what we want to do going forward, it just isn’t the right match for our requirements.  So we’ve spent some time investigating what the best backend will be, and we are now in the testing stage with MongoDB, loading it with our feed data and hammering it with load tests.  So far so good.

The backend of our feed service is implemented in Java, and talks JSON back to clients.  If you log in to VZ and use your favourite browser’s developer console to see XHR traffic, you can see the JSON that the feed service returns.  This JSON is generated from POJOs using Jackson, a very simple yet powerful, performant and flexible framework for serialising POJOs to JSON.

Looking carefully at the JSON, you’ll notice that the feed data contains more than just a list of messages: there is a mixture of objects whose data is clearly generated on the fly by the feed service, and other objects that the feed service doesn’t really care about; it just receives them from the services that generate them, and passes them to the client as is.  For example, information about a particular photo, or a gadget, or a status update, or a new friend.  This data is represented by many different POJOs in the feed, and in some cases specific processing is done on them, but mostly the feed receives them from the various services that generate them, and passes them back to the client untouched.

All these POJOs have Jackson annotations on them, and we know that they work well with Jackson.  With our old storage backend, we simply serialised them to JSON to store them.  One of the reasons for going with MongoDB is that we wanted to be able to easily query our data, to get statistics and a better understanding of how the service was being used.  What we wanted to avoid though was having to either rewrite the POJOs, or make MongoDB equivalent copies of them, in order to store them in MongoDB.  What we really wanted to do is reuse the Jackson annotations on them so that we could store them as is in MongoDB.

A Google search revealed that there already was a mapper out there by Grzegorz Godlewski that did this, however this mapper had a few problems in our eyes.  For one, it required a fork of the Mongo Java Driver, and there was no indication that this fork would ever be pulled back into the stable driver.  It also said that it was experimental and not production ready, and we weren’t about to trust our feed with an experimental technology that didn’t have a clear future.  It is worth saying though that Grzegorz’s mapper is quite innovative, it serialises/deserialises objects directly to/from the BSON that MongoDB speaks with the client.  This would make it the most performant mapper out there.  But we didn’t feel that the technology was mature enough for our use case.

However, as Grzegorz pointed out, plugging a custom parser/generator into Jackson is a simple thing to do, so we decided to implement our own mapper that parsed/generated the MongoDB map-like DBObjects.  In addition to this, we implemented a very lightweight interface that wraps the MongoDB DBCollection, called JacksonDBCollection.  This provides all the same methods that DBCollection provides, except that where appropriate, it replaces DBObject method parameters and return types with strongly typed POJO versions.  DBObjects still get used for querying, because you can’t express all queries using POJOs, but when you just want to use equality in your queries, you can use your POJO as the query.

It’s open source!

We’re pleased to announce that we’ve made this library available as an open source library. You can download the source code, and contribute to it yourself, at GitHub.  If you don’t want the source code, but just want to start using it in your project now, you can get it from the central maven repository:


The details

To give you a taste of what this framework looks like, I’ll show some coding examples.  Here is how to create a JacksonDBCollection:

JacksonDBCollection<MyPojo, String> coll = JacksonDBCollection.wrap(
    dbCollection, MyPojo.class, String.class);

The two type parameters are the type of the POJO that is being mapped, and the type of the ID of the POJO.  The reason we require the type of the ID of the POJO is for strongly typed querying by ID, and strongly typed retrieval of generated IDs.  Here is what my POJO looks like:

public class MyPojo {
  @ObjectId @Id public String id;
  public Integer someNumber;
  public List<String> someList;
}

Maybe not the best Java coding style with public mutable fields, but Jackson is flexible enough to support almost anything, and this keeps it simple for this blog post.  In the feed, we’re using immutable objects with private final fields, and @JsonCreator annotated constructors.

One issue that we encountered while implementing this is how to handle ObjectIds.  If you declare a property to have a type of ObjectId, it works without a problem.  But this is a POJO mapper, and some people may not want to have an ObjectId type in their POJO, particularly if they are reusing the POJO for the web.  So we added an annotation, @ObjectId, that can be put on any String or byte[] property, that tells the mapper to serialise/deserialise this property to/from ObjectId.

You’ll also see the use of the @javax.persistence.Id annotation.  This is really just a shorthand that we implemented for @JsonProperty("_id"), which has the added advantage of not having to use the Jackson views feature if you want to use the same POJO for the web and don’t want the name of the property to be _id.  You could just as easily name the field _id with no extra annotations.

Inserting an object is simple:

MyPojo pojo = new MyPojo();
pojo.someNumber = 10;
pojo.someList = Arrays.asList("foo", "bar");
WriteResult<MyPojo, String> result = coll.insert(pojo);

or with write concerns:

WriteResult<MyPojo, String> result = coll.insert(pojo, WriteConcern.MAJORITY);

We are letting MongoDB generate the ID for us here, which raises an important question: how do we find out what ID it generated?  The MongoDB Java Driver sets the ID on the object you passed in.  This is not practical with Jackson, because you may be using a custom serialiser or a @JsonCreator annotated factory method/constructor; in the case of the VZ feed our objects were immutable, making it impossible to do that.  We supported this by implementing our own WriteResult class, which, aside from wrapping all the methods from the MongoDB Java Driver WriteResult, provides a few methods such as getSavedId() and getSavedObject(), which deserialise the ID/object so you can obtain the ID:

String id = result.getSavedId();

Now using that ID we can load the object again:

MyPojo saved = coll.findOneById(id);

Equality based querying can be done using the POJO as a template:

MyPojo query = new MyPojo();
query.someNumber = 10;
for (MyPojo item : coll.find(query)) {
  // every item here has someNumber == 10
}
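Under the hood, equality-template querying amounts to turning the POJO’s non-null fields into a query document. This is only a conceptual, JDK-only sketch of that idea (the real mapper serialises via Jackson into DBObjects); the class and method names here are mine.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TemplateQueryDemo {
  public static class MyPojo {
    public String id;
    public Integer someNumber;
    public List<String> someList;
  }

  // Turn a template POJO's non-null public fields into an equality query map
  static Map<String, Object> toQuery(Object template) throws IllegalAccessException {
    Map<String, Object> query = new LinkedHashMap<String, Object>();
    for (Field field : template.getClass().getFields()) {
      Object value = field.get(template);
      if (value != null) {
        query.put(field.getName(), value);
      }
    }
    return query;
  }

  public static void main(String[] args) throws Exception {
    MyPojo query = new MyPojo();
    query.someNumber = 10;
    // Only the non-null field appears in the query document
    System.out.println(toQuery(query));
  }
}
```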

And loading partial objects can also be done using the POJO as a template, by setting any fields you want loaded to be something that isn’t null:

MyPojo query = new MyPojo();
query.someNumber = 10;
MyPojo template = new MyPojo();
template.someNumber = 1; // any non null value will do
for (MyPojo item : coll.find(query, template)) {
    assert (item.someList == null); // only someNumber was loaded
}

And when this isn’t enough, you can fall back to normal DBObject‘s for both the query and the template:

for (MyPojo item : coll.find(new BasicDBObject().append("someNumber", new BasicDBObject().append("$gt", 5)))) {
  // every item here has someNumber > 5
}

There are more examples for some more advanced use cases, for example using custom views, on the GitHub project page.


So here we have a new POJO mapper for MongoDB that you can use, and because it uses Jackson, it has a head start in being very powerful, flexible and performant. Some of our plans for future features include:

  • @Reference annotation support, loading dehydrated referenced objects containing just the ID, with convenience methods for hydrating these objects
  • Schema migration features, where the JacksonDBCollection can return option tuples containing either the old or the new object type based on what was detected

If you have any other features you’d like to see, please raise an issue in the GitHub project!

You did what in Scala?

This morning when I got into work, the first thing that anyone said to me was “you did what in Scala?”  Not the usual greeting I get in the morning… clearly I had stirred something up.  I knew exactly what this person was talking about, the evening before I committed some code, and then tweeted this:

Just added scala for the first time to an existing Java project. Not too shaby.

As soon as I saw the build passed on our CI server, I went home, but it caught the attention of my product manager, and he was very intrigued.  What I had in fact done was start writing unit tests in Scala for an existing Java service that I was working on.  Why did I do this?  A number of reasons:

  1. I’ve been meaning to learn Scala for at least a year.
  2. I’ve seen Scala unit tests before, and they look very cool, they’re very good at minimising boilerplate, and very easy to read and understand.
  3. At VZ, we are free to make sensible technology choices.  This ranges from what libraries we use, to what databases we use, to what languages we use.  Nothing is off limits, as long as we can provide a good argument as to why it’s better than the alternatives.  And when we do that, our managers trust us.

My product manager of course had no problems with me using Scala; we have another project here that uses Scala, and he thought I meant I had done some work on that, and was interested in knowing why.  After explaining that I had actually added Scala to the project I was supposed to be working on, he was completely fine, and that’s one of the things I love about working for VZ: we have the freedom to make our own decisions.

For those that are not familiar with Scala, here is a quick overview of how I introduced Scala into my existing Java project.

First, I did my research.  What unit testing frameworks are there in Scala?  You’ll quickly find that there are two popular frameworks, one called specs, and another called ScalaTest.  ScalaTest supports a number of different testing styles, including TDD and BDD, while specs only supports BDD.  I only wanted BDD, so both were equal to me at this point.  Further research showed that specs has good integration with my favourite mocking framework, Mockito, so I went with specs.  I suggest you do your own research for your own purposes, my comparison here is far from complete.

Next, since I’m using Maven, I needed to add Scala to my maven project.  I found a blog post that explained how to add Scala to a maven project in 4 steps, and I was able to build my project in no time.  I also added a dependency on the specs library, and configured the Maven surefire plugin to run any classes ending in Test or Spec, as per the instructions for integrating with Maven and JUnit in the specs documentation.  I use IntelliJ IDEA as my IDE, so I searched for a Scala plugin in my preferences, found one, installed it, and after a restart IDEA had Scala support.  The IDEA instructions say that you need to install the Scala SDK, but since I was using Maven, I could just add the scala compiler as a provided maven dependency, then go to the Scala compiler preferences and point IDEA at that dependency.
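For reference, the Maven side of this boiled down to registering a Scala compiler plugin in the pom so that Scala sources under src/test/scala get compiled alongside the Java. The fragment below is a sketch from memory (coordinates and configuration may differ from the blog post I followed):

```xml
<plugin>
  <groupId>org.scala-tools</groupId>
  <artifactId>maven-scala-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```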

Finally I had to write my tests.  Below is the first test that I wrote.  If you’re a Scala guru, I’m sure you’ll see things that I could have done more simply or where I haven’t followed conventions, so I’m happy for you to point them out to me; I’m still learning.

class WorkResultHandlerSpec extends SpecificationWithJUnit with Mockito {
  "Work result handler" should {
    val tracker = mock[WorkResultTracker]
    val handlerChain = mock[HandlerChain]
    val workUnit = WorkUnit.builder(JobType.TEST_MESSAGE, null).build
    val job = Job.builder(JobType.TEST_MESSAGE).build
    var handler = new WorkResultHandler(tracker)

    "call handler chain only once" in {
      handler.handle(job, workUnit, handlerChain)
      there was one(handlerChain).passToNextHandler(job, workUnit)
    }

    "pass the result to the tracker" in {
      val workResult = WorkResult.success
      handlerChain.passToNextHandler(job, workUnit) returns workResult
      handler.handle(job, workUnit, handlerChain)
      there was one(tracker).trackWorkResult(JobType.TEST_MESSAGE, workResult)
    }

    "return the result" in {
      val workResult = WorkResult.success
      handlerChain.passToNextHandler(job, workUnit) returns workResult
      handler.handle(job, workUnit, handlerChain) mustEq workResult
    }

    "track an exception as a failure" in {
      handlerChain.passToNextHandler(job, workUnit) throws new RuntimeException("Something bad happened")
      val workResult = handler.handle(job, workUnit, handlerChain)
      workResult.getStatus.isSuccess must_== false
      workResult.getMessage mustEq "Something bad happened"
      there was one(tracker).trackWorkResult(JobType.TEST_MESSAGE, workResult)
    }
  }
}

Testing permutations of configurations

Permutations are an everyday part of VZ development.  Most obviously, we have three different platforms, each with different names, different base URLs, different wordings and of course different colours.  But then we also have two different languages that we currently support, English and German.  On top of that we often do AB testing, where we’ll have different variants of the same feature displayed or implemented in slightly different ways, and we present the different ways to different users and then gather metrics to see which ways the users seem to prefer.  Finally, there are times when you have different functions, but the functions share much of their functionality.  You end up with a massive list of permutations of different ways the code can be executed: too many to ever test manually, and too many to write and maintain individual tests for.

The service that I’ve been spending a lot of time on at VZ is what we call the “notificator”.  It is responsible for generating all the HTML emails that the platform sends, from registration emails through to new message notifications, event invites, photo comments etc.  Each notification type shares a lot of its functionality with the other notification types, the emails all look very similar, and sometimes only differ by what resource keys are used to generate their wording.

There are many bugs that could be introduced in this system.  Here are some examples of things that I want to and can automatically test:

  • All generated HTML is valid markup
  • All keys that the templates use exist in our resource bundles.  When a key doesn’t exist, text like this ends up in the email: ???new.comment.action???
  • All URLs in links and images are absolute URLs and are to the right platform
  • Standard headers and titles in emails are correct

There are also many specific things for each notification type that I want to test.  The requirements I have mean that I can’t just run the tests for one notification type, or for one language, or for one platform, or for one AB testing variant.  I have to run the tests for every permutation of these.  Writing these tests manually would be a nightmare.  Fortunately, JUnit has a few features that can help us here.

Setting up configurations

Before we go into the details of how to use JUnit to help, we need to set up representations of our configurations. This can be most easily done using enums. For language, variants and platforms, we can use quite simple enums:

public enum Language {
  GERMAN("de"), ENGLISH("en");
  public final String abbreviation;
  Language(String abbreviation) {
    this.abbreviation = abbreviation;
  }
}

public enum Variant {
  // the AB testing variants
  A, B
}

public enum Platform {
  // platform constants omitted; each is constructed with its base URL
  ;
  public final String baseUrl;
  Platform(String baseUrl) {
    this.baseUrl = baseUrl;
  }
}

Sometimes these enums might already exist in some form in your code, or you’ll have to create them specifically for the tests. Using ones specific to your tests has the advantage that you can add metadata that is important to the tests, as I’ve done above with the base URLs for the platforms.

For the notification types, I wanted a bit more functionality, for example, code to run notification type specific assertions. There are many ways this could be implemented; I decided to do it using anonymous classes in an enum, implementing a method that accepts a jsoup Document to run assertions on:

public enum NotificationType {
  NEW_MESSAGE(new MessageData("Test Subject", "Test content")) {
    @Override
    public void runAssertions(Document body) {
      assertThat("Test Subject", equalTo(body.getElementById("subject").text()));
      assertThat("Test content", equalTo(body.getElementById("content").text()));
    }
  },
  GRUSCHEL(new GruschelData()),
  ;

  public final Object testData;
  NotificationType(Object testData) {
    this.testData = testData;
  }
  public void runAssertions(Document body) {
    // no type specific assertions by default
  }
}
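The trick being used here is Java’s constant-specific method bodies: an enum constant can override a method defined on the enum itself. A tiny self-contained illustration of just that language feature (the names here are mine, not from the notificator):

```java
public class EnumOverrideDemo {
  enum NotificationKind {
    // This constant carries its own behaviour via an anonymous class body
    NEW_MESSAGE {
      @Override
      String describe() {
        return "you have a new message";
      }
    },
    DEFAULT;

    // Default implementation, overridden per constant where needed
    String describe() {
      return "something happened";
    }
  }

  public static void main(String[] args) {
    System.out.println(NotificationKind.NEW_MESSAGE.describe());
    System.out.println(NotificationKind.DEFAULT.describe());
  }
}
```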

Using JUnit parameters

Now that I’ve got the different configurations, I can write a test that JUnit will run for every permutation of configurations. For my first attempt, I’m going to use JUnit parameters. This is by far the simplest way to do things. The first thing to do is declare the runner for the test class:

@RunWith(Parameterized.class)
public class EmailGenerationTest {

Now I can set up my permutations. The way the JUnit parameterized runner works is you annotate a method with @Parameterized.Parameters, and that method must return a collection of object arrays, each nested array being the set of arguments to pass to the test’s constructor for each permutation. I’m going to implement this like so:

private final Variant variant;
private final NotificationType type;
private final Platform platform;
private final Language language;

public EmailGenerationTest(Variant variant, NotificationType type, Platform platform, Language language) {
  this.variant = variant;
  this.type = type;
  this.platform = platform;
  this.language = language;
}

@Parameterized.Parameters
public static Collection<Object[]> generateParameters() {
  Collection<Object[]> params = new ArrayList<Object[]>();
  for (Variant variant : Variant.values()) {
    for (NotificationType type : NotificationType.values()) {
      for (Platform platform : Platform.values()) {
        for (Language language : Language.values()) {
          params.add(new Object[] {variant, type, platform, language});
        }
      }
    }
  }
  return params;
}
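Since the number of parameter sets multiplies with every new axis, it can be worth seeing the growth concretely. Here is a reduced, runnable two-axis version of the generator (enum values are illustrative, not the real platforms or variants):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class PermutationDemo {
  enum Variant { A, B }
  enum Language { GERMAN, ENGLISH }

  // Nested loops generate the full Cartesian product of configurations
  static Collection<Object[]> generateParameters() {
    Collection<Object[]> params = new ArrayList<Object[]>();
    for (Variant variant : Variant.values()) {
      for (Language language : Language.values()) {
        params.add(new Object[] {variant, language});
      }
    }
    return params;
  }

  public static void main(String[] args) {
    for (Object[] p : generateParameters()) {
      System.out.println(Arrays.toString(p));
    }
    // 2 variants x 2 languages = 4 permutations; each new axis multiplies this
    System.out.println(generateParameters().size());
  }
}
```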

Finally I can write my tests. Each test method that I write will be run once for each permutation of parameters that I have generated.

@Test
public void noResourceKeysShouldBeMissing() {
  String html = ...// code to generate email given the parameters
  assertThat(html, not(containsString("???")));
}

@Test
public void notificationSpecificAssertionsShouldPass() {
  Document body = ...// code to generate jsoup document of the email given the parameters
  type.runAssertions(body);
}

This works very nicely: I can add new notification types, variants, languages and platforms, and I only have to change my tests in one place specific to that configuration.  I can also add new general tests in one place, and they get run for every permutation. However, there is one problem: JUnit names each set of parameters with a sequential number. Working out which number relates to which permutation can be difficult, especially considering that we are dynamically generating the parameters.  Here’s an example of what such a test run looks like in IntelliJ IDEA:

[Screenshot: parameterised test run in IntelliJ IDEA]

You can see that I don’t get much information. Maven test runners are similarly unhelpful. However, there is another strategy you can use to make sure you have the right information about failures.

Custom suites

This method is quite involved; if you only have a handful of permutations, it’s certainly not worth it. In my case I have many hundreds of permutations, so it’s invaluable. The idea is that for each configuration type, we have a custom test suite, and these get nested together to form our permutations. We can give each suite a name according to the configuration parameter it’s for, so we can easily work out which permutation of configurations failed. To start off with, I’m going to write an abstract runner that simply has a name and a list of child runners.  This will be the building block for my tree of runners.

public static class NamedParentRunner extends ParentRunner<Runner> {
  private final List<Runner> runners;
  private final String name;

  protected NamedParentRunner(Class<?> klass, List<Runner> runners, String name) throws InitializationError {
    this.runners = runners; = name;
  }

  @Override
  protected List<Runner> getChildren() {
    return runners;
  }

  @Override
  protected Description describeChild(Runner child) {
    return child.getDescription();
  }

  @Override
  protected void runChild(Runner child, RunNotifier notifier) {;
  }

  @Override
  protected String getName() {
    return name;
  }
}

Now I’m going to write a test runner that will instantiate each test and run the methods on it.  I’ll extend the existing JUnit class runner because I don’t want to reimplement all the logic to do with looking up methods:

private static class TestRunner extends BlockJUnit4ClassRunner {
  private final Variant variant;
  private final NotificationType type;
  private final Platform platform;
  private final Language language;

  private TestRunner(Class<?> klass, Variant variant, NotificationType type,
      Platform platform, Language language) throws InitializationError {
    this.variant = variant;
    this.type = type;
    this.platform = platform;
    this.language = language;
  }

  @Override
  public Object createTest() throws Exception {
    return new EmailGenerationTest(variant, type, platform, language);
  }

  @Override
  protected String getName() {
  }

  @Override
  protected String testName(final FrameworkMethod method) {
    return String.format(method.getName() + "[%s-%s-%s-%s]",,,,;
  }

  @Override
  protected void validateConstructor(List<Throwable> errors) {
    // skip the default validation: our test constructor deliberately takes parameters
  }

  @Override
  protected Statement classBlock(RunNotifier notifier) {
    return childrenInvoker(notifier);
  }
}

Note that the name of this runner is the language: it is going to be the innermost runner, and the language forms the innermost list.  The createTest() method is the most important to implement here; it actually instantiates the test class with the right config.  testName() is also very important: it should uniquely identify the test with its config, and it’s what tools like Maven will display as the name of the test.  Naming it appropriately will allow you to easily see which config the test failed under.

Now I’m going to write my custom runner that I will pass to the @RunWith annotation; it will build up a tree of nested NamedParentRunners.

public static class EmailGenerationRunner extends Suite {
  public EmailGenerationRunner(Class<?> klass) throws InitializationError {
    super(klass, createChildren(klass));
  }

  private static List<Runner> createChildren(Class<?> klass) throws InitializationError {
    List<Runner> variants = new ArrayList<Runner>();
    for (Variant variant : Variant.values()) {
      List<Runner> types = new ArrayList<Runner>();
      for (NotificationType type : NotificationType.values()) {
        List<Runner> platforms = new ArrayList<Runner>();
        for (Platform platform : Platform.values()) {
          List<Runner> languages = new ArrayList<Runner>();
          for (Language language : Language.values()) {
            languages.add(new TestRunner(klass, variant, type, platform, language));
          }
          platforms.add(new NamedParentRunner(klass, languages,;
        }
        types.add(new NamedParentRunner(klass, platforms,;
      }
      variants.add(new NamedParentRunner(klass, types,;
    }
    return variants;
  }
}

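Structurally, what this suite builds is a four-level tree of named runners (variant, then type, then platform, then language) whose leaves are the actual test instances, one leaf per permutation. A stand-alone sketch of that shape using plain objects (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class RunnerTreeSketch {
    // A bare-bones stand-in for the tree of named parent runners
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<Node>();
        Node(String name) { = name; }
    }

    // Each leaf corresponds to one TestRunner, i.e. one permutation
    static int countLeaves(Node node) {
        if (node.children.isEmpty()) return 1;
        int total = 0;
        for (Node child : node.children) total += countLeaves(child);
        return total;
    }

    // Adds one level of named children per configuration level, recursively
    static void buildLevel(Node parent, String[][] levels, int depth) {
        if (depth == levels.length) return;
        for (String name : levels[depth]) {
            Node child = new Node(name);
            parent.children.add(child);
            buildLevel(child, levels, depth + 1);
        }
    }

    public static void main(String[] args) {
        // variants -> types -> platforms -> languages, two of each: 2^4 = 16 leaves
        String[][] levels = { {"A", "B"}, {"welcome", "reminder"}, {"ios", "android"}, {"en", "de"} };
        Node root = new Node("suite");
        buildLevel(root, levels, 0);
        System.out.println(countLeaves(root)); // 16
    }
}
```

Because every intermediate node carries the name of its configuration value, a failure’s position in the tree tells you the full permutation, which is exactly what the sequential numbering couldn’t.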
This is a fair bit more code than our initial attempt, and it’s also a lot of code for a single test class. But when you consider that this single test is running hundreds of sub-tests that exercise the core functionality of my application, it’s not so bad. And the results are really quite nice. This is now what it looks like in IDEA: I get a tree of permutations and can click to expand them to see what passed and what failed:

[Image: Custom suite test run in IntelliJ IDEA]

So now we’ve seen some quite advanced methods for testing many permutations of configurations over the same code in JUnit. Since implementing this in notificator, I’ve been able to much more confidently make major refactorings of my templates, as well as add new notification types, without having to worry about manually checking every platform, language and variant combination. I hope this will help you in the same way.

You can download the above example code from GitHub.