Writing a Play 2.0 Module

The intro

Hot off the press, Play 2.0 has arrived, and has been welcomed into the arms of a fast paced community that loves new things.  The day it was released, we started a new project here, and on that particular day and on this particular project we felt particularly daring.  So we decided to use Play 2.0 for our project.  New is an understatement for Play 2.0: it’s not just an incremental improvement on Play 1.x, many parts of it have been completely rewritten.  There is still much work to do, and one of the glaring gaps that has yet to be filled is modules.  At the time of writing, there is no official listing or repository of modules for Play 2.0, in stark contrast to the rich ecosystem of modules for Play 1.x.  Furthermore, there is no documentation on how to write a module.  So, given that modules tend to be very useful, and we were starting a new project, we very quickly ran into the need to write our own module, which we did: the MongoDB Jackson Mapper Play 2.0 Module.  To help the rest of the community of early Play 2.0 adopters, I’ve decided to write a (very) short guide on writing Play 2.0 modules.

The disclaimer

So did I mention that there was no documentation on writing modules, and very little in the way of example code to copy from?  What I’ve written may well not be the right way to do things.  But with no documentation, how am I to know?  All I know is that it’s working for us, and that’s good enough for me.  So if you happen to know what the right way to write Play 2.0 modules is, don’t bother commenting on this telling me that I’m wrong.  Just write the damn documentation!

The setup

In Play 1.x, writing a module usually starts with running play new-module. Slight problem here:

$ play new-module
       _            _
 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
|  __/|_|\____|\__ (_)
|_|            |__/ 

play! 2.0, http://www.playframework.org

This is not a play application!

Use `play new` to create a new Play application in the current directory,
or go to an existing application and launch the development console using `play`.

You can also browse the complete documentation at http://www.playframework.org.

Ok, so that doesn’t work.  Looks like there’s no way to create a new Play module.  So, I decided to simply write a vanilla SBT project.  I won’t go into the details of how to set a new SBT project up, but here are the Play-specific bits that you’ll need:

resolvers ++= Seq(
    DefaultMavenRepository,
    Resolver.url("Play", url("http://download.playframework.org/ivy-releases/"))(Resolver.ivyStylePatterns),
    "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/",
    "Typesafe Other Repository" at "http://repo.typesafe.com/typesafe/repo/"
)

libraryDependencies += "play" %% "play" % "2.0"

libraryDependencies += "play" %% "play-test" % "2.0" % "test"

Now do whatever you need to do to open it up in your favourite IDE/editor (I personally use IntelliJ IDEA, so I use the sbt-idea plugin).

The code

It’s worth first noting that the module that I wanted to write only had to load a MongoDB connection pool, according to configuration supplied in application.conf, and manage the lifecycle of that pool.  The trait that needed to be implemented to do this is play.api.Plugin.  It has three methods, onStart(), onStop() and enabled().  If you’re looking for information on how to do things like intercept HTTP calls and define custom routes, then I suspect it’s easy to do (probably by just defining a routes file in your plugin), but I didn’t need to do that so I’m not going to pretend that I know how.

There’s not really much to say now; my plugin looks something like this:

class MongoDBPlugin(val app: Application) extends Plugin {
  private lazy val (mongo, db, mapper) = {
    // insert code here to initialise from application config
  }

  override def onStart() {
    // trigger lazy loading of the mongo field
    mongo
  }

  override def onStop() {
    mongo.close()
  }

  override def enabled() = !app.configuration.getString("mongodb.jackson.mapper")
      .filter(_ == "disabled").isDefined
}

The actual code does a fair bit more than this, but none of that is specific to how to write a plugin.
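
Incidentally, going by the enabled() check above, the plugin can be switched off from application.conf with a single setting:

mongodb.jackson.mapper=disabled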

The finishing touch

So I’ve written my plugin, there’s now one thing left to do… tell Play framework about it!  This is done by defining a play.plugins file in the root of the classpath (in the resources folder):

1000:play.modules.mongodb.jackson.MongoDBPlugin

The leading integer defines the priority of the plugin to be loaded.  The example I saw used 1000, so I decided to use that too.

The resolution

Now all I have to do to use this module is add it as a dependency.  Play will automatically pick it up, and it will be automatically started and stopped as necessary.  Happy hacking, and if you’re starting a Play 2.0 project, and want to use MongoDB, why not try the MongoDB Jackson Mapper Module!

VZ sponsors the Front-Trends Conference 2012

I’m very pleased to announce that VZ, as a gold sponsor (together with other sponsors), will be presenting the Front-Trends Conference 2012 to you this year.

The Front-Trends Conference takes place on 26-27 April 2012 in Warsaw and once again brings together the who’s who of front-end developers.

“THIS IS A GATHERING FOR FRONT-END LOVERS TO DISCOVER THE CURRENT TRENDS TO BUILD A PROFESSIONAL CAREER OUT OF INNOVATIVE …”

Conference topics: HTML5, UX, JavaScript, web design, CSS3, mobile and more.

A look at the list of speakers should make it clear to any frontend enthusiast that this is an absolute top event, and so this year we at VZ Netzwerke are sending not only developers to Warsaw, but also a “sponsorship”. Yeah.

Let’s have a beer (or two).
Are you going to Warsaw too? Who will we meet?

Extending Guice

Guice is a framework that I had been looking forward to trying out for a while, but until recently I never had the opportunity.  Previously I had mostly used Spring (with a dash of PicoContainer), so when I got the opportunity to start using Guice, I naturally had a number of my favourite Spring features in mind as I started using it.  Very quickly I found myself wanting an equivalent of Spring’s DisposableBean.  Guice is focussed on doing one thing and doing it well, and that thing is dependency injection.  Lifecycle management doesn’t really come into that, so I am not surprised that Guice doesn’t offer native support for disposing of beans.  There is one Guice extension out there, Guiceyfruit, that does offer reasonably complete per-scope lifecycle support; however, Guiceyfruit requires using a fork of Guice, which didn’t particularly appeal to me.  Besides, Guice is very simple, so I imagined that providing my own simple extensions to it would also be simple.  I was right.

Though, to be honest, while the extensions themselves are simple, it wasn’t that simple to work out how to write them.  On my first attempt, I gave up after Googling and trying things out myself for an hour.  On my second attempt, I almost gave up with this tweet.  But, I stuck with it, and eventually made my breakthrough. The answer was in InjectionListener. This listener is called on every component that Guice manages, including both components that Guice instantiates itself, and components that are provided as instances to Guice.

Supporting Disposables

So, I had my disposable interface:

public interface Disposable {
  void dispose();
}

and I wanted any component that implemented this interface to have its dispose() method called when my application shut down.  Naturally I had to maintain a list of components to dispose of:

final List<Disposable> disposables = Collections.synchronizedList(new ArrayList<Disposable>());

Thread safety must be taken into consideration, but since I only expected this list to be accessed when my application was starting up and shutting down, a simple synchronized list was sufficient; no need to worry about performant concurrent access.

My InjectionListener is very simple: it just adds disposables to this list after they’ve been injected:

final InjectionListener<Disposable> injectionListener = new InjectionListener<Disposable>() {
  public void afterInjection(Disposable injectee) {
    disposables.add(injectee);
  }
};

InjectionListeners are registered by registering a TypeListener that listens for events on types that Guice encounters.  My type listener checks if the type is Disposable (this actually isn’t necessary, because we will register it using a matcher that matches only Disposable types, but it is defensive to do the check), and if so registers the InjectionListener:

TypeListener disposableListener = new TypeListener() {
  public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
    if (Disposable.class.isAssignableFrom(type.getRawType())) {
      TypeEncounter<Disposable> disposableEncounter = (TypeEncounter<Disposable>) encounter;
      disposableEncounter.register(injectionListener);
    }
  }
};

Now I can register my TypeListener.  This is done from a module:

bindListener(new AbstractMatcher<TypeLiteral<?>>() {
  public boolean matches(TypeLiteral<?> typeLiteral) {
    return Disposable.class.isAssignableFrom(typeLiteral.getRawType());
  }
}, disposableListener);

The last thing I need to do is bind my collection of disposables, so that when my app shuts down, I can dispose of them:

bind((TypeLiteral) TypeLiteral.get(Types.listOf(Disposable.class)))
    .toInstance(disposables);

So now when my app shuts down, I can look up the list of disposables and dispose of them:

for (Disposable disposable : (List<Disposable>) injector.getInstance(
    Key.get(Types.listOf(Disposable.class)))) {
  disposable.dispose();
}

If you decide to use this code in your own app, please be very wary of a potential memory leak. Any beans that are not singleton scoped will be added to the disposable list each time they are requested (per scope).  For my purposes, all my beans that required being disposed of were singleton scoped, so I didn’t have to worry about this.
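
For reference, here’s a rough sketch of how these snippets might fit together in a single module. This is just how I would arrange it; the DisposableModule name is mine, not anything prescribed by Guice:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import com.google.inject.AbstractModule;
import com.google.inject.TypeLiteral;
import com.google.inject.matcher.AbstractMatcher;
import com.google.inject.spi.InjectionListener;
import com.google.inject.spi.TypeEncounter;
import com.google.inject.spi.TypeListener;
import com.google.inject.util.Types;

public class DisposableModule extends AbstractModule {

  @Override
  @SuppressWarnings({"unchecked", "rawtypes"})
  protected void configure() {
    // the list that is iterated over at shutdown
    final List<Disposable> disposables =
        Collections.synchronizedList(new ArrayList<Disposable>());

    // adds every injected Disposable to that list
    final InjectionListener<Disposable> injectionListener = new InjectionListener<Disposable>() {
      public void afterInjection(Disposable injectee) {
        disposables.add(injectee);
      }
    };

    // registers the injection listener for every Disposable type that Guice encounters
    TypeListener disposableListener = new TypeListener() {
      public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
        if (Disposable.class.isAssignableFrom(type.getRawType())) {
          TypeEncounter<Disposable> disposableEncounter = (TypeEncounter<Disposable>) encounter;
          disposableEncounter.register(injectionListener);
        }
      }
    };

    bindListener(new AbstractMatcher<TypeLiteral<?>>() {
      public boolean matches(TypeLiteral<?> typeLiteral) {
        return Disposable.class.isAssignableFrom(typeLiteral.getRawType());
      }
    }, disposableListener);

    // make the list retrievable from the injector as List<Disposable>
    bind((TypeLiteral) TypeLiteral.get(Types.listOf(Disposable.class)))
        .toInstance(disposables);
  }
}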

Supporting annotation based method invocation scheduling

Happy that I now had a very simple extension with very little code for supporting automatic disposing of beans, I decided to try something a little more complex… scheduling. My app contains a number of simple scheduled tasks, and the amount of boilerplate for scheduling each of these was too much for my liking. My aim was to be able to do something like this:

@Schedule(delay = 5L, timeUnit = TimeUnit.MINUTES, initialDelay = 1L)
def cleanUpExpiredData() {
  ...
}

(Yep, this app has a mixture of Scala and Java.) So, I started with my annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Schedule {
    long delay();
    TimeUnit timeUnit() default TimeUnit.MILLISECONDS;
    long initialDelay() default 0;
}

The main difference this time is that I’m not listening for events on a particular type, but rather I want to check all types to see if they have a @Schedule annotated method. This is a little more involved, so I’m going to have a scheduler service that does this checking and the scheduling. Additionally it will make use of the disposable support that I just implemented:

public class SchedulerService implements Disposable {
  private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

  public boolean hasScheduledMethod(Class clazz) {
    for (Method method : clazz.getMethods()) {
      Schedule schedule = method.getAnnotation(Schedule.class);
      if (schedule != null) {
        return true;
      }
    }
    return false;
  }

  public void schedule(Object target) {
    for (final Method method : target.getClass().getMethods()) {
      Schedule schedule = method.getAnnotation(Schedule.class);
      if (schedule != null) {
        schedule(target, method, schedule);
      }
    }
  }

  private void schedule(final Object target, final Method method, Schedule schedule) {
    executor.scheduleWithFixedDelay(new Runnable() {
      public void run() {
        try {
          method.invoke(target);
        } catch (Exception e) {
          // invoke() throws checked exceptions; a real implementation should log this
          e.printStackTrace();
        }
      }
    }, schedule.initialDelay(), schedule.delay(), schedule.timeUnit());
  }

  public void dispose() {
    executor.shutdown();
  }
}

Now in my module I instantiate one of these services:

final SchedulerService schedulerService = new SchedulerService();

I then implement my InjectionListener:

final InjectionListener<Object> injectionListener = new InjectionListener<Object>() {
  public void afterInjection(Object injectee) {
    schedulerService.schedule(injectee);
  }
};

and my TypeListener:

TypeListener typeListener = new TypeListener() {
  public <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter) {
    if (schedulerService.hasScheduledMethod(type.getRawType())) {
      encounter.register(injectionListener);
    }
  }
};

And then all I have to do is register my type listener, and also my scheduler service (so that it gets disposed properly):

  bindListener(Matchers.any(), typeListener);
  bind(SchedulerService.class).toInstance(schedulerService);
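
To show how this might hang together end to end, here’s a hedged sketch. CacheCleaner is a hypothetical bean of mine, and SchedulerModule/DisposableModule are assumed to be modules containing the bindings shown above, with Disposable, Schedule and SchedulerService in the same package:

import java.util.List;
import java.util.concurrent.TimeUnit;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.util.Types;

// hypothetical bean: because it has a @Schedule annotated method, the type listener
// registers it with the SchedulerService when Guice instantiates it
class CacheCleaner {
  @Schedule(delay = 5L, timeUnit = TimeUnit.MINUTES, initialDelay = 1L)
  public void cleanUpExpiredData() {
    // ...
  }
}

public class Bootstrap {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new DisposableModule(), new SchedulerModule());

    // creating the bean through Guice triggers the injection listener,
    // which schedules cleanUpExpiredData()
    injector.getInstance(CacheCleaner.class);

    // ... later, at shutdown, dispose of everything that was collected,
    // including the SchedulerService itself
    for (Disposable disposable : (List<Disposable>) injector.getInstance(
        Key.get(Types.listOf(Disposable.class)))) {
      disposable.dispose();
    }
  }
}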

Conclusion

Although Guice doesn’t come with as much as Spring does out of the box, it is very simple to extend to meet your own requirements. If I needed many more container-like features, then maybe Spring would be a better tool for the job, but when I’m just after a dependency injection framework with a little sugar on top, Guice is a very nice and much lighter-weight solution.

Sharing some love.

Don’t be surprised, but we had an awesome year. So we decided to give a little back and support some of our favourite tools and open source projects:

Eclipse
I don’t think we need a description here ;-)
http://www.eclipse.org/
500€
CumulusServer
“CumulusServer is a complete open source and cross-platform RTMFP server extensible by way of scripting.”
https://github.com/OpenRTMFP/Cumulus
500€


Thanks to the developers and keep up the good work.

Jackson annotations with MongoDB

At VZ we are currently busy testing a new storage backend for the VZ feed.  We’ve pushed the existing backend to its limits and while so far it has served us well, we are finding that for what we want to do going forward, it just isn’t the right match for our requirements.  So we’ve spent some time investigating what the best backend will be, and we are now in the testing stage with MongoDB, loading it with our feed data and hammering it with load tests.  So far so good.

The backend of our feed service is implemented in Java, and talks JSON back to clients.  If you log in to VZ and use your favourite browser’s developer console to see XHR traffic, you can see the JSON that the feed service returns.  This JSON is generated from POJOs using Jackson, a very simple yet powerful, performant and flexible framework for serialising POJOs to JSON.

Looking carefully at the JSON, you’ll notice that the feed data contains more than just a list of messages: there is a mixture of objects whose data is clearly generated on the fly by the feed service, and other objects that the feed service doesn’t really care about; it just receives them from the services that generate them, and passes them to the client as is.  For example, information about a particular photo, or a gadget, or a status update, or a new friend.  This data is represented by many different POJOs in the feed, and in some cases specific processing is done on them, but mostly the feed receives them from the various services that generate them, and passes them back to the client untouched.

All these POJOs have Jackson annotations on them, and we know that they work well with Jackson.  With our old storage backend, we simply serialised them to JSON to store them.  One of the reasons for going with MongoDB is that we wanted to be able to easily query our data, to get statistics and a better understanding of how the service was being used.  What we wanted to avoid, though, was having to either rewrite the POJOs, or make MongoDB-equivalent copies of them, in order to store them in MongoDB.  What we really wanted was to reuse the Jackson annotations on them so that we could store them as is in MongoDB.

A Google search revealed that there already was a mapper out there by Grzegorz Godlewski that did this, however this mapper had a few problems in our eyes.  For one, it required a fork of the Mongo Java Driver, and there was no indication that this fork would ever be pulled back into the stable driver.  It also said that it was experimental and not production ready, and we weren’t about to trust our feed with an experimental technology that didn’t have a clear future.  It is worth saying, though, that Grzegorz’s mapper is quite innovative: it serialises/deserialises objects directly to/from the BSON that MongoDB speaks over the wire.  This would make it the most performant mapper out there.  But we didn’t feel that the technology was mature enough for our use case.

However, as Grzegorz pointed out, plugging a custom parser/generator into Jackson is a simple thing to do, so we decided to implement our own mapper that parses/generates MongoDB’s map-like DBObjects.  In addition to this, we implemented a very lightweight interface that wraps the MongoDB DBCollection, called JacksonDBCollection.  This provides all the same methods that DBCollection provides, except that where appropriate, it replaces DBObject method parameters and return types with strongly typed POJO versions.  DBObjects still get used for querying, because you can’t express all queries using POJOs, however when you just want to use equality in your queries, you can use your POJO as the query.

It’s open source!

We’re pleased to announce that we’ve made this mapper available as an open source library. You can download the source code, and contribute to it yourself, at GitHub.  If you don’t want the source code, but just want to start using it in your project now, you can get it from the central Maven repository:

<dependency>
    <groupId>net.vz.mongodb.jackson</groupId>
    <artifactId>mongo-jackson-mapper</artifactId>
    <version>1.0</version>
</dependency>

The details

To give you a taste of what this framework looks like, I’ll show some coding examples.  Here is how to create a JacksonDBCollection:

JacksonDBCollection<MyPojo, String> coll = JacksonDBCollection.wrap(
    dbCollection, MyPojo.class, String.class);

The two type parameters are the type of the POJO that is being mapped, and the type of the ID of the POJO.  The reason we require the type of the ID of the POJO is for strongly typed querying by ID, and strongly typed retrieval of generated IDs.  Here is what my POJO looks like:

public class MyPojo {
  @ObjectId @Id public String id;
  public Integer someNumber;
  public List<String> someList;
}

Public mutable fields are maybe not the best Java coding style, but Jackson is flexible enough to support almost anything, and this keeps things simple for this blog post.  In the feed, we’re using immutable objects with private final fields and @JsonCreator annotated constructors.

One issue that we encountered while implementing this is how to handle ObjectIds.  If you declare a property to have a type of ObjectId, it works without a problem.  But this is a POJO mapper, and some people may not want to have an ObjectId type in their POJO, particularly if they are reusing the POJO for the web.  So we added an annotation, @ObjectId, that can be put on any String or byte[] property, which tells the mapper to serialise/deserialise this property to/from ObjectId.

You’ll also see the use of the @javax.persistence.Id annotation.  This is really just a shorthand that we implemented for @JsonProperty("_id"), which has the added advantage of not having to use the Jackson views feature if you want to use the same POJO for the web and don’t want the name of the property to be _id. You could just as easily name the field _id with no extra annotations.
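
To make the equivalence concrete, here is a small illustrative sketch of those three options (the class names are mine, and the imports assume Jackson 1.x):

import javax.persistence.Id;
import org.codehaus.jackson.annotate.JsonProperty; // Jackson 1.x; newer Jackson uses com.fasterxml.jackson.annotation

// the @Id shorthand described above
class PojoWithIdShorthand {
  @Id public String id;
}

// the same thing spelled out with plain Jackson
class PojoWithJsonProperty {
  @JsonProperty("_id") public String id;
}

// or no annotations at all: just name the field _id
class PojoWithPlainField {
  public String _id;
}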

Inserting an object is simple:

MyPojo pojo = new MyPojo();
pojo.someNumber = 10;
pojo.someList = Arrays.asList("foo", "bar");
WriteResult<MyPojo, String> result = coll.insert(pojo);

or with write concerns:

WriteResult<MyPojo, String> result = coll.insert(pojo, WriteConcern.MAJORITY);

We are letting MongoDB generate the ID for us here, which raises an important question: how do we find out what ID it generated? The MongoDB Java driver sets the ID on the object you passed in. This is not practical with Jackson, because you may be using a custom serialiser or a @JsonCreator annotated factory method/constructor; in the case of the VZ feed our objects were immutable, making it impossible to do that. We support this by providing our own WriteResult class, which, aside from wrapping all the methods from the MongoDB Java driver’s WriteResult, provides a few methods such as getSavedId() and getSavedObject(), which deserialise the ID/object so you can obtain the ID:

String id = result.getSavedId();

Now using that ID we can load the object again:

MyPojo saved = coll.findOneById(id);

Equality based querying can be done using the POJO as a template:

MyPojo query = new MyPojo();
query.someNumber = 10;
for (MyPojo item: coll.find(query)) {
    System.out.println(item.id);
}

And loading partial objects can also be done using the POJO as a template, by setting any fields you want loaded to be something that isn’t null:

MyPojo query = new MyPojo();
query.someNumber = 10;
MyPojo template = new MyPojo();
template.someNumber = 1;
template.id = "not null";
for (MyPojo item: coll.find(query, template)) {
    System.out.println(item.id);
    assert(item.someList == null);
}

And when this isn’t enough, you can fall back to normal DBObjects for both the query and the template:

for (MyPojo item : coll.find(new BasicDBObject("someNumber", new BasicDBObject("$gt", 5)))) {
    System.out.println(item.id);
}

There are more examples for some more advanced use cases, for example using custom views, on the GitHub project page.

Summary

So here we have a new POJO mapper for MongoDB that you can use, and due to the fact that it uses Jackson, it has a head start in being very powerful, flexible and performant. Some of our plans for future features include:

  • @Reference annotation support, loading dehydrated referenced objects containing just the ID, with convenience methods for hydrating these objects
  • Schema migration features, where the JacksonDBCollection can return option tuples containing either the old or the new object type based on what was detected

If you have any other features you’d like to see, please raise an issue in the GitHub project!