Diagnostic assertions

When we write an assertion statement, naturally we want to make the code expressive. We want the assertion statement to include all of the information that matters to the assertion, and no extraneous information. We want it to be clear to the reader just what we are asserting.

When writing assertion statements, test automators often neglect an important consideration: the diagnostic value of the message that is displayed if the assertion fails.

Given how carefully we craft our assertion statements, it’s a shame when the assertion failure message throws away important information, or when it swamps us with noise that hides the information that we want most.

Some assertion mechanisms naturally lose information when they create failure messages. Other mechanisms allow us to make the failure messages as clear and informative as the assertion statements themselves.

I will demonstrate using Java, JUnit, Hamcrest, and a few other libraries. You will find similar assertion mechanisms and features in other programming languages and libraries.


Let’s set up an example that we can use to explore the diagnostic value of different kinds of assertions.

A class. Here is a very simple class, an item with text:

public class Item {
    private final String text;

    public Item(String text) {
        this.text = text;
    }

    public String text() {
        return text;
    }
}
A test. Here is a test:

@Test
public void findItemReturnsAnItemWithTextFoo() {
    Item item = someObject.findItem();
    assert item.text().equals("foo");
}

The test asks someObject to find an item, then asserts that the item’s text is equal to "foo".

A fault. The code being tested has a fault: It gives the item the incorrect text value "bar" instead of the desired value "foo".

A failure. Given that fault, our assertion will fail. When the assertion statement executes, it will detect that the value is incorrect and throw an exception to indicate an assertion failure. Then JUnit will display the exception, including any diagnostic message contained in the exception.

Assertion Styles

There are many ways to express our desired assertion in Java, especially if we use assertion mechanisms from third-party libraries, such as JUnit, Hamcrest, and Hartley. Each of the following assertion statements correctly evaluates whether our item’s text is equal to "foo", and each correctly throws an exception if the text does not have the desired value.

Let’s look at each of these styles, and notice what information is included in the failure message, and what information is lost.

The Java assert Statement

Our example test expresses an assertion using a Java assertion statement:

assert item.text().equals("foo");

Notice that this short statement gives a lot of information about our intentions:

  • assert says that we will make an evaluation, and that the evaluation is so important to our test that we will mark the test as failed if the result is not as we’ve specified.
  • item says that we will evaluate an object called item (or some aspect of item).
  • text() says that we will evaluate some text.
  • The . between item and text() says that we will evaluate the text obtained from item.
  • .equals() says that we will compare whether item’s text is equal to some value.
  • "foo" says that we will compare the item’s text to the literal value "foo".

Each of those six pieces of information is essential to the assertion. If we remove any piece of information, the assertion loses its meaning. (Technically, we could extract the text into a local variable, so that our assertion need not refer to item, but let’s assume that we’ve chosen this particular phrasing because we want to make the source of the text crystal clear.)

If we run this test and execute this assertion, the assertion throws an exception, and JUnit emits this message:

java.lang.AssertionError

followed by a stack trace. The stack trace indicates the Java file name and line number from which the exception was thrown. That is, it indicates the file name and line number of our assertion statement. So if we want to understand the source of the failure, we have to navigate to the source code and read the assertion statement. It’s a good thing we took pains to make the assertion statement so clear, because that’s all of the information we have as we begin our investigation of the failure.

The (Mostly) Uninformative Failure Message

Note that the failure message itself tells us only that some assertion failed. It gives us no information at all (in the message itself) about what aspect of our assertion went wrong. Remember that our assertion statement expressed at least six important pieces of information. The failure message conveys only one of these: This thing that went awry was an assertion.

That’s a critical piece of information, of course, but what happened to the other five pieces, which we took such pains to express so clearly in the code?

They were thrown away by the nature of the assertion mechanism. The Java assertion statement consists of two parts: the assert keyword and a boolean expression. To execute an assertion statement, Java first evaluates the boolean expression. Then it assesses the result (true or false). If the result is true, execution continues with the following statement. If the result is false, the statement throws an exception.

In evaluating the boolean expression, the computer loses the information in the original expression, and saves only the true or false result. By the time the statement throws its exception, all of the other information about the expression has been lost.

The JUnit assertTrue() Method

Now let’s try another style of assertion, the assertTrue() method from JUnit:

assertTrue(item.text().equals("foo"));

This method produces the same error message as the bare Java assert statement:

java.lang.AssertionError

again supplemented by a stack trace that points to the assertion in the code.

The JUnit assertTrue() method has one advantage over the Java assert statement: You don’t have to tell the JVM to enable it. If you want the JVM to execute assert statements, you have to pass a special argument (-ea, short for -enableassertions) to the JVM. If you don’t tell the JVM to enable assert statements, it (quietly) ignores them.

Otherwise, the effects of assert and assertTrue() are similar. Each throws an AssertionError, and neither gives you any of the other information that you so carefully crafted into your assertion.

Improving the Java assert Statement with an Explanation

Both JUnit and Java offer a mechanism to compensate for the limitations of the bare assert and assertTrue() assertions: the explanatory message (which I’ll shorten to explanation).

With the Java assert statement, you add an explanatory message like this:

assert item.text().equals("foo") : "Item text should be foo";

If this assertion fails, JUnit displays this message:

java.lang.AssertionError: Item text should be foo

followed by the same stack trace as before.

That’s much more informative than before. In fact, our failure message tells us nearly everything that the code tells us.

Am I happy? Nope.

The big problem with this explanation mechanism is that it requires you to express the same idea twice, once in the boolean expression, and again in the explanation. As with other forms of comments in code, it is very easy (and common) for the comment to diverge from the code it describes. You end up with failure messages that mislead.

And even if the comment stays current with the code, it is a form of duplication. If the code changes, you also have to do the extra work of changing the comment.

Adding Explanations with JUnit assertTrue()

JUnit includes a form of assertTrue() method that takes an explanation as its first parameter:

assertTrue("Item text should be foo", item.text().equals("foo"));

I find this expression more awkward than the assert statement. With the assert statement, the explanation follows the entire assertion. With assertTrue(), the explanation interrupts the left-to-right phrasing of the assertion.

Still, this is better than no explanation at all. The assertTrue() explanation has the same effect as the assert statement explanation. If the assertion fails, JUnit displays this message:

java.lang.AssertionError: Item text should be foo

Still Missing: The Actual Value

By adding explanatory messages to our assertions, we can restore the information that is lost when the assertion evaluates our boolean expression.

But now I notice another bit of information that I wish were displayed in the failure message: The actual value of the item’s text. Thanks to our explanations, we know the value is not the desired "foo", but what is the value?

Of course, if we’re writing our own explanations, we could easily include the actual value in our explanatory message. But that takes extra work. Granted, it’s not a big burden, but it is extra work.
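For example, we could build the actual value into the explanation ourselves. Here is a minimal sketch; the helper class and the wording of the message are my own, not part of JUnit:

```java
// A hand-rolled explanation that captures the actual value,
// so a failure reads: "Item text should be foo, but was bar".
class ItemTextExplanation {
    static String explain(String actualText) {
        return "Item text should be foo, but was " + actualText;
    }
}
```

The assertion then becomes `assert item.text().equals("foo") : ItemTextExplanation.explain(item.text());`, which duplicates even more of the statement in the message.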

What if there were a way to get that information (almost) for free? The actual value is assessed somewhere during the evaluation, but it is then thrown away after the comparison is made and we’ve determined the boolean result of the evaluation. What if we could catch the value before it was thrown away?

We can do that. The key is to express the assertion in a way that retains both the actual value and the desired value, even after comparing them.

The JUnit assertEquals() Method

JUnit offers another style of assertion, a style that, when evaluated, retains the separation between the value you are evaluating and the value you are comparing it to. If you want to compare values for equality, the appropriate method is (appropriately) called assertEquals(). It looks like this:

assertEquals("foo", item.text());

Though this shifts the phrasing (the concept of equality now appears earlier in the expression, and the desired value now appears before the retrieval of the actual value), we still have all six of the pieces of information that matter to our assertion.

Our new expression is okay, if a little clumsy English-wise. At least it has all the pieces, and it expresses each piece only once.

What happens when we run it? JUnit displays this message:

org.junit.ComparisonFailure: expected:<[foo]> but was:<[bar]>

Very interesting. By passing the two pieces of information separately to the method, we enable the method to emit a more helpful failure message. We give it two pieces (the expected value and the actual value) and it gives those two pieces back to us in the message.

We now have a piece of information that we didn’t (and couldn’t) express directly in our test code: The actual value of the item’s text.

Note also that we no longer get a barely informative AssertionError that tells us only that something went wrong. Now we get a more specific exception, a ComparisonFailure. I find this slightly maddening. We invoked a method dedicated to equality, but the error message seems to remember only that we were doing some kind of comparison.

Still, this is way better than, “Hey, dude, something’s busted. You figure it out.”

And best of all: We got it (nearly) for free. We didn’t have to duplicate any information in our statement. We had only to rephrase our assertion, to rearrange the same six important pieces of information.

So that’s two and a half of our six pieces of information that now appear in the failure message:

  1. We’re asserting something.
  2. We’re asserting equality (this appears only partially in the failure message, so it counts only half).
  3. We’re comparing something to foo (note that we’ve lost the information that it’s a string, but let’s be generous and give this full credit).

And remember that we also get this bonus piece of information:

  • The actual value that we were evaluating.

This is an important addition. Let’s say that we now would like all seven of those pieces of information. And we’re getting three and a half of them.

JUnit assertEquals() with Explanation

JUnit offers a form of assertEquals() that takes an explanatory message. Let’s use that to restore a few of the pieces that we still don’t get for free:

assertEquals("Item text", "foo", item.text());

Now the failure message is:

org.junit.ComparisonFailure: Item text expected:<[foo]> but was:<[bar]>

We’re duplicating the ideas of item and text, and I think we’ve done it in a way that also expresses the relationship between the two.

But there’s far less duplication than in our earlier explanatory messages, so this is progress.

Can we do better?

The Hamcrest assertThat() Method

Every assertion involves a comparison of some kind. Steve Freeman and Nat Pryce took the very helpful step of separating the comparison from the assertion method. They accomplished this by introducing the concept of a matcher. A matcher is an object that represents some set of criteria, and knows how to determine whether a given value matches the criteria. Nat and Steve created a library of widely useful matchers called Hamcrest.1

Hamcrest also includes an assertThat() method that applies a matcher as an assertion:

assertThat(item.text(), equalTo("foo"));

The first parameter to assertThat() is the value that we want to evaluate. The second is a matcher object. Typically you supply a matcher by calling a factory method such as equalTo(). Hamcrest factory methods are named so that assertion expressions read informatively in the code. The assertion above says: Assert that item’s text (is) equal to “foo”.

When a Hamcrest assertThat() assertion fails, it throws an exception with a message like this:

Expected: “foo”
but: was “bar”

This failure message is similar in content to the JUnit assertEquals() failure message. The great advantage of Hamcrest-style assertions is that you can easily extend the set of available assertions. We will leave that as an exercise for the reader.
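To hint at what that exercise involves, here is a minimal, self-contained sketch of the matcher concept. Note that it defines its own tiny Matcher interface rather than implementing Hamcrest’s real one (which also supports failure descriptions via describeTo()):

```java
// A home-grown miniature of the matcher idea (not Hamcrest's actual API).
interface Matcher<T> {
    boolean matches(T actual);
    String description();
}

class EqualTo<T> implements Matcher<T> {
    private final T expected;

    private EqualTo(T expected) {
        this.expected = expected;
    }

    // Factory method, named so that assertions read well in the code.
    static <T> Matcher<T> equalTo(T expected) {
        return new EqualTo<>(expected);
    }

    public boolean matches(T actual) {
        return expected.equals(actual);
    }

    public String description() {
        return "equal to \"" + expected + "\"";
    }
}
```

Because the matcher carries both the criteria and a description of them, an assertThat()-style method can build an informative failure message from the matcher itself.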

Hamcrest also has a version of assertThat() that takes an explanation as a parameter. Normally all we need to do is describe the subject of the evaluation:

assertThat("Item text", item.text(), equalTo("foo"));

When this assertion fails, it emits a message similar to the one from JUnit assertEquals():

java.lang.AssertionError: Item text
Expected: “foo”
but: was “bar”

As with JUnit’s assertEquals(), at the expense of a little bit of redundancy, we have made our failure message express all of the information that matters to the assertion.

So our assertion messages read very similarly to JUnit’s assertion messages. But Hamcrest’s assertion statements read far more expressively in the code.

The Hartley assertThat() Method

For my own tests, I often take a step beyond Hamcrest. I often want to evaluate not only some subject, but more specifically some attribute or feature or property of the subject. Each of the examples in earlier sections evaluates not only item, but more specifically the item’s text as returned by its text() method.

I often find it helpful to write assertion statements that separate the subject from the feature. I have created an assertion method to help me do that:

assertThat(item, text(), is("foo"));

This assertion method2 takes three parameters. Each parameter is an object. The first object is the subject of the assertion. In this case, we are evaluating item.

The second parameter is the feature being evaluated. In the same way that Hamcrest matchers are objects that evaluate other objects, Hartley features are objects that extract values from other objects. The text() method is a factory method that produces a feature object that can retrieve the text from an item.

The third parameter is a matcher that compares the extracted feature to the desired criteria.
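I don’t know Hartley’s internals, but conceptually a feature might look something like this sketch (the interfaces and names here are my guesses, not Hartley’s actual code; the Item class is restated so the sketch stands alone):

```java
// A feature is an object that extracts a value from a subject.
interface Feature<S, V> {
    V valueOf(S subject);
}

// The Item class from the earlier example, restated for completeness.
class Item {
    private final String text;

    Item(String text) {
        this.text = text;
    }

    String text() {
        return text;
    }
}

class ItemText implements Feature<Item, String> {
    // Factory method, so assertions can read: assertThat(item, text(), ...)
    static Feature<Item, String> text() {
        return new ItemText();
    }

    public String valueOf(Item item) {
        return item.text();
    }

    @Override
    public String toString() {
        return "text"; // describes the feature in failure messages
    }
}
```

The toString() override is what lets the assertion mechanism name the feature in its failure message, as described in footnote 3.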

In the code, this reads just the same as the Hamcrest assertion:

Assert that item’s text is (equalTo) “foo”

But notice what happens when this assertion fails:

java.lang.AssertionError: Expected: Item text “foo”
but: was “bar”

With no redundancy in the assertion statement itself3, we now have a failure message that includes every important detail of the assertion.


  1. The word Hamcrest is an anagram of the word matchers. You can find Hamcrest matcher libraries for a variety of programming languages at http://hamcrest.org/.

  2. For this and other Java code that I find commonly useful when I automate tests, see my Hartley library.

  3. Note that I did have to do some extra work in order to convince the failure message to describe the subject and the feature. I had to implement the toString() method in both the Item class and in my feature object class. For that small amount of work, I can now use those classes in many assertions, and I gain the diagnostic expressiveness at no additional cost.

Using Git With Subversion Repositories

Some of my clients want to use git with their existing Subversion repositories. These are the notes I give them (along with training and coaching) to get them started.

For additional help, see the Git and Subversion section of Pro Git.

General Safety Rules

These rules will help keep you out of trouble until you learn the subtleties of working with git and Subversion:

  1. Make changes ONLY on feature branches.
  2. Merge your features into master ONLY immediately before committing to SVN.

Working With The SVN Repository

You will do most of your work in a local git repository, using normal git commands.

You will use the git svn command only to do three things:

  1. Create a local git repository from an SVN repository (git svn clone).
  2. Update your local repository with changes from the SVN repository (git svn rebase).
  3. Commit your changes to the SVN repository (git svn dcommit).

Create A Git Repository From An SVN Repository

To create a local git repository from a remote SVN repository:

git svn clone url-to-svn-repo my-git-repo-name

Note that this can take a very long time if the SVN repository has a large history.

Working With Feature Branches

You will do your work in feature branches in your local git repository. A common approach is to create a feature branch for each feature that you are working on.

See the git documentation for details about how to work with branches.

Create A Feature Branch

  1. Choose the starting point. Decide whether to base your new feature branch on the master branch or on another feature branch.
  2. Update the starting point branch (optional). If you want your new feature branch to include the latest changes from SVN, update the existing branch first (see Update The Master Branch below).
  3. Check out the starting point branch.
  4. Create the feature branch. There are two ways to do this. See below.

    To create a feature branch and check it out:

    git checkout -b my-feature

    To create a feature branch without checking it out:

    git branch my-feature

Committing Your Feature

To commit your feature into the SVN repository:

  1. Update the master branch with the changes from the SVN repository.
  2. Update the feature branch with the changes from the up-to-date master.
  3. Merge the feature into the master branch. You may need to resolve conflicts.
  4. Commit the feature to the SVN repository.

Each step is described in its own section below.

The complete sequence looks like this:

# Update master...
git checkout master
git svn rebase

# Update the feature branch
git checkout my-feature
git rebase master
# may need to resolve conflicts here

# Merge the feature into master
git checkout master
git merge my-feature

# Commit the feature from master
git svn dcommit

Update The Master Branch

To get the latest changes from the SVN repository into your master branch:

git checkout master
git svn rebase

You may wish to update your master branch often. That way, every time you create a feature branch from the master branch, the initial feature branch is up-to-date with the SVN repository.

Update The Feature Branch

You will be getting the new changes from the master branch. So before you do this, you must update the master branch.


git checkout my-feature
git rebase master

You may need to resolve conflicts.

I often update my feature branch, even if I am not going to commit soon. Keeping my feature branch up to date makes the final merge easier when I am ready to commit.

Resolve Conflicts

Whenever you rebase your feature branch on top of master, there may be conflicts between your feature branch and the new changes in master.

If there are conflicts, git will interrupt the rebase operation and alert you. To complete the rebase, you must first resolve the conflicts.

To resolve the conflicts, either edit the conflicting files in an editor, or use a mergetool:

git mergetool

Once you have resolved the conflicts, continue the rebase process:

git add .
git rebase --continue

Merge The Feature Into The Master Branch

Do this ONLY immediately before committing.

Before you do this, you must update the feature branch with changes from SVN.

Once your feature branch is up to date:

git checkout master
git merge my-feature
# OR git merge --squash my-feature

Commit The Feature To The SVN Repository

You will be committing from the master branch. So before you do this, you must merge your feature into the master branch.

Once your feature is merged into the master branch:

git checkout master
git svn dcommit

Testing and team effectiveness

My model of effectiveness focuses on three questions:

  • What results do we want to create?
  • What results are we observing?
  • What can we do to create the results we want?

Often I’m focused on a single person, the person I’m coaching. When I’m working with a team, another question becomes important: What are the similarities and differences among different people’s answers to these questions?

When testers test, they observe things that others have not yet observed. So different people now have different answers to the second question, about observations.

At the risk of floating away into the metaverse, we can now ask some new questions:

  • What do different people’s responses to test observations tell you about their answers to the three questions?
  • What do the responses tell you about the differences and similarities in the answers?
  • What are the implications of these similarities and differences for the team?

Standardize practices to aid moving people around?

Often as I’m working with an organization, managers and executives want to standardize new practices even before they have tried them. When I ask what standardizing will do for them, they say, “It will allow us to move people easily from one project to another.”

This puzzles me, for three reasons.

First, I haven’t seen organizations move people from project to project very often. Even if standardizing helps move people around, you won’t get that benefit if you don’t actually move people around.

Second, I don’t know whether “standardizing practices” helps move people from project to project. I’m not saying it doesn’t help. I’m saying that I don’t know.

Third, I’m not sure whether “standardizing practices” actually standardizes the practices in any meaningful way.

Given all of this, I suspect that “allowing us to move people around” is not the real reason for standardizing. If I’m right about that, what might the real reasons be?

Page Object Puzzle

Note: I originally posted this on Google+. There are lots of interesting comments there.

Now that I have a few months of experience using page objects in my automated tests, I have a puzzle.

If I understand the page object pattern correctly, page objects have two kinds of methods: queries and commands. A query returns the queried value (of course). Every command returns a page object, so that test ideas can be expressed using method chaining. If the command leaves the browser displaying the same page, the command returns the current page object. So far, all of this makes sense to me.

My puzzle is about what happens if the command causes the browser to display a new page. In that case, the command returns a new page object that represents the newly displayed page. The advantage of this is that it allows reasonably expressive method chaining. The disadvantage (it seems to me) is that it requires each page object to know about other pages. That is, it makes each page object class aware of every other page object class that might be reached via its commands. And if the result of a command depends on prior existing conditions, or on validation of inputs, then each page object command (in order to return the right kind of page object) must account for all of the different possible results.

This seems to me to create an unwieldy tangle of dependencies from one page object class to another. How do other people deal with this entanglement?

I’m toying with the idea of adjusting the usual page object pattern:

  1. A query returns the queried value.
  2. If a command always leaves the browser displaying the same page, the command returns the current page object.
  3. If a command might lead to a different page, the command returns void, and the caller can figure out what page object to create based on its knowledge of what it’s testing.
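A sketch of what that adjusted pattern might look like (LoginPage and its methods are hypothetical examples, and the browser interaction is elided):

```java
// A hypothetical page object following the adjusted pattern.
class LoginPage {
    private String username = "";

    // 1. A query returns the queried value.
    String username() {
        return username;
    }

    // 2. A command that always leaves the browser on the same page
    //    returns the current page object, so calls can be chained.
    LoginPage enterUsername(String name) {
        this.username = name;
        return this;
    }

    // 3. A command that might lead to a different page returns void.
    //    The caller, which knows what it is testing, decides which
    //    page object to create for the resulting page.
    void submit() {
        // ...drive the browser's submit action here...
    }
}
```

With this shape, LoginPage depends on no other page object class; the knowledge of which page follows a submit lives in the test that exercises it.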

Frank(enstein), Victor, Shelley, and Igor: Tools for Testing iOS Applications

A client wants to write automated tests for their iOS app. I did a little research to find existing tools, and decided that I liked Frank. Frank embeds a small HTTP server (the Frank server) into an iOS application that runs in a simulator. You can write tests that interact with views in an app by sending commands to the Frank server.

Another element of Frank is an API for writing tests using Cucumber, a simple, popular Ruby-based testing framework.

One thing didn’t fit: My client’s test automators know Java, but they don’t know Ruby or Cucumber. So I wrote a Java driver (front end) for Frank, which I called Victor. The Victor API is a DSL similar to ones I’ve created in other situations, such as when I wrote a Selenium-based driver for testing Web applications.

I soon discovered that Frank development was at a point of transition. The existing Frank server relied on a third-party library called UIQuery to identify the views to interact with. But UIQuery development had gone stale, so Pete Hodgson (the awesome developer behind Frank) was in the process of writing a new query engine called Shelley to replace UIQuery.

I was a little nervous about inviting my clients to begin using a tool that was in the midst of such a transition. So I started writing a query engine of my own, called Igor. Now, my own query engine would be in transition, too, but at least my clients and I would be in control of it.

As I chatted about Igor on the Frank mailing list, Pete nudged me to consider creating a query syntax that looked something like CSS selectors, to allow people to apply their existing CSS knowledge.

I’ve spent much of the last few days designing the Igor syntax. I also implemented a few basic view selectors. Finally, I figured out how to convince Frank to delegate its queries to Igor, a process that is becoming easier as Pete adjusts Frank to allow a variety of query engines.

I’m hoping to have a few more of the most useful selectors implemented before I return to my client next Monday. If you want to take a peek, or follow along, or give feedback, here are links to the various technologies:

  • Frank: Pete Hodgson’s iOS driver.
  • Victor: My Java front end for Frank.
  • Igor: My query engine for Frank.
  • Igor syntax based on CSS.

Towers Episode 1: The Reclining Ribcage

As we begin this episode, I’ve just created a fresh repository, and I’m ready to start developing my Towers game.

The Walking Skeleton

I pick what I think will be a good “walking skeleton” feature, a feature chosen specifically to quickly conjure much of the application structure into existence. I choose this feature: On launch, Towers displays an 8 block x 8 block “city” of black and white towers, each one floor high, arranged in a checkerboard pattern.

With this feature in mind, I write the first acceptance test.

The City is Eight Blocks Square

Acceptance test: The city is eight blocks square. I try to make the test very expressive, using a “fluent” style of assertions. To support the test, I write a CityFixture class to access information about the city through the GUI, and a CityAssertion class to build the fluent assertion DSL. (This may seem overly complex to you. Hold that thought.)
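To give a flavor of the fluent style, here is a guess at the shape of such an assertion class (the names and details are my invention, with a plain City value standing in for what CityFixture reads through the GUI, and ignoring the FEST machinery underneath):

```java
// A plain value standing in for what the fixture reads from the GUI.
class City {
    private final int width;
    private final int height;

    City(int width, int height) {
        this.width = width;
        this.height = height;
    }

    int width() { return width; }
    int height() { return height; }
}

// A guess at the shape of a fluent assertion class.
class CityAssertion {
    private final City city;

    CityAssertion(City city) {
        this.city = city;
    }

    static CityAssertion assertThatCity(City city) {
        return new CityAssertion(city);
    }

    // Each assertion returns this, so assertions chain:
    // assertThatCity(city).isBlocksWide(8).isBlocksHigh(8);
    CityAssertion isBlocksWide(int expected) {
        if (city.width() != expected) {
            throw new AssertionError(
                "Expected city width " + expected + " but was " + city.width());
        }
        return this;
    }

    CityAssertion isBlocksHigh(int expected) {
        if (city.height() != expected) {
            throw new AssertionError(
                "Expected city height " + expected + " but was " + city.height());
        }
        return this;
    }
}
```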

For assertions I use the FEST Fluent Assertions Module. I like the fluent style assertions. In particular, I like being able to type a dot after assertThat(someObject), and having Eclipse pop up a list of things I can assert about the object.

To sense and wiggle stuff through the GUI, I use the FEST Swing Module.

Implementation. To pass this test, I write code directly into Towers, the application’s main class.

At this point, Towers has a display (an empty JFrame), which is prepared to display 64 somethings in an 8 x 8 grid. But it doesn’t yet display any somethings. It’s an 8 x 8 ghost city.

Each City Block Has a One Floor Tower

Acceptance test: Each city block has a one floor tower. To extend the test DSL, I add TowerFixture and TowerAssertion classes. (This DSL stuff may now seem even more complex to you. Hold that thought.)

Implementation. I make the main class display a JButton to represent a tower at each grid location. Grid location seems like a coherent, meaningful concept, so I test-drive a class to represent that. I quickly find that I care only about the location’s name, so I name the class Address. I name each button based on its address, to allow the test fixtures to look up each “tower” by name. Each button displays “1” to represent the height of the tower.
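The Address class might look roughly like this (a guess at the details; as noted above, only the name matters):

```java
// A guess at the Address value class: a grid location that knows its name.
final class Address {
    private final int row;
    private final int column;

    Address(int row, int column) {
        this.row = row;
        this.column = column;
    }

    // The name used both to label the button and to let test
    // fixtures look up each "tower" by name.
    String name() {
        return row + "," + column;
    }
}
```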

At this point, there is no model code beyond Address. A tower is represented only by the text displayed in a button.

Speculation. To pass this test I could have used a simple JLabel to display the tower height. My choice of the more complex JButton was entirely speculative. That was very naughty.

More speculation. I notice that there’s an orphan Tower class sitting all alone in the corner of the repo. Though I’m trying to code outside-in here, my natural bias toward coding inside-out must have asserted itself. The class isn’t actually used in my app, which suggests that I wrote it on speculation. It’s interesting that I would do that. I wonder: Is all inside-out coding speculative?

Blind speculation. I didn’t commit a test for the unused Tower class. I'm sure that means I didn't write one. So I must have coded it not only speculatively, but blindly. Eeep!

Repospectives. As I step through the commit chain, I see my sins laid out before me. If Saint Peter uses git, I'm fucked. But I wonder: What else might we learn by stepping through our own git repositories?

Towers Alternate Colors

Acceptance test: Towers alternate colors. I express this by checking that towers alternate colors down the first column, and that towers alternate colors along each row.

Simplifying the fixtures. As I write the third test and the fixture code to support it, I become weary of maintaining and extending my lovely DSL. I abandon it in favor of writing indicative mood statements in simpler <object>.<condition>() format. Now I need only one fixture class per domain concept, rather than two (a class to access items through the GUI and class to make assertions).

Though “the city has width eight” doesn’t trip off the tongue as nicely as “the city is eight blocks wide,” it conveys nearly the same information. The only loss is the idea of “blocks.” I’m okay with that.

Implementation. I make the main class render each button with foreground and background colors based on the button’s location in the grid.
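One straightforward way to alternate colors by grid location is to key the color to the parity of the location (a sketch, not necessarily the code I wrote):

```java
class Checkerboard {
    // Adjacent locations along any row or column differ in the parity
    // of row + column, so colors alternate in a checkerboard pattern.
    static boolean isBlack(int row, int column) {
        return (row + column) % 2 == 0;
    }
}
```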

Adding features to the main class. At this early point in the project I’m implementing features mostly by writing code directly in the main class. I’m nervous about this, but I’m doing it on purpose, deliberately following the style Steve Freeman and Nat Pryce illustrate in Growing Object-Oriented Software, Guided by Tests (GOOS). For early features, they write the code in the main class. Later, as the code begins to take shape, they notice the coherent concepts emerging in the code, and extract classes to represent those.

When I first read that, I was surprised. My style is to ask, before writing even the smallest chunk of code: To what class will I allocate this responsibility? Then I test-drive that class. Steve and Nat’s style felt wrong somehow. I made note of that.

Around the same time that I first read GOOS, I watched Kent Beck’s illuminating video series about TDD. In those videos, Kent often commented that he was leaving some slight awkwardness in the code until a better idea became apparent (I’m paraphrasing from memory here, perhaps egregiously).

So once again a pattern emerges: Someone really smart says something clearly nonsensical. I think, “That makes no sense,” and let it go. Then someone else really smart says something similar. I think, “How the hell does that work?” Eventually, when enough really smart people have said the same stupid thing, I think, “That makes no sense. I’d better try it.” Then I try it, and it makes sense. Immediately thereafter I wonder why it isn’t obvious to everyone.

As I’ve mentioned, my bias is to code inside-out, writing model code first, then connecting it to the outside world. Now aware of my bias, I chose to work against it. I wrote the code directly into the main class. I was still nervous about that. But I’m still alive, so I must have survived it.

Commit. The first feature finally fully functions. As I commit this code, a tower is still represented only by the color, text, and name of a button. The only model class is Address, a value class with a name tag.

This will change in the next episode when, within seconds of committing this code, I somewhat appease my I MUST ALWAYS ADD CODE IN THE RIGHT PLACE fetish by immediately extracting a new class.

That Skeleton Won’t Walk

There’s a problem here. Though I’ve implemented the entire walking skeleton feature, and the implementation brings major chunks of the user interface into existence, the feature does not yet require the system to respond to any external events. The system never has to do anything or decide anything.

In retrospect, I chose a poor candidate for a walking skeleton. For one thing, it doesn’t walk. It just sort of reclines. For another, it’s not a whole skeleton. At best it's a ribcage. A reclining ribcage. So there’s a lesson for me to learn: Choose a walking skeleton that responds to external events.

As I wrote this blog post, I searched the web for a definition of “walking skeleton.” Alistair Cockburn defines a walking skeleton as “a tiny implementation of the system that performs a small end-to-end function.” If I’d looked that up earlier (or if I'd remembered the definition from when I first saw it years ago), I would likely have chosen a different feature to animate this corpse. Likely something about moving one tower onto another.

What JUnit Rules are Good For

In my previous post I described how to write JUnit rules. In this post, I’ll describe some of the things rules are good for, with examples.

Before JUnit runs a test, it knows something about the test. When the test ends, JUnit knows something about the results. Sometimes we want test code to have access to that knowledge, or even to alter details about a test, the results, or the context in which the test runs.

Older versions of JUnit offered no easy way for test code to do those things. Neither test class constructors nor test methods were allowed to take parameters. Each test ran in a hermetically sealed capsule. There was no simple API to access or modify JUnit’s internal knowledge about a test.

That’s what rules are for. JUnit rules allow us to:

  • Add information to test results.
  • Access information about a test before it is run.
  • Modify a test before running it.
  • Modify test results.
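Stripped of JUnit’s specific types, the pattern behind every one of these uses is the same: wrap the test in a decorator that gets control before and after the test runs. Here is a self-contained sketch of that shape (stand-in types of my own, not JUnit’s API):

```java
// Self-contained sketch of the rule pattern. JUnit's real MethodRule
// works on its own Statement type; this stand-in shows only the shape.
public class RuleSketch {
    // A deferred test action, like JUnit's Statement.
    interface Statement {
        void evaluate();
    }

    // "Applying a rule" returns a new statement that wraps the base one,
    // gaining a hook before and after the test runs.
    public static Statement apply(Statement base, StringBuilder log) {
        return () -> {
            log.append("before;");  // access or modify things before the test
            base.evaluate();        // run the test itself
            log.append("after;");   // inspect or modify results afterward
        };
    }
}
```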

Adding Information to Test Results

A few months ago I was working with testers at a client, automating web application tests using Selenium RC. The tests ran under Hudson on our continuous integration servers. The Selenium server ran in the cloud using Sauce Labs’ Sauce On Demand service.

Information about each test run was available in two places. First, Hudson created a web page for each test run, including a printout of any exception raised by the test. Second, Sauce On Demand created a job for each test run, and a web page for each job, displaying a log of every Selenium command and its result, links to screenshots taken just before each Selenium command that would change the state of the browser, and a video of the entire session.

To make it easier to find information about test failures, we wanted a way to link from the Hudson test report to the Sauce Labs job report. Determining the URL to the Sauce Labs job report was easy; our test framework knew how to do that. But we didn’t know how to insert the URL into the Hudson test report. Given that we needed to display the URL only when a test failed, we decided that the easiest solution was to somehow catch every exception, and wrap it in a new exception whose message included the URL. When Hudson printed the exception, the printout would display the URL.

The challenge was: How to catch every exception? For the exceptions thrown directly by our own test code, wrapping them was easy. But what to do about exceptions thrown elsewhere, such as by Selenium? Sure, we could wrap every Selenium call in a try/catch block, but that seemed troublesome.

One possibility seemed promising: Given that every exception eventually passed through JUnit, perhaps we could somehow hook into JUnit, and wrap the exception there. But how to hook into JUnit?

I tweeted about the problem. Kent Beck replied, telling me that my problem was just the thing rules were designed to help with. He also sent me some nice examples of how to write a rule and (of course) how to test it.

With Kent’s helpful advice and examples in hand, I was able to quickly write a rule that did exactly what my client and I needed:

public class ExceptionWrapperRule implements MethodRule {
    public Statement apply(final Statement base, FrameworkMethod method, Object target) {
        return new Statement() {
            public void evaluate() throws Throwable {
                try {
                    base.evaluate(); // Run the test itself.
                } catch (Throwable cause) {
                    throw new SessionLinkReporter(cause);
                }
            }
        };
    }
}
In our test code, at the point where we establish the Selenium session, we obtain the session ID from Selenium and stash it. If the test fails, the ExceptionWrapperRule’s statement catches the exception and wraps it in a SessionLinkReporter (an Exception subclass we defined). The SessionLinkReporter constructor retrieves the stashed session ID, builds the Sauce On Demand job report link, and stuffs the link into its exception message.
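A hedged sketch of what such a reporter might look like (the class body and the Sauce job URL format are my reconstructions, and here the session ID arrives as a constructor argument rather than being retrieved from a stash):

```java
// Hypothetical reconstruction of SessionLinkReporter. Assumption: a
// Sauce On Demand job report lives at a URL built from the session ID.
public class SessionLinkReporter extends RuntimeException {
    public SessionLinkReporter(Throwable cause, String sessionId) {
        super("Sauce On Demand job: https://saucelabs.com/jobs/" + sessionId, cause);
    }
}
```

When Hudson prints the wrapped exception, the message (and therefore the link) appears at the top of the stack trace.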

So this is an example of the first major use of rules: To modify test results by adding information obtained during the test.

Accessing Information About a Test Before It Is Run

At that same client, we learned of a new Sauce On Demand feature that we wanted to use: SauceTV. SauceTV displays a video of the browser while your test is running. As we wrote new tests, changed the features of the web application, and accessed new features of Sauce On Demand, we found ourselves often wanting to watch tests as they executed.

Sauce Labs provides a web page that displays two lists of open jobs for your account: Your jobs that are currently running, and your jobs that are queued to run. To access the video for a test in progress, you click the name of the appropriate job.

By default, each job’s name is its Selenium session ID, a long string of hexadecimal digits. Even if you know the session ID, it is difficult to quickly distinguish one long hexadecimal string from another. To help with this, Sauce Labs allows you to assign each job a descriptive name to display in the job lists. You do this by sending a particular command to its Selenium server.

Given that we were running one test per job, the obvious choice for the job name was the name of the test. But how to access the name of the test? Certainly we could add a line like this to each test:

@Test public void myTest() {
    sendJobName("myTest"); // A hypothetical helper, repeated by hand in every test.
    ... // The test itself.
}

But that’s crazy talk. Much better would be a way to detect the test name at runtime, in a single place in the code. How to detect the test name? Rules to the rescue. We wrote this rule:

public class TestNameSniffer implements MethodRule {
    private String testName;

    public Statement apply(Statement statement, FrameworkMethod method, Object target) {
        String className = method.getMethod().getDeclaringClass().getSimpleName();
        String methodName = method.getName();
        testName = String.format("%s.%s()", className, methodName);
        return statement;
    }

    public String testName() {
        return testName;
    }
}

The rule stashes the test name, and other code later sends the name to Selenium RC. Now Sauce Labs labels each job with the name of the test, which is much easier to recognize than a string of hexadecimal digits.
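For completeness, a hedged sketch of the handoff (the "sauce:job-name=" context convention is my recollection of the Sauce On Demand API of that era, not something quoted from our code):

```java
// Hypothetical helper that formats the sniffed test name as the
// context command Sauce On Demand interprets as a job name.
public class JobNamer {
    public static String jobNameContext(String testName) {
        return "sauce:job-name=" + testName;
    }
}
```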

Modifying a Test Before Running It

In a Twitter conversation about what rules are good for, Nat Pryce said “JMock uses a rule to perform the verify step after the test but before tear-down.” JMock is a framework that makes it easy to create controllable collaborators for the code you’re testing, and to observe whether your code interacts with them in the ways you intend.

To use JMock in your test code you declare a field that is an instance of the JUnitRuleMockery class:

public class MyTestClass {
    @Rule public JUnitRuleMockery context = new JUnitRuleMockery();
    @Mock public ACollaboratorClass aCollaborator;

    @Test public void myTest() {
        ... // Declare expectations for aCollaborator.
        ... // Invoke the method being tested.
    }
}

JUnitRuleMockery is a rule whose apply() looks like this:

public Statement apply(final Statement base, FrameworkMethod method, final Object target) {
    return new Statement() {
        public void evaluate() throws Throwable {
            prepare(target);     // Initialize the target's mock fields.
            base.evaluate();     // Run the test method.
            assertIsSatisfied(); // Verify the declared expectations.
        }
        ... // Declarations of prepare() and assertIsSatisfied().
    };
}

Before the statement executes the test, it first calls prepare(), which initializes each mock collaborator field declared by the test class. For the example test class, prepare() initializes aCollaborator with a mock instance of ACollaboratorClass.

Initializing fields in the target is an example of a second use for rules: Modifying a test before running it.

Modifying Test Results

The JMock code also demonstrates another use for JUnit rules: Modifying test results. After the test method runs, the assertIsSatisfied() method evaluates whether the expectations expressed by the test method are satisfied and throws an exception if they are not. Even if no exception was thrown during the test method itself, the rule might throw an exception, thereby changing a test from passed to failed.
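The effect is easy to see in a self-contained sketch (stand-in types again, not JMock’s API): even when the test body completes normally, the verification step the rule runs afterward can still fail the test.

```java
// Sketch: a rule-like wrapper runs a verify step after the test body,
// so unmet expectations fail the test even if the body itself passed.
public class VerifyAfterTest {
    interface Body {
        void run();
    }

    public static String run(Body test, Body verify) {
        try {
            test.run();   // The test body.
            verify.run(); // The rule's verification step.
            return "passed";
        } catch (AssertionError e) {
            return "failed: " + e.getMessage();
        }
    }
}
```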