Testing and team effectiveness

My model of effectiveness focuses on three questions:

  • What results do we want to create?
  • What results are we observing?
  • What can we do to create the results we want?

Often I’m focused on a single person, the person I’m coaching. When I’m working with a team, another question becomes important: What are the similarities and differences among different people’s answers to these questions?

When testers test, they observe things that others have not yet observed. So different people now have different answers to the second question, about observations.

At the risk of floating away into the metaverse, we can now ask some new questions:

  • What do different people’s responses to test observations tell you about their answers to the three questions?
  • What do the responses tell you about the differences and similarities in the answers?
  • What are the implications of these similarities and differences for the team?

Standardize practices to aid moving people around?

Often as I’m working with an organization, managers and executives want to standardize new practices even before they have tried them. When I ask what standardizing will do for them, they say, “It will allow us to move people easily from one project to another.”

This puzzles me, for three reasons.

First, I haven’t seen organizations move people from project to project very often. Even if standardizing helps move people around, you won’t get that benefit if you don’t actually move people around.

Second, I don’t know whether “standardizing practices” helps move people from project to project. I’m not saying it doesn’t help. I’m saying that I don’t know.

Third, I’m not sure whether “standardizing practices” actually standardizes the practices in any meaningful way.

Given all of this, I suspect that “allowing us to move people around” is not the real reason for standardizing. If I’m right about that, what might the real reasons be?

Page Object Puzzle

Note: I originally posted this on Google+. There are lots of interesting comments there.

Now that I have a few months of experience using page objects in my automated tests, I have a puzzle.

If I understand the page object pattern correctly, page objects have two kinds of methods: queries and commands. A query returns the queried value (of course). Every command returns a page object, so that test ideas can be expressed using method chaining. If the command leaves the browser displaying the same page, the command returns the current page object. So far, all of this makes sense to me.

My puzzle is about what happens if the command causes the browser to display a new page. In that case, the command returns a new page object that represents the newly displayed page. The advantage of this is that it allows reasonably expressive method chaining. The disadvantage (it seems to me) is that it requires each page object to know about other pages. That is, it makes each page object class aware of every other page object class that might be reached via its commands. And if the result of a command depends on preexisting conditions, or on validation of inputs, then each page object command (in order to return the right kind of page object) must account for all of the different possible results.

This seems to me to create an unwieldy tangle of dependencies from one page object class to another. How do other people deal with this entanglement?
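
To make the tangle concrete, here's a minimal sketch of the conventional pattern using Selenium WebDriver. LoginPage, HomePage, and the element IDs are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A hypothetical login page in the conventional pattern.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) { this.driver = driver; }

    // Query: returns the queried value.
    public String errorMessage() {
        return driver.findElement(By.id("error")).getText();
    }

    // Command that stays on this page: returns this page object for chaining.
    public LoginPage typeUserName(String name) {
        driver.findElement(By.id("username")).sendKeys(name);
        return this;
    }

    // Command that navigates: must know about the page object for every
    // page it might land on, and must decide which one to return.
    public HomePage submitValidCredentials() {
        driver.findElement(By.id("submit")).click();
        return new HomePage(driver); // LoginPage now depends on HomePage.
    }
}

The chaining payoff is tests like new LoginPage(driver).typeUserName("ada").submitValidCredentials(), but every navigating command adds another dependency.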

I'm toying with the idea of adjusting the usual page object pattern:

  1. A query returns the queried value.
  2. If a command always leaves the browser displaying the same page, the command returns the current page object.
  3. If a command might lead to a different page, the command returns void, and the caller can figure out what page object to create based on its knowledge of what it's testing.
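
Here's what option 3 might look like in practice, again with hypothetical names. The navigating command returns void, and the test creates whatever page object it expects next:

// In the adjusted pattern, a command that might navigate returns void.
public void submit() {
    driver.findElement(By.id("submit")).click();
}

// The caller knows what it's testing, so it chooses the next page object.
LoginPage login = new LoginPage(driver);
login.typeUserName("ada");
login.submit();
LoginPage redisplayed = new LoginPage(driver); // We expect the login page again.
assertEquals("Invalid credentials", redisplayed.errorMessage()); // Plain JUnit assertion.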

Frank(enstein), Victor, Shelley, and Igor: Tools for Testing iOS Applications

A client wants to write automated tests for their iOS app. I did a little research to find existing tools, and decided that I liked Frank. Frank embeds a small HTTP server (the Frank server) into an iOS application that runs in a simulator. You can write tests that interact with views in an app by sending commands to the Frank server.

Another element of Frank is an API for writing tests using Cucumber, a simple, popular Ruby-based testing framework.

One thing didn't fit: My client's test automators know Java, but they don't know Ruby or Cucumber. So I wrote a Java driver (front end) for Frank, which I called Victor. The Victor API is a DSL similar to ones I've created in other situations, such as when I wrote a Selenium-based driver for testing Web applications.

I soon discovered that Frank development was at a point of transition. The existing Frank server relied on a third-party library called UIQuery to identify the views to interact with. But UIQuery development had gone stale, so Pete Hodgson (the awesome developer behind Frank) was in the process of writing a new query engine called Shelley to replace UIQuery.

I was a little nervous about inviting my clients to begin using a tool that was in the midst of such a transition. So I started writing a query engine of my own, called Igor. Now, my own query engine would be in transition, too, but at least my clients and I would be in control of it.

As I chatted about Igor on the Frank mailing list, Pete nudged me to consider creating a query syntax that looked something like CSS selectors, to allow people to apply their existing CSS knowledge.

I've spent much of the last few days designing the Igor syntax. I also implemented a few basic view selectors. Finally, I figured out how to convince Frank to delegate its queries to Igor, a process that is becoming easier as Pete adjusts Frank to allow a variety of query engines.

I'm hoping to have a few more of the most useful selectors implemented before I return to my client next Monday. If you want to take a peek, or follow along, or give feedback, here are links to the various technologies:

  • Frank: Pete Hodgson's iOS driver.
  • Victor: My Java front end for Frank.
  • Igor: My query engine for Frank.
  • Igor syntax based on CSS.

Towers Episode 1: The Reclining Ribcage

As we begin this episode, I’ve just created a fresh repository, and I’m ready to start developing my Towers game.

The Walking Skeleton

I pick what I think will be a good “walking skeleton” feature, a feature chosen specifically to quickly conjure much of the application structure into existence. I choose this feature: On launch, Towers displays an 8 block x 8 block “city” of black and white towers, each one floor high, arranged in a checkerboard pattern.

With this feature in mind, I write the first acceptance test.

The City is Eight Blocks Square

Acceptance test: The city is eight blocks square. I try to make the test very expressive, using a “fluent” style of assertions. To support the test, I write a CityFixture class to access information about the city through the GUI, and a CityAssertion class to build the fluent assertion DSL. (This may seem overly complex to you. Hold that thought.)

For assertions I use the FEST Fluent Assertions Module. I like the fluent style of assertions. In particular, I like being able to type a dot after assertThat(someObject) and having Eclipse pop up a list of things I can assert about the object.

To sense and wiggle stuff through the GUI, I use the FEST Swing Module.
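
Put together, the first test reads something like this. This is a reconstruction: the fixture's constructor and the assertion method names are illustrative, not the actual code.

@Test
public void theCityIsEightBlocksSquare() {
    CityFixture city = new CityFixture(frame); // Reads the city through the GUI.
    CityAssertion.assertThat(city).isEightBlocksWide().isEightBlocksHigh();
}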

Implementation. To pass this test, I write code directly into Towers, the application’s main class.

At this point, Towers has a display (an empty JFrame), which is prepared to display 64 somethings in an 8 x 8 grid. But it doesn’t yet display any somethings. It’s an 8 x 8 ghost city.

Each City Block Has a One Floor Tower

Acceptance test: Each city block has a one floor tower. To extend the test DSL, I add TowerFixture and TowerAssertion classes. (This DSL stuff may now seem even more complex to you. Hold that thought.)

Implementation. I make the main class display a JButton to represent a tower at each grid location. Grid location seems like a coherent, meaningful concept, so I test-drive a class to represent that. I quickly find that I care only about the location’s name, so I name the class Address. I name each button based on its address, to allow the test fixtures to look up each “tower” by name. Each button displays “1” to represent the height of the tower.

At this point, there is no model code beyond Address. A tower is represented only by the text displayed in a button.
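
The main class does something like this (a sketch: Address is the real class, but its API here, and the rest of the names, are my guesses):

// Lay out an 8 x 8 grid of buttons, one "tower" per city block.
JPanel city = new JPanel(new GridLayout(8, 8));
for (int row = 0; row < 8; row++) {
    for (int column = 0; column < 8; column++) {
        Address address = new Address(row, column);
        JButton tower = new JButton("1"); // "1" is the tower's height.
        tower.setName(address.name()); // Lets fixtures find each "tower" by name.
        city.add(tower);
    }
}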

Speculation. To pass this test I could have used a simple JLabel to display the tower height. My choice of the more complex JButton was entirely speculative. That was very naughty.

More speculation. I notice that there’s an orphan Tower class sitting all alone in the corner of the repo. Though I’m trying to code outside-in here, my natural bias toward coding inside-out must have asserted itself. The class isn’t actually used in my app, which suggests that I wrote it on speculation. It’s interesting that I would do that. I wonder: Is all inside-out coding speculative?

Blind speculation. I didn’t commit a test for the unused Tower class. I'm sure that means I didn't write one. So I must have coded it not only speculatively, but blindly. Eeep!

Repospectives. As I step through the commit chain, I see my sins laid out before me. If Saint Peter uses git, I'm fucked. But I wonder: What else might we learn by stepping through our own git repositories?

Towers Alternate Colors

Acceptance Test: Towers alternate colors. I express this by checking that towers alternate colors down the first column, and that towers alternate colors along each row.

Simplifying the fixtures. As I write the third test and the fixture code to support it, I become weary of maintaining and extending my lovely DSL. I abandon it in favor of writing indicative mood statements in simpler <object>.<condition>() format. Now I need only one fixture class per domain concept, rather than two (a class to access items through the GUI and class to make assertions).

Though “the city has width eight” doesn’t trip off the tongue as nicely as “the city is eight blocks wide,” it conveys nearly the same information. The only loss is the idea of “blocks.” I’m okay with that.
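
The two styles, side by side (reconstructed, so the method names are guesses):

// The fluent DSL I abandoned:
CityAssertion.assertThat(city).isEightBlocksWide();
// The simpler <object>.<condition>() form, checked with a plain assertion:
assertThat(city.hasWidth(8)).isTrue();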

Implementation. I make the main class render each button with foreground and background colors based on the button’s location in the grid.
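
The color logic amounts to the usual checkerboard parity test. Roughly:

// Black towers on "even" blocks, white on "odd," with opposite
// foregrounds so the tower height stays readable.
boolean evenBlock = (row + column) % 2 == 0;
tower.setBackground(evenBlock ? Color.BLACK : Color.WHITE);
tower.setForeground(evenBlock ? Color.WHITE : Color.BLACK);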

Adding features to the main class. At this early point in the project I’m implementing features mostly by writing code directly in the main class. I’m nervous about this, but I’m doing it on purpose, deliberately following the style Steve Freeman and Nat Pryce illustrate in Growing Object-Oriented Software, Guided by Tests (GOOS). For early features, they write the code in the main class. Later, as the code begins to take shape, they notice the coherent concepts emerging in the code, and extract classes to represent those.

When I first read that, I was surprised. My style is to ask, before writing even the smallest chunk of code: To what class will I allocate this responsibility? Then I test-drive that class. Steve and Nat’s style felt wrong somehow. I made note of that.

Around the same time that I first read GOOS, I watched Kent Beck’s illuminating video series about TDD. In those videos, Kent often commented that he was leaving some slight awkwardness in the code until a better idea became apparent (I’m paraphrasing from memory here, perhaps egregiously).

So once again a pattern emerges: Someone really smart says something clearly nonsensical. I think, “That makes no sense,” and let it go. Then someone else really smart says something similar. I think, “How the hell does that work?” Eventually, when enough really smart people have said the same stupid thing, I think, “That makes no sense. I’d better try it.” Then I try it, and it makes sense. Immediately thereafter I wonder why it isn’t obvious to everyone.

As I’ve mentioned, my bias is to code inside-out, writing model code first, then connecting it to the outside world. Now aware of my bias, I chose to work against it. I wrote the code directly into the main class. I was still nervous about that. But I’m still alive, so I must have survived it.

Commit. The first feature finally fully functions. As I commit this code, a tower is still represented only by the color, text, and name of a button. The only model class is Address, a value class with a name tag.

This will change in the next episode when, within seconds of committing this code, I somewhat appease my I MUST ALWAYS ADD CODE IN THE RIGHT PLACE fetish by immediately extracting a new class.

That Skeleton Won’t Walk

There’s a problem here. Though I’ve implemented the entire walking skeleton feature, and the implementation brings major chunks of the user interface into existence, the feature does not compel the system to respond to any events. The system never has to do anything or decide anything.

In retrospect, I chose a poor candidate for a walking skeleton. For one thing, it doesn’t walk. It just sort of reclines. For another, it’s not a whole skeleton. At best it's a ribcage. A reclining ribcage. So there’s a lesson for me to learn: Choose a walking skeleton that responds to external events.

As I wrote this blog post, I searched the web for a definition of “walking skeleton.” Alistair Cockburn defines a walking skeleton as “a tiny implementation of the system that performs a small end-to-end function.” If I’d looked that up earlier (or if I'd remembered the definition from when I first saw it years ago), I would likely have chosen a different feature to animate this corpse. Likely something about moving one tower onto another.

What JUnit Rules are Good For

In my previous post I described how to write JUnit rules. In this post, I'll describe some of the things rules are good for, with examples.

Before JUnit runs a test, it knows something about the test. When the test ends, JUnit knows something about the results. Sometimes we want test code to have access to that knowledge, or even to alter details about a test, the results, or the context in which the test runs.

Older versions of JUnit offered no easy way for test code to do those things. Neither test class constructors nor test methods were allowed to take parameters. Each test ran in a hermetically sealed capsule. There was no simple API to access or modify JUnit's internal knowledge about a test.

That's what rules are for. JUnit rules allow us to:

  • Add information to test results.
  • Access information about a test before it is run.
  • Modify a test before running it.
  • Modify test results.

Adding Information to Test Results

A few months ago I was working with testers at a client, automating web application tests using Selenium RC. The tests ran under Hudson on our continuous integration servers. The Selenium server ran in the cloud using Sauce Labs' Sauce On Demand service.

Information about each test run was available in two places. First, Hudson created a web page for each test run, including a printout of any exception raised by the test. Second, Sauce On Demand created a job for each test run, and a web page for each job. The job page displayed a log of every Selenium command and its result, links to screenshots taken just before each state-changing Selenium command, and a video of the entire session.

To make it easier to find information about test failures, we wanted a way to link from the Hudson test report to the Sauce Labs job report. Determining the URL to the Sauce Labs job report was easy; our test framework knew how to do that. But we didn't know how to insert the URL into the Hudson test report. Given that we needed to display the URL only when a test failed, we decided that the easiest solution was to somehow catch every exception, and wrap it in a new exception whose message included the URL. When Hudson printed the exception, the printout would display the URL.

The challenge was: How to catch every exception? For the exceptions thrown directly by our own test code, wrapping them was easy. But what to do about exceptions thrown elsewhere, such as by Selenium? Sure, we could wrap every Selenium call in a try/catch block, but that seemed troublesome.

One possibility seemed promising: Given that every exception eventually passed through JUnit, perhaps we could somehow hook into JUnit, and wrap the exception there. But how to hook into JUnit?

I tweeted about the problem. Kent Beck replied, telling me that my problem was just the thing rules were designed to help with. He also sent me some nice examples of how to write a rule and (of course) how to test it.

With Kent's helpful advice and examples in hand, I was able to quickly write a rule that did exactly what my client and I needed:

import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class ExceptionWrapperRule implements MethodRule {
    public Statement apply(final Statement base, FrameworkMethod method, Object target) {
        return new Statement() {
            public void evaluate() throws Throwable {
                try {
                    base.evaluate();
                } catch (Throwable cause) {
                    throw new SessionLinkReporter(cause);
                }
            }
        };
    }
}

In our test code, at the point where we establish the Selenium session, we obtain the session ID from Selenium and stash it. If a test fails, the ExceptionWrapperRule's statement catches the exception and wraps it in a SessionLinkReporter (an Exception subclass we defined). The SessionLinkReporter constructor retrieves the stashed session ID, builds the Sauce On Demand job report link, and stuffs it into its exception message.
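
The wrapper itself is small. It looks something like this, where SauceSession stands in for however you stash the session ID (a hypothetical helper, not a real API):

// Wraps a test failure so that its message carries the Sauce On Demand
// job report URL.
public class SessionLinkReporter extends RuntimeException {
    public SessionLinkReporter(Throwable cause) {
        super("Sauce OnDemand job: https://saucelabs.com/jobs/" + SauceSession.stashedId(), cause);
    }
}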

So this is an example of the first major use of rules: To modify test results by adding information obtained during the test.

Accessing Information About a Test Before It Is Run

At that same client, we learned of a new Sauce On Demand feature that we wanted to use: SauceTV. SauceTV displays a video of the browser while your test is running. As we wrote new tests, changed the features of the web application, and explored new features of Sauce On Demand, we often found ourselves wanting to watch tests as they executed.

Sauce Labs provides a web page that displays two lists of open jobs for your account: Your jobs that are currently running, and your jobs that are queued to run. To access the video for a test in progress, you click the name of the appropriate job.

By default, each job's name is its Selenium session ID, a long string of hexadecimal digits. Even if you know the session ID, it is difficult to quickly distinguish one long hexadecimal string from another. To help with this, Sauce Labs allows you to assign each job a descriptive name to display in the job lists. You do this by sending a particular command to its Selenium server.

Given that we were running one test per job, the obvious choice for the job name is the name of the test. But how to access the name of the test? Certainly we could add a line like this to each test:

@Test public void myTest() {
    sendSauceJobName("myTest");
}

But that's crazy talk. Much better would be a way to detect the test name at runtime, in a single place in the code. How to detect the test name? Rules to the rescue. We wrote this rule:

import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class TestNameSniffer implements MethodRule {
    private String testName;

    public Statement apply(Statement statement, FrameworkMethod method, Object target) {
        String className = method.getMethod().getDeclaringClass().getSimpleName();
        String methodName = method.getName();
        testName = String.format("%s.%s()", className, methodName);
        return statement;
    }

    public String testName() {
        return testName;
    }
}

The rule stashes the test name, and other code later sends the name to Selenium RC. Now Sauce Labs labels each job with the name of the test, which is much easier to recognize than hexadecimal strings.
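
Wiring it up looks something like this. The exact Sauce command string is from memory, so treat it as an assumption:

public class MySeleniumTest {
    @Rule
    public TestNameSniffer sniffer = new TestNameSniffer();

    private Selenium selenium; // Created during test setup, as usual for Selenium RC.

    @Before
    public void nameTheSauceJob() {
        // Sauce On Demand reads a job name sent as a Selenium context command.
        selenium.setContext("sauce:job-name=" + sniffer.testName());
    }
}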

Modifying a Test Before Running It

In a Twitter conversation about what rules are good for, Nat Pryce said "JMock uses a rule to perform the verify step after the test but before tear-down." JMock is a framework that makes it easy to create controllable collaborators for the code you're testing, and to observe whether your code interacts with them in the ways you intend.

To use JMock in your test code you declare a field that is an instance of the JUnitRuleMockery class:

public class MyTestClass {
    @Rule
    public JUnitRuleMockery context = new JUnitRuleMockery();

    @Mock
    public ACollaboratorClass aCollaborator;

    @Test
    public void myTest() {
        ... // Declare expectations for aCollaborator.
        ... // Invoke the method being tested.
    }
}

JUnitRuleMockery is a rule whose apply() looks like this:

@Override
public Statement apply(final Statement base, FrameworkMethod method, final Object target) {
    return new Statement() {
        @Override
        public void evaluate() throws Throwable {
            prepare(target);
            base.evaluate();
            assertIsSatisfied();
        }

        ... // Declarations of prepare() and assertIsSatisfied()
    };
}

Before the statement executes the test, it first calls prepare(), which initializes each mock collaborator field declared by the test class. For the example test class, prepare() initializes aCollaborator with a mock instance of ACollaboratorClass.
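
JMock's real code is more involved, but the essence of prepare() is reflection over the test instance's @Mock fields. A rough sketch (not JMock's actual implementation), imagined as a method of the mockery:

// Fill each public @Mock field on the test instance with a mock
// created by this Mockery.
// (Imports: java.lang.reflect.Field, org.jmock.auto.Mock.)
private void prepare(Object target) throws IllegalAccessException {
    for (Field field : target.getClass().getFields()) {
        if (field.isAnnotationPresent(Mock.class)) {
            field.set(target, mock(field.getType()));
        }
    }
}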

Initializing fields in the target is an example of a second use for rules: Modifying a test before running it.

Modifying Test Results

The JMock code also demonstrates another use for JUnit rules: Modifying test results. After the test method runs, the assertIsSatisfied() method evaluates whether the expectations expressed by the test method are satisfied and throws an exception if they are not. Even if no exception was thrown during the test method itself, the rule might throw an exception, thereby changing a test from passed to failed.

Using Rules to Influence JUnit Test Execution

JUnit rules allow you to write code to inspect a test before it is run, modify whether and how to run the test, and inspect and modify test results. I've used rules for several purposes:

  • Before running a test, send the name of the test to Sauce Labs to create captions for SauceTV.
  • Instruct Swing to run a test headless.
  • Insert the Selenium session ID into the exception message of a failed test.

Overview of Rules

Parts. The JUnit rule mechanism includes three main parts:

  • Statements, which run tests.
  • Rules, which choose which statements to use to run tests.
  • @Rule annotations, which tell JUnit which rules to apply to your test class.

Flow. To execute each test method, JUnit conceptually follows this flow:

  1. Create a default statement to run the test.
  2. Find all of the test class's rules by looking for public member fields annotated with @Rule.
  3. Call each rule's apply() method, telling it which test class and method are to be run, and what statement JUnit has gathered so far. apply() decides how to run the test, selects or creates the appropriate statement to run the test, and returns that statement to JUnit. JUnit then passes this statement to the next rule's apply() method, and so on.
  4. Run the test by calling the evaluate() method of the statement returned by the last rule.

I'll describe the flow in more detail in the final section, below. Now let's take a closer look at the parts, with an example. (The code snippets are available as a gist on GitHub.)

Writing Statements

A statement executes a test, and optionally does other work before and after executing the test. You write a new statement by extending the abstract Statement class, which declares a single method:

public abstract void evaluate() throws Throwable;

A typical evaluate() method acts as a wrapper around another statement. That is, it does something before the test, calls another statement's evaluate() method to execute the test, then does something after the test.

Here is an example of a statement that invokes the base statement to run the test, then takes a screenshot if the test fails:

public class ScreenshotOnFailureStatement extends Statement {
    private final Statement base;
    private final String className, methodName;

    public ScreenshotOnFailureStatement(Statement base, String className, String methodName) {
        this.base = base;
        this.className = className;
        this.methodName = methodName;
    }

    @Override
    public void evaluate() throws Throwable {
        try {
            base.evaluate(); // Run the test.
        } catch (Throwable cause) {
            MyScreenshooter.takeScreenshot(className, methodName);
            throw cause; // Rethrow so JUnit still records the failure.
        }
    }
}

Writing Rules

A rule chooses which statement JUnit will use to run a test. You write a new rule by implementing the MethodRule interface, which declares a single method:

Statement apply(Statement base, FrameworkMethod method, Object target);

The parameters are:

  • base: The statement that JUnit has gathered so far as it applies rules. This may be the default statement that JUnit created, or a statement created by another rule.
  • method: A description of the test method to be invoked.
  • target: The test class instance on which the method will be invoked.

The purpose of apply() is to produce a statement that JUnit will later execute to run the test. A typical apply() method has two steps:

  1. Create a statement instance.
  2. Return the newly created statement to JUnit.

Note that when JUnit later calls the statement's evaluate() method, it does not pass any information. If the statement needs information about the test method, the test class, how to invoke the test, or anything else, you will need to supply that information to the statement yourself (e.g. via the constructor) before returning from apply().

Almost always you will pass base to the new statement's constructor, so that the statement can call base's evaluate() method at the appropriate time. Some statements need information extracted from method, such as the method name or the name of the class on which the method was declared. Others do not need information about the method. It's rare that a statement will need information about target. (The only one I've seen is the default one that JUnit creates to invoke the test method directly.)

Often, there is no decision to make. Simply create the statement and return it, as in this example:

public class ScreenshotOnFailureRule implements MethodRule {
    @Override
    public Statement apply(Statement base, FrameworkMethod method, Object target) {
        String className = method.getMethod().getDeclaringClass().getSimpleName();
        String methodName = method.getName();
        return new ScreenshotOnFailureStatement(base, className, methodName);
    }
}

In situations that are not so simple, you can compute the appropriate statement based on information about the test method. For example, you may wish to suppress screenshots for certain tests, which you indicate with the @NoScreenshot annotation. Your screenshot rule can choose the appropriate statement depending on whether the annotation is present on the method:

public class ScreenshotOnFailureRule implements MethodRule {
    @Override
    public Statement apply(Statement base, FrameworkMethod method, Object target) {
        if (allowsScreenshot(method)) {
            String className = method.getMethod().getDeclaringClass().getSimpleName();
            String methodName = method.getName();
            return new ScreenshotOnFailureStatement(base, className, methodName);
        }
        else {
            return base;
        }
    }

    private boolean allowsScreenshot(FrameworkMethod method) {
        return method.getAnnotation(NoScreenshot.class) == null;
    }
}

A note about upcoming changes in the rule API

In JUnit 4.9 — the next release of JUnit — the way you declare rules will change slightly. The MethodRule interface will be deprecated, and the TestRule interface added in its place. The signature of the apply() method differs slightly between the two interfaces. As noted above, the signature in the deprecated MethodRule interface was:

Statement apply(Statement base, FrameworkMethod method, Object target);

The signature in the new TestRule interface is:

Statement apply(Statement base, Description description);

Instead of using a FrameworkMethod object to describe the test method, the new interface uses a Description object, which gives access to essentially the same information. The target object (the instance of the test class) is no longer provided.
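
Porting a rule is mechanical. The screenshot rule, rewritten against the new interface, might look like this:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class ScreenshotOnFailureRule implements TestRule {
    @Override
    public Statement apply(Statement base, Description description) {
        return new ScreenshotOnFailureStatement(base,
                description.getTestClass().getSimpleName(), description.getMethodName());
    }
}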

Applying Rules

To use a rule in your test class, you declare a public member field, initialize the field with an instance of your rule class, and annotate the field with @Rule:

public class MyTest {
    @Rule
    public MethodRule screenshot = new ScreenshotOnFailureRule();

    @Test
    public void myTest() { ... }

    ...
}

The Sequence of Events In Detail

JUnit now applies the rule to every test method in your test class. Here is the sequence of events that occurs for each test (omitting details that aren't related to rules):

  1. JUnit creates an instance of your test class.
  2. JUnit creates a default statement whose evaluate() method knows how to call your test method directly.
  3. JUnit inspects the test class to find fields annotated with @Rule, and finds the screenshot field.
  4. JUnit calls screenshot.apply(), passing it the default statement, the instance of the test class, and information about the test method.
  5. The apply() method creates a new ScreenshotOnFailureStatement, passing it the default statement and the names of the test class and test method.
  6. The ScreenshotOnFailureStatement constructor stashes the default statement, the test class name, and the test method name for use later.
  7. The apply() method returns the newly constructed screenshot statement to JUnit.
  8. (If there were other rules on your test class, JUnit would call the next one, passing it the statement created by your screenshot rule. But in this case, JUnit finds no further rules to apply.)
  9. JUnit calls the screenshot statement's evaluate() method.
  10. The screenshot statement's evaluate() method calls the default statement's evaluate() method.
  11. The default statement's evaluate() method invokes the test method on the test class instance.
  12. (Let's suppose that the test method throws an exception.)
  13. Your screenshot statement's evaluate() method catches the exception, calls the MyScreenshooter class to take the screenshot, and rethrows the caught exception.
  14. JUnit catches the exception and does whatever arcane things it does when tests fail.

One Small Thing

At the end of a workshop in 1997, I offered a clichéd end-of-training activity: "How will you use what you've learned in this workshop? Write that down." A few people started to write. More people looked uncomfortable. More than half seemed to check out mentally, and two began to check out physically, gathering their belongings to leave.

On the spur of the moment, I invented an elaboration: "What one small thing will you commit to doing? What's the smallest thing you can commit to?"

One man asked, "How small?" I said, "Smaller than that." He laughed, because my kinda non-answer answer seemed funny. Then he immediately thought of something. "Aha!" he said, and started to write. Another man still didn't understand, so I offered a bound: "Something you could do all on your own before noon on Monday." That made sense to him, and he began to write. And everybody else wrote. The two people who had started to gather their belongings sat down and wrote.

That spontaneous refinement worked so well that I now often end workshops with the One Small Thing activity: What one small thing will you commit to doing? Something you could do all on your own before noon? Write it down.

Over the years I've learned what makes this simple activity so powerful. First, people can almost always find something to commit to. If they are reluctant to commit to a given action, there's almost always a smaller action that they will commit to fully, with enthusiasm. One small thing. How small? Smaller than that. What kind of thing? Anything that you will commit to.

Second, when invited to commit to one small thing, people always find something that matters to them. So small doesn't mean unimportant.

Third, people are good at keeping small commitments. Keeping one small commitment starts a ball rolling—or continues one, for people who already make and keep personal commitments.

Fourth, once people see that keeping even a small commitment has good effects, and feels good in and of itself, they realize that they can make their lives better even in small steps.

When people finish writing, I invite them to share their commitments if they wish. Almost always there's a pause. Then someone shares. Then someone else. After the first few, other people begin to feel safer. Usually about a third of the people in the room will share their commitments, and then we're done.

As I invite people to share, I make it clear that sharing is entirely voluntary, and that it's okay to keep these commitments private and personal. At a recent workshop, one woman approached me after the activity and said, "Originally I wrote down something I didn't really care about. When I saw that you really weren't going to make us say our commitments out loud, I realized that this wasn't for you, it was for me. So I changed my commitment to something that matters to me. It matters a lot."