Programming by Emptying the Disagreement Domains

This post was triggered by my thinking and rethinking and rethinking about Robert Martin’s Transformation Priority Premise (TPP). I’m not confident that anyone who lives outside my skin will make sense of this or see any relationship whatsoever between this and the TPP.

But here goes.

Functions Domain and Method Domain

Imagine that we’re programming some feature, and the feature can be expressed as a mathematical function. It takes inputs from some set (its domain) and maps each input to a value from some set (its codomain).

Now a definition:

The function domain is the domain over which the function is defined. This is the Cartesian product of the sets from which the function’s inputs are drawn.

Further imagine that the feature can be invoked through a single method in code. We may write more than one method to implement the feature, but callers invoke the feature through a single method.

Further imagine that we can think of each invocation of the method as taking a single input. This “single” input may be made up of the values of multiple parameters and state variables.

Another definition:

The method domain is the domain over which the method is defined. This is the Cartesian product of the method’s parameter types and any variables of program state or environment state accessed by the method.

Done

Our implementation of the feature is done when:

  • The method domain exactly matches the function domain.
  • For each input, the method and the function yield the same value.

So if any of the following are true, we are not done:

  • The function domain has at least one member that is not accepted by the method.
  • The method accepts at least one input that is not in the function domain.
  • The function domain has at least one member for which the method’s output differs from the function’s value.

The way we define a function and the way we declare and implement a method can lead to interesting relationships between their domains. Let’s identify the key ways these domains may agree or disagree. Together, these sets of inputs characterize the state of our effort to implement the function.

Agreement Domains

The input agreement domain is the set of inputs for which both the function and the method are defined. It is the intersection of the function domain and the method domain.

The output agreement domain of a method is the set of inputs for which the method yields the same result as the function. The output agreement domain is necessarily a subset of the input agreement domain.

Disagreement Domains

The function and the method may mismatch in either the domains for which they are defined or the values that they yield.

The surplus domain is the subset of the method domain that is not in the function domain. The surplus domain includes every value that the method can accept, and for which the function is undefined.

For example, if a method takes an int parameter and the function is defined over positive integers, the surplus domain includes 0 and all of the negative values that can be represented by an int.

The deficit domain is the subset of the function domain that is not in the method domain. The deficit domain includes every value for which the function is defined, and that the method cannot accept.

For example, if a method takes an int parameter and the function is declared over the positive integers, the deficit domain includes every integer value larger than the maximum representable int.

The output disagreement domain is the subset of the input agreement domain for which the method result differs from the value of the function.
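
To make these three domains concrete, here is a minimal Java sketch (the Doubler class, its fault, and the function it implements are all invented for illustration). The function is f(n) = 2n, defined over the positive integers; the method accepts any int:

public class Doubler {
    // Method domain: every value that fits in an int.
    // Surplus domain: 0 and all negative ints; the method accepts
    // them, but the function is undefined there.
    // Deficit domain: positive integers greater than Integer.MAX_VALUE;
    // the function is defined there, but the method cannot accept them.
    public long doubleOf(int n) {
        // A deliberate fault: the wrong result for n == 7 puts 7 in
        // the output disagreement domain.
        if (n == 7) return 15;
        return 2L * n;
    }
}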

Emptying the Disagreement Domains

In a strict sense, a method implements a function if and only if the function domain, the method domain, and the output agreement domain are identical. They take the same inputs and produce the same results. The goal of implementing a function is to bring these three domains into agreement.

Let’s flip that. Any disagreement among these three domains means that at least one of the disagreement domains (the surplus domain, the deficit domain, or the output disagreement domain) is non-empty.

Programming proceeds by progressively emptying the disagreement domains. As long as any disagreement domain has members, we’re not done.

Three Programmer Moves

Programming proceeds in a series of moves. There are three types of moves that make progress:

  • Reduce the Surplus Domain.
  • Reduce the Deficit Domain.
  • Reduce the Output Disagreement Domain.

Reducing the Surplus Domain

Typically we reduce the surplus domain by changing the signature of the method, such as by changing parameters from more generalized types to more specific ones.

For example, we might change a parameter type from String to TelephoneNumber, or (in a language that offers unsigned types) from int to unsigned int.
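
As a sketch of the first of those changes, here is a hypothetical TelephoneNumber value type (the name and the format rule are invented for illustration). Because its constructor rejects malformed input, a parameter of this type cannot carry a surplus-domain value into the method:

public final class TelephoneNumber {
    private final String digits;

    // Rejects any string that is not ten digits, so the surplus
    // inputs are turned away at construction time.
    public TelephoneNumber(String digits) {
        if (digits == null || !digits.matches("\\d{10}")) {
            throw new IllegalArgumentException("Not a telephone number: " + digits);
        }
        this.digits = digits;
    }

    public String digits() {
        return digits;
    }
}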

We may also reduce the surplus domain by changing our definition of the function to expand its domain to better match the method domain. Sometimes we do this unconsciously.

We may also simply ignore parts of the surplus domain, if we are convinced that no caller will ever supply those inputs, or if we don’t care what happens when callers do supply them. Again, we sometimes do this unconsciously.

Reducing the Deficit Domain

Typically we reduce the deficit domain by changing the signature of the method, such as by changing parameters from restricted types to more inclusive ones.

For example, we might change a parameter type from int to BigInteger.
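
Continuing the earlier doubling sketch (still hypothetical), widening the signature might look like this:

import java.math.BigInteger;

public class Doubler {
    // BigInteger can represent every positive integer, so values
    // above Integer.MAX_VALUE no longer sit in the deficit domain.
    public BigInteger doubleOf(BigInteger n) {
        return n.multiply(BigInteger.valueOf(2));
    }
}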

We may also reduce the deficit domain by changing our definition of the function to reduce its domain to better match the method domain. Sometimes we do this unconsciously.

Reducing the Output Disagreement Domain

Typically we reduce the output disagreement domain by changing the body of the method to produce the correct results for inputs where it previously produced incorrect results.

Rarely (I think) we reduce the output disagreement domain by changing the definition of the function to match the output of the method.

TDD, Transformations, and Disagreement Domains

My hope is that all of this theory might help someone, somewhere to think more clearly and thoroughly about unit tests, either to assess whether their unit tests are sufficient to warrant their desired confidence in the code, or to select which unit test to write to test-drive the next transformation of the code.

I think these three domains are relevant for thinking about each of Uncle Bob’s transformations. And I have a vague sense that there’s something here that could help illuminate why Uncle Bob’s transformation priorities fall into the order they do.

But this is all fuzzy at the moment. So I’ll leave that for another day.

Emery's Law

Emery’s Law: Any social phenomenon is more complex than you imagine, even when you take into account Emery’s Law.

By any, I mean phenomena like:

  • The riots in Baltimore in the Spring of 2015.
  • Police treatment of individuals.
  • Distribution of privilege.
  • Compensation schemes.
  • Testing.
  • Estimating.
  • A stream of comments on YouTube, Reddit, Twitter, Facebook, …
  • Steve Earle’s concert at the Crest Theater in Sacramento on April 29, 2015.
  • A kiss.
  • A shared glance.
  • This blog post.
  • The woefully limited range of categories I was able to think of as I constructed this list.

One implication of Emery’s Law: Your understanding of any social phenomenon is incomplete.

My advice: Whatever conclusions you draw about any social phenomenon, no matter how simple it seems, hold them tentatively.

Diagnostic assertions

When we write an assertion statement, naturally we want to make the code expressive. We want the assertion statement to include all of the information that matters to the assertion, and no extraneous information. We want it to be clear to the reader just what we are asserting.

When writing assertion statements, test automators often neglect an important consideration: the diagnostic value of the message that is displayed if the assertion fails.

Given how carefully we craft our assertion statements, it’s a shame when the assertion failure message throws away important information, or when it swamps us with noise that hides the information that we want most.

Some assertion mechanisms naturally lose information when they create failure messages. Other mechanisms allow us to make the failure messages as clear and informative as the assertion statements themselves.

I will demonstrate using Java, JUnit, Hamcrest, and a few other libraries. You will find similar assertion mechanisms and features in other programming languages and libraries.

Example

Let’s set up an example that we can use to explore the diagnostic value of different kinds of assertions.

A class. Here is a very simple class, an item with text:

public class Item {
    private final String text;

    public Item(String text) {
        this.text = text;
    }

    public String text() {
        return text;
    }
}

A test. Here is a test:

@Test
public void findItemReturnsAnItemWithTextFoo() {
    Item item = someObject.findItem();
    assert item.text().equals("foo");
}

The test asks someObject to find an item, then asserts that the item’s text is equal to "foo".

A fault. The code being tested has a fault: It gives the item the incorrect text value "bar" instead of the desired value "foo".

A failure. Given that fault, our assertion will fail. When the assertion statement executes, it will detect that the value is incorrect and throw an exception to indicate an assertion failure. Then JUnit will display the exception, including any diagnostic message contained in the exception.

Assertion Styles

There are many ways to express our desired assertion in Java, especially if we use assertion mechanisms from third-party libraries, such as JUnit, Hamcrest, and Hartley. Each of the following assertion statements correctly evaluates whether our item’s text is equal to "foo", and each correctly throws an exception if the text does not have the desired value.

Let’s look at each of these styles, and notice what information is included in the failure message, and what information is lost.

The Java assert Statement

Our example test expresses an assertion using a Java assertion statement:

assert item.text().equals("foo");

Notice that this short statement gives a lot of information about our intentions:

  • assert says that we will make an evaluation, and that the evaluation is so important to our test that we will mark the test as failed if the result is not as we’ve specified.
  • item says that we will evaluate an object called item (or some aspect of item).
  • text() says that we will evaluate some text.
  • The . between item and text() says that we will evaluate the text obtained from item.
  • .equals() says that we will compare whether item’s text is equal to some value.
  • "foo" says that we will compare the item’s text to the literal value "foo".

Each of those six pieces of information is essential to the assertion. If we remove any piece of information, the assertion loses its meaning. (Technically, we could extract the text into a local variable, so that our assertion need not refer to item, but let’s assume that we’ve chosen this particular phrasing because we want to make the source of the text crystal clear.)

If we run this test and execute this assertion, the assertion throws an exception, and JUnit emits this message:

java.lang.AssertionError

followed by a stack trace. The stack trace indicates the Java file name and line number from which the exception was thrown. That is, it indicates the file name and line number of our assertion statement. So if we want to understand the source of the failure, we have to navigate to the source code and read the assertion statement. It’s a good thing we took pains to make the assertion statement so clear, because that’s all of the information we have as we begin our investigation of the failure.

The (Mostly) Uninformative Failure Message

Note that the failure message itself tells us only that some assertion failed. It gives us no information at all (in the message itself) about what aspect of our assertion went wrong. Remember that our assertion statement expressed at least six important pieces of information. The failure message conveys only one of these: the thing that went awry was an assertion.

That’s a critical piece of information, of course, but what happened to the other five pieces, which we took such pains to express so clearly in the code?

They were thrown away by the nature of the assertion mechanism. The Java assertion statement consists of two parts: the assert keyword and a boolean expression. To execute an assertion statement, Java first evaluates the boolean expression. Then it assesses the result (true or false). If the result is true, execution continues with the following statement. If the result is false, the statement throws an exception.

In evaluating the boolean expression, the computer loses the information in the original expression, and saves only the true or false result. By the time the statement throws its exception, all of the other information about the expression has been lost.

The JUnit assertTrue() Method

Now let’s try another style of assertion, the assertTrue() method from JUnit:

assertTrue(item.text().equals("foo"));

This method produces the same error message as the bare Java assert statement:

java.lang.AssertionError

again supplemented by a stack trace that points to the assertion in the code.

The JUnit assertTrue() method has one advantage over the Java assert statement: You don’t have to tell the JVM to enable it. If you want the JVM to execute assert statements, you have to pass a special argument (-ea, short for -enableassertions) to the JVM. If you don’t tell the JVM to enable assert statements, it (quietly) ignores them.

Otherwise, the effects of assert and assertTrue() are similar. Each throws an AssertionError, and neither gives you any of the other information that you so carefully crafted into your assertion.

Improving the Java assert Statement with an Explanation

Both JUnit and Java offer a mechanism to compensate for the limitations of the bare assert and assertTrue() assertions: the explanatory message (which I’ll shorten to explanation).

With the Java assert statement, you add an explanatory message like this:

assert item.text().equals("foo") : "Item text should be foo";

If this assertion fails, JUnit displays this message:

java.lang.AssertionError: Item text should be foo

followed by the same stack trace as before.

That’s much more informative than before. In fact, our failure message tells us nearly everything that the code tells us.

Am I happy? Nope.

The big problem with this explanation mechanism is that it requires you to express the same idea twice, once in the boolean expression, and again in the explanation. As with other forms of comments in code, it is very easy (and common) for the comment to diverge from the code it describes. You end up with failure messages that mislead.

And even if the comment stays current with the code, it is a form of duplication. If the code changes, you also have to do the extra work of changing the comment.

Adding Explanations with JUnit assertTrue()

JUnit includes a form of assertTrue() method that takes an explanation as its first parameter:

assertTrue("Item text should be foo", item.text().equals("foo"));

I find this expression more awkward than the assert statement. With the assert statement, the explanation follows the entire assertion. With assertTrue(), the explanation interrupts the left-to-right phrasing of the assertion.

Still, this is better than no explanation at all. The assertTrue() explanation has the same effect as the assert statement explanation. If the assertion fails, JUnit displays this message:

java.lang.AssertionError: Item text should be foo

Still Missing: The Actual Value

By adding explanatory messages to our assertions, we can restore the information that is lost when the assertion evaluates our boolean expression.

But now I notice another bit of information that I wish were displayed in the failure message: The actual value of the item’s text. Thanks to our explanations, we know the value is not the desired "foo", but what is the value?

Of course, if we’re writing our own explanations, we could easily include the actual value in our explanatory message. But that takes extra work. Granted, it’s not a big burden, but it is extra work.

What if there were a way to get that information (almost) for free? The actual value is computed somewhere during the evaluation, but it is thrown away once the comparison is made and we’ve determined the boolean result. What if we could catch the value before it was thrown away?

We can do that. The key is to express the assertion in a way that retains both the actual value and the desired value, even after comparing them.

The JUnit assertEquals() Method

JUnit offers another style of assertion, a style that, when evaluated, retains the separation between the value you are evaluating and the value you are comparing it to. If you want to compare values for equality, the appropriate method is (appropriately) called assertEquals(). It looks like this:

assertEquals("foo", item.text());

Though this shifts the phrasing (the concept of equality now appears earlier in the expression, and the desired value now appears before the retrieval of the actual value), we still have all six of the pieces of information that matter to our assertion.

Our new expression is okay, if a little clumsy English-wise. At least it has all the pieces, and it expresses each piece only once.

What happens when we run it? JUnit displays this message:

org.junit.ComparisonFailure: expected:<[foo]> but was:<[bar]>

Very interesting. By passing the two pieces of information separately to the method, we enable the method to emit a more helpful failure message. We give it two pieces (the expected value and the actual value) and it gives those two pieces back to us in the message.

We now have a piece of information that we didn’t (and couldn’t) express directly in our test code: The actual value of the item’s text.

Note also that we no longer get a barely informative AssertionError that tells us only that something went wrong. Now we get a more specific exception, a ComparisonFailure. I find this slightly maddening. We invoked a method dedicated to equality, but the error message seems to remember only that we were doing some kind of comparison.

Still, this is way better than, “Hey, dude, something’s busted. You figure it out.”

And best of all: We got it (nearly) for free. We didn’t have to duplicate any information in our statement. We had only to rephrase our assertion, to rearrange the same six important pieces of information.

So that’s two and a half of our six pieces of information that now appear in the failure message:

  1. We’re asserting something.
  2. We’re asserting equality (this appears only partially in the failure message, so it counts only half).
  3. We’re comparing something to foo (note that we’ve lost the information that it’s a string, but let’s be generous and give this full credit).

And remember that we also get this bonus piece of information:

  • The actual value that we were evaluating.

This is an important addition. Let’s say that we now would like all seven of those pieces of information. And we’re getting three and a half of them.

JUnit assertEquals() with Explanation

JUnit offers a form of assertEquals() that takes an explanatory message. Let’s use that to restore a few of the pieces that we still don’t get for free:

assertEquals("Item text", "foo", item.text());

Now the failure message is:

org.junit.ComparisonFailure: Item text expected:<[foo]> but was:<[bar]>

We’re duplicating the ideas of item and text, and I think we’ve done it in a way that also expresses the relationship between the two.

But there’s far less duplication than our earlier explanatory messages, so this is progress.

Can we do better?

The Hamcrest assertThat() Method

Every assertion involves a comparison of some kind. Steve Freeman and Nat Pryce took the very helpful step of separating the comparison from the assertion method. They accomplished this by introducing the concept of a matcher. A matcher is an object that represents some set of criteria, and knows how to determine whether a given value matches the criteria. Nat and Steve created a library of widely useful matchers called Hamcrest.[1]

Hamcrest also includes an assertThat() method that applies a matcher as an assertion:

assertThat(item.text(), equalTo("foo"));

The first parameter to assertThat() is the value that we want to evaluate. The second is a matcher object. Typically you supply a matcher by calling a factory method such as equalTo(). Hamcrest factory methods are named so that assertion expressions read informatively in the code. The assertion above says: Assert that item’s text (is) equal to “foo”.

When a Hamcrest assertThat() assertion fails, it throws an exception with a message like this:

java.lang.AssertionError:
Expected: "foo"
     but: was "bar"

This failure message is similar in content to the JUnit assertEquals() failure message. The great advantage of Hamcrest-style assertions is that you can easily extend the set of available assertions. We will leave that as an exercise for the reader.
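
If you want a head start on that exercise, here is a minimal sketch of a custom matcher for our Item class, built on Hamcrest’s TypeSafeMatcher (the HasText class and its hasText() factory method are my own inventions):

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

public class HasText extends TypeSafeMatcher<Item> {
    private final String expectedText;

    public HasText(String expectedText) {
        this.expectedText = expectedText;
    }

    // Evaluates the criteria: does the item's text equal the expected text?
    @Override
    protected boolean matchesSafely(Item item) {
        return expectedText.equals(item.text());
    }

    // Describes the criteria, for use in failure messages.
    @Override
    public void describeTo(Description description) {
        description.appendText("an item with text ").appendValue(expectedText);
    }

    // A factory method, so that assertions read well in the code:
    // assertThat(item, hasText("foo"));
    public static Matcher<Item> hasText(String expectedText) {
        return new HasText(expectedText);
    }
}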

Hamcrest also has a version of assertThat() that takes an explanation as a parameter. Normally all we need to do is describe the subject of the evaluation:

assertThat("Item text", item.text(), equalTo("foo"));

When this assertion fails, it emits a message similar to the one from JUnit assertEquals():

java.lang.AssertionError: Item text
Expected: "foo"
     but: was "bar"

As with JUnit’s assertEquals(), at the expense of a little bit of redundancy, we have made our failure message express all of the information that matters to the assertion.

So our assertion messages read very similarly to JUnit’s assertion messages. But Hamcrest’s assertion statements read far more expressively in the code.

The Hartley assertThat() Method

For my own tests, I often take a step beyond Hamcrest. I often want to evaluate not only some subject, but more specifically some attribute or feature or property of the subject. Each of the examples in earlier sections evaluates not only item, but more specifically the item’s text as returned by its text() method.

I often find it helpful to write assertion statements that separate the subject from the feature. I have created an assertion method to help me do that:

assertThat(item, text(), is("foo"));

This assertion method[2] takes three parameters. Each parameter is an object. The first object is the subject of the assertion. In this case, we are evaluating item.

The second parameter is the feature being evaluated. In the same way that Hamcrest matchers are objects that evaluate other objects, Hartley features are objects that extract values from other objects. The text() method is a factory method that produces a feature object that can retrieve the text from an item.

The third parameter is a matcher that compares the extracted feature to the desired criteria.

In the code, this reads just the same as the Hamcrest assertion:

Assert that item’s text is (equalTo) “foo”

But notice what happens when this assertion fails:

java.lang.AssertionError: Expected: Item text "foo"
     but: was "bar"

With no redundancy in the assertion statement itself[3], we now have a failure message that includes every important detail of the assertion.

Footnotes

  1. The word Hamcrest is an anagram of the word matchers. You can find Hamcrest matcher libraries for a variety of programming languages at http://hamcrest.org/.

  2. For this and other Java code that I find commonly useful when I automate tests, see my Hartley library.

  3. Note that I did have to do some extra work in order to convince the failure message to describe the subject and the feature. I had to implement the toString() method in both the Item class and in my feature object class. For that small amount of work, I can now use those classes in many assertions, and I gain the diagnostic expressiveness at no additional cost.

Using Git With Subversion Repositories

Some of my clients want to use git with their existing Subversion repositories. These are the notes I give them (along with training and coaching) to get them started.

For additional help, see the Git and Subversion section of Pro Git.

General Safety Rules

These rules will help keep you out of trouble until you learn the subtleties of working with git and Subversion:

  1. Make changes ONLY on feature branches.
  2. Merge your features into master ONLY immediately before committing to SVN.

Working With The SVN Repository

You will do most of your work in a local git repository, using normal git commands.

You will use the git svn command to do only three things:

  1. Create a local git repository from the SVN repository (git svn clone).
  2. Update your local master branch with changes from the SVN repository (git svn rebase).
  3. Commit your changes to the SVN repository (git svn dcommit).

Each is described below.

Create A Git Repository From An SVN Repository

To create a local git repository from a remote SVN repository:

git svn clone url-to-svn-repo my-git-repo-name

Note that this can take a very long time if the SVN repository has a large history.

Working With Feature Branches

You will do your work in feature branches in your local git repository. A common approach is to create a feature branch for each feature that you are working on.

See the git documentation for details about how to work with branches.

Create A Feature Branch

  1. Choose the starting point. Decide whether to base your new feature branch on the master branch or on another feature branch.
  2. Update the starting point branch (optional). If you want your new feature branch to include the latest changes from SVN, update the existing branch (see “Update The Master Branch” and “Update The Feature Branch” below).
  3. Check out the starting point branch.
  4. Create the feature branch. There are two ways to do this. See below.

    To create a feature branch and check it out:

    git checkout -b my-feature
    

    To create a feature branch without checking it out:

    git branch my-feature
    
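Putting those steps together, a typical sequence (using a hypothetical branch name) looks like this:

# Update master, then create the feature branch from it
git checkout master
git svn rebase
git checkout -b my-feature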

Committing Your Feature

To commit your feature into the SVN repository:

  1. Update the master branch with the changes from the SVN repository.
  2. Update the feature branch with the changes from the up-to-date master.
  3. Merge the feature into the master branch. You may need to resolve conflicts.
  4. Commit the feature to the SVN repository.

Each step is described in its own section below.

The complete sequence looks like this:

# Update master...
git checkout master
git svn rebase

# Update the feature branch
git checkout my-feature
git rebase master
# may need to resolve conflicts here

# Merge the feature into master
git checkout master
git merge my-feature

# Commit the feature from master
git svn dcommit

Update The Master Branch

To get the latest changes from the SVN repository into your master branch:

git checkout master
git svn rebase

You may wish to update your master branch often. That way, every time you create a feature branch from the master branch, the initial feature branch is up-to-date with the SVN repository.

Update The Feature Branch

You will be getting the new changes from the master branch. So before you do this, you must update the master branch.

Then:

git checkout my-feature
git rebase master

You may need to resolve conflicts.

I often update my feature branch, even if I am not going to commit soon. Keeping my feature branch up to date makes the final merge easier when I am ready to commit.

Resolve Conflicts

Whenever you rebase your feature branch on top of master, there may be conflicts between your feature branch and the new changes in master.

If there are conflicts, git will interrupt the rebase operation and alert you. To complete the rebase, you must first resolve the conflicts.

To resolve the conflicts, either edit the conflicting files in an editor, or use a mergetool:

git mergetool

Once you have resolved the conflicts, continue the rebase process:

git add .
git rebase --continue

Merge The Feature Into The Master Branch

Do this ONLY immediately before committing.

Before you do this, you must update the feature branch with changes from SVN.

Once your feature branch is up to date:

git checkout master
git merge my-feature
# OR: git merge --squash my-feature
# (a squash merge stages the changes but does not commit them;
#  follow it with git commit before you dcommit)

Commit The Feature To The SVN Repository

You will be committing from the master branch. So before you do this, you must merge your feature into the master branch.

Once your feature is merged into the master branch:

git checkout master
git svn dcommit

Testing and team effectiveness

My model of effectiveness focuses on three questions:

  • What results do we want to create?
  • What results are we observing?
  • What can we do to create the results we want?

Often I’m focused on a single person, the person I’m coaching. When I’m working with a team, another question becomes important: What are the similarities and differences among different people’s answers to these questions?

When testers test, they observe things that others have not yet observed. So different people now have different answers to the second question, about observations.

At the risk of floating away into the metaverse, we can now ask some new questions:

  • What do different people’s responses to test observations tell you about their answers to the three questions?
  • What do the responses tell you about the differences and similarities in the answers?
  • What are the implications of these similarities and differences for the team?

Standardize practices to aid moving people around?

Often as I’m working with an organization, managers and executives want to standardize new practices even before they have tried them. When I ask what standardizing will do for them, they say, “It will allow us to move people easily from one project to another.”

This puzzles me, for three reasons.

First, I haven’t seen organizations move people from project to project very often. Even if standardizing helps move people around, you won’t get that benefit if you don’t actually move people around.

Second, I don’t know whether “standardizing practices” helps move people from project to project. I’m not saying it doesn’t help. I’m saying that I don’t know.

Third, I’m not sure whether “standardizing practices” actually standardizes the practices in any meaningful way.

Given all of this, I suspect that “allowing us to move people around” is not the real reason for standardizing. If I’m right about that, what might the real reasons be?

Page Object Puzzle

Note: I originally posted this on Google+. There are lots of interesting comments there.

Now that I have a few months of experience using page objects in my automated tests, I have a puzzle.

If I understand the page object pattern correctly, page objects have two kinds of methods: queries and commands. A query returns the queried value (of course). Every command returns a page object, so that test ideas can be expressed using method chaining. If the command leaves the browser displaying the same page, the command returns the current page object. So far, all of this makes sense to me.

My puzzle is about what happens if the command causes the browser to display a new page. In that case, the command returns a new page object that represents the newly displayed page. The advantage of this is that it allows reasonably expressive method chaining. The disadvantage (it seems to me) is that it requires each page object to know about other pages. That is, it makes each page object class aware of every other page object class that might be reached via its commands. And if the result of a command depends on prior existing conditions, or on validation of inputs, then each page object command (in order to return the right kind of page object) must account for all of the different possible results.

This seems to me to create an unwieldy tangle of dependencies from one page object class to another. How do other people deal with this entanglement?

I’m toying with the idea of adjusting the usual page object pattern (sketched in code after this list):

  1. A query returns the queried value.
  2. If a command always leaves the browser displaying the same page, the command returns the current page object.
  3. If a command might lead to a different page, the command returns void, and the caller can figure out what page object to create based on its knowledge of what it’s testing.
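
Here is a minimal sketch of that adjusted pattern. All of the names, including the Browser wrapper, are invented for illustration:

// A hypothetical browser wrapper, reduced to what this sketch needs.
interface Browser {
    String textOf(String elementId);
    void type(String elementId, String text);
    void click(String elementId);
}

public class LoginPage {
    private final Browser browser;

    public LoginPage(Browser browser) {
        this.browser = browser;
    }

    // 1. A query returns the queried value.
    public String errorMessage() {
        return browser.textOf("error-message");
    }

    // 2. A command that always leaves the browser on this page
    //    returns the current page object, to allow method chaining.
    public LoginPage typeUserName(String name) {
        browser.type("user-name", name);
        return this;
    }

    // 3. A command that might lead to a different page returns void.
    //    The caller creates the next page object, based on its own
    //    knowledge of what it is testing:
    //        loginPage.submit();
    //        HomePage home = new HomePage(browser);
    public void submit() {
        browser.click("submit");
    }
}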

Frank(enstein), Victor, Shelley, and Igor: Tools for Testing iOS Applications

A client wants to write automated tests for their iOS app. I did a little research to find existing tools, and decided that I liked Frank. Frank embeds a small HTTP server (the Frank server) into an iOS application that runs in a simulator. You can write tests that interact with views in an app by sending commands to the Frank server.

Another element of Frank is an API for writing tests using Cucumber, a simple, popular Ruby-based testing framework.

One thing didn’t fit: My client’s test automators know Java, but they don’t know Ruby or Cucumber. So I wrote a Java driver (front end) for Frank, which I called Victor. The Victor API is a DSL similar to ones I’ve created in other situations, such as when I wrote a Selenium-based driver for testing Web applications.

I soon discovered that Frank development was at a point of transition. The existing Frank server relied on a third-party library called UIQuery to identify the views to interact with. But UIQuery development had gone stale, so Pete Hodgson (the awesome developer behind Frank) was in the process of writing a new query engine called Shelley to replace UIQuery.

I was a little nervous about inviting my clients to begin using a tool that was in the midst of such a transition. So I started writing a query engine of my own, called Igor. Now, my own query engine would be in transition, too, but at least my clients and I would be in control of it.

As I chatted about Igor on the Frank mailing list, Pete nudged me to consider creating a query syntax that looked something like CSS selectors, to allow people to apply their existing CSS knowledge.

I’ve spent much of the last few days designing the Igor syntax. I also implemented a few basic view selectors. Finally, I figured out how to convince Frank to delegate its queries to Igor, a process that is becoming easier as Pete adjusts Frank to allow a variety of query engines.

I’m hoping to have a few more of the most useful selectors implemented before I return to my client next Monday. If you want to take a peek, or follow along, or give feedback, here are links to the various technologies:

  • Frank: Pete Hodgson’s iOS driver.
  • Victor: My Java front end for Frank.
  • Igor: My query engine for Frank.
  • Igor syntax based on CSS.