Dimensions of Software Testing

A few days ago I was poking around the web for ideas about how to test software, and I saw Scott Ambler's article about "Full Life Cycle Object-Oriented Testing (FLOOT)." The article includes a list of common testing techniques. As I looked over the list, I noticed that there is a small set of key dimensions that distinguish one testing technique from another. For example, unit testing and system testing differ in the kind of component they test. Stress testing and usability testing differ in the quality attribute that they test for. Unit testing and acceptance testing differ in the nature of the decisions that are made based on the test results.

I love looking for patterns like that, so I spent an hour analyzing Scott's list to identify the dimensions. Here are the thirteen dimensions I found, along with a few examples that show how different testing techniques vary along each.

Unit Under Test. What type of component is being tested?

  • In Class Testing or Unit Testing, the unit under test is a class.
  • In Method Testing, the unit under test is a method of a class.
  • In System Testing, the unit under test is the system.
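
To make this first dimension concrete, here is a minimal sketch in Python (the ShoppingCart class and its tests are hypothetical, not from Scott's article). A method test exercises a single method in isolation; a class test exercises the class as a whole through the interplay of its methods.

    # A hypothetical class to test.
    class ShoppingCart:
        def __init__(self):
            self.items = []

        def add(self, price):
            self.items.append(price)

        def total(self):
            return sum(self.items)

    # Method Testing: the unit under test is a single method.
    def test_total_sums_item_prices():
        cart = ShoppingCart()
        cart.items = [3, 4]           # set up state directly
        assert cart.total() == 7      # exercise only total()

    # Class Testing / Unit Testing: the unit under test is the class,
    # exercised through several of its methods working together.
    def test_cart_tracks_items_added_to_it():
        cart = ShoppingCart()
        cart.add(3)
        cart.add(4)
        assert cart.total() == 7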

Test Case Scope. What is the scope of the interaction tested by each test case?

  • In Use-Case Scenario Testing, the scope of the interaction tested by each test case is a user goal.
  • In Unit Testing, the scope of each test case is a method invocation.
  • In Integration Testing, the scope is a transaction.

Unit Coverage. What subset of the unit under test is exercised by the test suite?

  • In Coverage Testing, the subset being exercised by the test suite is code statements.
  • In Path Testing, the coverage is logic paths.
  • In Regression Testing, the coverage is code changes.
  • In Boundary-value Testing, the coverage is limits.
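
To illustrate the coverage idea, here is a small sketch of Boundary-value Testing in Python (the is_valid_age function and its 0-to-120 range are invented for the example). The test cases deliberately exercise the values at and just beyond the limits, rather than trying to cover every statement or path.

    # A hypothetical function whose valid range is 0..120 inclusive.
    def is_valid_age(age):
        return 0 <= age <= 120

    # Boundary-value Testing: cover the limits and their neighbors.
    def test_ages_at_and_around_the_boundaries():
        assert not is_valid_age(-1)   # just below the lower limit
        assert is_valid_age(0)        # lower limit
        assert is_valid_age(120)      # upper limit
        assert not is_valid_age(121)  # just above the upper limit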

Behavioral Scope. What subset of the unit-under-test's behavior is being tested?

  • Installation Testing tests the system's installation procedure.
  • Functional Testing tests the system's business functionality.
  • Integration Testing tests interactions among subsystems.

Unit Relationships. What are the relationships among the units whose interactions are being tested?

  • In Inheritance-regression Testing, the relationship between units is inheritance.
  • In Integration Testing, the relationship is collaboration among peers.
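
One way to picture Inheritance-regression Testing is to re-run the superclass's test suite against the subclass. The sketch below is a hypothetical illustration (the Stack and BoundedStack classes are made up); it uses test-class inheritance so the subclass's test case automatically includes and re-runs the superclass's tests.

    import unittest

    # Hypothetical production classes: BoundedStack inherits from Stack.
    class Stack:
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    class BoundedStack(Stack):
        def push(self, item):
            if len(self._items) >= 10:
                raise OverflowError("stack is full")
            super().push(item)

    # Tests written for the superclass...
    class StackTests(unittest.TestCase):
        def make_stack(self):
            return Stack()

        def test_pop_returns_last_pushed_item(self):
            s = self.make_stack()
            s.push("a")
            s.push("b")
            self.assertEqual(s.pop(), "b")

    # ...are inherited and re-run against the subclass, checking that
    # the inherited behavior still holds (Inheritance-regression Testing).
    class BoundedStackTests(StackTests):
        def make_stack(self):
            return BoundedStack()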

Quality Attribute. What type of quality attribute is being tested?

  • In Stress Testing or Volume Testing, the quality attribute being tested is throughput or latency or capacity.
  • In Usability Testing, the quality attribute being tested is usability.
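
A quality-attribute test asserts on something other than functional correctness. Here is a rough sketch of a latency check in Python (the sort_records function and the half-second budget are arbitrary placeholders for the example):

    import time

    # Hypothetical operation whose latency we care about.
    def sort_records(records):
        return sorted(records)

    # A crude latency test: the quality attribute under test is
    # response time, not functional correctness.
    def test_sorting_100k_records_stays_under_budget():
        records = list(range(100_000, 0, -1))
        start = time.perf_counter()
        sort_records(records)
        elapsed = time.perf_counter() - start
        assert elapsed < 0.5  # arbitrary budget for the example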

Stakeholder. Whose interests are the focus of the testing?

  • Acceptance Testing focuses on the interests of users.
  • Operations Testing focuses on the interests of operators.
  • Support Testing focuses on the interests of support staff.

Liveness. How closely does the test environment mimic the operational environment? Or perhaps this dimension is better characterized as Safety: To what extent are the testers using the system to do the real work for which the system was intended?

  • In a Pilot, the test environment is the actual operational environment, perhaps limited in scope (e.g. a small subset of users, or for a limited time).
  • In Beta Testing, the environment is a fully operational environment, but perhaps used only for non-critical functions.
  • In Acceptance Testing, the environment is a non-operational environment similar to the operational environment.
  • Unit Testing is done in the development environment.

Visibility into Unit Under Test. To what extent does the tester exploit knowledge about the internals of the unit under test?

  • In Black-box Testing, the tester exploits no knowledge of the internals of the unit under test.
  • In White-box Testing, the tester exploits full knowledge of internals.
  • In Grey-box Testing, the tester exploits some knowledge of internals.
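
Here is a small Python sketch of the difference (the memoized fib function and its cache are invented for the example): the black-box test examines only inputs and outputs, while the white-box test peeks at, and is therefore coupled to, the internal cache.

    # Hypothetical unit under test: a memoized Fibonacci function.
    _cache = {0: 0, 1: 1}

    def fib(n):
        if n not in _cache:
            _cache[n] = fib(n - 1) + fib(n - 2)
        return _cache[n]

    # Black-box Testing: only the public behavior is examined.
    def test_fib_of_10_is_55():
        assert fib(10) == 55

    # White-box Testing: the test exploits knowledge of the internals
    # (here, the memoization cache) to check how the result is stored.
    def test_fib_populates_the_cache():
        fib(10)
        assert _cache[10] == 55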

Tester. What is the relationship of the tester to the software under test?

  • For Acceptance Testing or User Testing, the tester is a user of the software.
  • For Unit Testing or Developer Testing, the tester is a developer of the software.

Processor. What type of "processor" will "execute" the "software" during the tests?

  • In most kinds of testing, a computer executes the software.
  • In Code Inspections and Design Reviews, developers "execute" the software.
  • In Prototype Walkthroughs, users "execute" the "software."

Pre-Test Confidence. How confident are we about the software before we begin the testing?

  • Before Alpha Testing, our confidence in the software is lower (compared with Beta Testing).
  • Before Beta Testing, our confidence in the software is higher (compared with Alpha Testing).

Decision Scope. What kinds of decisions will we make based on the outcome of the test?

  • For Acceptance Testing, the key decision is whether to release the product.
  • For Integration Testing, the decision may be whether to begin system testing.
  • For Unit Testing, the decision is whether the current coding task is complete.

This list is based on only an hour's work, and on my analysis of only a single list of testing techniques (Scott's), so I don't claim that it is anywhere near complete or correct. It might be useful, though, for people who want to expand their repertoire of testing techniques, or to locate a technique that fits a given purpose or context.

I wonder what would happen if we created a thirteen-dimensional matrix. What parts of the matrix would be crowded with testing techniques? What parts would be empty?

Thirteen dimensions is more than I can handle. So what would happen if we took two or three dimensions at a time and explored all of the values along those dimensions? Would that be interesting? Would it be useful? Would it help us to identify testing techniques that fit our specific situations? Might we notice holes in the matrix for which we want to invent useful techniques?
