A few days ago I was poking around the web for ideas about how to test software, and I saw Scott Ambler’s article about “Full Life Cycle Object-Oriented Testing (FLOOT).” The article includes a list of common testing techniques. As I looked over the list, I noticed that there is a small set of key dimensions that distinguish one testing technique from another. For example, unit testing and system testing differ in the kind of component they test. Stress testing and usability testing differ in the quality attribute that they test for. Unit testing and acceptance testing differ in the nature of the decisions that are made based on the test results.
I love looking for patterns like that, so I spent an hour analyzing Scott's list to identify the dimensions. Here are the thirteen dimensions I found, and a few examples that show how different testing techniques vary along each.
Unit Under Test. What type of component is being tested?
- In Class Testing or Unit Testing, the unit under test is a class.
- In Method Testing, the unit under test is a method of a class.
- In System Testing, the unit under test is the system.
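To make the first dimension concrete, here is a minimal sketch in Python, using a hypothetical Stack class (my own example, not from Scott's article) as the unit under test. The first test treats the class as the unit; the second targets a single method:

```python
# A hypothetical Stack class standing in for the unit under test.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Class Testing / Unit Testing: the unit under test is the Stack class.
def test_push_then_pop_returns_last_item():
    stack = Stack()
    stack.push(42)
    assert stack.pop() == 42

# Method Testing: this test case targets a single method, pop().
def test_pop_on_empty_stack_raises():
    try:
        Stack().pop()
        assert False, "expected IndexError"
    except IndexError:
        pass
```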
Scope. What is the scope of the interaction tested by each test case?
- In Use-Case Scenario Testing, the scope of the interaction tested by each test case is a user goal.
- In Unit Testing, the scope of each test case is a method invocation.
- In Integration Testing, the scope is a transaction.
Coverage. What subset of the software does the test suite exercise?
- In Coverage Testing, the subset being exercised by the test suite is code statements.
- In Path Testing, the coverage is logic paths.
- In Regression Testing, the coverage is code changes.
- In Boundary-value Testing, the coverage is limits.
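Boundary-value Testing is easy to illustrate. The sketch below assumes a hypothetical grade function whose documented limits are 0 and 100, and exercises the values at and just outside each limit:

```python
# A hypothetical function with documented limits: valid scores are 0-100.
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Boundary-value Testing: exercise the values at and adjacent to each limit.
def test_boundaries():
    for bad in (-1, 101):          # just outside the valid range
        try:
            grade(bad)
            assert False, "expected ValueError"
        except ValueError:
            pass
    assert grade(0) == "fail"      # lower limit of the valid range
    assert grade(59) == "fail"     # just below the pass threshold
    assert grade(60) == "pass"     # the pass threshold itself
    assert grade(100) == "pass"    # upper limit of the valid range
```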
Responsibility. Which of the system's responsibilities is being tested?
- Installation Testing tests the system's installation procedure.
- Functional Testing tests the system's business functionality.
- Integration Testing tests interactions among subsystems.
Relationship Between Units. What relationship between units is being tested?
- In Inheritance-regression Testing, the relationship between units is inheritance.
- In Integration Testing, the relationship is collaboration among peers.
Quality Attribute. What quality attribute is being tested?
- In Stress Testing or Volume Testing, the quality attribute being tested is throughput or latency or capacity.
- In Usability Testing, the quality attribute being tested is usability.
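A stress or volume test can be sketched in a few lines. The handle_request function and the five-second budget below are both illustrative assumptions, not real measurements:

```python
import time

# A hypothetical operation whose latency we care about.
def handle_request(payload):
    return payload.upper()

# Stress/Volume Testing sketch: push many requests through and check
# that the total time stays within an (arbitrary, illustrative) budget.
def test_throughput_under_load():
    requests = ["payload-%d" % i for i in range(10_000)]
    start = time.perf_counter()
    for request in requests:
        handle_request(request)
    elapsed = time.perf_counter() - start
    assert elapsed < 5.0, "10,000 requests took %.2fs" % elapsed
```

A real stress test would run against the deployed system with production-like data volumes; the shape of the check, though, is the same: drive load, measure, compare against a budget.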
Stakeholders. Whose interests does the testing focus on?
- Acceptance Testing focuses on the interests of users.
- Operations Testing focuses on the interests of operators.
- Support Testing focuses on the interests of support staff.
Environment. In what environment is the testing done?
- In a Pilot, the test environment is the actual operational environment, perhaps limited in scope (e.g. a small subset of users, or for a limited time).
- In Beta Testing, the environment is a fully operational environment, but perhaps used only for non-critical functions.
- In Acceptance Testing, the environment is a non-operational environment similar to the operational environment.
- Unit Testing is done in the development environment.
Knowledge of Internals. How much knowledge of the unit's internals does the tester exploit?
- In Black-box Testing, the tester exploits no knowledge of the internals of the unit under test.
- In White-box Testing, the tester exploits full knowledge of internals.
- In Grey-box Testing, the tester exploits some knowledge of internals.
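The difference shows up directly in test code. In this sketch, the Cache class and its internal dictionary are hypothetical; the first test exploits only public behavior, while the second reaches into the internals:

```python
# A hypothetical cache standing in for the unit under test.
class Cache:
    def __init__(self):
        self._store = {}   # internal detail, not part of the public API

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

# Black-box Testing: exploits only the public put/get behavior.
def test_black_box_put_then_get():
    cache = Cache()
    cache.put("k", 1)
    assert cache.get("k") == 1

# White-box Testing: exploits knowledge of the internal dictionary.
def test_white_box_internal_storage():
    cache = Cache()
    cache.put("k", 1)
    assert cache._store == {"k": 1}
```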
Tester. Who tests the software?
- For Acceptance Testing or User Testing, the tester is a user of the software.
- For Unit Testing or Developer Testing, the tester is a developer of the software.
Executor. Who or what executes the software?
- In most kinds of testing, a computer executes the software.
- In Code Inspections and Design Reviews, developers "execute" the software.
- In Prototype Walkthroughs, users "execute" the "software."
Confidence. How confident are we in the software before testing?
- Before Alpha Testing, our confidence in the software is lower (compared with Beta Testing).
- Before Beta Testing, our confidence in the software is higher (compared with Alpha Testing).
Decisions. What decisions are made based on the test results?
- For Acceptance Testing, the key decision is whether to release the product.
- For Integration Testing, the decision may be whether to begin system testing.
- For Unit Testing, the decision is whether the current coding task is complete.
I wonder what would happen if we created a thirteen-dimensional matrix. What parts of the matrix would be crowded with testing techniques? What parts would be empty?
Thirteen dimensions is more than I can handle. So what would happen if we took two or three dimensions at a time and explored all of the values along those dimensions? Would that be interesting? Would it be useful? Would it help us to identify testing techniques that fit our specific situations? Might we notice holes in the matrix for which we want to invent useful techniques?