In my last post, I mentioned that software test metrics can be used for different purposes. One of them was tracking progress. Usually when we track progress, it is related to time or some other unit that indicates a schedule. Most often we use progress metrics to track planned versus actual over time. What we track depends on our role: financial people track money spent, but in software quality assurance we want to track the progress of things like defects, test cases, and man-hours. Basically, anything that is related to results or the effort spent to get those results. For instance, here are a few (a short sketch computing them follows the list):

man-hours/test case executed: The natural tendency when driving costs down is to force this as low as possible, but remember that testers going faster and executing more tests does not translate into higher-quality software.

planned hours/actual hours: We want to track the effort we plan versus the effort we actually spend, not just to see whether we are planning our resources well, but to spot deviations and then dig into why they exist, which can point to problems. If we find that planned versus actual deviates on certain days of the week, or that deviations come only from certain testers, and those testers are working on specific parts of the software, that is useful information.

test cases executed/planned: This just keeps us on track and makes sure we get the bare minimum done in terms of executing our test cases. If execution repeatedly takes too long, something needs to change. And if we regularly go faster than planned, that may point to a problem with the test cases themselves (especially if they find no defects).

test cases executed/defects found: This metric indicates how good our test cases are at finding defects. A run that finds no defects, or only a few, does not mean there are no defects in the software.
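Since all four of these are simple ratios over the same per-day, per-tester execution log, a small script can report them on whatever schedule you like. Here is a minimal sketch in Python; the log structure, field names, and numbers are hypothetical and made up purely for illustration, and the per-tester breakdown at the end shows the kind of deviation analysis mentioned under planned hours/actual hours.

```python
# Minimal sketch: computing the progress ratios above from a hypothetical daily test log.
# Field names and numbers are made up for illustration only.
from collections import defaultdict

daily_log = [
    # (tester, weekday, planned_hours, actual_hours, planned_cases, executed_cases, defects_found)
    ("alice", "Mon", 8.0, 9.5, 10, 8, 2),
    ("bob",   "Mon", 8.0, 7.0, 10, 11, 0),
    ("alice", "Tue", 8.0, 8.5, 10, 9, 3),
    ("bob",   "Tue", 8.0, 6.5, 10, 12, 1),
]

planned_hours  = sum(row[2] for row in daily_log)
actual_hours   = sum(row[3] for row in daily_log)
planned_cases  = sum(row[4] for row in daily_log)
executed_cases = sum(row[5] for row in daily_log)
defects_found  = sum(row[6] for row in daily_log)

print("man-hours per executed test case:", actual_hours / executed_cases)
print("planned hours / actual hours    :", planned_hours / actual_hours)
print("executed / planned test cases   :", executed_cases / planned_cases)
print("test cases executed per defect  :", executed_cases / defects_found)

# Deviation breakdown for the second metric: planned vs actual hours per tester.
hours_by_tester = defaultdict(lambda: [0.0, 0.0])
for tester, _day, planned, actual, *_rest in daily_log:
    hours_by_tester[tester][0] += planned
    hours_by_tester[tester][1] += actual

for tester, (planned, actual) in hours_by_tester.items():
    print(f"{tester}: planned {planned}h, actual {actual}h, deviation {actual - planned:+.1f}h")
```

In practice the same calculations work just as well in a spreadsheet; the point is simply that all of these metrics fall out of one consistently kept execution log.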

Here is a generic chart which shows planned versus actual in terms of test cases executed.

[Chart: Test Execution Progress, showing executed versus planned test cases]

I’m wondering why the gap continues to increase over time… Have to look into that.
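One way to start looking into it is to plot the gap itself alongside the planned and actual curves. Here is a minimal sketch using matplotlib; the cumulative counts are made-up placeholders, not the data behind the chart above.

```python
# Minimal sketch: cumulative planned vs actual test cases executed, plus the gap.
# The numbers are made-up placeholders, not the data behind the chart above.
import matplotlib.pyplot as plt

days = list(range(1, 11))
planned = [20 * d for d in days]                       # plan assumes 20 cases per day
actual = [18, 35, 50, 63, 75, 86, 96, 105, 113, 120]   # hypothetical cumulative actuals
gap = [p - a for p, a in zip(planned, actual)]

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(days, planned, label="Planned")
top.plot(days, actual, label="Actual")
top.set_ylabel("Cumulative test cases")
top.legend()

bottom.bar(days, gap)
bottom.set_xlabel("Day")
bottom.set_ylabel("Gap (planned - actual)")

plt.tight_layout()
plt.show()
```

A cumulative gap that widens steadily usually just means the daily execution rate is running consistently below the planned rate, which is why the daily difference in the lower panel is often the more revealing view.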