Software Testing Metrics – A Balanced Approach

Some say you can’t manage what you can’t measure. But before measuring, you need to know what to measure and why. And once you know why, you need to think about the actions you will take based on the measurements; if you won’t take any action, why bother measuring in the first place? Metrics must not only be collectible, accurate, and reliable, they must be meaningful, with the actions taken upon their analysis aligned with business objectives. Because many organizations don’t think this through carefully, they may collect software testing metrics that don’t align with their business and waste time doing so. Everyone knows of measurement programs that fell by the wayside.

In the big picture, software testing metrics are just one of the ways to measure and improve software quality. Wikipedia describes five perspectives on software quality, among various definitions by Deming, Feigenbaum, and others. From a practical perspective, however, software quality can be measured and improved through three main elements:

  • Process quality – the process and procedures used to deliver the software.
  • Internal product quality – the quality that no one sees, such as code quality.
  • External/Delivered product quality – the quality that is delivered and used by the end user/customer.

When we take measurements and develop metrics, they usually fall into one of these three categories. Some typical metrics associated with software quality and testing that we frequently encounter include:

  • Customer satisfaction
  • Defect quantities in production
  • Product volatility
  • Critical Defect ratio
  • Defect removal efficiency
  • Test coverage
  • Re-work
  • Customer service-tech support calls/time unit
  • Customer service-tech support call length

You may not consider many of these to be software testing metrics, but in reality they are: each is an indicator of the effect of software testing, and of the overall quality effort, on the company’s results.

Let’s take the customer satisfaction index as an example. Of course many factors influence customer satisfaction, but software quality is a primary driver, especially when software is a main part of the business model. Consider Amazon: less a book company than a software company. Although customer service, shipping, return policy, and other factors play a role, customer satisfaction is significantly influenced by the quality of each build or release of the website and its ability to do what the customer wants, flawlessly. So customer satisfaction is a measure of External/Delivered product quality.

Likewise, the number and length of technical support calls do not appear to be software testing metrics, but in reality they are the result of finding, or not finding, defects before release. So these also go into the External/Delivered product quality bucket.

On the other hand, a metric like defect removal efficiency tells us how effectively the team finds and removes defects before release. This goes into the Process quality bucket. Test coverage is a measure of quality that no one sees: even though it definitely influences end-product quality, it isn’t something the end user ever observes, so it goes into the Internal product quality category.
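Defect removal efficiency is commonly defined as the fraction of all known defects that were removed before release. A minimal sketch of the calculation (the function name and the defect counts below are illustrative, not from any particular project):

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int):
    """DRE = defects removed before release / total defects found.

    Returns a fraction between 0 and 1, or None if no defects
    were found at all (the ratio is undefined in that case).
    """
    total = found_before_release + found_after_release
    if total == 0:
        return None
    return found_before_release / total

# Example: testing caught 90 defects; 10 more escaped to production.
dre = defect_removal_efficiency(90, 10)
print(f"DRE = {dre:.0%}")  # prints "DRE = 90%"
```

A team tracking this number per release can watch whether process changes (more code review, earlier testing) actually move it.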

So software testing metrics, while helpful, need to be part of a balanced approach to examining, measuring, and improving software quality from multiple aspects. Attacking only one part will lead to limited and/or biased results.
