Of course we don’t want to waste our clients’ time, and that’s why measuring test effectiveness is critical. Every defect we report should be a real defect, so we calculate the following metrics (a short worked example follows the list):

  • Defect finding capability: real defects found / total defects found
  • Defect rejection rate: defects rejected by client / total defects found

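To make the arithmetic concrete, here is a minimal sketch in Python of how the two ratios might be computed; the function names and the sample counts are hypothetical, not taken from a real project.

    # Minimal sketch of the two metrics above; the counts are illustrative only.

    def defect_finding_capability(real_defects: int, total_defects: int) -> float:
        """Share of reported defects the client accepted as real."""
        return real_defects / total_defects if total_defects else 0.0

    def defect_rejection_rate(rejected_defects: int, total_defects: int) -> float:
        """Share of reported defects the client rejected."""
        return rejected_defects / total_defects if total_defects else 0.0

    # Example: 40 defects reported, 36 accepted as real, 4 rejected.
    print(defect_finding_capability(36, 40))  # 0.9
    print(defect_rejection_rate(4, 40))       # 0.1
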
These metrics give a rough measure of how well the testers know the application. What is important here is to analyze why defects are rejected or marked invalid. Some possible reasons:

  • Could not be reproduced or repeated
  • Could not be understood because the report lacked detail
  • Not a defect – works as it should

By looking at the reasons that defects are rejected, we can determine where our testers need more training. Perhaps they are not good at expressing themselves, or at describing a problem in a way that a developer would understand. Or, if the defect cannot be reproduced, they need to be more thorough and do their homework on the defect before reporting it. “Not a defect” can be a murky case: the developer says the software is supposed to work that way, yet the behavior may be poorly documented in the requirements, which can also point to problems in that area.
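
As a rough illustration of that analysis, the sketch below tallies rejected defects by reason so the most common causes stand out; the defect IDs and records are made up, and only the reason categories come from the list above.

    from collections import Counter

    # Hypothetical rejected-defect records: (defect id, rejection reason).
    rejected = [
        ("DEF-101", "not reproducible"),
        ("DEF-105", "not enough detail"),
        ("DEF-112", "not a defect"),
        ("DEF-118", "not enough detail"),
    ]

    # Tally reasons to see where coaching is needed: many "not enough detail"
    # rejections suggest report-writing practice, while many "not reproducible"
    # ones suggest more thorough checks before a defect is reported.
    reason_counts = Counter(reason for _, reason in rejected)
    for reason, count in reason_counts.most_common():
        print(f"{reason}: {count}")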

In a few of our projects where we have worked with the software for a long time, our testers come to know it so well that their overall view of the application is better than the client’s. In these cases, our defect rejection rate is often zero.