In an attempt to get to all the questions from the webinar sponsored by QAI on software quality and metrics in preparation for the Quest 2014 conference, I’m attacking them a few at a time. The webinar was recorded and can be seen here.

How do you balance (or minimize) the effort involved in collecting and summarizing metrics against their impact on the project work being done?

As you know, most people surveyed responded that they collect 1-5 software quality metrics.

That’s a good start. We recommend 6-10, with metrics spread among the different stages of the process. For instance, collect metrics on user stories as mentioned above, as well as metrics from development such as unit test results and tech support data. For each phase of the process, collect metrics that enable you to really understand that part of the process. Then you can glue them together with ratios, make sense of them, and possibly ascertain some cause and effect. Start simple: collect just a few, and after the first 3-4 you’ll see where you need more information to make a decision. Remember that you want to make decisions and evaluations based on the metrics you collect. If a metric doesn’t support that, either get rid of it or change it.
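As a small sketch of gluing per-phase metrics together with a ratio, here is one way to compute defect removal efficiency from counts collected at each stage. The phase names, metric names, and numbers below are illustrative placeholders, not figures from the survey.

```python
# Hypothetical per-phase metric counts; names and values are illustrative only.
phase_metrics = {
    "requirements": {"stories_reviewed": 40, "defects_found": 12},
    "development":  {"unit_tests": 350, "defects_found": 58},
    "test":         {"test_cases_run": 500, "defects_found": 25},
    "production":   {"defects_found": 5},   # escapes found after release
}

# Glue the phases together with a ratio: defect removal efficiency is the
# share of all defects that were caught before release.
pre_release = sum(m["defects_found"] for phase, m in phase_metrics.items()
                  if phase != "production")
total = pre_release + phase_metrics["production"]["defects_found"]
dre = pre_release / total

print(f"Defect removal efficiency: {dre:.1%}")
```

A ratio like this is exactly the kind of cross-phase measure that supports a decision: if it drops, you know to look at where defects are escaping.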

How do you determine system size?

System size was previously determined by KLOC, which stands for thousands (K) of lines of code (LOC). The problem with that measurement is that it penalizes highly efficient languages: the same functionality takes fewer lines, so the system looks smaller. These days, the more relevant measurement is function points. User stories can also be used, but the problem with them is the inconsistency in their definition, content, complexity, and quantity.
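To make the function point idea concrete, here is a minimal sketch of an unadjusted function point count using the standard IFPUG component types with their "average"-complexity weights. The component counts are made-up illustrations; a real count would classify each component as low, average, or high complexity per the IFPUG counting rules.

```python
# IFPUG average-complexity weights for the five component types.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Hypothetical component counts for some system; illustrative only.
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 4,
}

# Unadjusted function points: sum of (count x weight) over component types.
ufp = sum(counts[k] * AVG_WEIGHTS[k] for k in counts)
print(f"Unadjusted function points: {ufp}")
```

Because the count is based on what the system does rather than how many lines it took, a terse language and a verbose one score the same for the same functionality.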

It’s hard to calculate cost for test activities.

You are right, but you can quantify it in broad terms with labor hours and materials (tools and machines).