On Agile projects, everyone cares about velocity, quality, and working software, so it makes sense for agile test metrics to track anything that would impede those objectives. But as with any metrics, we want to make sure they are:
- Easy to collect – If they are not easy to collect, we'll quit collecting them in the long run.
- Repeatable – If a measurement cannot be easily repeated, that's another reason for quitting.
- Valid – If the measurements, and the metrics derived from them, are not reliable representations of what they claim to measure, they will be questioned, and again, we'll quit.
Test metrics can help determine whether a project is on track. Metrics such as the number of tests run and passed, code coverage, and defect counts give only part of the picture. Like any data, they need interpretation in context and should be examined against the objectives of the team and the project. When doing continuous integration in agile, the number of tests executed and passing is critical because it relates directly to the objectives of speed and quality. Of course, trends over time, or comparisons per module, function, or story, matter more than raw measurements. As the application grows with each sprint, the raw numbers of tests executed and tests passed should continue to increase, and the ratio of passed to executed should stay within a narrow band.
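The trend check described above can be sketched as a small script. The sprint data and the 95% band are illustrative assumptions, not real project numbers:

```python
# Sketch: track tests executed/passed per sprint and flag any sprint
# whose pass ratio drifts outside a narrow band. Data is hypothetical.

sprints = [
    {"sprint": 1, "executed": 120, "passed": 117},
    {"sprint": 2, "executed": 180, "passed": 174},
    {"sprint": 3, "executed": 240, "passed": 210},  # ratio dips here
]

BAND = (0.95, 1.0)  # assumed acceptable pass-ratio range


def flag_ratio_drift(sprints, band=BAND):
    """Return (sprint, ratio) for sprints whose pass/executed
    ratio falls outside the band."""
    flagged = []
    for s in sprints:
        ratio = s["passed"] / s["executed"]
        if not (band[0] <= ratio <= band[1]):
            flagged.append((s["sprint"], round(ratio, 3)))
    return flagged


print(flag_ratio_drift(sprints))  # → [(3, 0.875)]
```

Note that raw counts rise each sprint, as expected, but sprint 3 is still flagged because the ratio drifted, which is exactly why the ratio matters more than the raw numbers.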
A failing test should flag a warning, especially if it corresponds to a P1 defect. What we usually do is set a threshold, either an absolute number or a percentage, on the outstanding P1 defects in the build at any given time. We never want to stay above that threshold for more than a set period, usually 48-72 hours. This helps us reach the agile objective of software that is releasable and high quality at any given time, on a continuous basis.
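One minimal way to implement that time-boxed threshold, assuming you can sample the open-P1 count from your tracker (the threshold of 3 defects and the 48-hour limit here are assumptions standing in for whatever your team picks):

```python
# Sketch: alert when the open P1 count has stayed above a threshold
# for longer than the allowed window. Samples are (timestamp, count).
from datetime import datetime, timedelta

P1_THRESHOLD = 3                  # assumed absolute threshold
MAX_ABOVE = timedelta(hours=48)   # assumed allowed breach window


def breach_duration(samples, threshold=P1_THRESHOLD):
    """How long, as of the last sample, the open-P1 count has been
    continuously above the threshold."""
    start = None
    for ts, count in samples:
        if count > threshold:
            if start is None:
                start = ts        # breach begins
        else:
            start = None          # dipped back under; reset
    return timedelta(0) if start is None else samples[-1][0] - start


samples = [
    (datetime(2024, 1, 1, 9), 2),
    (datetime(2024, 1, 1, 15), 5),   # breach starts here
    (datetime(2024, 1, 2, 9), 4),
    (datetime(2024, 1, 3, 18), 6),   # still above 48+ hours later
]

print(breach_duration(samples) > MAX_ABOVE)  # → True: raise the alarm
```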
Regarding white-box code coverage metrics, we usually establish a baseline at the beginning of the project and work to increase it with each sprint. Code coverage is tricky because there may be features or stories with no tests at all. Even so, coverage metrics give you objectives for improvement over time, although code coverage is not necessarily correlated with quality in the end users' hands.
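The baseline-and-improve approach amounts to a coverage "ratchet": each sprint's measured coverage should not fall below the best level reached so far. A sketch, with illustrative percentages:

```python
# Sketch: a coverage ratchet. The baseline starts at the first sprint's
# coverage and rises with every improvement; any sprint that falls below
# the current baseline is reported. Percentages are made up.

def coverage_regressions(per_sprint_coverage):
    """Return (sprint, coverage, baseline) for each sprint that
    dropped below the highest coverage previously achieved."""
    baseline = per_sprint_coverage[0]
    regressions = []
    for sprint, cov in enumerate(per_sprint_coverage[1:], start=2):
        if cov < baseline:
            regressions.append((sprint, cov, baseline))
        else:
            baseline = cov  # ratchet upward
    return regressions


print(coverage_regressions([62.0, 65.5, 64.0, 70.2]))
# → [(3, 64.0, 65.5)]: sprint 3 slipped below the sprint-2 baseline
```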
We also look at value provided in terms of story points planned and completed. This gives you an idea of how well you are planning and estimating workloads. Combined with overtime, this metric will tell you when your team is exhausted and overreaching. Many times you can correlate this to the defect and test-pass metrics described above: it makes sense that a tired team will be less productive over time and make more mistakes as well. Again, spot these things over time, on a sprint-by-sprint basis.
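Combining the two signals can be as simple as flagging sprints where a low completion ratio coincides with heavy overtime. The cutoffs (80% completion, 20 overtime hours) and the sprint data below are assumptions for illustration:

```python
# Sketch: flag sprints where the completed/planned story-point ratio is
# low *and* overtime is high — a possible sign of an exhausted team.

sprints = [
    {"sprint": 1, "planned": 30, "completed": 29, "overtime_hrs": 4},
    {"sprint": 2, "planned": 34, "completed": 33, "overtime_hrs": 10},
    {"sprint": 3, "planned": 40, "completed": 28, "overtime_hrs": 26},
]


def exhaustion_warnings(sprints, min_ratio=0.8, max_overtime=20):
    """Return the sprint numbers where throughput fell while
    overtime climbed."""
    warnings = []
    for s in sprints:
        ratio = s["completed"] / s["planned"]
        if ratio < min_ratio and s["overtime_hrs"] > max_overtime:
            warnings.append(s["sprint"])
    return warnings


print(exhaustion_warnings(sprints))  # → [3]
```

A flagged sprint here is a prompt to go look at the defect and pass-ratio trends for the same period, not a verdict on its own.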
Use metrics that are directly associated with the goals of the project or of the agile process itself. For some metrics, the plan may simply be to hold them constant while improving others. Review them regularly and often to make sure you're getting the results and the behavior you want from them.