Agile testing metrics are no different from metrics for other development methodologies, except in timing. In agile, we want progress to be highly visible and problems (and potential problems) to surface immediately. We can't spend days calculating metrics when we need to know what's going on today. Collecting data, then calculating and reporting, should be easy and valuable: good ROI. Our metrics also need to connect directly to our stakeholders' goals and the questions they want answered. Otherwise, they'll walk by the whiteboard and won't care.
Since one of stakeholders' main objectives for agile is delivering high-quality, working code that can be considered a deliverable product, Continuous Integration (CI) is an integral component of agile. Our agile testing metrics, then, should be connected to CI. With continuous integration, builds are compiled continuously ("continuous" can be defined as once a day, once an hour, etc.).
The number of tests executing and passing for each CI build is therefore a key metric. The test count should grow as we deliver and test more code. Failing tests are highly visible with each build. We like to track not only the number of failures, but also the number of days a test script continues to fail. This gives us an aging of our failures, so to speak. We use this to weight failures, and we try to keep every failure less than X days old. If you set X = 1, you force the team to focus on a quality build every day. Usually we set X to 2-4, which gives us incentive to fix things in a relatively short period of time.
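The failure-aging idea above can be sketched in a few lines of code. This is a minimal, hypothetical example (the function names, test names, and data shape are our own assumptions, not part of any particular CI tool): it assumes each daily build reports the set of currently failing test names, increments an age counter for tests that keep failing, and flags anything older than the X-day threshold.

```python
# Hypothetical sketch of failure aging: track how many consecutive daily
# builds each test has been failing, and flag tests older than X days.
MAX_AGE_DAYS = 3  # the "X" from the text; 2-4 is the suggested range


def update_failure_ages(ages, failing_tests):
    """ages: dict mapping test name -> consecutive failing builds so far.
    failing_tests: set of test names that failed in the latest build.
    Returns a new dict of updated ages."""
    new_ages = {}
    for test in failing_tests:
        # Still failing: age grows by one build (one day, in this sketch).
        new_ages[test] = ages.get(test, 0) + 1
    # Tests that passed this build simply drop out, resetting their age.
    return new_ages


def stale_failures(ages, max_age=MAX_AGE_DAYS):
    """Tests that have been failing longer than the allowed window."""
    return sorted(t for t, age in ages.items() if age > max_age)


# Example: three daily builds with made-up test names.
ages = {}
for failing in [{"test_login", "test_cart"}, {"test_login"}, {"test_login"}]:
    ages = update_failure_ages(ages, failing)

print(ages)                             # {'test_login': 3}
print(stale_failures(ages, max_age=2))  # ['test_login']
```

With X = 2, `test_cart` (fixed after one day) never becomes stale, while `test_login`, failing for three straight builds, shows up on the stale list and would be weighted more heavily on the whiteboard.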
As with all metrics, the trend is more important than the raw number. You can't compare yourself with other teams, even within the same organization, because too many contextual factors differ: team size, product complexity, existing code, and so on.
If you’re interested in learning more about agile testing metrics, and how to use them to support you in keeping on track, give us a call.