Here we go again… even more valuable information from Jay Philips and Mike Lyles, two of the panelists from December's lively discussion about software metrics entitled "The Good, The Bad and the Metrics."
Test Metrics. Love 'em or hate 'em, everyone has an opinion about them…
some even have questions about 'em…
Q. What is the purpose of metrics, and how can they be done 'well'?
A. (Jay) The purpose of a metric is to let others in your organization know what is happening.
You know what you are doing, which defects you found, and what areas of the application work as expected, but how do others in your organization know?
Metrics allow others to see visually what you know.
A.(Mike) I go back to something Michael Bolton told me once. And that is that testers are investigative reporters.
We review the situation, we determine our assessment of the situation, and we report on that situation.
I like that statement, because we rarely are the cause or the solution. But we definitely have to ensure we accurately assess the condition – and that we can be confident in our reporting, and that our stakeholders can build confidence in our reporting as well.
Q. Is Defect Removal Efficiency (DRE) a valuable metric or a waste of time?
A. (Mike) It really depends on your organization. As with all metrics, it's about making sure you understand what the company is trying to accomplish and what the stakeholders expect.
Is the goal of your work to ensure that you have cleared defects FAST, early, before production? Maybe DRE is a metric you track.
If the goal of your efforts is to ensure that you provide a quality product that moves to production with little to no defects, and this is NOT time-boxed, then maybe DRE is less important to you.
You have to understand the fundamental goals of the organization – speed, accuracy, cost – and understand what is important. And then, if you feel this is a bad indicator of health, then you push back.
A. (Jay) As Mike wrote, it really depends on what the metric means to your organization. All metrics can be useful if used correctly and understood; at the same time, those same metrics can be a waste if you don't know what the metric is for and what to do with it when it's shown.
Refer to my post on creating metrics that matter: http://www.teamqualitypro.com/software-metrics/6-key-steps-in-creating-metrics-that-matter/
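For readers less familiar with the metric discussed above: DRE is typically computed as the share of all known defects that were removed before release. A minimal sketch, with an illustrative function name and sample counts that are not from the discussion:

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE as a percentage: the share of total known defects caught before release."""
    total = found_before_release + found_after_release
    if total == 0:
        return None  # no defects observed at all; DRE is undefined
    return 100.0 * found_before_release / total

# e.g. 90 defects caught during testing, 10 leaked to production:
print(defect_removal_efficiency(90, 10))  # -> 90.0
```

As both panelists note, a number like 90.0% only matters if your organization has agreed on what counts as "found" and why earliness matters to you.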
Q. Would you advocate measuring test effort rather than test numbers? i.e., scoring a test by effort?
A. (Jay) I'm not a fan of focusing on test effort, because one tester might take longer to write a test case than another, but that doesn't mean one is better than the other.
Organizations should focus on the number of defects the test team found, which helped increase application quality for the end user.
Focus on the number of performance defects that were found and resolved before they made it to production, so the application didn't go down, which would have cost the organization a huge amount of money. This is where you can really show the testing ROI.
A. (Mike) Honestly, I see too many organizations getting hung up on the cost of testing, and almost always on the % of testing time and cost versus development. If you're doing this, then you are not focused on quality.
With that said, many testing organizations are GETTING this pushback because they are not able to clearly articulate how they arrived at their staffing plan and their estimation of testing effort.
Make sure you have your facts straight, and your process to build and execute the tests correct; then go forth with confidence, regardless of the effort %, cost, time, etc. But I will tell you that more often than not, organizations are more focused on testing effort than results.
Q. What I think is often missing as a result of calculated metrics is adjusting the priorities of executing individual test cases. If I find a lot of defects in an area, maybe similar test cases "around" the test case that failed, or with similar requirements, should get a higher execution priority. Normally the priority of test cases is defined once during test case specification and not revisited later during test execution, when errors are found. I think that apart from measuring metrics and creating reports, it is important to react to the measured metrics: e.g., if there are lots of defects in a module or test object, then maybe more test cases need to be specified especially for that module (where you find a lot of defects, there are probably many more to find), or perhaps the complexity is high and parts of the software need to be redesigned.
A.(Jay) You should revisit the priority of your test cases prior to each release since each release is different. By doing this you will be able to determine what areas should be focused on more.
A. (Mike) I think this was a statement/question about a comment I made in the webinar on how metrics and results can help drive you toward specific targets for your future testing.
In other words, if you have a group of developers who typically have the most defects related to their code, you may want to target your testing coverage highly for any and all future work being done by this group.
If you have a specific module that typically always has a high number of defects regardless of the developers, that would, in my opinion, be an area for focused testing, exploratory testing, automated regression, and detailed tracking.
To your second sentence on the priority of test cases: I would never want to prioritize at the start and never revisit my priority or plan for cases. That seems like an old process, one in which you set your testing plan early and don't leave yourself open to exploratory and context-driven practices.
Q.This poll is a splendid example of misleading measurement.
A. (Mike) I know why this was stated. We had a GoToWebinar poll where the attendees could answer "YES" or "NO", and we got a population number for how many agreed or disagreed with a question.
However, the setup of those questions requires the organizer to specify that only ONE option can be selected. Otherwise, it allows you to select BOTH if you try.
And, since we were (mainly) testers in the webinar, that is exactly what happened. We had a question where the responses were, for example, "58.2% YES" and "43.7% NO", with a total that was more than 100%.
This was eye opening, but it should be a good example of how, no matter what your intent is with a given metric, you have to ensure that the setup of that metric, and how you plan to record and present it, are spot on.
I saw once on a social network that someone posted a funny quote that said “53.2% of all statistics are made up”.
Just because you see a number doesn't mean you can't question it, examine it, and clarify it. The age-old comment "Trust… but verify" will always hold true here.
A. (Jay) I thought the poll example was great, because the audience didn't know about the setting that Mike put in place, so everyone jumped to conclusions about what the metric was telling them.
This was a perfect example of why everyone should have been told what the metric was, how it was being gathered, and what the calculation was going to be.
Q. How do you avoid bowing down to a large audience and ending up producing the 50-60 page report?
A. (Mike) Three words: Death by Committee. We get caught up in the spiral of making sure we have every metric we can create, and what almost always happens is that the result is a testing deck in PowerPoint that is only looked at by the team building it.
I have been tempted, in past roles, to place a slide in the report that says “CALL MIKE IF YOU SEE THIS SLIDE AND YOU GET $20”. I am sure my money would be safe, because the audience was NOT reading the reports.
Be careful of building the mansion when all you need is a one room house. Stick with the simple things first. And when you build on, examine what you can trade off.
Don’t “overmetric” your company.
A. (Jay) When you are determining the key metrics with your stakeholders, determine the ones that make the most sense.
Don't create a metric just because someone said it looked pretty or because you heard about it somewhere.
If one person asks you to add a new metric, take it back to the whole team and verify that everyone agrees the metric will be of value.
Q. Metrics for agile testing? Should they differ from the metrics gathered for more traditional testing?
A. (Jay) As answered previously, the metric should be based not on the approach but on the outcome of the approach.
A.(Mike) I do not believe so. As I mentioned in an earlier question, it’s less about the approach for testing or the methodology of the type of development, and more about what your stakeholders are expecting to see on the health of the product, and how you plan to deliver the results of your testing.
The way you build those results may differ based on waterfall or agile (for example the timing of those results or frequency), but I would think the metrics would not be different.
Q. If the company is a CMMI Level 5 company, but they don't have particular standard methods/practices, are they failing at testing metrics?
A. (Jay) This is a perfect example of why your organization would not implement a CMMI metric.
Just because you don't have a certain standard in place doesn't mean you're failing. However, if your defects are being found more in production than in system testing, then your testing is failing.
A. (Mike) Yes. No. Maybe. LOL. I would not focus my testing around the overall CMM level competencies. I would be more interested in the maturity of the testing that is going on.
And I would measure the results of the testing being conducted based on the leakage of defects into production. In my opinion, it doesn't matter whether you are doing requirements-based testing or context-driven and exploratory testing, whether you are CMM Level 1 or Level 5, or whether you have a terabyte of process and methodology documents: if your product is failing over and over in production, customers are unhappy, and the product is not improving, then you are failing as an organization.
Good standards and methodology practices are valuable, but they're just one part of the equation.
It's like saying that you exercise every day but eat only junk food, candy, cake, and other very unhealthy things. It's not good.
Q.Can you describe in more detail your automated dashboard…contents, frequency, device availability, etc…?
A. (Mike) This would be Jay's dashboard, so I will defer this question to her.
A. (Jay) The automated dashboard, TeamQualityPro, allows your entire organization to get the same information from a variety of different data sources. You can set it up to show defect results, testing results, hours, project planning, etc.
The dashboard can be set up to pull data every hour/day/week, so it's always available for people to view wherever they are. It doesn't require anything to be installed on the user's machine since it's web based, and it can be housed within your organization or delivered via a SaaS model.
The dashboard is also mobile ready, so you can check for last-minute changes via your iPhone/Android if needed. You can get more information or go through the demo at: http://www.teamqualitypro.com
Q.This dashboard makes me want to ask: what’s the difference between metrics that seek to measure internal health/efficiency of the testing effort vs. metrics that seek to measure the value of the testing outcomes to the business?
A.(Mike) Since this was Jay’s dashboard, I will let her answer specifically, but my note here is that both metrics are valuable to me. Think of it as a restaurant. I go into the restaurant and I eat, and I determine if the value of the food is good, bad, horrible, or delicious.
But before I even enter the building, they are measuring their own “grade” with internal inspections to ensure their kitchen is clean, their people are washing their hands, they are using the best methods to prepare the food, they are not contaminating the food, they are not wasting ingredients, etc.
I think it’s important for testing organizations, REGARDLESS of whether they use metrics or not, to ensure they are internally auditing themselves before working for their customers (the stakeholders).
A. (Jay) The dashboard and metrics should measure both productivity and quality in all aspects. Stakeholders want to know how they are doing internally as a whole, as well as within each business unit; the only way to do this is to look at all areas of the organization.
Mike’s explanation of a restaurant is spot on and a perfect example of why you would want to track the internal health as well as the outcomes.
Q. How often should metrics be placed in front of stakeholders? Weekly, Monthly, etc.?
A. (Mike) Every 17.234 minutes.
Okay, this is one of the last questions, so I had to lead with a funny response.
The serious response is that it depends. And that dependency is on your audience, your stakeholders, what you’re trying to convey, and how you want them to understand the work that has been done. Is the product you are testing a highly critical product that ensures a hospital device works properly and keeps someone alive?
Then your reporting may need to be very frequent and very detailed. Is the product you are testing something that is less critical?
Maybe a weekly or biweekly status is in order. Regardless of the frequency, you must ensure that it's accurate and valid.
HOW you report the metrics is less critical than WHAT you are telling them. Ensure you don't lose their respect.
A. (Jay) Only every 17.234 minutes? Geez, Mike, that's not enough.
The metrics will depend on your organization's needs, which may differ based on the viewer.
The big thing to remember here is to be consistent about when the metrics are delivered, how they are delivered, and what is delivered.
I always say, "Don't present any metric that you can't back up with facts and/or recreate."
“Feeling the Love” from Positive Audience Feedback …
The best slide was the Suggested Metrics EXAMPLE slide… I would like to see more ideas about what will be valuable yet simple to understand for management.
(Mike) Please follow my blog at www.mikewlyles.com — and I suggest Rex and Jay's sites as well. I have a link called "Testapedia" on my site, and it will share with you many of the sites that I have followed and read over the years. This should give you various views on the topic.
Also, I plan to keep you posted with a calendar update for my upcoming work, speaking engagements, and articles. Stay tuned.
But the one thing I will leave you with on this question is that you have to first understand management before they can understand you.
Stephen Covey made one of his 7 Habits "Seek First to Understand, Then to Be Understood". Such a strong statement. Make sure you understand your customer before you beg them to understand what you're trying to tell them. You'll be much more successful.
Good discussion…continue as long as you can…maybe schedule a follow-on webinar…
(Mike) Thank you, and yes, I plan both to write on this topic and to pull in various angles from many of the experts in the field to continue this discussion in the near future.
There is a lot to be said about metrics, and whether you feel they are good, bad, misleading, or just plain confusing, everyone has to agree that the stakeholders have to get SOMETHING.
(Jay)That is our purpose in life – to show what we did, why we did it, and what the results were.
Thank you, and I agree with Mike: this is one topic that could truly use more webinars, classes, workshops, etc. Metrics exist in all organizations, some good, some bad.
Our goal should be to get the bad metrics out and get the good metrics better.
I’ve been more engaged in this webinar than I have in any other webinar I’ve attended this year…good topic.
(Mike) Thank you so much for your kind words. Follow me at www.mikewlyles.com for more, and find all my social networks collected at http://about.me/mikelyles
(Jay) That’s great to hear. Please connect with all of us as we are always interested in discussing topics like this.
Well… that's all for now. Follow @xbosoft to find out about upcoming webinars.
Thank you to Jay and Mike for taking time out to answer the questions.