Software Testing and the Hawthorne Effect

I've been to many sessions on software testing metrics where the instructor discusses the software testing Hawthorne Effect. Often cited are the lighting experiments of the 1930s, in which the light in a manufacturing facility was both increased and decreased, and factory workers' productivity improved either way. When applying the results of these experiments to software testing, most instructors then discuss testing metrics such as test cases written or defects found, and the unexpected consequences or changes in behavior that can result from using such metrics. But I think we are missing the importance and significance of the Hawthorne Effect. First, the Hawthorne Effect was based on several experiments, not only in lighting but also in many other factors such as break times, food, and payment schemes. Second, interpretations of the Hawthorne experiments vary, and many researchers have derived different conclusions. Some of their conclusions I summarize here as the Hawthorne Lessons:

Agile Defects – Should We Write Up Defects in Agile

Some folks might interpret "Working software over comprehensive documentation" to mean that no agile defects should be written up. Shouldn't all defects be fixed for "working" software within an iteration? With optimal collaboration within the sprint, there would be no need for an actual defect to be written up. Right? The only problem with this scenario is that it is ideal, and it rarely happens in reality.

Defect Removal Efficiency Versus Defect Detection

Just mention software testing metrics and there is ample discussion on Defect Removal Efficiency (DRE), so let's examine it closer. 

Back in the days of waterfall, if we found 100 defects during the testing phase (which we fixed pre-production) and then later, say within 90 days after software release (in production), found five defects, then the DRE would be what we were able to remove (fix) versus what was left and found by users:

100/(100+5) = 95.2%

This 95% has been referred to by Capers Jones, a well-known colleague in software measurement, as a good number to shoot for.
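For readers who like to see the arithmetic as code, here is a minimal sketch of the DRE calculation above. The counts come from the worked example; the function name and the zero-defect edge case are my own assumptions, not from the original post.

```python
def defect_removal_efficiency(pre_release: int, post_release: int) -> float:
    """DRE = defects removed (fixed) pre-production / total defects found."""
    total = pre_release + post_release
    if total == 0:
        return 1.0  # assumption: finding no defects at all counts as fully effective
    return pre_release / total

# The waterfall example above: 100 defects fixed during the testing phase,
# 5 more found by users within 90 days of release.
print(f"DRE = {defect_removal_efficiency(100, 5):.1%}")  # prints: DRE = 95.2%
```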

Defect Aging – Time to Fix

We're back again with outstanding questions from our Agile Metrics Webinar. This question relates to Defect Aging.

Q: Slide 63 - Is this also called Defect Age?
A: Yes, what we term Time to Fix is roughly equivalent to Defect Age, but a little different. With Defect Age, you could measure all the defects that you have fixed and [...]

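The answer above is cut off, but as a rough illustration, here is a minimal sketch of one common way to compute Defect Age: the time from when a defect is opened until it is fixed, or until today for defects still open. The records and dates are hypothetical.

```python
from datetime import date

# Hypothetical defect records: (opened, fixed); fixed is None if still open.
defects = [
    (date(2014, 1, 6), date(2014, 1, 9)),
    (date(2014, 1, 7), date(2014, 1, 20)),
    (date(2014, 1, 13), None),
]

as_of = date(2014, 1, 27)

# Age in days: open-to-fix for fixed defects, open-to-today for open ones.
ages = [((fixed or as_of) - opened).days for opened, fixed in defects]
print(f"average defect age: {sum(ages) / len(ages):.1f} days")  # 10.0 days
```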

Defect Removal Effectiveness – Agile Testing Webinar Q&A

In our recent agile testing webinar, we had an outstanding question on Defect Removal Effectiveness (DRE).

Q: For the DRE calculation, I believe the measure below is correct. Can you please confirm? DRE = Defects Found Pre-release (DFP) / (DFP + Defects Found Post-release)
A: Yes, your calculation is correct. Ours is actually the complement and would be called Defect Escape Rate (which we want to keep low) instead of Defect Removal Effectiveness. Let's do an example just to illustrate.
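The excerpt is cut off before the example, but the arithmetic is simple enough to sketch. The counts below are made up for illustration; the point is that DRE and Defect Escape Rate are complements of each other.

```python
pre_release = 95   # defects found and fixed before release (hypothetical)
post_release = 5   # defects that escaped to production (hypothetical)

total = pre_release + post_release
dre = pre_release / total           # Defect Removal Effectiveness: keep high
escape_rate = post_release / total  # Defect Escape Rate: keep low

print(f"DRE = {dre:.0%}, escape rate = {escape_rate:.0%}")  # DRE = 95%, escape rate = 5%
assert abs((dre + escape_rate) - 1.0) < 1e-9  # the two always sum to 100%
```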

Defect Removal Efficiency – Software Quality Celebration

Sometimes there's a need to party, and when we're able to increase Defect Removal Efficiency for our clients, we party up. Metrics are a key part of measuring the service we provide to our clients, so we track them carefully. Not only do we need to measure the DRE % in aggregate, but by tracking pre-production defect issue types versus post-production issue types, we can also determine where we need to focus more effort, and possibly beef up our knowledge. For instance, if most post-production defects are found on certain platforms, then perhaps we should look closer at deployment configurations or those particular platforms. On the other hand, if certain functional areas have more defects in production, then we know these somehow escaped and that we need either more training in those areas or more focus.

Measuring Test Effectiveness

Of course we don't want to waste our clients' time, and that's why measuring test effectiveness is critical. For any defect we report, we want it to be a real defect, so we calculate these metrics (see the sketch after this list):
- Defect finding capability: real defects found / total defects found
- Defect rejection rate: defects rejected by client / total defects found
These ratios can measure how well the testers know the application. What is important here is to analyze why defects are rejected or invalid. Some possible reasons:
- Could not be reproduced or repeated
- Could not be understood; not enough detail
- Not a defect – [...]
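A minimal sketch of the two ratios defined above, with hypothetical counts. Note that when every rejected defect is simply a non-real defect, as assumed here, the two ratios sum to 100%.

```python
def finding_capability(real_defects: int, total_defects: int) -> float:
    """Real defects found / total defects reported."""
    return real_defects / total_defects if total_defects else 0.0

def rejection_rate(rejected_defects: int, total_defects: int) -> float:
    """Defects rejected by the client / total defects reported."""
    return rejected_defects / total_defects if total_defects else 0.0

total_reported = 120                  # hypothetical sprint total
rejected = 9                          # not reproducible, unclear, not a defect...
real = total_reported - rejected      # assumption: everything else was real

print(f"defect finding capability: {finding_capability(real, total_reported):.1%}")
print(f"defect rejection rate:     {rejection_rate(rejected, total_reported):.1%}")
```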

Measuring Rework

The goal of agile is to be faster and produce better software. To do this, we need to minimize rework. One key metric for measuring rework is:

  • Fix Effort Ratio – time spent fixing bugs / total effort expended (hours)
This metric depends on valid data, whereby people log real hours against the tasks they take on. The ratio should decrease over time, both because the number of defects decreases and because defect definition and reporting improve, which means quicker understanding and thus faster fixing by developers. This directly increases velocity for new features, because developers then spend more time on new work and less on defect fixing. For rework to be reduced [...]
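As a rough illustration of the ratio above, here is a minimal sketch with hypothetical logged hours; as noted, it is only meaningful if people log real hours against their tasks.

```python
# Hypothetical logged hours for one sprint.
bug_fix_hours = [3.0, 1.5, 6.0, 2.5]  # time spent fixing individual defects
total_effort_hours = 160.0            # all effort expended in the sprint

fix_effort_ratio = sum(bug_fix_hours) / total_effort_hours
print(f"fix effort ratio: {fix_effort_ratio:.1%}")  # 8.1% of effort was rework
```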

Agile Test Metrics – Connecting Metrics with Agile Objectives

For Agile projects, everyone is concerned about velocity, quality and working software. So it makes sense for agile test metrics to carefully track anything that would impede those objectives. Test metrics can be used to help determine whether a project is on track. Metrics such as number of tests run and passed, code coverage and defect metrics can give only part of the picture. Like any data, they need interpretation in context and need to be examined as they relate to the objectives of the team and project.

Software Defect Metrics – Getting Ready for Softec 2014

As I prepare for my talk in Kuala Lumpur at Softec 2014, I started thinking about our own projects at XBOSoft and the software defect metrics that we use internally to see how we are doing for our clients. There are the normal ones such as defect escape or leakage rates, defect fix time, and technical debt reduction through refactoring. But from a 'pure QA' point of view, and in particular at XBOSoft, we want to reduce the work for our clients while improving the quality of their software. Some of the key metrics we track include:

Defect Removal Efficiency – Highlights from the ASTQB Software Testing Conference

During the ASTQB three-hour workshop, Software Testing Metrics Are For Action, one of the top questions concerned the use of DRE, or Defect Removal Efficiency. I was surprised to find that out of the 40 participants, only 5 used this metric (12.5%). This number was similar to a workshop I attended last year at PNSQC, where the material presented indicated that fewer than 12% considered this metric an essential part of a metrics program. The class was lively and spirited. We divided the class into 3 modules:

Managing Testing Projects With Metrics

In today's webinar, Shaun Bradshaw of Zenergy Technologies discussed how to use the 'S-Curve' in managing testing projects. First he discussed metrics in general and where they should and should not be used. In particular, he discussed context, and how there are always interpretations and assumptions that go with metrics. These interpretations and assumptions are key in using metrics and are one of the most prevalent reasons that metrics fail.
Then he discussed...

Managing with Metrics – 2/6/14 – Webinar with Shaun Bradshaw & Philip Lew

Some consider test metrics, and in particular managing with metrics, a thorn in software development and testing, but when used properly, they provide valuable insights into what occurs during projects and what strategic and tactical adjustments must be made on a daily basis. Find out how a small set of test metrics was used to successfully manage a major acceptance testing effort at the conclusion of a two-and-a-half-year ERP implementation. Bradshaw calls attention to specific uses of S-curve analytics and provides an interesting history of how the "S-curve" was first discovered. Obviously, [...]

Webinar on Software Testing Metrics with Mike and Friends

San Francisco, CA (PRWEB) December 16, 2013

The webinar, The Good, the Bad, and the Metrics, focused on when to use, and when not to use, software testing metrics.

On December 17, XBOSoft hosted another complimentary software quality webinar, this time on software testing metrics. Are metrics worth the effort? Speaker Jay Philips notes, "Bad metrics are worse than no metrics …" Where and how does your organization use metrics and measurement, if at all? Are your current metrics clear, or perhaps misinterpreted?

Software Quality Metrics For Testers – Presentation at StarWest

Yesterday, I finished my presentation at StarWest on Software Quality Metrics For Testers. It was a packed room, and I was really surprised. I started by discussing the title: why 'quality metrics' instead of 'testing metrics'? I like to use analogies, so I began with diet and health, something everyone can connect with. With diet, it's easy to see that what we eat affects our weight, and our weight in turn impacts our performance and results if we engage in sports. We can compare that to testing, where activities done well early in the development lifecycle, such as requirements, can influence code quality and the satisfaction of requirements. This in turn can impact the test results and, in the end, the users.

Agile Testing Metrics

Agile testing metrics are no different than metrics for other development methodologies, except for timing. Using agile, we want progress to be highly visible and problems (and potential problems) to be known immediately. We can't spend days calculating metrics when we want to know what's going on that day. Collecting data, then calculating and reporting, should be easy and valuable: a good ROI. We also need our metrics to be directly connected to our stakeholders' goals and the questions they need answered. Otherwise, they'll walk by the whiteboard and won't care. Since one of the main objectives of stakeholders with regard to agile is to deliver high-quality code that works and can be considered deliverable product, Continuous Integration (CI) is an integral component of agile. So, our agile testing metrics should be somewhat connected to CI, as sketched below.
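What "connected to CI" might look like in practice: a hedged sketch that computes a per-build test pass rate from results you might pull from a CI server. The build records here are invented; any real integration would use your CI tool's own API.

```python
# Hypothetical per-build results, e.g. pulled from a CI server's API.
builds = [
    {"build": 101, "tests_run": 480, "tests_passed": 462},
    {"build": 102, "tests_run": 484, "tests_passed": 480},
    {"build": 103, "tests_run": 490, "tests_passed": 489},
]

for b in builds:
    pass_rate = b["tests_passed"] / b["tests_run"]
    # Posted per build, a falling pass rate is visible the same day,
    # not days later when the metric is finally calculated.
    print(f"build {b['build']}: pass rate {pass_rate:.1%}")
```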

Software Quality Metrics – Top Reasons for Failure

Have you implemented a software quality metrics program? If so, then you know it's not easy and that metrics programs often fail. Why?
Software Quality Metrics – Top Reasons for Failure:
--Metrics don't measure the right thing
--The metric becomes the goal
--Metrics don't have consistent context, so they are unreliable
--Measurements used are not consistent
--Metrics don't answer the questions you had to begin with
--Metrics have no audience
--Measurements are too hard to get
--Metrics have no indicators, so they cannot be evaluated

Agile Testing with Jared Richardson and Philip Lew – Webinar March 20 2013

We are happy to announce that on March 20 at 10 AM EST we are holding an agile testing webinar with two veterans in the field: Jared Richardson and Philip Lew. Jared and Phil will discuss the changes needed for QA and testing when working in an agile environment. Agile testing topics covered include:
- Agile development trends
- How to test 'agile'
- How to implement scrum
- Typical scrum testing bottlenecks and how to solve them
- Testing agile requirements
- Agile test metrics
About Jared Richardson: Jared is a software consultant. [...]

Software Testing Metrics – Defect Analysis

Software quality is an abstract concept. So abstract that standards bodies develop models, such as ISO 25010, to understand it better. Quality models help us crystallize and understand quality and quality requirements, both functional and non-functional, with the goal of evaluating them. The goal of testing is to determine whether these requirements are met. During the course of testing, we find defects: instances where the software does not meet requirements. Hence, in the area of software testing metrics, there has been abundant work on analyzing defects. With respect to analyzing defects, there are many flow charts detailing how defects flow back and forth to QA with changes in state (fixed, open, re-opened, etc.). There are also numerous software applications (defect tracking and defect management systems) that help us track defects during the different phases of development and after release. However, these activities are rarely connected to metrics in a way that is easy to analyze. Rather, there are many defect metrics, often listed out with definitions and calculations [...]
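One small sketch of what connecting a defect tracker to analyzable metrics could mean: aggregating raw tracker records by state and by the phase where each defect was found. The records and field names are hypothetical.

```python
from collections import Counter

# Hypothetical export from a defect tracking system: (id, state, phase found).
defects = [
    (1, "fixed",     "system test"),
    (2, "open",      "system test"),
    (3, "re-opened", "production"),
    (4, "fixed",     "unit test"),
    (5, "fixed",     "production"),
]

by_state = Counter(state for _, state, _ in defects)
by_phase = Counter(phase for _, _, phase in defects)

print("defects by state:", dict(by_state))  # e.g. {'fixed': 3, 'open': 1, ...}
print("defects by phase:", dict(by_phase))  # production counts are escapes
```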
