Software quality is an abstract concept, so abstract that standards bodies develop models to understand it better, e.g., ISO 25010. Quality models help us crystallize and understand quality and quality requirements, both functional and non-functional, with the goal of evaluating them. The goal of testing is to determine whether these requirements are met. In the course of testing, we find defects: instances where the software does not meet its requirements. Hence, in the area of software testing metrics, there has been abundant work on analyzing defects.
With respect to analyzing defects, there are many flow charts detailing how defects move back and forth between development and QA as their state changes (open, fixed, re-opened, etc.). There are also numerous software applications (defect tracking systems and defect management systems) that help us track defects during the different phases of development and after release. However, these activities are rarely connected to metrics in a way that is easy to analyze. Instead, many defect metrics are listed with definitions and calculations, but there is limited direction on when and where to use them and how to benefit from them.
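The state changes mentioned above form a small state machine. The following sketch illustrates one way a tracking tool might enforce legal transitions; the state names and allowed transitions here are hypothetical, since each defect tracking system defines its own workflow.

```python
from enum import Enum, auto

class DefectState(Enum):
    OPEN = auto()
    FIXED = auto()
    REOPENED = auto()
    CLOSED = auto()

# Hypothetical transition table for a typical defect workflow;
# real defect management systems define their own state models.
TRANSITIONS = {
    DefectState.OPEN: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.REOPENED, DefectState.CLOSED},
    DefectState.REOPENED: {DefectState.FIXED},
    DefectState.CLOSED: set(),
}

def transition(current: DefectState, target: DefectState) -> DefectState:
    """Return the new state, rejecting moves the workflow does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Making the transition table explicit is what allows metrics to attach to the flow: for example, counting how often a defect passes through REOPENED is itself a quality signal.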
With that in mind, we have been working to analyze defects so that testing can be as effective as possible toward the goal of improving software quality. To do this, we combine an understanding of defects and their flow with metrics that can measure the activities within that flow. To begin, we charted defects and their flow and realized that we needed more than a mind map, and more than simple arrows and rectangles, to express our meaning and provide a sound foundation for analysis. So we used an abbreviated form of UML in order to attach attributes, such as metrics, to the different entity types. Our defect flow chart, covering the period prior to software release, is shown below.
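The idea of assigning metrics as attributes of entity types can be sketched in code as well as in UML. The entity and metric names below are illustrative assumptions, not the authors' actual model: a defect entity carries timestamps and counters from which flow metrics are derived.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Defect:
    """A defect entity with attributes that metrics can be computed from.

    The specific attributes here (opened, fixed, reopen_count) are
    hypothetical examples of the kind of data a defect flow chart
    might associate with an entity type.
    """
    opened: datetime
    fixed: Optional[datetime] = None
    reopen_count: int = 0

    def time_to_fix_hours(self) -> Optional[float]:
        # A derived metric: elapsed time from opening to fixing.
        if self.fixed is None:
            return None
        return (self.fixed - self.opened).total_seconds() / 3600
```

Treating metrics as derived attributes of entities, rather than as a separate list of formulas, is what connects the measurement back to the point in the flow where it is meaningful.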