As mentioned in a previous post on End User Perspective of Quality, quality means much more than conformance to requirements, and the end user's perspective on quality, sometimes called the User Experience (UX), is becoming more and more important. Measuring UX is a tough task, but it's important in any attempt to quantify end user quality, and quality in general. Many give up, thinking there is just too much subjectivity. I think measuring UX can be approached from another viewpoint. UX depends on many factors: not just the software interface and its 'wow factor', but the entire experience of interacting with the company. An end user's overall UX is therefore also influenced when they call your help desk or technical support. Some folks refer to this as the front line in dealing with the customer/end user. Current metrics for measuring service desk performance are rarely tied to software quality.
XBOSoft kicked off its Knowledge Bites series on March 18. This series is designed to take a deep dive into specific topics quarterly. This quarter, the 3-part series partnered with Shilpa Dadhwal, CEO and Founder of SQE Labs. The focus of these sessions was how software quality and the right metrics can drive business value. She presented a framework for understanding software quality and software quality metrics. Specifically, she discussed who owns different measures, how to measure the attributes accurately, and how to tie it all together so that organizations can leverage all the metrics effectively. All easier said [...]
This is the last batch of questions from the webinar, a precursor to the session I'll be doing on software quality measurement for the Quest 2014 conference. These questions focused on something highly relevant to my research: a "software quality scorecard".
Many agile metrics are related to velocity, since agile is supposed to deliver product faster. That makes sense: you want to measure and manage what it is you want to achieve, speed. Perhaps you think less documentation, no need for a test plan, etc. leads to speed! However, velocity should be only one of several agile software quality metrics, and focusing solely on velocity can have detrimental effects. After all, if you run at full speed constantly, the body breaks down. Many forget that one of the keywords in the agile manifesto is 'sustainable'.
In an attempt to get to all the questions from the webinar sponsored by QAI on software quality and metrics, in preparation for the Quest 2014 conference, I'm attacking them a few at a time. How do you balance (or minimize) the effort involved in collecting and summarizing metrics against the impact on the project work being done?
Still trying to get to all the questions from the webinar, a precursor to the session I'll be doing on software quality measurement for the Quest 2014 conference, I'm attacking them a few at a time. We also recorded the webinar, so you may want to view it here. What is the general improvement (cost %, time % of overall budget/duration of project) seen across the industry from implementing a measurements & metrics framework?
In a recent article I published at Software Test Professionals, one comment I received regarded Table 1, which indicates that different people are likely to be interested in different things with regard to software quality and test metrics; the commenter noted that many stakeholders were not included in the table. What other stakeholders would you include? Another comment, regarding my list of "why metrics/measures fail", stated that since it was a listing of common reasons for failure, the implication is that there are other reasons. Certainly there are many reasons why metrics programs fail.
In the recent webinar "Software Quality Metrics - Do's and Don'ts", with ASTQB, we ran a few polls to get an idea of how software quality professionals are really using, or not using, software quality metrics. It turns out that:
Here we go again... even more valuable information from Jay Philips and Mike Lyles, two of the panelists from December's lively discussion about software metrics entitled The Good, the Bad and the Metrics. Test metrics: love 'em or hate 'em, everyone has an opinion about them... some even have questions about 'em. Q. What is the purpose of metrics and how can they be done 'well'?
For those of you who were wrapped up in holiday madness and may have missed the December webinar The Good, the Bad and the Metrics featuring Jay Philips, Mike Lyles, and Rex Black, fear not: we've compiled (OK, we nagged a little) the answers from two of our very busy but also gracious panelists from this very interesting discussion. For those of you who joined us for the live presentation, thank you for your patience. Here is Part 1 of the questions/comments that attendees sent in that were not previously addressed. There is so much valuable information to share that it will be split into two blog posts.
Agile testing metrics are no different than metrics for other development methodologies, except for timing. Using agile, we want progress to be highly visible and problems (and potential problems) to be known immediately. We can’t spend days calculating metrics when we want to know what’s going on for that day. Collecting data, then calculating and reporting, should be easy and valuable: good ROI. We also need our metrics to be directly connected with goals and the questions our stakeholders want answered. Otherwise, they’ll walk by the whiteboard and won’t care. Since one of the main objectives of stakeholders with regard to agile is to deliver high-quality code that works and can be considered deliverable product, Continuous Integration (CI) is an integral component of agile. So, our agile testing metrics should be somewhat connected to the CI.
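To make the CI connection concrete, here is a minimal sketch of computing two CI-connected metrics (build pass rate and test failure rate) from build records. The records below are entirely hypothetical; in practice they would come from your CI server's API.

```python
# Minimal sketch: summarizing CI build records into agile testing metrics.
# The build records are hypothetical stand-ins for data from a CI server.

def ci_metrics(builds):
    """Compute simple CI-connected quality metrics from build records."""
    total = len(builds)
    green = sum(1 for b in builds if b["passed"])
    tests_run = sum(b["tests_run"] for b in builds)
    tests_failed = sum(b["tests_failed"] for b in builds)
    return {
        "build_pass_rate": green / total,          # fraction of green builds
        "test_failure_rate": tests_failed / tests_run,
    }

builds = [
    {"passed": True,  "tests_run": 120, "tests_failed": 0},
    {"passed": False, "tests_run": 120, "tests_failed": 6},
    {"passed": True,  "tests_run": 124, "tests_failed": 0},
    {"passed": True,  "tests_run": 130, "tests_failed": 0},
]

m = ci_metrics(builds)
print(m["build_pass_rate"])   # 0.75
```

Because these figures come straight from each CI run, they can go up on the whiteboard the same day, which is the point of the timing argument above.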
Have you implemented a software quality metrics program? If so, then you know it's not easy and that metrics programs often fail. Why? Software Quality Metrics - Top Reasons for Failure:
- Metrics don’t measure the right thing
- The metric becomes the goal
- Metrics don’t have consistent context, so they are unreliable
- Measurements used are not consistent
- Metrics don’t answer the questions you had to begin with
- Metrics have no audience
- Measurements are too hard to get
- Metrics have no indicators, so they cannot be evaluated
October 25 at 10 AM EST we are holding a webinar in which we explain how XBOSoft’s Quality Process Assessment (QPA) can help you prevent defects throughout your software product's entire life cycle, above and beyond development and testing. We show how to examine your software product's life cycle and how each phase impacts your customers' view of the product's quality. This includes phases such as:
- Sales & Marketing / Product or Project Sponsor (intra-company software)
- Product Management (Requirements)
- Development
- Quality Assurance
- Customer Service/Tech Support
In this webinar, consultants from XBOSoft will share with you their first-hand experience. You'll learn:
- [...]
In a nutshell, establishing and implementing a measurement program for software quality requires more than software-testing-related metrics. Testing metrics (like defect statistics) are detective-type metrics that come very late in the product life cycle, and they cover only one part of the organization. What about metrics that could prevent the defects in the first place?
In my last post, I mentioned that metrics can be used for different purposes. One of them is tracking progress. Usually when we track progress, it is related to time, or some other unit that indicates a schedule. Most often we use progress metrics to track planned versus actual over time. What we track depends on our role. If we are financial people, we track money spent. But for software quality assurance, we want to track the progress of things such as defects, test cases, man-hours, etc.; basically anything related to results, or the effort spent to get those results. For instance, here are a few:
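As a small illustration of planned-versus-actual tracking, the sketch below compares cumulative planned test-case execution against actuals, week by week. The weekly figures are invented for illustration, not real project data.

```python
# A minimal sketch of planned-versus-actual progress tracking for test
# execution. The weekly figures are illustrative, not real project data.

planned = [50, 100, 150, 200]   # cumulative test cases planned, by week
actual  = [45,  90, 160, 195]   # cumulative test cases actually executed

rows = []
for week, (p, a) in enumerate(zip(planned, actual), start=1):
    rows.append((week, p, a, a - p))          # variance = actual - planned
    print(f"Week {week}: planned={p} actual={a} variance={a - p:+d} "
          f"({100 * a / p:.0f}% of plan)")
```

The same pattern works for any of the items listed above (defects closed, man-hours spent): substitute the planned and actual series and the variance column tells you whether you are ahead of or behind schedule.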
Last time, I blogged on some general flows of defects and 'buckets' of measurements that I thought were possible. Then I thought I needed to take a step back and think at a higher level about software testing measurement in general. Software testing measurements can help us in a few areas, including:
- Planning: cost estimation, training and resource planning, budgeting and scheduling
- Controlling: tracking testing activities for progress and compliance to the plan
- Improving: benchmarking processes, identifying where efforts should be focused, and then measuring the effects
There are many other areas where measurement can help us.
Some of the research that we've been doing on web application quality, quality evaluation, and software quality metrics has been chosen for presentation at the Quality in Web and Mobile Engineering Track. The title of the paper I'll be presenting is: Using Web Quality Models and Questionnaires for Web Applications Understanding and Evaluation. Fortunately, our research was also selected as one of the best papers of the conference. So I'm lucky: I get to present the paper in the conference's main track as well! So if you have a chance to come see me in Lisbon on Sept 3-4, 2012, I'll take you out for dinner!
Software quality and User Experience
In summary, the paper presents
Today, I sat down and started thinking about defects and how to analyze them. There are many standard defect metrics, such as:
- Defects Found/Resolved
- Defect Severity
- Defects Found/Test Cases Executed
The list goes on. I'm not really a 'mindmapper', but I decided to think of defects in terms of flows: where they come from, what influences them, and what happens to them. I started doing a sketch, and this is what I came up with.
Flowchart of defect sources, inputs, and outputs
Next I'll start analyzing each oval and see where that leads. If you have any input [...]
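The standard metrics listed above are straightforward to compute once defect data is in hand. Here is an illustrative sketch using a small, hypothetical in-memory defect log; a real implementation would pull from your defect tracker.

```python
# Illustrative computation of standard defect metrics from a hypothetical
# defect log. The records and counts are invented for demonstration.

defects = [
    {"id": 1, "severity": "critical", "resolved": True},
    {"id": 2, "severity": "major",    "resolved": True},
    {"id": 3, "severity": "major",    "resolved": False},
    {"id": 4, "severity": "minor",    "resolved": False},
]
test_cases_executed = 200

found = len(defects)
resolved = sum(1 for d in defects if d["resolved"])

# Defect severity distribution
by_severity = {}
for d in defects:
    by_severity[d["severity"]] = by_severity.get(d["severity"], 0) + 1

density = found / test_cases_executed   # defects found per test case executed

print(found, resolved)   # 4 2
print(by_severity)       # {'critical': 1, 'major': 2, 'minor': 1}
print(density)           # 0.02
```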
The concept of user experience has been given much importance in contemporary human-computer interaction research. However, modeling user experience requires quality evaluation schemes that are not restricted to the traditional concepts of usability.
If your users cannot figure out your application in less than 30 seconds, they’re gone. The UX and usability of your applications are your competitive advantage. On May 31 at 11:00 AM Eastern, UX and usability expert Phil Lew will discuss how to set up and implement the right measurements to continuously deliver the applications your users love to use. Together with RNA Health, Phil will present a case study of a mobile health care application. See you there!
We've been testing mobile applications in the field as well as in the lab for one of the largest mobile device manufacturers in the world. We test in the field, in different countries, to determine how different carrier networks, applications, and tasks impact the user experience. When doing mobile performance testing, there are any number of factors that can be measured and tracked. They all vary depending on the device, its configuration, the network, the task being executed (what software and what action), etc.
How do we define and measure UX for mobile? I discussed usability and UX modeling and measurement models. One of the models was based on logging user activity and determining whether users completed tasks. Whether a user can complete a task easily, or learn how to complete a task easily, is critical. Learning is especially important for mobile applications because
Any organization that develops software strives to improve the quality of its products. To do this first requires an understanding of the quality of the current product version. Then, by iteratively making changes, the software can be improved with subsequent versions.
When most people think about software quality metrics, they think about defect-related metrics such as: total defects, defects at delivery, mean time to defect, defect burn rate, mean time to failure, mean time between failures, etc. While examining defects is a start to developing quality metrics, it’s just a start.
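Two of the metrics named above, mean time between failures (MTBF) and defect burn rate, reduce to simple arithmetic once you have timestamps. The sketch below uses invented numbers purely to show the calculation.

```python
# Hedged sketch: computing MTBF and a simple defect burn rate from
# illustrative figures; these are not real project data.

failure_hours = [12.0, 30.0, 75.0, 96.0]   # elapsed hours at each failure

# MTBF: average gap between consecutive failures
gaps = [b - a for a, b in zip(failure_hours, failure_hours[1:])]
mtbf = sum(gaps) / len(gaps)
print(mtbf)        # 28.0 hours

# Defect burn rate: average defects resolved per day over a sprint
resolved_per_day = [3, 5, 2, 4, 6]
burn_rate = sum(resolved_per_day) / len(resolved_per_day)
print(burn_rate)   # 4.0 defects/day
```

The point of the paragraph stands, though: these numbers describe defects after the fact, and a fuller quality metrics program looks well beyond them.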