The software industry is still quite young, yet as it matures, we’ve seen many development trends, technologies and tools come and go. As you know, one methodology that has become popular of late is Agile. We all know it as an adjective; we all desire to be agile. Who wants to be slow and clumsy? Those who gathered and put together the Agile Manifesto certainly chose a good name for “it”, whatever “it” is. That’s the subject of this article: the definition of Agile, or what is Agile? Agile is capitalized from here on, because we’re talking about the noun, the development methodology, rather than the adjective.
At the beginning of August, our CEO Philip Lew spoke at the SofTec Asia 2017 conference in Kuala Lumpur, Malaysia. With the theme "Testing As A Service," speakers tackled the topic with expertise in all aspects of improving software quality. Here is a visual look at what some of the speakers focused on in their presentations.
It’s a long plane ride to Kuala Lumpur, Malaysia. That’s where SofTec Asia 2017 was held the first few days of August. But, with all that I learned — both through formal presentations and informal discussions with fellow presenters — it was well worth the trip.
In this post, we’ll cover Habit #3: Sustain an Improvement Frame of Mind. At XBOSoft, we use these 7 “habits” to guide our everyday work. They’re all important, but I think Habit #3 encompasses the others. The key word here is “sustain.” One of the key tenets of agile is sustainability. If you can’t sustain an effort at a constant pace, then you’ll soon fall behind and lose the race. And what better way to understand your pace than metrics?
Cloud communications software provider Mitel was experiencing various difficulties with new software releases due to lack of sufficient testers on staff. XBOSoft was hired to provide the needed manpower. Over time, XBOSoft's range of services has expanded, with ever greater satisfaction on the part of the customer.
The complexity of account reconciliation software places special demands on the testers of account reconciliation software. The primary challenge: There are two types of accounts to be reconciled, each with its own unique characteristics.
Elvis. Celine Dion. Cher. Penn & Teller. These legends have all appeared on the Las Vegas Strip. And soon, our own XBOSoft CEO Phil Lew joins them when he presents at Better Software West at Caesars Palace, June 4th through 9th.
Agile demands continuous testing across the spectrum of the development process. Testing by people alone, no matter how skilled they are, doesn’t move quickly enough or kick the tires often enough to produce excellent software.
I read Stephen Covey’s famous book The 7 Habits of Highly Effective People when it first came out in 1989, almost 30 years ago. I was just starting my career then and was able to apply many of the principles to not just my work life, but life in general. When I was invited to give the closing keynote at TestIstanbul, in Istanbul, Turkey, on April 25, I knew giving a tailored version of my 7 Habits of Highly Effective Agile Testers talk would work well for this audience.
As Phil Lew told his TestIstanbul audience, “These seven habits have evolved over the years through working with clients that had software quality issues after implementing agile development. Some people think agile development will solve all their problems." He stresses, "Well, it can solve some, but if poorly implemented, it can cause many others. I’ve been able to boil it down to these 7 habits as a way to continually ‘sharpen the saw’ and root out problems one by one.”
This is the second blog post referencing our recent “Performance Testing Considerations Using JMeter and Google Analytics” webinar. We received quite a few questions we couldn't answer in full during the webinar, so today we're addressing the JMeter-specific questions.
QA expert and author Johanna Rothman asks in a recent article whether managers in agile testing should "scale" agile to help multiple teams deliver products. Her answer: an emphatic "No." Why?
"Scaling process leads to bloat. Instead of scaling process, scale collaboration."
A few short weeks ago at the PSQT conference, I had the fortune of sitting in on Tom Cagley's session, titled Impact of Risk Management on Software Quality. One of the components that Tom mentioned as part of the Agile Risk Taxonomy is Agile Organizational Risk or People Risk. He describes it as the "impact of an environment populated by people." Some may think that this is the most nebulous risk, when compared to business or technical risk. I think it's probably one of the most important but, unfortunately, it's often swept under the rug.
Reducing technical debt, and how to go about it, was a hot topic of discussion with one of our clients recently. First we discussed the metaphor and how the debt metaphor really fits software projects as well as other domains. The metaphor, created by Ward Cunningham back in 1992, was meant to help explain to non-technical stakeholders that there was a cost to not doing things the right way; that doing things the fastest way may have costs later on. But as in the financial domain, debt is not always bad. Sometimes we want to have something now and pay it back later. That's the concept of principal and interest. The questions are: how much are the principal and interest, and will the interest bury me in the long run? Just like financial debt, which can be a silent killer and can either cause problems or hide them, technical debt has the same ramifications:
- Tasks that are not done but run the risk of causing future problems if not completed.
- Aspects of the software that have been done incorrectly (usually the quicker or easier way), but there is no time to do them right (now) or to fix them.
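The principal-and-interest framing can be made concrete with a toy calculation. The figures below are hypothetical, and the function is ours, purely for illustration: the deferred "proper fix" is the principal, and the recurring maintenance drag of the workaround is the interest.

```python
# Toy illustration of the principal-and-interest metaphor for technical debt.
# Hypothetical figures: a shortcut defers some effort today ("principal") but
# adds a small recurring maintenance cost every sprint ("interest").

def breakeven_sprint(principal_days, interest_per_sprint):
    """Sprint at which accumulated interest exceeds the deferred principal."""
    paid = 0.0
    sprint = 0
    while paid <= principal_days:
        sprint += 1
        paid += interest_per_sprint
    return sprint

# Deferring a 3-day proper fix, at 0.5 day of drag per sprint:
print(breakeven_sprint(3.0, 0.5))  # 7 -- by sprint 7 the shortcut has cost more
```

The point of the sketch is not the exact numbers but the shape of the argument: unless the debt is repaid, the interest alone eventually exceeds what the shortcut saved.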
In reducing technical debt, first we [...]
One of the main methodologies in agile is Extreme Programming, where programming is done in pairs with extensive peer review. Extreme Testing, which uses pair testing (not to be confused with pairwise testing), is a cousin of Extreme Programming that we developed at XBOSoft from the testing point of view. In our own software testing practice, we have used Extreme Testing methods to increase test effectiveness and as a great way to get testers to collaborate in a purposeful way. Within Extreme Testing, we have a few non-mutually-exclusive practices:
Everyone talks about how great Big Data is and how it will revolutionize our world. They also talk about Big Data risks in terms of security and privacy. I know I often discuss them in my tutorials and talks (most recently at Better Software East, on Privacy, Security, and Trust in the Mobile Age), but those may be the least of our worries when it comes to Big Data risks. The real danger of Big Data, which I was able to glean from a recent keynote by Malcolm Gladwell at the Blackline User Conference #InTheBlack15, was that acquiring masses of data does not really help us make better decisions.
We had a question come in during one of our webinars on test cases, and felt it deserved more than a simple answer within the webinar itself. The question was, "In reference to writing unit tests for requirements: requirements should be testable, but why not write the test when you write the requirement? Is that overbearing?"
I recently had a client ask us to assess their situation and then give them recommendations on software QA best practices that they should consider. I know that many of my colleagues would say there is no such thing as best practices, and I agree. But I think there should be a new term, Best Principles, or, if you don’t like the word "best," perhaps Recommended Principles. To explain a little better, let me briefly draw an analogy.
After my talk on Implementing Pairwise Testing at the Atlanta Quality Assurance Association last week, I had a chance to sit down and think about the discussion and the audience:
- Great interest in tools. Everyone wanted to know: how can I do it? Of course we all do, and I hope that I gave them plenty of references and sources for this.
- Limited knowledge of a much larger field called combinatorial testing (of which pairwise testing is just a subset)...
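To make the pairwise idea concrete, here is a small Python sketch, not from the talk, that checks whether a candidate test suite exercises every pair of parameter values. The parameters and test suite are hypothetical.

```python
from itertools import combinations, product

def uncovered_pairs(parameters, suite):
    """Return parameter-value pairs not exercised by any test in `suite`.

    `parameters` maps parameter name -> list of values; each test in
    `suite` maps parameter name -> chosen value.
    """
    names = sorted(parameters)
    # Every pair of values across every pair of parameters must be covered.
    required = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            required.add(((a, va), (b, vb)))
    # Collect the pairs each test actually exercises.
    covered = set()
    for test in suite:
        for a, b in combinations(names, 2):
            covered.add(((a, test[a]), (b, test[b])))
    return required - covered

# Hypothetical example: three tests cover all browser/OS pairs but one.
params = {"browser": ["chrome", "firefox"], "os": ["win", "mac"]}
suite = [
    {"browser": "chrome", "os": "win"},
    {"browser": "firefox", "os": "mac"},
    {"browser": "chrome", "os": "mac"},
]
print(uncovered_pairs(params, suite))  # {(('browser', 'firefox'), ('os', 'win'))}
```

With more than two parameters, this kind of check shows why pairwise suites can be much smaller than the full cross-product while still covering every value pair.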
Many agile proponents are applying agile methodologies not only to software development but also to business processes and corporate management. It's not uncommon for a management team to have 2-week sprints, where they sit down during a planning meeting and decide what will get done in the next 2 weeks. Does agile work for everything? While we seem to have this idea that Agile belongs to those who drafted the manifesto over a weekend, perhaps it is really part of something much bigger. A recent article on 'nowist' innovation mentions that traditional rules don't work anymore. Perhaps agile got its roots from Nowism, or vice versa. Some of the key concepts of nowism include:
- Power of Pull – pull resources from the network as needed instead of stocking resources.
Agile Metrics Webinar Question: What does technical debt include? Answer: The debt can be thought of as work that needs to be done before a particular job can be considered complete or “proper.” If the debt is not repaid, it will keep accumulating interest, making it harder to implement changes later on. Unaddressed technical debt increases software entropy and is usually a result of poor system design. Think of all the things about which you say “we’ll do that later.” That represents technical debt.
In our recent agile testing webinar, we had an outstanding question on Defect Removal Effectiveness (DRE). Query 1: For the DRE calculation, I believe the measure below is correct. Can you please confirm? DRE = defects found pre-release / (defects found pre-release + defects found post-release). Yes, your calculation is correct. Ours is actually the inverse and would be called Defect Escape Rate (which you want to keep low) rather than Defect Removal Effectiveness. Let’s do an example just to illustrate.
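As one way of illustrating the calculation, here is a minimal Python sketch with hypothetical defect counts; the function names are ours, not a standard API.

```python
def defect_removal_effectiveness(found_pre_release, found_post_release):
    """DRE = pre-release defects / (pre-release + post-release defects)."""
    return found_pre_release / (found_pre_release + found_post_release)

def defect_escape_rate(found_pre_release, found_post_release):
    """The inverse view: the share of defects that escaped to production."""
    return found_post_release / (found_pre_release + found_post_release)

# Hypothetical numbers: 90 defects caught before release, 10 found after.
print(defect_removal_effectiveness(90, 10))  # 0.9
print(defect_escape_rate(90, 10))            # 0.1
```

Note that the two measures always sum to 1: a 90% removal effectiveness is the same fact as a 10% escape rate, just framed in the direction you want to drive down.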
Implementing agile QA depends on the environment, the people, their skills and personalities, and various other factors, so we didn't title this 'best' practices but 'good' practices. This means that our goal here is to list some practices that deserve consideration when implementing agile QA. There should be an information-sharing system to document the latest information with regard to user stories, tasks, defects, and decisions made. There should be an issue tracking system (e.g., JIRA) to document processes and implement workflows. QA should understand the business background and objectives as much as possible in order to start testing from the user [...]
As I prepare for my talk in Kuala Lumpur at Softec 2014, I started thinking about our own projects at XBOSoft and the software defect metrics we use internally to see how we are doing for our clients. There are the usual ones, such as defect escape or leakage rates, defect fix time, and technical debt reduction (refactoring). But from a 'pure QA' point of view, and at XBOSoft in particular, we want to reduce the work for our clients while improving the quality of their software. Some of the key metrics we track include:
For defect verification, when developers are examining defects, they often complain that issues cannot be reproduced. There are many possible explanations and ways to avoid this. Let’s analyze some of the most common reasons.
Ensuring software quality requires much more than testing and should start much earlier in the development lifecycle. One activity that can help is cyclomatic complexity analysis. Simply put, higher numbers indicate more complexity and are “bad”, while lower numbers are “good”. Some modules and their functions may be inherently more complex to implement and therefore have higher cyclomatic complexity than other modules. Cyclomatic complexity gives us a sense of how hard code may be to test and maintain.
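As a rough illustration, the sketch below estimates cyclomatic complexity for Python code by counting branch points with the standard `ast` module. It is a simplification of McCabe's metric (for instance, it counts a whole boolean expression as one decision), not a production analyzer; dedicated tools do this properly.

```python
import ast

# Node types treated as decision points in this simplified sketch.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: one plus the number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

straight_line = "def f(x):\n    return x + 1\n"
branchy = (
    "def clamp(x, lo, hi):\n"
    "    if x < lo:\n"
    "        return lo\n"
    "    if x > hi:\n"
    "        return hi\n"
    "    return x\n"
)
print(cyclomatic_complexity(straight_line))  # 1
print(cyclomatic_complexity(branchy))        # 3
```

The number is also a floor on test effort: a function with complexity 3 needs at least three test cases to cover each independent path once.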
As a project manager at XBOSoft, I focus on satisfying our clients, which depends on how well I manage our testing team. Test team management requires that I communicate with team members every day. In order to function as an efficient team, I try my best to develop effective team communication. The five most common and effective ways we communicate are:
1. Team meetings
2. Email
3. IM tools
4. Team collaboration tools
5. Face-to-face/1-on-1 talks
Team Meetings: Meeting in a conference room with team members directly delivers information to all members at the same time, and it also helps to collect different ideas from different team members.
Have you implemented a software quality metrics program? If so, then you know it's not easy and that metrics programs often fail. Why? Software Quality Metrics: Top Reasons for Failure
- Metrics don’t measure the right thing
- The metric becomes the goal
- Metrics don’t have consistent context, so they are unreliable
- Measurements used are not consistent
- Metrics don’t answer the questions you had to begin with
- Metrics have no audience
- Measurements are too hard to get
- Metrics have no indicators, so you cannot evaluate them
Our clients often ask, “How and when can the real software testing activities start?” My answer has always been the same: “Once the Software Test Plan is DONE and it gets approved by you!” The Software Test Plan is one of the most important documents for a test project and is usually the first deliverable to our clients. Only after the test plan is approved will our testing team start real testing activities. From my experience, a Software Test Plan should contain the following information at a minimum:
- Test Scope
- Documentation
- Test Methodology
- Schedule
- Test Entry/Exit Criteria
- Test Tools
Defect tracking workflow is the life cycle of a defect. It describes the states of the defect from when it is created to when it is closed. There are two main defect tracking workflow models:
1. Identify the defect ONLY with “State”
2. Identify the defect with both “State” and “Resolution”
You can set up a defect tracking workflow system with either of them, but which one is best for you? Let’s set up a defect tracking workflow with the most common defect states:
- New
- Open
- Reopen
- Fixed
- Invalid
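A "State"-only workflow like this can be sketched as a simple transition table. The allowed transitions below are illustrative assumptions for the sketch, not a prescription:

```python
# Minimal sketch of a "State"-only defect workflow as a transition table.
# The state names follow the post; which transitions are legal is an
# assumption made for illustration.
TRANSITIONS = {
    "New":     {"Open", "Invalid"},
    "Open":    {"Fixed", "Invalid"},
    "Fixed":   {"Reopen", "Closed"},
    "Reopen":  {"Fixed"},
    "Invalid": {"Reopen", "Closed"},
    "Closed":  set(),
}

def move(defect, new_state):
    """Advance a defect to `new_state`, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[defect["state"]]:
        raise ValueError(f"cannot go from {defect['state']} to {new_state}")
    defect["state"] = new_state
    return defect

bug = {"id": 101, "state": "New"}
move(bug, "Open")
move(bug, "Fixed")
print(bug["state"])  # Fixed
```

Encoding the workflow as data rather than scattered if-statements makes it easy to review with the team and to change when the process changes.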
We are happy to announce that on March 20 at 10 AM EST we are holding an agile testing webinar with two veterans in the field: Jared Richardson and Philip Lew. Jared and Phil will discuss the changes needed for QA and testing when working in an agile environment. Agile testing topics covered include:
- Agile development trends
- How to test 'agile'
- How to implement scrum
- Typical scrum testing bottlenecks and how to solve them
- Testing agile requirements
- Agile test metrics
About the speakers: Jared Richardson is a software consultant. [...]
Reducing time spent on testing is all about effectiveness, not only in the testing process, but in the entire development process. The further upstream defects are found and eliminated, the fewer defects are found, or remain to be found, in testing, and therefore, by deduction, the less time is spent in testing. In terms of finding defects further upstream, most organizations don’t think of the term ‘defect’ until there is a final product to find fault in, but the term can be used for any mistake in the process that would lead to a defect in the end product. This means [...]
Improve Software Quality - Find Out How in This Webinar. October 25, 2012 - We held a webinar on Quality Process Assessment (QPA), our software quality improvement service that considers how to improve software quality from the viewpoint of the entire life cycle of your software product, including phases that are often overlooked such as sales, marketing, and customer service. Software quality is not a spice you add at the end. It must be baked in from the beginning! Enjoy the recording! Jan
On October 25 at 10 AM EST we are holding a webinar in which we explain how XBOSoft’s Quality Process Assessment (QPA) can help you prevent defects throughout your software product’s entire life cycle, above and beyond development and testing. We show how to examine your software product's life cycle and how each phase impacts your customers' view of the product's quality. This includes phases such as:
- Sales & Marketing / Product-Project Sponsor (intra-company software)
- Product Management (Requirements)
- Development
- Quality Assurance
- Customer Service/Tech Support
In this webinar, consultants from XBOSoft will share with you their first-hand experience. You'll learn: [...]
Software quality is an abstract concept. So abstract, in fact, that standards bodies develop models, e.g. ISO 25010, to understand it better. Quality models help us crystallize and understand quality and quality requirements, both functional and non-functional, with the goal of evaluating them. The goal of testing is to determine whether these requirements are met. During the course of testing, we find defects: instances where the software does not meet requirements. Hence, in the area of software testing metrics, there has been abundant work in analyzing defects. With respect to analyzing defects, there are many flow charts detailing how defects flow back and forth to QA with changes in state (fixed, open, re-open, etc.). There are also numerous software applications (defect tracking and defect management systems) that help us track defects at the different phases of development and after release. However, these activities are rarely connected to metrics in a way that is easy to analyze. Rather, there are many defect metrics, often listed out with definitions and calculations [...]
In a nutshell, establishing and implementing a measurement program for software quality needs more than software-testing-related metrics. Testing metrics (like defect statistics) are detective-type metrics that come very late in the product life cycle, and they cover only one part of the organization. What about metrics that could prevent the defects to begin with?
In my last post, I mentioned that metrics can be used for different purposes. One of them was tracking progress. Usually when we track progress, it is related to time, or some other unit that indicates a schedule. Most often we use progress metrics to track planned versus actual over time. What we track depends on our role. If we are financial people, then we track money spent. But for software quality assurance, we want to track the progress of things such as defects, test cases, and man-hours: basically, anything that is related to results or the effort spent to get those results. For instance, here are a few:
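As one example of planned-versus-actual tracking, here is a tiny Python sketch with hypothetical cumulative test-case counts; negative variance means the team is behind plan.

```python
def progress_variance(planned, actual):
    """Per-period gap between cumulative actual and planned progress.

    Negative values mean we are behind plan; positive means ahead.
    """
    return [a - p for p, a in zip(planned, actual)]

# Hypothetical sprint: cumulative test cases executed, by day.
planned = [20, 40, 60, 80, 100]
actual = [18, 35, 61, 78, 95]
print(progress_variance(planned, actual))  # [-2, -5, 1, -2, -5]
```

The same shape works for any of the quantities above: swap test cases for defects closed or man-hours spent and the planned-versus-actual comparison is identical.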
Last time, I blogged on some general flows of defects and 'buckets' of measurements that I thought were possible. Then I thought that I needed to take a step back and think at a higher level about software testing measurement in general. Software testing measurements can help us in a few areas, including:
- Planning: cost estimation, training and resource planning, budgeting and scheduling
- Controlling: tracking testing activities for progress and compliance with the plan
- Improving: benchmarking processes, identifying where efforts should be focused, and then measuring the effects
There are many other areas where measurement can help us [...]
Today, I sat down and started thinking about defects and how to analyze them. There are many standard defect metrics, such as:
- Defects Found/Resolved
- Defect Severity
- Defects Found/Test Cases Executed
The list goes on. I'm not really a 'mindmapper', but I decided to think of defects in terms of flows: where they come from, what influences them, and what happens to them. I started doing a sketch and this is what I came up with.
[Flowchart of defect sources, inputs, and outputs]
Next I'll start analyzing each oval and see where that leads. If you have any input [...]
Last week Alan and I were at SQE's Better Software / Agile Development conference in Las Vegas. We had a great time at the conference and in the city. At the conference, Alan spoke on software quality and how to improve it using a Quality Process Assessment (QPA). With a QPA we focus on preventing defects throughout the full software life cycle, and we also evaluate senior management, sales, and customer service as root causes of defects. More info on a QPA here.
Defect reporting is one of the most important QA activities in software testing. Sure, we devise test strategies, plans, and test cases, but defects are the main ‘product’ we produce; they are what people can see as the result of our work. But what constitutes a ‘good’ defect report, and how are its elements defined? I’ve been working in QA for 7 years now, so I thought I’d share some of our best practices:
Summary/Subject/Title: a short sentence describing the issue. What is the issue? Where does it happen? Don’t put too many words here, since many Defect Tracking Systems (DTS) have a text-length limitation. Example: Error occurs after clicking the "Load" button on the home page.
The software world is changing by the minute. To test software that is constantly changing is a daunting task, especially when market share and profitability depend on quick delivery. In the rush to embark on a process improvement effort such as CMMI, TPI, TMMi, etc., it is easy to forget the essentials. Here are 3 best practices to keep in mind:
When most people think about software quality metrics, they think about defect related metrics such as: Total defects, Defects at delivery, Mean time to defect, Defect burn rate, Mean time to failure, Mean time between failures, etc. While examining defects is a start to developing quality metrics, it’s just a start.
Nowadays, many companies consider integrating TPI into their testing processes. However, the questions of how much a TPI effort costs, and whether those costs will exceed its benefits, have been raised by many practitioners. This article discusses and elaborates on TPI’s benefits and costs, so that you can decide whether TPI is justified. Most of the time, a TPI effort is triggered by one or more of the following three testing issues:
- Testing takes too long
- Testing is too expensive
- False expectations between developers and testers
The primary goal of TPI is to improve the product’s quality through improving the testing [...]