Why outsource anything, be it a service or production? Why not keep every aspect of the business in house? Two hundred years ago, that approach probably made sense. Today, in the modern economy, globalization and technological advancements have made outsourcing a strategic imperative for all businesses. One question is on the plate of every CEO, VP, or even manager with skin in the game: "What need or challenge to our business can be better handled through outsourcing?" This blog addresses seven important reasons why outsourcing software QA and testing makes sense.
These days, many companies are developing on the .NET platform with Visual Studio. What some don't realize is that together with TFS (Team Foundation Server) and MTM (Microsoft Test Manager), they can manage the entire testing life cycle seamlessly, from task assignment to test case design, test execution, and result tracking. This blog gives a brief introduction on how to make this happen, covering both manual and automated test activities. The precondition: TFS is set up and Visual Studio is installed. Here, we use TFS 2012 and Visual Studio 2012 for our example. For setting up TFS and Visual Studio, please see Microsoft's online documentation.
At the beginning of August, our CEO Philip Lew spoke at the Softec Asia 2017 conference in Kuala Lumpur, Malaysia. With the theme "Testing as a Service," speakers tackled the topic with knowledge spanning all aspects of improving software quality. Here is a visual look at what some of the speakers focused on in their presentations.
Cloud communications software provider Mitel was experiencing various difficulties with new software releases due to a lack of sufficient testers on staff. XBOSoft was hired to provide the needed manpower. Over time, XBOSoft's range of services has expanded, with ever greater satisfaction on the part of the customer.
In this video, XBOSoft's CEO, Phil Lew, explains a key agile testing tip, Habit #2 of 7, and summarizes points made in our blog post, as well as his recent keynote at TestIstanbul 2017. In treating the user as royalty, you really need to focus on getting user stories right, prioritizing all your activities around the end user, and understanding the end user deeply in their own context. Don't forget, they are why you exist. For more in the series, check out Habit #1 too.
The complexity of account reconciliation software places special demands on its testers. The primary challenge: there are two types of accounts to be reconciled, each with its own unique characteristics.
If we really want to increase our testing capabilities, then we should be analyzing our ways of thinking and identifying the strengths and weaknesses in how we think. We all have cognitive biases we're not consciously aware of. Why do you find certain types of defects and not others? Why are your estimates always low, forcing you to work overtime? How do our biases take shape in the agile testing process? Is it different from what happens in more traditional processes?
Cognitive biases such as inattentional blindness and anchoring can cause testers to make missteps during the testing process that can have disastrous effects.
Software Testing for Airlines – Reservation and Ticketing Systems. More and more people reserve airline tickets over the internet, so travel agencies and airline companies must focus on their websites, as website quality represents the company and the quality of its services. From our experience, there are two important factors in airline software testing for ensuring overall quality: system integration and business rules implementation. Systems integration testing: travel booking services are usually based on one of the following: a Global Distribution System (GDS), also sometimes called a Computer Reservation System (CRS), or an Internet Booking Engine (IBE) [...]
XBOSoft client AKVA rates our services very highly. In this Q & A with Eivind Brendryen, Product Manager Farming at AKVA group ASA, he speaks about the quality of the services and the value the company places on the AKVA/XBOSoft partnership.
Elvis. Celine Dion. Cher. Penn & Teller. These legends have all appeared on the Las Vegas Strip. And soon, our own XBOSoft CEO Phil Lew joins them when he presents at Better Software West at Caesars Palace, June 4th through 9th.
Last week, I presented at the Atlanta Quality Assurance Association — AQAA — on “Managing Agile Software Projects Under Uncertainty.” In the beginning of the talk, we covered the reasons for agile project failure and took a vote as to why people thought or had the experience of failure in agile. The results were quite interesting...
Did you watch the Super Bowl? There was an 84 Lumber commercial showing a mother and daughter risking their lives trying to get to the United States from Mexico. It showed their long and arduous journey through the desert, only for them to face a giant wall separating them from America. Particularly poignant after what the now-president said about Mexican immigrants while campaigning, the ad was purposeful in showing one type of extremely arduous immigrant experience. It was gut-wrenching in a way that made you want to see what happened next to the mother and daughter, which drove traffic to the 84 Lumber site where the rest of the video could be seen. The only problem is that they didn't predict they would be so successful, and all the incoming traffic crashed their site.
When XBOSoft founder and CEO Philip Lew talks about the importance of collaborative testing with the company’s clients, he’s speaking not only about testing, but also about something that he practices outside the business arena.
One powerful example: His volunteer work with the Pacific Northwest Software Quality Conference. Known by its members as PNSQC, the organization has been hosting conferences devoted to quality and testing since 1983. Phil has been among its most active volunteers in recent years.
Phil Lew has served on the board and the social media committee, made presentations, and identified keynote speakers to support the program committee’s efforts. He’s made videos and done webinars to promote the conference, among his many other activities to support PNSQC.
How many of you know what the "blue screen" is? Or Ctrl-alt-delete? We used to test functionality ("Does it work as intended?"), but as software has become more complex and distributed, we're faced with different software quality challenges. This is getting even more complicated with what's now called The Internet of Things, or IoT. With IoT, software and hardware work together more than ever. How do you diagnose the problem? Where is it? In my recent keynote in San Diego at the Practical Software Testing Conference, I had the opportunity to present some of the most critical challenges facing us as software engineers.
Recently, one of our clients asked me to come and give a talk on software testing trends. They wanted to know 'best practices', but of course we all know there is no such thing. Despite that, I whipped up a presentation on what I thought were 'directions to go' in terms of where software testing is headed and the direction that companies should move. As I put together the talk, I started to think about how the boundaries between hardware and software are falling. Whatever you choose to call this disruption, the Internet of Things, the connected world, etc., it means that products are more complicated. So although software is simplifying our lives, it also takes more connectivity and integration to make products work. In my recent keynote at PSQT, I discussed how the Internet of Things is impacting QA and how we, as testers, need to adapt. Extracting some of the key points, software testing trends are mostly pointing in these directions:
- Software is everywhere. It's working its way into almost all industries. Products that were once pure hardware or embedded systems, such as speakers, garage door openers, and refrigerators, now have a software component, and that software continues to evolve with more functionality. Most of that additional functionality is tied to other products in an ecosystem, whether for home automation, security, or life's conveniences. This means software engineers need much broader understanding and skills to think about how the products they develop will work not only by themselves but with others (including some they don't know about). Since a large part of a product's value will be how it integrates with other products, ensuring that integration is seamless will require new knowledge and skills.
I've been to many sessions on software testing metrics where the instructor discusses the Hawthorne Effect. Often cited are lighting experiments done in the 1930s, in which the light in a manufacturing facility was increased and decreased, with factory workers' productivity improving either way. When applying the results of these experiments to software testing, most will then discuss testing metrics such as test cases written or defects found, and the unexpected consequences or changes in behavior that can result from using such metrics. But I think we are missing the importance and significance of the Hawthorne Effect. First, the Hawthorne Effect was based on several experiments, not only in lighting but also in many other factors such as break times, food, and payment schemes. Secondly, interpretations of the Hawthorne experiments vary, and many researchers have derived different conclusions. Some of their conclusions I hereby summarize as the Hawthorne Lessons:
I've been reading a book called Hackers and Painters, by Paul Graham. It's not a new book by any means, but it has many principles that apply to life beyond software. One of the principles that Graham mentions is that one of the key elements of success is empathy. When we think of empathy, we tend to think that it is one level higher than sympathy in terms of actually feeling someone's sorrows. But in terms of software, I don't think we can get too sad about a Do Loop.
Understanding unspoken requirements is probably one of the toughest things to do as a software tester. Recently, I was able to apply an experience in real life to my work in understanding what the client says, even if they are unable to communicate effectively.
As I start to think about my talk on IoT Quality Challenges at The Practical Software Quality and Testing Conference in August, it’s hard not to think about IoT without thinking about wearables. Wearables are just one facet of IoT, but for the average Joe not involved in industrial IoT, wearables hit you in the face.
A happy tester is a more productive and effective tester, so how do you make sure your testers are happy with their jobs? Good salaries with great benefits and a lively work environment only go so far. Testers want job satisfaction – they want to feel like they are part of something and that their contributions are appreciated. If testers feel like their company invests in them, they will invest in the company. Here are a few tips for keeping your testers happy and making sure they feel like they’re doing more than just working for a paycheck.
Agile software development methods have become more popular in recent years as development cycles shorten and competitive pressure mounts to deliver faster. Unfortunately, agile technical debt tends to pile up faster than we can pay it back. You see, with agile, the objective is to deliver working software in a short time frame, with each iteration 'working' for the customer. Sometimes we get defects, yet we still have 'working' software. We all want minimum overhead and value "working software over comprehensive documentation". Unfortunately, with each iteration, we often can't fix the defects and have to put them into the backlog. Any work item in the backlog should be described in enough detail for those who need to work on it.
One of the main methodologies in agile is extreme programming, where programming is done in pairs with extensive peer review. Extreme Testing, which uses pair testing (not to be confused with pairwise testing), is a cousin of Extreme Programming developed at XBOSoft from the testing point of view. In our own software testing practice, we have used Extreme Testing methods to increase test effectiveness and as a great way to get testers to collaborate purposefully. Within Extreme Testing, we have a few non-mutually exclusive practices:
Some folks might interpret "working software over comprehensive documentation" to mean that no agile defects should be written up. Shouldn't all defects be fixed for "working" software within an iteration? There is optimal collaboration within the sprint, and thereby no need for an actual defect to be written up. Right? The only problem with this scenario is that it is ideal, and rarely happens in reality.
I'm in a lot of agile planning meetings and as a major part of the meeting, we need to determine the amount of work for a story. Estimating agile story points is actually one of the major parts of the meeting, deciding how much work there is for us to do! We often use Fibonacci numbers and it seems to work well, but I never knew why, so I decided to do a little searching and this is what I found.
In math, the Fibonacci numbers form the following integer sequence:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144... Can you guess the next one?
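As a quick sketch (function name is ours), each term is the sum of the two before it, which is why adjacent story-point values grow apart fast enough that choosing between them forces a real estimation decision:

```python
# Generate the first n Fibonacci numbers, the common agile story-point scale.
def fibonacci(n):
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)   # record the current term
        a, b = b, a + b  # advance: each term is the sum of the previous two
    return seq

print(fibonacci(12))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```

So the next one after 144 is 89 + 144 = 233.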
We had a question come in during one of our webinars on test cases, and thought it deserved more thought than a simple answer within the webinar itself. The question was, "In reference to writing unit tests for requirements: requirements should be testable, but why not write the test when you write the requirement? Is that overbearing?"
In a recent webinar that I moderated with Ken Pugh, we had a great discussion on acceptance tests. One of the points Ken made was that we all want to move testing forward. A no-brainer, right? However, why don't we do it? Maybe we just don't know how?
Ken made the point of testing requirements: finding and fixing requirements defects is much cheaper than finding and fixing defects later in production. He then put up the classic exponential curve showing how steep and expensive fixes can be when they are needed for defects found in production. However, I pointed out in reference to the graph that this may not apply anymore. . .
There were still a few remaining questions from our webinar with Srilu Pinjala in July on Test Case Development - So You Think You Can Write A Test Case that we wanted to address. It was Srilu's opinion that test cases should be very detailed and document the functionality of the software. Her view was that this way test cases can then be used by anyone and at a later date. She coined the phrase Test Like a Robot which I found interesting. Should one really want to be like a robot?
I recently had a client ask us to assess their situation and then give them recommendations on software QA best practices that they should consider. I know that many of my colleagues would say that there is no such thing as best practices, and I agree on that. But I think there should be a new term called Best Principles, or if you don’t like the word "best," perhaps Recommended Principles. To explain a little better, let me briefly draw an analogy.
After my talk on Implementing Pairwise Testing at the Atlanta Quality Assurance Association last week, I had a chance to sit down and think about the discussion and the audience:
- Great interest in tools. Everyone wanted to know, "How can I do it?" Of course we all do, and I hope that I gave them plenty of references and sources for this.
- Limited knowledge of a much larger field called combinatorial testing (of which pairwise testing is just a subset). . .
From our webinar with Srilu on "You Think You Know How to Write a Test Case," we had several questions. With regard to agile test cases: "What do you do in the meantime between development and when the app/software is ready? Is it not advisable to write the test cases based on the product specification instead of waiting for the software to be ready?"
I had a chat with Srilu about this, and here is what she says. . .
Besides the potential cost benefits, outsourcing software testing to an external software testing service provider can take over the management of staff and provide more flexibility. Working with an external vendor also provides access to expertise that is often broader and deeper than what makes sense to retain in-house. If you can set up a work process where you benefit from cost savings, resource flexibility, and especially that extra expertise, you are set to gain from outsourcing your software testing.
Just mention software testing metrics and there is ample discussion on Defect Removal Efficiency (DRE), so let's examine it closer.
Back in the days of waterfall, if we found 100 defects during the testing phase (which we fixed pre-production) and then later, say within 90 days after software release (in production), found five defects, then the DRE would be what we were able to remove (fix) versus what was left and found by users:
100/(100+5) = 95.2%
This 95% has been referred to by Capers Jones, a well-known software measurement colleague, as a good number to shoot for.
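The calculation above can be written as a small function (a minimal sketch; the function name is ours):

```python
# Defect Removal Efficiency: the share of all defects found that were
# removed (fixed) before release, expressed as a percentage.
def defect_removal_efficiency(pre_release_defects, post_release_defects):
    total = pre_release_defects + post_release_defects
    return pre_release_defects / total * 100

# The example from the text: 100 defects fixed pre-production,
# 5 found within 90 days of release.
print(round(defect_removal_efficiency(100, 5), 1))  # 95.2
```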
After making the rounds doing talks at both Mobile Dev + Test in San Diego and Better Software West in Las Vegas, I've had a chance to reflect on some of the comments and discussion during my sessions and there is one resounding theme. Value.
We can think of defining software quality in many terms such as defects, requirements, code quality, etc. But when you think about it, it all has to do with the value for the end user. Value is not usability, and value is not defects found and fixed.
I've been invited to give a talk on Pairwise Testing Test Case Design in Atlanta on August 11 for the Atlanta Quality Assurance Association. My preparation, though, involves much more than just explaining pairwise testing. When using pairwise testing as a combinatorial method for test case selection, you need to think about much more than what tool to use or how it works: when, where, and most importantly, why. Why choose pairwise versus 3-wise or 4-wise? First I had to think of the whys.
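To make the payoff concrete, here is an illustrative example of our own (not from the talk): for 4 parameters with 3 values each, exhaustive testing needs 3^4 = 81 cases, while the classic L9 orthogonal array covers every pair of values across every pair of parameters in just 9 cases. The checker below verifies that claim:

```python
from itertools import combinations, product

# The standard L9(3^4) orthogonal array: 9 test cases over 4 parameters,
# each parameter taking values 1, 2, or 3.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def covers_all_pairs(suite, num_params=4, values=(1, 2, 3)):
    # For every pair of parameter positions, every combination of values
    # must appear in at least one test case.
    for i, j in combinations(range(num_params), 2):
        seen = {(case[i], case[j]) for case in suite}
        if seen != set(product(values, repeat=2)):
            return False
    return True

print(len(L9), covers_all_pairs(L9))  # 9 True
```

Nine cases instead of 81, at the cost of only catching defects triggered by interactions of two parameters; that trade-off is exactly the "why pairwise versus 3-wise" question.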
I had the good fortune of sitting in on the webinar last night by our CEO @philiplew and his friend and colleague @jonduncanhagar, where they discussed the merits of ISO 29119 and where it should and should not be used. This blog provides an ISO 29119 summary based on the information I was able to glean from the webinar.
Firstly, there was a debate on ISO 29119 a few months ago with Jon Hagar, Rex Black, Griffin Jones, and JeanAnn Harrison. Phil decided to invite Jon back to speak on ISO 29119 because there is a lot of negative information about the standard (e.g., #STOP29119), but not much in the ether regarding its merits. Could it be because there are none?
We're still answering questions from our April webinar on test automation! For our last question regarding test automation frameworks, "What are the do's and don'ts one should know before working on a test automation framework?", we went to some of our test automation engineers, and they had these comments:
Develop a thorough understanding of the software to be tested.
- Type of project: client/server (C/S), browser/server (B/S), mobile, etc. The type of app heavily influences what architecture and tools you use.
- Business logic of the software. Without knowing how to test the software manually and the logic behind it, it's difficult to set up test automation. . .
I recently came upon an article and corresponding infographic on software sizing. The basic gist of the webinar and article was that properly estimating software size is important. Why? Well, obviously, if you estimate incorrectly you have some problems:
- Estimate too high, and you allocate too many resources and spend money you otherwise wouldn't. I don't think too many organizations worry much about this situation.
- Estimate too low, which is usually the case, and you end up missing deadlines. But let's think about what this means at a deeper level:
On Thursday, February 20, at 1pm EST, learn how to use agile in big enterprises. Increasing the speed of development on large-scale projects is a constantly evolving challenge. XBOSoft is hosting a webinar featuring Steve Adolph, President of Development Knowledge and lead agile coach. He presents agile methodologies that can be adapted beyond small-scale, ideal agile situations to large-scale ones with conflicting needs, product owners, time zones, and schedules. Steve's motto is "We Make Good Teams Great." He has been providing pragmatic software solutions for almost 20 years. A PhD in electrical and computer [...]
Finally, the last question from our Agile Metrics webinar. This question pertains to rework from defects. Q: Is defect rework allocation similar to root-cause-type markup in defects after a fix? A: Yes! We use different terminology because we want to focus on rework from defects; in other words, what time are we wasting due to defects? By classifying where the defects come from, we can determine where the rework comes from and where we should put our focus to get more efficient and thus faster, increasing velocity. If our rework comes from testing, we may want to look at
Is ISTQB certification useful? Yes, for certain roles on your team. But it is not for everyone, and it does not necessarily make you a good tester. We recently had one of our European clients mention that they thought all testers on their team need to have knowledge equivalent to the ISTQB Foundation Level. They haven't specifically said that all team members must have ISTQB certification, but this is scary. We certainly understand the need for basic knowledge of testing to work on a project, and we would not even dare put a newbie on a project without hands-on training, but we think ISTQB is a little over the top. Some of our people have gone to ISTQB training and been certified, but we have found no direct correlation between those who are certified and those who make good testers. What assurance is there that a person with an ISTQB certification can find good defects? Can they recite the components that should go into writing up a defect? Yes. But can they find a good defect, and then can they actually document it in a way that a developer can understand and rapidly fix?
During our agile webinar, we had many questions we couldn't get to. Here's another one on user acceptance testing.
Question: Assume a UAT (user acceptance testing) team that does not have separate user stories/functional requirements as such. Function points are designed in such a way that delivery is done in modules, and groups of modules that deliver functions will be tested by the UAT team. In this case, how can we measure test coverage percentage?
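One plausible way to frame it (our own illustration, not necessarily the webinar's answer, and all names below are hypothetical): when delivery is done in modules rather than user stories, coverage can be tracked per delivery as tested modules over delivered modules.

```python
# Coverage as the share of delivered modules that have been tested.
def module_coverage(tested, delivered):
    covered = set(tested) & set(delivered)
    return len(covered) / len(delivered) * 100

# Hypothetical delivery of four modules, three of which have been tested.
delivered = ["login", "search", "checkout", "reporting"]
tested = ["login", "search", "checkout"]
print(module_coverage(tested, delivered))  # 75.0
```

The same ratio can be computed at a finer grain (functions or function points per module) once the team agrees on what the denominator is.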
Implementing agile QA depends on the environment, the people, their skills and personalities, and various other factors, so we didn't title this 'best' practices but 'good' practices. Our goal here is to list some practices that deserve consideration when implementing agile QA. There should be an information-sharing system to document the latest information with regard to user stories, tasks, defects, and decisions made. There should be an issue tracking system (e.g., JIRA) to document processes and implement workflows. QA should understand the business background and objectives as much as possible in order to start testing from the user [...]
Many of our new clients are in the process of implementing agile, and many of them have no experience with it. They hear a lot about it, read many papers and articles, and are ready to dive in, but we usually recommend a staggered or gradual approach. For those implementing agile who are used to waterfall, the most beneficial approach is to borrow some practices from agile and implement those pieces first. Test automation is a key part of agile. The QA team can help write lots of tests (after code is developed, in a waterfall fashion), including unit, integration and [...]
Sometimes there is a need to party, and when we're able to increase Defect Removal Efficiency (DRE) for our clients, we party up. Metrics are a key part of measuring the service we provide to our clients, so we track them carefully. Not only do we need to measure the DRE percentage in aggregate, but by tracking pre-production defect issue types versus post-production issue types, we can also determine where we need to focus more effort, and possibly beef up our knowledge. For instance, if most post-production defects are found on certain platforms, then perhaps we can look closer at deployment configurations or those particular platforms. On the other hand, if certain functional areas have more defects in production, then we know these somehow escaped and that we need either more training in these areas or more focus.
Sometimes defect verification can make you insane. Eliminating non-reproducible defects can save everyone time and energy, so we thought we’d list out some of the common reasons behind ‘cannot reproduce’ as a guideline for fellow testers trying to verify defects. Generally, a defect is described with conditions and steps for reproduction. With the same conditions and steps, you’d expect the same result. When a defect cannot be reproduced, and you’re sure the step-by-step descriptions are clear and unambiguous, the cause is usually in the conditions. Conditions should contain all environment information when the defect occurs, including information on the hardware, software, network, baseline data and starting point in the application, and previous behaviors with the system under test. Based on our experience, four of the most common reasons for ‘cannot reproduce’ include:
If you are a development or QA manager, chances are you tend to focus on process improvement. But does a high-quality process automatically equal high-quality software? Not always. So what does? High-quality QA personnel. Competent, skilled resources combined with solid process can produce high-quality software on a more consistent and predictable basis. Other combinations are a roll of the dice. But how do we load our teams with high-quality people? Join XBOSoft on May 14, 2014 at 3:00PM EST as two guest panelists, Chris Laney from Zenergy Technologies and Charlene Woolley from Xpanxion, LLC, share their ideas on how to best recruit [...]
Phillip Lew will give a presentation entitled Improving the Mobile Application User Experience (UX) during the Star East conference on Wednesday, May 7, 2014 from 1:45pm - 2:45pm. From the conference website: “If users can’t figure out how to use your mobile applications and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied so understanding, measuring, testing and improving these factors are critical to the success of today’s mobile applications. However, sometimes these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of [...]