Why outsource anything, be it a service or production? Why not keep every aspect of the business in house? Two hundred years ago, that approach probably made sense. Today, in the modern economy, globalization and technological advancements have made outsourcing a strategic imperative for all businesses. One question is on the plate of every CEO, VP, or even manager with skin in the game: "What need or challenge to our business can be better handled through outsourcing?" This blog addresses seven important reasons why outsourcing software QA & testing makes sense....
In XBOSoft's first 11 years, we've had our trials and tribulations in learning how to be great software testers. Not that just anyone can be a great software tester. But it takes a lot more than great software testing to make our clients happy. In this blog, CEO Philip Lew discusses driving client satisfaction and what it means at XBOSoft.
There are several industry affiliations and certifications in the quality, software quality, and software testing domains, but who are software QA certifications really for? Do they benefit you, your client, or the end user? ISO 9000 applies to product and process quality, usually for manufacturing applications, although some have applied it to software development. CMMI, on the other hand, has specific certifications for software development processes as well as some service-oriented processes. Most of the larger software companies do not comply with these certifications; they mostly exist for large government procurement or for software service providers to check a box as part of an RFP process.
The philosopher Seneca said, “Luck is what happens when preparation meets opportunity.” And I think that’s what happened when I founded XBOSoft in 2006 in China. I have a habit of questioning how things are done, and love coming up with solutions. And I seek out software testers with a similar mindset.
The Chinese cultural bias toward copying the ways of the master rather than striking out on one's own can make recruiting employees in China difficult for companies that are seeking creative thinkers.
Cloud communications software provider Mitel was experiencing various difficulties with new software releases due to a lack of sufficient testers on staff. XBOSoft was hired to provide the needed manpower. Over time, XBOSoft's range of services has expanded, with ever greater satisfaction on the part of the customer.
The complexity of account reconciliation software places special demands on its testers. The primary challenge: there are two types of accounts to be reconciled, each with its own unique characteristics.
Treating the end software user as royalty is a critical factor for best software testing results during agile development.
Cognitive biases such as inattentional blindness and anchoring can cause testers to make missteps during the testing process that can have disastrous effects.
XBOSoft client AKVA rates our services very highly. In this Q & A with Eivind Brendryen, Product Manager Farming at AKVA group ASA, he speaks about the quality of the services and the value the company places on the AKVA/XBOSoft partnership.
Philip Lew delivers a keynote address in Istanbul on his agile software testing process. Will Turkey be the next wellspring of quality assurance innovation? Istanbul is home to TestIstanbul, an international tech conference that focuses on quality assurance and testing. TestIstanbul attracts entrepreneurs and engineers from around the world to share what they have learned about agile software testing practices. Philip Lew, XBOSoft founder and CEO, delivered the closing keynote presentation for the 2017 conference. A packed audience heard Lew’s signature speech: “The 7 Habits of Highly Effective Agile Testing.” Lew has been influenced in his career and [...]
Last week, I presented at the Atlanta Quality Assurance Association — AQAA — on “Managing Agile Software Projects Under Uncertainty.” At the beginning of the talk, we covered the reasons for agile project failure and polled the audience on why, in their experience, agile projects fail. The results were quite interesting...
Have you ever needed to complete a software project, but kept running into issues or hitting walls so you couldn't move forward? Later this evening, our very own CEO Philip Lew is presenting to the Atlanta Quality Assurance Association (AQAA) on managing agile software projects.
We asked Jon Hagar, our guest on Monday’s webinar, to give us a preview of his remarks. Here’s what he had to say. Jon: There is much hype, and there may even be quite a bit of money, around the IoT market. IoT merges several technology lines, such as Mobile, Cloud, Communication, Big Data, and Embedded software. Many aspects of IoT will be familiar. There are challenges, to be sure, for testers as the IoT market reveals its full potential. But these are really opportunities for testers who decide to get engaged in the IoT market. These include:
Software Should be Made for Humans
I attended Cornell Silicon Valley 2017, #CSV17, in early March. I’m a graduate of Cornell, and have found all of the Cornell events I have attended beneficial. This year’s conference was no exception. It featured presentations on various types of software innovation, some designed to plumb our humanity, others designed to replace it. The event never fails to rejuvenate me, inspiring new ideas and fresh thinking.
I listened closely to the keynote speech on reverse-engineering the brain for intelligent machines and thought the most important takeaway was that current algorithms are very fragile, not adaptive. Yes, we have gone beyond the first stage of artificial intelligence. But for truly adaptive and learning algorithms, we have a long road ahead. Applying this to our field of software engineering and, in particular, software quality and testing, I get a clearer picture of what’s ahead for software testing. We will see test cases that adapt and change, applying testing algorithms rather than specific test cases. Just as there will be software innovations in development, they will occur in quality assurance as well.
We intend to make 2017 — our 10th year in business — a break-out year. CEO Philip Lew and Vice President Stephen Gohres set the pace for the year by meeting and greeting attendees at the DeveloperWeek show in San Francisco Feb. 12-15, and then jetting to Orlando for the massive HiMSS17 the following Sunday.
Many companies talk and write about the concept of a software quality center of excellence. At XBOSoft, over the last 10 years, we've satisfied one client at a time as we have continued to build up our expertise. Having just celebrated our 10-year anniversary, I believe that XBOSoft’s software quality center of excellence resides in our team.
I'm writing about this because I must say that I was extremely proud of our team today. I did a routine check-up call with two of our clients. I usually start the call off with, "I just wanted to check in with you to make sure everything is going OK and that our team is doing a good job for you. I want to make sure that if you have a complaint or an area where you think we can improve, you have a number you can call and a person who will listen."
In both of my calls I listened as our clients discussed that they were more than satisfied with our services. It made me realize how easy my job is because our team is so strong.
By Philip Lew, XBOSoft Founder and CEO
A few weeks ago, I went to Beijing for our company’s 10 Year Anniversary celebration. Here are three things that I believe have made our software testing company successful, enabling us to thrive while others have perished.
QA expert and author Johanna Rothman asks in a recent article whether managers in agile testing should "scale" agile to help multiple teams deliver products. Her answer: an emphatic "No." Why?
"Scaling process leads to bloat. Instead of scaling process, scale collaboration."
How many of you know what the "blue screen" is? Or Ctrl-alt-delete? We used to test functionality ("Does it work as intended?"), but as software has become more complex and distributed, we're faced with different software quality challenges. This is getting even more complicated with what's now called The Internet of Things, or IoT. With IoT, software and hardware work together more than ever. How do you diagnose the problem? Where is it? In my recent keynote in San Diego at the Practical Software Testing Conference, I had the opportunity to present some of the most critical challenges facing us as software engineers.
One of my favorite restaurants is in the swankier part of town, surrounded by high-end designer shops and oddly overpriced candy stores. The restaurant serves French cuisine, has a happy hour worth ditching work early for, and the ambience of a California vineyard on a bright, sunny day. If guests aren’t dressing up to come here with their purse-size pooches, they dine in one of the many other “fancy” establishments. For such a prime location, it’s expected that service will be impeccable.
But it’s not. The staff is sometimes rude, and you often wait a long time for the waitstaff to take your order or bring out your food.
What this restaurant is missing is the fulfillment of what we know in the software testing industry as non-functional requirements.
I have been working as a software tester for a while now, and I still have a very clear memory of the moment I decided to throw myself into this field. I majored in computer science and technology with hopes of becoming a software developer. The only problem was that I wasn’t very good at coding. So, instead of subjecting myself to the grueling task of studying coding languages, I decided to focus my energy on becoming a great software tester. When I first started, there was a lot of training on how to do the work, but not necessarily how to handle the work. While there will continue to be lots of learning to do on my part, I’m very passionate about what I do. I always wished that someone had given me advice about entering the testing field, so here are three tips to help all of you new software and QA testers adapt to your new work environment and grow faster.
I had the pleasure of attending the keynote talk of my colleague and friend, Michael Mah, at the recent PSQT conference in San Diego in August. Michael, as usual, was very entertaining and passionate about his ideas and research. In his talk, he discussed a case study in which one of his clients has achieved and continues to achieve great results in implementing XP. He also drew an analogy of their Agile team characteristics as compared to other high performing teams. No, he wasn't talking about the Golden State Warriors. Michael's passion is saving the oceans, so he was sharing his experience in working with the Sea Shepherds. The Sea Shepherds work together with passion and purpose and Michael's story could motivate anyone to stop going to SeaWorld and join the fight in saving the oceans and the life within them.
Recently, one of our clients asked me to come and give a talk on software testing trends. They wanted to know 'best practices' but of course we all know there is no such thing. Despite that, I whipped up a presentation on what I thought were 'directions to go' in terms of where software testing is headed and the direction that companies should move. As I put together the talk, I started to think about how the boundaries between hardware and software are falling. Whatever you choose to call this disruption, the Internet of Things, the connected world, etc., it means that products are more complicated. So although software is simplifying our lives, it also takes more connectivity and integration to make these products work. In my recent keynote at PSQT, I discussed how the Internet of Things was impacting QA and how we, as testers, needed to adapt. Extracting some of the key points, software testing trends are mostly pointing in these directions:
- Software is everywhere. It's working its way into almost all industries. Products that were once pure hardware or embedded software, such as a speaker, garage door opener, or refrigerator, now have a software component, and that software continues to evolve with more functionality. Most of that additional functionality is tied to other products in an ecosystem, whether it be home automation, security, or life's conveniences. This means software engineers need much broader understanding and skills to think about how the products they develop will work not only by themselves but with others (including some you don't know about). Since a large part of the value of the product will be how it integrates with other products, ensuring that integration is seamless will require new knowledge and skills.
A few short weeks ago at the PSQT conference, I had the good fortune of sitting in on Tom Cagley's session, titled Impact of Risk Management on Software Quality. One of the components that Tom mentioned as part of the Agile Risk Taxonomy is Agile Organizational Risk, or People Risk. He describes it as the "impact of an environment populated by people." Some may think that this is the most nebulous risk, when compared to business or technical risk. I think it's probably one of the most important but, unfortunately, it's often swept under the rug.
Risk is a weird entity. Many equate risk with uncertainty, but it's not. We are uncertain about many things, but they are not necessarily risks. Risks come about when you have a negative outcome with a probability greater than zero. But there are many types of negative outcomes and corresponding risks.
In sports, everyone plays different positions and makes different salaries based on their position and skills. A quarterback generally makes more than a linebacker. Likewise, Tom Brady makes more money than Michael Vick. Everyone knows it and they accept it. His numbers are better and his team wins. His teammates also know he makes more money than they do. However, for some reason, when it comes to evaluating Agile teams, I tend to hear only about measuring the team's overall success. Well, how do you put together a good team without knowing the true value of highly performing individuals like Tom Brady?
In the software development world, we often discuss DevOps synonymously with continuous integration and continuous releases. It seems everything is continuous these days and I’m wondering where this trend is taking us. With Agile, where “everyone is responsible for quality,” we are no longer throwing software over the wall after development to be tested. Different kinds of testing happen all the time. I call it Continuous Quality.
"Continuous" describes a movement much bigger than software, but supported mostly by technology and software. It's a movement that is founded on everything being smaller and delivered in smaller quantities (except food in restaurants, where we tend to like to supersize). Everything is moving toward smaller transactions.
I've been to many sessions on software testing metrics where the instructor will discuss the software testing Hawthorne Effect. Often cited are the lighting experiments done in the 1930s, where the lighting in a manufacturing facility was increased and decreased, and factory workers' productivity improved in both cases. When applying the results of these experiments to software testing, most will then discuss testing metrics such as test cases written or defects found, and the unexpected consequences or changes in behavior that can result from using such metrics. But I think we are missing the importance and significance of the Hawthorne Effect. First, the Hawthorne Effect was based on several experiments, not only in lighting but also in many other factors such as break times, food, and payment schemes. Secondly, the interpretations of the Hawthorne experiments vary, and many researchers have derived different conclusions. Some of their conclusions, I hereby summarize as the Hawthorne Lessons:
Agile software development follows an iterative method with the goal of providing working software to stakeholders at the end of each iteration. By providing a view of the software at the end of each iteration, each team member gets to see what the others have done and whether the team has moved toward its final goal of delivering the software. I think this process helps to build what I call Agile Trust. That is, confidence is provided throughout the process, not only for the stakeholders that the product is moving in the right direction, but also for the development team that they have understood the requirements well.
As a QA on an Agile team, providing confidence to stakeholders on a product in development requires consistent performance throughout the lifecycle.
Understanding unspoken requirements is probably one of the toughest things to do as a software tester. Recently, I was able to apply an experience in real life to my work in understanding what the client says, even if they are unable to communicate effectively.
When recruiting Agile testers for scrum teams, everyone wants to have all senior testers. However, this is not usually the case, and many times you need to recruit the newbies. When I run through my list of considerations, testing skills are, surprisingly, the last thing I look at.
I recently gave a short seminar discussing Agile implementation. In several previous webinars as well as blogs, we've discussed Agile implementation success stories, so I was actually happy to present another viewpoint: not everyone is excited and satisfied with Agile, because Agile does not fit everyone. One of my slides, titled Agile Reality, was about Agile not being so easy and not all it’s cracked up to be. Here are some of the points from my lecture.
Everyone talks about how great Big Data is, and how it will revolutionize our world. They also talk about Big Data risks in terms of security and privacy. I know I often discuss them in my tutorials and talks (most recently at Better Software East, on Privacy, Security, and Trust in the Mobile Age), but those may be the least of our worries when it comes to Big Data risks. The real danger of Big Data that I was able to glean from a recent keynote by Malcolm Gladwell at the Blackline User Conference #InTheBlack15 was that acquiring masses of data does not really help us make better decisions.
From our webinar last December with Greg Burns and Ron Ben Yosef from BlackLine Systems on agile challenges that they’ve faced and how they got through them, there were several questions from the audience that we had no time to answer. We sat down with Ron and Greg and discussed the classic question of the recommended Developer-to-Tester Ratio.
Managing agile teams is often tied to metrics and measurements. I recently attended a talk on metrics where many were discussing agile metrics. One of the key points repeatedly raised was that you should never have individual metrics, only team metrics. So as I sit here writing evaluations, how should I be evaluating my team members if I have no individual metrics?
I was examining the Agile Manifesto the other day looking for some guidance on Agile QA, but found none. Many of our clients moving to agile from waterfall are wondering how to handle QA, so I had thought I would look at the famed manifesto to see what it says. Nowhere does it mention QA or testing.
I then began to look for mention of people or different roles in any way, and here's what I came up with:
I recently had a client ask us to assess their situation and then give them recommendations on software QA best practices that they should consider. I know that many of my colleagues would say that there is no such thing as best practices, and I agree on that. But I think there should be a new term called Best Principles, or if you don’t like the word "best," perhaps Recommended Principles. To explain a little better, let me briefly draw an analogy.
One of the hot buttons in the industry lately is agile documentation. How much is needed, if any at all? I can certainly see where the "no documentation" viewpoint comes from. Documentation slows us down, and if it brings no direct value to the end user, then don’t do it (one of the guiding principles of Agile). In this light, we should limit or have no test cases and no written defects. We should either write stuff down on yellow sticky pads to remember it or keep it in our heads.
For all those who claim Agile requires no documentation or even limited and sparse documentation, this is where I take a stand. Agile absolutely requires documentation -- just less than the SEI-CMM days. We aren’t trying to get certified for some government procurement, but we need to be practical and have documentation. Here’s why:
ISO29119 criticisms and feedback tell us that the Debate Continues. Perhaps a little more sensible than what we've seen on TV lately -- or maybe not.
During our webinar in July with Jon Hagar, a few attendees put forth their opinions on ISO 29119, tweeting, posing questions, and commenting on the event. I had a discussion with Jon Hagar on these views.
Many agile proponents are applying agile methodologies not only to software development but also to business processes and corporate management. It's not uncommon for a management team to have 2-week sprints where they sit down during a planning meeting and decide what will get done in the next 2 weeks. Does Agile work for everything? While we seem to have this idea that Agile belongs to those who drafted the manifesto over a weekend, perhaps it is really part of something much bigger. A recent article on 'nowist' innovation mentions that traditional rules don't work anymore. Perhaps agile got its roots from Nowism, or vice versa. Some of the key concepts of nowism include:
- Power of Pull – pull resources from the network as needed instead of stocking resources.
Yet another question during our Agile Webinar, this one on Total Unexpected Work.

Q: Is that metric for total work done in # of hours, and what is work?

A: Work can be classified as any task that you do. This could be understanding requirements, design, defect fixing, working on new features, or meetings. Each of these work types should be classified into buckets or categories that make sense for your organization. You don't want to be too detailed, because then data doesn't accumulate into meaningful amounts. Regarding measurement units, work can be measured in
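As a minimal sketch of the bucketing idea above (the category names and hours are hypothetical, not an actual taxonomy), logged work can be tallied per bucket and the unexpected share computed:

```python
from collections import defaultdict

# Hypothetical work log: (category, hours) pairs; your buckets will differ.
work_log = [
    ("requirements", 3.0),
    ("new_features", 12.0),
    ("meetings", 4.0),
    ("defect_fixing", 5.5),   # planned bug-fix work
    ("unexpected", 2.5),      # work nobody planned for this sprint
]

def hours_by_bucket(entries):
    """Sum logged hours into coarse buckets so the data accumulates meaningfully."""
    totals = defaultdict(float)
    for category, hours in entries:
        totals[category] += hours
    return dict(totals)

totals = hours_by_bucket(work_log)
unexpected_share = totals.get("unexpected", 0.0) / sum(totals.values())
```

Keeping the buckets coarse, as the answer suggests, is what makes `unexpected_share` trend meaningfully from sprint to sprint instead of fragmenting into dozens of tiny categories.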
Is ISTQB certification useful? Yes, for certain roles on your team. But it is not for all, and it does not necessarily make you a good tester. We recently had one of our European clients mention that they thought all testers on their team need to have knowledge equivalent to the ISTQB foundation level. They haven't specifically said that all team members must have ISTQB certification, but this is scary. We certainly understand the need for basic knowledge of testing to work on a project, and we would not even dare put a newbie on a project without hands-on training, but we think ISTQB is a little over the top. Some of our guys have gone to ISTQB training and been certified, but we have found no direct correlation between those who are certified and those who make good testers. What assurance is there that a person with an ISTQB certification can find good defects? Can they recite the components that should go into writing up a defect? Yes. But can they find a good defect, and can they then document it in a way that a developer can understand and rapidly fix?
As a dedicated testing company, we want to provide as much Software Testing Value as possible by focusing on testing our clients’ software and ensuring that it doesn’t get into the field with post-production defects. Sometimes our client finds defects too as we are not the only QA engineers on the team. Of course we want to minimize that or they don’t need us! To track this, we have a few metrics:
- Defects found by client / total defects
- Defects found by XBOSoft / total defects
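To make the arithmetic concrete, here is a minimal sketch of those two ratios (the function name and sample counts are illustrative, not real project data):

```python
def defect_ratios(found_by_client, found_by_xbosoft):
    """Return each party's share of the total defects found."""
    total = found_by_client + found_by_xbosoft
    if total == 0:
        return 0.0, 0.0  # nothing logged yet; avoid dividing by zero
    return found_by_client / total, found_by_xbosoft / total

# Illustrative counts: the closer the client's share is to zero, the better.
client_share, xbosoft_share = defect_ratios(found_by_client=4, found_by_xbosoft=36)
```

With these illustrative counts, the client found 10% of the defects and XBOSoft 90%; driving the client's share toward zero is the goal described above.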
"Can't We All Just Get Along?" Is ISO 29119, the new software testing standard, useful or not? I've been reading a lot lately about the ISO 29119 Standard, which has recently been released (some sections final and some in draft form) and has received abundant 'test coverage' from certain members of the software testing community. There is even a petition of sorts to 'stop' it. Note I say "software testing community," because I don't believe there has been any negative feedback from the software engineering or software development community, although testing is all part of the process. It seems the context-driven testing community feels that the Standard violates some rules or somehow takes away from what they are doing?
A while back when we first started our company, we had a very typical hierarchical structure in a management sense. We had me, as the CEO, and project managers beneath me. You could say we managed our business using waterfall: decisions came from the top; we analyzed requirements, then developed the plan, executed it, and so on. I was determined to work myself out of a job, so that I was not needed. What would happen if I got run over by a beer truck?
I’ve been working at XBOSoft for almost 7 years. I started as a tester and now I’m a project manager. Through the years, I’ve learned many lessons about testing project management and how to manage projects and people. My Top 3 Most Important Things When Managing a Project are:
After reading Tim Rodgers' blog on staff meetings, I was re-energized to have a purposeful staff meeting. Many of our test teams use agile management methods, or so they say, so it was interesting to see that although they think their projects are agile, the company itself works under different standards.