Seven Test Automation Mistakes to Avoid
Mark Bentsen, a friend and fellow software QA enthusiast, recently posted on LinkedIn, “The number one reason QA Managers lose their jobs is from failed automation initiatives. Is automation important? Absolutely! Don’t make a rookie mistake and waste your budget only to get negative ROI.” I’ve seen it firsthand. I’d say there are seven big mistakes that can undermine any automation effort:
- Unrealistic Expectations. Thinking that automation can do it all (100% automation). We’ve all heard the belief that machines can do everything. They can’t (not yet, anyway). An automation “tool” needs to be taught, and that teaching takes the form of automation code, which is written manually. The real question is how much effort it will take you to ‘teach the tool’. So automation doesn’t happen by itself. If you set unrealistic expectations, they most likely won’t be met, and you may then have to “explain” to management why. Manage expectations: automate only the most appropriate tasks, such as the ones testers don’t like to do or those that take the most time. Striving for automation simply to reduce headcount will likely get you in trouble. Have clear reasons for automating. Good reasons include increased platform coverage, decreased regression times, and increased confidence in your software.
- Automation without Context and Domain Expertise. You can’t automate what you don’t understand. Domain expertise is critical for any person or team setting out to automate their testing. You need real-life data and end-user scenarios that actually represent the software in operation; this may involve integration testing, since today’s software has many components. Simply pressing buttons and making sure screens appear (recognition of an object) is not how real users behave, and automating clicks from one menu to the next just to confirm that buttons and controls work has limited value. With such automation, the project ROI may fall short, leading to a “reboot” of the automation effort.
- Incorrect Measures of Success. As with any initiative, if you track the wrong metrics, you may end up driving unproductive behavior. As Mark Bentsen said, ‘don’t waste your budget only to get negative ROI’. For example, people stepping into automation may assume that percent automation is the key metric for ROI. Is it? Percent automation is the number of automated test scripts (numerator) divided by the total number of test scripts (denominator). Chasing this measure can lead to automating too many of the wrong things: the business focuses on growing the numerator without paying enough attention to what is in the denominator. Ask instead: what should be automated, and how much is actually automatable?
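To make the denominator problem concrete, here is a minimal sketch (all numbers hypothetical) showing how two suites can report the identical percent-automation figure while covering very different amounts of risk:

```python
# Hypothetical sketch: percent automation vs. the risk it actually covers.
def pct_automation(automated, total):
    """Percent automation = automated scripts / total scripts."""
    return 100.0 * automated / total

# Suite A automated 80 trivial menu-click scripts out of 100.
# Suite B also automated 80 of 100, but chose the high-risk scenarios.
suite_a = {"automated": 80, "total": 100, "high_risk_covered": 5}
suite_b = {"automated": 80, "total": 100, "high_risk_covered": 60}

for name, s in [("A", suite_a), ("B", suite_b)]:
    print(name, pct_automation(s["automated"], s["total"]),
          s["high_risk_covered"], "high-risk scenarios covered")

# Both suites report 80.0% automation; only the contents of the
# denominator (what was chosen for automation) reveal the difference.
```

The metric alone cannot distinguish suite A from suite B, which is exactly why it drives unproductive behavior when used as the headline number.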
- Failing to Demonstrate Automation Value. If you can’t show value for why you automated in the first place, you’re in trouble. Did your automation effort ultimately support the business goals? How much time did it really save? Did your test automation find defects? Did it give you enough confidence that your manual testing effort could shift to exploratory testing of new features and functions? If your automated tests pass, can your software be released? When you present the results of the automation project to upper management, was the bottom line achieved? Were the goals met, and was ROI positive? Truthfully, automation does not usually find defects, but it can free up time for exploratory testing, which does find defects. So make sure that when you present your automation project results, they are linked to the original objectives and goals. The number of test scripts executed with a pass/fail percentage may look good in the beginning, but it will lose its shine with upper management if you’re not achieving your business goals.
- Choosing the Wrong Tool. Starting an automation project with the wrong tool can lead to many reboots, or even outright failure. I’ve seen organizations having to ‘reboot’ their automation effort over and over just because the tool they selected wasn’t quite right for the company and its automation goals. For example, there is a resurgence of vendors offering codeless automation tools (see Angie Jones’ article about what to consider when choosing a codeless automation tool). However, tools often fall short in enabling you to easily organize, execute, and maintain your scripts. Organizing scripts means being able to flexibly regroup them and execute them in whatever sets you see fit, reusing code where needed. This is handy, for example, if you want to execute all the ‘reporting’ function scripts because you made some changes in that module.
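The kind of flexible grouping I mean can be sketched in a few lines. This is a hypothetical illustration (test frameworks like pytest offer the same idea via markers), where scripts carry tags and any tag can be executed as a group:

```python
# Hypothetical sketch: tag-based script organization, so tests can be
# regrouped for execution without duplicating any code.
from collections import defaultdict

registry = defaultdict(list)  # tag -> list of test functions

def tagged(*tags):
    """Decorator that registers a test under one or more group tags."""
    def wrap(fn):
        for t in tags:
            registry[t].append(fn)
        return fn
    return wrap

@tagged("reporting", "smoke")
def test_monthly_report_totals():
    assert 2 + 2 == 4  # placeholder check

@tagged("reporting")
def test_report_export_csv():
    assert True  # placeholder check

@tagged("login")
def test_login_valid_user():
    assert True  # placeholder check

def run_group(tag):
    """Execute every script tagged with `tag`; return the names run."""
    ran = []
    for fn in registry[tag]:
        fn()
        ran.append(fn.__name__)
    return ran

# After changing the reporting module, run only that group:
print(run_group("reporting"))
```

Note that `test_monthly_report_totals` belongs to two groups at once; that overlap, with no copied code, is the flexibility a good tool should give you out of the box.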
- Underestimating Total Costs and Effort. We all know about the headaches of maintenance. If you have no time to maintain your scripts, or maintaining them takes too much effort, they go out of date and stop executing. Automation scripts that won’t execute mean trouble; reboot. Sometimes this traces back to the tool you chose, or to the framework or method you built your automation on. Or it may simply be an effort variable you didn’t account for. When you buy a car, you need to maintain it, and as you know, a Mercedes is really expensive to maintain. Most buyers think about this when they buy a car, so think about it too when setting up your automation effort.
- Incomplete Integration with Manual Testing. Lastly, automation must complement and integrate with manual testing. If you automate a function, knowledge of that automation and its results should feed into your manual test cases, so you know you don’t have to cover that area manually. Why test a function both manually and with automated scripts, unless you lack confidence in one of them?
If you can’t save time (release faster) with test automation, or increase confidence that your software is ready for release, then you haven’t shown value. And if you can’t show value after you’ve spent thousands of man-hours developing your automated test scripts, you’re in danger.
Great article Mark. This really made me think about some of the situations we’ve run into and how people get into trouble when it comes to test automation. Thanks!