3. Incorrect Measures of Success. As with any initiative, if you track the wrong metrics, you may end up driving unproductive behavior. As Mark Bentsen said, “don’t waste your budget only to get negative ROI”. For example, people new to automation may assume that percent automation is the key ROI metric. Is it? Percent automation is the number of automated test scripts (the numerator) divided by the total number of test scripts (the denominator). Chasing this measure can lead to automating too many of the wrong things: the business keeps pushing the numerator up without paying enough attention to what is in the denominator. What should be automated, and how much is actually automatable?
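To see how this metric can mislead, here is a minimal sketch of the calculation (the script counts are illustrative assumptions, not figures from the article):

```python
def percent_automation(automated_scripts: int, total_scripts: int) -> float:
    """% automation = automated test scripts / total test scripts * 100."""
    if total_scripts <= 0:
        raise ValueError("total_scripts must be positive")
    return 100.0 * automated_scripts / total_scripts

# 120 of 400 scripts automated -> 30.0%
print(percent_automation(120, 400))

# Automating 100 more scripts -- even trivial, low-value ones --
# drives the number up without asking whether they were worth automating.
print(percent_automation(220, 400))  # 55.0%
```

The number goes up either way; it says nothing about whether the newly automated scripts cover risky, high-value functionality.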
4. Demonstrating Automation Performance. If you can’t show value for why you automated in the first place, you’re in trouble. Did your automation effort ultimately support the business goals? How much time did it really save? Did your test automation find defects? Did it give you enough confidence to redirect manual testing toward exploratory testing of new features and functions? If your automated tests pass, can your software be released? When you present the results of the automation project to upper management, was the bottom line achieved? Were the goals met, and was ROI positive? Truthfully, automation does not usually find defects, but it can free up time for exploratory testing, which does find defects. So make sure that when you present your automation project results, they are linked to the original objectives and goals. The number of test scripts executed with a pass/fail percentage may look good in the beginning, but it will lose its shine with upper management if you’re not achieving your business goals.
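One common way to put a number on that bottom line is a simple ROI estimate: time saved, converted to money, measured against what the automation effort cost. The sketch below uses illustrative assumptions (the rates and hours are not from the article):

```python
def automation_roi(hours_saved_per_run: float, runs: int,
                   hourly_rate: float, total_cost: float) -> float:
    """Simple ROI estimate: (savings - cost) / cost.
    All inputs here are illustrative assumptions."""
    savings = hours_saved_per_run * runs * hourly_rate
    return (savings - total_cost) / total_cost

# Example: 6 hours of manual regression saved per run, 150 runs per year,
# $50/hour, $30,000 spent on tooling, development, and maintenance.
roi = automation_roi(6, 150, 50.0, 30_000)
print(f"{roi:.0%}")  # 50%
```

Even a rough calculation like this ties the results back to business value far better than a count of scripts executed.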
5. Choosing the Wrong Tool. Starting an automation project with the wrong tool can lead to repeated reboots or even outright failure. I’ve seen organizations ‘reboot’ their automation effort over and over simply because the tool they selected wasn’t right for the company and its automation goals. For example, there is a resurgence in vendors offering codeless automation tools (see Angie Jones’s article about what to consider when choosing a codeless automation tool). However, tools often fall short in enabling you to easily organize, execute, and maintain your scripts. Organizing scripts means being able to flexibly group and run them however you see fit, reusing code where needed. This is handy, for example, if you want to execute all the ‘reporting’ function scripts because you made some changes in that module.
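One way to get this kind of flexible grouping is with test markers. The sketch below assumes pytest (the article does not name a specific tool) and tags scripts with custom markers so any group can be run on its own:

```python
# test_modules.py -- a minimal sketch; pytest is an assumed tool here,
# and the test bodies are placeholders standing in for real checks.
import pytest

@pytest.mark.reporting
def test_monthly_report_totals():
    # Placeholder assertion standing in for a real reporting-module test.
    assert sum([100, 250]) == 350

@pytest.mark.reporting
def test_report_export():
    assert "csv" in {"csv", "pdf"}

@pytest.mark.login
def test_login_page_loads():
    assert True
```

With this layout, `pytest -m reporting` executes only the reporting group. In a real project the custom markers would also be registered in `pytest.ini` so pytest does not warn about unknown marks.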
6. Underestimating Total Costs and Effort. We all know about the headaches of maintenance. If you have no time to maintain your scripts, or it takes too much effort, they go out of date and stop executing. Automation scripts that won’t execute mean trouble; reboot. Sometimes this traces back to the tool you chose, or to the framework or method you used to set up your automation. Or it may simply be a cost you didn’t account for. When you buy a car, you have to maintain it, and as you know, a Mercedes is really expensive to maintain. Most buyers think about this when they buy a car, so you should think about it too when setting up your automation effort.
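The car analogy translates directly into arithmetic: the total cost of a suite is the up-front build effort plus recurring maintenance over its lifetime. The figures below are illustrative assumptions, not data from the article:

```python
def total_cost(build_hours: float, maint_hours_per_month: float,
               months: int, hourly_rate: float) -> float:
    """Lifetime cost of an automation suite: up-front build effort
    plus recurring maintenance. All inputs are illustrative."""
    return (build_hours + maint_hours_per_month * months) * hourly_rate

# 400 hours to build, 20 hours/month of upkeep, 24 months, $50/hour.
build_only = 400 * 50.0                    # 20,000 -- what often gets budgeted
lifetime = total_cost(400, 20, 24, 50.0)   # 44,000 -- what it really costs
print(build_only, lifetime)
```

In this example maintenance more than doubles the budgeted figure, which is exactly the kind of gap that triggers a reboot when it goes unplanned.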
7. Incomplete Integration with Manual Testing. Lastly, automation must complement and integrate with manual testing. If you automate a function, knowledge of that automation and its results should feed into your manual test cases, so you know you don’t have to cover that area manually. Why test the same functions both manually and with automated scripts, unless you lack confidence in one of them?
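That integration can be as simple as subtracting what passing automation already covers from the manual test plan. A minimal sketch (the area names and results are illustrative assumptions):

```python
# Derive the remaining manual-test scope from automated results.
automated_results = {
    "login": "pass",
    "checkout": "pass",
    "reporting": "fail",  # a failing script gives no confidence, so keep it manual
}
manual_plan = {"login", "checkout", "reporting", "new_dashboard"}

covered = {area for area, result in automated_results.items() if result == "pass"}
still_manual = sorted(manual_plan - covered)
print(still_manual)  # ['new_dashboard', 'reporting']
```

Keeping this mapping current is what frees manual effort for exploratory testing instead of duplicating what the scripts already check.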
If you can’t save time (release faster) with test automation, or increase confidence that your software is ready for release, then you haven’t shown value. And if you can’t show value after you’ve spent thousands of man-hours developing your automated test scripts, you’re in danger.