
Lessons from Twenty Years of Testing Hype Cycles

Published: January 9, 2026

Updated: April 16, 2026

Today, AI seems to be all anyone can talk about. There is so much hype around it that it has become difficult to see the value through the noise. We have been in software testing for over twenty years, and in that time we have seen similar patterns play out multiple times. Cloud, mobile, agile, shift-left, codeless automation: each arrived with bold promises, attracted significant investment, collided with practical realities, and eventually settled into a useful but more modest role than the original marketing suggested. AI is going through the same cycle now. Recognizing that does not mean AI is not valuable. It means that separating what actually works from what is just sales talk takes some patience and some pattern recognition.

What Test Automation Taught Us

The test automation wave of the 2010s is probably the closest parallel to what is happening with AI today. The pitch was straightforward: automate your manual tests, reduce your headcount, speed up your releases, catch defects earlier. Tools emerged that could record user interactions and play them back, and the expectation in many organizations was that manual testing would largely disappear.

It did not work out that way. The maintenance burden caught everyone off guard. As Phil Lew puts it, the industry discovered that not everything could be automated, and what could be automated required far more upkeep than anyone had budgeted for. Tests that worked fine in controlled environments would break whenever something in the application changed. UI updates invalidated locators. Teams ended up spending more time fixing their automation than they saved by running it. The World Quality Report 2022-2023 found that maintenance costs can consume up to 50 percent of overall test automation budgets, a figure that surprised a lot of organizations that had planned for something closer to 10 or 15 percent.
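To make the maintenance problem concrete, here is a minimal sketch of the locator issue, assuming Selenium WebDriver as the automation library; the URL, page structure, and data-testid attribute are hypothetical and only illustrate the pattern, not any specific project.

```python
# Illustrative sketch of why UI changes invalidate locators (hypothetical page).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL

# Brittle: tied to layout position. Any change to the page structure
# (a new banner, a reordered form) breaks this locator and the test fails
# for reasons unrelated to product quality.
submit = driver.find_element(By.XPATH, "/html/body/div[2]/div/form/div[3]/button")

# More resilient: tied to a dedicated test hook that survives redesigns,
# but only if developers add and maintain the attribute -- upkeep that
# still has to be budgeted for.
submit = driver.find_element(By.CSS_SELECTOR, "[data-testid='login-submit']")
submit.click()
driver.quit()
```

Multiply that fragility across thousands of recorded scripts and the maintenance figures below stop looking surprising.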

The staffing math did not work out as expected either. Everyone assumed automation meant fewer testers. What actually happened was that the work shifted. You could reduce manual testers, but you needed automation engineers to build and maintain the scripts, and those people cost more. The savings were real in many cases, but smaller than projected and slower to materialize than the sales presentations suggested.

None of this means automation was a bad idea. It became a standard part of how testing works, and it delivers real value when applied thoughtfully. But organizations that went in expecting transformation overnight struggled, while organizations that set realistic expectations and focused on the right use cases did better.

Other Cycles We Have Seen

Mobile testing followed a similar arc. The explosion of devices, screen sizes, and operating system versions created what looked like an impossible testing challenge, and vendors were eager to sell device clouds and automated compatibility solutions. What we found was that mobile introduced problems the tools were not designed for: performance issues, battery behavior, intermittent connectivity. Organizations that treated mobile as just another platform to point existing tools at had a hard time. Organizations that recognized it as genuinely different and invested in understanding its specific challenges did better.

Agile went through its own version, though it was more about process than technology. The promise was that quality would be built in from the start, with testers working alongside developers rather than waiting at the end of a waterfall. In practice, many organizations adopted the terminology and the meetings without actually changing how work got done. They called it agile, but testing still happened at the end. The organizations that got real value from agile were the ones that treated it as a genuine change in how people worked together, not just a new vocabulary.

Codeless automation is a more recent example. The pitch was that business users and manual testers could create automation without writing code. The tools did lower the barrier to entry, and people without programming backgrounds could build simple checks. But when applications got complex or changed frequently, those tests turned out to be fragile. Organizations still needed technical people to maintain the underlying frameworks and troubleshoot failures.

Through all of these cycles, Lew has noticed a consistent problem: organizations focus on activity rather than outcomes.

“A lot of times it’s a lot of action and doing, but not really measuring the results.” — Phil Lew

They measure how many test cases they automated or how many scripts they ran, without asking whether any of it is actually improving quality or catching real problems.

Where AI Fits in This Pattern

AI testing tools are following the same trajectory. The promises sound familiar: generate test cases automatically, eliminate maintenance through self-healing scripts, achieve coverage that would be impossible manually. The market is responding enthusiastically. Analysts project that AI-enabled testing will grow from $856.7 million in 2024 to $3.82 billion by 2032. Gartner expects that 80 percent of enterprises will have integrated AI testing tools by 2027, up from 15 percent in 2023.

The correction is already underway. Generative AI moved into Gartner’s Trough of Disillusionment in 2025, which is the phase where early adopters start reporting mixed results and the distance between vendor claims and real-world outcomes becomes harder to ignore. Despite average spending of $1.9 million on GenAI initiatives in 2024, less than 30 percent of AI leaders say their CEOs are satisfied with the returns. Across AI projects broadly, 70 to 85 percent fail to meet expected outcomes. In testing specifically, 42 percent of companies abandoned most of their AI initiatives in 2025, up from 17 percent the year before.

This does not mean AI in testing is failing. It means AI is going through the same adjustment that every significant technology goes through. Lew thinks the long-term trajectory is clear: AI keeps getting smarter and faster, while human capacity stays constant. The question is not whether AI will become essential to testing, but how long it takes to get past the hype and figure out where it actually helps.

What This Suggests for Organizations Evaluating AI

If earlier cycles are any guide, there are a few things worth keeping in mind.

Start with problems, not capabilities. It is tempting to look at what AI tools can do and then hunt for places to apply them. The better approach is to start with what is actually causing pain in your testing process, where you are falling short on coverage or speed, and then ask whether AI can help with those specific issues.

“I think it’s better to look at problems first and then see where AI can be used to solve those problems rather than taking AI and looking for someplace to use it.” — Phil Lew

Broad AI initiatives that lack clear problem focus tend to produce disappointing results.

Budget for real costs, not just licensing. In every cycle we have been through, the expenses that surprised people were not the obvious ones. They were integration work, training, process changes, and ongoing maintenance. Lew points out that the same discipline that applies to automation applies to AI: figure out which subset of your testing actually benefits from the technology, rather than trying to apply it everywhere. The organizations that carefully considered what to automate, rather than trying to automate everything, came out ahead. The same will likely be true for AI.

Plan to invest in people, not just tools. This is consistent across every cycle. New technology changes what people do, but it does not eliminate the need for skilled people. Automation created demand for automation engineers. AI is creating demand for people who can evaluate AI output, configure AI tools effectively, and integrate AI into a broader quality strategy. Research from BCG found that successful AI transformations allocate about 70 percent of their effort to people, process, and culture, not technology.

Stay skeptical without being dismissive. The hype is real, but so is the underlying technology. AI is already producing measurable improvements in specific areas. Self-healing test scripts have reduced maintenance burdens by 35 to 50 percent in documented cases. The challenge is figuring out which vendor claims hold up and which do not, and that means asking hard questions and looking for real evidence rather than accepting marketing at face value.
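For readers unfamiliar with the term, here is a minimal sketch of the idea behind self-healing locators, not any particular vendor's implementation: keep several candidate locators per element, fall back when the primary one stops matching, and surface the fallback for review. It again assumes Selenium WebDriver, and the selectors and element names are hypothetical.

```python
# Illustrative sketch of the self-healing pattern (hypothetical selectors).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, candidates, element_name):
    """Try each (strategy, value) locator in order; report when a fallback is used."""
    for strategy, value in candidates:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != candidates[0]:
                # A real tool would feed this into a review queue so the primary
                # locator gets updated rather than silently drifting.
                print(f"Healed '{element_name}': matched via fallback {strategy}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No candidate locator matched '{element_name}'")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

login_button = find_with_healing(driver, [
    (By.CSS_SELECTOR, "[data-testid='login-submit']"),   # primary
    (By.ID, "login-btn"),                                 # fallback 1
    (By.XPATH, "//button[normalize-space()='Log in']"),   # fallback 2
], "login button")
login_button.click()
driver.quit()
```

The logging step matters as much as the healing itself: if fallbacks are applied silently, the suite keeps passing while its locators drift, which is exactly the activity-without-outcomes trap described earlier.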

Give it time. Every technology we have discussed took longer to mature than early projections suggested. There is real pressure to move quickly on AI, but rushing into something you do not fully understand usually costs more than being a bit patient. The organizations that tend to do best with new technology are often not the earliest adopters. They are the ones that let others go first, learn from what works and what does not, and move in once the tools and practices have stabilized.

The XBOSoft Perspective

We have spent about three and a half years working with AI in testing at this point: evaluating tools, helping clients think through adoption, even exploring whether to build our own tool at one stage. What we have learned is mostly about the gap between what works in demos and what works in practice. The enthusiasm around AI is familiar to us because we have seen it before with other technologies. What we try to bring to clients right now is some discipline around asking the right questions, setting realistic expectations, and treating this as a process that unfolds over time rather than a switch you flip.

Next Steps

Understand the broader landscape. AI in testing fits into a larger set of decisions about quality strategy and technology adoption. The pillar guide covers where AI helps, where it does not, and how to think about the tradeoffs.

Explore AI-Informed QA: Going Beyond the Hype

Talk through your situation. Every organization starts from a different place. A conversation can help clarify which lessons from past technology cycles apply to your context and what that means for how you approach AI.

Contact XBOSoft

Learn from other transitions. Case studies of how organizations have navigated earlier technology shifts offer practical lessons that apply to AI adoption.

Read our Case Studies

Related Articles and Resources

Looking for more insights on Agile, DevOps, and quality practices? Explore our latest articles for practical tips, proven strategies, and real-world lessons from QA teams around the world.