In a recent webinar that I moderated with Ken Pugh, we had a great discussion on acceptance tests. One of the points Ken made was that we all want to move testing forward. A no-brainer, right? Yet we don't do it. Maybe we just don't know how?
Ken brought up testing requirements: finding and fixing requirements defects is much cheaper than finding and fixing defects later, in production. He then showed the classic exponential curve illustrating how steep and expensive fixes become for defects discovered in production. I pointed out, though, that this curve may not apply the way it once did. Finding defects in production is still darn expensive, but perhaps less so than it used to be: with DevOps tools and configuration management we can roll back, locate the errors, and fix them faster and more effectively than before. However, just because shifting left, or moving testing forward, means finding requirements defects earlier, it doesn't necessarily follow that the defects that do make it to production cost less to fix. It means there should be far fewer of them. If you charted the defects found at each stage of development, those numbers should have changed dramatically with agile.
Regarding requirements defects, are there any studies out there that show a shift in the percentages of where and when defects are found during the software development cycle? The problem with this question is that many agile shops don't record defects during development, so they wouldn't know which defects they found in requirements. But I thought I'd ask anyway.