Automated Regression Testing – Revisited

This sounds strange, doesn’t it? Revisiting regression? Regression, by definition, already means going back, so why revisit it? The truth is, as most software testers realize, regression tests tend to grow as software is developed. Each new feature brings new regression tests, so naturally, when you go back and execute all the tests as part of regression, the time to execute regression gets longer and longer. How long should regression tests take? That depends on many factors. I hate to sound like a consultant, but it also depends on your expectations. Some of our clients want their regression tests to run in less than an hour, while others, understanding that their application functionality is very complex, would accept full regression taking an entire day.

The bottom line is that over time, automated regression testing (yes, even when machines do the work) almost always takes TOO LONG. So, it’s important to revisit your regression tests periodically. For the project I’m working on, over the past months of July and August we revisited the full regression test scenarios to make sure they still apply to the latest system features and to improve execution time. During this process, we discovered that our automation planning had produced test cases that were too granular for the given objective. This blog summarizes our thinking and process in this effort.

  1. Why do we need to review the automated regression testing scenarios?
  2. How and what did we do?
  3. The Payback
  4. What we learned from this

Why Do We Need to Review Regression Scenarios?

  1. Reduce the full automated regression execution time so we can leave more time for the Dev team to fix any urgent issues.

The full regression test scenarios have been built up over two years. After every sprint release, we add the new feature’s test cases to the full regression library and update the existing scenarios to reflect the features changed in that sprint. This means more and more test cases, and executing all of them takes longer and longer. We now have 1166 test cases in total, and a full regression run takes 10 working days (two weeks). That is too long for a release; no one can wait two weeks for regression testing to finish.

  2. Make the test cases less granular so we can plan automation tests more easily.

Currently, our test cases are too granular, which makes it hard to plan automated test scripts. For example, we have 10 test cases for Edit Subscription, and each one covers a single sub-test of the overall feature. To automate the Edit Subscription function, it would be best to combine the duplicated steps and compress them into one script. But because the test cases are so fine-grained, we have to review all the related cases and merge them ourselves when planning automated tests for that feature. This eats up a lot of automation planning time and increases the risk of missing relevant test points.

How and What Did We Do? 

  1. Combine the sub-tests of one feature into one.

To cut the planning and maintenance time caused by overly granular test cases, we reviewed all the regression scenarios and merged the tests belonging to the same feature into one. In total, 531 test cases were identified as sub-tests of a feature and were combined into a single test per feature.
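To make this concrete, here is a minimal sketch, in pytest style, of what merging granular sub-cases into one script with shared setup can look like. The SubscriptionPage class and its methods are hypothetical stand-ins for a real UI or API driver, not our actual framework code.

```python
# A minimal pytest sketch of merging granular "Edit Subscription" sub-cases
# into one script with shared setup. SubscriptionPage is a hypothetical
# stand-in for the real UI/API driver.
import pytest


class SubscriptionPage:
    """Hypothetical driver; a real suite would wrap the UI or API here."""

    def __init__(self):
        self.plan = "basic"
        self.payment_method = "invoice"

    def open_edit_form(self):
        pass  # e.g. log in and navigate to the Edit Subscription screen

    def change_plan(self, plan):
        self.plan = plan

    def change_payment_method(self, method):
        self.payment_method = method

    def save(self):
        pass  # submit the form


@pytest.fixture
def subscription_page():
    # Setup that each of the former granular cases used to repeat on its own.
    page = SubscriptionPage()
    page.open_edit_form()
    return page


def test_edit_subscription(subscription_page):
    # Former sub-case 1: change the plan.
    subscription_page.change_plan("premium")
    subscription_page.save()
    assert subscription_page.plan == "premium"

    # Former sub-case 2: change the payment method.
    subscription_page.change_payment_method("credit_card")
    subscription_page.save()
    assert subscription_page.payment_method == "credit_card"
```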

  2. Delete the duplicate tests.

Because our test cases are organized by functional module, some test points appear in more than one module. For example, placing an order is related to two other functional modules, so those two modules also contained test cases covering the related order functions. When we revisited the scenarios, we deleted the tests that were already covered by checkpoints elsewhere, removing 59 of the 1166 test cases in our regression suite.
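As a rough illustration of how cross-module duplicates can be flagged, the sketch below groups test cases by a normalized title, assuming a hypothetical export of (module, title) pairs; in practice this data would usually come from the test case management tool.

```python
# Hypothetical (module, test title) pairs exported from a test case library.
from collections import defaultdict

test_cases = [
    ("Orders", "Place order with saved payment method"),
    ("Payments", "Place order with saved payment method"),
    ("Subscriptions", "Edit subscription plan"),
]


def normalize(title: str) -> str:
    # Ignore case and extra whitespace so near-identical titles group together.
    return " ".join(title.lower().split())


groups = defaultdict(list)
for module, title in test_cases:
    groups[normalize(title)].append(module)

for title, modules in groups.items():
    if len(modules) > 1:
        print(f"Possible duplicate: '{title}' appears in {modules}")
```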

  3. Find and delete test cases that are no longer applicable because of feature updates.

Although we update the existing test cases for feature changes after each release, some test points span several feature modules, so a few updates inevitably get missed. Revisiting these, we were able to delete another 35 test cases.

The Payback

  1. Saved 10% of automated regression testing time.

By deleting and reorganizing test cases, we reduced automated regression execution time by 10%. Not hugely significant, but repeating this process over time leads to cumulative gains.

  2. The ability to organize and prioritize our automation script generation on a regular basis.

By merging the sub-test cases of each feature, we were able to condense some test scripts and reorganize them to reduce execution time. It also positions us to improve future scripts and build more common, reusable modules.
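As an example of what such a common module might look like, here is a hypothetical shared login helper written against a Selenium-style driver; the URL and element IDs are placeholders, not values from our real application.

```python
# common/accounts.py -- a hypothetical shared module extracted from steps
# that several regression scripts used to repeat.

def login(driver, user, password):
    """Sign in once so individual scripts no longer script their own login steps."""
    driver.get("https://example.test/login")
    driver.find_element("id", "username").send_keys(user)
    driver.find_element("id", "password").send_keys(password)
    driver.find_element("id", "submit").click()
```

Each regression script can then call this shared login helper instead of repeating those steps itself.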

Automated Regression Testing Lessons – What We Learned From This

Through this test case revisiting, we learned that regular maintenance of test cases is an important part of quality assurance. Keeping test cases in line with updates to system functionality, so we are not executing invalid tests, and removing duplicates should both be done on a regular basis as part of test case management. It sounds like the natural thing to do, but when you are struggling to keep up and different people are writing test cases, they may not be aware of sub-feature duplication.

In the future, we plan to conduct overall maintenance for all scenarios every six months to ensure that our test cases are accurate.

The following is what I learned and thought through while revisiting our automated regression testing scripts and scenarios, and what we can do in the future to improve our test case management. You can use it as a reference and shape a test scenario maintenance plan that fits the context of the projects you work on.

  1. We should try to combine the sub-test cases of the same feature together when we add new feature tests to the regression library. This way we can reduce a lot of duplicate steps during the test.
  2. Add checkpoints to each test. Then we can filter the related checkpoints in each feature module and update them whenever the corresponding feature is updated.
  3. Even with Step 2, we will inevitably miss updating some test cases, so we still need periodic overall maintenance of all scenarios.
  4. As for how often to conduct the overall maintenance mentioned in Step 3, I think it depends on the number of test cases and how frequently they are used on each project. For our project, we have a large number of full regression test cases but we don’t use them very often, so we will do it every six months rather than after every monthly release. If the number of test cases is small and they are used more frequently, the review cadence can be increased.
  5. We can also use a priority filter to build different test libraries for different testing purposes, for example P1 and P2 for monthly regular releases and the full regression suite for big platform upgrades (a sketch of this kind of filtering follows this list).
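For example, if the automated suite runs on pytest, priority tiers can be expressed as markers and filtered at run time. The marker names below mirror the P1/P2 idea but are illustrative, not our actual configuration.

```python
# Hypothetical priority markers on two regression tests. The markers should
# also be registered in pytest.ini (under "markers =") to avoid warnings.
import pytest


@pytest.mark.p1
def test_place_order():
    assert True  # stands in for a critical checkout check


@pytest.mark.p3
def test_legacy_report_export():
    assert True  # stands in for a low-priority check
```

A monthly release run could then use pytest -m "p1 or p2" to execute only the high-priority tiers, while a plain pytest run keeps the full library for big platform upgrades.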

Almost all of our clients want automated regression testing, and many of them want us to automate test cases and scenarios that map to their manual test cases. As you can see from the lessons above, it’s important to review all your test cases, whether manual or automated, on a regular basis and determine whether they still meet your objectives.