Our most recent webinar, Are You Making These 7 “Testing Metric” Mistakes?, with ARGO Data’s Mark Bentsen, prompted a question from a participant about unit test coverage that we didn’t have time to answer live. We asked Mark to share his response; here’s how he answered.
Question: How do we (or should we) enforce developer-based unit testing that covers the many options, so that QA can focus on test cases and user stories, the paths a true user would take? I contend that if test cases and stories are planned well, many of the error messages can be woven into the QA version of the tests.
Answer: Start by meeting with your development lead(s) and asking them to explain how they perform unit testing and how they determine whether it is adequate. Don’t be afraid to ask to be included on their reports on unit testing success and failure rates. Normally, development selects a type of structural coverage (statement, decision, condition, multi-condition, or path) along with a target coverage percentage, then sets a passing rate that must be reached before code is promoted to the next test level. For example: “Our unit tests will achieve 80% decision coverage, and we will not promote the code to QA until 93% of those tests are passing.” If development has not defined exit criteria for unit testing and does not measure its progress toward them, its unit testing should be treated as suspect. Get them to publish the exit criteria for success and hold them accountable for reporting on their status.
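To make the coverage types concrete, here is a minimal Python sketch (the `apply_fee` function is hypothetical, used only for illustration) of why decision coverage is stricter than statement coverage: a single test can execute every statement while still leaving one outcome of a decision untested.

```python
def apply_fee(balance, overdrawn):
    # Hypothetical function: deduct an overdraft fee when the
    # account is overdrawn.
    fee = 0
    if overdrawn:
        fee = 35
    return balance - fee

# This one test executes every statement in apply_fee
# (100% statement coverage) ...
assert apply_fee(100, True) == 65

# ... but the decision's False branch was never taken. Decision
# coverage also requires a test where the branch is skipped.
assert apply_fee(100, False) == 100
```

The same idea scales up: condition and multi-condition coverage would further require tests that vary each part of a compound condition, which is why the coverage type chosen matters as much as the percentage.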
Making sure that unit testing is thorough while still allowing QA to focus on the user takes effort, but it can be done. Define the exit criteria, verify that the required share of unit tests pass, and only then promote the code to QA. Take it one step at a time.
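One way to operationalize such an exit-criteria check is a simple promotion gate. This sketch assumes the example thresholds from the answer above (80% decision coverage, 93% passing); the function name and signature are illustrative, not part of any particular tool.

```python
def meets_exit_criteria(decision_coverage, pass_rate,
                        coverage_target=0.80, pass_target=0.93):
    # Illustrative promotion gate: both the published coverage target
    # and the pass-rate target must be met before code moves to QA.
    return decision_coverage >= coverage_target and pass_rate >= pass_target

assert meets_exit_criteria(0.85, 0.95) is True    # promote to QA
assert meets_exit_criteria(0.85, 0.90) is False   # pass rate too low
assert meets_exit_criteria(0.75, 0.95) is False   # coverage too low
```

In practice, coverage tools can enforce the coverage half of this gate automatically in a build pipeline, so the published criteria are checked on every run rather than reported by hand.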