There are two parts to this question. The first part is: why are you testing? The answer to that question helps answer the ‘good’ test case question. Remember that testing is only one element of QA, and QA is verifying the satisfaction of requirements, both functional and non-functional (see What is software quality?). Some questions to ask are:
- What are the requirements that you are testing for?
- If you validate that a particular function exists, does that translate into quality?
- Is the test case ‘good’ because it can show that the function either works or does not work?
A ‘good’ test case is only a start toward accomplishing the overall objective of quality. For example, if we are testing an accounting application and want to reconcile accounts, a test case that validates that two accounts have been reconciled correctly, while also validating that all downstream calculations are correct, is a good test case. On the other hand, a test case that validates the precise position of buttons or the number format may not be as useful (unless that is one of the objectives of the software). If that test case failed, what action would be taken? Maybe nothing.
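The reconciliation example can be sketched in code. This is a minimal illustration, not a real accounting API: `reconcile` and `closing_balance` are hypothetical names, and the transaction data is made up. The point is that one test validates both the reconciliation itself and a downstream calculation that depends on it.

```python
def reconcile(account_a, account_b):
    """Match transactions by id; return the ids left unmatched on either side."""
    ids_a = {t["id"] for t in account_a}
    ids_b = {t["id"] for t in account_b}
    return ids_a ^ ids_b  # symmetric difference: present on only one side

def closing_balance(account):
    """A downstream calculation that depends on the reconciled data."""
    return sum(t["amount"] for t in account)

def test_accounts_reconcile_and_downstream_totals_agree():
    ledger = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": -40.0}]
    bank   = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": -40.0}]

    # Validation 1: the two accounts reconcile with no unmatched entries.
    assert reconcile(ledger, bank) == set()

    # Validation 2: downstream calculations agree after reconciliation.
    assert closing_balance(ledger) == closing_balance(bank) == 60.0

test_accounts_reconcile_and_downstream_totals_agree()
```

If this test fails, someone will clearly act on it; a test pinning a button's pixel position would not earn the same response.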
Another viewpoint is risk. What would happen if a defect slipped through because a test case was not executed? What would be the consequence? Would anyone care? So before pounding out test cases, think: if the test case fails, will it warrant action? If not, it could be a waste of time. With that in mind, we can turn to the contents or structure of a ‘good’ test case:
- A short name that follows a naming convention, so that you or your QA team can tell at a glance what the test case covers.
- The objective of the test case, an explanation of why it is important, and its type, i.e. regression, smoke, or acceptance. Many test cases will only be included in a full regression run, for instance.
- Clear steps that someone unfamiliar with the application can execute. For instance, “Click on XX button, Enter date”.
- One or more sub-steps with validation criteria. For example: “Report has XXX number”. Each criterion should be answerable with yes or no.
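One way to encode that structure is directly in the test itself. The sketch below assumes a pytest-style convention; the test id, the report fields, and the row-count criterion are all illustrative, not a prescribed format.

```python
def test_reg_rpt001_monthly_report_row_count():
    """
    Objective: the monthly report shows one row per reconciled account.
    Type: regression (included in the full regression run only).
    Steps:
      1. Build the report for a known set of accounts.
      2. Validate: report row count equals number of accounts (yes/no).
    """
    accounts = ["checking", "savings", "payroll"]

    # Stand-in for the real report builder, which is not shown here.
    report_rows = [{"account": a, "status": "reconciled"} for a in accounts]

    # Yes/no criterion: the report has the expected number of rows.
    assert len(report_rows) == len(accounts)

test_reg_rpt001_monthly_report_row_count()
```

The name carries the convention (`reg` for regression, `rpt001` for the report area), the docstring carries the objective, type, and steps, and the final assertion is the yes/no criterion.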
Getting back to “does anyone care”, prioritize test cases appropriately. If this test case failed, what action would take place? Would a patch be released to customers? Would the defect be fixed in the next version, or would it just be ignored along with many other defects? Test cases should be executed in priority order, so that all the important bugs are discovered as soon as possible. This casts priority in a new light: think carefully about priority and order of execution. One possible categorization would be smoke, acceptance, and regression.
Should you write detailed test cases for each minute function? As general guidance, any test case that fails should result in only one defect. For the accounting report example above, it may take some understanding of the application to get there, so initial test cases may sit at the wrong level, where one test case results in multiple defects or many test cases result in the same defect. Hopefully, over time, the test cases will improve.
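The one-failure-one-defect guidance can be made concrete by splitting a coarse check into focused ones. This is illustrative only; `subtotal` and `tax` are hypothetical functions standing in for pieces of the accounting report.

```python
def subtotal(items):
    return sum(items)

def tax(amount, rate=0.1):
    return round(amount * rate, 2)

# Too coarse: if this fails, is the defect in subtotal or in tax?
def test_invoice_total_end_to_end():
    items = [10.0, 20.0]
    assert subtotal(items) + tax(subtotal(items)) == 33.0

# Focused: each failure points at exactly one defect.
def test_subtotal():
    assert subtotal([10.0, 20.0]) == 30.0

def test_tax():
    assert tax(30.0) == 3.0

test_invoice_total_end_to_end()
test_subtotal()
test_tax()
```

Keeping the end-to-end check alongside the focused ones is fine; the point is that when something breaks, at least one failing test names the single function at fault.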