Performance is no longer a "nice to have" but a must-have feature, especially for web-based applications. No matter how rich your product's functionality, if it fails to meet your customers' performance expectations, it will be branded a failure. Incorrect design decisions made at the outset of a project, based on invalid assumptions, may become impossible to remedy downstream. In summary, performance testing:
- Simulates the behavior of a large number of users to determine how an application responds.
- Determines how an application reacts and performs under a particular workload (a defined number of users executing a defined scenario).
Why do Performance Testing? (just a few reasons)
- Demonstrate that the system meets performance criteria.
- Compare two platforms with the same software to see which performs better.
- Discover what parts of the application perform poorly and under what conditions.
- Develop benchmarks to ensure performance is maintained or improved from release to release.
Some typical types of software performance testing include:
Load testing
- A multi-user test that simulates the expected user community, including delays ("think time") in their behavior.
- Executed with differing user loads to find information such as the maximum number of users that can be supported while still meeting the stated performance goals. For example, how many users can the system support with a maximum response time of 2 seconds?
Stress testing
- Determines the load under which a system fails, and how it fails.
- Keeps pushing the load up to the point of failure to see where the system breaks. For example, what happens when we reach 1050 users?
Soak (endurance) testing
- Also sometimes called a CHO (Continuous Hours of Operation) test.
- Runs the system at high levels of load for prolonged periods of time.
- Checks for memory and resource leaks.
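The load-testing idea above can be sketched in a few lines: simulate a number of concurrent users, each repeating a scenario with think time between actions, then check the collected response times against the stated goal. This is only a minimal illustration; the `place_order` stub, the user counts, and the 2-second goal are assumptions for the example, and a real test would drive the actual application (e.g. over HTTP) rather than a local function.

```python
import concurrent.futures
import random
import time

def place_order():
    """Stub for the system under test. A real load test would issue a
    request against the application here instead of sleeping."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated processing time

def run_load_test(num_users, requests_per_user):
    """Simulate num_users concurrent users, each executing the scenario
    requests_per_user times; return the per-request response times."""
    timings = []

    def user_session(_):
        session_times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            place_order()
            session_times.append(time.perf_counter() - start)
            time.sleep(random.uniform(0.0, 0.02))  # think time between actions
        return session_times

    with concurrent.futures.ThreadPoolExecutor(max_workers=num_users) as pool:
        for session_times in pool.map(user_session, range(num_users)):
            timings.extend(session_times)
    return timings

if __name__ == "__main__":
    timings = sorted(run_load_test(num_users=20, requests_per_user=5))
    p95 = timings[int(0.95 * len(timings)) - 1]
    print(f"requests: {len(timings)}, 95th-percentile response: {p95:.3f}s")
    # Compare the run against the stated performance goal (2 s in the example).
    assert p95 < 2.0, "performance goal not met"
```

Rerunning the same script with increasing `num_users` values is the essence of load testing; pushing `num_users` up until the assertion fails turns the same harness into a crude stress test.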