Performance Testing – Lessons Learned

When software testers take on a performance testing project, they often take shortcuts in the interest of cost savings and ignore performance testing basics. We have found that while these shortcuts may produce quick results, those results are often neither reliable nor valid. We frequently help clients who start a performance testing effort on their own and then call us in because it's not going as well as they'd like. Some of the most common performance testing lessons learned that we encounter include:

1. Shortening think time to get more testing out of a fixed number of virtual users, often because trial versions of performance testing software cap the virtual user count. In the actual production environment, users have varying think times, so think time should be randomized. Shortening think time doesn't accurately represent real people using your application (see the sketch below).
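
As an illustration, here is a minimal sketch of randomized think time using the open-source Locust load testing tool. The endpoint and the 3-12 second range are illustrative assumptions; real think times should be derived from how your users actually behave.

```python
# A minimal sketch of randomized think time with the open-source Locust tool.
# The /dashboard endpoint and the 3-12 second range are assumptions for
# illustration; derive realistic think times from production usage data.
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Each virtual user pauses a random 3-12 seconds between tasks, rather
    # than a shortened or fixed delay that would inflate the effective load.
    wait_time = between(3, 12)

    @task
    def view_dashboard(self):
        self.client.get("/dashboard")
```

With a script like this, generating more load means adding virtual users, not cutting the pauses between their requests.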

2. Testing in a smaller environment and extrapolating performance to a larger one. Most testers assume that every component of the application will scale linearly. In reality, applications rarely scale linearly, and different parts of the application's workflow behave differently and have different performance characteristics. Testing and measuring at a capacity close to what you expect from your user base, with some allowance for growth, provides a much more realistic assessment (although it may take longer) than extrapolating from a smaller environment, as the sketch below illustrates.
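
One way to see why linear extrapolation misleads is the Universal Scalability Law, which models contention and coherency costs as load grows. The sketch below uses made-up coefficients purely for illustration; in practice you would fit them to throughput measured at several real load levels.

```python
# Illustration: linear extrapolation vs. the Universal Scalability Law (USL).
# The coefficients below are made-up values for illustration only; in practice
# you would fit them to throughput measured at several real load levels.

def usl_throughput(n, lam=100.0, sigma=0.05, kappa=0.001):
    """Throughput at n concurrent users under the USL.
    lam   -- single-user throughput in requests/sec (assumed)
    sigma -- contention penalty from serialization (assumed)
    kappa -- coherency penalty from crosstalk (assumed)
    """
    return (lam * n) / (1 + sigma * (n - 1) + kappa * n * (n - 1))

for n in (10, 100, 1000):
    linear = usl_throughput(1) * n   # naive linear extrapolation
    actual = usl_throughput(n)       # USL prediction
    print(f"{n:>5} users: linear={linear:>9.0f} req/s  usl={actual:>9.0f} req/s")
```

With these example coefficients, 10 users deliver roughly 650 requests/sec where the linear model predicts 1,000, and at 1,000 users throughput actually collapses, which a small-environment test would never reveal.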

3. Using performance testing tools to test only critical pages instead of the entire workflow. This strategy creates unrealistic workloads that don't represent real usage, and it often fails to catch the worst performance issues. If you don't test against a realistic workflow, you could be neglecting important performance problems (see the workflow sketch below).
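
For example, here is a minimal sketch of a full-workflow script, again using Locust as an assumed tool; the URLs, credentials, and payloads are hypothetical placeholders for your application's real user journey.

```python
# A minimal sketch of exercising an entire workflow rather than one page.
# All URLs, credentials, and payloads below are hypothetical placeholders.
from locust import HttpUser, task, between

class CheckoutUser(HttpUser):
    wait_time = between(2, 8)  # randomized think time between steps

    @task
    def full_purchase_workflow(self):
        # Walk the whole journey so load lands where real users put it,
        # not just on the single "critical" product page.
        self.client.post("/login", data={"user": "demo", "password": "demo"})
        self.client.get("/products")
        self.client.get("/products/42")
        self.client.post("/cart", json={"product_id": 42, "qty": 1})
        self.client.post("/checkout")
```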

4. Measuring performance by response time alone. Most performance testing tools report response time as one of the key metrics. Response time indicates whether or not there is a problem, but it tells you nothing about the cause or what to do next. Other metrics to examine in parallel include transactions per second, throughput, processor usage, and memory usage, among many others; a sketch of collecting such metrics follows.
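
As a simple illustration, the sketch below samples processor and memory usage on the server while a load test runs, using the psutil library; the five-second interval and CSV format are arbitrary choices. Correlating these samples with the load tool's response-time log by timestamp is what turns "it is slow" into "it is slow because".

```python
# A minimal sketch of capturing server-side metrics alongside a load test,
# using the psutil library (pip install psutil). The sampling interval and
# CSV layout are illustrative choices, not any particular tool's format.
import csv
import time

import psutil

with open("system_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    for _ in range(60):  # sample for about five minutes
        writer.writerow([
            time.time(),
            psutil.cpu_percent(interval=1),   # averaged over one second
            psutil.virtual_memory().percent,  # system-wide memory in use
        ])
        time.sleep(4)  # one row roughly every five seconds
```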

There’s A Better Way To Do Performance Testing

These methods, while 'quick and dirty', are just that. Certainly, working this way was understandable when performance testing tools carried expensive license fees. However, next-generation performance testing tools are much more affordable than they were even a few years ago, and they include robust data analysis functionality to help you troubleshoot problems. You still need to understand performance testing basics, interpret results, and set up load profiles and scenarios, but at least the tooling should not keep you from doing the job right.

Interested in learning more about performance testing tools?

The software testing experts at XBOSoft can help! Contact us today.