Questions to Ask When Performance Testing

In addition to security testing, performance testing has come to the forefront of our clients’ requests. Now that we’re past basic ‘it works’ functionality, clients want to take their applications to the next level. They’ve realized that performance matters more as end-user expectations keep rising: where 3-4 seconds used to be ‘good enough’, users now expect 1-2 second response times, and even less for frequent usage scenarios. One client recently told us they had 10,000 users and wanted to simulate all of them logging in simultaneously. After convincing them that this scenario was highly unlikely, I decided to put together a list of performance testing considerations. The list is just what comes to mind and is by no means all-inclusive.

Performance Testing Considerations: Questions to Ask

  1. Who are your users? Break them down into types based on what they do and what privileges they have. Some will be admins, while others may do one thing and one thing only.
  2. What are your users really doing? Not all of them will be doing the same thing. Put yourself in their shoes: what are they searching for? What is their job role?
  3. When are they doing what? Depending on your application, different tasks will run at different times. If you sell accounting software, end-of-month and end-of-quarter close will trigger activities where many users access similar data.
  4. How often do they do these things? If your software is teleconferencing software, maybe Mondays are peak days and the top of the hour is when most users log in to a call. Is a two-person conference call or a 10-person call more likely? What happens when everyone exits a video conference at the same time?
  5. What functions do they consider most important? Beyond logging in, do you have a priority scheme based on different user types? What is most important to each user role, and are those tasks optimized for performance?
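The answers to questions 1-4 can be captured in a simple workload model before any tooling is chosen. Here is a minimal sketch in Python; the roles, task names, and percentages are illustrative assumptions, not figures from any real client:

```python
# Hypothetical workload model: user roles, their share of the user
# population, and the relative frequency of each task they perform.
# All names and numbers below are illustrative assumptions.
workload = {
    "admin":    {"share": 0.05, "tasks": {"manage_users": 0.7, "run_reports": 0.3}},
    "end_user": {"share": 0.95, "tasks": {"login": 0.2, "search": 0.5, "update": 0.3}},
}

def expected_task_load(total_users: int) -> dict:
    """Expected number of users on each task at a given peak population."""
    load = {}
    for role in workload.values():
        for task, freq in role["tasks"].items():
            load[task] = load.get(task, 0) + total_users * role["share"] * freq
    return load

# For 10,000 users, this spreads load across tasks -- far more
# realistic than assuming all 10,000 log in at the same instant.
print(expected_task_load(10_000))
```

Even a rough model like this usually shows that the “everyone logs in at once” scenario overstates the real peak on any single function.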
Based on the answers to these questions, pick out the worst scenarios from a simple manual-execution standpoint; you may notice latency in certain areas just by running them by hand. Rate each scenario as slow, medium, or fast, then cross-reference that rating with frequency of use and the importance of the function. From there, develop performance scripts in priority order using that scheme. That information and the results can then be used as benchmarks.
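The cross-referencing step above can be sketched as a simple scoring function: rate each scenario’s observed speed, frequency of use, and importance, then rank them to decide which performance scripts to write first. The scenario names, scales, and scoring rule here are assumptions for illustration only:

```python
# Ordinal ratings: higher = worse latency. Frequency and importance
# are rated 1-3. All scenarios and ratings are illustrative assumptions.
SPEED = {"fast": 1, "medium": 2, "slow": 3}

def priority(speed: str, frequency: int, importance: int) -> int:
    """Score a scenario; higher scores get scripted and benchmarked first."""
    return SPEED[speed] * frequency * importance

scenarios = {
    "login":            ("medium", 3, 3),
    "month_end_report": ("slow",   1, 3),
    "profile_update":   ("fast",   2, 1),
}

# Rank scenarios from highest to lowest priority score.
ranked = sorted(scenarios, key=lambda s: priority(*scenarios[s]), reverse=True)
```

A multiplicative score is just one reasonable choice; the point is that the ranking, not the tool, drives which scripts get written first.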
As you can see, when it comes to performance testing considerations, don’t jump straight to tools or chase results out of the gate. A methodical approach leaves you much better off in the long run, with results you can compare over time and use as a basis for improvement.