Analyzing results is a critical part of performance testing. Results must be interpreted in light of the test's objectives, or they will have little meaning. For example, an objective might be to measure the maximum load a server can handle (stress testing), to verify that a given number of concurrent users can run stably for a set period, such as 1,000 mail users running for 24 hours on a mail server (CHO, continuous hours of operation, or endurance testing), or to determine how many users the system can handle across multiple scenarios while maintaining an average response time under X seconds (boundary testing).

Performance tests depend on many variables and need to be run in a controlled environment so that you know what is affecting the results. Hardware, the software environment, network conditions, and user scenarios should each be considered case by case, holding all variables constant while changing only one at a time. This provides a controlled test and the foundation for a valid analysis, and that information can then be used to characterize and solve the problem. Without the right information, the developer, however eager, may not be able to figure out where the bottleneck is: the server's performance, the website's timeout settings, the network bandwidth, and so on.

One common approach is to hold all variables constant while changing only the user scenarios. Running multiple scenarios that exercise several common functions helps pinpoint which parts of the application have performance issues. Another example is finding the hardware/software platforms that perform best for your application: by changing only the OS/database combination while leaving everything else constant, you can determine which combination performs best and feel comfortable including it in your list of supported platforms.
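
To illustrate the idea of holding load and environment constant while varying only the scenario, here is a minimal Python sketch. The scenario names, URLs, and load numbers are hypothetical placeholders, and in practice a dedicated load tool would drive the test; the point is only that every scenario runs under identical conditions, so differences in response time point at the application itself.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical scenario endpoints; replace with your application's URLs.
SCENARIOS = {
    "login":  "http://app.example.com/login",
    "search": "http://app.example.com/search?q=test",
    "report": "http://app.example.com/report",
}

CONCURRENT_USERS = 50      # held constant across all scenarios
REQUESTS_PER_USER = 20     # held constant across all scenarios

def timed_request(url):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def run_scenario(name, url):
    """Drive one scenario with a fixed number of concurrent users."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        futures = [pool.submit(timed_request, url)
                   for _ in range(CONCURRENT_USERS * REQUESTS_PER_USER)]
        times = [f.result() for f in futures]
    print(f"{name}: avg {statistics.mean(times):.3f}s, "
          f"p95 {sorted(times)[int(len(times) * 0.95)]:.3f}s")

if __name__ == "__main__":
    # Only the scenario changes between runs; the load level, hardware,
    # and environment stay the same, so any difference in the measured
    # response times can be attributed to the scenario under test.
    for name, url in SCENARIOS.items():
        run_scenario(name, url)
```

The same pattern applies to the platform comparison: keep the scenarios and load fixed and change only the OS/database combination between runs, then compare the resulting averages.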