Why do project managers worry so much about the ratio of testers to developers? How much functionality there is to test is certainly determined by what the developers produce, but applying a simple ratio is not the answer. I’ve heard it claimed that the agile development paradigm requires more developers than testers. In my opinion, this is far from the truth: adding developers can create even more concerns and increase the chance of breaking the application if agile is not implemented correctly. So what does determine the number of testers needed? Over the years, we’ve developed a methodology to gauge the required testing effort based on several factors.

Number of Testers to Developers Depends on…

Complexity of the application under test

  • For a mobile online game, you probably don’t need more than one tester. Even though the scenarios could be endless, you can easily prioritize. But a mobile messaging application would require more technical knowledge and a different skill set, in addition to more testing resources.

Users’ expectations of quality

  • The industry the users are in has a significant influence on their expectations of quality. Take operating systems, for instance: Windows 7 Professional (whose users are presumably businesses) will require more testing than Windows 7 Starter (home users doing non-critical tasks).

Amount of user interaction

  • The more user interaction, the more testing required. Compare Windows and Google, for instance: the ratio of developers to testers in Windows operating system testing is 1:1, sometimes up to 1:2, while at Google it’s 2:1. Depending on the application, testing sometimes can’t be done through the GUI. At Google, for example, a lot of testing is done by developers through the back end.

Application maturity

  • Some features or projects are in their initial phase, with limited functionality available for testing and most of the application still under development. Depending on the application’s maturity, more or less testing effort is required. However, even if the application is not complete, testers can start working on requirement understanding, test planning, test design, etc.

Task intensity and application complexity

  • Depending on the types of testing, the timelines involved, and the application’s complexity, more or fewer test resources are necessary. Performance testing, security testing, install testing, etc. can all add to the complexity and scope of testing.

Automation test coverage

  • The fewer tests that are automated, the more manual testers are needed. So in the initial phases of development, manual testing will be more prevalent until tests can be automated. However, applications with intensive user interaction and high customizability can increase the manual testing effort, as they are not well suited to test automation.
  • The more frequently the application is updated, the less test automation can be applied in the beginning. Especially with continuous integration, testing can sometimes have a hard time keeping up.
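To make the automation-coverage point concrete, here is a minimal sketch (my own illustration, not a formula from this article) of how the remaining manual workload shrinks as automation coverage grows:

```python
def manual_test_runs(total_tests: int, automated_fraction: float) -> int:
    """Number of tests that must still be executed by hand each cycle."""
    if not 0.0 <= automated_fraction <= 1.0:
        raise ValueError("automated_fraction must be between 0 and 1")
    # Whatever is not automated has to be run manually every cycle.
    return round(total_tests * (1.0 - automated_fraction))

# Early in development, little is automated:
print(manual_test_runs(500, 0.10))  # 450 manual runs per cycle
# As the automated suite matures, the manual load shrinks:
print(manual_test_runs(500, 0.80))  # 100 manual runs per cycle
```

The example also shows why frequently updated applications stay manual-heavy early on: until the automated suite catches up, every release cycle repeats the full manual remainder.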

As you can see, the issue is not the number of testers to developers, but the software development project itself. Lately, with the popularity of agile, there is a movement toward merging the developer and tester roles, where anyone can be assigned to any task at any phase. However, this is not realistic: people have different skills and personalities, and some are better suited to development or to testing. Additionally, developers sometimes think that testing is a low-skill job beneath them, so there is a tendency to avoid doing any testing. Until this mindset changes, it will be difficult to completely combine the two roles.

In the end, the answer is still ‘it depends’, but by applying the factors listed above, you can come up with at least a good ‘guesstimate’ backed by sound reasoning and logic.