When starting a white box testing project, it’s best to come up with a list of white box testing requirements; otherwise it can be very difficult to satisfy your client. We recently worked with a client who was looking for a ‘check’ because things were just not running quite right, and they were concerned they had coding problems. Before we got started, we sat down with them and asked several questions in order to come up with a good estimate of the work to be done and a timeline:

  • Can your software be ported to run on a standard Linux machine for testing, or does it require proprietary hardware?
  • Do you have existing white box tests, a test framework, or any other white box test infrastructure, or would this be starting from scratch?
  • Do you have existing tools that you prefer?
  • Do the developers do any sort of unit testing? Coverage analysis measures how much of the code the existing unit tests actually exercise. If there are no unit tests, we can run static analysis on the code alone.
  • What does your dev tool chain look like?
  • How much is the code under test still changing?
  • Can the white box tests execute completely independently from the UI? If not, is the UI X Window based, or is it some sort of proprietary display/user input mechanism (like the panel on the connected hardware)?
  • How high-level (or low-level) are the entry points for the user and for the other systems that the software interfaces with?
  • Are there other outside asynchronous inputs, environmental effects, or time/sequence dependencies?
  • Have you done an analysis that generates some sort of function point estimate from the lines of code?
  • Do you have goals or trends for cyclomatic complexity over time?  What tools do you use for this, if any?
  • Do you use any external libraries that aren’t part of the C++/C# standard set?

This was just the beginning of our process of collecting white box testing requirements. As you can see, a conversation like this could go on for quite a while, but it’s best to get it all down before digging in with any tools.