Some of you know that I am an organic gardener. I garden this way not only because I like the fresh taste of vegetables and fruit that aren't tainted with chemicals, but also because my daughter loves to eat things right off the vine or stem as we harvest our bounty, and I am perhaps overly protective of her. Here in the Pacific Northwest slugs are a particular problem, and because no single approach is 100% effective I must use several different techniques throughout the year to ward off or destroy these nasty, slimy creatures. Yes, it is unfortunately true: even with my diligent efforts and myriad tactics, a few slugs still get into the raised beds. This sure seems a lot like software testing, where some tests find some bugs and miss others, and iterative builds seem to let new bugs continually creep in. But this really isn't a revelation.
In 1983, in his seminal book Software Testing Techniques, Boris Beizer compared the diminishing effects of insecticides on boll weevils destroying cotton fields to the decreasing effectiveness of testing methods in exposing defects in software. This became well known as the Pesticide Paradox. The Pesticide Paradox states “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”
Now, unlike insects (or pests), bugs in software don't magically build up an immunity to our tests. What happens is that we design and execute tests using one approach, and that approach exposes some set of issues. Over time the number of issues exposed by that approach diminishes, yet a 'residue of subtler bugs' remains that those tests haven't detected. And just as the ladybugs in my garden that eat aphids and mites have no effect on slugs, not all of the approaches or techniques we might use in our testing are effective in detecting or preventing all types of bugs.
For the past 10 years, I have been teaching various software testing practices and approaches for solving complex problems at Microsoft. One cool aspect of my job is that I get to experiment a lot, in a reasonably controlled environment, with diverse groups of people. We often design these studies to examine the benefits and limitations of various testing approaches and methods in exposing different categories of defects, and to better understand how each approach can be used more effectively in the appropriate context. Of course, it should surprise no one that the pesticide paradox holds true not just in the classroom, but also in practice.
I often explain Beizer’s paradox by stating, “there is no single approach or method used in software testing that is completely effective in exposing all bugs, and some approaches or methods are more effective in exposing different types or categories of bugs.”
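To make that concrete, here is a minimal, hypothetical sketch (the function, its spec, and both seeded bugs are my own invention, not taken from Beizer or from any classroom study): one test design technique exposes one bug while leaving the other untouched, and a different technique is needed to catch the residue.

```python
def shipping_cost(weight_kg):
    """Tiered shipping per a made-up spec:
         0 < weight_kg <= 10  ->  5.00
         weight_kg > 10       ->  9.00
         weight_kg <= 0       ->  should raise ValueError
       Two bugs are deliberately seeded below for illustration."""
    if weight_kg >= 10:   # Bug 1: boundary error; the spec says "> 10"
        return 9.00
    return 5.00           # Bug 2: no validation, so weight_kg <= 0
                          # quietly returns 5.00 instead of raising

# A boundary-value test at the tier edge exposes Bug 1:
print(shipping_cost(10))   # returns 9.00, but the spec says 5.00

# Boundary-value tests on valid weights never touch Bug 2; a
# robustness (invalid-input) test is what finally exposes it:
print(shipping_cost(-1))   # returns 5.00 instead of raising ValueError
```

Each technique leaves its own "residue of subtler bugs": the boundary tests pass once Bug 1 is fixed, yet Bug 2 survives until a different kind of test is aimed at it.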
At Microsoft there is no "one size fits all" solution, and the Engineering Excellence group doesn't dictate how to test or which testing methods can or can't be used. But through a series of problem-solving exercises in our new SDET training program, each tester experiences the benefits and limitations of various approaches and techniques firsthand. Based on their experiences in the classroom and in 'real life,' they also learn that the most effective testing strategy is not to rely too heavily on any single approach, but to use a variety of test design principles and patterns throughout the product lifecycle. But, that's just how we roll.
Software Testing Techniques, 2nd Edition