I have been a little busy at work lately designing two new advanced software testing courses. One of the courses is on combinatorial testing. The course focuses primarily on feature decomposition to identify input parameter interactions, modeling input variables, using the more advanced features of our PICT tool to customize the model file, generating a variety of subsets of combinatorial tests from a single model with PICT to increase test coverage, and designing oracles for data-driven automated combinatorial tests.
In this particular course I used the Page Setup dialog in Paint as the feature to model in one of the exercises. As it turns out, this was a good choice, because it has made me rethink how to model input variables for use in combinatorial testing.
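For context, here is roughly what a model file for this dialog might look like. This is a minimal sketch with my own simplified parameter names and partition labels, not the actual exercise model:

PaperSize:    Letter, Legal, A4
Orientation:  Portrait, Landscape
LeftMargin:   zero, small, medium, large
RightMargin:  zero, small, medium, large
TopMargin:    zero, small, medium, large
BottomMargin: zero, small, medium, large
Scaling:      AdjustTo, FitTo

Generating different subsets of tests from this single model is then just a matter of the command line: for example, pict pagesetup.txt /o:2 for pairwise tests versus pict pagesetup.txt /o:3 for 3-way combinations.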
I generally don’t advocate hard-coding specific values for input parameters that have a linear range of values. The reason should be reasonably obvious: if we have a range of values from 1 to 100, and I hard-code the values of 1, 10, 50, 75, and 100 (for positive tests), then I have zero probability of ever including the value of 42 in combination with other input parameters. To avoid hard-coding values I usually recommend creating equivalence partitions of the appropriate input parameters (e.g. xsmall (1–10), small (11–25), medium (26–50), etc.). Modeling a range of input values with equivalence partitions allows me to randomly select a value from each set, increases the probability of testing with values that I might not otherwise include in a hard-coded list, and adds some variability to the inputs for improved coverage of the overall population of possible values.
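To make that concrete, here is a minimal sketch (my own illustration, not course material) of how a test harness might resolve partition labels to random representative values at execution time:

import random

# Equivalence partitions for an input with a valid range of 1-100.
# Drawing a fresh representative on each run means values like 42
# eventually get exercised, which a hard-coded list never would.
PARTITIONS = {
    "xsmall": range(1, 11),    # 1-10
    "small":  range(11, 26),   # 11-25
    "medium": range(26, 51),   # 26-50
    "large":  range(51, 101),  # 51-100
}

def pick(partition_name: str) -> int:
    """Return a random representative of the named partition."""
    return random.choice(PARTITIONS[partition_name])

# Resolve the abstract labels in a generated test row to concrete
# values just before the test executes.
row = {"LeftMargin": "small", "RightMargin": "medium"}
concrete = {name: pick(label) for name, label in row.items()}
print(concrete)   # e.g. {'LeftMargin': 18, 'RightMargin': 37}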
However, sometimes we might want to include specific values in the model file we use to generate combinatorial tests. These specific values might include boundary conditions or other values based on historical failure indicators for that feature (a sketch of such a model appears after this discussion). In the past I have suggested that we don’t necessarily have to specify boundary values in our combinatorial tests, for a few reasons:
- many boundary issues are single-mode faults (meaning the error occurs when one parameter is set at, or immediately above or below, its boundary condition)
- testing for single-mode errors is often easier and less costly than combinatorial testing
- combinatorial testing might obfuscate the cause of a boundary bug
However, I am now convinced that:
- some developers are so inept at unit testing that they completely overlook boundary conditions (if you are a developer and only write “happy path” unit tests, please read Pragmatic Unit Testing by Hunt and Thomas, and Clean Code by Robert Martin)
- we find boundary bugs so late in the test cycle that someone determines they are too obscure to fix
- we have “trained” customers to avoid boundaries (due to the number of issues and resultant failures that often occur around boundaries) so we don’t care about them anymore either
- we don’t understand the fault model and therefore don’t know how to adequately identify boundary conditions and test for them
But boundary issues are still fun to find, and they always make for good examples in training or conference demos.
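For what it’s worth, when we do decide to seed specific values into a model (as mentioned above), PICT handles that naturally, since explicit values can simply be listed alongside the partition labels. Here is a hedged sketch, again with made-up labels, assuming letter-size paper in portrait orientation; the ~ prefix is PICT’s marker for an invalid value, so each generated test pairs it only with valid values of the other parameters:

LeftMargin:  zero, medium, 8.499, 8.5, ~8.501
RightMargin: zero, medium, 8.499, 8.5, ~8.501
Scaling:     AdjustTo, FitTo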
Anyway, on to the bug. While ‘checking’ the ranges of the margins on Paint’s Page Setup dialog for the exercise in this course, I came across an interesting anomaly. When the margins were set to values grossly outside the allowable range and I pressed the OK button, I got an appropriate error message. But when I changed the Scaling variable state from Fit to: to Adjust to:, the Fit to: value changed to 0 even though the textbox control was grayed out. I then realized that the margin values were being used to auto-calculate the Fit to: output values.
Since the boundary value for letter-size paper with a portrait orientation is 8.5 inches, I decided to see what happens when I set the left margin to 8.501 and the right margin to 0 and then change the Scaling from Fit to: to Adjust to:. Interestingly enough, the Fit to: value changed to 4,294,965,329. OK… I had just overflowed a variable (the developer only allows the user to input a maximum of 2 characters (99) in the Fit to: textboxes).
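As an aside, that number has the classic signature of unsigned 32-bit wraparound: 4,294,965,329 is exactly 2^32 − 1,967, which is what a computed value of −1,967 looks like when it is stored or displayed as an unsigned 32-bit integer. A quick sketch of the arithmetic (my reconstruction of the symptom, not Paint’s actual code):

# 4,294,965,329 is -1967 reinterpreted as an unsigned 32-bit
# integer: negative results wrap modulo 2**32.
displayed = 4_294_965_329
assert displayed == (-1967) % 2**32   # 4,294,967,296 - 1,967
print(2**32 - displayed)              # 1967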
Surely, I thought, a page-size boundary is a standard value, and surely someone tested this. But I decided to check the specific boundary value just to see what happens anyway. So, I set the left margin to 8.5 and the right margin to 0, changed the Scaling from Fit to: to Adjust to:, and…
There are many ways to expose this failure. Another fun way is to set the Scaling parameter to Adjust to: first. Next, set the left margin to 8.5 and the right margin to 0 (assuming letter-size paper with a portrait orientation), and click the OK button. Then open the Page Setup dialog again and… game over!
Now, I didn’t really find this bug doing combinatorial testing. Although combinatorial testing might ultimately reveal this problem (depending on the model of inputs provided to the tool that generates the variable combinations), this bug was discovered during the data modeling process, while figuring out where calculations were being performed on certain variables. Once I saw an output boundary anomaly caused by other input variables, I forced those input values to target the boundary conditions of the output variable I wanted to investigate further. So, while we should use failure indicators and experience to specify important values in our combinatorial tests (in conjunction with random values drawn from the total population of possible values), I am still not thoroughly convinced that we should always include specific boundary values in our combinatorial test models, because I suspect that the process of modeling this feature for combinatorial testing would itself likely have exposed this issue.
But in the end, this is really just another example of a simple boundary bug that could easily have been found during unit testing.