Did you ever notice that when you ask someone to test something, the first thing they do is start ‘testing’?
I often see this in my classes, and when I ask the person, “What is the purpose of your test?” the typical response is, “I’m testing this,” or “I’m trying to find a bug.”
Unfortunately, this seems to indicate that little or no forethought goes into the act of software testing. To some people, testing appears to be little more than pounding away at the keyboard, trying whatever flies into the subconscious mind while interacting with the software, and declaring a bug upon stumbling on unexpected behavior or something they might disagree with.
This is why I found it especially interesting that in my own research, and in the case studies by Juha Itkonen, among testers who were trained in formal software testing techniques or patterns there was no significant difference in defect rates or coverage between pre-defined test cases and an exploratory testing approach. This is not to say that one approach to testing is preferred over the other. It is not an either/or proposition, as I explained in my post on the pesticide paradox, and there are certainly more than two approaches to software testing. Testing requires multiple approaches to most effectively aid us in collecting and presenting the appropriate information to the decision makers.
But I am often puzzled that we can easily think of negative or destructive tests once we have the product in hand, yet when we design a set of tests from the requirements, those tests simply verify the requirements and little else. I wonder why we can think of ‘tests’ while executing other tests, but we can’t think of those same tests beforehand. Is there some limitation in our psyche that prevents us from analyzing a problem until we are actually faced with it (software in hand)?
I don’t think so, but I suspect there is a mental hurdle: we sometimes feel more productive when we are interacting with software than when we are sitting back and analyzing the problem before executing well-designed test cases. (More tests doesn’t equal better testing!)
The bottom line is that if we are given a set of requirements and can only design tests that verify those requirements and nothing else, then we are probably not thinking critically about how to design test cases.
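To make the distinction concrete, here is a small sketch using a hypothetical requirement (“a username must be 3–20 alphanumeric characters”) and an illustrative validator I made up for this example. The first group of checks simply mirrors the requirement; the second group is the kind of negative, destructive thinking we usually only do once the product is in hand, but could just as easily design up front:

```python
import re

# Hypothetical validator for the made-up requirement:
# "A username must be 3-20 alphanumeric characters."
def is_valid_username(name):
    return isinstance(name, str) and re.fullmatch(r"[A-Za-z0-9]{3,20}", name) is not None

# Tests that merely restate the requirement:
assert is_valid_username("abc")           # minimum length
assert is_valid_username("a" * 20)        # maximum length
assert not is_valid_username("ab")        # too short
assert not is_valid_username("a" * 21)    # too long

# Negative/destructive tests we rarely design beforehand:
assert not is_valid_username("")          # empty input
assert not is_valid_username(None)        # wrong type entirely
assert not is_valid_username("  abc  ")   # surrounding whitespace
assert not is_valid_username("abc\x00")   # embedded control character
assert not is_valid_username("ユーザー名")  # non-ASCII input
```

Notice that nothing in the second group requires the running product; every one of those tests follows from asking “what could go wrong?” at design time rather than waiting to stumble on it at the keyboard.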