In response to my last post, Shrini suggested, “You should probably do a post on types of bugs that unit testing (or developer testing at any level) wouldn’t catch. That would be a fitting reply to all those who swear by automated unit testing/API testing.”
I am a big proponent of well-designed automated tests, and I have written a lot about the value of automation as an effective tool in the development process. But, I also know that automated tests find a relatively low number of bugs throughout a product lifecycle. If you think the primary goal of an automated testing effort is to find lots of bugs, or the same types of bugs as a person, then you probably don’t know very much about test automation. Personally, I don’t think of automated testing as a proxy for the tester or as a bug-finding solution. Sure, some automated tests can help find bugs, but more importantly, it is one approach I might use for
- defect prevention (esp. computational logic problems),
- earlier identification of key integration issues,
- potential degradation of critical areas (battery, performance, memory),
- efficient execution of redundant ‘checks’ (if necessary or for confidence),
- more effective/precise ‘oracles’ as compared to humans, and
- cost reduction in long-term sustained engineering.
In my new role at Microsoft I am leading a team of great SDETs that tests primarily at the foundation (API) level. Every day I see firsthand the value that automated regression testing at the unit and API levels provides to the overall production lifecycle (because we build every day). While the tests we run ultimately affect the customer’s experience, they also help reduce our overall production costs and drive certain aspects of ‘quality’ upstream. This is especially important in large-scale, enterprise systems where a build break or integration failure can be costly and lead to unnecessary delays.
But, this is just one level of testing, and I realize that most testers work at the ‘system’ level and rely heavily on automated GUI tests. I have never been a big fan of GUI automation, especially GUI regression testing. This is not to suggest that all GUI automation is bad; there are several situations where GUI automated tests can be especially valuable to a test team. However, there are a few situations where I think automating tests that manipulate the GUI is mostly a waste of time, such as attempting to emulate the behavior of a customer “scenario,” or trying to verify the ‘correctness’ of what the customer “sees” (e.g. visual verification).
I once told a colleague that the computer is really bad at emulating “me.” I said, “For example, sometimes when I type I hit the wrong key, or I lay my finger on a key for too long and that stupid Sticky Keys message box appears, and sometimes my hand position on my laptop causes my insertion point to jump to some random point in the text body and I have to reset it to the correct location. You can’t automate the unpredictability of me typing!” He said, “Sure I can!” I said, “OK…I know that we can automate randomness or errant behaviors to some extent, but why would we?”
I sometimes think that in our zeal to “automate everything” we forget that our products often get a lot of “face time” through self-hosting, product partners, beta releases, and other strategies that are intended to get feedback on unanticipated or escaped functional issues, behavioral issues in how people might use the features in different ways (scenarios), and of course the “look” or visual anomalies that might occur while using the product.
As an example, the other day I was searching for a new program for my students to practice their GUI automation skills (yes, while I generally dislike GUI automation it is still a good skill to have). I came across a text editor application called XINT. Within minutes of downloading the application and exploring the features I found a bug in the feature that inserts a URL into the text body.
As a simple example, I was going to show how we can automatically go through the menu structures to make sure there are no changes, and that the menu items trigger the appropriate events (e.g. displaying a dialog). I was going to develop an automated test demo that systematically marched through the menu structure and validated the expected event triggered by each menu item. An automated GUI test such as this could provide a high-level ‘check’ much more quickly than a tester could perform it. Also, it automates a redundant ‘check’ of the application under test that we might want to perform on each new build to potentially ‘look’ for changes. I wouldn’t expect this automated test to find a lot of bugs, but it would clue us in very quickly to any changes in the menu structure and any anomalies with basic functional expectations triggered by those menu items. Quick and simple.
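The menu-march ‘check’ described above can be sketched roughly as follows. On Windows the actual walking and clicking would go through Win32 calls (the SendMessage() approach mentioned below); here a hand-built `observed` dictionary stands in for whatever a hypothetical menu-walking helper would discover, so only the baseline-comparison logic is shown. This is a sketch under those assumptions, not a full harness.

```python
# Expected menu structure for the application under test, captured from a
# known-good build. The menu names and items here are made up for
# illustration; they are not XINT's actual menus.
BASELINE = {
    "File": ["New", "Open...", "Save", "Exit"],
    "Insert": ["URL...", "Date/Time"],
}

def diff_menus(baseline, observed):
    """Return human-readable differences between the expected menu
    structure and the one discovered in the current build."""
    problems = []
    for menu, items in baseline.items():
        if menu not in observed:
            problems.append(f"missing menu: {menu}")
        elif observed[menu] != items:
            problems.append(f"items changed in {menu}: {observed[menu]!r}")
    for menu in observed:
        if menu not in baseline:
            problems.append(f"unexpected menu: {menu}")
    return problems

# Simulated result of walking the new build's menus: "Save" was renamed,
# which the check flags immediately.
observed = {
    "File": ["New", "Open...", "Save As...", "Exit"],
    "Insert": ["URL...", "Date/Time"],
}
print(diff_menus(BASELINE, observed))
```

A real version would also record, for each item, which event fired (e.g. the class of the dialog that appeared) and diff that against the baseline too; the comparison stays the same.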
So, we get to this menu item, and after programmatically simulating the menu item “click” via a SendMessage() call to the appropriate menu item, the ‘correct’ dialog appeared. OK so far. We are not testing the “insert a URL” functionality; my high-level ‘check’ simply verifies that the correct event occurred (in this case, a dialog appeared). So, now I am going to send a message to “click” the Cancel button, and my expected result is for this dialog to go away and focus to return to the application under test (XINT). But, in this case the URL insertion dialog is repainted with the sample text removed, and a new label. (It gets even better, because clicking Cancel again forces the sample text into the text edit control. You have no choice at this point…you selected to insert a URL…so you are going to get it whether you like it or not!) But, there is a bug, and my automation could have found it.
But, a little more exploration and I discovered another anomaly that almost defies logic, and that automation would certainly not have found. When I pressed the ALT key, I noticed something odd: the fricking Cancel button disappears! However, as far as the ‘system’ is concerned, the handle to this control is still there and I can programmatically send a message to that button control. But, to me, the user…it’s gone!
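The disappearing Cancel button shows why a handle-based oracle can pass while the user sees nothing. A minimal sketch of the gap: the `Control` record below is a hypothetical stand-in for what Win32 calls like GetDlgItem() and IsWindowVisible() would report, since the real calls only exist on Windows.

```python
from dataclasses import dataclass

@dataclass
class Control:
    # Stand-ins for what the Win32 API would report: a valid HWND can
    # exist (SendMessage still works) even when the control is not drawn.
    handle: int       # nonzero means the lookup found the control
    visible: bool     # what IsWindowVisible() would return

def naive_check(ctrl):
    # Typical automation oracle: "the button exists, so we're fine."
    return ctrl.handle != 0

def visibility_aware_check(ctrl):
    # Still blind to layout and paint-order problems, but at least it
    # catches a control the user literally cannot see.
    return ctrl.handle != 0 and ctrl.visible

# The Cancel button after pressing ALT: handle intact, pixels gone.
cancel = Control(handle=0x000A0B2C, visible=False)
print(naive_check(cancel))             # automation is satisfied
print(visibility_aware_check(cancel))  # the user sees no button
```

Even the visibility-aware check only narrows the gap; it says nothing about whether the button is where the user expects it, readable, or drawn correctly, which is the larger point here.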
This is just one example of the types of issues that automation is not especially suited for, and even trying to automate a test for these types of issues is not just an exercise in futility; I would say it is damn near insanity. Of course, there are other types of issues that automation is not especially efficient or effective at detecting, such as ease of use, consistency in layout or behavior, general “look and feel,” and most importantly customer scenarios or user stories.
Bottom line: use automation for things that computers are really good at, such as computational logic and redundant ‘checks’ that we might want to run after each new build. And use humans to test for the things that humans are really good at, which often happen to be the things that will delight your customers if you get them right, or piss them off if you screw them up!