As I write this I am flying at 38,000 feet or so across the Arctic Circle. I am on my way to the Netherlands, where I have been invited to keynote the TestNet conference. I am very excited to attend this conference, meet new friends, and connect with old ones. But the guy beside me is hacking his head off (God, I hope he is not contagious). There is an elderly gentleman sitting in front of me constantly readjusting his chair up and back, up and back. And I think the woman behind me has a bladder problem, because about every 15 minutes she slams her tray up into my back and jerks violently on the back of my chair as she lifts herself up. Fortunately, on this flight there are no crying children within earshot (as of yet, anyway). Since I don’t sleep well on airplanes without being in a drug-induced stupor, I thought I would hammer out some thoughts on automated testing.
I am still surprised how often I hear people disparage automated testing. It’s almost like a rampant phobia, similar to fear of water, fear of heights, or fear of the unknown. What leads people to proclaim the disingenuous tripe I sometimes read on blogs or discussion forums? For example, the other day I read that “automation will not find any new bugs.” Well, I guess that is the experience of some folks. But of course we could say that any inexperienced, untrained, or non-thinking person who uses a tool is likely to misuse it and get hurt, do damage, or come to the false conclusion that the tool doesn’t work. Automated tests are not bad; stupid automated tests are bad. If your test automation is not providing value, don’t blame the tool…maybe a little introspection is in order.
All Automation Is Not Equal
The first problem in almost any discussion about automation is the lack of context. Automated tests differ in kind, and when done well they can be effective at helping us identify certain types of functional issues. An automated unit test suite can be a powerful tool for developers during refactoring. Of course some bugs escape unit testing because unit tests are not intended to be comprehensive. I have long been a proponent of functional testing at the API layer, and this is where my team plays today. I see firsthand the value of our automated tests in the early identification of potentially critical functional issues during our continuous integration process. While this level of testing is effective in helping prevent build breaks, it is not comprehensive and does not exercise the application's behavior through the user interface. And then comes GUI automation.
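As a tiny illustration of the refactoring point, here is a minimal sketch (Python for brevity; the `word_count` function and both tests are hypothetical, not from any real suite). A unit test suite like this gives a developer a fast safety net: a refactoring that subtly changes behavior trips an assertion immediately.

```python
def word_count(text):
    """Toy function under test; a hypothetical stand-in for any
    unit a developer might be refactoring."""
    return len(text.split())

# Two small unit tests. A refactoring that changed split() to
# split(" ") would still pass the first test but fail the second,
# because "".split() == [] while "".split(" ") == [""].
def test_simple_sentence():
    assert word_count("hello automated world") == 3

def test_empty_string():
    assert word_count("") == 0

test_simple_sentence()
test_empty_string()
print("unit tests passed")
```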
Why (Most) GUI Automation Sucks!
Although I am not a big fan of GUI automation, I have done it, I teach it, and I know that it can be of value in many ways. Last week I was discussing reasons for intermittent failures in test automation with a colleague. Of course, a large number of intermittent failures occur in GUI automation for a variety of reasons (that’s a different post somewhere in the future). But, as I was reviewing the test code, something dawned on me…the tests sucked!
It seems that when many people sit down to automate a GUI test (using just about any test automation paradigm), the automation is about as brain-dead as a zombie. I suspect that when folks sit down to automate a GUI test, the first thing they do is step through the UI manually. Then the tester automates each step until he/she arrives at some final state and validates the results with an equally brain-dead oracle. In other words, a lot of GUI automation consists of simplistic scripts that attempt to emulate human behavior patterns rather than well-designed tests that capitalize on the power of the computer as a tool.
Here is an example of a brain dead automated test that we might find if we asked someone to automate this “feature.”
using System;
using System.Diagnostics;
using System.Threading;
using System.Windows.Forms;

namespace BrainDeadAutomatedGuiTest
{
    class Program
    {
        [STAThread] // required for Clipboard access
        static void Main(string[] args)
        {
            // Launch the AUT
            Process testAut = new Process();
            testAut.StartInfo.FileName = "Hello.exe";
            testAut.Start();

            // Hard-coded sleep to give the AUT time to instantiate
            Thread.Sleep(2000);

            // Use a key mnemonic (Alt+N assumed here) to set focus to the
            // textbox; often this is skipped because people assume the
            // textbox has focus
            SendKeys.SendWait("%n");

            // Enter hard-coded test data
            SendKeys.SendWait("Boob");

            // Tab to the OK button and press it
            SendKeys.SendWait("{TAB}{ENTER}");

            // Another hard-coded sleep to make sure the messagebox appears
            Thread.Sleep(1000);

            // Now comes the brain-dead oracle: capture the messagebox text
            // (Ctrl+C copies a messagebox's text to the clipboard)
            SendKeys.SendWait("^c");

            // Now get the text from the clipboard and see if it contains the
            // hard-coded test data; often I see people compare strings
            // verbatim, which is a recipe for false positives & maintenance
            string actualResult = Clipboard.GetText(TextDataFormat.Text);
            Console.WriteLine(actualResult.Contains("Boob") ? "Pass" : "Fail");

            // Clean up - kill the messagebox then close the AUT
            SendKeys.SendWait("{ENTER}");
            testAut.CloseMainWindow();
        }
    }
}
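To make the oracle problem in the comments above concrete, here is a minimal sketch (Python for brevity; the messagebox strings are hypothetical) of both failure modes: a verbatim comparison that breaks on harmless boilerplate changes, and a substring check that "passes" even when the application reports an error.

```python
# The messagebox usually wraps the echoed input in boilerplate;
# the exact wording here is invented for illustration.
actual = "Hello, Boob!"

# Verbatim comparison: fails whenever the boilerplate changes, even
# though the behavior under test is correct - pure maintenance cost.
verbatim_pass = (actual == "Boob")

# Substring check (as in the script above): tolerant of boilerplate,
# but it also "passes" when the test data merely appears inside an
# error message - a classic false positive.
error_dialog = "Invalid name: Boob"
substring_pass = "Boob" in error_dialog

print(verbatim_pass, substring_pass)  # False True
```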
Now, you might be saying that this is an absurd example. Really? I don’t think so. I see this sort of automation all the time in examples on the web and at conferences. For example, how often have you seen an example along the lines of:
- Launch IE (or some other browser)
- Navigate to “http://google.com”
- Enter “Ruby” in the text box
- Press Search button
- Verify “Ruby” appears in results
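Reduced to a script, those steps look like the sketch below (Python, with a hypothetical `FakeBrowser` stub standing in for a real driver). Notice that the oracle only checks for the text the script itself just typed, so on any working search engine the assertion is true almost by construction: one hard-coded path, zero new information.

```python
class FakeBrowser:
    """Hypothetical stand-in for a real browser driver; just enough
    to show the shape of the script, not a real automation API."""
    def navigate(self, url):
        self.url = url
    def type_text(self, field, text):
        self.query = text
    def click(self, button):
        # A search engine will nearly always echo the query somewhere
        # on the results page, so this check cannot meaningfully fail.
        self.results = ["Results for: " + self.query]

browser = FakeBrowser()
browser.navigate("http://google.com")
browser.type_text("q", "Ruby")
browser.click("Search")

# The "oracle" verifies the input the script itself supplied.
assert any("Ruby" in r for r in browser.results)
print("test 'passed'")
```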
Now, if you really expect this type of automation to add value then you should really consider a new line of work…perhaps assembly line work.
Well, we are about 1 1/2 hours out from Amsterdam and breakfast is about to be served, so I am going to wrap this up now. Yes, I learned a long time ago not to complain if I don’t at least offer a solution (whining is never appreciated). So, later this week, perhaps on the trip home on Wednesday, I will suggest one possible solution to automate even this simple app that could provide value by increasing test coverage and potentially exposing some errant behavior (bugs).
(Oh…and by the way, while writing this a little dirt merchant in pajamas, with a pacifier sticking out of his pie-hole, started doing laps through the cabin banging off the sides of the chairs…but at least he wasn’t crying!)