Archive for September, 2010
This week I was in Ho Chi Minh City, Vietnam. After 3 days of attending the VistaCon 2010 conference and 1 day visiting the MS office here, I got to explore the city, visit some museums, and end my trip by zipping around the city on a rented scooter. What great fun! Michael Hackett and I rented scooters on Saturday, drove around for a bit, then around noon got separated. Three hours later we met back at the place where we rented the bikes, and Hung Nguyen was there as well. We once again set out for a coffee shop, then on to a Czech-style beer hall well hidden in the middle of the city. After a beer (or two) and a great squid dinner, Hung and I headed off to Vive for a few games of pool with his son Denny. Next we headed to a karaoke place where I embarrassed myself and also got a treat listening to Hung, Denny, and our companions sing songs in Vietnamese. Today was a continuation of the day before, and I spent the day riding out to Hiep Phuoc to explore the countryside a bit. Getting caught in 2 torrential downpours was not part of the plan and forced me to stop once in a small coffee shop for an hour, then at a pho stand for another hour to wait out the rain. Away from the city communication is mostly smiles, head movements, and hand gestures, but iced coffee is well understood and I knew how to say pho bo to order noodles with beef. All in all, I had way too much fun this week immersing myself in the culture and getting to meet so many wonderfully friendly Vietnamese!
Back to the conference: I was very honored to be invited to present the opening keynote for VistaCon 2010, the first software testing conference in Vietnam. My dear friends Hung Nguyen and Michael Hackett and the rest of the staff at LogiGear organized a fantastic conference that hosted about 180 people from 6 different countries. I also presented a 2-hour tutorial on combinatorial testing practices, and another talk on random test data generation in automated tests. The talks went well, although culturally the Vietnamese people are a bit shy about speaking out, and many opted to talk with me one on one or in smaller groups during breaks or the lunch hour. One attendee later told me, “I didn’t understand everything you said, but your talks really made me think.”
The software industry in Vietnam is growing rapidly, and Ho Chi Minh City has tremendous potential both as an outsourcing destination and as a base for distributed development. So, it was no surprise to repeatedly hear the question, “What skills will we need as software testers in the future?”
Those of you who read this blog regularly or have listened to me speak at conferences know that I think software testers should have a rich understanding of the “system.” In my opinion, the less you know about the “system” you are testing, the greater the potential to miss important issues, and the weaker your ability to troubleshoot issues and identify patterns of software testing that can then be applied in the appropriate context.
For example, many testers have heard of the problems with the antiquated double-byte character set (DBCS) encodings and will continually cut and paste hard-coded strings containing “problematic” CJK characters into various input textboxes. Unfortunately, much of this ‘testing’ is simply wasted effort. Due to a lack of “system” knowledge, they don’t know that these characters were problematic in file I/O operations on ANSI-based systems, or where an application is thunking between Unicode and ANSI. Today the native character encoding of the Windows operating system is Unicode. Different encoding; different problems.
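To make that concrete, here is a minimal sketch of the kind of data loss that occurs when an application thunks between Unicode and a legacy ANSI code page. The strings and the choice of code page 932 (Shift-JIS) are just illustrative; this runs as-is on the .NET Framework, while newer .NET versions would first require registering CodePagesEncodingProvider.

```csharp
using System;
using System.Text;

class EncodingRoundTrip
{
    static void Main()
    {
        // Japanese text plus a Hangul character that has no mapping
        // in the Japanese ANSI code page (932, Shift-JIS).
        string original = "日本語テスト한";

        // Emulate a Unicode -> ANSI -> Unicode thunk: the round trip an
        // application makes when it calls ANSI APIs or writes ANSI files.
        Encoding ansi = Encoding.GetEncoding(932);
        byte[] ansiBytes = ansi.GetBytes(original);
        string roundTripped = ansi.GetString(ansiBytes);

        // Characters with no mapping in the code page fall back to '?',
        // so the round-tripped string no longer matches the original.
        Console.WriteLine(original == roundTripped
            ? "Round trip preserved the string"
            : "Data loss: '" + original + "' became '" + roundTripped + "'");
    }
}
```

On a Unicode-native system the same string passes through untouched, which is exactly why blindly pasting CJK strings into textboxes rarely finds anything today.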
I also think testers should be competent in at least one programming language, for several reasons that I have discussed in previous posts. Over the past few years my message has been pretty clear: to grow as professionals in the software testing discipline, many of us must improve our overall technical skills and knowledge. I have leaned in this direction mostly because I saw this skill gap in many testers and understood that companies that produce software will require a greater breadth and depth of skills from their testers. The key operative phrase in that last sentence is “companies that produce software” because that is the lens through which I view the maturing role of software testing. My perspective of testing is biased towards testing practices at companies that produce software.
Cem Kaner also gave a keynote at VistaCon. At first I wasn’t sure whether his keynote was on investment banking or testing, but he eventually drew a parallel between the job of “quants” in the financial sector and the role of a business domain expert tester working for an outsourcing company. He did say that if you work for a company that produces software then you probably want to increase your technical skills, and if you work for an outsourcing company that tests software produced by a software producer then you should become a specialist in the business domain of the software you are testing. You know…I agree. If you want to grow your career as a vendor for companies that outsource the testing of their products to specialists who test software from an end-user perspective, then you should definitely become an expert in that business domain. On the other hand, if you work for a company that produces software or creates software solutions, then I think you need to constantly improve your overall knowledge of the complete system, including some understanding of the domain in which you are testing.
Surely in the future there will be demand for testers who are business domain specialists working for outsourcing vendors, and there will be demand for testers who want to work at companies that produce software and who will need an increasingly rich set of technical skills. I suspect that professional testers in the future will have a mix of skills and knowledge that will enable them to adapt to the changing job market and the demands of the industry.
This week I am in Hyderabad, India. It has been some time since I have been here, and it is really nice to get back to this part of the world. It’s also nice to finally put a face to an email alias, get to meet new engineers at Microsoft IDC, and reconnect with old friends. For those who have never been here, India is an incredibly amazing place with a rich culture. And the food…wow…it’s like a party in my mouth.
I am teaching 2 classes on various approaches and techniques used in software testing. Our course discusses various fault models, provides examples and hands-on practice with patterns of software testing (PoST) to help expose specific categories of functional bugs, covers structural test design to evaluate control flow and help reduce overall risk, and introduces some internal tools. Of course, many of these concepts are intended to improve the effectiveness of our test designs and also provide a foundation for exploratory and even ad hoc (bug bash) testing approaches.
Of course, much of the conversation tends to center around incorporating these foundational principles into automated test designs. To me, the act of designing an automated test is much more than recording some sequence of actions to emulate a user, or filling in a spreadsheet with keywords and hard-coded test data. Also, when I talk about automated test scripts I am not simply referring to some script-let that drives the automation to emulate the actions/inputs a user might perform. In fact, those who read the blog regularly know that I am not a big fan of GUI-driven automation. But that is a tangent. Much of our automation spans from the API level to the GUI level.
Of course, an automated test is code. We need competent test developers to write (nearly) bulletproof code. The code must be essentially bulletproof because every time an automated test throws a false positive our team loses a bit of confidence in the value the automation effort is providing. Of course, getting to this level of automation requires a strong design. Before we begin writing a single line of code for an automated test script we should consider a few key factors.
- Purpose – we need to define the primary objective of the test. Some tests are focused on evaluating a single outcome (micro), and some are designed to look for systemic issues by emulating end-to-end scenarios (macro). Also, we must decide whether this is a positive test intended to demonstrate that the program functions as expected with specific or generalized valid inputs, or a negative test using invalid inputs that should trigger the appropriate exception handlers or error messages. A well-designed test has a clearly defined purpose with a predictable outcome that we can objectively evaluate against a predetermined expectation. Without a purpose or a predictable outcome we are simply automating events and, at best, hoping to expose some gross error.
- Oracle – this is perhaps the hardest part of automating a test. If we cannot automate the oracle then we should really consider whether to automate that test at all. Sure, automation can also be useful to ‘set up’ the environment to a particular state, but I don’t consider that to be an automated test. In my opinion an automated test must have an oracle that can accurately determine the outcome of that test. The whole purpose of an automated test is to provide value to the testing effort and to free up my time. If I have to manually check the outcome of an automated test, it isn’t really freeing up much of my time. There is little more boring than sitting in front of a machine and watching an automated test. Also, in my opinion automated test oracles should evaluate whether the purpose or objective of the test succeeded or failed. Trying to design an oracle to look for systemic-level problems is usually not effective beyond identifying gross anomalies (crashes, hangs, etc.). But the test should have checkpoints that are able to catch unexpected failures or inconclusive conditions that cause the test to exit prematurely or otherwise fail to execute as intended.
- Data – many tests, from unit-level tests to system-level tests, require test data. The design phase of a test is a good time to think about what data we need. For example, does the test require specific ‘real-world’ data, or could we model the data by decomposing it into equivalence partitions and generating parameterized random test data that is representative of the data required for the test (see the sketch after this list)? Many tests require some form of input, and using hard-coded values in an automated test just makes no sense at all.
- Approach – we also need to consider the approach for our test. For example, if our application under test requires input data, we need to decide whether to use a data-driven approach with static test data or to generate random test data. Also, is this a combinatorial test, a permutation test, a state transition test, or are we looking for single-mode faults and targeting individual parameters? The type of issue we are trying to detect, or the hypothesis we are attempting to validate, might influence the approach used in the design. Now is also a good time to think about reuse, dependencies, and specific environment configurations necessary to run the test.
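To illustrate how these factors come together, here is a minimal sketch of an automated test with a clearly defined purpose, an automated oracle, and parameterized random test data drawn from a valid equivalence partition. The routine under test (Array.Sort as a stand-in) and the partition boundaries are hypothetical; the point is the structure, not the specific function.

```csharp
using System;
using System.Linq;

class SortTest
{
    static void Main()
    {
        // Purpose: verify that the sort routine under test orders any
        // valid input correctly (a positive test with generalized inputs).
        int seed = Environment.TickCount;
        var rand = new Random(seed);
        Console.WriteLine("Test seed: " + seed); // logged so failures are reproducible

        bool passed = true;
        for (int run = 0; run < 100; run++)
        {
            // Data: parameterized random input from the valid partition
            // (lengths 0..50, values -1000..1000), not hard-coded values.
            int[] input = Enumerable.Range(0, rand.Next(0, 51))
                                    .Select(_ => rand.Next(-1000, 1001))
                                    .ToArray();
            int[] output = (int[])input.Clone();
            Array.Sort(output);

            // Oracle: two properties that define a correct sort, computed
            // independently of the routine under test:
            // 1) every element is <= its successor, and
            // 2) the output is a permutation of the input.
            bool ordered = true;
            for (int i = 1; i < output.Length; i++)
                if (output[i - 1] > output[i]) ordered = false;

            bool samePermutation = input.OrderBy(x => x).SequenceEqual(output);

            if (!(ordered && samePermutation))
            {
                Console.WriteLine("FAIL on run " + run + " (seed " + seed + ")");
                passed = false;
            }
        }
        Console.WriteLine(passed ? "PASS: 100 randomized runs" : "FAIL (seed " + seed + ")");
    }
}
```

Logging the seed turns a randomized test into a reproducible one: any failing run can be replayed exactly, which keeps false positives from eroding confidence in the automation effort.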
I am sure there are other things to consider. But without a good design strategy our automated tests are likely to be fragile and error prone, and to provide little value in the long run.
When I first started at Microsoft I worked on the Windows 95 international test team. Not only did we focus on globalization testing, but we also did a lot of the localization testing for the East Asian language versions (Japanese, Korean, Simplified Chinese, and Traditional Chinese). Localization is the adaptation of software to a particular target market. Translation of resource strings is one of the most visible parts of the localization process, and in those days part of our testing effort was spent on translation validation (e.g. checking the strings for appropriate translation). (In retrospect, I now think it is a huge waste of time and resources to use professional testers to validate translation “quality.”) During our localization testing cycles it was common practice to take screen shots of issues we found. These screen shots often helped put things in context for the localization engineers and helped them troubleshoot the issue. Of course, there were other times when we took screen shots to put other anomalies in context.
I am generally not a big fan of attaching screen images to bug reports carte blanche. Joe Strazzere has an excellent post describing why a screen shot doesn’t always add value in bug reports. But I also know that there are times when screen shots of the desktop can be of value. When we are testing using pre-defined tests or exploratory approaches, we are physically there to see anomalies as they occur. But (hopefully) nobody is babysitting the machines our automated GUI test scripts are running on. So, when an unexpected anomaly causes an automated GUI test script to fail or return an indeterminate result, it might sometimes be beneficial to capture the desktop state as an image. That screen capture can provide clues about the state of the desktop at the time of the unexpected failure.
There are several 3rd-party tools that some testers use to capture the desktop image and save it to a file. Automated test frameworks should also have methods that test developers can call to capture screen shots at important points during the execution of an automated test script, or when an anomaly occurs. But if not, here is a simple method that will take a snapshot of the desktop and save the desktop state as a JPEG file.
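A minimal sketch along those lines, assuming a Windows desktop and the .NET System.Drawing and System.Windows.Forms assemblies (the class and method names here are illustrative):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

public static class ScreenCapture
{
    // Captures the primary desktop and saves it as a JPEG file.
    public static void SaveDesktopImage(string filePath)
    {
        Rectangle bounds = Screen.PrimaryScreen.Bounds;

        using (Bitmap bitmap = new Bitmap(bounds.Width, bounds.Height))
        {
            using (Graphics g = Graphics.FromImage(bitmap))
            {
                // Copy the pixels of the entire primary screen into the bitmap.
                g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            }
            bitmap.Save(filePath, ImageFormat.Jpeg);
        }
    }
}
```

An automated test script might call ScreenCapture.SaveDesktopImage with a file name that includes the test name and a timestamp, so each captured image can be matched to the corresponding entry in the test log.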
I would not capture tons of images during a test run; they just aren’t that valuable. And, I do not advocate capturing images to use as oracles…they are just too unreliable in my opinion. But, there are times when a screen capture of the desktop might add value and provide some context for the tester or the developer in troubleshooting issues.