Archive for the ‘General Testing Topics’ Category
Well, now that the summer has “mostly” passed and we are “mostly” finished with our latest release I am finally able to see light at the end of the tunnel. So, time to start blogging again!
Back in May my colleague wrote a post on why 100% automation pass rates are bad. In the past I also questioned whether striving for 100% pass rates in automated test passes was an important goal. I wondered whether setting this goal focused testers on the wrong objective. I was concerned that my managers might assume that 100% automation implied the product was of high quality. And I struggled with the notion of investing time in automated tests that won’t likely find new bugs after they are designed versus time spent “testing.”
Automated testing is now far more pervasive throughout the industry than it was in the past. Automated tests running against daily builds are an essential element in an Agile lifecycle. Continuous integration (CI) requires continuous feedback from automated tests. So, let’s dispel some myths and explain why 100% pass rates are not only a good thing, but an important goal to strive towards. Let’s start by understanding the nature and purpose of many automated test suites.
Even with high volume automation, the number of automated tests is a relatively small percentage of the overall (explicit and exploratory) testing effort dedicated to any project. This limited subset of targeted tests does not in any way imply the product works as expected. An automated test suite provides information about the status of specific functional attributes of the product for each new build that incorporates changes in the code base. The ability of those tests to provide accurate information is determined by the design and reliability of each test and the effectiveness of the oracle that test uses. Basing the overall quality of a product on a limited subset of automated tests is as foolish as the notion that automated tests will replace testers.
Also, most automated test suites are intended to provide baseline assessments, or to measure various aspects of non-functional behavior such as performance, bandwidth, or battery usage. For example, my unit tests provide a baseline assessment of a function or method in code, so when I refactor that code at a later date my unit tests validate that the method works exactly as it did before the code change. Likewise, many higher-level test automation suites are some form of regression test, similar to lower-level unit tests. Regression tests help identify bugs that are introduced by changes in the product code through refactoring, adding or removing features, or fixing bugs. Automated regression test suites are usually not intended or designed to find “new” bugs; they provide a baseline and help find regressions in product functionality after changes in the code base.
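To make that concrete, here is a minimal sketch of a unit test acting as a regression baseline. The PriceUtil class and its rounding rule are hypothetical, invented just for illustration:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical method under test, invented for this example.
public static class PriceUtil
{
    public static decimal RoundPrice(decimal price)
    {
        // Business rule: round to two decimals, away from zero on midpoints.
        return Math.Round(price, 2, MidpointRounding.AwayFromZero);
    }
}

[TestClass]
public class PriceUtilTests
{
    // This test pins down today's behavior. If a later refactoring of
    // RoundPrice changes the observable result, this baseline test fails.
    [TestMethod]
    public void RoundPrice_RoundsMidpointsAwayFromZero()
    {
        Assert.AreEqual(10.13m, PriceUtil.RoundPrice(10.125m));
        Assert.AreEqual(-10.13m, PriceUtil.RoundPrice(-10.125m));
    }
}
```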
We should also remember that the purpose of testing is to provide information continually throughout the lifecycle of a product. Testers who are only focused on finding bugs are only providing one aspect of information. I know many books have been written touting that the purpose of testing is to find bugs. However, we should get beyond outdated and limited views of testing and mature the profession towards the more valuable service of collecting and analyzing data through tests, and providing decision makers with the information that will help them make appropriate decisions. Testers should strive to provide information that:
- assesses the product’s ability to satisfy explicit and implicit goals, objectives, and other requirements – measure
- identifies functional and behavioral issues that may negatively impact customer satisfaction, or the business – find bugs
Automated tests are just one tool that helps the team (especially management) quickly assess a product’s ability to satisfy a limited set of explicit conditions after changes in the code base (especially in complex products with multiple developers and external dependencies). Automated tests enable earlier identification of regressions in the product (things that used to work but are now broken), and also provide a baseline for additional testing.
Automation pass rates below 100% indicate a failure in the “system.” The “system” in this case includes the product, the test code, the test infrastructure, and external dependencies such as Internet connections. An automated test that is failing due to a bug in the product indicates the product doesn’t satisfy specific functional, non-functional, or behavioral attributes that are important. (I wouldn’t waste time writing automated tests for unimportant things. In other words, if an automated test could fail and the problem wouldn’t have a good probability of being fixed, then I likely wouldn’t waste my time writing that test.) So, if the pass rate is something that is looked at every day (assuming daily builds) by the leadership team, then there is usually more focus on getting a fix in sooner.
Tests that report a failure due to faulty test code are essentially reporting false positives. A false positive indicates a bug, but in this case the bug is in the test code, not in the product. Bugs in test code arise for various reasons. But whenever a test throws a false positive it eats up valuable time (testers need to troubleshoot, investigate, and fix) and also reduces confidence in the effectiveness of the automation suite. An automated test suite should be bulletproof…and testers should adopt zero tolerance for faulty or error-prone test code. Ultimately, every test failure (the test fails, aborts, or the outcome is inconclusive) must be investigated or the team may become numb to failing tests.
Becoming numb to automated test pass rates of less than 100% is a significant danger. In one case a team overlooked a product bug because the pass rate was consistently around 95% and they had stopped investigating all failures in the automated test suite. That team became accustomed to the pass rate varying a little due to unreliable tests that sometimes passed and sometimes threw a false positive due to network latency. So, when one test accurately detected a regression in product functionality it went unnoticed, because the team had become numb to the pass rate and did not adequately investigate each failure.
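One common source of those latency-induced false positives is a fixed sleep in the test code. Here is a minimal sketch of one way to harden such a test; the TestWait helper and the feedClient object are hypothetical names invented for the example:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class TestWait
{
    // Polls a condition until it holds or the timeout elapses.
    // Replacing a fixed Thread.Sleep with a bounded poll like this
    // removes one common cause of latency-induced false positives.
    public static bool Until(Func<bool> condition, TimeSpan timeout, TimeSpan pollInterval)
    {
        var stopwatch = Stopwatch.StartNew();
        while (stopwatch.Elapsed < timeout)
        {
            if (condition())
                return true;
            Thread.Sleep(pollInterval);
        }
        return condition(); // one last check at the deadline
    }
}

// Usage in a test (feedClient is a hypothetical object under test):
// bool synced = TestWait.Until(() => feedClient.IsSynced,
//                              TimeSpan.FromSeconds(30),
//                              TimeSpan.FromMilliseconds(500));
// Assert.IsTrue(synced, "Feed did not sync within 30 seconds.");
```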
The bottom line is that teams should strive for a 100% pass rate in functional test suites. I have never met a tester who would be satisfied with less than 100% of all unit tests passing, so we shouldn’t be hypocritical: we should demand the same from our own functional automated test suites.
Every few months the STE vs. SDET debate reemerges like the crazy outcast relative who comes to visit unexpectedly and sits around complaining about imaginary ailments and reminiscing about how things were in the good ol’ days. We certainly don’t want to be rude to our relatives, so we tolerate their rants while watching the clock and giving subtle hints about the late hour. But with the ridiculous ‘debate’ between STE and SDET I can be rude: drop it! It’s a baseless discussion without merit. It’s only a title!
In this previous post I explained the business reasons why Microsoft changed the title from STE to SDET. But, for some reason people commonly mistake the title for the role or job function. In the good ol’ days our internal job description for an STE at level 59 included ‘must be able to debug others’ code’ and ‘design automated tests.’ Almost all STEs hired prior to 1995 had coding questions as part of their interview and were expected to grow their ‘technical skills’ throughout their career. That was the traditional role of the STE.
As I explained in this previous post, we established the title of SDET to ensure that testers at a given level in one organization in the company had comparable skills to another tester in a different organization. As part of the title change, the company decided that we needed to reestablish the base skill set of our testers to include ‘technical competence.’ Unfortunately, when the career profiles were introduced some managers misinterpreted ‘technical competence’ as raw coding skills and the naive ideology of 100% automation. These same managers now complain their SDETs don’t excel at ‘bug finding’ and customer advocacy.
On my current team, the program managers are big customer advocates. They run their own set of ‘scenarios’ against new builds at least weekly. My feature area is testing private APIs on our platform. Our primary customers are the developers who consume those APIs, but we also must understand how bugs we find via our automated tests might manifest themselves and impact our customers. So, our team also spends quite a bit of time self-hosting and doing exploratory testing, and we even started a new approach, which we call "day in the life" testing, that takes customer scenarios to the nth degree to help us better understand how customers might use our product throughout their busy days. Our product has 93% customer satisfaction.
So, if it’s true that the SDETs on some teams aren’t finding bugs and lack customer focus (and I suspect it is for some teams), then those teams hired the wrong people onto their test team. If SDETs don’t balance their technical competence with customer empathy then we have a problem; and I will say it is likely a management problem.
The testing profession is diverse and requires people to perform different roles or job functions during the development process and over the course of their career. Microsoft didn’t eradicate the STE “role”; we simply changed the title of the people we hire into our testing roles and reestablished the traditional expectations of people in that role.
Differentiating between STE and SDET in our industry seems nonsensical to me, and I also think this false differentiation ultimately limits our potential to positively impact our customers’ experience and advance the profession. Testers today face many challenges, and hiring great testers (regardless of the job title) is about finding people who not only have a passion and drive to help improve our customers’ experience and satisfaction, but can also solve tough technical challenges to advance the craft and help improve the company’s business.
Last Sunday evening our summer league Monarchs hockey team had a game against the Ice Dogs. In our previous game we tied this team, so I knew this would not be an easy game. To compound things we had a short bench (10 players and a goalie); enough for 2 forward lines and 2 defense lines. It was a hard game, but our team really gelled and we played one of our best games this summer season. Just like the saying, ‘when the going gets tough the tough get going.’ When you have a great team of people they don’t sit around and cry like a bunch of pantywaists, play the victim card, point fingers, or incessantly complain. A good team buckles down in hard times in spite of the difficulties that might lie ahead and works together to get things done. Individual heroes need not apply. Teams don’t worship heroes; they value every person on the team.
The weekend hockey game was a good break for me. You see, we have been in ship mode for our Mango release on the Windows Phone. The hockey game was a good outlet for some pent-up frustration. Every D-man had at least 2 shots on goal, and I blocked a couple of shots; one off the mask and one off my inner thigh, where of course there are no pads, and yes it left a pretty good bruise. But, as they say, “pain is temporary; a win is forever.”
Seemingly against the odds, we ended up winning the hockey game 5 to 1.
Ship mode oftentimes gets a little crazy. Second guessing takes on a whole new meaning. “Did you test this?” “What about that?” “I have a situation where I do such and such, and when the sun came out (remember, we’re in Seattle) something bad happened. Have you seen this before?” Some people run around looking for fires, others are trying to start them.
As I get older I have learned not to react to fires as I did in my younger days. I have learned that sometimes fires aren’t really fires at all; it’s just a spark that someone is recklessly trying to fan into a flame. Sometimes there are fires that burn themselves out, but you have to manage them in a controlled burn. And then there are the fires that have to be dealt with. Dealing with fires late in the product cycle is not fun for anyone on the team. But, a team of people is responsible for doing just that and it is seldom easy; and it happens in the ship room.
Our ship room looks at a lot of data every day throughout the product cycle to help us manage our release schedule and stay focused. In ship mode, data is scrutinized even more closely and every bug goes under the microscope. Managers must now work together to make some hard decisions about whether to take a fix. There is often intense discussion, but you will never hear anyone play the consultant card and say “it depends.” The folks in the ship room have been in the game a long time; they know the risks and they know the business. Of course they know “it depends.” They don’t want a bunch of hand waving and bloviating; they need facts to make hard decisions. If you say an issue is going to adversely affect customers, you had better be able to explain how customers are impacted, how many customers are impacted, how customers get into that predicament, and whether there is a potential work-around.
In the end, a team of senior managers must make hard decisions about which issues to punt and which to fix based on the information that is presented. This is never easy at any time during the product cycle, but in ship mode each issue is carefully investigated down to root cause, the fix is understood, and the impact of the fix and its testing considerations are well defined before the final decision is made. Of course, many products are schedule driven, but at the forefront of every decision in our ship room is customer impact. Perhaps that is why customer satisfaction for Windows Phone 7 is at 93%, and why I am glad to be on a team that works hard to do the right thing for our customers.
The other day my friend and colleague Alan Page was asking about the levels of testing. When I teach fundamental testing concepts I often use the ‘levels of testing’ as a model to help students conceptualize different types of testing activities and potentially different stages of testing processes during the software development lifecycle (SDLC). In his book “Software Testing Techniques,” B. Beizer writes that there are 3 different types of testing activities. He groups them as the unit/component, integration, and system ‘levels of testing.’ Beizer explains the different levels as:
“A unit is the smallest testable piece of software, by which I mean it can be compiled or assembled, linked, loaded, and put under the control of a test harness or driver.”
“A component is an integrated aggregate of one or more units. A unit is a component; a component with subroutines it calls is a component, etc. By this (recursive) definition, a component can be anything from a unit to an entire system.”
“Unit testing is the testing we do to show that the unit does not satisfy its functional specification and/or that its implemented structure does not match the intended design structure.”
It’s interesting to note Beizer’s explanation of the purpose of unit testing. Perhaps this is the inspiration behind TDD concepts rediscovered 10 years later. It also is counter to the typical ideology that unit tests are intended to “prove it works.”
“Integration is a process by which components are aggregated to create larger components.”
“Integration testing is testing done to show that even though the components were individually satisfactory, as demonstrated by successful passage of component tests, the combination of components are incorrect or inconsistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.”
“A system is a big component.”
“System testing is aimed at revealing bugs that cannot be attributed to components as such, to the inconsistencies between components, or to the planned interactions of components and other objects. System testing concerns issues and behaviors that can only be exposed by testing the entire integrated system or a major part of it.”
I often use the following visual model to illustrate the concept of ‘levels of testing.’ This model is intended to show how each level builds upon the previous level, and also shows the increasing scope of each level of testing.
The problem with this model is that if we don’t understand the internal structure of software, such as methods (or functions), classes, APIs or interfaces, forms or other UI elements, etc., it is difficult to differentiate between any of the levels below the system level (the entire integrated system, or product).
One way to explain this model is that the unit testing level tests individual methods or functions in a class. Component level testing exercises public methods that call one or more other methods in a class. Integration tests are targeted at testing individual APIs or combinations of APIs, treating the APIs as black boxes, usually without traversing through a user interface. And system testing tests both functional and behavioral aspects of the entire product’s components together in a single build. Obviously, the scope of testing at the system level is very large and includes functional testing of computational logic, non-functional testing such as performance, security, etc., and behavioral testing such as usability, look and feel, etc.
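Here is a minimal sketch of what the lower levels might look like in practice; the Cart class is hypothetical, invented just for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, invented for this example.
public class Cart
{
    private readonly List<decimal> prices = new List<decimal>();

    public void AddItem(decimal price) { prices.Add(price); }

    public int ItemCount { get { return prices.Count; } }

    // A public method that aggregates the units it calls:
    // a "component" in Beizer's recursive sense.
    public decimal Checkout(decimal taxRate)
    {
        return Math.Round(prices.Sum() * (1 + taxRate), 2);
    }
}

[TestClass]
public class CartTests
{
    // Unit level: the smallest testable piece, one method in isolation.
    [TestMethod]
    public void AddItem_IncrementsItemCount()
    {
        var cart = new Cart();
        cart.AddItem(10.00m);
        Assert.AreEqual(1, cart.ItemCount);
    }

    // Component level: a public method exercised together with the
    // internal units it depends on (the stored items plus the tax math).
    [TestMethod]
    public void Checkout_AppliesTaxToItemTotal()
    {
        var cart = new Cart();
        cart.AddItem(10.00m);
        cart.AddItem(10.00m);
        Assert.AreEqual(21.80m, cart.Checkout(0.09m));
    }
}
```

An integration test would exercise the same kind of public surface across a real API boundary (for example, against a service or library build), and a system test would drive the fully integrated product.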
Another problem with this model is that it may appear that by focusing testing efforts at the system level (e.g. testing each build through a user interface) we would have greater coverage and implicitly cover the ‘levels’ below the system. Unfortunately this is not always the case because
- the user interface can mask or hide some functional bugs in public APIs that are called by other developers
- the system is large and complex and we may miss underlying functional bugs
As Beizer indicated, each ‘level’ of testing has different objectives and can help identify certain types of bugs more efficiently as compared to the other ‘levels of testing.’ In our experience at Microsoft, we have learned that hiring massive numbers of people to test at the system level with little or no unit/component or integration testing does not necessarily result in higher “quality,” and may cost more in maintenance and support due to undetected functional issues.
Another model that I also like to show comes from Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory. In their chapter on automation they illustrate the “test automation pyramid” with 3 layers of automated tests.
This model also shows how each layer of automated tests builds upon the layers beneath it, and how each layer is targeted towards helping the tester (and developer) identify different types of issues during the SDLC. I also like this example because it shows where automated tests are most effective or provide the greatest value to developers and testers.
Ideally there should be heavy emphasis on unit/component level testing, additional investment in API level testing, and as Crispin and Gregory state, “The top tier represents what should be the smallest automation effort, because the tests generally provide the lowest ROI.” While ROI means different things to different people, I would generally agree that GUI automated tests tend to be less reliable and require much more maintenance as compared to other levels of automation.
There are also some illustrations of testing levels that show a sequential progression from unit testing to component testing to integration testing and finally to system testing. In my opinion these don’t provide much value to anyone other than process wonks who are only capable of linear thought. There are also other models of ‘levels of testing’ that folks have devised, usually to separate unique classes of system testing. Models are abstract concepts that can be used to help explain complex systems, and sometimes a little more detail in the model helps people understand the system.
But, I guess that is both the benefit and the drawback of models. Models can help explain complex concepts, but they can also be misused when single-minded individuals attempt to create rigid linear processes from an abstract model, or assume that one particular implementation of a model is representative of the abstract concepts that the model was attempting to present.
Well, it has been 60 days on my new team. It has been a challenge going from an IC back to a management role. Time management is a huge challenge. I am also still ramping up on our feature area (I now know more about social networks than I ever thought I would). By day I find my time split between people management (team meetings, 1:1’s, etc.) and project management (triage meetings, status meetings, external partner sync-ups, etc.). At night I read through the API specs and brush up on my C++. While I come up to speed I am fortunate to lead a great team of highly competent SDETs. Someone once asked me whether, if I were ever to leave the classroom and go back to managing a team in a product group, I would hire people as smart as I am. I replied, “Well, I am not really that smart, so I would want a team of people who are much smarter than me.” And, that’s exactly the team I have!
So, what does my team do? Essentially we are responsible for integrating the social networking functionality onto the Windows Phone devices. The Windows Phone is already the “real Facebook Phone” according to Wired magazine, and also helps you manage feeds from your LinkedIn contacts. The next release will also include Twitter integrated into the People’s hub (peeps) as well so you won’t need to install and switch between multiple apps to read or send updates from your contacts on different social networks. (The photo illustrates key aspects of the Peeps hub for those of you that don’t have a Windows Phone yet).
How we test in our corner of Microsoft
Last year I wrote about the importance of functional testing at the API layer. So it is interesting that one year later I am leading a team that does just that. Our feature area includes the functional capabilities of the APIs that integrate social network features into the Windows Phone. 100% of our automated tests are designed to test functional scenarios at the API layer. Like many other teams at Microsoft we have daily builds of our feature branch. In addition to code reviews, our dev team runs their unit tests and a series of critical tests authored by our test team before checking in fixes and updates. Then an automated test suite is fired off in a lab on each nightly build to further validate that the code changes did not destabilize key functional areas. Next, lower-priority automated tests are run in the lab, as well as stress, performance, and other test suites. This is continuous integration, continuously.
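A minimal sketch of how a suite might be sliced into a critical check-in gate and a lower-priority nightly run, assuming a test runner that can filter on a priority attribute (the test names here are hypothetical, not our actual tests):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FeedSyncTests
{
    // Priority 0: part of the critical suite run before check-in.
    [TestMethod, Priority(0)]
    public void Sync_EmptyFeed_CompletesWithoutError()
    {
        // ... arrange, act, assert against the API under test ...
    }

    // Priority 2: broader coverage picked up by the nightly lab run.
    [TestMethod, Priority(2)]
    public void Sync_MaxLengthStatusText_IsPreservedEndToEnd()
    {
        // ... arrange, act, assert against the API under test ...
    }
}

// The lab can then select a slice of the suite, e.g. (assuming vstest):
//   vstest.console.exe Tests.dll /TestCaseFilter:"Priority=0"
```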
While the team members are highly competent coders writing tests in C++ and some C#, they are more than developers in testing roles. Although a sister team is responsible for testing the customer experience, we also spend time exploring our functional area via the user interface using an emulator or a device. This helps us develop customer empathy, but more importantly, when our automated tests expose an issue we are better able to explain how that anomaly might impact our customer. We also spend a lot of time eating our own dog food and self-hosting on test devices, and some of us even flash our personal phones.
So, while I feel I am starting to come up to speed and beginning to contribute to the team I know I still have a lot more to learn. But, most importantly…I am having a blast!
…developers basically suck at testing and relying on developers to test your product will cost you money! Hey you pointy haired managers who think you can save money by cutting testing costs…are you listening?
Don’t get me wrong, I don’t think developers suck at their jobs; in fact, I know many really great developers (but I also know, and have seen the work of, some hacks who call themselves developers). Great developers are usually pretty good at writing unit tests that test discrete functions. Some developers are even good at writing higher-level component or API tests. Unit testing and API testing are valuable approaches to help identify certain types of problems in a new build after refactoring or bug fixing.
I don’t distrust developers; I think that, similar to testers, they are concerned about the quality of their code and of the product they are working on. But testers have a different mindset as compared to developers, and we have different perspectives. Some testers may write, or partner with developers on, component level or API tests, but most testers generally focus on the integration and system levels of testing. Using various approaches in our craft we often expose functionality issues as well as behavioral or usability issues that might adversely affect your customers.
“Hmm…this is odd,” I thought to myself. I refreshed the page thinking there might have been a glitch in getting the list of months. Ultimately, I had to call the travel company to make reservations.
At first they didn’t believe me, then they confessed they just commissioned this new website. So, I told them that after they fire the developer and hire some testers for the next go round they should also figure out why “test” is listed as a State.
Or, among other things, why this dropdown list contains seemingly unrelated options, why some options are all upper case, and why one city name (scottsdale) is not capitalized. And of course our all-time novice faux pas…”abc_test.”
These are just a few examples of the perceived lack of quality of this site. And the company really expects customers to enter personal data into this site; especially credit card information?
Now, of course testing doesn’t guarantee the absence of bugs, but I am pretty confident that most testers would have easily found these issues before they stained this company’s reputation.
So, the next time your company thinks it can cut testing costs by having developers do the testing…point them here and say…I sure hope the developers you hire do a better job than this.
Well, it has been quite some time since I last posted to this blog, and I apologize to my regular readers for the inconsistency. Since the December timeframe I have been a bit busy behind the scenes searching for a new direction at Microsoft, and last week I was finally able to officially announce that I will soon be a Principal Test Lead in the Windows Phone group on the Communications team. My feature area will be Social Networking. I am so excited about the change, and getting back into the “real-world.” I have already started making the transition (more mentally than physically) and I am recalling the feelings I had when I first started at Microsoft. It’s incredible!
For the past 5 weeks I have also been teaching a course on Software Testing at the UW on Monday and Wednesday evenings. Tonight was my last class, with a final that culminated in the attendees writing “scripted tests” from vague requirements, running those tests followed by some exploratory testing, and finally reviewing the structural coverage effectiveness and defect detection effectiveness of the testing approaches. All in all, it is a fun class.
One lesson in the class focused on state-transition testing. The focus of the lesson is to identify the important “states” the machine is in at a given time, and to identify the various ways a customer might navigate from one state to another (traversals). In my opinion, one of the easiest ways to present the concept of modeling states is with a setup or installer of a program with a minimum number of configuration variables. I have found that minimizing the configuration variables helps people focus on “state” rather than be distracted by the different configuration settings (combinatorial testing).
After a brief explanation and a demo, I cut them loose on a problem to solve. In this case I used a shareware application called 3rd Plan It. The objective was to identify all possible paths a customer could navigate while setup is running and the important states the machine might be in at any given moment in time during the installation process. The value of a state diagram or map is that it can help us identify paths that we might miss via exploratory testing. Also, by identifying important characteristics of each state we can better assess whether the machine is in the expected state (oracle) as we navigate the various paths towards the objective of our test (in this situation it is the installation of a program).
The state map below illustrates a model of what we might consider the “important states,” and the various ways to traverse or navigate between states. Coming up with a state transition diagram often requires us to explore the feature. But, it also seems to force us to really understand the feature we are exploring at a much deeper level, and consider paths that we might otherwise have missed. A state transition diagram also helps us think about what “state” the machine should be in at any given moment during the setup process (it is also important to understand the “state” within the appropriate context).
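To illustrate the idea in code, here is a minimal sketch of a state model for a hypothetical setup wizard (the states and transitions are invented, not taken from 3rd Plan It); enumerating the simple paths from the start state gives a candidate set of traversals to test:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal sketch of a state-transition model for a hypothetical
// setup wizard. Each key is a state; each value lists the states
// reachable from it.
public class SetupStateModel
{
    private readonly Dictionary<string, List<string>> transitions =
        new Dictionary<string, List<string>>
        {
            { "Welcome",    new List<string> { "License", "Cancelled" } },
            { "License",    new List<string> { "ChooseDir", "Welcome", "Cancelled" } },
            { "ChooseDir",  new List<string> { "Installing", "License", "Cancelled" } },
            { "Installing", new List<string> { "Done", "Cancelled" } },
            { "Done",       new List<string>() },
            { "Cancelled",  new List<string>() },
        };

    // Depth-first enumeration of all simple paths (no repeated state)
    // starting from 'state'. Each path is a candidate test traversal.
    public IEnumerable<List<string>> AllPaths(string state, List<string> prefix = null)
    {
        var path = new List<string>(prefix ?? new List<string>()) { state };
        var next = transitions[state].Where(s => !path.Contains(s)).ToList();
        if (next.Count == 0)
        {
            yield return path; // terminal state, or a dead end after pruning cycles
            yield break;
        }
        foreach (string s in next)
            foreach (var p in AllPaths(s, path))
                yield return p;
    }
}

// Usage: prints 5 traversals, e.g. Welcome -> License -> ChooseDir -> Installing -> Done
// foreach (var p in new SetupStateModel().AllPaths("Welcome"))
//     Console.WriteLine(string.Join(" -> ", p));
```

Even a toy model like this makes the cancel-at-every-step paths explicit, and the back transitions in the table remind us to test traversals (such as ChooseDir back to License) that the simple-path enumeration above deliberately prunes.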
This is not to suggest that we need to create a state transition diagram for every feature. That would be silly. But, even if we don’t create a state diagram on paper we certainly create one in our mind. I suspect that most of the time we are purposeful in our testing approach and use heuristic patterns as we design tests (that is of course unless someone is high on PCP or some other hallucinogen and is completely reckless in their test approach, simply banging away indiscriminately in hopes of finding bugs).
Of course, some people found 1 or 2 issues using an exploratory approach without first creating a state diagram. But, after viewing the state diagram those folks were also able to realize paths they had not considered or traversed and were able to find more issues. The single biggest epiphany that most people have after participating in this exercise is how much more they understand the feature they are modeling compared to what they thought they knew.
Then of course, when the setup is littered with problems, the uninstall functionality (which is part of the setup engine) has its own set of problems as well.
Often, after teaching concepts and demoing examples of situations where various testing techniques might apply, I am asked how diligently we should apply these patterns of testing. Of course, functional techniques are only one way to think through a problem. Test techniques help us systematically analyze a problem and come up with some model of what we are testing. Models are sometimes useful to help us think of the problem differently, or to identify things we might otherwise miss. State-transition testing is a functional technique that helps us visualize the various ways a customer might use a feature, and in turn come up with various tests and potentially find more issues.
I am generally not a big fan of static test data. I do know that in the proper context static test data can provide some value. Of course, we should be aware of the common problems with files of static test data or (even worse) test data hard-coded into a test case. Some problems with static test data include:
- Stagnation – static test data may add some initial value, but over time simply reusing the same test data over and over in a test diminishes the value of that test. For example, retesting the same name strings in a first name input textbox is not providing any new information if those ‘static’ names worked in the previous build and the underlying functionality has not changed.
- Contextual blindness – sometimes we have files of static test data that were identified as “problematic” in one situation (context), so we reuse the “problematic” test data regardless of the context. In 1995 I wrote a white paper on “problematic” double-byte encoded (DBCS) characters, explaining why each code point was problematic in a given context. For example, a Japanese character whose trail byte is 0x5C might be problematic in a filename on an ANSI-based system that parsed characters byte by byte instead of as wide characters (see the sketch below). This is not true on Windows systems where the default encoding is the Unicode transformation format UTF-16. However, some people continue to use obsolete DBCS problem characters, perhaps because they don’t fully understand the underlying contextual differences between ANSI-based encodings and Unicode.
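Here is a small sketch demonstrating the trail-byte issue; 表 (U+8868) is one of the classic problem characters, since its Shift-JIS trail byte is 0x5C, the ASCII backslash (the encoding-provider registration is only needed on newer .NET runtimes):

```csharp
using System;
using System.Text;

class TrailByteDemo
{
    static void Main()
    {
        // Needed on .NET Core/5+ to expose code-page encodings such as
        // Shift-JIS; the classic .NET Framework has them built in.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
        Encoding shiftJis = Encoding.GetEncoding("shift_jis");

        // 表 (U+8868) encodes to the byte pair 0x95 0x5C in Shift-JIS.
        byte[] bytes = shiftJis.GetBytes("表");
        Console.WriteLine(BitConverter.ToString(bytes)); // prints "95-5C"

        // A byte-oriented ANSI parser can misread that trailing 0x5C as a
        // path separator ('\'). In UTF-16 the same character is the single
        // code unit 0x8868, so the old "problem character" list no longer
        // applies.
        Console.WriteLine(((int)'表').ToString("X4"));   // prints "8868"
    }
}
```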
Perhaps on the opposite end of the test data spectrum is random test data. Many of you that read this blog or have heard me speak know that I am a big proponent of parameterized random test data generation. Parameterization allows us to better model our test data. I know that even parameterized random data can be crafted to be representative of real data, but it is not “real” data.
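A minimal sketch of what I mean by parameterized random test data generation: the test controls the parameters (length range, character range) and logs the seed so any failure can be reproduced exactly. The class and names here are illustrative, not a real library:

```csharp
using System;
using System.Text;

public class RandomStringGenerator
{
    private readonly Random rng;

    public int Seed { get; private set; }

    public RandomStringGenerator(int? seed = null)
    {
        // Log this seed with the test results so a failing run
        // can be replayed with exactly the same data.
        Seed = seed ?? Environment.TickCount;
        rng = new Random(Seed);
    }

    // Returns a string of minLen..maxLen characters drawn from the
    // inclusive Unicode range [first, last], e.g. U+3041..U+3096 for hiragana.
    public string Next(int minLen, int maxLen, char first, char last)
    {
        int length = rng.Next(minLen, maxLen + 1);
        var sb = new StringBuilder(length);
        for (int i = 0; i < length; i++)
            sb.Append((char)rng.Next(first, last + 1));
        return sb.ToString();
    }
}

// Usage in a test:
// var gen = new RandomStringGenerator();
// Console.WriteLine("Seed: " + gen.Seed);                  // for reproduction
// string firstName = gen.Next(1, 32, '\u3041', '\u3096');  // random hiragana
```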
But, there may be a happy medium between static test data and random test data. And, best of all, it is abundantly available. One of the best sources of (especially non-English) test data comes from something most of us already use on a daily basis. The test data source I speak of is social networks.
I have met many wonderful people from around the world both in person and virtually, and stayed in contact with many of them. Last year while keynoting at the first software testing conference in Vietnam (VistaCon 2010) I was privileged to meet my dear friend Thuyen, who helped organize the conference. Since the conference we have stayed in contact via email and Facebook. When she posts on Facebook it is usually in Vietnamese. Since I don’t (yet) read Vietnamese I use Bing Translator to help me figure out the comment.
Last week she had an entry on her Facebook wall that began “Tối nay vô tình nghe trên TV 1 bài hát mà giai điệu…” (roughly, “Tonight I happened to hear a song on TV whose melody…”). So, I copied the entry and opened Bing Translator to translate it.
Many of you will quickly notice the strange anomaly in the translation. I initially thought the service might be incrementing this numeric value for some reason, but when I changed the number to 2, the number 2 displayed in the translated string. I tried various other numbers and quickly discovered that 6 incremented to 7, 8 decremented to 7, and 9 decremented to 3. I didn’t see a clear pattern, so I thought this might be an issue resulting from parsing a particular sequence of characters.
So, I modified different parts of the string (removing words) to narrow down the problem. I found the string “tình nghe trên TV 1 bài hát mà giai điệu” contained the problematic sequence. Removing any one ‘word’ from this string displayed the translated string with the number 1, with the exception of a single word. Removing the word “nghe” from the above string resulted in the translation illustrated below.
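That manual narrowing can also be mechanized. Here is a sketch of a simple word-level reduction loop; the Translate call in the usage comment is a hypothetical wrapper around whatever service or function is under test:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BugIsolation
{
    // Repeatedly tries dropping one word at a time, keeping any smaller
    // string that still reproduces the bug. This mechanizes the manual
    // "remove a word, retest" narrowing described above.
    public static string ReduceByWords(string failing, Func<string, bool> reproduces)
    {
        var words = failing.Split(' ').ToList();
        bool shrunk = true;
        while (shrunk && words.Count > 1)
        {
            shrunk = false;
            for (int i = 0; i < words.Count; i++)
            {
                var candidate = new List<string>(words);
                candidate.RemoveAt(i);
                if (reproduces(string.Join(" ", candidate)))
                {
                    words = candidate; // the smaller input still shows the bug
                    shrunk = true;
                    break;
                }
            }
        }
        return string.Join(" ", words);
    }
}

// Usage (Translate is hypothetical; the predicate encodes "the 1 became a 2"):
// string minimal = BugIsolation.ReduceByWords(entry, s => Translate(s).Contains(" 2 "));
```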
But, the purpose of this post is not to illustrate this particular bug, but to give you ideas of how we can use social network feeds in our testing. People around the world use social networks and you can find “real world” strings in various languages that you can use as test data in various contexts. Most of the time this ‘test data’ will not likely result in a bug; but sometimes it can reveal interesting issues. Best of all, strings taken from social networks are not some manufactured static or random test data. Using strings copied from social networks is about as “real world” as we can get…this is the “data” from our customers.
This has been a rather eventful year, both positive and negative. I have done quite a bit this past year professionally and personally, but certainly not as much as I had hoped to accomplish. As the year is winding down, my attention is focused on fulfilling my daughter’s Christmas dreams and wishes. Christmas is still a magical time for both of us.
I don’t know if she still truly believes in Santa Claus, but at least she does a good job of pretending. Tonight we will perform our yearly ritual of putting milk and cookies on the fireplace hearth and toss some carrots out onto the back lawn for the reindeer. We won’t build a fire in that fireplace on Christmas eve because she doesn’t want Santa Claus to get burnt as he comes down the chimney. Of course, after she is tucked soundly in bed, I will eat the cookies, drink the milk, and throw the carrots into the greenbelt behind my house for the critters to munch on.
She is delighted by finding the present she asks for from Santa Claus under the tree. She only asks for one thing from Santa, so “he” feels obliged to satisfy her wish. There are some smaller gifts as well (puzzles, books, American Girl doll accessories, etc.). But, it is not the presents that make this a special time for her and me. It is all the things we do leading up to Christmas day that make this a wonderful time of year for our family.
Well, I won’t blather on about Christmas, or what a special time it has become for me since my daughter was born. Instead, I would actually like to thank Michael Larsen for taking the time to write several detailed reviews of How We Test Software at Microsoft. As I read his reviews of chapter 5 and chapter 6 (the ones I wrote), he actually brought out several key points that I think were a little obscure, such as, “Bj champions the use of Exploratory Testing (ET). ET is a great way to get a handle on an application, especially when a tester is new to it.” Michael also caused me to reflect on the following point in the book: “Bj argues that Exploratory Testing can be sufficient for small software projects, or software with limited distribution, or software with a limited shelf life. But that it doesn’t scale well for large-scale, complex or mission-critical applications.” In retrospect I wish I had expanded on that statement by saying that in our experience at Microsoft ET does not scale well as a primary approach to testing, and the teams that tried to use ET as a primary approach have changed their strategy. However, ET is used throughout MS; it always has been and always will be a valued tool in any professional tester’s toolbox. Fortunately, Michael was able to read between the lines and wrote, “I get what he’s trying to say, in that Exploratory Testing techniques will not be the be all and the end all with testing (nor should it be; all tools have their right time and their right place).” Right on!
So, whether you have read the book or not, I highly recommend that you visit Michael’s blog to read the reviews for insightful thoughts on the book. I also recommend his other thoughtful posts on the topic of software testing.
This week I was in Ho Chi Minh City, Vietnam. After 3 days attending the VistaCon 2010 conference and 1 day visiting the MS office here, I got to explore the city, visit some museums, and end my trip by zipping around the city on a rented scooter. What great fun! Michael Hackett and I rented scooters on Saturday, drove around for a bit, then around noon got separated. Three hours later we met back at the place we rented the bikes, and Hung Nguyen was there as well. We once again set out for a coffee shop, then on to a Czech-style beer hall well hidden in the middle of the city. After a beer (or two) and a great squid dinner, Hung and I headed off to Vive for a few games of pool with his son Denny. Next we headed to a karaoke place where I embarrassed myself, and also got a treat listening to Hung, Denny, and our companions sing songs in Vietnamese. Today was a continuation of the day before, and I spent the day riding out to Hiep Phuoc to explore the countryside a bit. Getting caught in 2 torrential downpours was not part of the plan and forced me to stop once in a small coffee shop for an hour, then at a pho stand for another hour to wait out the rain. Away from the city, communication is mostly smiles, head movements, and hand gestures, but iced coffee is well understood and I knew how to say “pho bo” to order noodles with beef. All in all, I had way too much fun this week immersing myself in the culture and getting to meet so many wonderfully friendly Vietnamese!
Back to the conference, I was very honored when I was invited to present the opening keynote for VistaCon 2010, the first software testing conference in Vietnam. My dear friends Hung Nguyen and Michael Hackett and the rest of the staff at LogiGear organized a fantastic conference that hosted about 180 people from 6 different countries. I also presented a 2 hour tutorial on combinatorial testing practices, and another talk on random test data generation in automated tests. The talks went well although culturally the Vietnamese people are a bit shy about speaking out, and many opted to talk with me one on one or in smaller groups during breaks or the lunch hour. One attendee later told me, “I didn’t understand everything you said, but your talks really made me think.”
The software industry in Vietnam is growing rapidly, and Ho Chi Minh City has tremendous potential as both an outsourcing destination and for distributed development. So, it was not a surprise to repeatedly hear the recurring question, “What skills will we need as software testers in the future?”
Those of you who read this blog regularly or have listened to me speak at conferences know that I think software testers should have a rich understanding of the “system.” In my opinion, the less you know about the “system” you are testing, the greater the potential to miss important issues, and the weaker your ability to troubleshoot issues and identify patterns of software testing that can then be applied in the appropriate context.
For example, many testers have heard of the problems with the antiquated double-byte character set (DBCS) encoding systems and will continually cut and paste hard-coded strings containing “problematic” CJK characters into various input textboxes. Unfortunately, much of this ‘testing’ is simply wasted effort. Due to a lack of “system” knowledge, what they don’t know is that these characters were problematic in file I/O operations on ANSI-based systems, or where an application is thunking between Unicode and ANSI. Today the native character encoding of the Windows operating system is Unicode. Different encoding; different problems.
I also think testers should be competent in at least one programming language, for several reasons that I have discussed in previous posts. Over the past few years my message has been pretty clear: to grow as professionals in the software testing discipline, many of us must improve our overall technical skills and knowledge. I have leaned in this direction mostly because I saw this skill gap in many testers, and understood that companies that produce software will require a greater breadth and depth of skills from their testers. The key operative phrase in that last sentence is “companies that produce software” because that is the lens through which I view the maturing role of software testing. My perspective of testing is biased towards testing practices at companies that produce software.
Cem Kaner also gave a keynote at VistaCon. At first I wasn’t sure if his keynote was on investment banking or testing, but he eventually drew a parallel between the job of “quants” in the financial sector and the role of a business domain expert tester working for an outsourcing company. He did say that if you work for a company that produces software then you probably want to increase your technical skills, and if you work for an outsourcing company that tests software produced by a software producer then you should become a specialist in the business domain of the software you are testing. You know…I agree. If you want to grow your career as a vendor for companies that outsource the testing of their products to specialists who test software from an end-user perspective, then you should definitely become an expert in that business domain. On the other hand, if you work for a company that produces software or creates software solutions, then I think you need to constantly improve your overall knowledge of the complete system, including some understanding of the domain in which you are testing.
Surely there will be demand in the future for testers who are business domain specialists working for outsourcing vendors, and there will be demand for testers who want to work at companies that produce software and who will need an increasingly rich set of technical skills. I suspect that professional testers in the future will have a mix of skills and knowledge that will enable them to adapt to the changing job market and the demands of the industry.