Archive for the ‘Test Management’ Category
I have been in my new role as Test Lead for 6 months now. The experience has been magnified because I am actually leading two platform teams: the social networking integration team and the models team. The learning curve has been exponential. In my transition to this role I took advantage of a few HR courses to refresh my knowledge of management principles. I also read quite a few books. Perhaps the single book that helped reinforce my ideas of leadership (outlined in this blog post) was The Mentor Leader: Secrets to Building People and Teams that Win Consistently by Tony Dungy. This is a great book for leads, managers, and anyone who mentors others.
If you ask any lead, they will likely agree that their success as a lead hinges largely on their team. But if you ask leads what their first priority is, they will likely say shipping a product, or managing testing of some feature area they have been assigned. Yes, ultimately we need to ship a product and do our best to make sure our feature areas are adequately tested in an attempt to improve our customers’ overall experience. There are many ‘managers’ throughout the industry who are good at manipulating ‘resources’ to achieve some desired result or filling in magic numbers on a balanced scorecard. Balanced scorecards provide some value to a business, but sometimes managers lose sight of what is most important and focus on mundane things that twiddle the numbers to fit the scorecard and hype success. But managing resources to ship a product is different from leading a team of people to achieve, and sometimes exceed, goals and visions.
Leadership is much more than management. A successful leader manages projects by articulating a clear vision, guiding people towards achieving goals, and motivating people by helping them grow. When folks ask me what my first priority is as a Test Lead, I say it is the people on my team. But what does that mean?
Open doorways to dreams!
One of my primary responsibilities as a lead is to help the people on my team grow and expand their scope of influence and impact not only on my team, but ultimately within other teams across Microsoft. Of course it is always hard to see someone on our team leave for new opportunities, but good leaders understand the career aspirations of the people on the team and work with them to help them achieve those dreams. Leaders find opportunities that will help people develop skills that will benefit both the project and the person. Leaders should be truly invested and take an active role in helping people on their team grow even if that means the person will eventually leave the team to find new challenges. Managers fear losing the people on their team; leaders nurture people on their teams and open doorways to dreams and new opportunities. Think of it this way: would you rather join a team in which the manager holds on to people until they burn out, or a team in which the leader has a track record of helping people grow into their next job?
Delegate responsibility not just work!
Like many other leaders, I have many balls to juggle, and I can’t juggle them all alone. So, as leads we must delegate some of the things on our plates. But delegation is more than assigning tasks to people. Delegation is endorsing people on your team who will represent you and be responsible for driving a project that has a broad scope of impact. Of course, delegation also doesn’t mean just throwing ideas out there and seeing what happens. A leader who delegates work will set clear expectations and realistic goals, coach for success, provide guidance on how to build upon success, and perhaps most importantly empower the person to make decisions on their own. When we delegate we should set people up for success, not throw them into the fire of failure.
Encourage risk and accept failure!
Sometimes when people learn that I am an avid sailor they will ask me to teach them to sail. I love sharing knowledge and experiences about things that I am passionate about with people who are interested in learning. Sometimes people are hesitant to do something because they don’t want to break something, or do something wrong. I make it very clear from the start that every inanimate object on the boat is replaceable, and while back-winding a sail means we steered too far into the wind, it can always be corrected. I know many ‘captains’ who yell and shout when a line gets twisted and jams in a sheave, or someone accidentally releases the main halyard while under sail. It’s our reactions to such situations that provide a positive learning experience or turn our time into a day of hell on the water. Leaders encourage people to try new things, innovate, and experiment. Leaders also know that sometimes things might not work out perfectly, and they should be willing to protect people from harm (either physical harm on the boat, or professional/political harm at work) and rebuild a person’s confidence when things don’t work out so well.
The burden of blame!
At the end of the day I can’t point fingers and say “so and so didn’t do such and such,” or “if things weren’t so screwed up to begin with we wouldn’t be in this mess.” As a lead I am accountable. If things go wrong I first look at my own leadership to see if I failed to set clear expectations, neglected to provide adequate guidance (without hand-holding), or chose to “delegate and disappear.” Ultimately, the responsibility for achieving my team’s goals and objectives is mine. We succeed as a team, or I fail as an individual.
I sometimes see managers who are grumpy or apathetic. I sometimes hear managers say, “I don’t like this either, but we have to do it to satisfy some other manager or scorecard criteria.” A good leader understands such asks and explains why and how they might provide value to the requestor. I know that my attitude affects the people on my team, and if I appear apathetic towards their ideas then they will likely stop sharing their innovative ideas with me. If I am constantly complaining about something, then my team learns to complain about similar things and we start to look like a bunch of whiners. (Nobody really likes whiners. People might try to appease whiners from time to time, but ultimately they just want the whiners to go away.) Leaders know they are being watched and should always project a positive attitude.
So, after 6 months of being back in the trenches, shipping a product, and facing some tough challenges I will say that I am still loving it!
On March 1st I switched roles at Microsoft from an individual contributor (IC) in the Engineering Excellence team to a test lead in the Windows Phone team. This not only meant going from an IC role to a role in which I would lead a team of people, but also going from an academia/consultant role right into the fire pit of the newest Windows Phone 7 release. People often describe the experience of starting a new job at Microsoft, or changing jobs within Microsoft, as ‘drinking from a fire hose,’ or in my case, right from the hydrant itself.
It has been more than 8 years since I directly managed a team. In that time some ‘operational’ things have changed, but most importantly I have changed. Personally, I think my responsibilities as a lead include helping the people on my team grow their careers, providing a vision and guidance towards clear goals, protecting the team from political fallout and pressure, managing the feature(s) I am responsible for, preventing project and personal ‘fires,’ and also actively participating in the testing effort. There are skills, experiences, and values that I have learned in the past that will help me in my transition; there are some adaptations to make; and there is a whole lot to learn.
So, here are a few brief thoughts after my first 30 days back in the saddle.
Meet the “people” on your team – this seems rather obvious, but I don’t mean meet the team, I mean really get to know the people on your team at a personal level. You will be spending a lot of time working together, and occasionally socializing outside of the office at morale events, team lunches, etc. People have lives outside of ‘the project.’ Get to know the people and actively support their work-life balance.
Many times the first 1:1 meeting with a new manager is mostly spent discussing my role on the team, the project status, or other items related to the business. Leading people is different from leading projects. Leaders should be able to relate to the people on their team at a personal level, so get to know them a bit. Getting to know your team not only helps build trust, but helps a lead think about parental leave, vacations, children getting sick, dropping off/picking up children at school, wedding planning, and a host of other things that are not accounted for in the project schedule.
Meet your peers – Within the first 2 weeks I set up meetings with my developer and program manager counterparts. At Microsoft the Dev/Test/PM triad meets regularly (often daily), and must work together to manage the project and make critical business decisions. I also wanted to get their perceptions about the test team. Your peers can brief you on group policies, help you get ramped up on team processes, and also bring you up to speed on the project. They can point you to the right aliases to join, and make sure you are scheduled for the appropriate meetings.
Ask (a lot of) questions – you don’t know jack! Of course, leads/managers generally have pretty good track records and are hired not only on past accomplishments but also on future potential. But, when you move to a new team you are going to spend a fair amount of time learning new things. Teams are different, but many people say that it takes about 6 months on a new team before they feel comfortable and able to fully contribute. That doesn’t mean you have 6 months to slack off, it means that the first few months you might be cut some slack. But, the pressure is on and you need to step up and push the envelope of your personal development. Ask your peers and the people on your team for help; they often have a vested interest in your success.
Time management – There are triage meetings, 1:1 meetings with directs, leads meetings, meetings with partners, etc. Some days I might have an hour or two of free time between 9 am and 5 pm when I don’t have meetings, but those days are rare. As a new lead/manager your time is going to be pulled in a lot of directions at once, and time management is a lot more challenging. Expect to come in earlier and sometimes stay later or log on in the evenings to prepare for meetings or catch up on things. Oh yeah…you still have to make personal time for yourself.
Provide stability – change often invokes uncertainty. The people on your team are usually highly capable and experienced. They likely understand their role on the team and know what needs to be done in their areas of responsibility. One of the first things a new lead/manager should do is provide some degree of stability, and provide some clear goals to help focus the team on achieving the project goals.
So far, my transition back into the product groups and a management role is at times a bit overwhelming, but it is a total rush and each day brings new and exciting challenges. On to the next 30 days!
I arrived in Switzerland on Monday morning and met with our team here in Zurich who work on the communication server. On Tuesday I presented a tutorial on advanced combinatorial testing, and on Wednesday I delivered a keynote address at Swiss Testing Days. Unfortunately, I really didn’t get to spend a lot of time exploring the city, but it was great to catch up with my longtime friend James Whittaker. James and I also gave brief presentations at an executive dinner the night prior to the conference. It was also really nice to meet new friends from SwissQ, who put together Swiss Testing Days. This was my first time presenting at this conference and I was greatly impressed. More than 750 people attended the conference! It was quite an event and I hope to return next year.
At the executive dinner and during my keynote I discussed various challenges in software engineering that directly impact testers. One of those challenges we need to get our heads wrapped around is software measures. By software measures I am referring to objects in software engineering mapped to various scales in the mathematical world. Although we sometimes also use biased qualitative measures, such as "too slow," if we are to be regarded with any degree of credibility we have to define what too slow is and set a reasonable goal for ‘acceptable’ based on customer values.
As testers we expend a lot of cycles collecting buckets full of metrics. We spend time producing fancy charts, and spend countless hours ‘looking’ at the data as if it were some type of oracle that would speak to us and tell us what we wanted to know. In the best case we convince ourselves that the numbers are telling us what we want them to tell us. In the worst case the decision makers do not even consider the measures, or we don’t analyze the data in an attempt to identify ways to improve some of our engineering processes and practices. In the end, all the fancy charts are taken off the walls only to be shredded, and we start over.
We often get caught up in tracking mostly useless data such as bug count and code coverage. What in the world does bug count or code coverage tell us (or the decision makers) about quality? Nothing; absolutely nothing! Some people want to believe that finding a lot of bugs or having high levels of code coverage means better quality, but that is sort of like believing that you’ll find a pot of gold and a leprechaun at the end of every rainbow. So, why do we measure bug counts and code coverage? Simple…because they are easy to measure!
Good metrics are hard to define, mostly because we don’t always have clear goals, or we use a scatter-gun approach to setting a bunch of disparate wishful goals (goals that we hope we can achieve, but nobody is accountable if we don’t). I personally advocate the Goal/Question/Metric paradigm by Victor Basili. But the biggest problem I have in using this approach is in establishing meaningful goals! People are generally good at coming up with superfluous objectives such as 100% automation or 80% code coverage. But when you ask those people why they want 100% automation or 80% code coverage, they retort only with a bunch of hand-waving and philosophical arguments. It seems we sometimes have difficulty expressing the ‘why’ of setting certain goals. Of course the answer in most cases is to ‘get better’ or ‘improve’ something! But, why? What is the business value?
Once we establish clear goals the next step is to understand the variables that we can manipulate to help us achieve those goals. Then we must decide on which ones we want to change that we think will have the biggest bang for the buck. Finally, we figure out which measures will let us know whether we are progressing towards our goal. (This usually isn’t a single point of measurement.)
At one time I naively believed that there was a core set of metrics that all teams should be collecting all the time that we could put into a ‘dashboard’ and compare across teams. In retrospect that was really a bone-headed notion. Identifying these measures is not easy, and there is no cookie-cutter approach. Each project team needs to decide on their specific goals that may increase customer value or impact business costs. Testers should ask themselves, “why are we measuring this?” “What actions will be taken as a result of these measures?” And, “if there is no actionable objective associated with this measure, then why am I spending time measuring this?”
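The Goal/Question/Metric chain described above can be made concrete with a small sketch. Everything here (the class name, the example goal, questions, and metrics) is a hypothetical illustration of the paradigm, not an established API:

```python
# A minimal sketch of Basili's Goal/Question/Metric paradigm as a data
# structure: a goal, the questions that decide whether we are reaching it,
# and the metrics that answer each question. All names are invented.

class GQM:
    def __init__(self, goal, questions):
        self.goal = goal            # business-level objective
        self.questions = questions  # {question: [metrics that answer it]}

    def orphan_metrics(self, collected):
        """Metrics we collect that answer none of our questions:
        candidates for wall decoration, and for dropping."""
        needed = {m for metrics in self.questions.values() for m in metrics}
        return sorted(set(collected) - needed)

plan = GQM(
    goal="Reduce customer-reported regressions in the next release",
    questions={
        "Which areas produce the most escaped defects?":
            ["escaped defects per feature area"],
        "Are escapes concentrated where we lack regression tests?":
            ["regression-test coverage per feature area"],
    },
)

# Raw bug count and overall code coverage answer no question in this plan:
print(plan.orphan_metrics(["bug count", "code coverage %",
                           "escaped defects per feature area"]))
# -> ['bug count', 'code coverage %']
```

The point of `orphan_metrics` is exactly the third question a tester should ask: if a measure has no actionable objective attached to it, why are we spending time collecting it?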
At times it seems we are locked in a vicious cycle of relearning things via tribal knowledge, and we make decisions based mostly on ‘gut feel’ and emotion. We collect a bunch of measures and display them similar to how the ancient Chinese used the mystical ‘dragon bones’ as oracles. But if we are interested in being able to articulate business impact (either positive or negative) in a professional manner, then we must find ways to measure the things that are really important and actionable, and spend less time collecting numbers for wall decorations. At the end of the day someone is going to ask, “How do we know?” And trust me on this…really great managers will eat you alive if you answer with “well, we think…” or “we feel…” or try to evaluate success on some other subjective measure.
Originally Published Tuesday, April 28, 2009
Using context-free software product measures as personal key performance indicators (KPIs) is about as silly as pet rocks!
Periodically a discussion of assessing tester performance surfaces on various discussion groups. Some people offer advice such as counting bugs (or some derivation thereof), number of tests written in x amount of time, number of tests executed, % of automated tests compared to manual tests, and (one of my least favorite measures of individual performance) % of code coverage.
The problem with all these measures is that they lack context and tend to ignore dependent variables. It is also highly likely that an astute tester can easily game the system and potentially cause detrimental problems. For example, if my manager considered the number of bugs found per week as one measure of my performance, I would ask how many I had to find per week to satisfy the ‘expected’ criteria. Then each week I would report 2 or 3 more bugs than the ‘expected’ or ‘average’ number (in order to ‘exceed’ expectations), and any additional bugs I found that week I would sit on and hold in case I was below my quota the following week. Of course, this means that bug reports are being artificially delayed, which may negatively impact the overall product schedule.
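To see how easily such a system is gamed, here is a toy simulation of the hold-back scheme described above; the quota, margin, and weekly bug counts are all invented for illustration:

```python
# Toy simulation of 'banking' bugs against a weekly quota. A tester who
# finds more than quota + margin bugs reports only quota + margin and
# holds the rest for a lean week, artificially delaying real reports.

QUOTA, MARGIN = 10, 2   # hypothetical 'expected' count plus exceed-margin

def report_schedule(found_per_week):
    """Return what a quota-gaming tester would report each week."""
    banked, reported = 0, []
    for found in found_per_week:
        available = banked + found
        to_report = min(available, QUOTA + MARGIN)
        banked = available - to_report   # held back for a future lean week
        reported.append(to_report)
    return reported

# A productive week 1 subsidizes weeks 2 and 4; real bugs sit unreported.
print(report_schedule([20, 5, 3, 15]))
# -> [12, 12, 4, 12]
```

The reported numbers look steady and 'above expectations' almost every week, while in reality eight bugs from week 1 reached the developers a week or more late.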
The issue at hand is this bizarre desire by some simple-minded people who want an easy solution to a difficult problem. But, there is no simple formula for measuring the performance of an individual. Individual performance assessments are often somewhat subjective, and influenced by external factors identified through Human Performance Technology (HPT) research such as motivation, tools, inherent ability, processes, and even the physical environment.
A common problem I often see is unrealistic goals such as "Find the majority of bugs in my feature area." (How do we know what the majority is? What if the majority doesn’t include the most important issues? Etc.) Another problem I commonly see is individuals who over-promise and under-deliver relative to their capabilities. I also see managers who dictate an identical set of performance goals to all individuals. While there may be a few common goals, as a manager I would want to tap into the potential strengths of each individual on my team. I also have different expectations and levels of contribution from individuals depending on where they are in their career, and also based on their career aspirations.
So, as testers we must learn to establish SMART goals with our managers that include:
- goals that align with my manager’s goals
- goals that align with the immediate goals of the product team or company
- and stretch goals that illustrate continued growth and personal improvement relative to the team, group, or company goals
(This last one may be controversial; however, we shouldn’t be surprised to know individual performance is never constant in relation to your peer group.)
But (fair or not), for a variety of reasons most software companies do (at least periodically) evaluate their employees’ performance in some manner. The key to success is in HPT and agreeing on SMARTer goals upfront.
Originally Published Friday, October 24, 2008
Last week I was at the Test2008 conference in India. The organizers from PureTesting planned a grand event with workshops in Hyderabad, Delhi, Bangalore, and Pune. The main conference was then held outside of New Delhi. When I arrived in Delhi at the conference I was told I would be on a discussion panel. Surprise!
Although the conference organizers thought the topic would be controversial, in retrospect it turned out to be a non-issue for the majority of the audience. But during the discussion one person asked the most important question of the session. He essentially said that new people coming into the industry, and specifically the testing discipline, are sometimes confused because there is contradictory information. "So," he asked, "how do new people know who the leaders in testing are?"
Rather than drone on forever, here is a list of traits of leaders whom I respect, and attributes I try to follow when I lead a team or mentor people.
- Leaders are able to foresee technological changes and changes in business practices on the horizon and predict how those changes will influence the careers of the people they manage or mentor.
- Leaders don’t let the people they are managing or mentoring become stagnant.
- Leaders constantly seek opportunities for the people they manage or mentor to flourish.
- Leaders constantly help the people they manage or mentor develop their careers even if that means moving to a different role or team.
- Leaders connect with the people they manage or mentor and develop a nurturing bond.
- Leaders delegate responsibility because they explicitly trust the people they manage or mentor to do the best they can do. Similarly, people respect leaders whom they know grew into leadership and were not merely placed in some position of management.
- Leaders understand the challenges in the industry and they unleash the potential of the people they manage or mentor to take on and tackle those challenges. If the team fails the leaders accept the responsibility and support their people for giving it their best effort. Then, they rethink the problem, and try again.
- Leaders don’t say that something can’t be done, or you can’t do such and such; they continuously search for alternative solutions to problems.
- Leaders identify hard problems and point the people they manage or mentor in the right direction and say, let’s figure out how to solve this together as a team!
- Leaders don’t whine about changes in the industry. We work in one of the most dynamic industries in the world, and leaders can successfully lead their teams to face new challenges head on.
- Leaders don’t shamelessly ridicule other people or hurl personal insults.
- Leaders challenge the ideas and statements of others, but they do so in a professional manner.
- Leaders present compelling points of view based on rational logic and empirical analysis. Not everyone may agree with a point of view, but they comprehend the results, and may sometimes present conflicting data which is repeatable in unbiased studies.
- Leaders must also occasionally make hard decisions that may be unpopular with the people they manage or mentor or even their own managers.
- Leaders don’t attempt to segregate the discipline or mislead neophytes with reckless statements based on emotional or philosophical ideals.
- Leaders have a strong personal constitution and are not swayed by emotional opinions or baseless peer-pressure.
- Leaders not only strive to improve the people around them, but they also continually strive to be the best they can be.
- Leaders never become apathetic or dispassionate. (If a person is apathetic or dispassionate then it is way past time for them to leave and pursue other directions.)
- Leaders are often recognized as technical experts in their fields.
- Leaders are respected by other leaders in their field.
- Leaders don’t refute challenges to ideas or statements with hypothetical or philosophical multi-syllabic hyperbole, they present substantiated facts or logical and rational points of view within the context of the discussion.
- Leaders know to criticize in private and promote in public.
- Leaders are also competent contributors.
- Leaders know the difference between ‘big-bang’ one-time dog-and-pony shows, and achievements that provide lasting results, and they reward accordingly.
- Leaders figure out how to permanently fix small problems so they can tackle larger and larger issues.
- Leaders drive themselves and others around them to be the best they can be because they know that being good enough is simply not good enough in the long run.
- Most importantly, leaders provide strategic direction and help guide and grow the people they manage or mentor to face new and exciting challenges.
I rattled off some of these traits, and in the end I told the person that I have come across a lot of managers in this industry (people who have people reporting to them in some capacity), but in my opinion there are too few real leaders. Fortunately, I personally see many new, highly knowledgeable, and technically skilled people coming into the discipline, along with many experienced people who are reemerging. These people represent the potential for the type of leaders we need to help drive the discipline forward. Are you one of those people?
Originally Published Thursday, May 01, 2008
It has been quite some time since I have posted. Part of that is due to personal distractions (getting my garden planted and my sailboat ready for the upcoming season), and part of that is being ‘in the zone’ working on some special projects at work. DeMarco and Lister began talking about being "in the zone" in their famous book Peopleware, written in 1987. (In my opinion, this is a must-read for anyone in the software business, especially for managers, who should reread it yearly.) The Hungarian psychologist Mihaly Csikszentmihalyi also discusses a similar concept he calls "flow," and identifies nine characteristics of ‘flow.’
Being in the "zone" for me is a myopic mental state in which we are so focused on completing a task that the world whizzes by and time becomes irrelevant. For many it is sort of a magical place; a momentary escape from reality. For example, when I sit down to write code in the evening, the time passes so quickly that I soon discover it is 1 am. But I want to complete one more class, and before I realize it, it is 4 am. (I am a slow coder.) People who are addicted to computer games know all too well about the ‘zone.’
This past month I had to come out of my zone and fly down to San Mateo, California to present a workshop and 2 talks at the Software Testing and Performance conference. It was a welcome break, and was nice running into old friends and meeting new acquaintances. I had the pleasure of meeting Karen Johnson and reconnecting with Doug Hoffman (who I had only met once previously) for lunch one day, and interestingly enough the conversation found its way to being in the zone. Karen brought up the fact that email is a constant distraction that often times impedes productivity.
Often when we are in our zone our productivity increases. But every time something changes our focus, such as responding to an email on a completely unrelated or tangential topic or answering a phone call, we are sucked out of the zone, and according to Lister and DeMarco it takes approximately 30 minutes to get back into the zone, back to our peak point of productivity.
For example, when I sit down to write, or code, or meditate I don’t want to be disturbed, and I will usually retreat to my boat or some quiet place where I know I won’t be disturbed. I turn off my cell phone: no radios, no newspapers, no magazines, no instant messaging (which I abhor), and certainly no email. If I must stay at the office, then I block off my calendar for at least 4 hours and will sometimes disappear into some nook or cranny on campus.
I know that email is the lifeblood of many tech companies. Unfortunately, I suspect that many of us could use a good bloodletting and need to relearn the art of verbal communication. I also suspect that we could better optimize our time by not being tethered to some email client during every single minute of our waking hours. Let’s face reality. For most of us, 80% of the email we get is noise that we will forget about within the next 4 hours or so, 15% is good-to-know stuff (but not necessarily critical to our success), and I suspect that approximately 5% is really important stuff that we must respond to immediately. Of course, these percentages depend on our primary job. For example, a manager or a consultant probably gets a larger percentage of email that requires an immediate response.
So, I wonder if we managed our own use of email (and instant messaging) a bit more effectively how our own personal productivity might increase?
Originally Published Tuesday, October 24, 2006
I recently went to Portland, and when I am there I make it a point to always stop by Powells Book Store. They have a whole building about the size of a typical Barnes & Noble dedicated to technical books. It had been several years since I had read Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister, so when I saw the second edition sitting on the shelf with 8 new chapters I just had to get it.
DeMarco and Lister have a witty and enjoyable style of writing that makes their books fun to read. I simply couldn’t put the book down on a flight from Seattle to London. Not only is the book engaging, it provides strong evidence to support their arguments and dispels many common myths, misconceptions, and misinformation around the efficiency and effectiveness of people, projects, and the workplace. It provides managers with better information and keen insight on how to manage technical people and technical projects.
As Joel Spolsky wrote, "I can’t recommend this book highly enough. It is the one thing every software manager needs to read…not just once, but once a year."
Originally Published Monday, June 26, 2006
Every once in a while I meet testers who say their manager rates individual performance based on bug metrics. It is no secret that management is constantly looking at bug metrics. But bug numbers are generally a poor indication of any direct meaningful measure, especially individual human performance. Yet some managers continue this horrible practice and even create fancy spreadsheets with all sorts of formulas to analyze bug data in relation to individual performance. Number of bugs reported, fix rates, severity, and other data points are tracked in a juvenile attempt to come up with some comparative performance indicator among testers. Perhaps this is because bug numbers are an easy metric to collect, or perhaps it is because management maintains the antiquated view that the purpose of testing is simply to find bugs!
Regardless of the reasons, using bug numbers as a direct measure of individual performance is ridiculous. There are simply too many variables in bug metrics to use these measures in any form of comparative analysis for performance. Even for a team of testers of equal skill, experience, and domain knowledge, there are several factors that affect the number of defects or defect resolutions, such as:
- Complexity – the complexity coefficient for a feature area under test impacts risk. For example, a feature with a high code complexity measure has higher risk and may have a greater number of potential defects compared to a feature with a lower code complexity measure.
- Code maturity – a product or feature with a more mature code base may have fewer defects than a newer product or feature.
- Defect density – a new developer may inject more defects than an experienced developer. A developer who performs code reviews and unit tests will likely produce fewer defects in their area compared to a developer who simply throws his or her code over the wall. Are defect density ratios used to normalize bug counts?
- Initial design – if the customer needs are not well understood, or if the requirements are not thought out before the code is written, then there will likely be lots of changes. Changes in code are more likely to produce defects compared to ‘original’ code.
Attempting to use bug counts as performance indicators must also take into account the relative value of reported defects. For example, surely more severe issues such as data loss are given more weight compared to simple UI problems such as a misspelled word. And we all know the sooner defects are detected the cheaper they are in the grand scheme of things. So, defects reported earlier are certainly valued more than defects reported later in the cycle. Also, we all know that not all defects will be fixed. Some defects reported by testers will be postponed, some simply will not be fixed, and others may be resolved as “by design.” A defect that the management team decides not to fix is still a defect! Just because the management team decides not to fix the problem doesn’t totally negate the value of the bug.
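Any scheme that tries to weight defects by severity and by how early they were found ends up looking something like the sketch below. Every weight and multiplier here is an invented assumption, which is precisely the problem: a different (equally defensible) set of weights produces a different ranking:

```python
# Hypothetical weighted defect value: a severity weight multiplied by a
# phase multiplier that rewards early detection. All weights are
# illustrative assumptions, not an established scoring standard.
severity_weight = {"data_loss": 10, "crash": 8, "ui_typo": 1}
phase_multiplier = {"design": 3.0, "coding": 2.0, "stabilization": 1.0}

def defect_value(severity, phase):
    """Score one reported defect by severity and detection phase."""
    return severity_weight[severity] * phase_multiplier[phase]

print(defect_value("data_loss", "design"))       # 30.0
print(defect_value("ui_typo", "stabilization"))  # 1.0
```

The arithmetic is trivial; choosing the weights is not, and that choice is exactly where such schemes quietly smuggle in the evaluator's bias.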
The bottom line is that using bug metrics to analyze trends is useful, but using them to assess individual performance or comparative performance among testers is absurd. Managers who continue to use bug counts as performance indicators are simply lazy, or don’t understand testing well enough to evaluate key performance indicators of professional testers.
Originally published Monday, May 08, 2006
I overheard some test managers discussing problems with their test automation effort, so I couldn’t refrain from asking the obvious question, “What is your test automation strategy?” They looked at me as if I had just beamed down from another planet and said, “c’mon, you know our strategy is to automate everything!”
It is unfortunately true that some managers drink the proverbial kool-aid and blindly regurgitate the 100% automation mantra or similar incantations such as “no manual testing” popular among agile pundits like Lisa Crispin.
Let me be clear. A goal of 100% automation is not a test strategy; it is a fantasy! Much like the Disney fairytales where fairy dust causes magical transformations, evil is defeated, the prince marries the maiden, and everyone lives happily ever after, automating everything is simply not practical or realistic.
Perhaps the single biggest problem with most test automation efforts is lack of a practical strategy. A practical test automation strategy is one that provides a pragmatic solution to address specific business needs with well-defined, measurable goals based upon realistic expectations.
Business needs drive a lot of the change in any organization, and usually involve cost saving measures, quality improvement, or increased customer satisfaction. A business need for test automation includes reduced testing time. (This doesn’t mean reduced ship cycles; it simply means the time it takes to perform certain tests during the product life cycle can be shortened.) For example, the Build Verification Test (BVT) is a necessary test suite to verify the stability of each new build. Depending on the size and complexity of the product, a manual BVT suite can be very time consuming. An automated BVT suite (which should be 100% automated, including results validation, because it establishes a baseline measure of build stability and the tests remain relatively static over the duration of the development life cycle) can substantially reduce the time spent in this phase of testing, especially in iterative build environments where the team is getting daily or even weekly builds. It doesn’t take long to realize the cost savings over the product life cycle.
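The back-of-the-envelope math on BVT automation is worth doing explicitly. The figures below (a three-hour manual pass, a fifteen-minute automated run, daily builds over a forty-week cycle) are made-up assumptions for illustration, but plugging in your own team's numbers is the point of the exercise:

```python
# Illustrative BVT cost comparison -- all figures are assumptions.
manual_hours = 3.0       # assumed time for one manual BVT pass
automated_hours = 0.25   # assumed time for one automated BVT run
builds = 5 * 40          # one build per workday over a 40-week cycle

saved = (manual_hours - automated_hours) * builds
print(saved)  # 550.0 hours saved over the product cycle
```

Even with modest assumptions the savings amount to months of tester time, which is why the BVT suite is usually the first and least controversial automation target.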
Test automation strategies must also have realistic expectations. For example, I have never been convinced that finding “new” bugs is a realistic expectation for test automation. (Yes, it will occasionally find some new bugs, but let’s face it…the majority of the 5–15% of the bugs exposed by test automation in production environments are regressions.) I have never seen data that suggests increased automation reduces the overall development cycle. Nor will test automation eliminate testers. (This is a false hope imagined only by prima donna developers and bean counting managers scheming of ways to find value in their Masters in Business Mismanagement degrees.) So, what are realistic expectations for test automation? Well, I can reasonably expect test automation to identify stress issues such as mean time to failure (MTTF) and mean time between failures (MTBF). I can reasonably require test automation to establish baseline measures such as BVT suites or regression suites. Test automation is a pragmatic solution for any type of load testing or other forms of concurrency testing.
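MTTF and MTBF are among the few measures that genuinely require automation, since they come from long unattended runs. A minimal sketch of the arithmetic, using invented failure timestamps from a hypothetical stress run:

```python
# Minimal MTTF/MTBF sketch from an automated stress run's failure log.
# Timestamps are hours since the run started; the values are illustrative.
failure_times = [12.0, 30.0, 54.0, 60.0]

# MTTF: mean time to first failure -- here, simply the first timestamp.
mttf = failure_times[0]

# MTBF: average gap between consecutive failures.
gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
mtbf = sum(gaps) / len(gaps)

print(mttf)  # 12.0 hours
print(mtbf)  # 16.0 hours
```

No manual tester is going to babysit a sixty-hour run to collect those four data points, which is what makes stress measurement a realistic expectation for automation rather than a fantasy.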
Finally, a good test automation strategy must have measurable goals so we clearly understand what success looks like (or identify where we need to improve). Without goals we are developing automated tests just to say we are automating. Unfortunately, I occasionally see teams with goals of automating n% of existing tests. This really doesn’t make much sense because it doesn’t take into account logical decisions of what tests should be automated (remember, not all tests need to be or should be automated), so some redundant tests or run-once type tests get automated (which may not be the best use of your limited testing resources). Also, the ‘existing’ set of tests is usually a moving target, so that means the goal is a moving target, which means we can never achieve the goal. Goals for test automation should be specific, measurable, achievable, realistic, and timely (SMART). Set short term and long range SMART goals for your test automation effort. For example, a short term goal might be 100% automation of the BVT suite within 1 week after the first build drop. Long term goals might include design elements and processes to transfer automation to sustained engineering or maintenance teams, or 100% language neutral automation that will execute on any localized (or pseudo localized) language version.
Test automation is expensive. Testers have a lot of work to do in a very limited timeframe, so it is important that we use our testing resources effectively. A well-defined automation strategy will establish clear goals, set expectations, and provide practical, automated solutions.