Here we are, almost halfway through the year. Once again Seattle has been unseasonably cool, with fewer than 10 days above 70 degrees so far. The Seattle area is nice, and it is especially beautiful on a warm sunny day. But I am a water-baby at heart and really enjoy consistently warm days. I sometimes long for the island life, when I could walk down to the ocean and jump in for a swim, or go surfing, diving, sailing, or just about any other waterborne activity. So these unseasonably cool temperatures and consistently gray skies (you know it's bad when you can readily identify 265 different shades of gray) are taking their toll. I would rather be outside doing something, but more often than not I find myself curled up on the couch reading a book, or falling asleep and dreaming of warmer climes.
Some automated test suites also succumb to sleepiness. One of the most common problems in test automation is synchronizing the automated test with the application or service under test. Automated tests sometimes race through their instructions faster than the application or service can respond, leading to false positives (the test reports a failure, but there is no bug). Oftentimes testers apply a Band-Aid to the problem by sprinkling Sleep() methods throughout their test code in an attempt to synchronize the test code and the application or service under test. Unfortunately, these Sleep() methods can negatively impact the performance of an automated test and artificially inflate the time of the automated test pass.
In general, sprinkling Sleep() statements in test code is not recommended. A Sleep() method halts execution of the test code for the specified period of time regardless of whether the system under test is ready or not. Simply put, the more a test "sleeps," the longer that test takes to run. The time required to run an automated test suite may not seem important. But if you have daily builds and your automated regression test suite takes more than 24 hours, then obviously you are not running your full regression suite on each daily build. If you have a daily build and your automated build verification test (BVT) suite takes 8 hours, that means testers are probably spending some amount of time testing on 'yesterday's build,' and if they find a bug the developers will likely tell them to try to repro it on today's build after the BVT is complete. Many of our teams partnered with developers to run a subset of our functional tests as part of a pre-check-in test suite before checking in code changes, and we agreed that the total time for a pre-check-in test suite, including unit tests, should not exceed 15 minutes. The bottom line is that the time it takes to run an automated test suite matters: the longer a test takes to execute, the longer it takes to get results.
In various code reviews of test code I have found stand-alone Sleep() statements of as much as 2 minutes, Sleep() statements that were inadvertently left in the code during test development or troubleshooting, and Sleep() statements in polling loops that paused for 5 seconds or more per iteration. As a general rule of thumb, wrapping a Sleep() method in a polling loop rather than making a stand-alone call to Sleep() is a best practice in test automation. And it is almost always better to increase the poll count (the number of retries of the polling loop) and decrease the Sleep() time than to have a long sleep time with a small number of retries.
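To make that trade-off concrete, here is a minimal sketch. The IsReady check and the specific intervals are illustrative assumptions (here the simulated system becomes ready after 300 milliseconds), not part of any particular framework. Polling every 100 milliseconds with 50 retries has the same 5-second worst case as a single Sleep(5000), but the test resumes within roughly 100 milliseconds of the system becoming ready.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SleepVsPoll
{
    // Simulated readiness check: this stand-in "system" becomes ready
    // 300 ms after start. Real test code would query the AUT instead
    // (for example, a process or service responding).
    static Stopwatch clock = Stopwatch.StartNew();
    static bool IsReady() { return clock.ElapsedMilliseconds >= 300; }

    // Short sleeps in a loop: same 5-second ceiling
    // (50 retries x 100 ms), but returns soon after readiness.
    public static bool WaitPolling()
    {
        for (int retry = 0; retry < 50; retry++)
        {
            if (IsReady())
                return true;
            Thread.Sleep(100);
        }
        return false;
    }

    static void Main()
    {
        bool ready = WaitPolling();
        // With the simulated system, this completes in well under a second
        // instead of the full 5 seconds a stand-alone Sleep(5000) would cost.
        Console.WriteLine("ready: " + ready);
    }
}
```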
For example, some tests stop execution for some period of time (often using a magic number pulled right out of the blue) to give the AUT time to launch and be ready to respond, to let a service start, or to account for network latency.
// Launch AUT
Process myAut = new Process();
myAut.StartInfo.FileName = autName;
myAut.Start();

// Stop the automation for 5 seconds while
// the AUT launches (or wait for network delays)
Thread.Sleep(5000);

// Start executing the test
// This assumes the system's state is in the
// expected condition to conduct the test
Of course, this scripted test blindly assumes the system will be in the proper state after a delay of 5 seconds. If it is not, the test will proceed anyway and will likely fail miserably. This is where a polling loop can help synchronize your test with the system: it either allows the test to execute as soon as the system is ready to respond, or exits the test if the system takes too long to respond.
// Launch AUT
Process myAut = new Process();
myAut.StartInfo.FileName = autName;
myAut.Start();
WaitForAutReady(myAut);

// Start executing the test
try
{
    // test steps go here
}
catch (Exception msg)
{
    // handle exceptions
}

public static void WaitForAutReady(Process aut)
{
    int retry = 50;
    while (!(retry-- == 0))
    {
        if (aut.Responding)
        {
            return;
        }
        Thread.Sleep(100);  // 50 retries x 100 ms == 5-second ceiling
    }
    throw new Exception(
        "AUT takes more than 5 seconds to respond");
}
In this example the polling loop waits up to a predetermined amount of time for the AUT or system to get into the desired state, but
- the test resumes execution as soon as the system is in the desired state, rather than waiting out the full allotted time (in other words, if the AUT is responding within 1 second, the WaitForAutReady method returns and the test starts doing its thing)
- if the system does not reach the necessary state within the allotted time, the method throws an exception, test execution stops, and the test case result is blocked or indeterminate (something went wrong during test execution that prevents the test from determining a pass/fail result).
This is a rather simple example, but in most situations a polling loop is far better than a simple Sleep() statement that stops test execution for a fixed period of time. Polling loops can be used to
- help synchronize test execution with the system state
- reduce overall test execution time by allowing the test to run when the system state is ready
- help troubleshoot race conditions
- prevent false positives in your test results
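Rather than rewriting the loop for every condition, you can factor the pattern into a reusable helper that takes the readiness check as a delegate. Here is one sketch of that idea; the Poll.Until name, the 100 ms default interval, and the boolean return are my assumptions, not a built-in API.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class Poll
{
    // Repeatedly evaluate 'condition' until it returns true or 'timeout'
    // elapses. Returns false on timeout so the caller can mark the test
    // result as blocked/indeterminate instead of failing it outright.
    public static bool Until(Func<bool> condition, TimeSpan timeout,
                             int intervalMs = 100)
    {
        Stopwatch sw = Stopwatch.StartNew();
        while (sw.Elapsed < timeout)
        {
            if (condition())
                return true;
            Thread.Sleep(intervalMs);
        }
        return condition();  // one last check at the deadline
    }
}

class Demo
{
    static void Main()
    {
        // A condition that is immediately true returns right away;
        // a condition that never becomes true costs only the timeout.
        bool ok = Poll.Until(() => true, TimeSpan.FromSeconds(5));
        Console.WriteLine("ready: " + ok);
    }
}
```

With a helper like this, the earlier example collapses to a single line, e.g. checking whether the launched process is responding within 5 seconds and blocking the test if it is not.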
So stop putting your automated tests into periodic comatose states; use the system's state to determine when a test needs to rest for a few milliseconds to let the system catch up.