
Sleepy Automated Tests

Here we are almost halfway through the year. Once again Seattle has been unseasonably cool, with fewer than 10 days above 70 degrees so far. The Seattle area is nice, and it is especially beautiful on a warm sunny day. But I am a water-baby at heart and really enjoy consistently warm days. I sometimes long for the island life, when I could walk down to the ocean and jump in for a swim, or go surfing, diving, sailing, or just about any other waterborne activity. So these unseasonably cool temperatures and consistently gray skies (you know it’s bad when you can readily identify 265 different shades of gray) are taking their toll. I would rather be outside doing something, but more often than not I find myself curled up on the couch reading a book, or falling asleep and dreaming of warmer climes.

Some automated test suites also succumb to sleepiness. One of the most common problems with automated tests is synchronizing the automated test and the application or service under test. Automated tests sometimes race through their instruction set faster than the application or service can respond, leading to false positives (the test reports a failure, but there is no bug). Oftentimes testers use a Band-Aid approach to solve the problem by sprinkling Sleep() methods throughout their test code in an attempt to synchronize the test code and the application or service under test. Unfortunately, these Sleep() methods may negatively impact the performance of an automated test and artificially increase the duration of the automated test pass.

In general, sprinkling Sleep() statements in code is not highly recommended. Sleep() methods halt execution of the test code for the specified period of time regardless of whether the system under test is ready or not. Basically, the more a test “sleeps,” the longer it takes that test to run. The time required to run an automated test suite may not seem important. But if you have daily builds and your automated regression test suite takes more than 24 hours, then obviously you are not running your full regression suite on each daily build. If you have a daily build and your automated build verification test (BVT) suite takes 8 hours, that means testers are probably spending some amount of time testing on ‘yesterday’s build,’ and if they find a bug the developers will likely tell them to try to repro it on today’s build after the BVT is complete. Many of our teams partnered with developers to run a subset of our functional tests as part of a pre-check-in test suite before checking in code changes. We agreed that the total time for a pre-check-in test suite, including unit tests, should not exceed 15 minutes. The bottom line is that the time it takes to run an automated test suite is important, and the longer a test takes to execute, the longer it takes to get the results.

In various code reviews of test code I have found stand-alone Sleep() statements of as much as 2 minutes, Sleep() statements that were inadvertently left in the code during test development or troubleshooting, and Sleep() statements in polling loops of 5 seconds or more. As a general rule of thumb, wrapping a Sleep() method in a polling loop rather than making a stand-alone call to a Sleep() method is a best practice in test automation. And it is almost always better to increase the poll count (the number of retries of the polling loop) and decrease the Sleep() time rather than have a long Sleep() time with a small number of retries.
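As a quick sketch of that tradeoff (the IsReady() check below is a hypothetical stand-in for whatever condition the test is actually waiting on), both loops give up after roughly 5 seconds, but the first resumes within about 100 milliseconds of the system becoming ready, while the second may idle for up to a full second after the system is already ready.

    // Preferred: many retries, short sleep - wastes at most ~100 ms once ready.
    int retries = 50;
    while (retries-- > 0 && !IsReady())
    {
      System.Threading.Thread.Sleep(100);
    }

    // Worse: few retries, long sleep - can waste up to ~1 second once ready.
    int coarseRetries = 5;
    while (coarseRetries-- > 0 && !IsReady())
    {
      System.Threading.Thread.Sleep(1000);
    }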

For example, some tests will stop execution for some period of time (often using a magic number pulled right out of the blue) to give the application under test (AUT) time to launch and become ready to respond, to give a service time to start, or to account for network latency.

        // Launch AUT
        Process myAut = new Process();
        myAut.StartInfo.FileName = autName;
        myAut.Start();

        // Stop the automation for 5 seconds while
        // the AUT launches (or wait for network delays)
        System.Threading.Thread.Sleep(5000);

        // Start executing the test
        // This assumes the system's state is in the
        // expected condition to conduct the test

Of course, this scripted test blindly assumes that the system will be in the proper state after a delay of 5 seconds. If it is not, the test code will press on regardless and the test will likely fail miserably. This is where a polling loop can be used to help synchronize your test with the system: it either allows the test to execute as soon as the system is ready to respond, or exits the test if the system is taking too long to respond.

      try
      {
        // Launch AUT
        Process myAut = new Process();
        myAut.StartInfo.FileName = autName;
        myAut.Start();
        WaitForAutReady(myAut);

        // Start executing the test
      }
      catch (Exception ex)
      {
        Console.Write(ex.ToString());
        // handle exceptions
      }
    } // closes the enclosing test method (not shown)

    public static void WaitForAutReady(Process aut)
    {
      int retry = 50;
      while (retry-- > 0)
      {
        if (aut.Responding)
        {
          return;
        }
        System.Threading.Thread.Sleep(100);
      }

      throw new Exception(
        "AUT took more than 5 seconds to respond");
    }

In this example we use a polling loop to wait up to some predetermined amount of time for the AUT or system to get into the desired state, and:

  • the test will resume executing as soon as the system is in the desired state, prior to the allotted time (in other words, if the AUT is responding within 1 second, the WaitForAutReady method will return and the test will start doing its thing)
  • if the system does not achieve the necessary state within the allotted time, the method throws an exception, test execution stops, and the test case result is blocked or indeterminate (something went wrong during test execution that prevents the test from determining a pass/fail result).

This is a rather simple example, but in most situations the use of a polling loop is way better than a simple Sleep() statement that stops test execution for some fixed period of time (a reusable helper is sketched after the list below). Polling loops can be used to

  • help synchronize test execution with the system state
  • reduce overall test execution time by allowing the test to run when the system state is ready
  • help troubleshoot race conditions
  • prevent false positives in your test results
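To make the pattern reusable across situations like these, the polling loop can be factored into a small helper. The sketch below is my own illustration, not code from the post: the Poll.WaitFor name, the WaitForTestPreconditions example, the logPath parameter, and the retry counts are all assumptions.

    // A minimal sketch of a reusable polling helper; all names, timeouts,
    // and example conditions here are illustrative assumptions.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    public static class Poll
    {
      // Returns true as soon as condition() is satisfied, checking every
      // pollMilliseconds up to maxRetries times; returns false on timeout.
      public static bool WaitFor(Func<bool> condition, int maxRetries, int pollMilliseconds)
      {
        while (maxRetries-- > 0)
        {
          if (condition())
          {
            return true;
          }
          Thread.Sleep(pollMilliseconds);
        }
        return false;
      }

      // Illustrative usage: wait for an AUT window and a log file before testing.
      public static void WaitForTestPreconditions(Process aut, string logPath)
      {
        // Wait up to ~5 seconds for the AUT's main window to appear.
        bool hasWindow = WaitFor(() =>
        {
          aut.Refresh(); // discard cached process information
          return aut.MainWindowHandle != IntPtr.Zero;
        }, 50, 100);

        // Wait up to ~10 seconds for a log file the AUT is expected to write.
        bool hasLog = WaitFor(() => File.Exists(logPath), 100, 100);

        if (!hasWindow || !hasLog)
        {
          // Surface a blocked/indeterminate result rather than a failure.
          throw new Exception("System state was not ready within the allotted time");
        }
      }
    }

The same helper could guard any other synchronization point a test depends on, such as a service responding or a UI element becoming visible, while keeping the worst-case wait explicit in one place.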

So, stop putting your automated tests into periodic comatose states, and use the system’s state to determine when a test needs to rest for a few milliseconds to let the system catch up.

2 Comments

  1. X11::GUITESTER wrote:

    Great point! This is one of the many significant items I picked up in the Software Test Automation course and I constantly find examples of this in test automation and even in production code. While it may not be entirely avoidable, it should be discouraged if possible. Don’t be a lazy tester! :-)

    Friday, June 15, 2012 at 9:01 AM | Permalink
  2. Ahmet wrote:

    Hi BJ,
Great point indeed. Don’t you think that the best practice would actually be to register for an event and make sure (via testability / code hooks) that the event is raised whenever the system is in the state we are waiting for?

    Tuesday, July 17, 2012 at 4:59 PM | Permalink

3 Trackbacks/Pingbacks

  1. A Smattering of Selenium #86 « Official Selenium Blog on Wednesday, June 13, 2012 at 3:58 AM

    [...] Sleepy Automated Tests talks about polling loops. I like the use of the phrase periodic comatose states [...]

  2. Testers Caught Sleeping on the Job « Expert Testers on Tuesday, September 11, 2012 at 4:57 AM

    [...] that BJ Rollison recently wrote about the same topic, only much more eloquently, on his own blog. See for yourself. [...]

  3. [...] • I.M.Testy opens a conversation about synchronizing automated tests with the application under test, offering an alternative to calling Sleep(). [...]
