Archive for the ‘Testing Tools’ Category
The ability of our software products to function correctly in a global environment is increasingly important. Our software should support the national conventions used by the various locales around the globe. For example, in some regions of the world the period character is used as the number group separator and the comma is used as the decimal symbol (radix). European calendars generally start on Monday rather than Sunday, which is customary in the United States. Era-based calendars are still in common use in Japan and Korea, and date formats, date element order, and time formats also vary by region or locale. As testers we need to test our software to ensure our customers around the world can use the national conventions they are accustomed to, and not force them into a US-centric, one-size-fits-all format or standard.
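As a toy illustration of that radix and grouping difference (a sketch only; real code should rely on the platform's locale or NLS APIs rather than string surgery like this), the same value can be rendered under US and German-style conventions:

```python
def localize_number(value: float, group_sep: str, radix: str) -> str:
    """Render a number with a given digit-group separator and decimal symbol."""
    us = f"{value:,.2f}"  # US conventions: comma groups, period radix
    # Swap separators via a placeholder so the two replacements don't collide.
    return us.replace(",", "\0").replace(".", radix).replace("\0", group_sep)

print(localize_number(1234567.89, ",", "."))  # 1,234,567.89 (US style)
print(localize_number(1234567.89, ".", ","))  # 1.234.567,89 (German style)
```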
There are several settings that we can modify and customize for more robust globalization testing, such as number, currency, time, and date formats. Modifying these settings can help us test that our application is globalized to use the National Language Support (NLS) APIs provided by the system. Although a user would change these settings using the Regional Options user interface property sheets, if the purpose of our test is not to emulate user interaction, then modifying the custom regional settings programmatically is more efficient.
Last year I talked about how to programmatically change the settings in the Region and Language control panel applet when doing globalization testing. Unfortunately, the code sample provided in the previous post was appropriate for Windows XP and earlier. For Windows Vista and later, things have changed a bit. Also, the previous sample tried to be one-size-fits-all and relied on the test developer to set the appropriate lcType constants and lcData argument variables required by the Win32 function SetLocaleInfo().
This time, I decided to simplify things a bit and wrapped some methods to call the appropriate Win32 API functions and properties to set lcType and lcData values to make it easier to incorporate into automated tests. I also separated the various advanced custom formats for Region and Language options into separate classes. Of course, I have a beta version of an automation library (DLL) called GlobalTest.DLL on my website that testers can use in their automated test cases, but this week let’s look at the class for setting custom date formats.
Making these changes programmatically still requires the Win32 SetLocaleInfo() function. MSDN also states this function modifies the specified values for all applications, so to prevent potential issues in other applications running on the system we should also broadcast the WM_SETTINGCHANGE message, which requires the Win32 PostMessage() function. Since we are using Platform Invocation Services (P/Invoke) to call these unmanaged functions, we should put them in a separate class that I’ve called NativeMethods, along with all of the constant values these methods require.
The class for the custom wrapper method is TestingMentor.TestTool.GlobalTester.SetDateFormat. There is a public enumeration for the short date and long date constants. One of these values must be assigned to the SetDateType property. The other property that must be set is the SetDateFormatPicture. The big change in the SetLocaleInfo() function is that the lcData type is a null-terminated string that MSDN refers to as a format picture. Current versions of Windows allow users to customize the order of the month, day and year, the format for each, and even allow different separators between the date elements. The format picture enables the user to select various format types in different orders for either the short date or the long date. See MSDN’s Month, Day, Year and Era Format Pictures for the various supported format types.
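As a rough sketch of how format pictures drive rendering, here is a hypothetical translator in Python (the token table is a small illustrative subset of the Month, Day, Year format pictures; it is my own mapping and not part of the GlobalTester library) that converts a Windows-style format picture to strftime directives:

```python
import re
from datetime import date

# Illustrative subset of Windows date format-picture tokens mapped to
# strftime directives (an assumption for this sketch, not the real API).
TOKENS = {
    "dddd": "%A", "ddd": "%a", "dd": "%d",
    "MMMM": "%B", "MMM": "%b", "MM": "%m",
    "yyyy": "%Y", "yy": "%y",
}
# Longest tokens first so "dddd" is not consumed as two "dd" tokens.
_pattern = re.compile("|".join(sorted(TOKENS, key=len, reverse=True)))

def render(picture: str, d: date) -> str:
    """Render date d according to a Windows-style format picture."""
    return d.strftime(_pattern.sub(lambda m: TOKENS[m.group(0)], picture))

print(render("dddd, MMMM dd, yyyy", date(2009, 2, 25)))  # Wednesday, February 25, 2009
print(render("yyyy-MM-dd", date(2009, 2, 25)))           # 2009-02-25
```

Note how the order of elements and the separator characters come entirely from the picture string, which is exactly what makes format pictures flexible enough to express regional date conventions.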
Once the SetDateType and SetDateFormatPicture properties are assigned, we simply call the ChangeDateFormat() method to change the settings and broadcast the message to the system. For example, a tester could change the default long date format in an automated test to determine globalization support in the application under test. Customizing the date format is useful if the application under test uses a date string in any way; for example, if the application includes a function to insert a date string in an edit control, if the date is printed as a header or footer in a document, or if a date string is appended to a record.
Programmatically changing the date format is an easy way testers can customize date formats in their automated tests without having to manipulate the controls on the Region and Language property sheet. Also note that since the format picture is a string, the order of the supported date format types is controlled by their arrangement in the string, and the separator characters can differ between the day and month and between the month and year.
Modifying national conventions is one way to test for globalization support upstream and should be done early in the testing cycle rather than relying on a separate globalization testing cycle.
Next week I will discuss customizing the time format. Also, check out the beta release of the GlobalTester automation library that has this functionality and more and let me know what you think.
Originally Published Wednesday, February 25, 2009
I value static test data that is derived from historical failure indicators, or representative of typical end-users. But, of course a problem with static test data is that it only provides a limited set of all possible data, and becomes stale or provides little new information after multiple iterations of the test. So, I am a proponent of using random data in well-designed tests. Of course, recklessly generating random data is just plain dumb and potentially results in numerous false positives. But, when the data set is well defined and decomposed into equivalence class subsets then it is possible to generate random data that is representative of all possible data elements; probabilistic stochastic test data!
Last week I released an update to the test tool Babel for generating random strings of Unicode characters. Babel is a useful tool for comprehensive positive or negative testing of a textbox and other edit controls, and API parameters that take string arguments. Using probabilistic stochastic test data significantly increases the breadth of data coverage during a test cycle which increases the probability of exposing anomalies in string parsing and other string manipulation algorithms. But, when using characters from across the Unicode spectrum anomalies are usually caused by a specific character code point (or code points for surrogate pair characters), or combinations of characters.
Of course, telling a developer that a string composed of the characters ꁲᱚRבּ䍳܁쭤ኳ causes an unexpected error would most likely be met with that classic deer-in-headlights look, followed by muttering such as "That’s not a real string" and "Nobody would ever enter such a string." Oftentimes developers are likely to shun random strings as test data, and managers might claim they are not representative of ‘real’ customer scenarios. So, the professional tester knows that instead of simply arguing in favor of random string testing we must troubleshoot the string to identify the specific character code point or code point combination causing the error. While a ‘real’ customer may not be likely to enter a string of random characters from multiple language scripts, the problem is likely caused by a single character (or sometimes a combination of character code points), and there is some probability of a customer somewhere in the world entering that problematic character! So, as professionals we must find that specific problematic character.
To help professional testers decode each character in a string to its code point value I recently completed a new tool called String Decoder. This test tool is an updated version of my old Str2Val tool (which had some serious problems when converting strings with surrogate pair characters). String Decoder will decode Unicode characters (including surrogate pairs) to their hexadecimal UTF-16 (Big or Little Endian), UTF-8, UTF-7 encoding values, or an integer value (UTF-32).
For example, String Decoder can display the UTF-16 Big Endian encoding values for each character of a given string in its Results list.
Once the specific character code point or combination is identified, the tester can now tell the developer exactly what Unicode character or integer value is causing the anomaly. For example, it is much better to state a Unicode value of U+13BD is causing unexpected functionality as compared to trying to explain how to input the Cherokee letter MU or saying "just enter this character Ꮍ."
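A minimal sketch of the kind of report String Decoder produces can be written in a few lines of Python (the field names here are my own, not the tool's):

```python
# Decode each character of a string to its code point and several encodings.
def decode_report(text: str):
    rows = []
    for ch in text:  # Python 3 iterates by code point, so a surrogate
        rows.append({  # pair arrives here as one supplementary character
            "char": ch,
            "code_point": f"U+{ord(ch):04X}",
            "utf16_be": ch.encode("utf-16-be").hex(" "),
            "utf8": ch.encode("utf-8").hex(" "),
            "utf32_int": ord(ch),
        })
    return rows

for row in decode_report("Ꮍ\U00010330"):
    print(row["code_point"], row["utf16_be"], row["utf8"], row["utf32_int"])
# U+13BD 13 bd e1 8e bd 5053
# U+10330 d8 00 df 30 f0 90 8c b0 66352
```

The second character (Gothic letter AHSA, U+10330) shows why surrogate-pair handling matters: its UTF-16 form is the pair D800 DF30, but the defect report should cite the single code point U+10330.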
String Decoder can also be used to compare different Unicode transformation format encodings, or convert between Unicode hex values and 32-bit integer values of characters.
Let me know what you think!
Originally Published Tuesday, February 17, 2009
One of the biggest challenges in input testing is the sheer number of potential characters and the virtually infinite number of permutations of those characters in different character positions in a string. Even if we know about the myriad language scripts used throughout the world, manually generating characters from multiple language groups would be excruciatingly inefficient.
Since any modern application should support Unicode characters, we can assert the strings “abcdefg” and “ڄƥ藖꼩昨” are equivalent for most input testing requiring a Unicode string. So, random string test data generation is useful for easily increasing the breadth of the test data, and also for testing the robustness of the application’s ability to process complex data streams.
Babel 2.0 is a free test tool, and one of the few random string generators that can generate a string of characters across the entire Unicode spectrum; since its initial release in 2006 it has been widely popular. So, I am happy to announce that an updated Babel 2.0 is released! I know this constitutes a shameless plug…but sometimes it helps to plug tools we’ve made that can benefit other testers or developers.
Unlike many string generators that only produce a string of random ASCII characters, Babel can produce a string of random Unicode characters defined in the Unicode 5.1 specification, including surrogate pair characters (which often expose problems in various text boxes…hint, hint). Additional updates to Babel 2.0 include:
- Updated to the Unicode 5.1 spec (including new script groups and character code points)
- Ability to include/exclude combining character code points
- Ability to include/exclude reserved NetBIOS characters
- Custom range allows character generation from 0x01 through 0xFFFF
- Ability to generate strings with a max length of 100,000 characters
- Improved distribution of characters from the selected language script groups
Essentially, Babel generates random strings as follows: one script group is randomly selected from all selected script group nodes, and all code points assigned to that script group are put into a collection. Next, one character is randomly selected from that collection and appended to a string. This process continues until the string length equals the specified number of characters.
Better distribution of character selection across multiple script groups is achieved by preventing the same script group from being selected again before at least half of the other specified groups have been selected. This means that as long as more than one script group node is selected, the most recently used script group is removed from the random selection process until at least half of the other script groups have been chosen. This provides greater distribution compared to simple random generation.
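The flow described above can be sketched as follows (the four script groups and the "bench a group until half the others are used" bookkeeping are illustrative assumptions for this sketch, not Babel's actual Unicode 5.1 tables or code):

```python
import random

# Illustrative script groups as code-point ranges (assumptions, not
# Babel's real Unicode 5.1 data tables).
SCRIPTS = {
    "Basic Latin": range(0x0041, 0x005B),  # A-Z
    "Greek": range(0x0391, 0x03A2),        # Alpha-Rho
    "Cyrillic": range(0x0410, 0x0430),     # А-Я
    "Hiragana": range(0x3041, 0x3097),
}

def babel_string(length: int, rng: random.Random) -> str:
    available = list(SCRIPTS)  # groups currently eligible for selection
    benched = []               # recently used groups, waiting to return
    out = []
    for _ in range(length):
        group = rng.choice(available)
        out.append(chr(rng.choice(SCRIPTS[group])))
        # Bench the chosen group until roughly half of the other groups
        # have been selected, which spreads characters across scripts.
        available.remove(group)
        benched.append(group)
        if len(benched) > len(SCRIPTS) // 2:
            available.append(benched.pop(0))
    return "".join(out)

print(babel_string(20, random.Random(2008)))
```

A side effect of the benching rule is that the same script group is never chosen twice in a row, which is exactly the improved distribution compared to naive random selection.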
The download also includes the Babel.DLL (and the dependent UnicodeData.DLL) for test automation. The older methods are deprecated and no longer supported. The new methods have been simplified and now include:
public static string Polyglot (int, int, bool, bool, bool, bool, bool)
Returns a string of random Unicode characters in all Unicode script groups based on a specified seed value.
public static string Polyglot (int, bool, bool, bool, bool, bool, out int)
Generates a random seed value, returns a string of random Unicode characters in all Unicode script groups, and passes the seed value back via the out parameter.
public static string Polyglot (int, int, bool, bool, bool, bool, bool, char, char)
Returns a string of random Unicode characters in all Unicode script groups based on a specified seed value.
public static string Polyglot (int, bool, bool, bool, bool, bool, char, char, out int)
Generates a random seed value, returns a string of random Unicode characters in all Unicode script groups, and passes the seed value back via the out parameter.
Get the new release of Babel 2.0!
Originally Published Thursday, September 20, 2007
For some time I have wanted to add surrogate pair character support to a tool I developed called GString, and this week I managed to find some time to do that work and more! As I developed the methods for surrogate pair support I rewrote (refactored in developer parlance) some of the previous methods to reduce complexity. And wouldn’t you know it…the simple act of refactoring exposed some otherwise hard to find defects (and one pretty obvious one). I discovered these defects because I had to approach the problem space from a different perspective, and that perspective (working primarily with int types instead of char types) exposed the problems.
So, I decided to retire the GString code base, and I ported what I could into a new tool named Babel (and this is my shameless plug for that tool). I know it is not ‘customer friendly’ when someone renames a tool, especially when it comes with a library for test automation, because now the ‘customer’ has to change their references in order to use the functionality in the new DLL. However, the name Babel seems more fitting given the purpose of this tool: generating random characters across the Unicode spectrum of language scripts. Besides, Groovy (a language for the Java platform) also has a class called GString, and I didn’t want to cause any confusion.
The obvious bug fixed in Babel is a problem that occurred when generating characters in the ASCII-only range. For some bizarre reason I neglected to exclude Japanese half-width katakana characters (and for an even more bizarre reason I failed to find it, which is a really good example of why unit testing only goes so far and we really need a second set of eyes for sufficient testing). One not-so-obvious defect was the exclusion of a range of code points between U+1A20 and U+1AFF instead of U+1B80 and U+1CFF. This was a classic boundary bug! But unless we did a formal code review it is unlikely this one would ever have been found. The other not-so-obvious defect that has been fixed was the program’s failure to exclude some valid Unicode code points that have not been assigned a character when the user selected to exclude unassigned code points (again, a problem similar to that described above).
The good news is these are now fixed, and the new Babel tool also includes optional support for Unicode surrogate pair characters in the range of U+10000 through U+10FFFF. Also, I included a feature to save the output to a text file rather than having to copy and paste. The installation package includes a desktop tool, a DLL for test automation, and the user’s guide, and can be found at Testing Mentor.
If you encounter any problem using the tool, or if you have any feedback please let me know. Enjoy!
Originally Published Wednesday, May 30, 20
I am not a big fan of static test data, and this month’s issue of Software Testing and Performance magazine published an article I wrote outlining one approach for generating random string data (although the basic concepts can be used for generating other types of random data).
Unfortunately, it appears that some of the numbers got a little screwed up and the printer did not superscript the exponents correctly so the numbers in the third paragraph are probably looking pretty strange. So, to clarify, the paragraph should read:
Using only the characters ‘A’ – ‘Z’, the total number of possible character combinations for a filename with an 8-letter base name and a 3-letter extension is 26^8 + 26^3, or 208,827,082,152. If we were assigned to test long filenames on a Windows platform using only ASCII characters (see Table 1), the number of possibilities increases because there are 86 possible characters we can use in a valid filename or extension, and for a maximum base filename length of 251 characters with a 3-character extension the number is 86^251 + 86^3. Trust me, that is one big number.
(NOTE: There have been several assertions regarding the above formula for determining the number of tests, so here is the explanation. Essentially, the Windows platform file system treats the base filename and the file extension as 2 separate components, and there is no interaction or dependency between these two components. (For example, we cannot save a filename as CON.txt, but we can save a filename as myFile.CON.) Since there are no dependencies between the base filename component and the extension component, they are treated as 2 independent parameters, which mathematically results in 26^8 + 26^3, or 208,827,082,152 tests if we elected to test all possible combinations of the base filename component with a nominal valid extension, then test all possible extension component combinations with a nominal valid base filename. One could argue we could combine the 17,576 unique 3-character extension combinations with various combinations of the 8-character base filename component to reduce the overall number of tests by 17,576; however, I chose not to use that approach and instead test each parameter independently. If we mistakenly assumed dependency or inter-relationship between the base filename and extension components of a filename on the Windows platform, testing all combinations (26^8 * 26^3, or simply 26^11) would result in approximately 3,670,135,659,905,624 redundant tests (if we could do exhaustive testing). This is where in-depth knowledge of the ‘system’ really pays off.)
Of course, the filename length and extension length are variable. Also, 251 characters assumes a base filename component length from the root directory (it does not take into account the MAXPATH constant). So, the total number of combinations using only ASCII characters is much greater, because varying the base filename component length with a ‘default’ 3-letter extension from the root directory actually gives 86^251 + 86^250 + 86^249 + 86^248 + 86^247 + … + 86^1. Then, of course, vary the length of the extensions, and the total number of combinations increases even further. But all this is only to provide some sense of the magnitude of the testing problem.
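The arithmetic in the preceding paragraphs is easy to sanity-check with Python's arbitrary-precision integers:

```python
# Independent components add; dependent components would multiply.
independent = 26**8 + 26**3
assert independent == 208_827_082_152

exhaustive = 26**11  # if base name and extension were treated as dependent
assert exhaustive - independent == 3_670_135_659_905_624

# Variable-length base names (1 to 251 characters) over 86 valid characters:
variable_total = sum(86**k for k in range(1, 252))
print(len(str(variable_total)), "digits")  # 486 digits
```

A 486-digit count of combinations makes the point about exhaustive testing rather vividly.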
Also, the equivalence class table (Table 2) is simplified and does not include reserved device names. For example, Windows will/should prevent a user from saving a filename of LPT1, or COM6, or CON, etc. (The behavior for saving filenames with strings composed of reserved device names is different on Windows XP and Windows Vista…Vista finally got it right!)
Unfortunately, I did not get a chance to read the edited copy before print, but I think the basic idea comes through. I hope you find value in using intelligent random test data in your testing, and I would be interested in hearing your feedback.
Originally Published Sunday, December 24, 2006
Well, for those of you living outside the Pacific Northwest, you are probably unaware of the recent wind storm with winds gusting to 60+ miles per hour that left more than 1 million people in the state without power. The damage was pretty extensive, and since I live in a fairly remote area I was without power for more than 7 days and without the Internet for almost 9 days. I do have a generator, but it hadn’t been used in almost 4 years. Sure, I started it every 6 months for about 15 minutes each time, but after the first full day of operation the generator started doing weird things. So, during the past week I have become pretty good at fixing generators (mine and my neighbors’), tracing electrical systems, troubleshooting furnace problems, splitting a lot of firewood, cutting up fallen trees, and repairing fences.
After the sun set (which is quite early) I had little else to do (other than making sure nobody stole my generator), so between stoking the fire I started developing a DLL for Unicode string generation in automated tests based on the GString utility. While reviewing the data tables I created for the GString utility against the Unicode handbook I noticed some holes (OK…defects). Some of the boundaries for code ranges that are not assigned to any Unicode script group were incorrect. (That will teach me to use a web page with a listing put together by a web developer rather than using the Unicode handbook.) I also found a problem that prevented unassigned code points from being generated even if the Only use assigned code points check box was unchecked. So, the (hopefully final) update to GString is complete, including GString.DLL! Along with the massive overhaul of the Unicode data tables, the new GString package available from my personal website also includes a new DLL for anyone needing to generate strings of random Unicode characters in test automation. The GString zip file also includes detailed documentation on the utility and the DLL usage. Let me know if you have any questions about the tool or about using random string generation in your testing.
Well, now back to (mostly) normal life.
Originally Published Sunday, November 12, 2006
After a week in Boston presenting at the 3rd Software Testing and Performance Conference I am relaxing in Baltimore (where I grew up) visiting family and friends. For the second year in a row I presented a workshop on functional and structural testing techniques, and also presented a double-track session on GUI test automation using C#. One speaker cancelled at the last moment, so I volunteered to present the globalization testing basics talk I presented at STAR West a few weeks before. At both conferences I promised the attendees a tool to generate strings of random Unicode characters, and while relaxing along the waterfront of Baltimore’s inner harbor (the weather was quite beautiful this weekend) I managed to finish the tool (at least I am meeting the functional requirements I wanted to achieve).
So, without further ado, on my site Software Testing Mentor is a new section for tools and utilities where you will find the tool I have named "GString." GString will generate random strings of Unicode characters between the ranges of U+0020 and U+FFFF up to 65,535 characters in length either as a fixed length string or a random length string. The ranges of Unicode code points that are not assigned to a language script, and special areas such as Private use and surrogate areas are excluded from the generated strings. The resultant string can be copied to the clipboard and pasted into the edit control you are testing. (I am already thinking that a 2.0 version will populate the edit control that has focus automatically.)
GString is written in C# and requires the 2.0 .NET runtime available from Microsoft if you don’t already have it installed on your computer.
Well, back up to Boston for a few days before heading home. If you have any comments about the tool (or find any defects) please let me know.
Originally Published Wednesday, October 25, 2006
Last week I went to StarWest as a presenter and as a track chair to introduce speakers. Being a track chair is wonderful because you get to interact more closely with the other speakers. Anyway…one of the speakers I introduced was Jon Bach. Jon is a good public speaker, and I was pleasantly surprised that he was doing a talk on the allpairs testing technique (also known as pairwise testing or combinatorial analysis). I wish Jon had dedicated a little more time to the specifics of the technique during his talk and made the audience more aware of available tools and information to investigate further, but I think he successfully raised general awareness of and interest in pairwise testing as an effective testing technique.
Pairwise testing is one approach to managing the potential explosion in the number of tests when dealing with multiple parameters whose variables are semi-coupled or have some dependency on the variable states of other parameters. For example, in the font dialog of MS Word there are 11 checkboxes for various effects such as superscript, strikethrough, emboss, etc. Obviously these effects affect how the characters in a particular font are displayed and can be used in multiple combinations such as Strikethrough + Subscript + Emboss. The total number of combinations of effects is the Cartesian product of the variables for each parameter, or 2^11 = 2,048 in this example. This doesn’t include different font types, styles, etc., which are also interdependent. So, you can see how the number of combinations increases rapidly, especially as additional dependent parameters are included in the matrix.
The good news is the industry has a lot of evidence to suggest that most software defects occur from simple interactions between the variables of 2 parameters. So, from a risk based perspective where it may not be feasible to test all possible combinations how do we choose the combinations out of all the possibilities? Two common approaches include orthogonal arrays and combinatorial analysis.
But, true orthogonal arrays require that the number of variables is the same for all parameters. (Rarely true in software.) It is possible to create "mixed orthogonal arrays" where some combinations of variables will be tested more than once. For example, if we have 5 parameters, and one parameter has 5 variables while the remaining 4 parameters have only 3 variables each, we can see from the orthogonal array selector (available on the FreeQuality website) that the size of the orthogonal array is L25 (which basically means 25 tests are required, still significantly less than the total number of combinations, 405).
The other approach is combinatorial analysis, often referred to as pairwise or allpairs testing because the most common form reduces the total number of combinations in such a way that each variable of each parameter is tested with each variable of every other parameter at least once. In the above example, the number of tests would be reduced to about 16. (Note: some tools will give slightly different results.) However, some tools (such as Microsoft’s PICT) also allow for more complex analysis of variable combinations such as triplets and n-wise coverage.
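To see the reduction concretely, a tiny greedy pairwise generator (illustrative only; real tools like PICT use far more sophisticated algorithms and support constraints) can be run against the example above, one parameter with 5 values and four with 3 values:

```python
from itertools import combinations, product

# One parameter with 5 variables, four parameters with 3 variables each.
params = [list(range(5))] + [list(range(3))] * 4

# Every (parameter, value) pairing that must appear in some test.
uncovered = {
    ((i, vi), (j, vj))
    for i, j in combinations(range(len(params)), 2)
    for vi in params[i] for vj in params[j]
}

tests = []
while uncovered:
    # Greedily pick the full combination that covers the most uncovered pairs.
    best = max(
        product(*params),
        key=lambda t: sum(
            ((i, t[i]), (j, t[j])) in uncovered
            for i, j in combinations(range(len(t)), 2)
        ),
    )
    tests.append(best)
    uncovered -= {((i, best[i]), (j, best[j]))
                  for i, j in combinations(range(len(best)), 2)}

print(len(tests), "tests instead of", 5 * 3**4)
```

The greedy result lands in the high teens, in the same neighborhood as the 16 tests quoted above, versus 405 exhaustive combinations.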
One problem that is hopefully not overlooked by testers using these tools is that some combinations of variables are simply not possible. For example, in the Effects group of the Font dialog it is impossible to check the Superscript checkbox and the Subscript checkbox simultaneously. Therefore, the tester either has to manually modify the output or use a tool that allows constraints. Again, this is another situation where Microsoft’s free tool PICT excels. PICT uses a simple BASIC-like language for conditional and unconditional constraining of combinations of variables. PICT also allows weighting variables, seeding, output randomization, and negative testing.
I didn’t want this to be a PICT sales job, but alas my bias has influenced this post. So, I will conclude by pointing the readers to the Pairwise Testing website. My colleague Jacek Czerwonka has pulled together great resources on the technique of combinatorial analysis including a list of free and commercially available tools, and white papers supporting the value and practicality of this testing technique.
Originally Published Thursday, June 08, 2006
In the past few years there has been a push to use Ruby as a programming language for test automation. Ruby as a programming language has some benefits compared to other scripting languages. However, for test automation or any other serious application development the disadvantages of Ruby certainly outweigh any of the recent sensational hype promoting the language. I may be a bit biased, but I honestly can’t fathom why a tester wanting to learn a programming language would spend time learning Ruby, especially with the ease of use, availability of resources, and broad adoption of C#.
1. Ruby lacks informational resources. A search on Barnes & Noble or Amazon reveals about 50 or so books on Ruby programming. But that is barely a drop in the bucket compared to more than 400 books written about C#. These numbers certainly don’t inspire a lot of confidence in Ruby as a broadly accepted programming language in the industry at large. Web searches for available online resources also reflect this tremendous disparity. Sure, the Ruby zealots support the few websites and respond to requests for assistance. But there are a greater number of C# forums with greater numbers of registered members who frequently participate and provide solutions to questions. Additionally, you won’t find too many universities or community colleges offering courses in Ruby programming. If Ruby is so good then why are there such limited resources? The answer is because there simply isn’t the business demand, or other compelling reasons for adoption.
2. Ruby is not a high demand skill among employers. Take a look at any of the technical job sites such as Dice, Monster, etc. and you will not find a plethora of jobs asking for Ruby programming skills. For example, of the approximately 16,500 software testing jobs on Dice only 65 contain the keyword Ruby as compared to 1,668 job listings containing the keyword C#. That means there are 25 times more employers desiring C# as compared to Ruby. IT Jobs Watch in the UK has an interesting site with lots of statistics relative to software testing positions in the UK. Looking at software testing jobs C# is listed in the top 5 desired programming languages. (Ruby doesn’t even make the top 20 list anywhere on this site.) The job trends on Indeed provides a visual perspective comparing jobs with C#, Ruby, Perl, and VB.NET keywords.
3. Ruby has performance problems. Scripting languages are notoriously slower than compiled languages, but it seems that Ruby is often slower (CPU time) and requires a larger memory footprint as compared to other scripting languages. The Computer Language Shootout site provides some very interesting benchmark results for various languages. One benefit of automation is the decreased time to execute a specific test. Now, you may be thinking that it only takes a few more seconds to execute a test in Ruby as compared to Java or C#. So, let’s arbitrarily say that each automated script in Ruby takes 5 seconds longer to execute than the same automated test in C#. That may not seem like a big deal. But instead of running one or two automated tests you want to run a test library of 200,000 tests. That’s 1,000,000 extra seconds, or more than 11 additional days, needed to run the test automation written in Ruby. Now, I am sure Ruby advocates will discuss the reduced cost of development time, but time to develop an automated test is a one-time expense (this does not include sustained maintenance, which is a variable cost compounded over time for all automation libraries). Depending on your product’s shelf-life, you may need to rerun your test automation suites for the next 7 to 10 years for sustained engineering and product maintenance.
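Working out that overhead explicitly:

```python
# 200,000 automated tests, each 5 seconds slower in Ruby than in C#:
extra_seconds = 200_000 * 5
print(extra_seconds / 3_600, "extra hours")   # about 277.8 hours
print(extra_seconds / 86_400, "extra days")   # about 11.6 days
```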
4. Ruby has not been widely adopted in application development. So, you may ask why this is a weakness for testers writing test automation? The simple fact is that if the development team is programming in, say, C/C++ or Java, and the test automation is in Ruby, you probably won’t get a lot of support from the development team to help review or debug test automation. Also, it is very likely the developers may not want to install the Ruby interpreter on their Windows machine to use test automation to reproduce a defect, and will instead ask for the manual steps. Any test libraries the development team creates will require porting to Ruby, which increases the cost and effort. Since many developers are familiar with at least the basic syntax of C/C++ and Java, it is easier for them to pick up C# syntax and understand automated test code.
5. Ruby is just as cryptic as any other programming language. All programming languages use unique syntax, and users must learn a language’s syntax to code effectively. Now, I am no expert in Ruby, but let’s compare a Ruby script that launches Windows calc.exe to a C# program that does the same.
# Launch calc.exe in Ruby
run_cmd = system("calc.exe")

// Launch calc.exe in C#
using System.Diagnostics;
class Launcher { static void Main() { Process.Start("calc.exe"); } }
Obviously there are more lines of code in the C# program as compared to the Ruby script. But, considering the fact the template in Visual Studio auto-generates the framework for a console application (the primary method of writing an automated test case) the only thing I need to add to the .cs file are the ‘using System.Diagnostics’ namespace declaration, and the ‘Process.Start(“calc.exe”);’ statement. Additionally, the IntelliSense feature of the Visual Studio IDE references language elements, and even inserts the selected element into the code. Also, perhaps it is a matter of personal taste, but Process.Start() seems a lot more ‘readable’ than run_cmd.
Ruby activists boast how quickly they can teach Ruby scripting to non-programming testers. I have been teaching C# to testers for more than 3 years. I have been very successful at teaching testers with no previous programming skills to write automated GUI tests in C# that will launch a Windows application, manipulate the application, generate and send data, verify and log results, and clean up the test environment within a day.
There may be some interesting features of Ruby, but don’t get sucked in by all the fanatical propaganda. Ruby has been around for more than 10 years and hasn’t replaced any language or garnered a significant following! The simple fact is that Ruby simply doesn’t offer anything revolutionary, and thus hasn’t compelled the development community to rush to adopt or support it. All programming languages have strengths and weaknesses depending on the application. But, for test automation Ruby is not the best choice as a programming language.
In my (biased) opinion, I think C# is a much better choice, and in a later post I will outline the benefits of C# as a programming language for test automation.
Published Thursday, June 08, 2006 12:55 AM by I.M.Testy