Open UI: Testing with Selenium

Selenium is a browser automation tool that allows web projects to automate repetitive tasks. As an early pioneer of Open UI, I found this capability to be incredibly useful for achieving continuous integration. I gave readers a hint of this last year in the article "Open UI - Build Process", which left readers with a list of ingredients and a plan to manage their build automation.

Now that more Siebel projects around the world are embarking on upgrading their Siebel applications to Siebel Open UI, it's about time we advance this concept into something more tangible that clients can benefit from.

Picture this... as part of the Open UI upgrade, you are required to navigate to 1000+ views in Siebel and test for WCAG defects in all 3 major browsers. That isn't so exciting.

Testing for Open UI defects requires a high level of thoroughness to ensure that your entire application is compliant, and remains compliant in the future. The move to Open UI will expose poor configuration practices that might have been passable in HI, but will break in Open UI. Open UI will also introduce defects as a side effect of the upgrade.

The simple approach to this problem is to brute force it, and assign a team of developers to navigate to each view to analyse for technical defects. But this isn't really viable as a long-term strategy; a smarter approach is to use web automation.

Web automation brings to mind images of robots that are used to scrape web sites, harvest email addresses and index content, but Siebel developers have a more important itch to scratch: Testing the Siebel UI.

The obvious use case is continuous integration testing, to ensure that new builds are thoroughly tested overnight. A more advanced crawler can be built to perform functional testing of the application's main areas, but the scope of this article is to show how you can build your own Open UI crawler that can be used to programmatically navigate to each view, and optionally validate your application.

There's no shortage of tools in the market available to perform this sort of work, but if we narrow our criteria to open source web automation solutions, the Selenium WebDriver makes a pretty good choice, as it's also set to become a W3C recommendation. Selenium works with all the major browsers, is compatible with your favorite programming language, and it is also free.

This article will provide you with a sample application that implements Open UI automation with Ruby, but you can extract the lessons learnt from this article, and implement them in your language of choice.

I've chosen Ruby as my language because it has solid support for web automation. Ruby has gems that allow the developer to easily parse the DOM, make selections using CSS or XPath syntax, deal with dialog boxes, and take screenshots of problem views. Combining this with Watir, a Selenium wrapper for Ruby, allows the developer to build the automation quickly in a lightweight language.

The solution for your project may be different. If you require enterprise support, headless servers, or if you plainly prefer to stick to your language of choice because of your available skill set, then the right tool for your circumstances will be different. The most important ingredient here is Selenium, or in this case Watir, which is a Ruby flavor of Selenium.


This is the browser automation API that allows you to control your browser programmatically.

"Selenium automates browsers. That's it! What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated as well."

Browser Support

Chrome, Firefox, IE...

Remote control drivers

JavaScript, Java, C#, Ruby, Perl, PHP...

Open UI

The crawler can be run against a thin client, or against a local thick client running Open UI.

Ruby Implementation

Knowing Ruby isn't a prerequisite to understanding this concept. Ruby is a pretty readable language, and I've added a good dose of comments to explain what the code is doing, so those from non-Ruby backgrounds can follow.

#import required libraries
require 'rubygems'
require 'watir-webdriver'
require 'cgi'

#Parameterise the URL that will be used to navigate to the Open UI application
#This should really be configured with a server URL for real testing,
#but I'm using a local URL for this example
sURL = "http://localhost/start.swe?"
sPass = nil
sUser = nil

#Open a new Chrome browser session
browser = Watir::Browser.new :chrome

#navigate to the URL defined above
browser.goto sURL

#maximise the window so we can see the entire view
browser.window.maximize

#This block simulates the user login process for a thin client connection,
#else it is bypassed for the thick client
if sUser != nil && sPass != nil
     #fill in username in the user name field
     username = browser.text_field(:name => "SWEUserName")
     username.set sUser if username.exists?
     #fill in password in the password field
     password = browser.text_field(:name => "SWEPassword")
     password.set sPass if password.exists?
     sleep 1
     #click the login button
     browser.button(:value => "Login").click
end
sleep 5

Congratulations! At this point we have just created a simple crawler that has logged into an Open UI application.

Next we want to tell it how to navigate the application, by going to the site map, reading all the views, and parsing the JS links embedded in the view elements. This is done pretty easily in Ruby.

     #use CSS to locate the sitemap icon, and click it
     browser.element(:css => "li[name=SiteMap] > img").click

     #use a CSS selector to read all the views in the sitemap into a collection
     #these links also contain an onclick attr that allows us to emulate a view navigation
     links = browser.elements(:css => "span[class='viewName'] > a")

Finally, we loop through every link, execute the JS code to perform the navigation, and optionally perform any automation.

     links.each do |link|
          #optionally extract the view parameters from the embedded JS link
          params = CGI.parse(link.attribute_value("onclick"))
          sleep 5
          #Goto view
          browser.execute_script( link.attribute_value("onclick") )
          #optionally perform any validation of the current view here
     end

That's as simple as it needs to be.

To fire it up with a dedicated client, you'll first need to open up an existing session to start the local web server. When the above code is run, it will launch a new browser connecting to the existing session.

For thin client connections, it is not necessary to pre-launch a session; the above code will instantiate a new browser session and log in normally.

A crawler like the program above can navigate the application, verify each view, and flag views that have issues.

Building a simple crawler is easy, and is something that you can give to an energetic graduate to perform in a day. However, if you require a crawler that can be a workhorse for your continuous integration strategy, then it needs to be a little more robust, and more scalable, than the simple example above. If your needs warrant a more specialized crawler that can check for and enforce WCAG compliance in your application, then you'll ideally need a plugin system that allows you to easily add new validators. If you want to go a little further and build in functional testing capabilities, then you could build an API bridge to facilitate a DSL for writing more readable test cases.
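To make the plugin idea concrete, here is a minimal JavaScript sketch of a validator registry that a crawler could call on each view. All names here (createValidatorRegistry, register, run) are hypothetical, and the WCAG-style check is a toy example, not a real compliance rule:

```javascript
// Minimal sketch of a pluggable validator registry for a view crawler.
// All names here are illustrative assumptions, not a real crawler API.
function createValidatorRegistry() {
  var validators = [];
  return {
    // register a named check that inspects a view and returns a list of issues
    register: function (name, check) {
      validators.push({ name: name, check: check });
    },
    // run every registered validator against a view object,
    // collecting {validator, issue} records for the report
    run: function (view) {
      var issues = [];
      validators.forEach(function (v) {
        v.check(view).forEach(function (issue) {
          issues.push({ validator: v.name, issue: issue });
        });
      });
      return issues;
    }
  };
}

// Example plugin: a toy WCAG-style check that flags images without alt text
var registry = createValidatorRegistry();
registry.register("img-alt", function (view) {
  return (view.images || [])
    .filter(function (img) { return !img.alt; })
    .map(function (img) { return "image missing alt text: " + img.src; });
});

var report = registry.run({
  images: [{ src: "logo.png", alt: "" }, { src: "icon.png", alt: "Icon" }]
});
```

Adding a new validator is then just another register call, which is what makes the crawler scale from a simple view walker into a CI workhorse.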

Open UI + Selenium opens up these exciting opportunities. For my client, having a nightly build process, continuous integration, and WCAG reporting ensures they have a strictly standards-compliant UI and a more stable application for every deploy.

This article provides the necessary ingredients, and a simple recipe for other Siebel customers to follow the same path.

Further Reading

Selenium Web driver
Continuous Integration

More on Open UI

Serious Open UI developers should be on the watch for the upcoming Open UI book from some very distinguished authors.

Siebel Open UI Developer's Handbook

Open UI: Grunt and JSHint

Previously we looked at how Grunt can improve your Open UI development experience by automatically reloading the browser every time a source file is changed. This time we take another step, and configure an automated code check using Grunt and JSHint.

Think of how Siebel Tools prevents you from compiling, or checking in poorly constructed eScript. JSHint will do the same (if not better) job of making sure your client side code is free from language errors, and potential problems.

Starting from where we left off in Open UI: Grunt and Livereload, we should have an operational Grunt job that reloads our browser. To add JSHint to the mix, we need to modify the package.json file slightly, to include one new line that installs this new dependency.

{
  "name": "SiebelGrunt",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-watch": "^0.6.1",
    "grunt-contrib-jshint": "^0.10.0"
  }
}

With your command prompt at the directory with this package.json file, type the following command to pull down this module

npm install

Now we need to configure the Grunt build process to register this new task, and configure it so it only checks changed files.

module.exports = function(grunt) {
     var sJSPath = '../PUBLIC/enu/*/SCRIPTS/**/*.js';

     grunt.initConfig({
          pkg: grunt.file.readJSON('package.json'),
          jshint: {
               scripts: [sJSPath]
          },
          watch: {
               scripts: {
                    files: [sJSPath],
                    tasks: ['jshint'],
                    options: {
                         nospawn: true,
                         livereload: true
                    }
               }
          }
     });

     grunt.loadNpmTasks('grunt-contrib-jshint');
     grunt.loadNpmTasks('grunt-contrib-watch');

     grunt.event.on('watch', function(action, filepath){
          grunt.config(['jshint', 'scripts'], filepath);
     });

     grunt.registerTask('default', ['watch']);
};

To explain what I've done: this newly added block registers JSHint, and uses the file path that we defined previously.
          jshint: {
               scripts: [sJSPath]

Next, I register the 'jshint' task with the 'watch' task, which will run JSHint every time a file is changed.
                    tasks: ['jshint'],

The only problem with this configuration is that JSHint has no idea which file was modified under the watch task, so JSHint will run over all 1500+ files under Siebel scripts and lint them all. If this is run without a filter, your command prompt will hang, and after a while it will come back with a window similar to this.

JSHint reports 15K-plus errors across all the files under the Siebel scripts folder, but it's not all bad: some of the code under Siebel/scripts is 3rd party, which can raise some errors, and the rest will have been through Oracle's own lint process and then minified before delivery to customers. We should only be concerned with the files that we have control over.
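One way to stay focused on the files you own is to filter paths before handing them to the linter. This small sketch assumes a typical Open UI folder convention (the custom scripts folder and the .min.js suffix are assumptions; adjust them to your environment):

```javascript
// Keep only files under the custom scripts folder, skipping 3rd-party
// and minified Oracle-delivered files. The path convention is an
// assumption based on a typical Open UI install.
function isCustomScript(filepath) {
  var p = filepath.replace(/\\/g, "/").toLowerCase();
  return p.indexOf("/scripts/siebel/custom/") !== -1 &&
         p.indexOf(".min.js") === -1;
}

var files = [
  "PUBLIC/enu/23030/SCRIPTS/siebel/custom/postload.js",
  "PUBLIC/enu/23030/SCRIPTS/3rdParty/jquery.min.js",
  "PUBLIC/enu/23030/SCRIPTS/siebel/phyrenderer.js"
];

// Only the custom file survives the filter and would be linted
var lintTargets = files.filter(isCustomScript);
```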

To make JSHint validate a single file, I added the following block.
     grunt.event.on('watch', function(action, filepath){
          grunt.config(['jshint', 'scripts'], filepath);

This configures a listener on the watch event, and passes in the name of the modified file to JSHint. Now we are ready to roll. Fire up the Grunt process:

grunt
Switch back to your code editor, open up postload.js, and type in everybody's favorite command, "asdf".

Watch will automatically fire JSHint, and report back on the errors, with the line number and a description of the problem. Note that we lost our purple background because our script failed to parse.

Alright, now let's fix that bug.

We see that JSHint has reported "1 file lint free", and our browser has automatically reloaded in the background with our purple background again. 

JSHint won't make your Siebel developers better web developers. It will only ensure that any code written follows a consistent syntactic style, and that critical language errors are caught during build and unit test, but it will provide you with the coordinates of any errors that it picks up.

JSHint is a great safety net, but remember its recommendations can be ignored. To enforce this on a broader scale, JSHint can be configured to run against the entire code base on a regular basis, and its results can be enforced by a QA process.
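As a sketch of that broader-scale approach, a separate Grunt task could lint the entire custom code base on demand. The task name and path pattern below are assumptions; this would live alongside the watch configuration shown earlier:

```javascript
// Hypothetical Gruntfile fragment: a 'lint-all' task that runs JSHint
// over every custom script, e.g. nightly or as part of a QA gate.
module.exports = function (grunt) {
  var sCustomPath = '../PUBLIC/enu/*/SCRIPTS/siebel/custom/**/*.js';

  grunt.initConfig({
    jshint: {
      all: [sCustomPath]   // lint the entire custom code base
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.registerTask('lint-all', ['jshint:all']);
};
```

Running `grunt lint-all` would then produce a full report that a QA process can act on.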

In the case above, a small typo caused 12 errors, resulting in a missing background color; in other scenarios, these kinds of errors could cause more serious issues, which illustrates the importance of a code quality tool like JSHint.

Hopefully this article has piqued your interest in client-side build automation. Are you using similar tools on your project, or have you adapted a build process that you would like to share with the Siebel community?

Send in your comments.

Open UI: Grunt with Livereload

With Siebel Open UI development, we naturally spend the majority of our time working in the browser. Part of that requires clearing the browser cache and hitting F5 to reload the browser.

Sometimes we reload the browser and think afterwards, did I really clear the cache that time? Maybe not... so we clear the cache and reload the browser again. Seconds later, we spot a typo in our code, and the endless cycle continues. Wouldn't it be good if we could focus on writing the code, and let something else clear the cache and reload the browser?

This is where tools like Grunt can help.

1. Download No Cache

"No Cache" is a Chrome extension that I wrote to automatically clear your browser's cache. It is available for free on the Chrome store. There are one-click solutions out there, but I prefer to have an automatic background process take care of this.

1. Right click on the No Cache icon, and choose "Options"
2. On the "Cache" tab select "Enable for filtered requests"
3. On the "Filters" tab select the input box, type in "start.swe" and click the + icon

This tells No Cache to clear the cache for every request Siebel makes, and it should work on the local or thin client. If you use another browser and have your own favorite clear-cache plug-in, then that will work as well.

2. Install Grunt

Grunt is a JavaScript task runner that we can configure to reload our browser, but first we need to download and install Node.js from this website

It's not necessary to know what Node.js does; just know that it provides us with the environment to run Grunt. Once you've installed Node.js, open a command prompt and type the following line to install the Grunt command line globally.

npm install -g grunt-cli

Next we need to decide where to put our Grunt configuration. I like to put it under the Siebel/Client folder, because I back up the PUBLIC folder, and don't want copies of Grunt and its modules included in this backup.

Open a command prompt, cd to your Client folder, and follow the next set of instructions to install and configure grunt on your local machine.

mkdir grunt
cd grunt

Create a new file in the same folder called package.json, with the following contents

{
  "name": "SiebelGrunt",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-watch": "^0.6.1"
  }
}

Back at the command prompt, type the following command to download the required dependencies defined in the file above.

npm install

Next create a new file called Gruntfile.js, with the following contents

module.exports = function(grunt) {
     var sJSPath = '../PUBLIC/enu/*/SCRIPTS/**/*.js';
     grunt.initConfig({
          pkg: grunt.file.readJSON('package.json'),
          watch: {
               scripts: {
                    files: [sJSPath],
                    options: {
                         livereload: true
                    }
               }
          }
     });

     grunt.loadNpmTasks('grunt-contrib-watch');

     grunt.registerTask('default', ['watch']);
};

This file is important, so let's run through it line by line.

>     var sJSPath = '../PUBLIC/enu/*/SCRIPTS/siebel/custom/**/*.js';

This path specifies the folder and file pattern that Grunt will watch for changes. Essentially we want to watch for any changes to JS files in the Siebel script folders

> watch: {

This line defines a "watch" task. There are other tasks that can be run, e.g. code linting/minification, but I've kept it simple for this example.

> files: [sJSPath],

Configures the files to watch, using the path statement above.

> livereload: true

This option will spawn a background process that will cause your browser to refresh automatically. To integrate our browser with this background process, there are 3 options, as described here.

The browser extension is the most suitable method for development, so go ahead and download it with these instructions.

To start it all up, go back to the command prompt and simply type

grunt

To test it out, go to SCRIPTS, open up any JS file, like "postload.js", modify it as below, and watch your browser reload magically.

I've presented the bare minimum required to get up and running with Grunt, but once you have that set up, you will no doubt want to look into what other things Grunt can do for you.

A good place to start is..

Siebel cScript


eScript is Siebel's de-facto scripting language, but in an alternate universe, Siebel could have called its scripting language cScript, in reference to its C heritage. eScript is based on the ECMAScript standard (upon which JavaScript is also based), and shares the same C-family syntax, but it also has uncanny similarities to C that are unparalleled by other ECMAScript derivatives.

Here are 5 good reasons, why Siebel could have called its scripting language cScript instead of eScript.

1. Compiled Code

eScript code is compiled, and runs outside of a browser, just like C.

2. Pre-processors

Pre-processor directives tell the compiler to perform specific actions before the code is actually compiled.

A typical C program is constructed like so:

#include <stdio.h>
int main(void)
{
    printf("Hello World\n");
    return 0;
}

#include is an example of a pre-processor directive. This tells the compiler to import the standard IO library and make it available to the program.

Veteran Siebel programmers will also recognise that the same syntax is used in eScript for EAI purposes.

#include "eaisiebel.js"

This pre-processor isn't explicitly documented under the main eScript reference, but it is hidden away under the following section.

Siebel eScript Language Reference > Compilation Error Messages > Preprocessing Error Messages

It is also mentioned in

Business Processes and Rules: Siebel Enterprise Application Integration > Data Mapping Using Scripts > EAI Data Transformation

A small caveat: the #include directive can be written in the following two styles in C.

#include <eaisiebel.js>
#include "eaisiebel.js"

The <> brackets tell the compiler to look in a predetermined directory, while the "" (two double quotes) tell the compiler to look in the same directory as the C program. In Siebel, however, when the "" form is used, Siebel will look in Tools\Scripts\ for the include file. (Siebel v8+)

3. Libraries

In the above code sample under Pre-processors, we saw the usage of a function called "printf". printf outputs a formatted string to the standard console. This isn't part of the core C language, but was made available via the include pre-processor directive, which imports this command into the program from the standard libraries.

Standard C libraries such as stdio.h and stdlib.h are also available in Siebel, but there's no need to include them; Siebel has conveniently wrapped these libraries and exposed them through the Clib global object.

Clib.rsprintf – returns a formatted string to the supplied variable.
Clib.fscanf – reads data from a file stream.

The complete Clib object reference can be found here

4. Reference Operators

The following example shows how reference operators are used in a C program.

#include <stdio.h>

void sub(int* main_int)
{
    printf("Address sub:\t %d\n", &*main_int);
    *main_int = 10;
}

int main()
{
    int myInt = 5;
    printf("Address main:\t %d\n", &myInt);
    printf("Before Value:\t %d\n", myInt);
    sub(&myInt);
    printf("After Value:\t %d\n", myInt);
    return 0;
}

Address main:    2293340
Before Value:    5
Address sub:     2293340
After Value:     10

In the above code, an int is given the value 5, and its address in memory is provided to a sub function using a reference operator.


The sub function takes this reference, goes to its address in memory, and modifies the original value.

We can achieve the same functionality in eScript with the following code.

function sub(&main_int)
{
    main_int = 10;
}

function main(Inputs, Outputs)
{
    var myInt = 5;
    Outputs.SetProperty("Before Value", myInt);
    sub(myInt);
    Outputs.SetProperty("After Value", myInt);
}

Before Value:    5
After Value:     10

Note: The reference operator in this case is declared in the sub function's input argument. This tells Siebel to treat &main_int as a pointer to the original value, instead of a copy.

5. Memory pointers

Like all relatives, eScript and C share common ancestors. In this case, the commonality is in the sharing of memory pointers.

The following example shows how an invoked C program can modify data inside eScript. This example is sourced from the bookshelf document on SElib

In summary, a couple of buffer objects are created.

    var P_CHURN_SCORE = new Buffer(8);
    var R_CHURN_SCORE = new Buffer(8);

SElib is used to call a DLL, passing in the reference to the buffer

SElib.dynamicLink("jddll.dll", "score", CDECL,

The following C snippet de-references the pointers, and modifies the data at the memory addresses that were allocated in the above eScript code.

_declspec(dllexport) int __cdecl
score (…
    double *P_CHURN_SCORE,
    double *R_CHURN_SCORE

Clib vs eScript

There is an obvious overlap between Clib and eScript functions, which causes some confusion over which is better to use. The recommendation from Siebel is to use the eScript equivalent over Clib.

The Clib library is useful for OS-level operations, and those scenarios where you need to operate at the memory level, but such usage is usually confined to edge cases; otherwise eScript provides a safer and more familiar playground for Siebel professionals.


Siebel designed eScript as a hybrid of ECMAScript and C, providing customers with the (not so well documented) ability to extend eScript beyond its natural abilities. Everyday configurators may never need to touch the Clib libraries, use pre-processor instructions, utilize reference operators, or even call custom DLLs, but understanding these advanced features, and recognizing the C heritage, provides insights for experienced developers to maximize and extend the product beyond its out-of-the-box capabilities.

Finally, as for the great idea of renaming eScript, unfortunately the cScript moniker is already taken (most notably by cscript.exe from MS), but you could also argue the same for 'eScript', which is claimed by these other software companies.

eScript - Eclipse scripting language

EScript - C++ Scripting language

Escript – Python programming tool

escript – Erlang scripting language

eScript.api - This is the name of an Adobe plugin

As a consolation, we could designate eScripters who primarily use Clib as cScripters. I think this might catch on =)

Anonymous functions in Siebel

I was on a project that was performing an upgrade from Siebel 7 to Siebel 8, and discovered that parts of the application were implemented using undocumented eScript features. The code was written by graduate web developers, who used their trade to write ECMA-compliant code, but unfortunately some of these features were unsupported by Siebel 8.

One such feature was the usage of anonymous functions, which has the following benefits.

* Provides locally scoped variables
* Can be used as a call back function
* Allows better code readability
* Avoids extra named reference on the current scope

Unfortunately anonymous functions, and even function expressions, cause a reference error in Siebel 8.

> ReferenceError:':-$753' is not defined

Siebel customers who need a viable substitute have the option of using the Function constructor, which is appropriately documented by Oracle here.

Let’s look at a real world scenario to see where this can come in handy.

During development, a Siebel developer might come across the need to enumerate a PropertySet. The following code shows a basic implementation of the enumerator function.

var propName = oPS.GetFirstProperty(),
    propValue = "";
while (propName != "") {
    propValue = oPS.GetProperty(propName);

    //###  Do something with propName and propValue ###

    propName = oPS.GetNextProperty();
}

I propose that we create a utility for Siebel developers to easily loop over the properties of a single PropertySet level, without the above verbosity, using the following syntax.

1. [<PropertySet> ].psEach( <Callback> )
2. util.psEach( <PropertySet>, <Callback> )

The implementation of option 1 is outside the scope of this article, but it can be easily built by customers who already have a familiar construct in their Library.

However, option 2 should be a little more familiar to customers who use script libraries in Siebel. util is the namespace, or the local reference to a namespace from an eScript library. psEach is the logical method that performs the enumeration.

The following diagram shows how we should use the Function object in this context.

1. 'Class_Util_PSNodeEach' is the literal name for util.psEach.
2. The PropertySet that we wish to enumerate.
3. A function object that will receive the Name, Value pair.
4. This function will receive two inputs, n and v. This is optional, as the arguments object can be used just as easily.

The following screenshot of The Harness shows the definition of ‘Class_Util_PSNodeEach’, the code that was used to set up the test case, and the expected results.

Line 15 above shows where the magic happens; this API essentially replaces 9 lines of code (times the number of times this construct is used in your application) with 1 line.

Just a friendly reminder that this code has been simplified for illustration purposes; it is not production ready, nor is it optimized to be used in the real world.

> new Function("n,v", "n + ' = ' + v");

The syntax above creates a new function on the fly, which expects inputs of "n" and "v". It is invoked by line 5 in the screenshot above, which prints the result using a library method that outputs it back to the UI.

This allows us to hide the clunky looping construct that is normally required to enumerate a PropertySet in eScript, and expose a much better API through psEach.
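To illustrate the shape of option 2 outside of Siebel, here is a plain JavaScript sketch. util and psEach are the names proposed above, while MockPS is an assumed stand-in for a Siebel PropertySet (the real GetFirstProperty/GetNextProperty/GetProperty API is only mocked here):

```javascript
// Mock of the PropertySet enumeration API, for illustration only.
function MockPS(props) { this.props = props; }
MockPS.prototype.GetFirstProperty = function () {
  this.keys = Object.keys(this.props);
  this.i = 0;
  return this.keys.length ? this.keys[this.i] : "";
};
MockPS.prototype.GetNextProperty = function () {
  this.i += 1;
  return this.i < this.keys.length ? this.keys[this.i] : "";
};
MockPS.prototype.GetProperty = function (name) { return this.props[name]; };

// util.psEach hides the while-loop construct behind a callback API
var util = {
  psEach: function (ps, callback) {
    var name = ps.GetFirstProperty();
    while (name !== "") {
      callback(name, ps.GetProperty(name));
      name = ps.GetNextProperty();
    }
  }
};

// Usage: collect "name = value" strings from the property set
var out = [];
util.psEach(new MockPS({ First: "1", Second: "2" }), function (n, v) {
  out.push(n + " = " + v);
});
```

The caller supplies only the callback; the verbose enumeration loop lives in one place.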

Are there any side effects?

* Code written inside the Function constructor is supplied as a string, which has to be properly escaped

* The function constructor will run in the scope of its instantiated instance. This is normally the current Siebel event handler or Business Service, or the code library where psEach lives, but it could also be the instance of any object created by the new operator. What this all means is that the callback function won't inherit the local scope of your object, and will not have access to your local function's variables unless they are passed in

* These functions won’t appear in your Siebel eScript object explorer. This could also be an advantage.

In the context of the problem that we are trying to solve, the above list doesn't apply, but it should still be taken into consideration if you want to utilize this elsewhere in your application; please check with your local eScript architect if you are unsure of the impacts.
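The scope caveat above can be demonstrated in plain JavaScript, where the Function constructor behaves the same way (the function names here are illustrative):

```javascript
// Demonstrates the scope caveat: a Function-constructed callback does
// not close over local variables; it only sees what is passed in
// (or what is global).
function makeCallbacks() {
  var local = "secret";
  var declared = function () { return typeof local; };     // closes over 'local'
  var constructed = new Function("return typeof local;");  // does not
  return { declared: declared(), constructed: constructed() };
}

var scopes = makeCallbacks();
// scopes.declared is "string"; scopes.constructed is "undefined"
```

This is why any values the callback needs must be passed in as arguments, as psEach does with n and v.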

Is there any speed difference between the options? Once the function is parsed, it will perform just as well as a function declaration. The following test case shows that, with our setup above, there is no discernible speed advantage to either method.

The Function constructor is possibly one of the most underused features of eScript, but when used in the right way, it can provide your project with the capability to organize, and reduce the amount of code that your developers have to write, test, and maintain. 

Further Reading Topics

Function Constructor JavaScript
Immediately Invoked Function Expressions (IIFE) or
Self Executing Anonymous Functions (SEAF).

The Test

Every once in a while, a developer decides to make a fix without unit testing, and breaks the application. There are a variety of reasons why this occurs: it could be that the developer got too confident, maybe it was hard to set up the test case, it could be due to inexperience, or it could be due to pressure to turn the defect around and check it in.

It could also be less sinister: maybe unit testing was performed, but the fix had other unintended impacts, or the unit test was not creative enough to produce errors. Worse still, the defect may go unnoticed until it is released into the wild.

Unfortunately it happens, but can it be prevented?

Imagine your Siebel developers could run a suite of automated test cases to verify the sanity of the core application just before check-in. This would provide your project with a good measure of quality assurance that the application isn't broken. How good the measure is depends on your test coverage.

Testing can be exhaustive, and regression testing the application can take hours. To make things more manageable, we can split it into two categories.

1. Core application
2. Business functionality

Category 2 can be covered by more comprehensive regression test methods, which are done post check-in. Sometimes this is too late. The concept of 'The Test' is to focus on category 1: testing the core application, to verify that what's being checked in doesn't cause widespread damage to the application in the next migration.

In the past, I've covered this topic in the following articles

SiebUnit - an xUnit implementation

SiebUnit - Build Inventory

I've had a lot of contact from various Siebel customers about the above articles, confirming that the tool meets a real-world need.

"The Test" is not as ambitious; its purpose is to be simple, effective, and a lot easier to implement, but it's not declarative. Writing code to test code, and maintaining that extra code, isn't a deal breaker; what's more important is that your developers have the confidence to make core changes, and your application is protected from serious defects.

Like SiebUnit, the main requirement of 'The Test' is to have testable interfaces. Testable interfaces are APIs that can accept an input and produce an output, without any reliance on Siebel's application context, and are predictable. Ideally they should also have as few dependencies as possible.

In Siebel, these API’s take the following forms.

Business Service
Script Library
Browser/OUI Script

To test the above interfaces, I propose we create a Test library using eScript (for testing server-side APIs); it could sit inside your existing script library if you have one.

There should be an external method called "The Test". This method could instantiate a new Test class with a main entry point called "run", which would run groups of test cases; each of these groups would contain 1 or more test cases. To facilitate the writing of clean test cases, I suggest we enforce a formal structure.

The following code snippet shows an implementation of the above concept in Siebel.

1. Calls a script library function called parseINI that will be tested. The util namespace is used to invoke parseINI, which returns an object representation of a configuration file.
2. assert is defined as a class function.
3. The purpose of 'assert' is to determine if a test case has passed or not. The first argument passes the evaluated condition to assert.
4. The Output PropertySet is passed to assert for logging the result.
5. A description is provided to identify the test case.
6. Test case helpers to minimize coding.

Examples of helpers

* Create a common complex hierarchy for PS manipulation
* Invoke Siebel APIs without setting up PropertySets
* Invoke WF without the normal setup
* Create a data randomizer to facilitate more creative test cases.

The good news is that with a bit of organization, we can reduce the amount of code that a developer has to write to test your APIs. I claim that these test cases are readable, are easily maintained, and require no effort to pick up, other than the inertia of changing one's habits.

Now that we have a proposed test case construct, we need the ability to report on the test results. Step 4 states this at a high level, but the report can be as simple as name-value pairs on an Output PropertySet, or as complex as you need. The report should, at the very least, display the number of test cases that have passed and failed. This will allow the developer to identify where the problem is, fix the issue, and re-run The Test to ensure that all expected test cases pass.
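The structure described above can be sketched in plain JavaScript. The names (TestRunner, addGroup, assert, run) mirror the proposal, and the Output PropertySet is reduced to a plain results object; none of this is the real Siebel implementation:

```javascript
// Minimal sketch of "The Test": a runner that executes groups of
// test cases and tallies pass/fail counts for a report.
function TestRunner() {
  this.groups = [];
  this.results = { passed: 0, failed: 0, messages: [] };
}
TestRunner.prototype.addGroup = function (name, cases) {
  this.groups.push({ name: name, cases: cases });
};
TestRunner.prototype.assert = function (condition, description) {
  if (condition) {
    this.results.passed += 1;
  } else {
    this.results.failed += 1;
    this.results.messages.push("FAIL: " + description);
  }
};
TestRunner.prototype.run = function () {
  var self = this;
  this.groups.forEach(function (group) {
    group.cases.forEach(function (testCase) { testCase(self); });
  });
  return this.results;
};

// Usage: one group with two test cases against a trivial API
var t = new TestRunner();
t.addGroup("string utils", [
  function (t) { t.assert("abc".toUpperCase() === "ABC", "toUpperCase works"); },
  function (t) { t.assert("abc".length === 3, "length works"); }
]);
var report = t.run();
```

The report object is the simple pass/fail summary described above; a real implementation would write the same information onto an Output PropertySet.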

Establishing your Test library is a great first step, but it alone is not enough; the project has to foster a culture of writing test cases among developers, and factor the effort into build estimates. The project also has to identify what is testable, and augment what is not. A script library is a good example of a testable interface; members of the audience who have the ABS framework on their projects should be familiar with "The Test".

It takes effort to write the test cases, as well as to maintain them going forward. To be conservative, your current unit testing estimates should be doubled when implementing automated test cases. This may sound a little daunting, so let's take a look at a little example that demonstrates the ROI.

Testing an email address validator

You have written a library function to test the validity of emails. The manual method of testing this might be to launch the UI and navigate to a view, where you repeatedly enter a series of bad and good emails to test the string. Let's say that takes you 5 minutes to perform, and writing automated test cases for the same scenario takes 10 minutes. Now imagine a defect was raised against your code: you would need to fix the defect, and spend another five minutes re-validating the function to ensure you haven't broken something else. On the other hand, your automated test cases can be re-run in an instant.
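To make the ROI concrete, here is a sketch of the kind of automated check described above. The validator and its regex are illustrative stand-ins for the library function under test, not production-grade email validation:

```javascript
// A deliberately simple email validator, standing in for the library
// function under test. Real validation needs a much stricter approach.
function isValidEmail(s) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);
}

// The automated test cases: re-runnable in an instant after any fix
var good = ["user@example.com", "a.b@test.co"];
var bad  = ["", "no-at-sign", "two@@example.com", "trailing@dot."];

var allGoodPass = good.every(isValidEmail);
var allBadFail  = bad.every(function (s) { return !isValidEmail(s); });
```

The five minutes of manual UI navigation collapses into re-running these two checks after every change.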

The initial effort to write the test cases will be paid back in spades. Your application will require less re-development and suffer less downtime as a result, your developers will have the confidence to make core changes, automated unit tests can serve as your unit test documentation, your application will be more robust, and my personal favorite: developers can write the unit test once, and run it multiple times.

Once you have the right framework in place, and given enough experience (1-2 weeks), writing test cases actually saves more time than not writing them.

The Test is simple to build, and can be taken as a quick win towards the larger goal of ensuring a better quality product. However, be warned: once you start writing automated test cases, you won't want to go back to manual testing.