Monday, August 10, 2009

Keyword-Driven Testing

Keyword-driven testing, or table-driven testing, is a software testing methodology within the test automation discipline that separates the design of test cases from the programming work that automates them. Because of this separation, test cases can be developed without programming knowledge, and the resulting tests can be maintained with only small updates even when the application or the testing requirements change substantially.
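For example, a simple login test might be written entirely as a table of keyword steps and data, with no programming involved (the keyword, object, and data values below are purely illustrative):

    Step  Keyword       Object            Data
    1     open_browser                    http://example.test/login
    2     enter_text    username_field    alice
    3     enter_text    password_field    s3cret
    4     click         login_button
    5     verify_text   welcome_message   Welcome, alice

A tester writes and maintains rows like these; the automation engineers implement what each keyword actually does.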

Base Requirements
There are several requirements that can be considered "base requirements" for success with keyword-driven testing. These include:
Test development and automation should be kept separate – It is very important to split test development from test automation, because the two disciplines require very different skills. Testers are not, and should not need to be, programmers; they must be skilled at defining test cases independently of the technology used to implement them. Technically skilled individuals, the automation engineers, implement the action words (keywords) that execute those test cases.
Test cases must have a clear and differentiated scope – It is important that each test case has a clearly differentiated scope and does not deviate from that scope.
The tests must be written at the right level of abstraction – Tests may be written at the higher business level, at the lower user-interface level, or at a mix of both, and the test tools should be flexible enough to support whichever level is chosen (see the sketch after this list).
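As a rough sketch of what "the right level of abstraction" can mean in practice, the same behaviour can be exposed both as low-level user-interface keywords and as a single business-level keyword built on top of them. The example below uses Python, with print statements standing in for real UI automation; all keyword and field names are hypothetical:

    # Low-level, user-interface keywords: small, generic actions.
    def enter_text(field_name, value):
        print(f"typing '{value}' into {field_name}")   # stand-in for a real UI call

    def click(button_name):
        print(f"clicking {button_name}")               # stand-in for a real UI call

    # High-level, business keyword: one step in the tester's vocabulary,
    # composed of several low-level keywords.
    def log_in(username, password):
        enter_text("username_field", username)
        enter_text("password_field", password)
        click("login_button")

    # A business-level test uses one keyword per business action...
    log_in("alice", "s3cret")
    # ...while a UI-level test could call the low-level keywords directly.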

The Framework
The implementation of the keyword-driven testing methodology depends on a framework. The framework requires data tables and keywords that are independent of the test automation tool used to execute them, together with driver script code that "drives" the application under test with that data.
In a keyword-driven test, the functionality of the application under test is documented in a table as step-by-step instructions for each test, and a driver executes those steps, as sketched below.
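A minimal sketch of such a driver, assuming the test steps are rows of (keyword, arguments) and the keywords are implemented as Python functions, might look like this (all names are hypothetical):

    # Keyword implementations (print statements stand in for real UI automation).
    def open_browser(url):
        print(f"opening {url}")

    def enter_text(field, value):
        print(f"typing '{value}' into {field}")

    def click(button):
        print(f"clicking {button}")

    # The keyword library maps the names used in the test table to code.
    KEYWORDS = {
        "open_browser": open_browser,
        "enter_text": enter_text,
        "click": click,
    }

    # Each row of the test table: a keyword name followed by its arguments.
    LOGIN_TEST = [
        ("open_browser", "http://example.test/login"),
        ("enter_text", "username_field", "alice"),
        ("enter_text", "password_field", "s3cret"),
        ("click", "login_button"),
    ]

    def run_test(steps):
        """Drive the application by executing each table row in order."""
        for keyword, *args in steps:
            KEYWORDS[keyword](*args)

    run_test(LOGIN_TEST)

The table rows stay tool-independent; only the keyword implementations and the driver depend on the automation technology.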

Methodology
The keyword-driven testing methodology divides test design into two stages:
Planning Stage
Analyzing the application and determining which objects and business-process operations need to be tested.
Deciding which keywords should provide additional functionality, in order to achieve business-level clarity and/or to maximize efficiency and maintainability.
Implementation Stage
Building a unique reference for identifying each object, sometimes known as an object repository, and ensuring that these references have clear names that follow any predetermined naming conventions.
Developing and documenting business-level keywords in function libraries. Creating function libraries involves developing customized functions for the application under test (a sketch follows this list).
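A minimal sketch of these two artifacts, assuming Python, pairs an object repository whose entries follow a naming convention with a business-level keyword in a function library that refers to objects only by those logical names (all identifiers and locators are hypothetical):

    # Object repository: maps clear, convention-following logical names
    # to the technical locators used by the automation tool.
    OBJECT_REPOSITORY = {
        "login.username_field": "id=txtUserName",
        "login.password_field": "id=txtPassword",
        "login.submit_button":  "xpath=//button[@type='submit']",
    }

    def locate(name):
        """Resolve a logical object name to its technical locator."""
        return OBJECT_REPOSITORY[name]

    # Business-level keyword from a function library: it refers to objects
    # only by their logical names (prints stand in for real UI calls).
    def log_in(username, password):
        print(f"fill {locate('login.username_field')} with {username}")
        print(f"fill {locate('login.password_field')} with {password}")
        print(f"press {locate('login.submit_button')}")

If the application's login controls change, only the repository entries need to be updated; the keyword and every test that uses it stay the same.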
This methodology requires more planning and a longer initial time investment, but it makes the test creation and maintenance stages more efficient, and the individual tests become more readable and easier to modify.

Vision for Automation
There must be a clear vision for the automation.
Having a good methodology – It is important to have a good, integrated methodology for testing and automation, and to use technology that supports the methodology, enhances flexibility, minimizes technical effort, and maximizes maintainability.
Have the right tools – Any tool that is used should be specifically intended for keyword-based testing. It should be flexible enough to permit the right mix of high- and low-level testing, it should allow testers to build keyword tests quickly and without difficulty, and it should not be overly complicated for the automation engineers.
Three "success factors for automation" – There are three critical success factors for automation that the vision should account for. They are:

Test Design
Test design is more important than the automation technology. Design is the most underestimated part of testing. It is my belief that test design is the single most important factor for automation success.
Automation Architecture
The automation architecture should address items such as:
Scope, assumptions, risks
Methods, best practices
Tools, technologies, architecture
Stakeholders, including roles and processes for input and approvals
The "right" team must also be assembled.

Test management, which is responsible for managing the test process.
Test development, which is responsible for producing the tests. Test development should include test leads, test developers, end users, subject matter experts, and business analysts.
Automation engineering, which is responsible for creating the automation scheme for automatic execution. This team includes a lead engineer and one or more automation support engineers.
Support functions, which provide methods, techniques, know-how, training, tools, and environments.
Within the team there should be a clear division of tasks and responsibilities, as well as well-defined processes for decision making and communication.

How to Measure Success
With any major undertaking, it is important to define and measure "success". There are two important areas of measurement for success – progress and quality.

Progress
You should measure test development against the test development plan. If goals are not being reached, act quickly to find the problems. Is the subject matter clear? Are stakeholders providing enough input? Is it clear what to test? Is the team right?
You should measure automation, looking at things such as the number of implemented keywords and interface definitions.
You should measure test execution, looking at things such as how many test modules have been executed and how many executed correctly.

Quality
Some of the key quality metrics include:
Coverage of system and requirements
Assessments by peers, test leads, and stakeholders (recommended)

Effectiveness
Are you finding bugs?
Are you missing bugs?
Can you find known bugs (or seeded bugs)?
After the system is released, what bugs still come up? You should consider calculating the "Defect Detection Percentage" (see the note after this list)
Dig through your bug base for additional insights
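The Defect Detection Percentage is commonly defined as the number of defects found by testing divided by the total number of defects eventually found, including those reported after release. For example, if testing found 90 defects and another 10 surfaced in production, DDP = 90 / (90 + 10) = 90%.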
