Tag Archives: testing

VodQA Gurgaon – Selenium for Beginners

We at ThoughtWorks Gurgaon conducted the 4th edition of VodQA on 19th October 2013.

There were 160+ external registrations, 50 of whom participated in the event. 20 volunteers supported the event brilliantly. There was a lot of buzz and the office was full of energy during the event. The participants came from varied experience levels.

The Motivation:

With the increasing popularity of VodQA in the NCR region, QAs of different skillsets and experience levels started to participate. There was popular demand for workshops on automated testing, and this VodQA was an attempt to empower manual QAs by introducing them to Selenium and the basics of automated testing.

The Planning:

We divided the event into 2 parts: the first part aimed to introduce participants to the basics of Selenium and familiarize them with XPaths. The second part aimed to create automation frameworks using the page-object model.

In order to achieve these objectives we created a small web app with a registration page and a listing page.

Participants were given the machine setup instructions over email, and SMS reminders were sent to them on the eve of the event.

The Workshop:

Participants started to join us early in the morning and the registration desk was busy till 10:15 am. We started the event with an icebreaker session where everyone introduced themselves and shared their motivation and dream holiday destination. Setting up machines and preparing them for the workshop followed.

Lokesh and Raman kicked off the event with a presentation on Selenium and its history: how it has evolved and how it was adopted in the testing world. They also explained the basics of the DOM structure and element identification using CSS selectors and XPaths.
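Locator syntax can even be tried outside a browser. The sketch below uses Python's standard-library ElementTree, which supports a small subset of XPath; the form markup and ids are made up for illustration, not taken from the workshop app:

```python
import xml.etree.ElementTree as ET

# Made-up fragment standing in for a registration form's DOM.
form = ET.fromstring("""
<form>
  <input id="username" type="text"/>
  <input id="password" type="password"/>
  <button id="register">Register</button>
</form>
""")

# XPath-style lookup, analogous to Selenium's By.XPATH locator strategy.
field = form.find(".//input[@id='username']")
print(field.get("type"))  # text
```

Against a live page, the equivalent Selenium call would be `driver.find_element(By.XPATH, "//input[@id='username']")`.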

Participants wrote a test to register a new user and view them on the listing page. There were lots of interesting questions on drivers, JUnit assertions and XPaths. It was very encouraging to see so many questions being asked about the fundamentals of the various tools.

In the second half of the event we started modeling the test code and slowly evolving the automation framework. As a group we realized the importance of frameworks; principles of reusability and abstraction resonated throughout the session. We created separate page classes for the registration and listing pages, abstracted the driver into a base page, and refactored the tests.
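A minimal sketch of the page-object idea, with a stand-in driver so it runs without a browser; the class and method names here are illustrative, not the workshop's actual code:

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver, so the sketch runs without a browser."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text


class BasePage:
    """Holds the driver, so individual pages contain no setup code."""
    def __init__(self, driver):
        self.driver = driver


class RegistrationPage(BasePage):
    def register(self, name):
        self.driver.type_into("name", name)  # fill the registration form
        return ListingPage(self.driver)      # registering navigates to the listing


class ListingPage(BasePage):
    def shows_user(self, name):
        return self.driver.fields.get("name") == name


driver = FakeDriver()
listing = RegistrationPage(driver).register("Asha")
print(listing.shows_user("Asha"))  # True
```

Tests then talk only to page methods; swapping the fake for a real WebDriver would not change the test code.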

We concluded the workshop with a note on data parameterization and left the concept for the participants to explore further.
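Data parameterization, in its simplest form, means running one test body over a table of inputs. A minimal sketch (the validation rule below is invented purely for illustration):

```python
# Each tuple is one parameterized case: (username, expected validity).
cases = [
    ("asha", True),
    ("", False),         # empty names should be rejected
    ("a" * 300, False),  # overly long names should be rejected
]

def is_valid_username(name):
    # Hypothetical rule standing in for the application under test.
    return 0 < len(name) <= 50

# One test body, driven by the data table above.
for name, expected in cases:
    assert is_valid_username(name) == expected
print("all cases passed")
```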


Feedback and Next steps:

The overall feedback was very encouraging. Those who were new to Selenium were delighted. There was some constructive feedback on the seating arrangements. Overall it was a very successful event, and as organizers we are excited to conduct more such events in the near future.

The focus of the next session will be on build tools, continuous integration and BDD.

The VodQA Team:



Posted by on October 24, 2013 in Uncategorized



Using All-Pairs wisely- Independent and Dependent Variables

A Little Background on All-Pairs

All-pairs is a tool written by James Bach that generates an optimized set of test cases when we are dealing with data-intensive scenarios.

The problem statement:

Let’s take a simple example where we have 3 radio-button groups in a form, and each group has 3 choices.

Radio Button Option 1:

  • Option A
  • Option B
  • Option C

Radio Button Option 2:

  • Option X
  • Option Y
  • Option Z

Radio Button Option 3:

  • Option M
  • Option N
  • Option O

The possible number of test cases for these 3 radio options will be 3*3*3 =27.

It is very time-consuming and tedious to test all 27 cases. A tester in this situation will typically select 4-5 combinations at random and test those. But there is a randomness in this process that the tester cannot justify: why was one particular combination given preference while the others were left out?

The All-pairs tool is handy in this situation.

All-pairs’ philosophy:

The philosophy of All-pairs is to “create tests that pair each value of each of the variables with each value of each other variable at least once”.

Hence we can optimize the test scenarios and still be sure that each value of each variable has been paired with each value of every other variable at least once. For example, the optimized test scenarios for the above problem would look like this:

TestCase | Radio-Option 1 | Radio-Option 2 | Radio-Option 3 | Pairing
1 | Option A | Option X | Option M | 3
2 | Option A | Option Y | Option N | 3
3 | Option A | Option Z | Option O | 3
4 | Option B | Option X | Option N | 3
5 | Option B | Option Y | Option M | 3
6 | Option B | Option Z | Option M | 2
7 | Option C | Option X | Option O | 3
8 | Option C | Option Y | Option M | 2
9 | Option C | Option Z | Option N | 3
10 | Option B | Option Y | Option O | 2

So we have narrowed our test scenarios from 27 down to 10.
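The pairing claim can be checked mechanically. The script below is a quick verification sketch (not part of the All-pairs tool) confirming that the 10 rows above cover every value pair across every pair of variables:

```python
from itertools import combinations

rows = [
    ("A", "X", "M"), ("A", "Y", "N"), ("A", "Z", "O"),
    ("B", "X", "N"), ("B", "Y", "M"), ("B", "Z", "M"),
    ("C", "X", "O"), ("C", "Y", "M"), ("C", "Z", "N"),
    ("B", "Y", "O"),
]
values = [("A", "B", "C"), ("X", "Y", "Z"), ("M", "N", "O")]

# Pairs actually exercised by the 10 rows.
covered = {(i, r[i], j, r[j]) for r in rows
           for i, j in combinations(range(3), 2)}
# Pairs that pairwise coverage demands: every value pair for every variable pair.
needed = {(i, a, j, b) for i, j in combinations(range(3), 2)
          for a in values[i] for b in values[j]}
print(needed <= covered)  # True: all 27 required pairs appear in just 10 rows
```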

The optimization is even more appreciated when the number of variables and the number of choices per variable are high. For instance, to try all combinations of 10 variables with ten values each we would require 10,000,000,000 test cases; All-pairs requires only 177.
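The coverage goal is simple enough to sketch. Below is a naive greedy generator that repeatedly picks the candidate row covering the most still-uncovered value pairs. This is only an illustration of the idea, not the algorithm the All-pairs tool actually uses, and its brute-force candidate scan is practical for small inputs only:

```python
from itertools import combinations, product

def greedy_pairwise(variables):
    # All value pairs that must appear at least once.
    uncovered = {(i, a, j, b)
                 for i, j in combinations(range(len(variables)), 2)
                 for a in variables[i] for b in variables[j]}
    rows = []
    while uncovered:
        # Scan every candidate row; fine for small inputs only.
        row = max(product(*variables),
                  key=lambda r: sum((i, r[i], j, r[j]) in uncovered
                                    for i, j in combinations(range(len(r)), 2)))
        rows.append(row)
        uncovered -= {(i, row[i], j, row[j])
                      for i, j in combinations(range(len(row)), 2)}
    return rows

rows = greedy_pairwise([["A", "B", "C"], ["X", "Y", "Z"], ["M", "N", "O"]])
print(len(rows) < 27)  # True: far fewer rows than exhaustive enumeration
```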

The Subway Menu Example:

A typical Subway order is a 7-step process, and each step provides multiple options to choose from. For example:

Size of Sub | 6 inch | Foot Long
Bread | Italian Herbs | Honey Oat | Wheat | Hearty Italian
Filling | Veggie Delight | Chicken Teriyaki | Turkey Breast | Ham
Toasted | Yes | No
Salad | Yes | No
Sauce | Light Mayo | Honey Mustard | BBQ
Fresh Value Meal | Soda | Crisps | Cookies

If a QA has to test whether a sub can be created using these combinations, he will have to test 2*4*4*2*2*3*3 = 1152 combinations. All-pairs reduces the number of test cases to 24.
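The count is just the product of the option counts per step, which a few lines confirm (step names as in the table above):

```python
from math import prod

# Number of choices at each step of the order.
steps = {"Size of Sub": 2, "Bread": 4, "Filling": 4, "Toasted": 2,
         "Salad": 2, "Sauce": 3, "Fresh Value Meal": 3}
print(prod(steps.values()))  # 1152
```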

The Twist in the problem: Adding dependencies between variables

Now, if I add dependencies to these scenarios, will All-pairs still be effective? For example, suppose there are 3 variables, Variable A, B and C, and each variable can take 3 possible values: Valid, Invalid, Null.

Variable A | Variable B | Variable C
Valid | Valid | Valid
Invalid | Invalid | Invalid
Null | Null | Null

There are additional dependent conditions.

  1. You can access Variable B only if variable A is valid.
  2. You can access variable C if variable A and Variable B are both valid.

If one uses All-pairs here to generate optimised test cases, we will get the following output:

TestCase | Variable A | Variable B | Variable C | Pairing
1 | Valid | Valid | Valid | 3
2 | Valid | Invalid | Invalid | 3
3 | Valid | Null | Null | 3
4 | Invalid | Valid | Invalid | 3
5 | Invalid | Invalid | Valid | 3
6 | Invalid | Null | Valid | 2
7 | Null | Valid | Null | 3
8 | Null | Invalid | Valid | 2
9 | Null | Null | Invalid | 3
10 | Invalid | Invalid | Null | 2

So what’s missing?

  1. We have never tested the variable C for invalid and null scenarios because of the added dependencies.
  2. The test cases 5,6,10 are redundant as scenario 4 had already covered them.
  3. The test cases 8, 9 are also redundant and don’t add any value.

The Actual test scenarios should be:

Test Case | Variable A | Variable B | Variable C
1 | Valid | Valid | Valid
2 | Valid | Valid | Invalid
3 | Valid | Valid | Null
4 | Valid | Invalid | Any Value
5 | Valid | Null | Any Value
6 | Invalid | Any Value | Any Value
7 | Null | Any Value | Any Value

The additional dependencies have actually helped us to reduce our test scenarios.
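The reduced table can be reproduced mechanically: collapse every unreachable variable to “Any Value” under the two dependency rules, then deduplicate. A sketch:

```python
from itertools import product

VALUES = ("Valid", "Invalid", "Null")

def effective(a, b, c):
    """Apply the dependency rules: Variable B is reachable only when A is
    Valid, and Variable C only when both A and B are Valid."""
    if a != "Valid":
        return (a, "Any Value", "Any Value")
    if b != "Valid":
        return (a, b, "Any Value")
    return (a, b, c)

# Collapse the 27 raw combinations into their distinct reachable scenarios.
scenarios = sorted({effective(a, b, c) for a, b, c in product(VALUES, repeat=3)})
print(len(scenarios))  # 7, matching the table above
```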

Conclusion and learning:

All-pairs is designed for creating optimized test scenarios if and only if the variables are independent and the options within each variable are mutually exclusive.

As a user of the tool, it is one’s responsibility to handle the dependencies externally and use All-pairs with independent variable set.


Posted by on January 11, 2012 in Testing




Brief Background: The client is a start-up that provides a community portal for American buyers to express their views and opinions via reviews of various products. The portal slowly evolved and is currently sold as a product to various retail chains, both as a ‘white-labelled’ and a customised solution according to customer needs. Our client has 3 major customers, with over 7 websites maintained across 6 browser combinations and 2 operating systems.

Current Development Process: The client started with a small development team that has grown to 80 developers in around 3.5 years. They currently run two-week iterations in which developers code for the first 6 days and test the application themselves for the next 4. As the client's customer base has grown, it has become essential to establish a QA practice to provide a safety net. Moreover, the client has to cater to the demands of customisation, so they need to keep their developers focused on coding and free them from testing. The client approached us to set up a reliable QA practice.

Problem statement

  • A legacy application of 3.5 years that has never been tested rigorously.
  • Most of the initial designers have left the organisation, so it is very difficult to understand the business context of the available features, and no documentation is available.
  • There is an urgent need to provide test coverage and optimise the entire testing process so that developer time is freed up, contributing to overall productivity.
  • There are no exact start and end points to ensure all major business flows are covered.

Our approach:

Given the above constraints, we needed to devise a plan to provide a comprehensive solution in a phased manner.

  1. Prioritization: The first step was to prioritise the various features by business value. For example, the Login and Review modules were given preference over the Blogs and Polls modules. This decision was important because we needed to quickly cover the core business areas and provide the most valuable feedback first.
  2. Breadth over Depth: There were two directions the testing process could take. The first was to cover a particular module in depth before venturing into others. The second was to provide a basic safety net across the various modules and then dive deeper to enhance test coverage over time. We preferred the second approach, as our primary intention was to get used to the application as soon as possible and have at least smoke test cases across the modules for upcoming releases.
  3. Exploratory Testing: The best approach under these conditions was to explore the application, understand the functionality and write test cases simultaneously. The biggest advantage of this approach was that we found bugs from the very beginning and so were able to contribute while learning. The second advantage was that we came up with business questions very fast and got them clarified, which highlighted our holistic approach, covering both functional and business areas.
  4. Structuring the Exploratory Testing: After a few weeks of writing test scenarios, we felt that exploratory testing let us produce smoke scenarios, but there were a lot of grey areas that were missed and needed to be covered in regression. Moreover, this approach has no definitive start and end points, so it can be neither quantified nor time-boxed. To overcome these loopholes it became important for us to restructure our approach: we wanted to harness the advantages of exploratory testing while overcoming its disadvantages. The viable solution lay in combining basic flow diagrams with exploratory testing, thereby structuring the entire process.

Let me take an example of Review Module:

When we drew the basic flow diagram of this module it appeared like this.

We took these flows as our base and explored around them to come up with exhaustive test coverage.

Advantages of combining Basic Flow Diagrams with ET:

  • It revealed the test flows which we missed out while doing exploratory testing.
  • It provided confidence that all possible flows for a particular module are covered.
  • We were able to come up with a start point and an end point for each module, and hence could quantify the exploratory testing in terms of test flows covered.
  • We were able to time-box our effort and come up with more accurate effort estimates for writing the test cases.
  • We were still able to maintain the pace while widening the test coverage.

Incorporating the Customization: A few minor customizations add an extra flow or modify an existing one, and there are a few customised versions of the functionalities, but the basic flow of the modules remains the same.

Since we were able to visualise these flows with the help of the flow diagrams, it became easy for us to become more flexible and robust, and we easily incorporated these changes into the current test scenarios.

Other added Advantages:

  • We were able to build a comprehensive regression suite of around 240 scenarios within 4 weeks using 2 resources.
  • We were able to log around 40 bugs which proved to be a valuable feedback for the client to modify the current code base.
  • We were able to optimise the entire regression cycle from initial period of 4 days to 2.5 days and smoke tests from 3.5 hrs to 2 hrs.

Key Business Value delivered:

  • The client was able to see the immediate benefits in terms of bugs, which provided effective feedback to incorporate changes in code base.
  • The client developers need not be responsible for testing after 6 weeks of overlap.
  • The test scenarios identified are used by the automation team to build automation scripts for CI.


Key learnings:

  1. Exploratory testing is an effective way to provide quick feedback on legacy applications with little or no functional knowledge.
  2. Exploratory testing can help us be productive and learn at the same time, hence adding value.
  3. Exploratory testing, when coupled with basic flow diagrams, becomes an effective tool for providing extensive test coverage.
  4. Structured exploratory testing provides a definitive start and end point, and hence can be tracked and time-boxed.

Posted by on January 10, 2012 in Exploratory Testing, Testing

