Do you publish automated test execution results in a Test Management System?

Does your team use open source tools to automate tests? Does your testing team update the automated test execution results in a Test Management System (TMS)? If the answer is no, you may like to recommend this post to them.

Teams want to know from testers how complete the testing is before deciding to move their code further down the delivery chain. Testing teams usually use a TMS as a centralized repository for test artifacts such as test cases (mapped to requirements/user stories), test plans, and execution results, and to track progress. If your test automation results are not published in the TMS, your team will never get a real-time snapshot of test execution and progress. With the results updated in the TMS, you also have the execution history to support various decisions.

Teams using open source tools are often not aware that they can update the execution results in the TMS through an adapter, or they simply have not thought of it. Even when the automation team knows about it, they may not want to publish the results because their test scripts are not yet stable and they don't want to publish wrong data. 🙂 Fair point.

My team has developed adapters for HP ALM, Rally and TestLink. After automated test execution, these adapters update the results in the relevant TMS. You could also build such adapters: ALM provides REST APIs which you can use to publish execution results, those on earlier versions of QC can use the QC OTA API, and Rally also provides an API for the same purpose.
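If you run your automation with TestNG, one simple way to wire in such an adapter is a result listener. Below is a minimal sketch only; the endpoint URL, JSON payload and missing authentication are placeholders and not the real ALM/Rally/TestLink APIs, which you would call from the publish method instead.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Hypothetical TMS adapter: pushes each test result to a TMS over REST.
// The endpoint and payload below are placeholders, not a real TMS API.
public class TmsResultListener extends TestListenerAdapter {

    private static final String TMS_ENDPOINT = "https://your-tms.example.com/api/runs"; // placeholder

    @Override
    public void onTestSuccess(ITestResult result) {
        publish(result.getName(), "Passed");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        publish(result.getName(), "Failed");
    }

    private void publish(String testName, String status) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(TMS_ENDPOINT).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            String payload = "{\"test\":\"" + testName + "\",\"status\":\"" + status + "\"}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("TMS update for " + testName + ": HTTP " + conn.getResponseCode());
        } catch (Exception e) {
            // A failed TMS update should not fail the test run itself
            System.err.println("Could not publish result for " + testName + ": " + e.getMessage());
        }
    }
}

Register the listener in testng.xml (or with the @Listeners annotation) and every pass/fail will be pushed to the TMS as soon as the test finishes.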

In case you face issues building such adapters, do comment and I will be happy to assist.

Web services/API testing, go open source

Are you paying for a web services/API testing tool? Then this post is for you.

A few weeks back, during a discussion, a client shared that they have around 1800+ web services tests, have automated 1100+ of them using a licensed tool (which I will not name 🙂), and were planning to buy more licenses to automate more. I advised him to consider open source tools over licensed ones, as my team is successfully automating with open source technology (Java, Selenium, TestNG, Maven, Jenkins, Extent Reports etc.) and has already delivered 1250+ automated functional tests. After a demo, the client gained confidence and we have started automating new API tests as well as migrating the existing ones. The plan is to migrate all existing tests within another 4 months.

Let me talk about this framework, which you may also like to build and use. We have developed a stable REST Assured based API automation framework built with Java, where all the test data is kept in Excel and TestNG drives parallel execution to save time. Any licensed tool gives you the features of sending requests, performing assertions on responses, controlling the execution, running tests in parallel, and keeping test data and configuration separate from your tests. Complexity increases when your services require authentication using tokens or certificates and when you need to validate data in databases. Another challenge is when your API tests need to fetch data from a previous test's response.
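Here is a minimal sketch of the kind of test the framework runs. The base URL, endpoints and field names are purely illustrative placeholders; in the real framework the data comes from Excel sheets and the suite runs in parallel via TestNG.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.response.Response;
import org.testng.annotations.Test;

// Illustrative REST Assured + TestNG test: token-based authentication and
// response chaining. Endpoints and field names are placeholders.
public class OrderApiTest {

    private static final String BASE_URL = "https://api.example.com"; // placeholder
    private String authToken;

    @Test
    public void getToken() {
        // Fetch an authentication token and keep it for the dependent test
        Response response = given()
                .contentType("application/json")
                .body("{\"user\":\"demo\",\"password\":\"demo\"}")
                .when()
                .post(BASE_URL + "/auth/token");

        response.then().statusCode(200);
        authToken = response.path("token"); // value reused by the next test
    }

    @Test(dependsOnMethods = "getToken")
    public void getOrder() {
        given()
                .header("Authorization", "Bearer " + authToken)
                .when()
                .get(BASE_URL + "/orders/1001")
                .then()
                .statusCode(200)
                .body("status", equalTo("SHIPPED"));
    }
}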

Below is the high-level overview of the framework for your reference.

[Diagram: RestAssuredFramework, high-level overview of the framework]

You may also like to go through the comparison with licensed tools below.

[Comparison: REST Assured API vs. licensed tool]

If you want to share your opinion or experience, please do so in the comments.

Selenium: How to verify PDF content?

import java.io.BufferedInputStream;
import java.io.IOException;
import java.net.URL;
import java.util.concurrent.TimeUnit;

import org.apache.pdfbox.pdfparser.PDFParser;
import org.apache.pdfbox.util.PDFTextStripper;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;

public class PdfReadWrite {

    public static void main(String args[]) throws IOException {

        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.nikon.co.in/tmp/IN/2419865273/3760176746/2586568015/286546384/3855120363/4043737636/4196861409/3724334581/3315196638.pdf");
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

        // create a pointer to the PDF URL
        URL url = new URL(driver.getCurrentUrl());

        // buffer the file into temporary storage
        BufferedInputStream sourceFile = new BufferedInputStream(
                url.openStream());

        // parse the PDF (PDFBox 1.x API)
        PDFParser parser = new PDFParser(sourceFile);
        parser.parse();

        // put the PDF text into a String variable
        String textPdf = new PDFTextStripper().getText(parser.getPDDocument());

        // in case you want to see where the substring is
        System.out.println(textPdf.indexOf("D3X Body only"));

        // apply the assertion
        try {
            Assert.assertTrue(textPdf.contains("D3X Body only"),
                    "Text 'D3X Body only' is not found");
        } catch (AssertionError e) {
            // report the failure instead of swallowing it silently
            System.out.println(e.getMessage());
        }

        // close the document and the browser
        parser.getPDDocument().close();
        driver.quit();

    }// main

}// class

// in case you get errors, check that you have all the jars for the classes in the import section (Selenium, PDFBox, TestNG)

Testing in Agile-based methodologies


Testing in Agile Framework

In contrast to traditional SDLCs, in Agile the testing life cycle is squeezed from a few months to a few weeks, and each cycle begins and ends in the same iteration. An iteration is similar to a sprint in Scrum terms. Test planning is done at the beginning of each iteration, during the iteration planning meeting. The testing team needs to work in close collaboration with the development team and the Product Owner. During the iteration, the development team produces frequent, fully or partially unit-tested builds for testing.

Though the objective of testing is the same, the challenges of testing in Agile are different from those in traditional SDLCs. However, in many cases testing professionals still try to fit the traditional testing approach, techniques and measurements into Agile. This blog talks about the role of test engineers and what testing they should consider in Agile.

In 2001, the Agile Manifesto was introduced to the software industry to provide a framework for building software faster in a turbulent business environment. Over time, various SDLCs complying with the Agile Manifesto, such as Scrum and XP, matured, and many success stories back up the success of agile methodologies.

Software is developed in iterations. In each iteration, the agile team focuses on designing, coding, and testing a small set of requirements. The team delivers potentially shippable software, i.e. the implementation of a small chunk of the requirements, at the end of each iteration. Iteration sizes range from 2 to 6 weeks. Test engineers are involved from sizing the user stories to confirming their correct implementation.

Sizing of User Stories

During the user story sizing workshop, the agile team sizes each user story relative to a specific reference user story. Usually test engineers don't participate, or participate only passively. Test engineers need to estimate the testing effort required for each user story and share their estimates accordingly. Usually, development and testing estimates match in relative terms; however, there are cases where development effort is less than testing effort and vice versa. For example, test engineers need to put in more effort to test a user story such as "As an Admin I want to see yearly reports containing monthly sums so that I can compare which month has larger sums", because the testing team needs to create test data spanning years, months, and days.

Unit Testing

Unit testing ensures that developers are writing the right code correctly to implement the desired functionality. It also catches cases which black box test engineers may miss, and reduces the cost of bugs by finding them early. Test Driven Development (TDD), Behavior Driven Development (BDD), and Acceptance Test Driven Development (ATDD) are getting popular in the developer community. They discover bugs at the unit level as well as at integration points. Even where following TDD/BDD/ATDD is not possible, unit and integration testing should not be compromised. To ensure frequent and efficient unit/integration testing, continuous integration tools must be used.
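As an illustration only (the class and its discount rule are hypothetical), a TDD-style unit test with TestNG could look like this: the tests are written first, and just enough code is then added to make them pass.

import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

// Hypothetical example: the tests drive the implementation of DiscountCalculator.
public class DiscountCalculatorTest {

    // Minimal implementation, written after the tests, just enough to make them pass
    static class DiscountCalculator {
        double discountedPrice(double price) {
            return price >= 100.0 ? price * 0.9 : price;
        }
    }

    @Test
    public void ordersOfHundredOrMoreGetTenPercentDiscount() {
        assertEquals(new DiscountCalculator().discountedPrice(200.0), 180.0, 0.001);
    }

    @Test
    public void smallerOrdersPayFullPrice() {
        assertEquals(new DiscountCalculator().discountedPrice(50.0), 50.0, 0.001);
    }
}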

Acceptance Testing

It is recommended that each iteration's delivery be designed, coded and tested in the same iteration. Agile practitioners use user stories to capture requirements. A user story contains the purpose and business value as well as completion criteria. These completion criteria, also known as the Done criteria, become the basis for testing and for accepting whether a user story is implemented as expected.

Test engineers need to ensure that each user story has agreed-upon completion criteria. They are expected to write acceptance test cases to verify that the user story meets the acceptance criteria.

Acceptance test cases should be written against each completion criterion, and these test cases should be executed frequently and once more after the code is frozen and added to the potentially shippable product.

Functional Testing

Though acceptance test cases confirm that a user story is implemented as expected, they cannot replace the testing of end-to-end flows, alternate flows, negative cases, etc.

As new user stories are implemented and integrated with the existing software, new and alternate flows get introduced, test data needs increase, and additional user roles/personas come into the picture. Functional test cases ensure that these additional cases are captured and executed to verify the correct integration between new and existing functionality. In Agile, however, it is not recommended to write exhaustive test cases, as most test cases' life will end with the iteration and only a few functional test cases will be selected for the automated regression suite. Hence test case optimization techniques such as orthogonal arrays, equivalence class partitioning, and workflow-based and risk-based testing play a pivotal role. The more test cases you have, the more the overhead of managing them.
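For example, equivalence class partitioning covers an input range with one representative value per class instead of exhaustive values. Here is a small sketch with a hypothetical validation rule:

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Hypothetical rule: age is valid between 18 and 60. One representative
// value is picked per equivalence class instead of testing every age.
public class AgeValidatorTest {

    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    @DataProvider(name = "ageClasses")
    public Object[][] ageClasses() {
        return new Object[][] {
                { 10, false },  // class: below 18
                { 35, true },   // class: 18 to 60
                { 70, false }   // class: above 60
        };
    }

    @Test(dataProvider = "ageClasses")
    public void validateAge(int age, boolean expected) {
        assertEquals(isValidAge(age), expected);
    }
}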

Automated Regression

Automate as early as possible and automate as much as possible are the mantras of testing in Agile. Software is developed incrementally, and with every passing iteration the regression suite gets larger while the iteration size remains the same. Gradually the testing team starts spending more time on regression instead of exploratory testing and testing of new user story implementations. As a rule of thumb, all validation tests and acceptance tests should be automated. In cases where test automation is not possible, it is better to have a hardening iteration after every 4-5 iterations. During the hardening iteration, the testing team focuses on bug regression and retesting, while the development team addresses the bugs from the backlog.
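One simple way to keep the regression suite runnable on its own is to tag the automated tests, for example with TestNG groups, and include only the regression group in the suite file or on the command line. The tests below are placeholders that only show the tagging:

import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

// Placeholder tests showing how acceptance/regression tagging can be done.
public class CheckoutTests {

    @Test(groups = { "acceptance", "regression" })
    public void orderCanBePlacedWithValidCard() {
        assertTrue(true); // the real validation steps would go here
    }

    @Test(groups = { "regression" })
    public void orderTotalIncludesTax() {
        assertTrue(true); // the real validation steps would go here
    }
}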

Exploratory Testing

Agile aims at faster development by increasing productivity and reducing waste and rework. Test cases, though they try to cover most scenarios, usually end up covering happy paths and known negative cases. Even with optimized test cases to reduce the test case count, test case based testing cannot substitute for how a human mind observes and reacts while using the software. This is where exploratory testing comes in, to unearth more issues in the shortest time. It helps in finding scenarios and defects which are difficult even to imagine at the time of test case writing.

The testing team needs to work closely with the product owner to understand the end user. To ensure that exploratory testing does not go unguided, the testing team develops user roles, personas, and extreme characters, and then imitates them.

Measuring Testing Progress

Even when working in Agile, project managers still judge test engineers' productivity by the test cases written and executed in a specific period, and testing progress is measured by counting executed and pending test cases. Instead, testing progress should be measured with burn-down charts, the quality of testing with the bug escape rate, and the stability of user stories by mapping bugs to user stories.

Defect Management

Defects in user story implementations should be addressed in the same iteration. Still, there are bugs that cannot be fixed because of time constraints, priorities, resource crunch, ambiguous confirmation criteria, afterthought cases, etc. Over time these unattended bugs accumulate, and dealing with them becomes a challenge. Some teams prefer to create a bug backlog and add the bugs that could not be addressed in the recently completed iteration; with the help of the product owner they prioritize these bugs and fix them. This approach increases the overhead of maintaining two backlogs.

Another approach is to treat open bugs as user stories and add them to the product backlog. This approach is least preferred, as it increases the size of the product backlog and the product owner has to put in more effort to prioritize it. Teams need to spend more time on estimation and planning. Product owners also resist, as they have to accept as user stories bugs which they did not create but which are the outcome of incorrect implementation.

Agile teams that consider only user stories during iteration planning struggle later to find time to fix and regress the bugs. Hence it is better to reserve 10-20% of the iteration length for bug fixing and regression. To reduce the bug fixing effort, the team first picks the bugs found in the recently completed iteration, as the code and constraints are still fresh in their minds.

Test engineers, next to the product owner, usually have the best understanding of the system; they can identify potential improvements and recurring issues which can be converted into user stories. These user stories can then be added to the product backlog for sizing and prioritization.

Non Functional Testing

The team needs to break user stories into multiple stories to keep the functional and non-functional requirements separate. This helps in tracking both functional and non-functional needs. Testing of non-functional needs such as security, performance, accessibility, high availability, reliability, and usability requires a different skill set and expertise. The agile team first focuses on the functional user stories of a theme, then picks up the non-functional user stories and involves the non-functional testing experts. Though functional test engineers remain in the team across all iterations, specialized test engineers are involved on a need basis.

Buddy testing

In XP, two developers work on the same user story: one writes the code while the other writes tests to drive quality code. In a similar way, associate a test engineer with a developer. Whenever the developer builds and integrates his code, he invites the test engineer to test it. The test engineer performs exploratory testing and explains the purpose of the various tests. The developer consolidates the bugs found and fixes them later. This approach helps bring the development and testing teams onto the same page, unearths ambiguous requirements, increases collaboration, and pulls down the project cost. Gradually, developers also start understanding testing techniques and testing their own code, which consequently reduces the bug count and increases the productivity of the entire team.

Role of Test Engineers

Keeping in mind the changing paradigm, writing lots of test cases, creating big test suites for various types of testing, establishing various traceability matrices, and filing defects will not bring value in Agile. Teams are small and team members work in collaboration in a time-boxed environment.

Functional test engineers need both manual and automation expertise besides good knowledge of databases. As iterations are time-boxed, they must be experts in exploratory testing and use their skills to optimize test data and test cases. In the absence of the product owner, the testing professional may have to play the product owner's role. Automating regression suites is the need of the hour to reduce the test cycle within iterations. The team works closely with the product owner to understand user stories, hence test engineers must have good communication and analytical skills.

 

What do you think about testing in Agile? Please do share.