Frequent HP ALM crashes and how we are dealing with them

HP ALM is one of the tools large companies use to manage their testing life cycle. I consider it a reliable tool that helps in managing testing well; however, I recently had a bad experience with ALM, and one of my teams lost almost 200 person-days of productivity.

One of my teams was using HP ALM 11.5 with the BPT (Business Process Testing) approach to automate tests for a leading US-based telecom company. We were using ALM to manage our test cases, test data, components, flows, scripts, automated test cases, and so on. Things went well initially while the team was busy creating reusable components and pulling them into test cases, but when my engineers started executing tests from the Test Lab, we were troubled by many issues:

- UFT would not launch from ALM;

- Components crashed when we tried to run test cases;

- Components and data sheets vanished from ALM without leaving a trace;

- Test sets executed halfway and then UFT crashed;

- Components remained locked after crashes.

After every crash in which we could not find a way to recover the components, we had to redevelop them from scratch. Data sheets had to be reloaded and the mapping redone. The team gradually grew frustrated with the rework, and despite our efforts we could not deliver what we had committed. We lost a three-week sprint with no delivery.

While the team was struggling with ALM, our ALM admin was investigating these issues with the HP team, trying to find the root cause. They ran a few utilities, but we got no respite.

In the meantime, we also examined our infrastructure. We realized that the ALM server was holding seven big projects under one ALM project, with more than 52 GB of data. To give you an idea of the size: there were about 140,000 test cases, whose executions had produced about 300,000 test runs.

To address these issues, we are now proceeding with the following activities:

  1. Cleaning up unwanted records (test data, test cases, runs, user details, etc.) to improve performance by reducing the load on the server
  2. Segregating the existing projects so that each development project gets its own ALM project

In case you are facing similar issues, you may like to know how we are currently dealing with them:

  1. We take a backup every morning, which helps us recover components and data sheets in the event of a loss (a rough sketch of such a backup job appears after this list).
  2. The team works in two shifts to keep the load on the server as low as possible.
  3. We avoid working on the same components concurrently.
  4. We have reduced the frequency of check-ins and check-outs.
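
For the morning backup in item 1, one lightweight option is to pull entity XML through ALM's REST interface and store it on disk. This is only a rough Python sketch, not our exact procedure: the server URL, domain, project, and credentials are placeholders, and the endpoint paths follow HP's documented REST pattern for ALM 11.x, so verify them against your own installation.

```python
# Hedged sketch of a daily ALM export; all connection details are
# placeholders, and paths follow ALM 11.x's documented REST pattern.
import datetime
import pathlib
import requests

BASE = "https://alm.example.com/qcbin"     # hypothetical server
DOMAIN, PROJECT = "DEFAULT", "TELECOM"     # placeholders

session = requests.Session()
# ALM hands back an LWSSO session cookie after Basic authentication.
resp = session.get(f"{BASE}/authentication-point/authenticate",
                   auth=("alm_user", "alm_password"))
resp.raise_for_status()

backup_dir = pathlib.Path("alm-backup-" + datetime.date.today().isoformat())
backup_dir.mkdir(exist_ok=True)

# Export the raw XML of the entities we keep losing.
for entity in ("tests", "test-sets", "defects"):
    resp = session.get(f"{BASE}/rest/domains/{DOMAIN}/projects/{PROJECT}/{entity}")
    resp.raise_for_status()
    (backup_dir / f"{entity}.xml").write_bytes(resp.content)

session.get(f"{BASE}/authentication-point/logout")  # release the session
```

Such an export does not replace a database-level backup; it only keeps a same-day copy of entity metadata to reconstruct from.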

 

HP ALM is a good tool for automation testers: it saves time in developing and maintaining frameworks, and it provides management and reporting to measure the quality of the product being built. If you have gone through a similar experience with ALM and managed to resolve it, please do write in; it would be a great help. I will update this blog once our cleanup activity is complete or if we find a solution.

Metrics that measure UAT readiness, cost of quality, etc.

A small debate on bugs over lunch turned into a longer discussion on how to ensure that the team is delivering the right quality, the business is getting value, the cost of quality is gradually optimized, and test coverage is giving confidence. We all agreed on the need to add checkpoints. Each checkpoint produces some useful metrics, which can help in identifying symptoms of non-conformance to quality goals (budget, scope, time). Here are a few metrics I prefer to use, depending on the development model, the availability of data, and the specific information needed. I believe one should not create a metric that no one can use in decision making. Please let me know if you would like to add more.

=================================
Pre-UAT/Production Metrics
=================================

Test coverage
Unit test coverage
Functional/non-functional Test coverage
Requirement /Acceptance Criteria Traceability
Executed tests vs pending tests

Defect Analysis
Defect Density Feature-wise (see the sketch after this list)
Defect Density Feature/Requirement Size-wise
Defect Escape Rate
Bug Find/Close Trends
Bug Reopen Trends
Bug Aging
User Stories Stability
Test Case Effectiveness
Browser-wise Bug Distribution
Root Cause Analysis
Component-wise Stability
Productivity Metrics
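
Most of these measures reduce to simple ratios. As a hedged illustration of the feature-wise defect density item flagged above, here is a minimal Python sketch; every feature name and count in it is hypothetical:

```python
# Feature-wise defect density: defects divided by feature size.
# "Size" is in story points here; KLOC or function points work the same.
defects_per_feature = {"billing": 12, "reports": 5, "login": 2}
size_per_feature = {"billing": 20, "reports": 8, "login": 5}   # story points

for feature, defects in defects_per_feature.items():
    density = defects / size_per_feature[feature]
    print(f"{feature}: {density:.2f} defects per story point")
```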

Test Automation
Automation Execution Results
Automation Execution Build-wise
Automation Execution Sprint-wise
Automation Progress Week-wise
Automation Script Maintenance

UAT Readiness
Status of Stories/Requirements
Tests execution coverage and status
Priority 1 and 2 defect status
Requirement/component/feature stability
Regression Status

Productivity
Number of deployment re-runs in test environment
Bug Reopen trend

=================================
Post-Production Metrics
=================================

Customer Satisfaction
Production Defect slippage
Number of product/release rollbacks (Quality Issues)
Product enhancement requests after each production release (Requirement Completeness)
Maintenance fixes per year/release (Technical debt)
Number of emails/calls to customer service (Usability Issues)
Number of training sessions for end users/customers (Usability Issues)

Defect Analysis
Production Defect slippage per production release
Production Defect slippage per requirement size
UAT Defect slippage per UAT release
UAT Defect slippage vs requirement size
Root cause Analysis (requirement/code/test/configuration)
Pre UAT/UAT/Production bugs ratios
Defects found by all in Production in a year

Defect Cost
Business loss per defect
Costs of work-around
Brand image impact
Cost to fight/pay legal cases
Ratio of Maintenance to Enhancements

Product Reliability
Availability Actual vs Expected
Mean time between failure (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Production Defect slippage
Number of product/release rollbacks (Quality Issues)

Deployment Quality
Number of Production deployment re-runs (Deployment issues)
Ratio of failed deployments to total deployment attempts

Turnaround Time Metrics
Turnaround time for production issues (severity wise)
Turnaround time for Enhancements developments (Major/minor)
Planned/Actual Turnaround time for production issues (Major/minor)

Cost of Quality
Cost of preparing Test strategy/plans
Cost of test development and management, defect tracking
Cost of test execution and defect reporting
Cost of static validations like reviews, walkthrough, and inspections
Costs of analysis, debugging and fixing, retesting
Costs of tools
Cost of training

Testing in Agile-based methodologies

https://mmathpal.wordpress.com/

Testing in the Agile Framework

In contrast to traditional SDLCs, in Agile the testing life cycle is squeezed from a few months to a few weeks, and each cycle begins and ends in the same iteration. An iteration is the equivalent of a sprint in Scrum terms. Test planning is done at the beginning of each iteration, during the iteration planning meeting. The testing team needs to work in close collaboration with the development team and the Product Owner. During the iteration, the development team produces frequent, fully or partially unit-tested builds for testing.

Though the objective of testing is the same, the challenges of testing in Agile are different from those in a traditional SDLC. Even so, in many cases testing professionals still try to fit traditional testing approaches, techniques, and measurements into Agile. This blog talks about the role of test engineers and the kinds of testing they should consider in Agile.

In 2001, the Agile Manifesto was introduced to the software industry to provide a framework for building software faster in a turbulent business environment. Over time, various SDLCs complying with the Agile Manifesto, such as Scrum and XP, have matured, and numerous success stories back up the success of Agile methodologies.

Software is developed in iterations. In each iteration, the Agile team focuses on designing, coding, and testing a small set of requirements. At the end of each iteration, the team delivers potentially shippable software, i.e., an implementation of a small chunk of the requirements. Iterations run from two to six weeks. Test engineers are involved from sizing the user stories to confirming their correct implementation.

Sizing of User Stories

During the user story sizing workshop, the Agile team sizes each user story relative to a reference story. Usually, test engineers don't participate, or participate only passively. Test engineers need to estimate the testing effort a user story will require and share their estimates accordingly. Usually, the development and testing estimates match in relative terms, but there are cases where the development effort is smaller than the testing effort, and vice versa. For example, test engineers need to put in more effort to test a user story such as "As an Admin, I want to see yearly reports containing monthly sums so that I can compare which month has larger sums." For this story, the testing team needs to create test data spanning years, months, and days.
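
To make the testing-effort point concrete, here is a minimal Python sketch of generating the date-spanning test data this story would need; the per-day volume and amount range are hypothetical:

```python
# Hypothetical data generator for the yearly-report story: rows of
# (date, amount) covering every day across several years.
import random
from datetime import date, timedelta

def generate_transactions(start: date, end: date, per_day: int = 3):
    """Yield (date, amount) rows for every day in [start, end]."""
    day = start
    while day <= end:
        for _ in range(per_day):
            yield day, round(random.uniform(5.0, 500.0), 2)
        day += timedelta(days=1)

# Three full years, so monthly and yearly sums can be cross-checked.
rows = list(generate_transactions(date(2012, 1, 1), date(2014, 12, 31)))
print(len(rows), "rows generated")
```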

Unit Testing

Unit testing ensures that developers are implementing the desired functionality correctly. It also catches cases that black-box test engineers may miss, and it reduces the cost of a bug by finding it early. Test-Driven Development (TDD), Behavior-Driven Development (BDD), and Acceptance Test-Driven Development (ATDD) are becoming popular in the developer community. They uncover bugs at the unit level as well as at integration points. In cases where following TDD/BDD/ATDD is not possible, unit and integration testing should still not be compromised. To ensure frequent and efficient unit/integration testing, continuous integration tools must be used.
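
As a small, hedged illustration of the TDD flow, the test below is written against a hypothetical monthly_sums() helper, whose implementation then makes it pass (runnable with pytest):

```python
# TDD in miniature: the assertion defines the behavior we want from
# the hypothetical monthly_sums() helper before/while we implement it.
from collections import defaultdict
from datetime import date

def monthly_sums(rows):
    """Aggregate (date, amount) rows into {(year, month): total}."""
    totals = defaultdict(float)
    for day, amount in rows:
        totals[(day.year, day.month)] += amount
    return dict(totals)

def test_monthly_sums_groups_by_month():
    rows = [(date(2014, 1, 5), 10.0),
            (date(2014, 1, 20), 5.0),
            (date(2014, 2, 1), 7.5)]
    assert monthly_sums(rows) == {(2014, 1): 15.0, (2014, 2): 7.5}
```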

Acceptance Testing

It is recommended that each iteration's delivery be designed, coded, and tested within that same iteration. Agile practitioners capture requirements as user stories. A user story contains the purpose and business value as well as completion criteria. These completion criteria, also known as the Done criteria, become the basis for testing and for accepting whether a user story is implemented as expected.

Test engineers need to ensure that each user story has agreed-upon completion criteria. They are expected to write acceptance test cases to verify that the user story meets those criteria.

Acceptance test cases should be written against each completion criterion, executed frequently, and run once more after the code is frozen and added to the potentially shippable product.
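
As a hedged sketch of how one completion criterion becomes an executable acceptance test, take the reporting story from the sizing section; yearly_report() is a hypothetical stand-in for the real implementation:

```python
# Criterion under test: "the yearly report shows a sum for every month".
from datetime import date

def yearly_report(rows, year):
    """Return twelve monthly sums for the given year (stand-in)."""
    sums = [0.0] * 12
    for day, amount in rows:
        if day.year == year:
            sums[day.month - 1] += amount
    return sums

def test_report_contains_twelve_monthly_sums():
    rows = [(date(2014, month, 1), 100.0) for month in range(1, 13)]
    assert yearly_report(rows, 2014) == [100.0] * 12
```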

Functional Testing

Though acceptance test cases confirm that a user story is implemented as expected, they cannot replace the testing of end-to-end flows, alternate flows, negative cases, and so on.

As new user stories are implemented and integrated with the existing software, new and alternate flows are introduced, the test data need increases, and additional user roles/personas come into the picture. Functional test cases ensure these additional cases are captured and executed to verify the correct integration of new and existing functionality. In Agile, however, it is not recommended to write exhaustive test cases, as most test cases' useful life ends with the iteration. Only a few of the functional test cases will be selected for the automated regression suite. Hence, test case optimization techniques such as orthogonal arrays, equivalence class partitioning, and workflow-based and risk-based testing play a pivotal role (see the sketch below): the more test cases there are, the greater the overhead of managing them.
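
As one hedged illustration of such optimization, the sketch below applies equivalence class partitioning: one representative value per input class, which caps the suite at twelve cases instead of an unbounded input space. The classes themselves are made up for the example:

```python
# Equivalence class partitioning: test one representative per class.
import itertools

amount_classes = {"zero": 0, "typical": 250, "max": 10000}
role_classes = {"admin": "admin", "viewer": "viewer"}
browser_classes = {"chrome": "Chrome", "firefox": "Firefox"}

# Cartesian product of representatives: 3 * 2 * 2 = 12 cases.
cases = itertools.product(amount_classes.values(),
                          role_classes.values(),
                          browser_classes.values())
for amount, role, browser in cases:
    print(f"case: amount={amount}, role={role}, browser={browser}")
```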

Automated Regression

"Automate as early as possible and automate as much as possible" is the mantra of testing in Agile. Software is developed incrementally, and with every passing iteration the regression suite grows while the iteration length remains the same. Gradually, the testing team starts spending more time on regression than on exploratory testing and on testing new user story implementations. As a rule of thumb, all validation tests and acceptance tests should be automated (a small sketch follows). In cases where test automation is not possible, it is better to have a hardening iteration after every four to five iterations, during which the testing team focuses on bug regression and retesting while the development team addresses bugs from the backlog.
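
Here is a hedged sketch of one automated regression check using Selenium's Python bindings; the URL and selectors are hypothetical, and a local ChromeDriver (or Selenium Manager) is assumed:

```python
# One regression check: the yearly report page renders twelve rows.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_yearly_report_renders():
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/reports/yearly")  # placeholder
        rows = driver.find_elements(By.CSS_SELECTOR, "#report-table tr.month")
        assert len(rows) == 12
    finally:
        driver.quit()
```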

Exploratory Testing

Agile aims at faster development by increasing productivity and reducing waste and rework. Test cases, though they try to catch most situations, end up covering happy paths and known negative cases. And since, as discussed above, we optimize test cases to keep their count down, test-case-based testing cannot substitute for how the human mind observes and reacts while using the software. This is where exploratory testing comes in, unearthing more issues in the shortest time. It helps find scenarios and defects that are difficult even to imagine at test-case-writing time.

The testing team needs to work closely with the Product Owner to understand the end user. To ensure that exploratory testing does not go unguided, the testing team develops user roles, personas, and extreme characters, and then imitates them.

Measuring Testing Progress

Even when working in Agile, project managers still judge test engineers' productivity by counting the test cases written and executed in a given period, and testing progress by counting executed versus pending test cases. Testing progress should instead be measured with burndown charts, and the quality of testing with the bug escape rate. To check the stability of user stories, map bugs to the user stories they belong to (a small sketch of both measures follows).
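
A hedged sketch of both measures, with purely illustrative numbers:

```python
# Burndown: test points remaining at the end of each iteration day.
planned_points = 40
completed_cumulative = [0, 5, 9, 14, 22, 28, 33, 40]  # per day
burndown = [planned_points - done for done in completed_cumulative]
print("remaining points per day:", burndown)

# Bug escape rate: share of defects missed in-iteration and found later.
found_in_iteration = 27
found_after_iteration = 3   # UAT/production
escape_rate = found_after_iteration / (found_in_iteration + found_after_iteration)
print(f"bug escape rate: {escape_rate:.1%}")   # -> 10.0%
```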

Defect Management

Defects in user story implementations should be addressed within the same iteration. Still, some bugs cannot be fixed because of time constraints, priorities, resource crunches, ambiguous confirmation criteria, afterthought cases, and so on. Over time, unattended bugs accumulate, and dealing with them becomes a challenge. Some teams prefer to create a bug backlog and add to it the bugs that could not be addressed in the recently completed iteration; with the help of the Product Owner, they prioritize and fix them. This approach increases the overhead of maintaining two backlogs.

Another approach is to treat open bugs as user stories and add them to the product backlog. This approach is the least preferred, as it inflates the product backlog and the Product Owner has to put more effort into prioritizing it; teams also need to spend more time on estimation and planning. Product Owners tend to resist it too, since they have to accept as user stories bugs they did not create, but which are the outcome of incorrect implementation.

Agile teams that consider only user stories during iteration planning struggle later to find time to fix and regress bugs. Hence, it is better to reserve 10-20% of the iteration length for bug fixing and regression. To reduce the bug-fixing effort, the team picks the bugs found in the most recently completed iteration first, while the code and its constraints are still fresh in their minds.

Test engineers, next to the Product Owner, usually have the best understanding of the system; they can identify potential bugs that can be converted into user stories. These user stories can then be added to the product backlog for sizing and prioritization.

Non-Functional Testing

The team needs to break user stories into multiple stories to keep the functional and non-functional requirements separate; this helps in tracking both kinds of needs. Testing non-functional needs such as security, performance, accessibility, high availability, reliability, and usability requires a different skill set and expertise. The Agile team first focuses on the functional user stories of a theme, then picks up the non-functional user stories and involves the non-functional testing experts. While functional test engineers remain on the team through all iterations, specialized test engineers are involved on an as-needed basis.

Buddy testing

In XP, two developers work on the same user story: one writes code while the other writes tests, to produce quality code. In a similar way, associate a test engineer with a developer. Whenever the developer builds and integrates code, he invites the test engineer to test it. The test engineer performs exploratory testing and explains the purpose of the various tests, and the developer consolidates the bugs found and fixes them later. This approach is valuable: it brings the development and testing teams onto the same page, unearths ambiguous requirements, increases collaboration, and pulls down the project cost. Gradually, developers also start understanding testing techniques and testing their own code, which reduces the bug count and increases the productivity of the entire team.

Role of Test Engineers

Keeping the changing paradigm in mind: writing lots of test cases, creating big test suites for various types of testing, establishing numerous traceability matrices, and filing defects will not, by themselves, bring value in Agile. Teams are small, and team members work in collaboration in a time-boxed environment.

Functional test engineers need both manual and automation expertise, besides good database knowledge. As iterations are time-boxed, they must be expert in exploratory testing and skilled at optimizing test data and test cases. In the absence of the Product Owner, a testing professional may have to play the Product Owner's role. Automating regression suites is the need of the hour to shorten the test cycle within iterations. Since the team works closely with the Product Owner to understand user stories, test engineers must also have good communication and analytical skills.

 

What do you think about testing in Agile? Please do share.

Is it possible to automate accessibility testing?

Accessibility - Visual Disability. This blog discusses the challenges of automating the accessibility testing of web applications made accessible for people with vision-related issues, such as blindness and low vision.

Starting with a brief introduction to accessibility, I will discuss the challenges we faced, and then end with the solution.

As per W3C, Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web.

The W3C started the Web Accessibility Initiative (WAI) to lead the Web to its full potential of being accessible, enabling people with disabilities to participate equally on the Web. Web accessibility testing validates that a website is accessible to people with various levels of disability.

Businesses are making their websites accessible to avoid legal issues, expand their business (an approximately $1 trillion market), and remove inequality among people with various levels of ability. Tesco invested £35,000 to make its website accessible and generated £1.5 million in a year from online sales to disabled people in Europe.

Broadly, disabilities can be grouped into sensory (vision, hearing), physical (hand movement, paralysis, etc.), and cognitive (dyslexia, slow processing of information, etc.) categories.

For people with vision problems such as low vision or blindness, there are assistive screen-reading tools such as JAWS and NVDA. These tools read the web content aloud; the end user listens and, with the help of the keyboard, interacts with the website. The Tab key, arrow keys, Enter, Shift, Ctrl, Alt, and the space bar are the most-used keys for navigation.

Testing a website for vision accessibility is a two-step process. Step 1: use free tools to which you provide the URL of your website and which generate a report showing how accessible the site is; then take appropriate action. Step 2: manual test engineers imitate blind users, listen to the web content for correctness, and test the navigation and functionality using the keyboard.

Repeatedly listening to the content and then verifying it against what appears on the screen is a monotonous, tiring task for manual test engineers, and the resulting disorientation leaves room for missing vision accessibility issues during regression testing. Keeping this in mind, for the past few days my colleague and I have been trying to automate accessibility testing.

The foremost challenge was that we could not find anything on the web indicating that anyone had tried to automate screen readers. The next biggest challenge was how to verify that the screen readers were reading the content correctly. Another technical challenge was that screen-reader tools were not accepting the keyboard-shortcut inputs sent by various paid/open-source tools such as QTP, SilkTest, TestComplete, Selenium, AutoIt, the Robot API, etc. These keyboard shortcuts are what allow disabled people to navigate and use the functionality of a web page.

The solution: we created an object repository containing all the objects, their IDs, the specific attribute that JAWS reads, and the expected content. We were confident that if the right content is set in the right property of an object, JAWS will read it correctly. We also found during our R&D that JAWS reads ARIA labels first. So in cases where development is at an early stage, I would recommend ensuring that the development team enters content in the ARIA labels associated with each object; this content is read by JAWS when the user moves focus onto the object.

What is ARIA? WAI-ARIA (http://en.wikipedia.org/wiki/WAI-ARIA) describes how to add semantics and other metadata to HTML content in order to make user interface controls and dynamic content more accessible.

In our case, the test website was already developed, so instead of asking the development team to add ARIA labels for all objects, we collected all the objects and their content from the specific attributes that JAWS reads. That is how we built our object repository and automated around the screen reader. For navigation, we currently use the Tab key, arrow keys, Enter, and the space bar. With these keys we are able to check all objects, verify the content JAWS will read, and exercise the functionality (a minimal sketch of the approach follows).
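
Our harness is tool-specific, but the repository idea translates to any driver. Below is a minimal, hedged sketch in Selenium's Python bindings: the URL, element IDs, and expected strings are hypothetical, and sending Tab to the page to walk focus order is assumed to behave as it does in mainstream browsers.

```python
# Object repository: element id -> (attribute JAWS reads, expected text).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

OBJECT_REPOSITORY = {
    "search-box": ("aria-label", "Search the site"),
    "submit-btn": ("aria-label", "Submit search"),
    "logo-img":   ("alt",        "Example Corp home"),
}

driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com")   # placeholder URL
    # If the right content sits in the attribute JAWS reads,
    # JAWS will announce it correctly.
    for object_id, (attribute, expected) in OBJECT_REPOSITORY.items():
        actual = driver.find_element(By.ID, object_id).get_attribute(attribute)
        assert actual == expected, f"{object_id}: {actual!r} != {expected!r}"
    # Walk the page with Tab, as a keyboard-only user would,
    # and record the focus order for review.
    body = driver.find_element(By.TAG_NAME, "body")
    focus_order = []
    for _ in OBJECT_REPOSITORY:
        body.send_keys(Keys.TAB)
        focus_order.append(driver.switch_to.active_element.get_attribute("id"))
    print("focus order:", focus_order)
finally:
    driver.quit()
```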

I look forward to hearing from you if you have suggestions or queries.

Empower your team, build a Responsibility Matrix

A group becomes a team when each member is sure enough of himself and his contribution to praise the skills of the others.


Have you ever faced a situation where in your absence, or that of a critical person, other team members are in a quandary regarding taking decisions, executing tasks or plans or sending reports?

Have you found yourself in a position where team members are calling you, as the key decision-making rests with you?

This is typically a problem faced by people who are managing multiple projects and are key to specific projects. Their team members normally have issues when they are unavailable.  This problem is compounded in the case of geographically distributed teams, or those working in different time zones.

If you have faced such a situation, this blog may be useful for you.

I believe the solution to this challenge lies in engaging with all your team members to create a ‘Responsibilities Matrix’. The idea here is to identify and list all the important tasks to be performed by the team.  Following this, the team needs to identify the primary and secondary owners of these tasks.  These responsibilities must be rotated, wherever possible, among other team members. This will help create backups and reduce dependencies on a few individuals.

I am using this responsibility matrix in all of my projects and a sample of it is attached to this blog.

The idea behind the matrix is bigger than simply adding tasks and the names of the engineers in charge. The aim is to develop a sense of ownership and team spirit. It is to empower the team, improve transparency and communication and lower the dependency on specific people.

How we did it: we sat together, identified the various tasks, and grouped them under a 'Major task'. The teams then picked the owners of each Major task and recorded them in the Matrix.

The primary owners were assigned the job of ensuring that the tasks were completed as planned. The secondary owners were directed to play the role of the primary owners in their absence. The individual contributors, the team members, were asked to complete the tasks. In case of a rotation, we advised them to make sure that the primary and secondary owners of a Major task were not moved at the same time.

I hope this gives you an idea about the ‘Responsibility Matrix’ and its benefits. I look forward to hearing your views on it.

[Attachment: sample responsibility matrix]

Compressed workweeks: A new strategy for workforce retention

Compressed Workweek

Increase Productivity – Reduce working days

How wonderful it would be if you had to work for only four days and get three days off—starting from Friday and ending on Sunday. Interesting? Keep reading.

I recently read a few articles about organizations experimenting with the idea of giving people Fridays off, provided they had completed their weekly quota of hours!

They referred to this as 'Compressed Workweeks'. Some other companies call it an alternative workweek schedule. What this really means is that if people's weekly quota is 42 hours, they can work 10.5 hours a day for four days and avail themselves of the compressed workweek benefit.

Another option is for people to work nine hours a day for four days and then work three hours each on Friday and Sunday.

I recently heard that companies such as IBM, Qualcomm, PwC India, Dell and some others were experimenting with the Compressed Workweeks concept.

It is my belief that flexibility in working hours can help employees manage their hectic schedules as well as balance their professional and personal lives.

The concept can, for instance, work for professionals who want to enjoy long weekends. Naturally, they will have to put in extra effort on weekdays and deliver their assignments on time.

While on the face of it, the model appears interesting, I am not sure whether it can work in the software industry. For one, it requires better visibility of the work to be done in every week/month and a clear division between the fixed working days and the optional working weekdays.

Another bottleneck is that the software industry is driven by output rather than by hours spent in the office. Even Agile practice suggests that if a person is unable to deliver the user stories assigned to him, his velocity drops to zero! Also, after working for six to seven hours, people's productivity typically recedes and rework creeps in.

Though I am sure the compressed workweek idea will help engage employees, keep their morale high, and retain them, there are several logistical issues to consider. Keeping track of the projects, monitoring their progress, and managing them will need additional effort.

The Compressed workweeks model can be truly successful where the work volume per hour is defined, as with call centers and software maintenance projects.

This is of course my view. I am keen to know what you think about the emerging trend. Do you think it will be a hit with the software industry, especially outsourcing service providers?

Do write in and share your views.

Planning to Outsource Testing: OSTC or OSDC

Outsourcing testing services to outside vendors (we call them extended partners these days) is not a new phenomenon. There are many offshore software testing companies (OSTCs) focusing exclusively on testing services, such as AppLabs (acquired by CSC) and NeuSoft. Sensing the potential for exponential growth, major offshore software development companies (OSDCs) such as Impetus, GlobalLogic, Symphony, and Persistent are also offering testing services.

It is understood that outsourcing testing will speed up testing, improve quality, reduce project cost, and let you focus on your business. But when it comes to selecting an outsourcing partner, it becomes difficult to choose between an OSTC and an OSDC offering testing services.

An OSDC should be preferred if you envision outsourcing complete or partial software development in the future. There are instances where customers have outsourced the development and testing contracts to an OSDC and an OSTC respectively. Usually this happens when a customer has already outsourced testing to an OSTC and is now planning to outsource the product development; vice versa, if the OSDC does not have the capability to provide testing, the customer involves an OSTC for testing services. Though the OSDC-OSTC collaboration model can also succeed, in such cases ownership and coordination become a big challenge.

The only deterrent to going with an OSDC is that they are more focused on software development; hence you need to check their references, testing infrastructure, talent pool, ramp-up capacity, testing and tools expertise, and attrition of testing talent. It is also good to check whether they have a testing center of excellence.

OSTCs usually keep watch on current software testing trends and acquire the required talent. They invest a lot in research and have the right mindset, as well as the systems, to carry out quality testing. An OSDC has a testing center of excellence to serve clients, while in the case of an OSTC, the entire company itself is a testing center of excellence.

Do you think an OSDC should be preferred over an OSTC because, if required, it also has software development capabilities? I look forward to hearing your opinions.