Do you publish automated test execution results in a Test Management System?

Does your team use open source tools to automate tests? And does your testing team publish the automated test execution results in a Test Management System (TMS)? If not, you may want to recommend this post to them.

Stakeholders want to know from testers how complete the testing is before deciding to promote code further along the delivery chain. Testing teams usually use a TMS as a centralized repository for test artifacts such as test cases (mapped to requirements/user stories), test plans, and execution results, and to track progress against them. If your test automation results are not published in the TMS, your team will never get a real-time snapshot of test execution and progress. With the results updated in the TMS, you also build up a history of test execution that supports a variety of decisions.

Teams using open source tools are often not aware that they can push execution results to the TMS through an adapter, or they simply have not thought of it. Even when the automation team knows it is possible, they may not want to update the results because their test scripts are not yet stable and they don't want to publish wrong data. 🙂 Fair point.

My team has developed adapters for HP ALM, Rally, and TestLink. After each automated test execution, these adapters update the results in the relevant TMS. You could build such adapters yourself: ALM provides REST APIs that you can use to publish execution results, those on earlier versions of QC can use the QC OTA API, and Rally also provides an API for the same purpose.
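If it helps, here is a minimal sketch of what such an adapter could look like for TestLink, which exposes an XML-RPC API with a tl.reportTCResult method. The server URL, API key, test case ID, and test plan/build values below are placeholders, and the API path can differ between TestLink versions; treat this as a starting point, not a drop-in implementation.

```python
import xmlrpc.client

# Placeholder server URL and API key -- replace with your own.
# Note: the XML-RPC path can differ between TestLink versions.
TESTLINK_URL = "https://testlink.example.com/lib/api/xmlrpc/v1/xmlrpc.php"
DEV_KEY = "your-testlink-api-key"

proxy = xmlrpc.client.ServerProxy(TESTLINK_URL)

def publish_result(testcase_ext_id, testplan_id, build_name, passed, notes=""):
    """Report one automated test result to TestLink via tl.reportTCResult."""
    return proxy.tl.reportTCResult({
        "devKey": DEV_KEY,
        "testcaseexternalid": testcase_ext_id,  # e.g. "PROJ-123"
        "testplanid": testplan_id,
        "buildname": build_name,
        "status": "p" if passed else "f",  # 'p' = pass, 'f' = fail, 'b' = blocked
        "notes": notes,
    })

# Hook this into your test runner so every finished test reports itself, e.g.:
# publish_result("PROJ-123", 42, "nightly-build-101", passed=True)
```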

In case you face issues building such adapters, do comment and I will assist.

Metrics that measure UAT Readiness, Cost of Quality, etc.

A small debate on bugs over lunch turned into a longer discussion: how do we ensure that the team is delivering the right quality, the business is getting value, the cost of quality is gradually optimized, and test coverage is giving us confidence? We all agreed that checkpoints need to be added. Each checkpoint produces useful metrics that can help identify symptoms of non-conformance in quality, budget, scope, or time. Here are a few metrics I would prefer to use, depending upon the development model, the availability of data, and the specific information needed. I believe one should not create a metric that neither its author nor anyone else can use in decision making. Please let me know if you would like to add more.

=================================
Pre-UAT/Production Metrics
=================================

Test coverage
Unit test coverage
Functional/non-functional Test coverage
Requirement/Acceptance Criteria Traceability
Executed tests vs. pending tests (see the coverage example below)
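As a quick illustration with made-up numbers: if 40 of 50 user stories have at least one mapped test, requirement traceability is 40/50 = 80%; and if 160 of 200 planned tests have been executed, execution progress is 160/200 = 80%, with 40 tests pending.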

Defect Analysis
Defect Density, feature-wise (see the example after this list)
Defect Density per feature/requirement size
Defect Escape Rate
Bug Find/Close Trends
Bug Reopen trends
Bug Aging
User Stories Stability
Test Case Effectiveness
Browser-wise Bug Distribution
Root Cause Analysis
Component-wise Stability
Productivity Metrics
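An illustrative example (the numbers are invented): a feature of 8 story points with 12 reported defects has a defect density of 12/8 = 1.5 defects per story point; and if 5 of the release's 50 total defects were found only after the test phase, the defect escape rate is 5/50 = 10%.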

Test Automation
Automation Execution Results
Automation Execution, Build-wise
Automation Execution, Sprint-wise
Automation Progress, Week-wise
Automation Script Maintenance

UAT Readiness
Status of stories/requirements
Tests execution coverage and status
Priority 1 and 2 defect status
Requirement/component/feature stability
Regression Status

Productivity
Number of deployment re-runs in test environment
Bug Reopen trend

=================================
Post-Production Metrics
=================================

Customer Satisfaction
Production Defect slippage
Number of product/release rollbacks (Quality Issues)
Product enhancement requests after each production release (Requirement Completeness)
Maintenance fixes per year/release (Technical debt)
Number of emails/calls to customer service (Usability Issues)
Number of training sessions for end users/customers (Usability Issues)

Defect Analysis
Production Defect slippage per production release
Production Defect slippage per requirement size
UAT Defect slippage per UAT release
UAT Defect slippage vs requirement size
Root cause Analysis (requirement/code/test/configuration)
Pre-UAT/UAT/Production bug ratios
Defects found in Production in a year (by all parties)

Defect Cost
Business loss per defect
Costs of work-around
Brand image impact
Cost of fighting/settling legal cases
Ratio of maintenance to enhancements

Product Reliability
Availability: actual vs. expected
Mean time between failures (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF/MTTR; see the example after this list)
Production Defect slippage
Number of product/release rollbacks (Quality Issues)
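A quick worked example with illustrative numbers: if a product fails on average every 900 hours of operation (MTBF = 900 h) and takes on average 3 hours to restore (MTTR = 3 h), the reliability ratio is 900/3 = 300, and availability comes out as MTBF/(MTBF + MTTR) = 900/903 ≈ 99.7%.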

Deployment Quality
Number of Production deployment re-runs (Deployment issues)
Ratio of failed deployments to total deployment attempts

Turnaround Time Metrics
Turnaround time for production issues (severity-wise)
Turnaround time for enhancement development (major/minor)
Planned vs. actual turnaround time for production issues (major/minor)

Cost of Quality
Cost of preparing Test strategy/plans
Cost of test development and management, defect tracking
Cost of test execution and defect reporting
Cost of static validations such as reviews, walkthroughs, and inspections
Costs of analysis, debugging, fixing, and retesting
Costs of tools
Cost of training
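One way to make these line items actionable is the classic cost-of-quality split into prevention costs (training, strategy and planning), appraisal costs (reviews, test execution), and failure costs (debugging, fixing, retesting, production incidents). With invented numbers: $10k on prevention, $30k on appraisal, and $20k on failure gives a total cost of quality of $60k, of which a third is failure cost that better prevention and appraisal should gradually drive down.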