Metrics that measure UAT Readiness, Cost of Quality, etc.

A small debate on bugs over lunch turned into a longer discussion on how to ensure that the team is delivering the right quality, the business is getting value, the cost of quality is gradually optimized, and test coverage is providing confidence. All of us agreed that there is a need to add checkpoints. Each checkpoint will produce useful metrics, which can help in identifying symptoms of non-conformance to quality (budget, scope, time). Here are a few metrics I prefer to use, depending on the development model, the availability of data, and the need for specific information. I believe one should not create a metric that they or others cannot use in decision making. Please let me know if you would like to add more.

Pre-UAT/Production Metrics

Test coverage
Unit test coverage
Functional/non-functional Test coverage
Requirement /Acceptance Criteria Traceability
Executed tests vs pending tests

Defect Analysis
Defect Density Feature-wise
Defect Density feature/requirement Size-wise
Defect Escape Rate
Bug Find Close Trends
Bug Reopen trends
Bug Aging
User Stories Stability
Test Case Effectiveness
Browser-wise Bug Distribution
Root Cause Analysis
Component-wise Stability
Productivity Metrics
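A couple of the defect metrics above reduce to simple formulas. Here is a minimal sketch of two of them; the function names, the choice of story points as the size unit, and the sample numbers are my illustrative assumptions, not a standard:

```python
# Sketch of two common defect-metric formulas (definitions are assumptions;
# adjust the size unit and defect counts to your own process).

def defect_density(defects: int, size: float) -> float:
    """Defects per unit of size (e.g. KLOC or story points)."""
    return defects / size

def defect_escape_rate(found_after_release: int, found_before_release: int) -> float:
    """Share of all known defects that escaped pre-release testing."""
    total = found_after_release + found_before_release
    return found_after_release / total if total else 0.0

# Example: 12 defects in a 4-story-point feature;
# 3 of 60 total defects escaped to UAT/production.
print(defect_density(12, 4))       # defects per story point
print(defect_escape_rate(3, 57))   # escaped fraction of all defects
```

A rising escape rate across releases is the kind of symptom of non-conformance these checkpoints are meant to surface early.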

Test Automation
Automation Execution Results
Automation Execution Build-wise
Automation Execution Sprint-wise
Automation Progress Week-wise
Automation Script Maintenance

UAT Readiness
Status of Stories/Requirements
Tests execution coverage and status
Priority 1 and 2 defect status
Requirement/component/feature stability
Regression Status
Number of deployment re-runs in test environment
Bug Reopen trend
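These readiness checkpoints can be operationalized as a simple entry gate that blocks UAT until every criterion passes. A minimal sketch, where the thresholds, parameter names, and check labels are all illustrative assumptions to be tuned per project:

```python
# Hypothetical UAT entry gate; all thresholds below are assumptions, not standards.
def uat_ready(stories_done_pct, tests_executed_pct, open_p1_p2, regression_pass_pct):
    checks = {
        "stories complete":   stories_done_pct >= 95,
        "tests executed":     tests_executed_pct >= 100,
        "no open P1/P2":      open_p1_p2 == 0,
        "regression passing": regression_pass_pct >= 98,
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (not failed, failed)

ready, blockers = uat_ready(97, 100, 2, 99)
print(ready, blockers)   # two open P1/P2 defects block UAT entry here
```

The point of returning the failed check names, rather than a bare boolean, is that each blocker maps back to one of the metrics listed above.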

Post-Production Metrics

Customer Satisfaction
Production Defect slippage
Number of product/release rollbacks (Quality Issues)
Product enhancement requests after each production release (Requirement Completeness)
Maintenance fixes per year/release (Technical debt)
Number of emails/calls to customer service (Usability Issues)
Number of training sessions for end users/customers (Usability Issues)

Defect Analysis
Production Defect slippage per production release
Production Defect slippage per requirement size
UAT Defect slippage per UAT release
UAT Defect slippage vs requirement size
Root cause Analysis (requirement/code/test/configuration)
Pre-UAT/UAT/Production bug ratios
Total defects found in production per year

Defect Cost
Business loss per defect
Costs of work-around
Brand image impact
Cost to fight/pay legal cases
Ratio of maintenance to enhancements

Product Reliability
Availability Actual vs Expected
Mean time between failure (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Production Defect slippage
Number of product/release rollbacks (Quality Issues)
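MTBF, MTTR, availability, and the reliability ratio are related by standard formulas; a short sketch, where the sample hour values are made up for illustration:

```python
# Standard reliability formulas: availability = MTBF / (MTBF + MTTR),
# reliability ratio = MTBF / MTTR. Sample inputs below are illustrative.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the product is up, given mean time between failures
    and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability_ratio(mtbf_hours: float, mttr_hours: float) -> float:
    """MTBF divided by MTTR; higher means failures are rare and fixed fast."""
    return mtbf_hours / mttr_hours

# Example: a failure every 720 hours on average, 2 hours average repair time.
print(round(availability(720, 2) * 100, 3))   # availability as a percentage
print(reliability_ratio(720, 2))              # MTBF/MTTR
```

Comparing the computed availability against the contractual target gives the "Availability Actual vs Expected" line item directly.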

Deployment Quality
Number of Production deployment re-runs (Deployment issues)
Ratio of failed deployments to total deployment attempts

Turnaround Time Metrics
Turnaround time for production issues (severity wise)
Turnaround time for enhancement development (Major/minor)
Planned/Actual Turnaround time for production issues (Major/minor)

Cost of Quality
Cost of preparing Test strategy/plans
Cost of test development and management, defect tracking
Cost of test execution and defect reporting
Cost of static validations like reviews, walkthrough, and inspections
Costs of analysis, debugging and fixing, retesting
Costs of tools
Cost of training
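These line items roll up into the classic cost-of-quality split of prevention, appraisal, and failure costs. A toy aggregation; the category assignments and every amount are made-up assumptions for illustration only:

```python
# Toy cost-of-quality rollup; categories and amounts are illustrative assumptions.
costs = {
    "prevention": {"test strategy/plans": 5000, "training": 3000},
    "appraisal":  {"test development/management": 12000, "test execution": 9000,
                   "reviews/walkthroughs/inspections": 4000, "tools": 6000},
    "failure":    {"analysis/debug/fix/retest": 15000},
}

# Sum each category, then the grand total.
totals = {category: sum(items.values()) for category, items in costs.items()}
grand_total = sum(totals.values())

print(totals)
print(grand_total)   # total cost of quality for the period
```

Tracking the failure share of this total over releases shows whether the cost of quality is, as the opening discussion put it, gradually being optimized.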

How to Save Testing Costs through Test Reusability

Reduce the Investment Planned for Testing through Test Reusability

In the software test life cycle (STLC), test case development is among the most critical phases and consumes a considerable portion of the time allocated for testing. While working on various projects, my goal was to write an optimized number of effective test cases. During each product/project I tested, I often felt that I had executed similar tests in earlier assignments. I am sure you have also observed that you sometimes write test cases you have already written for another project/product.

Let me explain with the example of Login: whether you are in the finance, healthcare, or CRM domain, a test engineer has to write almost the same cases, with a few additions or deletions. Also, as a supervisor, you might have observed that on many occasions your team missed potential test cases; the reason could be a time crunch, skill, experience, and so on. Test case effectiveness usually depends on an individual's skills. Supervisors face a bigger challenge in estimating the number of test cases to be written, as it provides the basis for test execution time. They also face the challenge of ensuring their team is not missing potential test cases, and is spending effort on writing quality test cases instead of redundant and exhaustive ones. In my experience, my teams often found many potential test cases during test execution and added them to the test suite later.

How about an approach where a team:

  • Can estimate testing effort close to reality
  • Can save investment by shortening the testing life cycle
  • Can choose test cases from an organized test case repository instead of writing them from scratch
  • Can verify the completeness of its tests against a predefined set of cases
  • Can shorten the test case review cycle
  • Can ensure the completeness of the test suite
  • Can deliver a quality product by ensuring no test cases are missing

Now, how do we implement test reusability and ensure the completeness of test cases? I am writing on these topics too and will publish them soon.
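In the meantime, the repository idea can be sketched as a small catalog of baseline test cases keyed by feature type, seeded with cases like the Login example above. Everything here, the feature keys and the case wording alike, is an illustrative assumption, not a prescribed catalog:

```python
# Minimal sketch of a reusable test-case repository (contents are illustrative).
# A team starts a new suite from the shared baseline instead of a blank page,
# then adapts, extends, or prunes the cases for the project at hand.
REPOSITORY = {
    "login": [
        "valid username and password signs the user in",
        "invalid password shows an error without revealing which field is wrong",
        "account locks out after repeated failed attempts",
        "password field masks typed input",
    ],
    "search": [
        "empty query is handled gracefully",
        "results are paginated beyond the page size",
    ],
}

def baseline_cases(feature: str) -> list:
    """Return a copy of the baseline cases for a feature (empty if unknown)."""
    return list(REPOSITORY.get(feature, []))

suite = baseline_cases("login")
print(len(suite))   # baseline cases available before writing anything new
```

Checking a hand-written suite against the baseline for its feature type also gives a supervisor a quick completeness review, which is exactly the gap in the bulleted list above.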