The Orthogonal Array testing technique is a statistical way of reducing test cases by removing redundant test conditions. A software system involves a large number of interactions and integrations, and it is very difficult to find the erring interaction or integration. It is observed that interactions and integrations are a major source of bugs.
The bugs in the software are usually caused by a specific condition, independent of other conditions in the system. A condition is a combination of variables, configurations, and logical linking of variables.
Think about testing a simple system that has four variables, where each variable can take 3 values. We need 81 cases (3x3x3x3) to test every combination in this system. If we use the Orthogonal Array technique, we need only 9 cases, in which all the pair-wise combinations are validated. This will unearth the combination that is causing the failure in the system.
In the above case, the Orthogonal Array technique has reduced the test conditions by about 89%, i.e. with 11% of the existing test cases all the variable pairs are validated. Interestingly, it will provide 100% coverage of pair combinations, 33% of three-way combinations, and 11% of four-way combinations. To reduce the test conditions further, the orthogonal technique can also be applied to three-way and four-way combinations.
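The 9-case design described above is the standard L9(3^4) orthogonal array. A minimal sketch, using placeholder level codes 0-2 for the four variables, that lists the array and verifies that every pair of variables is exercised in all 9 level combinations:

```python
from itertools import combinations, product

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
# The level codes 0..2 are illustrative placeholders for real inputs.
L9 = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (0, 2, 2, 2),
    (1, 0, 1, 2),
    (1, 1, 2, 0),
    (1, 2, 0, 1),
    (2, 0, 2, 1),
    (2, 1, 0, 2),
    (2, 2, 1, 0),
]

def covers_all_pairs(array, levels=3):
    """Check that every pair of columns exercises all level combinations."""
    cols = len(array[0])
    for c1, c2 in combinations(range(cols), 2):
        seen = {(row[c1], row[c2]) for row in array}
        if seen != set(product(range(levels), repeat=2)):
            return False
    return True

print(len(L9))                # 9 test cases instead of 3**4 = 81
print(covers_all_pairs(L9))   # True: full pair-wise coverage
```

Each row is one test case; running only these 9 cases still validates all 54 variable pairs, which is where the 89% reduction comes from.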
Use of the Orthogonal Array technique ensures:
All the pair-wise combinations of the selected variables are validated.
An effective test set with fewer test cases.
Lower cost of testing, as test cycles are shorter.
For one of our clients, a leader in video communications, the Orthogonal Array technique reduced the test cases from 8171 to 2614, i.e. by 68%. Test cycle effort was reduced from 164 to 55 person days.
Defect Density helps to:
take a decision on whether a product is ready for release/shipping.
predict the count of remaining bugs.
estimate the testing and rework effort due to bugs.
identify the areas having more bugs.
determine whether enough testing has been done.
Defect Density and Formula:
Defect Density is calculated by dividing the valid bugs identified in a specific duration by the size of the release.
The formula is simple: Defect Density = Defect Count / Size of the release
The size of the release can be measured in terms of Lines of Code (LoC), which is very popular. However, it is better to size projects using Function Points, Use Case Points, size of iterations/sprints, etc.
It is based on the concept that, if you have historical data, you can predict the count of bugs in a release. Then, based on the difference between the expected bug count and the actual bug count, one can take a decision.
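The formula above can be sketched directly; the figures here are hypothetical, chosen only to show the arithmetic for a KLoC-sized release:

```python
def defect_density(defect_count, size_kloc):
    """Defect Density = valid defects found / release size (here in KLoC)."""
    return defect_count / size_kloc

# Hypothetical release: 240 valid bugs found in a 60 KLoC release
print(defect_density(240, 60))  # 4.0 defects per KLoC
```

The same function works unchanged if the size is expressed in Function Points or Use Case Points instead; only the unit of the result changes.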
This example calculates the Defect Density using KLoC (thousands of Lines of Code).
The Defect Density in the above product is in the range of 3 to 6. Hence, for release 9, you may want to spend more time on testing, as the defect density has not yet approached the expected density. In other words, there is still scope for finding 430 to 1180 bugs.
Once you close the testing of Release 9.0, you will update the above table, and it is possible that the expected Defect Density dips or surges further.
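The remaining-bug estimate comes from multiplying the historical density band by the release size and subtracting the bugs already found. A minimal sketch, where the release size and bug count are assumed figures for illustration (not taken from the table above):

```python
def remaining_bug_range(size_kloc, bugs_found, dd_low=3, dd_high=6):
    """Remaining-bug estimate from a historical defect-density band
    of dd_low..dd_high defects per KLoC."""
    low = dd_low * size_kloc - bugs_found
    high = dd_high * size_kloc - bugs_found
    return low, high

# Assumed figures: a 250 KLoC release with 320 valid bugs logged so far
print(remaining_bug_range(250, 320))  # (430, 1180)
```

A negative lower bound would simply mean the release has already exceeded the low end of the expected density.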
The following graph is an example of Defect Density using size (Function Points / Use Case Points).