For metrics to be useful, they must be established and measured with the big picture in mind: you need to see the forest, not just the trees. Many metrics are outright useless on their own, and they can do real damage if you base decisions on them in isolation. In an article for DZone, Gilad David Maayan gives examples of seven useless metrics.
- Number of test cases executed: There is a potential infinity of test cases you could run, but most of them do not matter in practice. Even initially good test cases can linger and lose their value as the software changes. Be careful that a high count of executed test cases does not simply reflect an abundance of vestigial, redundant tests.
- Number of bugs found per tester: Comparing testers to each other can breed unhealthy, hostile competition. And if testers are assigned different features, the comparison is unfair to begin with: one person may be working on refined software while another is wrestling with a buggy mess.
- Percentage pass rate: This metric is simply too easy to game: pad the suite with trivial tests that always pass, and the pass rate climbs regardless of software quality.
- Unit test code coverage: Coverage only hints at whether individual pieces of the software work; it cannot tell you whether the software functions well as a whole. Furthermore, it says nothing about the quality of the unit tests themselves, so it too can be gamed.
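To make the coverage-gaming point concrete, here is a hypothetical sketch (the function and test names are illustrative, not from the article): two tests that produce identical 100% line coverage of a small function, only one of which would actually catch a bug.

```python
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.5 for 50% off)."""
    return price * (1 - rate)

def test_gamed_coverage():
    # Executes every line of apply_discount, so a coverage tool
    # reports 100% -- but with no assertion, even a badly broken
    # implementation would still "pass".
    apply_discount(200, 0.5)

def test_meaningful():
    # Same coverage figure, but this test verifies behavior
    # and would fail if the discount logic were wrong.
    assert apply_discount(200, 0.5) == 100.0
```

Both tests move the coverage number identically, which is exactly why the number alone says nothing about test quality.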
- Percentage of automation: Automation is great where it is practical, but where it is not, it can actually slow things down or create unfounded optimism. There must be a logic behind what you automate, and percentage of automation flirts with being a vanity metric.
- Cost per defect: Measuring cost per defect holds value, but it is subjective. It is important to distinguish minor cosmetic bugs from serious bugs and to prioritize accordingly.
- Defect density: Defect density is also subjective, depending on how people choose to classify bugs. What one person logs as one big bug, another might log as 10 little bugs. That muddies the message.
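The classification problem above can be shown with simple arithmetic. In this hypothetical illustration (the module size and bug counts are invented), the same underlying problem produces a tenfold swing in defect density depending on how it is logged:

```python
def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

# The same underlying issue in a 20,000-line module:
# one tester logs it as a single big bug,
# another logs it as 10 small bugs.
density_as_one_bug = defect_density(1, 20_000)    # 0.05 defects/KLOC
density_as_ten_bugs = defect_density(10, 20_000)  # 0.5 defects/KLOC
```

Nothing about the code changed between the two measurements; only the counting convention did, which is why the metric cannot be compared across teams without an agreed classification scheme.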
Maayan concludes that there are three major challenges associated with testing right now: (1) finding a way to improve testing quality and speed at the same time, (2) preventing untested code changes from making it into production, and (3) collecting quality metrics for analysis in a single place. You have to employ metrics in tactical combinations if you are to have a good shot at confronting these issues.
You can view the original article here: https://dzone.com/articles/7-useless-test-metrics