We live in a time when it is easy to measure things. Websites measure visits from users all over the world; YouTube videos measure views and likes; mobile apps measure crash statistics. So it makes sense that software managers would want to measure quality activities.
Unfortunately, we don’t always have clear language to describe what is being measured. You may have heard a manager talk about getting to “100% Test Coverage”. But what do they mean by this statement? Here are a few things this could mean, and one thing that this cannot mean.
Test Coverage can actually mean Code Coverage
Sometimes when people refer to test coverage, what they really mean is code coverage. Code coverage is the percentage of lines of code that are executed when automated tests run against the product or feature. Areas that are not touched by automated tests could indicate gaps in testing. Unit tests are a great way to increase code coverage, because they are designed to test the code directly. And there are a number of tools available to measure code coverage, such as DotCover, Coverlet, Cobertura, and SonarQube.
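As a minimal sketch of the idea (using Python’s pytest and coverage.py rather than the tools above, and a hypothetical discount function invented for illustration), a unit test executes some lines, and a coverage report flags the lines it never reached:

```python
# discount.py -- a hypothetical function with two branches
def discount(price: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays full price."""
    if is_member:
        return round(price * 0.9, 2)
    return price


# test_discount.py -- a unit test that exercises only the member branch
def test_member_discount():
    assert discount(100.0, is_member=True) == 90.0
```

Running `coverage run -m pytest` followed by `coverage report` would show the `return price` line as unexecuted, pointing to a test that still needs to be written.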
Test Coverage cannot mean the percentage of all possible tests
A common mistake made by those who have never tested software is thinking that there are a finite number of things that can be tested in an application, so it is possible to have 100% test coverage. This is simply impossible. There is no limit to the number of tests that could be created even for the simplest applications. Consider a simple web form with five fields, none of which are required. A user could fill in just one field (five different possibilities), just two fields (ten different possibilities), three fields (ten different possibilities), four fields (five different possibilities), or all five fields (one possibility). That’s 31 test cases, and that isn’t even considering negative testing. What if one field has an error and two fields are correct? What if three fields have an error and one field is correct? Once field values and error states are factored in, the number of test cases quickly climbs into the millions. So no team will ever reach 100% test coverage as defined here.
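A quick back-of-the-envelope calculation bears this out (a Python sketch; the “20 representative values per field” figure is an invented illustration):

```python
from math import comb

FIELDS = 5

# Positive cases only: which non-empty subset of fields is filled in?
print(sum(comb(FIELDS, k) for k in range(1, FIELDS + 1)))  # 31

# Bare-minimum negative testing: each field is empty, valid, or invalid.
print(3 ** FIELDS - 1)  # 242 (3^5 states, minus the all-empty form)

# With, say, 20 representative values per field (plus empty), the space explodes:
print(21 ** FIELDS)  # 4,084,101 combinations from one five-field form
```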
Test Coverage can refer to Automation Coverage
It’s possible to use the term test coverage to refer to the percent of manual test cases that have been automated. This is useful for showing the status of an automation project. If you have 500 documented manual test cases, and you have automated 200 of them, your automation coverage is 40%. However, this meaning is less useful if you don’t already have a documented suite of manual tests. If only a handful of manual tests have been documented, a team could claim to have 100% automation coverage after automating those tests. Meanwhile, hundreds of other test cases could be neglected.
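The arithmetic is simple, but the denominator is the whole metric. A small illustrative sketch (the counts are invented):

```python
def automation_coverage(automated: int, documented: int) -> str:
    """Percent of documented manual test cases that have been automated."""
    return f"{automated / documented:.0%}"

print(automation_coverage(200, 500))  # 40% -- a meaningful status report
print(automation_coverage(5, 5))      # 100% -- misleading if only 5 cases
                                      # were ever documented
```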
Test Coverage can refer to Feature Coverage
Another possible meaning of test coverage is a measurement of which application features have existing tests. If a product has ten features, and the tester executes tests for five of those features, it would be possible to say that 50% of the features have been tested. In this usage, it’s also important to distinguish what is meant by “tested”. Tested could mean that exploratory testing has been conducted and documented, but no regression test suite has been created. Or it could mean that manual regression tests have been created, or that test automation has been written. Also keep in mind that it’s not possible to say that a feature has 100% coverage, for the same reason described in the previous section.
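One hypothetical way to make the definition explicit is to track each feature’s testing status and report a separate number for each definition of “tested” (the feature names and statuses below are invented):

```python
# Invented feature-to-status map for illustration.
features = {
    "login":    {"explored": True,  "manual_regression": True,  "automated": True},
    "search":   {"explored": True,  "manual_regression": True,  "automated": False},
    "checkout": {"explored": True,  "manual_regression": False, "automated": False},
    "reports":  {"explored": False, "manual_regression": False, "automated": False},
}

# "Feature coverage" yields a different number for each definition of "tested".
for definition in ("explored", "manual_regression", "automated"):
    covered = sum(1 for status in features.values() if status[definition])
    print(f"{definition}: {covered / len(features):.0%}")
# explored: 75%, manual_regression: 50%, automated: 25%
```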
What can Test Coverage tell us?
Test coverage can tell us how many lines of code are executed by tests, how much testing has been done on a feature, how many regression tests have been created, or how many automated tests have been written. But it is only useful if all members of the team or organization have agreed upon what the term means. If your team is asked to start measuring “Test Coverage”, share this blog post with the requester and ask them to clarify what information they are looking for.