What Does a Bug Count Mean?

As software companies grow, they often find it necessary to start measuring things. When a company has just a couple of dozen people, it’s pretty easy to see whether a software tester is performing well, or whether they are struggling. But when a company grows to several hundred or several thousand people, it becomes more difficult to keep track of how everyone is doing.

At this point, the management of the growing company will conclude that they need metrics to gauge how a software team is performing. And often, someone will suggest that teams start counting their bugs. But what does a bug count mean? This post will discuss what a bug count will and will not tell you.

A Bug Count By Itself Tells You: Absolutely Nothing
Saying that a team found fifty bugs this month means nothing. It makes as much sense as trying to determine whether a book is good by counting how many pages it has. “Was it a good book?” “Well, I haven’t read it, but it has five hundred pages!!!”

Comparing Bug Counts Between Teams Tells You: Absolutely Nothing
The number of bugs a team finds depends on many things. Each product tested is different: one team’s product might be very simple or very mature, while another team’s product might be quite complex or brand new. Further, each team might log bugs differently. One team might have a practice of verbally reporting bugs to the developer if the developer is still working on the feature, while another might log every single bug they find, down to a single misplaced pixel. Finally, one person’s single bug might be three bugs to someone else. Consider two teams that each test their product on both iOS and Android: one team might log a single bug for a flaw found on both operating systems, while the other might log two bugs for the same flaw, one for each operating system.

Comparing Bug Counts by the Same Team Over Time Tells You: Maybe Something
If you are tracking the number of bugs a team finds each month, and there is a big change, that might mean something. For example, an increase in the number of bugs found could mean:
* The testers are getting better at finding bugs
* The developers are creating more bugs
* There’s a new complicated feature that’s just been added
But it could also mean:
* There’s been a change in the procedures for logging bugs
* Last month, half the team went on vacation so there was less work done, and now everyone is back

Analyzing Who Found the Bug, the Customer or the Tester, Tells You: Likely Something
When bugs are logged, it’s important to know who found them and when. Obviously, the best-case scenario is for a tester to find a bug in the test environment, long before the feature goes to production; the worst-case scenario is for a customer to find it in production. So if you pay attention to what percentage of logged bugs are found by testers and what percentage are found by customers, this could tell you something, especially if you track the metric over time.

If your team started tracking this metric, and in the first month 75% of the bugs were found by testers and 25% by customers, you’d have a baseline to compare against. Then if in the second month 85% of the bugs were found by testers and 15% by customers, you could surmise that your team is getting better at finding bugs before the customers do.
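As a rough illustration (not something from the original post), here is a minimal Python sketch of how a team might compute this percentage from its bug tracker’s data. The field name `found_by` and the example records are assumptions made purely for illustration; a real bug tracker would have its own field names and export format.

```python
from collections import Counter

def found_by_percentages(bugs):
    """Return the percentage of bugs found by each group (e.g. tester, customer)."""
    counts = Counter(bug["found_by"] for bug in bugs)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {who: round(100 * n / total, 1) for who, n in counts.items()}

# Hypothetical bugs logged in a single month
march_bugs = [
    {"id": 101, "found_by": "tester"},
    {"id": 102, "found_by": "tester"},
    {"id": 103, "found_by": "customer"},
    {"id": 104, "found_by": "tester"},
]

print(found_by_percentages(march_bugs))
# {'tester': 75.0, 'customer': 25.0}
```

Running the same calculation on each month’s bugs gives the month-over-month comparison described above, which is where the metric starts to become meaningful.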

Metrics are a double-edged sword. On the one hand, they can be used to illustrate a point, such as how well a team is performing, whether more hiring is needed, or whether there’s a problem in your software release processes. On the other hand, metrics can be weaponized, manipulated, and gamed. By considering all the possibilities of what a bug count could mean in a specific context, you’ll be more likely to find metrics that are helpful and instructive.

5 thoughts on “What Does a Bug Count Mean?”

  1. Peg Foltz

    I’ve recently joined an organization that had no formal QA process, which I’ve been tasked with implementing. One of the first things I did was to ensure that when a bug is filed, it addresses ONE task, not “here’s all the stuff that’s wrong before we release the software” (which is a nightmare to decide when things are completely tested and fixed). As part of my process, I’m assigning test scenarios to each bug, and the marker that tells me my bug is granular enough is when I only need to assign one test scenario to it. If that scenario doesn’t describe everything that went wrong, I split out the other items into separate tickets. Then I’m more likely to be counting apples to apples, instead of apples to bushel baskets.
    It’s also important to communicate to development managers that finding bugs is a GOOD thing (ESPECIALLY in the test environment) and is not meant to reflect badly on their developers. Testers and developers approach a project work-item from different points of view, so there will be things missed by even the most thorough developers when a good tester gets in there with their unique perspective.
    The key to good metrics, as mentioned in your article, is that they build upon each other. Come up with metrics that are measurable, and compare them over time, from release to release. Our team distinguishes bugs found during testing from ones found in production very explicitly. This helps us to prioritize them and determine when and where they will get addressed.

  2. Anna

    Thank you for sharing your experience, Kristin!
    I completely agree with you. But what should you do when the lead (the head of the testing department) is tied to deadlines and doesn’t want to perform a thorough check so as not to “spoil the statistics”? They allocate a minimum of time to complex functionality, convinced that “everything is fine there.” How can a newcomer in a new team argue that more time needs to be allocated, or else quality suffers?

    1. kristinjackvony Post author

      Anna, probably the best way to make that argument is with statistics that come from customer complaints. If you say something like “In each of our last two releases, we’ve received 50 customer complaints,” that may help make the case that it makes more sense to slow down before a release so that the testers can find the bugs instead of the customers.
