Measuring Quality

The concept of measuring quality can be a hot-button topic for many software testers.  This is because metrics can be used poorly; we’ve all heard stories about testers who were evaluated based on how many bugs they found or how many automated tests they wrote.  These measures have absolutely no bearing on software quality. A person who finds a bug in three different browsers can either write up the bug once or write up a bug for each browser; having three JIRA tickets instead of one makes no difference in what the bug is!  Similarly, writing one hundred automated tests where only thirty are needed for adequate test coverage doesn’t ensure quality and may actually slow down development time.

But measuring quality is important, and here’s why: software testers are to software what the immune system is to the human body.  When a person’s immune system is working well, they don’t think about it at all.  They get exposed to all kinds of viruses and bacteria on a daily basis, and their immune system quietly neutralizes the threats.  It’s only when a threat gets past the immune system that a person’s health breaks down, and then they pay attention to the system.  Software testers have the same problem: when they are doing their job really well, there is no visible impact in the software.  Key decision-makers in the company may see the software and praise the developers that created it without thinking about all the testing that helped ensure that the software was of high quality.

Measuring quality is a key way that we can demonstrate the value of our contributions.  But it’s important to measure well; a metric such as “There were 100 customer support calls this month” means nothing, because we don’t have a baseline to compare it to.  If we have monthly measurements of customer support calls, and they went from 300 calls in the first month, to 200 calls in the second month, to 100 calls in the third month, and daily usage statistics stayed the same, then it’s logical to conclude that customers are having fewer problems with the software.
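
To make that concrete, here's a minimal sketch in Python of how a baseline comparison might look, normalizing support calls against usage before drawing conclusions.  All of the numbers are hypothetical:

    # Hypothetical monthly data: support calls and average daily active users.
    support_calls = {"January": 300, "February": 200, "March": 100}
    daily_active_users = {"January": 5000, "February": 5000, "March": 5000}

    for month, calls in support_calls.items():
        # Normalize by usage so a drop in calls isn't just a drop in users.
        calls_per_thousand = calls / daily_active_users[month] * 1000
        print(f"{month}: {calls} calls, {calls_per_thousand:.1f} per 1,000 daily users")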

With last week’s post about the various facets of quality in mind, let’s take a look at some ways we could measure quality.

Functionality:
How many bugs are found in production by customers?
A declining number could indicate that bugs are being caught by testers before going to production.
How many daily active users do we have? 
A rising number probably indicates that customers are happy with the software, and that new customers have joined the ranks of users.
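
As a sketch of how the daily active user question might be answered from raw data, the snippet below counts distinct users per day from a made-up event log; the log format here is an assumption for illustration:

    from collections import defaultdict

    # Hypothetical (date, user_id) events pulled from an analytics store.
    events = [
        ("2019-04-01", "alice"), ("2019-04-01", "bob"), ("2019-04-01", "alice"),
        ("2019-04-02", "alice"), ("2019-04-02", "bob"), ("2019-04-02", "carol"),
    ]

    users_by_day = defaultdict(set)
    for date, user in events:
        users_by_day[date].add(user)  # a set de-duplicates repeat visits

    for date in sorted(users_by_day):
        print(f"{date}: {len(users_by_day[date])} daily active users")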

Reliability:
What is our percentage of uptime?  
A rising number could show that the application has become more stable.
How many errors do we see in our logs?  
A declining number might show that the software operations are generally completing successfully.
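
Here is one way those two reliability questions could be turned into numbers.  This is only a sketch; the outage durations and log lines are invented:

    # Hypothetical outage durations (in minutes) during a 30-day month.
    outages_minutes = [12, 45, 3]
    month_minutes = 30 * 24 * 60

    uptime_percent = (month_minutes - sum(outages_minutes)) / month_minutes * 100
    print(f"Uptime: {uptime_percent:.3f}%")

    # Hypothetical log lines; count how many report an error.
    log_lines = [
        "2019-04-01 INFO order placed",
        "2019-04-01 ERROR payment gateway timeout",
        "2019-04-02 INFO order placed",
    ]
    error_count = sum(1 for line in log_lines if " ERROR " in line)
    print(f"Errors in log: {error_count}")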

Security:
How many issues were found by penetration tests and security scans?  
A declining number could show that the application is becoming more secure.
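
One way to track findings across scans is sketched below.  Weighting by severity is my own assumption, not something a scanning tool dictates; it keeps one new critical issue from being offset by fixing several low-severity ones:

    # Hypothetical findings per quarterly scan, with an assumed severity weighting.
    severity_weight = {"critical": 10, "high": 5, "medium": 2, "low": 1}
    scans = {
        "Q1": ["critical", "high", "low", "low"],
        "Q2": ["high", "medium", "low"],
    }

    for scan, findings in scans.items():
        score = sum(severity_weight[f] for f in findings)
        print(f"{scan}: {len(findings)} findings, weighted score {score}")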

Performance:
What is our average response time?
A stable or declining number could show that the application is operating within accepted parameters.
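
Here's a sketch of the response time calculation, using Python's standard library (statistics.quantiles requires Python 3.8 or later).  The sample timings are made up, and the 95th percentile is included because an average alone can hide slow outliers:

    import statistics

    # Hypothetical response times in milliseconds from a day's requests.
    response_times_ms = [120, 95, 110, 130, 105, 980, 115, 100, 125, 108]

    average = statistics.mean(response_times_ms)
    p95 = statistics.quantiles(response_times_ms, n=20)[-1]  # 95th percentile

    print(f"Average response time: {average:.0f} ms")
    print(f"95th percentile: {p95:.0f} ms")  # the one slow request stands out here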

Usability:
What are our customers saying about our product?
Metrics like survey responses or app store ratings can indicate how happy customers are with an application.
How many customer support calls are we getting?
Increased support calls from customers could indicate that it’s not clear how to operate the software.
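
For the ratings question, here is a sketch of summarizing app store ratings by release; the star ratings are invented:

    import statistics

    # Hypothetical 1-5 star ratings grouped by app version.
    ratings = {
        "2.0": [3, 4, 2, 3, 4, 3],
        "2.1": [4, 5, 4, 4, 5, 3],
    }

    for version, stars in ratings.items():
        average = statistics.mean(stars)
        print(f"v{version}: average rating {average:.2f} from {len(stars)} reviews")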

Compatibility:
How many support calls are we getting related to browsers, devices, or operating systems?
An increased number of support calls could indicate that the application is not working well in certain circumstances.
What browsers/devices/operating systems are being used to access our software?
When looking at analytics related to app usage, a low participation rate by a certain device might indicate that users have had problems and stopped using the application.
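
As an illustration of the analytics question above, this sketch computes each browser's share of sessions from hypothetical data and flags unusually low shares, which might point to a compatibility problem (the two percent threshold is an arbitrary assumption):

    from collections import Counter

    # Hypothetical browser recorded for each user session.
    sessions = ["Chrome"] * 620 + ["Firefox"] * 250 + ["Safari"] * 120 + ["Edge"] * 10

    counts = Counter(sessions)
    total = sum(counts.values())

    for browser, count in counts.most_common():
        share = count / total * 100
        flag = "  <-- unusually low; worth investigating" if share < 2 else ""
        print(f"{browser}: {share:.1f}%{flag}")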

Portability:
What percentage of customers upgraded to the new version of our software?
Comparing upgrade percentages with those of previous upgrades could indicate whether users found the upgrade process easy.
How many support calls did we get related to the upgrade?
An increased number of support calls compared to the last upgrade could indicate that the upgrade process was problematic.
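
Here's a sketch of comparing upgrade adoption across releases; the user counts are hypothetical:

    # Hypothetical adoption numbers 30 days after each release.
    releases = {
        "v1.0 to v1.1": {"upgraded": 7200, "total": 10000},
        "v1.1 to v1.2": {"upgraded": 9100, "total": 10000},
    }

    for release, stats in releases.items():
        percent = stats["upgraded"] / stats["total"] * 100
        print(f"{release}: {percent:.0f}% of users upgraded within 30 days")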

Maintainability:
How long does it take to deploy our software to production?
If it is taking longer to deploy software than it did during the last few releases, then the process needs to be evaluated.
How frequently can we deploy?
If it is possible to deploy more frequently than was possible six months ago, then the process is becoming more streamlined.
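
Finally, a sketch of tracking deployment frequency from a list of deploy dates; the dates are made up:

    from datetime import date

    # Hypothetical production deploy dates over two months.
    deploys = [
        date(2019, 3, 1), date(2019, 3, 15), date(2019, 3, 29),
        date(2019, 4, 5), date(2019, 4, 12), date(2019, 4, 19), date(2019, 4, 26),
    ]

    deploys_per_month = {}
    for d in deploys:
        key = f"{d.year}-{d.month:02d}"
        deploys_per_month[key] = deploys_per_month.get(key, 0) + 1

    # More deploys per month suggests the release process is getting smoother.
    for month, count in sorted(deploys_per_month.items()):
        print(f"{month}: {count} deploys")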

There’s no one way to measure quality, and not every facet of quality can be measured with a metric.  But it’s important for software testers to be able to use metrics to demonstrate how their work contributes to the health of their company’s software, and the above examples are some ways to get started.  Just remember to think critically about what you are measuring, and establish good baselines before drawing any conclusions.

8 thoughts on “Measuring Quality”

  1. Karlo Smid

    Hi Kristin!

    I found your writing very useful.

    But I would like to point out that in this post you fell into the trap of thinking that software testers are the keepers of quality:

    "software testers are to software what the immune system is to the human body. When a person's immune system is working well, they don't think about it at all. They get exposed to all kinds of viruses and bacteria on a daily basis, and their immune system quietly neutralizes the threats".

    Software testers cannot prevent any bug from becoming part of the application. Only developers can do that, by changing their approach to quality. When you find a bug, the bug was already there.

    As for this statement, you can do something about it:

    "Software testers have the same problem: when they are doing their job really well, there is no visible impact in the software. Key decision-makers in the company may see the software and praise the developers that created it without thinking about all the testing that helped ensure that the software was of high quality."

    At the end of every week, write a summary email to your manager about how your testing helped create their illusion of perfect software.

    Thanks again for your blog posts.

    Regards, Karlo.

    https://blog.tentamen.eu

  2. Kristin Jackvony

    Hi Karlo- Thanks for your comments. I agree with you that testers are not the keepers of quality. However, I believe that testers can have a huge influence on the quality of an application by pointing out where the bugs are. Here's an example: at a company where I worked, I would always test every form field by attempting to enter all the lyrics to "Frosty the Snowman". I found a lot of bugs this way. The developers I worked with learned that I would always run this test, so they started doing the same test before they gave a new feature to me for testing. In this way, the software began to be of higher quality.

  3. Unknown

    If the software meets these six quality characteristics (functionality, reliability, usability, efficiency, maintainability, and portability), we can say it's a good-quality product.

  4. Karlo Smid

    Hi Kristin, thanks for the follow up.

    That is one important aspect of software testing's influence on final product quality. And I like how you used gamification to get your developers to start using your simple but effective "Frosty the Snowman" data oracle!

    Regards, Karlo.

  5. Komail

    Great to see that you included #3, usability, and talked about accessibility towards the end. By that, I mean people with disabilities like hearing loss or color blindness, older people, and other disadvantaged groups.

    I recently had a meeting where we went in-depth on accessibility testing, and it really opened my eyes: I usually thought about language barriers (dealing with Unicode and other languages) and forgot about the user experience of folks who are disabled.
