SOLID Principles for Testers: The Interface Segregation Principle

We’re over halfway done learning about the SOLID principles! Today it’s time to learn about the “I”: the Interface Segregation Principle.

In order to understand this principle, we first need to understand what an interface is. An interface is a definition of a set of methods that can be implemented in a class. Each class that implements the interface must implement all of the methods included in the interface. Because the interface only defines each method's signature (name, parameters, and return type), the implementation of each method can vary from class to class.

The Interface Segregation Principle states that no class should be forced to depend on methods that it does not use. In order to understand this, let’s look at an example. Imagine you have a website to test that has two different forms: an Employee form and an Employer form. So you decide to create a Form interface that has methods for interacting with the various objects that can be found in a form:

interface Form {
    fillInTextField(id: string, value: string): void;
    selectRadioButton(id: string): void;
    checkCheckbox(id: string): void;
}

When you create an EmployeeForm class, you set it to implement the Form interface:

class EmployeeForm implements Form {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      driver.findElement(By.id(id)).click()
    }

    checkCheckbox(id: string): void {
      driver.findElement(By.id(id)).click()
    }
}

This works great, because the Employee form has text fields, radio buttons, and checkboxes.

Next, you create an EmployerForm class, which also implements the Form interface. But this form only has text fields and no radio buttons or checkboxes. So you implement the interface like this:

class EmployerForm implements Form {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      // no radio button in this form
      throw new Error("No radio button exists");
    }

    checkCheckbox(id: string): void {
      // no checkbox in this form
      throw new Error("No checkbox exists");
    }
}

You’ll never call the selectRadioButton and checkCheckbox methods in the EmployerForm class because there are no radio buttons or checkboxes in that form, but you need to create methods for them anyway because of the interface. This violates the Interface Segregation Principle.

So, how can you use interfaces with these forms without violating the principle? You can create separate interfaces for text fields, radio buttons, and checkboxes, like this:

interface TextField {
    fillInTextField(id: string, value: string): void;
}

interface RadioButton {
    selectRadioButton(id: string): void;
}

interface Checkbox {
    checkCheckbox(id: string): void;
}

Then when you create the EmployeeForm class you can implement the three interfaces like this:

class EmployeeForm implements TextField, RadioButton, Checkbox {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      driver.findElement(By.id(id)).click()
    }

    checkCheckbox(id: string): void {
      driver.findElement(By.id(id)).click()
    }
}

Now when you create the EmployerForm class, you only need to implement the TextField interface:

class EmployerForm implements TextField {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }
}

When each class implements only the methods it needs, classes become easier to maintain. And having smaller interfaces means that they can be reused in a greater variety of scenarios. This is the benefit of the Interface Segregation Principle.
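To make the reuse benefit concrete, here's a minimal sketch of a third, hypothetical form: a FeedbackForm with only a text field and a checkbox. (The class name and the in-memory fields map are mine, standing in for real WebDriver calls.) It picks exactly the two interfaces that match its elements:

```typescript
interface TextField {
    fillInTextField(id: string, value: string): void;
}

interface Checkbox {
    checkCheckbox(id: string): void;
}

// Hypothetical form with no radio buttons: it implements only
// the interfaces for the elements it actually contains.
class FeedbackForm implements TextField, Checkbox {
    // An in-memory map stands in for a real WebDriver here.
    fields = new Map<string, string | boolean>();

    fillInTextField(id: string, value: string): void {
        this.fields.set(id, value);
    }

    checkCheckbox(id: string): void {
        this.fields.set(id, true);
    }
}
```

No method ever has to throw a "not supported" error, and any future form can mix and match the same small interfaces.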

SOLID Principles for Testers: The Liskov Substitution Principle

It’s time to learn about the “L” in SOLID! The Liskov Substitution Principle is named for Barbara Liskov, a computer scientist who introduced the concept in 1987. The principle states that you should be able to replace objects of a superclass with objects of a subclass without altering the behavior of the program.

In order to understand this, let’s use an example that is very familiar to testers: waiting for an element. Here’s a class called WaitForElement that has two methods, waitForElementToBeVisible and waitForElementToBeClickable:

class WaitForElement {
  constructor() {}
  async waitForElementToBeVisible(locator) {
    return await driver.wait(
      until.elementIsVisible(driver.findElement(locator)), 10000)
  }
  async waitForElementToBeClickable(locator) {
    return await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
  }
}

This class could be used to locate all kinds of elements.  Now imagine that the tester has created a class specifically for clicking on elements in dropdown lists, that extends the existing WaitForElement class:

class WaitForDropdownSelection extends WaitForElement {
  constructor() {
    super()
  }
  async waitForElementToBeClickable(locator) {
    let selection = await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
    selection.click()
  }
}

If we were going to use the WaitForElement class to select a city from a dropdown list of cities, it would look like this:

let waitForInstance = new WaitForElement()
let dropdown = await waitForInstance
  .waitForElementToBeVisible(By.id('cities'))
dropdown.click()
let option = await waitForInstance
  .waitForElementToBeClickable(By.id('New York'))
option.click()

But if we were going to use the WaitForDropdownSelection class to select a city instead, it would look like this:

let waitForDropdownInstance = new WaitForDropdownSelection()
let dropdown = await waitForDropdownInstance
  .waitForElementToBeVisible(By.id('cities'))
dropdown.click()
await waitForDropdownInstance
  .waitForElementToBeClickable(By.id('New York'))

Do you see the difference?  When we use the waitForElementToBeClickable method in the WaitForDropdownSelection class, the method includes clicking on the element:

selection.click()

But when we use the waitForElementToBeClickable method in the WaitForElement class, the method does not include clicking.  This violates the Liskov Substitution Principle.

To fix the problem, we could update the waitForElementToBeClickable method in the WaitForDropdownSelection class so that it no longer includes the click() command, and then we could add a second method that would wait and click:

class WaitForDropdownSelection extends WaitForElement {
  constructor() {
    super()
  }
  async waitForElementToBeClickable(locator) {
    return await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
  }
  async waitForElementAndClick(locator) {
    let selection = await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
    selection.click()
  }
}

We’ve now adjusted things so that the classes can be used interchangeably. 

With the WaitForElement class:

let waitForInstance = new WaitForElement()
let dropdown = await waitForInstance
  .waitForElementToBeVisible(By.id('cities'))
dropdown.click()
let option = await waitForInstance
  .waitForElementToBeClickable(By.id('New York'))
option.click()

With the WaitForDropdownSelection class:

let waitForDropdownInstance = new WaitForDropdownSelection()
let dropdown = await waitForDropdownInstance
  .waitForElementToBeVisible(By.id('cities'))
dropdown.click()
let option = await waitForDropdownInstance
  .waitForElementToBeClickable(By.id('New York'))
option.click()

Or we could use the new method in the WaitForDropdownSelection class instead:

let waitForDropdownInstance = new WaitForDropdownSelection()
let dropdown = await waitForDropdownInstance
  .waitForElementToBeVisible(By.id('cities'))
dropdown.click()
await waitForDropdownInstance
  .waitForElementAndClick(By.id('New York'))

Using extended classes is a great way to avoid duplicating code while adding new functionality.  But make sure when you are extending a class that the methods are interchangeable. 
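One way to check interchangeability is to write code against the base class and confirm it behaves the same with either instance. Here's a Selenium-free sketch of that check (the FakeElement type and the waitForCity helper are mine; objects that record whether they were clicked stand in for real driver calls):

```typescript
// A fake "element" records whether it was clicked, so we can
// verify that both classes honor the same contract.
type FakeElement = { locator: string; clicked: boolean };

class WaitForElement {
    async waitForElementToBeClickable(locator: string): Promise<FakeElement> {
        // Waits for the element, but never clicks it.
        return { locator, clicked: false };
    }
}

class WaitForDropdownSelection extends WaitForElement {
    // Clicking lives in its own method, so the inherited method's
    // contract (wait, don't click) is preserved.
    async waitForElementAndClick(locator: string): Promise<FakeElement> {
        const el = await this.waitForElementToBeClickable(locator);
        el.clicked = true;
        return el;
    }
}

// Code written against the base class behaves identically with
// either instance -- the substitution test.
async function waitForCity(waiter: WaitForElement): Promise<FakeElement> {
    return waiter.waitForElementToBeClickable("New York");
}
```

If passing a WaitForDropdownSelection into waitForCity produced a surprise click, the substitution would fail; with this design, it doesn't.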

SOLID Principles for Testers: The Open-Closed Principle

This month we are continuing our investigation of the SOLID principles with the “O”: the Open-Closed Principle. This principle states the following: a class should be open for extension, but closed for modification.

What does this mean? It means that once a class is used by other code, you shouldn’t change the class. If you do change the class, you risk breaking the code that depends on the class. Instead, you should extend the class to add functionality.

Let’s see what this looks like with an example. We’ll use a Login class again, because software testers encounter login pages so frequently. Imagine that there’s a company with a number of different teams that all need to write UI test automation for their features. One of their test engineers, Joe, creates a Login class that anyone can use. It takes a username and password as variables and uses them to complete the login:

class Login {
    constructor(username, password) {
        this.username = username
        this.password = password
    }
    login() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
    }
}

Everybody sees that this class is useful, so they call it for their own tests.

Now imagine that a new feature has been added to the site, where customers can opt to include a challenge question in their login process. Joe wants to add the capability to handle this new feature:

class Login {
    constructor(username, password, answer) {
        this.username = username
        this.password = password
        this.answer = answer
    }
    login() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
    }
    loginWithChallenge() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
        driver.findElement(By.id('answer'))
            .sendKeys(this.answer)
        driver.findElement(By.id('submitAnswer')).click()
    }
}

Notice that the Login class is now expecting a third parameter: an answer variable. If Joe makes this change, it will break everyone’s tests, because they aren’t currently including that variable when they create an instance of the Login class. Joe won’t be very popular with the other testers now!

Instead, Joe should create a new class called LoginWithChallenge that extends the Login class, leaving the Login class unchanged:

class LoginWithChallenge extends Login {
    constructor(username, password, answer) {
        super(username, password)
        this.answer = answer
    }
    loginWithChallenge() {
        this.login()
        driver.findElement(By.id('answer'))
            .sendKeys(this.answer)
        driver.findElement(By.id('submitAnswer')).click()
    }
}

Now the testers can continue to call the Login class without issues. And when they are ready to update their tests to use the new challenge question functionality, they can modify their tests to call the LoginWithChallenge class instead. The Login class was open for extension, but closed for modification.
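Here's a Selenium-free sketch of the same idea (the actions array is mine, recording what would be sent to a real driver): extending Login leaves existing callers untouched, while new callers opt in to the challenge flow.

```typescript
// Actions are recorded in an array instead of being sent to a
// real driver, so the sketch is runnable anywhere.
class Login {
    actions: string[] = [];
    constructor(public username: string, public password: string) {}
    login(): void {
        this.actions.push(
            "type " + this.username,
            "type password",
            "click submit"
        );
    }
}

// The new behavior lives in an extension; the Login class above
// is never modified, so existing tests keep working.
class LoginWithChallenge extends Login {
    constructor(username: string, password: string, public answer: string) {
        super(username, password);
    }
    loginWithChallenge(): void {
        this.login();
        this.actions.push("type " + this.answer, "click submitAnswer");
    }
}
```

Existing tests construct `new Login(username, password)` exactly as before; only tests that need the challenge question switch to `new LoginWithChallenge(username, password, answer)`.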

SOLID Principles for Testers: The Single Responsibility Principle

Those who have been reading my blog for several years have probably figured out that when I want to learn something, I challenge myself to write a blog post about it. In 2020, I read one book on software testing each month and wrote a book review. In 2023, I learned about one Logical Fallacy each month, and wrote a post explaining it (which eventually turned into my book, Logical Fallacies for Testers).

For the next five months, I’ll be taking on a new challenge: learning about the SOLID principles of clean code. I’ve wanted to understand these for years, but I’ve always felt intimidated by the terminology (“Liskov Substitution” just sounds so complicated!). But I’m going to do my best to learn these principles, and explain them with examples that will be useful to software testers. So let’s begin with the Single Responsibility Principle, the “S” in SOLID.

The Single Responsibility Principle says that a class should have only one responsibility. Let’s take a look at this Login class:

class Login {
     constructor() {}
     login(username, password) {
          driver.findElement(By.id('username'))
               .sendKeys(username)
          driver.findElement(By.id('password'))
               .sendKeys(password)
          driver.findElement(By.id('submit')).click()
     }

     navigate(url) {
          driver.get(url)
     }
}

The Login class has two methods: login and navigate. For anyone writing test automation this appears to make sense, because login and navigation often happen at the beginning of a test. But logging in and navigating are actually two very different things.

Let’s imagine that multi-factor authentication is now an option for users of the application. The test automation engineer will now want to add a new login method that includes multi-factor auth. But what will the impact be on the navigation method? It could be that there is no impact at all, but if new package imports need to be added to the class, it’s possible that they could break the navigation method.

Any time a class changes, there’s a chance of breaking existing functionality. The test automation engineer will need to check every test that has navigation in addition to checking every test that has logging in.

This is why it’s best to give classes just one single responsibility. The navigation method should be moved to its own class. Then when new login methods are added, the navigation method will be completely unaffected. And it will also be possible to add new navigation methods to the navigation class without affecting the login methods!
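A sketch of that split might look like this (the Navigation class name is mine, and in-memory arrays stand in for real driver calls). Each class now has exactly one reason to change:

```typescript
// Responsible only for logging in. Recorded actions stand in
// for real driver calls in this sketch.
class Login {
    actions: string[] = [];
    login(username: string, password: string): void {
        this.actions.push("type " + username, "type password", "click submit");
    }
}

// Responsible only for navigation. Adding new login methods
// (e.g. multi-factor auth) can never touch this class.
class Navigation {
    visited: string[] = [];
    navigate(url: string): void {
        this.visited.push(url);
    }
}
```

When multi-factor authentication arrives, only the Login class changes, and every test that merely navigates is guaranteed to be unaffected.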

Why I Hate Test Case Management Systems (and Why I’m Using One Anyway)

One of the first things I learned as a new software tester was how much I hated test case management systems. If you aren’t familiar with test case management systems, they are tools that allow testers to keep a collection of manual tests for reuse. In the days before test automation was popular, they were extremely common. In this post, I’ll discuss why I hate test case management systems so much, and then I’ll explain why I use one anyway, and how I don’t hate the one I’m using.

Part One: Why I Hate Test Case Management Systems

Reason One: They take too much time to set up

Test case management systems are often filled with required fields that the user must fill out in order to save the test case. These fields might not be at all relevant to the tester’s situation, but they must be filled out anyway. Many systems require you to put in one or more test steps, which adds to the setup time.

Reason Two: They require too much maintenance

In my first salaried testing job, the company had a few hundred test cases in their management system. Each of these cases had several steps. Often the steps were confusing or outdated. The QA team decided to update the test cases as they were running them. The updating often took longer than running the tests themselves, and it seemed as if the tests needed updating every time they were run.

Reason Three: They can stop people from doing exploratory testing

When a QA team is spending all of their time running and maintaining existing test cases, they may not have time to think about new ways to test the application. Taking the time to do exploratory testing is a great way to find new and important bugs, but if the team is so focused on following all of the test case steps exactly, they might not take that time.

Reason Four: They use energy that could be focused on automation

Similar to Reason Three, if the testers are spending all their time using the test case management system and keeping the test cases updated, they might not have time to do the deep work required to write and maintain automated tests. And as their product adds more features, the amount of time spent on manual testing will continue to grow.

Reason Five: They often include features that have little to do with quality

Test case management systems seem as if they are written by people who have never actually done any testing. They include features like tracking how long it takes to run a manual test. While I can see that knowing the rough time it would take to run a full manual regression suite might be useful, knowing the time it takes to run a single test makes no sense whatsoever. If the tester uncovers a bug, or an anomaly worth investigating, the test will take longer to run. That’s a good thing, because it means the tester is using their brain and not just following a rote set of directions. Similarly, being able to attach bugs to the test cases doesn’t provide anything useful. What difference does it make if three bugs were attached to one test case? As long as those bugs are logged and fixed, it doesn’t really matter when they were found or who found them.

Part Two: Why I’m Using a Test Case Management System

My company recently started using a test case management system called Qase. I was initially planning to ignore it completely, but I was working on a project that was becoming increasingly complex, and my manager suggested that I give Qase a try. I was pleasantly surprised! As a result, I am now advocating that all software teams at my company use it. Here’s why:

Reason One: Qase is extremely easy to set up

Qase has no requirements for creating a test case other than that the test case has a title. I decided to make the title the only step in each test case. For example, if I wanted to create a test case to test that the user could log in with Single Sign-On (SSO), I created a case with a title of “User can log in with SSO”, and the test case was ready to go! Using this method, I was able to create 350 test cases in the Qase system in less than two hours.

Reason Two: Qase offers a lot of flexibility

Test cases in Qase can be organized in folders, and you have complete freedom to name and organize the folders however you want. You can have folders within folders within folders. This makes it really easy to organize your test cases in ways that make sense to you and your team.

Reason Three: Using Qase can actually save you time

With Qase, it’s incredibly easy to create and save test plans from your test cases. You just give your test plan a name, and choose all the test cases you’d like to be in the plan. Then when you need to run that plan, within seconds you can create and start the test run. This saves time compared to creating a new test plan from scratch each time you need to run a regression suite, or copying an Excel spreadsheet or Confluence chart and resetting it for a new test run.

Reason Four: Using a test case management system can help you remember to run tests that you might otherwise forget

We all have our blind spots. Mine is forgetting to test file names with spaces in them (see this blog post). Having a test case management system is a great way to make sure that you are covering tests that you usually forget to run.

Reason Five: A test case management system can provide a good organizational base for test automation

Readers of this post might make the argument that all manual testing should be exploratory in nature, and that every test that needs to be repeated should be automated. This is a valid point. In a perfect world, our regression test suites would be automated as soon as each feature has been tested, and we could rely on that automation for every release and every round of regression testing while using the time saved for deep exploratory testing. But of course we don’t live in a perfect world, and it often takes extra time to get tests automated. A test case management system can actually help organize automation efforts by clarifying which tests are most important and which tests should be organized together.

If after reading this post you decide that test case management systems are not for you, there’s still an important message here: it’s a good idea to try out new things, even when you are completely opposed to them. I still hate most test case management systems, but by trying Qase, I was able to expand my thinking and see their usefulness.

Managing Your Manager

I often talk with testers who are feeling frustration with their manager. Some of their complaints include:
• My manager doesn’t give me enough time to automate
• My manager expects me to test all the sprint items at the last minute
• My manager signs our team up for too much work
• My manager doesn’t appreciate how much work I do

Have you ever struggled with any of these issues? Then it’s time to learn how to manage your manager! Read on for six ideas on how to do this.

  1. Think about what your manager wants
    The best way to get someone to behave the way you want is to figure out what they want, and then show them how what you want and what they want align. What does your manager want? Your manager probably reports to a manager themselves, and your manager is probably accountable to their manager for things like releasing software on time and reducing the number of customer complaints that occur after a software release. You want good software to be released on time as well! So when you talk with your manager, point out the ways your ideas can achieve this.
  2. Explain how your strategy will help your manager
    Once you know what your manager wants, you can tailor your suggestions to show how your manager will be helped by them. For example, you could say: “I know that we’ve had some issues with defects escaping to Production that have resulted in customer complaints, and I know that your manager isn’t happy about that. I think if we schedule a one-hour Bug Bash before each release, we could catch most of those bugs.”
  3. Be a team player
    Are you a team player? Do you show up to work each day with a positive attitude? Do you help and encourage other people on your team? Do you go the extra mile without being asked to do so?
    The kind of attitude you bring to work has a huge effect on whether or not your manager respects and listens to you. Nobody likes working with a complainer. When you are pleasant to work with, your manager will be more likely to want to have conversations with you about how you can work together to improve your team’s processes.
  4. Approach your manager with data
    It’s always easier to convince someone of something if you have cold, hard facts backing up your assertions. If you feel that your workload has increased over the last six months, you can show your manager the average number of stories you tested six months to a year ago, and the average number of stories you tested in the latest six months. If you think your manager should give you more time to write test automation, you can use metrics to show how much time would be saved if you had an automated regression suite that could be run with each release.
  5. Enlist the help of other team members
    You are not the only person on your team! It’s likely that there are other testers or developers on your team that feel the same way you do. Why not talk to them about the situation? You could share your ideas and listen to their ideas as well. It could be that the Dev Lead on your team has a great idea for limiting the number of sprint items that the team is taking on. If you work together, you can make a convincing case to your manager.
  6. Suggest an experiment
    Sometimes you might have an idea that you are sure will improve things on your team, but the team doesn’t agree. They might be resistant to change, or they might think the change represents too much work. In cases like this, you can suggest an experiment: “Let’s try doing Dev-QA handoffs for two sprints. If it doesn’t save any time for us after those two sprints, we can stop the handoffs and go back to the way things were.” It’s been my experience that most of the times I suggested an experiment, the rest of the team realized I was right! But even if that doesn’t happen, you show your manager and the team that you are an innovator and someone willing to try new things.

Managers want what you want: a happy and successful team! By following these six suggestions, you’ll be able to work more effectively with your manager and team to build and deliver software your customers will love.

Nine Reasons Testing Becomes a Bottleneck

It’s a new year once again, and time to think about what improvements you and your team can make to increase the quality of your products! One complaint I often hear from testers is that they have become a bottleneck on their team. They feel constant pressure to get their testing done, and they feel that they don’t have time to do good exploratory testing or write quality automation.

In my experience, there are nine main reasons why testing becomes a bottleneck. Read on to see if any of them apply to your team!

Reason One: The team has too much tech debt
When a product has crushing tech debt, testing and fixing bugs both become more difficult. The software often requires careful configuration before running any tests, and any test automation will likely be flaky. When developers fix bugs, it’s likely that they’ll break something in the process, resulting in additional bugs that then need to be fixed and validated. The remedy for this is to prioritize fixing tech debt, which will pay off in faster development and testing processes.

Reason Two: The team is given work without adequate acceptance criteria
Have you ever been assigned to a project that doesn’t have clear user stories and acceptance criteria? It results in the whole team flying blind. No one knows when a last-minute request for a forgotten feature will come in. When you don’t know the scope of a project, you can’t correctly estimate how long the work will take. When the development work takes longer than expected, it’s often the testers who are expected to work miracles and test twice as many features as time allows. The best solution for this problem is for the team to refuse to take on any work that does not have clear user stories and acceptance criteria. If the Product Owner wants to add a last-minute feature, the project deadlines should be renegotiated to allow for extra time to do the development and testing.

Reason Three: The developers aren’t completely coding their stories
Developers can sometimes get so mired in solving coding problems that they forget to go back and check the acceptance criteria of a story to make sure they’ve met all the requirements. They pass the story off to the tester, who quickly discovers the missing requirements and sends the story back to the developer. This kind of story ping-pong can slow things down for everyone. The developer has probably started work on their next story, and now has to switch their context back to the first story. The tester will now have to find something else to work on while the story is completed, and will then have to switch contexts back to the story once it’s completely ready. If this is happening on your team, calling it out in a team retro meeting might be enough to get the team to understand the problem. They can then develop the habit of checking the acceptance criteria before handing off the story.

Reason Four: The developers aren’t scheduling a handoff meeting with the testers
In a handoff meeting, the developer who worked on a story meets with the tester who will be testing the story. They demonstrate the work they did, show how the work can be tested, and outline which adjoining areas should be regression tested. Then the tester can ask any questions they have about the feature and how to test it. When a team does not use handoff meetings, it’s more likely that the tester will discover that the feature hasn’t been deployed, or that they don’t understand how to test the feature. This results in more back-and-forth questions that slow down the testing process.

Reason Five: There are not enough unit tests
Unit tests, typically created by developers, are an awesome way to catch problems quickly. When the developer pushes their code to a branch, the branch is built and unit tests are run against it. Any functionality that the developer broke with their code is detected in a matter of seconds. They can then fix the problem before handing the feature off for testing. When there aren’t enough unit tests to catch problems, the problems will go directly to the tester, who will then have to report the bugs. The time spent reporting bugs is time that the tester could have been doing exploratory testing and finding more challenging bugs.

Reason Six: The team has not tied API automation to builds and deployments
Almost as useful as unit tests, API automated tests catch any changes in the code that may have introduced bugs into how the API behaves. For example, a code change might have inadvertently changed a response code from a 401 to a 403, which would then result in an error in the UI. If API automation is run with every build and every deployment, problems are detected before the feature goes to the tester, once again saving them time and energy.

Reason Seven: There is no good process for reusing regression test plans

Exploratory testing is a great way to find bugs, but when you have an existing product or feature, it’s important to have some record of what should be tested during a regression. Without a record, it’s easy to forget to test some less-used part of a feature. For example, when testing an e-commerce application, the tester could forget to test removing an item from a cart, and instead focus only on purchasing the cart items. Without a good system for creating and saving regression test plans, the plans will need to be recreated with every release. This takes up valuable time that could be used for testing. Having a set of regression test plans for reuse can save enough time that the tester can do exploratory testing before the release.

Reason Eight: Developers are not contributing to test automation
Yes, good software testers know how to write good test automation, but developers are great at writing code, and they are a valuable resource for writing clean code, creating reusable methods, and setting up mocks for testing. When a team has only one tester who is expected to do all the automation on their own, the process of automating the tests will be slow. It may be worth pointing out to your team that when there are good, reliable, automated tests, it saves time for everyone, not just the tester. Developers will get fast feedback, which prevents time-consuming context switching.

Reason Nine: The team is relying too much on UI automation
There is definitely a place for UI automation, because this is where the visibility and functionality of UI elements are tested. But most of the business logic of an application can be tested with unit and API tests. We all know that UI automation tends to be slow and flaky, even with well-written tests. With unit and API tests, teams can get accurate feedback more quickly. And these tests are easier to maintain because there is less flake.

Do any of these reasons apply to your team? Do most of them apply? In this New Year, I challenge you to pick one of these reasons and discuss it with your team. See whether you can come up with a plan to address it, and watch the testing bottleneck start to clear!

Logical Fallacies for Testers XII: The Slippery Slope Fallacy

As you know, this blog has focused for the entire year on logical fallacies. We’ve learned about all kinds of fallacies, from the Red Herring Fallacy to the Appeal to Ignorance Fallacy! It’s time now for the last blog post of the year: the Slippery Slope Fallacy.

The Slippery Slope Fallacy occurs when someone assumes that one negative event will lead to a chain of negative events, causing disaster, when there’s no proof that each event will be the cause of the next.

This is a common fallacy used by parents when they don’t want to let their teenagers do something. Imagine this scenario between a father and his daughter. “If I let you go to the rock concert and stay out until 2 AM on a school night, soon you’ll be staying out until 2 AM every night. Then you’ll be too tired to get up and go to school on time, which means that your grades will suffer, and then you won’t get into a good college.”

The fallacy is obvious to teenagers: staying out until 2 AM one night will not lead to staying out until 2 AM every night, because the parent won’t actually let that happen. The father in this example is using the Slippery Slope Fallacy as an excuse for why he doesn’t want his daughter to attend the concert.

The Slippery Slope Fallacy happens in software testing as well! You may have encountered a well-meaning tester who has found a small UI bug in the team’s application. They log the bug, but rather than letting it go to the backlog, they insist that the bug be fixed NOW. The logic they use goes something like this: “If we don’t make the developers fix this bug right now, it will mean that they will ignore bigger bugs in the future. Then we’ll wind up with a ton of tech debt that we will never be able to get out of, and our application will be filled with bugs. Our customers will desert us and then we will go out of business.”

The fallacy here might be a bit harder to see for testers who feel strongly that their application should be as close to perfect as possible. But here’s the error: putting one small bug on the backlog will not necessarily result in the team ignoring big bugs. A well-functioning team will have a triage process in place where the whole team can determine the user impact of a bug, how important the fix is compared to other tasks the team is working on, and the potential cost of waiting to fix the bug.

Yes, ignoring too many bugs can result in too much tech debt, but a small UI bug that doesn’t impact the functioning of the application is not going to significantly contribute to that debt. It’s important that a tester choose their battles and let some small bugs slide, because if they protest loudly about every bug, the team will stop taking them seriously.

I hope that you have enjoyed my series on logical fallacies! If you would like to learn about more fallacies, I have great news for you! In early 2024 I will be publishing a mini-book called “Logical Fallacies for Testers”, which will include the twelve fallacies I wrote about this year, plus three additional fallacies!

Logical Fallacies for Testers XI: Appeal to Ignorance

The Appeal to Ignorance Fallacy is an interesting one: it states that something must be true because it hasn’t been proven false.

This fallacy is often used by people who believe in entities like Bigfoot, the Yeti, or the Loch Ness Monster: they will say that no one has proven that Bigfoot doesn’t exist, therefore he must exist! With an example like this, it’s very easy to see the false logic.

The same kind of fallacy is common in software testing as well. Consider this statement: “We know our software is secure because we’ve never had a security breach.” Having no security breaches does NOT mean there are no vulnerabilities in the software. It is possible that there are dozens of security holes in the software, but the company hasn’t grown enough for a malicious actor to decide they are worth exploiting. Some companies might also say “We’ve never found a security vulnerability in our software.” That might be true, but it could be that the reason it is true is because they’ve never looked for vulnerabilities. It’s bad logic, and bad practice, to say something doesn’t exist because you’ve never looked for it.

Another example of the fallacy happens when someone announces that their company’s app is “bug-free”. This is an impossibility. Lack of found bugs doesn’t mean that an application is bug-free. It means that the testers haven’t found any bugs recently, nothing more. A simple application with just two buttons has at least two different testing paths, because the buttons can be clicked in either order. Add a third button and you have at least six different testing paths, since the number of possible orderings grows factorially. So imagine how many testing paths an application with a dozen different features could have! There is no possible way to test them all, so there is no possible way to prove that the app is bug-free.
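The path counts above come from counting the possible orderings (permutations) of the buttons: 2! = 2, 3! = 6, and so on. A tiny sketch makes the factorial growth explicit:

```typescript
// Number of distinct orderings of n independent interactions: n!
function orderings(n: number): number {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i; // multiply in each successive count
  }
  return result;
}

const twoButtons = orderings(2);      // 2 paths
const threeButtons = orderings(3);    // 6 paths
const dozenFeatures = orderings(12);  // hundreds of millions of paths
```

Twelve features already yield 479,001,600 orderings, and that is before considering different inputs or states for each feature.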

Watch for this fallacy when your team is discussing software. When someone makes a bold claim, ask yourself if it has actually been proven, or if it is merely wishful thinking.

Logical Fallacies for Testers X: Equivocation

Equivocation is a technique used to mislead others through the use of imprecise language. There are many words in the English language that have more than one meaning, such as the word “light”, which could mean “bright”, or it could mean “not heavy”. It’s also possible to use equivocation by being deliberately ambiguous about time or quantity. Children are excellent at equivocation, as you will see in the example below.

When I was a much younger woman, I taught piano lessons, mostly to young students. I gave each student an assignment book in which I would assign their lesson for the week. I expected each student to practice for fifteen minutes a day, six days a week, and the assignment book included a little chart where they could enter their practice time.

I discovered all kinds of ways that children equivocated about their piano practice! When a child’s mother asked her, “Did you practice piano?”, she might answer “Yes”. But upon further examination, it became clear that what she really did was practice yesterday. Other students would mark down the time they spent sitting at their piano bench looking out the window as “practice”. Still others would play the piano, but not play their assigned music, and call that “practice”. And one creative young man recorded one fifteen-minute practice session and replayed it on a tape player every day so his mother would hear him “practicing”.

The same thing happens in software testing. Many terms are used in an equivocating fashion to convey that extensive testing has been done, when in fact it hasn’t. Consider the following examples:

  • “Code coverage”: a team could boast that they have 95% code coverage, when many of their unit tests simply assert true and will pass no matter what the code does
  • “Automation coverage”: saying that a team has 100% automation coverage could mean that they only ever run ten manual tests and they’ve automated all ten
  • “Test plan”: this could refer to anything from a pages-long document to an idea for a few tests to run that the tester thought up while in the shower
  • “Test results dashboard”: is this a tool that shows successes and failures over time, highlighting flaky tests, or a colorful page that doesn’t convey any meaningful data?
  • “Continuous Deployment”: for some teams, this could mean “when I commit code, it is automatically deployed to production”, or it could mean “after I submit a change control request and it is evaluated, approved, and scheduled, then it goes to production”

I could go on and on. Even the word “testing” has been highly debated. Are automated scripts that exercise an application’s functionality “tests”, or merely “checks”? My point here is NOT to arrive at common definitions for everyone. My point is that it is very easy to equivocate to give an impression of software quality that is simply not true.

As software testers, we owe it to our end users to be honest about our testing practices. This means reporting our activities to our team with clear definitions and metrics.