Logical Fallacies for Testers XI: Appeal to Ignorance

The Appeal to Ignorance Fallacy is an interesting one: it states that something must be true because it hasn’t been proven false.

This fallacy is often used by people who believe in entities like Bigfoot, the Yeti, or the Loch Ness Monster: they will say that no one has proven that Bigfoot doesn’t exist, therefore he must exist! With an example like this, it’s very easy to see the false logic.

The same kind of fallacy is common in software testing as well. Consider this statement: “We know our software is secure because we’ve never had a security breach.” Having no security breaches does NOT mean there are no vulnerabilities in the software. It is possible that there are dozens of security holes in the software, but the company hasn’t grown enough for a malicious actor to decide they are worth exploiting. Some companies might also say “We’ve never found a security vulnerability in our software.” That might be true, but it could be that the reason it is true is because they’ve never looked for vulnerabilities. It’s bad logic, and bad practice, to say something doesn’t exist because you’ve never looked for it.

Another example of the fallacy happens when someone announces that their company’s app is “bug-free”. This is an impossibility. Lack of found bugs doesn’t mean that an application is bug-free. It means that the testers haven’t found any bugs recently, nothing more. A simple application with just two buttons has at least two different testing paths. Add a third button and you have at least six different testing paths. So imagine how many testing paths an application with a dozen different features could have! There is no possible way to test them all, so there is no possible way to prove that the app is bug-free.
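A rough way to see how quickly the path count explodes: if we model a "testing path" as one ordering in which the buttons could be pressed, the number of paths is the factorial of the number of buttons. This is a simplified model for illustration, not a formal test-design technique.

```javascript
// Rough model: if a "testing path" is one ordering in which the
// buttons could be pressed, the number of paths is n factorial.
function pathCount(buttons) {
  let count = 1;
  for (let i = 2; i <= buttons; i++) {
    count *= i;
  }
  return count;
}

console.log(pathCount(2));  // 2
console.log(pathCount(3));  // 6
console.log(pathCount(12)); // 479001600 -- far too many to test exhaustively
```

And that count ignores inputs, timing, and state, so the real space is even larger.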

Watch for this fallacy when your team is discussing software. When someone makes a bold claim, ask yourself if it has actually been proven, or if it is merely wishful thinking.

Logical Fallacies for Testers X: Equivocation

Equivocation is a technique used to mislead others through the use of imprecise language. There are many words in the English language that have more than one meaning, such as the word “light”, which could mean “bright”, or it could mean “not heavy”. It’s also possible to use equivocation by being deliberately ambiguous about time or quantity. Children are excellent at equivocation, as you will see in the example below.

When I was a much younger woman, I taught piano lessons, mostly to young students. I gave each student an assignment book in which I would assign their lesson for the week. I expected each student to practice for fifteen minutes a day, six days a week, and the assignment book included a little chart where they could enter their practice time.

I discovered all kinds of ways that children equivocated about their piano practice! When a child’s mother asked her, “Did you practice piano?”, she might answer “Yes”. But upon further examination, it became clear that what she really did was practice yesterday. Other students would mark down the time they spent sitting at their piano bench looking out the window as “practice”. Still others would play the piano, but not play their assigned music, and call that “practice”. And one creative young man recorded one fifteen-minute practice session and replayed it on a tape player every day so his mother would hear him “practicing”.

The same thing happens in software testing. Many terms are used in an equivocating fashion to convey that extensive testing has been done, when in fact it hasn’t. Consider the following examples:

  • “Code coverage”: a team could boast that they have 95% code coverage, when many of their unit tests are simply set to return true regardless of their state
  • “Automation coverage”: saying that a team has 100% automation coverage could mean that they only ever run ten manual tests and they’ve automated all ten
  • “Test plan”: this could refer to anything from a pages-long document to an idea for a few tests to run that the tester thought up while in the shower
  • “Test results dashboard”: is this a tool that shows successes and failures over time, highlighting flaky tests, or a colorful page that doesn’t convey any meaningful data?
  • “Continuous Deployment”: for some teams, this could mean “when I commit code, it is automatically deployed to production”, or it could mean “after I submit a change control request and it is evaluated, approved, and scheduled, then it goes to production”
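To make the first bullet concrete, here is a hypothetical sketch of how "95% code coverage" can be hollow: the test below executes the function, so a coverage tool counts it as covered, but the assertion can never fail. The function and test names are invented for illustration.

```javascript
// Deliberately buggy function under "test"
function add(a, b) {
  return a - b; // bug: subtracts instead of adding
}

// This test executes add(), so coverage tools mark the function
// as covered -- but the check is vacuous and passes for any
// number add() happens to return.
function testAdd() {
  const result = add(2, 3);
  return typeof result === "number";
}

console.log(testAdd()); // true -- "passes" despite the bug
```

Coverage measures what code ran, not what behavior was verified.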

I could go on and on. Even the word “testing” has been highly debated. Are automated scripts that exercise an application’s functionality “tests”, or merely “checks”? My point here is NOT to arrive at common definitions for everyone. My point is that it is very easy to equivocate to give an impression of software quality that is simply not true.

As software testers, we owe it to our end users to be honest about our testing practices. This means reporting our activities to our team with clear definitions and metrics.

Logical Fallacies for Testers IX: The Red Herring Fallacy

You may have heard of the term “red herring” if you have ever read a mystery story. When a mystery author wants to keep their readers guessing about who the murderer is, they may throw in clues that point to another suspect. These clues are called red herrings.

The Red Herring Fallacy is similar; rather than addressing an important issue, the speaker diverts attention from the issue by introducing information that might seem to be related, but is in fact irrelevant.

Here’s a real-world example: in the debate about green energy, some environmentally-minded people point out that the use of solar panels puts stress on the planet because solar farms often reduce the number of trees in a community, and because discarded solar panels can fill landfills. Another proponent of green energy might counter this argument with data about how the solar industry is booming and providing many people with good jobs. While this fact may be true, it doesn’t address the actual issue being discussed, which is whether solar panel usage puts more stress on the planet than it relieves.

The Red Herring Fallacy is present in software testing as well! Here are a couple of examples. A company is getting ready for an important release, and a tester finds a bug one day before the release is scheduled. The team begins discussing whether the fix should go in the next day’s release or wait for the following release, but soon the product owner and the engineering manager get in a heated discussion about how the bug was missed. While it might be important to determine how the bug was missed, it’s not relevant to the current issue, which is to decide when the bug fix should be released.

Another example is a situation where customers are complaining about a feature that an engineering team created. The team is asked to explain why their feature was released with so many bugs. The team responds with the fact that they have a large number of automated tests that run with every release. This fact is not relevant, because the bugs went undetected. The number of automated tests is a red herring; what should be discussed instead is why the team didn’t discover the bugs.

Good software testers and teams know how to stick to the issue at hand when discussing product quality. The next time you are confronted with a problem, remember to focus on the problem and its solution rather than getting distracted by other details.

Logical Fallacies for Testers VIII: Circular Reasoning

This month we continue our journey into logical fallacies with Circular Reasoning. Circular Reasoning can be explained in these two statements:

• X is true because Y is true
• Y is true because X is true

A quick examination of these assertions shows that they aren’t proving anything. It’s possible that neither X nor Y is true, but the person asserting that X is true will go around and around with these two statements as if they prove their assertion.

Here’s an example: your neighbor insists that driving over 55 miles per hour is dangerous. When you ask her to prove that it is dangerous, she says that driving that fast is illegal. Consider those two statements:

• Driving over 55 miles per hour is illegal because it’s dangerous
• It’s dangerous to drive over 55 miles per hour because it’s illegal

That’s Circular Reasoning at work!

We also encounter Circular Reasoning in software testing. Consider these two statements:

• All of our automated tests passed because our feature is working correctly
• We know that our feature is working correctly because all of our automated tests passed

At first glance, this seems to make sense. If our tests are passing, it must be because the feature is working, right? But there is something else to consider here. It’s possible that the tests are passing because they aren’t actually testing the feature.

I learned this lesson several years ago when I first started writing JavaScript tests. I was really proud of my tests and the fact that they were passing, until a developer asked me to create a condition where the value being asserted on was incorrect. I was surprised to see that my test passed anyway!

I wasn’t aware of how promises work in JavaScript. When I thought I was asserting that a value was present on the page, I was actually asserting that the promise of the value was present on the page. I needed to add async/await logic to see my test fail when it was supposed to fail.
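The pitfall above can be sketched in a few lines. Here `fetchValue` is a hypothetical stand-in for a page query; the names are illustrative, not from any real test framework.

```javascript
// Stand-in for an async page query that returns the wrong value
function fetchValue() {
  return Promise.resolve("wrong value");
}

// Broken: without await, `value` is a Promise object, and any
// Promise is truthy -- so this check passes no matter what.
function brokenCheck() {
  const value = fetchValue();
  return Boolean(value);
}

// Fixed: await resolves the promise, so we compare the real value.
async function fixedCheck(expected) {
  const value = await fetchValue();
  return value === expected;
}

console.log(brokenCheck()); // true, even though the value is wrong
fixedCheck("right value").then((ok) => console.log(ok)); // false -- the failure finally surfaces
```

Only the awaited version fails when it is supposed to fail, which is exactly the property a trustworthy test needs.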

To avoid circular logic, make sure to challenge your assumptions. Ask yourself, “How do I really know that this is working?” Test your automated tests: each one should fail if there is a condition present that should cause a failure. Additionally, don’t blindly trust metrics. Dig into the data and make sure that the metrics are measuring what everyone assumes they are measuring.

When we are sure that our tests fail when they are supposed to, and when we are sure that our metrics are measuring what they are claiming to measure, we can then have more confidence that our passing test reports and positive metrics are indicating product quality.

Logical Fallacies for Testers VII: The Hasty Generalization Fallacy

The Hasty Generalization Fallacy is a common one in software testing. But before we look at its impact on testing, let’s learn what it is. This fallacy occurs when someone draws a conclusion based on just one example, or a few examples.

You may have fallen for the Hasty Generalization Fallacy as a child when you met someone from another country for the first time. If they were very nice, you may have concluded that everyone from that country is nice. If they were cold and unfeeling to you, you may have concluded that everyone from that country is cold and unfeeling. This is silly, because countries have millions of people and it’s unreasonable to assume that everyone in an entire country will have the same personality!

This fallacy is very dangerous in software testing, because it results in us not testing enough. If you run one or two tests on a feature, and from those tests determine that the feature is working just fine, and you stop testing, you may miss important bugs. Here are some examples where this might happen:

• Running a passing test in the QA environment, and assuming that it must work in the Production environment without actually checking
• Running a passing test with one Admin user, and assuming that it must work with other types of users, even those that have different permission levels
• Running a passing test on an Android device, and concluding that it must work on an iOS device as well
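One lightweight guard against the second bullet is to parameterize the same check across every user role instead of running it only as Admin. The sketch below is illustrative: `canViewReports` is a hypothetical stand-in for the feature under test, seeded with a deliberate permissions bug.

```javascript
const ROLES = ["admin", "editor", "viewer"];

// Illustrative feature with a permissions bug: only admins
// were ever granted access.
function canViewReports(role) {
  return role === "admin";
}

// Run the same check once per role rather than generalizing
// from a single admin test.
const results = ROLES.map((role) => ({ role, passes: canViewReports(role) }));
console.log(results);
// Testing only as "admin" would report success and hide the
// broken experience for every other role.
```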

Recently, I got this question from a team manager: “Why are we expected to test in the Stage environment when we’ve already tested in the QA environment?” There are so many reasons!

• Data could be different in the Stage environment, which might expose a missed bug
• APIs that the application is dependent upon could be at a different state of release, exposing a bug that might then go on to Production
• Connections, such as those that go to a database or call an email service, could have the wrong connection strings. If the connections are wrong in Stage, they could also be wrong in Production.

Before you stop testing a feature, ask yourself whether you might be committing the Hasty Generalization Fallacy. Think about the other tests you could run to make sure that the feature is really working as expected. Always ask yourself, “What else could I test?”

Logical Fallacies for Testers VI: The Bandwagon Fallacy

This month, I’m taking a look at the Bandwagon Fallacy. This fallacy happens when someone makes a choice because “everyone else is doing it”. When you were a child, you may have tried to convince your mother that you should be allowed to do something because all of your friends were allowed to do it. This is the Bandwagon Fallacy at work!

The Bandwagon Fallacy is prevalent in many areas of society. One area where this is very obvious is with diets. In the 1990s, low-fat, high-carb diets were popular. Then in the 2000s, people switched over to the Atkins Diet, which was a high-fat, low-carb diet. Other recent diet trends include the Whole 30 Diet, the Keto Diet, and the Paleo Diet, all of which have different demands.

Just because a diet is very popular and you know people who feel great and lose weight on it does not mean that it is right for YOU. Everyone is different, and it’s important to run a small test on a diet and see how you feel before jumping on the bandwagon with everyone else.

The Bandwagon Fallacy is also frequently seen in the testing world! Think about how many articles you’ve read recently about AI. It seems that everyone is using it to think of new test cases, write test automation, create self-healing tests, and so on. But as with a diet, just because some teams or testers are finding success with it doesn’t mean it’s right for YOUR project. And dropping your current automation solution just because something new comes along results in wasted time.

Another example of a trending tool is Cypress. Cypress is very popular for both API and UI automation because it’s so easy to set up. Cypress comes with good documentation and examples, and it has a vibrant community. But there are some software projects for which Cypress would not be helpful. Cypress can’t test native mobile code, for instance, and it also doesn’t support Safari. And it only supports JavaScript, so if your team doesn’t know JavaScript, it might be better to use a different tool.

It’s fun to try out new tools and techniques. And it is helpful to learn new skills to stay in-demand by employers. But be sure when you are adopting a tool that you are adopting it because it meets your team’s needs, not just because it’s what everyone else is doing.

Logical Fallacies for Testers V: False Dichotomy

In this installment of my Logical Fallacies series, I’m taking a look at the False Dichotomy fallacy. The False Dichotomy fallacy is used when someone presents two opposing options as if they are the only possibilities; that no middle way exists. This is detrimental to progress because it limits people’s thinking; they feel that they must choose one side or the other. In more extreme cases, this can make people afraid to speak their mind for fear of being associated with the “wrong” side of the debate. And it can make small-minded people unable to look at both sides of an issue objectively.

There are many examples of the False Dichotomy fallacy in society today, but let’s examine one from a few decades ago. Back in the 1990s, there was a phenomenon known as “The Mommy Wars”. This was a debate about whether mothers should stay at home during the day with their children, or whether they should go to work and put their children in day care. The sides were extremely polarized: the stay-at-home moms cited studies that showed that children thrived when they were at home with their mothers, implying that working moms didn’t want what was best for their children, and the working moms group cited studies that showed that women who weren’t working outside the home were unfulfilled, implying that staying at home was hurting the movement for women’s equality.

Of course, with the wisdom of thirty years behind us, we can see that this was a False Dichotomy. It’s possible for moms who stay at home with their children to have thriving at-home businesses, and it’s possible for moms who work to choose flexible hours so they can be with their kids when they come home from school. And we can see that fathers were clearly ignored in this False Dichotomy; today I work with many dads who step away from their desks to pick up their kids from school or drop them off at day care, and dads who work longer hours on some days so they can take one day off every two weeks to spend time with their kids.

In the area of software testing, there are two obvious False Dichotomies. The first is the Manual vs. Automation debate. I’ve written about this before, but I’ll summarize why this is a ridiculous debate:
• “Manual” and “automated” are arbitrary designations. Things can be automated as part of a manual test (such as using a script to create users), and things can be manual as part of an automated test (such as doing a visual check after a script runs).
• There are some things that are best tested through running a script, such as performing a load test, and some things that are best tested through a manual test, such as driving up the road with a cell phone to check that the GPS location services in an app are working correctly.
In order to ensure that our software is of the highest quality, we should use all the tools at our disposal, including our hands and eyes, and think of ways to use those tools as efficiently as possible.
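As a small illustration of automation blended into a manual test, here is a hypothetical helper that builds one fixture account per permission level so an exploratory session starts with realistic users. All field names and the role list are illustrative assumptions, not from any particular product.

```javascript
const PERMISSION_LEVELS = ["admin", "editor", "viewer"];

// Build one throwaway test account per permission level.
// runId keeps usernames unique across repeated sessions.
function buildTestUsers(runId) {
  return PERMISSION_LEVELS.map((level) => ({
    username: `test-${level}-${runId}`,
    permissionLevel: level,
    email: `test-${level}-${runId}@example.com`,
  }));
}

console.log(buildTestUsers(7));
```

A script like this is "automation", yet the session it supports is thoroughly manual and exploratory, which is the point: the labels blur.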

The second common False Dichotomy in software testing is the debate about whether we need software testers at all. Some software teams believe that all their testing can be done by software developers and that testers are irrelevant, while other software teams believe that testing should be left to the testers and that it’s not the job of developers to test their code. I believe that both of these positions are misguided and potentially harmful. When developers try to do all of their own testing, they may miss important bugs that are caused by the interaction between two different feature areas of the software. And when developers don’t test at all, they may create buggy software that slows down the team’s progress as testers log more and more bugs to be fixed.

Software testing is most effective when the whole team focuses on quality. What does this look like? It can vary by team, but here are some examples:
• Developers and testers work together to do manual exploratory testing just before a release. Each engineer uses their own expertise to think of ways to test the application.
• Developers create test harnesses for things that are difficult to test. For example, to test file uploads, a developer could create a web page connected to the API that would allow the tester to easily upload files.
• Testers attend meetings where features are architected to provide important insight about the current behavior of the product, raising concerns if the new feature might impact existing functionality.
• Developers and testers work together on test automation. The tester provides insight about what should be tested, and the developer reviews the code for clean coding practices.

Is your company suffering from a False Dichotomy fallacy? If so, see if you can work with people from the opposing side to brainstorm new and innovative solutions.

Logical Fallacies for Testers IV: The Straw Man Fallacy

This month I’m continuing my look at logical fallacies with the Straw Man Fallacy. The Straw Man Fallacy occurs when someone takes another person’s position and exaggerates it in an extreme way, or makes a counter-assertion that is not relevant to the first person’s position.

This is easier to explain with examples, so let’s take a look at a common one: a teenage girl asks her parents if she can go to a party at her friend’s house when the friend’s parents won’t be home. The girl’s parents say no, and the girl counters with: “Why do you hate me so much?!” Of course the girl’s parents don’t hate her. They are making a decision based on their desire to keep her safe and out of trouble. But the girl’s “logic” is: this party is really important to me; I won’t be popular if I don’t go; my parents don’t want me to be popular; therefore they must hate me.

The Straw Man Fallacy often happens in politics. For example, let’s take a look at some town residents who are trying to decide on their school budget. Some citizens might want a million-dollar budget, while other residents might want a $500,000 budget. The first group might accuse the second group of “not caring about children”, while the second group might accuse the first group of “wanting to evict seniors who can’t pay their tax bill”. Neither argument is true, of course. Nearly everyone cares about children and seniors. This is the Straw Man Fallacy at work.

So what does the Straw Man Fallacy look like for testers? Here’s one example. Let’s say that the developers on your team haven’t been writing unit tests. You could take that information to mean “The developers don’t care about quality!” That is probably not true. Developers don’t want to write bad code. They don’t want the company’s product to fail, because that would be bad for the company and they might lose their job as a result. So what else could it mean when the developers aren’t writing unit tests? It could mean:
• Management isn’t giving them enough time to finish their stories, so they are always rushing and don’t have time to write the tests
• They don’t know how to write unit tests
• They know how to write the tests, but the company’s infrastructure doesn’t support running them in any meaningful way

The next time you find someone at work opposing one of your ideas, or not implementing a process that you think is important, rather than thinking that they don’t care about testing or quality, or that they are out to get you, ask this question instead:
What else could this mean?

Be creative in answering this question. You will probably be able to come up with a lot of alternative explanations. Asking the other person or group of people why they are thinking or acting as they are can also yield great insights. And once you and others understand what the issues really are, you can avoid the Straw Man and move forward with brainstorming new solutions.

Logical Fallacies for Testers III: Appeal to Authority

As you can no doubt guess from the title, this is the third post in my series about logical fallacies. (You can find the first two posts here and here.) Logical fallacies are important for testers to learn about because doing so can help keep them from making mistakes in judgment that will impact their testing speed and accuracy. This is especially true of our third fallacy: the Appeal to Authority.

The Appeal to Authority fallacy happens when someone makes the argument that because an expert said so, something must be true. While many times experts are correct in their assessments, there are also times when they are wrong. They could be blinded by their own cognitive bias, they could have a motivation for not telling the truth, or what they say might be correct in some situations but not in others.

Furthermore, sometimes an “authority” isn’t an authority in the area where they are making a pronouncement. A perfect example of this is when a famous actor, singer, or sports figure weighs in on a political situation. Being a great actress doesn’t make someone an expert in fiscal policy, or a good judge of character in a presidential race.

How is the Appeal to Authority used in testing situations? There are two main ways. One is when a tester adopts a testing framework or tool because a testing expert recommends it. To be clear, there’s absolutely nothing wrong with this! But there are times when the framework or tool might not be right for that particular situation. It’s important for testers to take stock of their testing needs, look at the pros and cons of the tool or framework, and then make an informed decision, rather than blindly following an expert.

A second, more problematic, situation occurs when a company decides to hire a testing consultant rather than listening to their own employees. Testing consultants can provide valuable guidance, but they should not be used as a replacement for listening to the company’s own testers. Let’s use our hypothetical company, Cute Kitten Photos, to examine what can happen in this scenario.

Susie is the lead tester for Cute Kitten Photos. She wants to set up test automation for the mobile app, so she does her research and determines that Tool Y will be the best tool for creating reliable mobile test automation for the company. When she presents this data to her manager, Dharmesh, however, he is unconvinced. Dharmesh thinks that Tool Y is much too expensive, and that there must be a better, cheaper way. He hires a consultant named Bob to take a look at the mobile app and make his own recommendations. Bob takes several weeks to examine the app and then submits his report: Tool Y will be the best strategy for reliable test automation for Cute Kitten Photos! Then he submits his bill for $50,000.

Susie was correct in her original assessment, but Dharmesh didn’t listen because in his mind, she wasn’t an authority. How could Susie have made a more compelling case? In this case, Dharmesh was concerned about the expense of Tool Y, which made him willing to spend extra money for an expert. Susie could have asked for a few weeks to do a proof of concept with Tool Y. She could then have asked the sales representative at Tool Y to give her a trial account for those few weeks. Dharmesh would hopefully be convinced that spending $0 on a proof of concept would be better than spending $50,000 for a consultant. At the end of the trial period, Susie could demonstrate her success with the tool, showing Dharmesh that paying for Tool Y would be a good decision.

It’s important for all of us, testers and managers alike, to consider whether an authority should be followed, or even needed, when we are making decisions. Examining the evidence and using clear reasoning can keep us on the right path with our testing choices.

Logical Fallacies for Testers II: The Sunk-Cost Fallacy

In last month’s post, I introduced a new theme for my blog posts in 2023! Each month, I’ll be examining a different type of logical fallacy, and how the fallacy relates to software testing.

This month we’ll be learning about the Sunk-Cost Fallacy. The Sunk-Cost Fallacy happens when someone has made a decision that turns out not to be the right decision, but because they have already spent so much time, money, or energy on the decision, they decide to continue with their choice rather than make a new choice.

Here’s an example: let’s say that over the holiday season you were so inspired by all the TV commercials you saw for stationary exercise bikes that you decided to splurge and purchase one. You figure this equipment will help you stick to your New Year’s resolution to get more exercise.

The bike arrives and you start using it on January 1. By January 5, you have determined that you absolutely hate the exercise bike. While at a friend’s house, you try out their rowing machine and you discover that you love it! But because you’ve spent so much money on the bike, you feel like you have no choice but to continue to use it. By January 13, you have abandoned your resolution and the bike now becomes a very expensive repository for jackets and hoodies.

You could have decided to sell the exercise bike and purchase a rowing machine instead. You may have lost a bit of money in the process, but the end result would have been that you would own a piece of exercise equipment that you would actually use. Instead, the Sunk-Cost Fallacy has kept you stuck with a bike that you don’t want.

The most common example of the Sunk-Cost Fallacy in software testing is continuing to use an automation tool that’s not working for you. Let’s take a look at this with our hypothetical social media software company, Cute Kitten Photos.

The Cute Kitten Photos test team has decided that they need a tool for test automation to help save them time. Because many of the testers don’t have coding experience, they decide to purchase a low-code automation tool. The test team dives in and starts creating automated tests.

The first few tests go well, because they are fairly straightforward use cases. But when the team starts adding more complex scenarios, they begin having problems. The testers with coding experience take a look at the code generated by the tests, and it’s really hard to understand because it uses a lot of code specific to the test tool. So some of the developers on the team jump in to help.

It takes a lot of time, but finally a complete automated test suite is hacked together. The team sets the tests to run daily, but soon they discover another problem: the tests they edited are flaky. The team spends a lot of time trying to figure out how to make the tests less flaky, but they don’t arrive at any answers. So they wind up appointing one tester each week to monitor the daily test runs and manually re-run any of the failing tests, and one tester to continue working on fixing the flaky tests.

So much for saving time! Half the team is now spending their time keeping the tests running. At this point, one of the testers suggests that maybe it’s time to look for another tool. But the rest of the team feels that they’ve invested so much money, time, and energy into this tool that they have no choice but to keep using it.

Are you using any tools or doing any activities that fall under the Sunk-Cost Fallacy? If so, it may be time to take a fresh look at what you are doing and see if there’s a better alternative. If you have signed an expensive contract, you could continue to use the tool for existing tests while exploring open-source or lower-cost alternatives. Or you could abandon the tool altogether if it’s not providing any value. The bottom line is, it’s best to stop engaging in activities that are wasting time and money, even if they once seemed like a good idea.