Logical Fallacies for Testers VIII: Circular Reasoning

This month we continue our journey into logical fallacies with Circular Reasoning. Circular Reasoning can be summed up in these two statements:

• X is true because Y is true
• Y is true because X is true

A quick examination of these assertions shows that they don’t prove anything. It’s possible that neither X nor Y is true, but the person asserting that X is true will go around and around with these two statements as if they prove the assertion.

Here’s an example: your neighbor insists that driving over 55 miles per hour is dangerous. When you ask her to prove that it is dangerous, she says that driving that fast is illegal. Consider those two statements:

• Driving over 55 miles per hour is illegal because it’s dangerous
• It’s dangerous to drive over 55 miles per hour because it’s illegal

That’s Circular Reasoning at work!

We also encounter Circular Reasoning in software testing. Consider these two statements:

• All of our automated tests passed because our feature is working correctly
• We know that our feature is working correctly because all of our automated tests passed

At first glance, this seems to make sense. If our tests are passing, it must be because the feature is working, right? But there’s another possibility: the tests may be passing because they aren’t actually testing the feature.

I learned this lesson several years ago when I first started writing JavaScript tests. I was really proud of my tests and the fact that they were passing, until a developer asked me to create a condition where the value being asserted on was incorrect. I was surprised to see that my test passed anyway!

I wasn’t aware of how promises work in JavaScript. When I thought I was asserting that a value was present on the page, I was actually asserting on the promise of that value, and a promise object is always truthy, so the assertion could never fail. I needed to add async/await logic so that my test would fail when it was supposed to fail.
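
To make this concrete, here’s a minimal sketch of that kind of bug, written as Mocha-style tests with Node’s built-in assert module. The getGreeting() helper is made up for illustration; it stands in for whatever page-object call fetches the value (my actual project used different helpers):

```javascript
const assert = require('assert');

// Hypothetical helper standing in for a page-object call that reads
// text from the page. Because it's async, it returns a Promise.
async function getGreeting() {
  return 'Hello, world!';
}

describe('greeting', function () {
  // BROKEN: getGreeting() returns a Promise, and a Promise object is
  // always truthy, so this assertion passes no matter what value the
  // page actually shows.
  it('passes even when the value is wrong', function () {
    assert.ok(getGreeting());
  });

  // FIXED: awaiting the Promise asserts on the resolved value, so this
  // test fails whenever the page text is wrong.
  it('fails when it is supposed to fail', async function () {
    assert.strictEqual(await getGreeting(), 'Hello, world!');
  });
});
```

Try changing the string that getGreeting() returns: the first test still passes, while the second one fails the way a good test should.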

To avoid circular logic, make sure to challenge your assumptions. Ask yourself, “How do I really know that this is working?” Test your automated tests: each one should fail if there is a condition present that should cause a failure. Additionally, don’t blindly trust metrics. Dig into the data and make sure that the metrics are measuring what everyone assumes they are measuring.
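
One lightweight way to “test your tests” is to deliberately feed an assertion a condition that should fail and confirm that it really does. Here’s a hedged sketch in the same Mocha style; applyDiscount() is a made-up function under test:

```javascript
const assert = require('assert');

// Hypothetical function under test.
function applyDiscount(price, percent) {
  return price - price * (percent / 100);
}

describe('applyDiscount', function () {
  // The real test.
  it('applies a 10% discount', function () {
    assert.strictEqual(applyDiscount(100, 10), 90);
  });

  // Sanity check on the test itself: assert that a deliberately wrong
  // expectation throws. If this check fails, the inner assertion never
  // threw, which means the real test could never fail either.
  it('really does fail on a wrong value', function () {
    assert.throws(() => assert.strictEqual(applyDiscount(100, 10), 85));
  });
});
```

In day-to-day work you’d more often do this manually (temporarily break the code or the expected value and watch the suite go red) than commit a meta-test, but the principle is the same: a test you’ve never seen fail proves nothing.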

When we are sure that our tests fail when they are supposed to, and that our metrics measure what they claim to measure, we can have more confidence that passing test reports and positive metrics actually indicate product quality.