In last month’s post, I introduced a new theme for my blog posts in 2023! Each month, I’ll be examining a different type of logical fallacy, and how the fallacy relates to software testing.
This month we’ll be learning about the Sunk-Cost Fallacy. The Sunk-Cost Fallacy happens when someone makes a decision that turns out to be the wrong one, but because they have already spent so much time, money, or energy on it, they continue with their original choice rather than making a new one.
Here’s an example: let’s say that over the holiday season you were so inspired by all the TV commercials you saw for stationary exercise bikes that you decided to splurge and purchase one. You figure this equipment will help you stick to your New Year’s resolution to get more exercise.
The bike arrives and you start using it on January 1. By January 5, you have determined that you absolutely hate the exercise bike. While at a friend’s house, you try out their rowing machine and you discover that you love it! But because you’ve spent so much money on the bike, you feel like you have no choice but to continue to use it. By January 13, you have abandoned your resolution, and the bike has become a very expensive repository for jackets and hoodies.
You could have decided to sell the exercise bike and purchase a rowing machine instead. You might have lost a bit of money in the process, but you would have ended up owning a piece of exercise equipment that you actually use. Instead, the Sunk-Cost Fallacy has left you stuck with a bike that you don’t want.
The most common example of the Sunk-Cost Fallacy in software testing is continuing to use an automation tool that’s not working for you. Let’s take a look at this with our hypothetical social media software company, Cute Kitten Photos.
The Cute Kitten Photos test team has decided that they need a tool for test automation to help save them time. Because many of the testers don’t have coding experience, they decide to purchase a low-code automation tool. The test team dives in and starts creating automated tests.
The first few tests go well, because they are fairly straightforward use cases. But when the team starts adding more complex scenarios, they begin having problems. The testers with coding experience take a look at the code the tool generates for those tests, and it’s really hard to understand because it relies heavily on syntax specific to the test tool. So some of the developers on the team jump in to help.
It takes a lot of time, but finally a complete automated test suite is hacked together. The team sets the tests to run daily, but soon they discover another problem: the tests they edited are flaky. The team spends a lot of time trying to figure out how to make the tests less flaky, but they don’t arrive at any answers. So they wind up appointing one tester each week to monitor the daily test runs and manually re-run any failing tests, and another tester to continue working on fixing the flaky tests.
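(As an aside: in most code-based test frameworks, re-running a failed test is a line or two of configuration rather than a weekly staffing assignment. Here’s a minimal sketch using pytest with the pytest-rerunfailures plugin; the URL and the test scenario are made up for illustration.)

```python
# Minimal sketch: automatically retrying a flaky test with pytest and the
# pytest-rerunfailures plugin (pip install pytest pytest-rerunfailures requests).
# The URL and scenario below are hypothetical placeholders.
import pytest
import requests

@pytest.mark.flaky(reruns=2, reruns_delay=5)  # retry up to twice, 5 seconds apart
def test_kitten_feed_responds():
    response = requests.get("https://staging.cutekittenphotos.example/feed", timeout=10)
    assert response.status_code == 200
```

In the Cute Kitten Photos story, though, the team is doing all of this babysitting by hand, which brings us to the real cost.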
So much for saving time! Half the team is now spending their time keeping the tests running. At this point, one of the testers suggests that maybe it’s time to look for another tool. But the rest of the team feels that they’ve invested so much money, time, and energy into this tool that they have no choice but to keep using it.
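What might “another tool” look like? Just as a rough, hypothetical sketch: one of the team’s straightforward use cases, rewritten in an open-source framework like Playwright for Python, might be no more complicated than this (the URL, selectors, and credentials are all invented for the example):

```python
# A rough sketch of one straightforward use case rewritten in an open-source
# framework (Playwright for Python). All URLs, selectors, and credentials
# are hypothetical placeholders for the Cute Kitten Photos app.
from playwright.sync_api import sync_playwright

def test_upload_button_visible_after_login():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://staging.cutekittenphotos.example/login")
        page.fill("#username", "test-kitten-fan")
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # After logging in, the photo upload button should appear on the feed page
        assert page.is_visible("#upload-photo")
        browser.close()
```

The point isn’t that this particular framework is the right answer for every team; it’s that the cost of evaluating an alternative is often much smaller than the cost of continuing to babysit a tool that isn’t working.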
Are you using any tools or doing any activities that fall under the Sunk-Cost Fallacy? If so, it may be time to take a fresh look at what you are doing and see if there’s a better alternative. If you have signed an expensive contract, you could continue to use the tool for existing tests while exploring open-source or lower-cost alternatives. Or you could abandon the tool altogether if it’s not providing any value. The bottom line is, it’s best to stop engaging in activities that are wasting time and money, even if they once seemed like a good idea.