Logical Fallacies for Testers II: The Sunk-Cost Fallacy

In last month’s post, I introduced a new theme for my blog posts in 2023! Each month, I’ll be examining a different type of logical fallacy, and how the fallacy relates to software testing.

This month we’ll be learning about the Sunk-Cost Fallacy. The Sunk-Cost Fallacy happens when someone makes a decision that turns out to be wrong, but because they have already spent so much time, money, or energy on it, they stick with their original choice rather than changing course.

Here’s an example: let’s say that over the holiday season you were so inspired by all the TV commercials you saw for stationary exercise bikes that you decided to splurge and purchase one. You figure this equipment will help you stick to your New Year’s resolution to get more exercise.

The bike arrives and you start using it on January 1. By January 5, you have determined that you absolutely hate the exercise bike. While at a friend’s house, you try out their rowing machine and you discover that you love it! But because you’ve spent so much money on the bike, you feel like you have no choice but to continue to use it. By January 13, you have abandoned your resolution, and the bike has become a very expensive repository for jackets and hoodies.

You could have decided to sell the exercise bike and purchase a rowing machine instead. You may have lost a bit of money in the process, but the end result would have been that you would own a piece of exercise equipment that you would actually use. Instead, the Sunk-Cost Fallacy has kept you stuck with a bike that you don’t want.

The most common example of the Sunk-Cost Fallacy in software testing is continuing to use an automation tool that’s not working for you. Let’s take a look at this with our hypothetical social media software company, Cute Kitten Photos.

The Cute Kitten Photos test team has decided that they need a tool for test automation to help save them time. Because many of the testers don’t have coding experience, they decide to purchase a low-code automation tool. The test team dives in and starts creating automated tests.

The first few tests go well, because they are fairly straightforward use cases. But when the team starts adding more complex scenarios, they begin having problems. The testers with coding experience take a look at the code generated by the tests, and it’s really hard to understand because it uses a lot of code specific to the test tool. So some of the developers on the team jump in to help.

It takes a lot of time, but finally a complete automated test suite is hacked together. The team sets the tests to run daily, but soon they discover another problem: the tests they edited are flaky. The team spends a lot of time trying to figure out how to make the tests less flaky, but they don’t arrive at any answers. So they wind up appointing one tester each week to monitor the daily test runs and manually re-run any of the failing tests, and one tester to continue working on fixing the flaky tests.

So much for saving time! Half the team is now spending their time keeping the tests running. At this point, one of the testers suggests that maybe it’s time to look for another tool. But the rest of the team feels that they’ve invested so much money, time, and energy into this tool that they have no choice but to keep using it.

Are you using any tools or doing any activities that fall under the Sunk-Cost Fallacy? If so, it may be time to take a fresh look at what you are doing and see if there’s a better alternative. If you have signed an expensive contract, you could continue to use the tool for existing tests while exploring open-source or lower-cost alternatives. Or you could abandon the tool altogether if it’s not providing any value. The bottom line is, it’s best to stop engaging in activities that are wasting time and money, even if they once seemed like a good idea.

Logical Fallacies for Testers I: The Causal Fallacy

Lately I’ve been thinking about thinking; specifically, critical thinking skills and how important they are for everyone, especially testers. When testers can’t think critically, they aren’t able to diagnose software problems quickly or find good solutions to testing challenges. In light of this, I’ve decided to focus on critical thinking in my blog posts this year!

Each month, I’ll be writing about a different logical fallacy. A logical fallacy is a common reasoning error that most of us make when thinking about a problem. Logical fallacies are often made when people are arguing for a specific side in a debate, but they are also found when trying to get at the root cause of a problem. In each of my blog posts, I’ll describe the logical fallacy of the month, give a typical example, and then describe how it can be found in software testing. Then I’ll invite you to look for ways you have used that fallacy in your own testing. This month, we begin with the Causal Fallacy.

The Causal Fallacy happens when someone takes two separate events and determines that one causes the other because they correlate. For example, let’s say that researchers on Amity Island are investigating why there have been so many shark attacks that summer. They take a look at other data they have for the island, and they discover that ice cream sales are up that summer as well. They come to the conclusion that all that ice cream consumption must be causing the increase in shark attacks.

Obviously this is ridiculous! A correlation between two data points does not mean that one causes the other. (For some really funny examples of data correlation, see Tyler Vigen’s website, Spurious Correlations.)

Let’s take a look at a software testing example. Imagine there is a social media site, Cute Kitten Photos, where users can create an account and share photos of their kittens. Every Friday, the data collection team at Cute Kitten Photos runs a series of reports where they determine the weekly usage of the platform and the most liked photos for the week. Also every Friday, the IT team has discovered that the CPU usage on the servers has spiked to dangerous levels, so much so that some users are getting 500 errors when they try to go to the site.

It’s pretty clear that these data collection reports are causing the CPU spikes, right? That’s what the IT team thinks! But the data collection team is sure that they are not causing the problem. They point to data that shows that their reports cause very little load on the system.

Finally, a deeper investigation discovers that one user of the platform has been sharing TIFF images of their kittens. The software is not handling this image type well, which is causing several retries, and retries of the retries, putting the heavy load on the servers.

From this example, it’s easy to see that correlation does not mean causation, and that the first and most obvious cause of a problem is not always the right one.

In the next few weeks, when you encounter an odd bug that seems to have an obvious cause, ask yourself, “What else could this mean?”

How to Get Your Bug Fixed

I’ve posted in the past about how to make sure that you really have a bug before you log it; how to investigate a bug; and how to log the bug once you have finished investigating it. But I’ve never posted about how to get your bug fixed. Even if you log a fabulously detailed and clearly described bug, it’s still possible that your developer or your team will decide it’s not worth fixing. So in this post, we’ll take a look at five things you can do to help your bug get fixed.

One: Choose Your Battles
It may seem counterintuitive, but if you argue strongly for every single bug you find to be fixed, you may actually lose the attention of your team. Development teams are always going to leave some bugs behind, either as tech debt to fix later or as bugs marked “Won’t fix”. So instead of fighting for every single bug, choose the ones that are the most important.
How do you know which ones are the most important? Consider two things: first, the user impact of the bug. A bug in which the Submit button is two pixels off from the center of the screen is not going to have a great impact on the user. But a bug in which a user can’t submit a form is going to have a much bigger impact. Second, consider the likelihood that the user will encounter the bug. If the steps to repro the bug involve clicking the Back button twenty times, resizing the browser window twice, and then refreshing the browser, a user probably isn’t very likely to encounter it. But a bug in which a customer can’t input their credit card is very likely to be seen. The bugs to advocate for fixing are those that a user is likely to encounter and will greatly affect the user.

Two: Show the Bug in Action
Sharing a bug will often have much more impact if developers see it for themselves. There are several different ways to do this. First, you can take a video of the bug and attach it to the bug report. This can be especially impactful if you have a video of the feature working the right way as well, so the developer can compare the two. Second, you can invite your developer to your desk, either in real life or virtually, to show them the bug in action. Third, and this is often the most helpful, you can go to the developer’s desk, either in real life or virtually, and have them go through the steps in the application that will show them the bug. When the developer experiences the bug first-hand, they are more likely to empathize with the end user.

Three: Walk Through the User’s Experience
This technique can be used along with the previous strategy, or it can be done in written form. As you demonstrate or describe the bug, point out what the user will be thinking at each step in the process. For example, “The user wants to remove something from their shopping cart. When they change the quantity of that item to 0 and save, they are expecting that the item will disappear from the cart. When the item is still there, they will feel confused and frustrated.”

Four: Share Customer Feedback From Similar Issues
If you have access to customer complaints, see if you can find complaints for an issue similar to your current bug. For instance, if you are dealing with a bug where a calendar tool is loading the wrong month, maybe you can find complaints from a bug you had last year where the calendar was too small. Find the most frustrated customers you can, and read those complaints from last year to the team. This will demonstrate just how important the feature is to your customers.

Five: Point Out the Potential Team Impact
People are often motivated by things that will affect them personally. If your tales of customer woe don’t move your team, they might be moved by these warnings:
• the bug will result in bad data getting into the database, which will be a pain to clean up later
• if a customer complains about this issue in the middle of the night, someone on your team will get paged to fix it
• if the CEO of your company sees the bug when demonstrating your application to peers, they’ll be unhappy and will ask your team why they didn’t do anything about it.

Software development is a time-critical venture, and there will always be tradeoffs between speed to delivery and quality. But with these five tips, you will be able to get your most important bugs fixed.


The Importance of Test Users

Anyone who does any type of software testing understands that having test users is a necessary part of the process. Generally you can’t log into the production version of an application as an actual user because of security concerns, and test environments don’t have real users. In this post, I’ll talk about why test users are so important, and offer suggestions on how to care for them.

Test users make manual testing easier

Most applications have different user levels. For example, there’s often an Admin user that can do things that ordinary users can’t. In HR software, there are Supervisor users that can do things that regular Employee users can’t, such as approve time-off requests or submit a promotion recommendation.
Users often have different configurations as well. There might be users on a paid plan, who have access to more features than the users on the free plan. Or users might have chosen different settings for their account, such as using dark mode, or limiting who can see their posts.
It’s important to test all these different scenarios, and because of this, it’s a great idea to have configured test users all ready to go when it’s time to test. You don’t want to have to set up a bunch of users from scratch each time you have something new to test, because this wastes time that could be used for testing.

Test users make automated testing more complete

Because of all the scenarios mentioned above, it’s worthwhile to set up automated testing so that many of those scenarios can be tested quickly. For example, you could set up a test that validates the presence of elements on the home screen, and then run the test twice, once for a user on the free plan and once for a user on the paid plan. This saves you from having to manually log in as each user and validate what’s on the home screen.
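As a rough sketch of how that might look (the cy.login command, user accounts, and selectors below are hypothetical stand-ins, not part of any real application), a Cypress test could loop over a list of test users:

    // Hypothetical test users; in a real project these would come from
    // your team's maintained list of test users
    const users = [
      { email: 'free-user@example.com', plan: 'free' },
      { email: 'paid-user@example.com', plan: 'paid' },
    ];

    users.forEach((user) => {
      describe(`Home screen for a ${user.plan}-plan user`, () => {
        it('shows the expected elements', () => {
          cy.login(user.email); // assumed custom login command
          cy.visit('/home');
          cy.get('[data-test="nav-bar"]').should('be.visible');
          if (user.plan === 'paid') {
            // only paid users should see the premium features panel
            cy.get('[data-test="premium-features"]').should('be.visible');
          }
        });
      });
    });
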
The more configuration combinations you have, the more important it is to set up automated tests for many or all those configurations. This way you can catch obscure bugs before they make it to production.

Test users allow you to troubleshoot issues quickly

When real-life users have a problem with the software, you’ll want to diagnose the problem as quickly as possible. You may need to log into the production environment, but can’t use a real user’s credentials. You’ll want to have a test user with a similar configuration to the real users available in order to reproduce the issue. Then, when a developer codes a fix for the problem, you’ll want to have a test user with a similar configuration to use in the test environment to validate the fix. If you don’t have these users ready, you’ll need to spend valuable time setting them up; this will slow down the debugging and testing process, and the real-life users will have to wait longer for a fix for their problem.

How to care for your test users

Test users are only good if they are kept up-to-date! It’s so frustrating to try to log in with a test user’s account only to discover that the password has changed and you don’t know what it is. Because of this, it’s important to care for your test users in these ways:

Assign someone to configure and maintain your test users

The person who does this should be the most organized person on the team. Or, if they don’t want that permanent commitment, you can have the job rotate from person to person every quarter or every year. This person is responsible for setting up the test users, keeping a list of those users with their login information, and updating the users when their passwords expire or when something else changes.
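As a purely illustrative sketch, that list could be as simple as a shared file like the one below (the accounts and fields are invented; actual passwords are better kept in a password manager or secrets store than in the list itself):

    [
      {
        "username": "admin-test@example.com",
        "role": "Admin",
        "plan": "paid",
        "notes": "Dark mode enabled; posts visible to everyone"
      },
      {
        "username": "employee-test@example.com",
        "role": "Employee",
        "plan": "free",
        "notes": "No phone number on file; posts limited to friends"
      }
    ]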

Share the test users with your team

Developers love it when the team has a list of test users they can refer to! Because they are not testing every day, they might not be as familiar as you are with which users to use for testing. Having a list that everyone on the team can refer to means that developers can quickly find the users they need to test out the new feature they just developed.

Share your test users with other teams

I am often mystified when I ask a tester on another team for a test user that I can use to test their application area, and they don’t have one handy. How on earth are they testing? Do they just test with the Admin user over and over again? It’s so helpful for cross-team collaboration when each team has some test users that they can share with other teams. It allows everyone to get their work done more quickly! But it’s very important when someone on another team shares their test users with you to respect those users: don’t change their passwords, usernames, or any other important settings. And remember that what seems like a small change to a user when interacting in your application area might mean a huge change in someone else’s area.

Test users save time

With a little bit of preparation and organization, you can have a host of test users that will streamline your testing. Test cases will be executed more quickly, bugs will be caught sooner, and issues in production can be diagnosed at lightning speed.

What’s In a Name?

Software development teams face all kinds of challenges. They need to learn new technologies while keeping legacy products running. They need to balance addressing tech debt with adding new features quickly. With all of these challenges, why should anyone care what groups, teams, products, or tests are named? Here are five reasons why:

Reason One: Names provide a common language

If you work for a company that is growing rapidly, you may find that restructuring happens regularly. In times of change, it’s really important to make sure that you are all using the same names for the same things. Consider a large company that is divided up into several large groups. Those groups in turn are divided into smaller groups. Then those groups are divided into even smaller groups, and finally those smallest groups are divided into teams.

What are you going to call all those groups? Perhaps the largest group you are a part of is calling itself a “Division” and the next level down a “Category”, but a different group is calling the largest group a “Category”. When someone in a third group sends out an email saying that all the Category leaders should meet on Monday, how do you know what level of group they are referring to? Making sure that every group shares the same nomenclature ensures that teams can understand each other.

Reason Two: Names save time

Giving something a name, and sharing that name with others, saves a great deal of time in communication. Imagine that you are a part of a retail company that has departments that do data analytics, and departments that focus on marketing. You often have meetings with people from all of these departments. When you refer to these groups, would it be easiest to say “All of the data analytics teams plus all of the marketing teams”, or “Analytics and Marketing”, or even “A&M”? By giving this conglomeration of teams a special name, and sharing that name with others, it’s easy to refer to the group in conversation, chat, and email.

Reason Three: Names prevent misunderstandings

Sometimes an incorrectly named item can result in misunderstandings between teams. I recently joined my company’s Mobile team, and we often test on physical devices. Some of those devices are ones at our own desks, and some of those devices are accessed through BrowserStack. We had been referring to the devices at our desks as “physical devices”. Because of this label, developers and testers on other teams were assuming that the BrowserStack devices weren’t physical devices. They would tell us, “We need to test on a real device”. To combat this misunderstanding, I’ve now started to refer to the devices on my desk as “devices in hand” rather than “physical devices”.

Reason Four: Names provide a sense of clarity

Testers are very familiar with the fact that a term like “smoke test” will mean different things to different companies. The definition might even vary among different teams in the same company, or among different testers in the same team. We ran into this issue at my company when we wanted to create a series of quick smoke tests to ascertain the health of every application. After a long series of discussions, we finally created a new term: System Smoke Tests. These are tests that simply run a GET call to every API and validate that every application will load, and do no more than that. Having this shared term makes it easy to refer to our test project and trust that everyone understands what is expected.
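To make the idea concrete, here’s a minimal sketch of what a System Smoke Test could look like (the URLs are hypothetical, and Cypress is just one option; any tool that can make a GET request would do):

    // A System Smoke Test: one GET call per application, and no more than that
    const applications = [
      'https://app-one.example.com',
      'https://app-two.example.com',
    ];

    describe('System Smoke Tests', () => {
      applications.forEach((url) => {
        it(`${url} loads`, () => {
          // cy.request fails the test automatically on a non-2xx response
          cy.request('GET', url).its('status').should('eq', 200);
        });
      });
    });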

Reason Five: Names define a purpose

In small companies, or areas of a company where all of the teams are doing exactly the same thing (such as the sales teams), it can be fun to create whimsical team names. It builds a sense of camaraderie, and sometimes even a sense of pride, if the name is something like “The Awesome Avengers”. But in large companies, having team names like this is a recipe for frustration.

Imagine that you are a new tester at a social media company. You are testing a feature that depends on video playback, and you’re seeing an issue. The developer you’re working with tells you to take the issue up with the video playback team. So you look in the team directory to find the correct team, but you don’t see a team named “Video” or “Video Playback”. Which team is the correct one? The Otters? The Spartans? It’s impossible to tell. When names define a purpose, it makes communication easier for everyone.

What’s In a Name?

A whole lot is in a name! A common language, a sense of clarity, a shared purpose, and the ability to save time and communicate clearly. Why not take some time this week to examine your company’s names and see if they are working for you?

Working With Your Product Owner

I didn’t understand the importance of Product Owners until I created my own web app. It was such a simple app (you can see it at https://thinking-tester-contact-list.herokuapp.com), but I had to figure out how to get from one page to another, and how to make sure a user never gets stuck at a dead end. This was harder than I thought it would be. Then I understood that the work that Product Owners do is about more than just designing pages! It’s about making sure that the user has a great experience doing what they need to do in your application.

Testers share the desire to make users happy, so it’s a great idea for them to work with Product Owners to achieve that goal! Below are four steps for working with your Product Owner.

Step One: Attend planning meetings
I will freely admit that planning meetings are not my favorite meetings. I like to have a list of things to do and attack that list, and often planning meetings are a free-form exchange of ideas. But it’s important to attend these meetings, because seeing new features take shape can get you started with thinking about how to test them. Also, because you interact with your application so often, you may have more knowledge about its features than your Product Owner does, especially if they are new to your company. You can use that knowledge to point out potential problems with a plan. For example, at one company where I worked, the Product Owner and developers were redesigning their reporting tool. During a planning meeting I was able to remind the group that the tool needed to work with an existing assignment engine.

Step Two: Have 3 Amigos meetings
A 3 Amigos meeting is a meeting with you, a developer on your team, and your Product Owner. This is where discussions take place about how the feature will be built and what the Acceptance Criteria will be. You are critical to this meeting because you will be able to ask questions about how the feature will work that the Product Owner and the developer might not have thought of. You can also help write the Acceptance Criteria to reflect important negative cases. For example, if your team is building a new SMS feature, you could suggest that one of the Acceptance Criteria should be that the system handles cases where the user hasn’t added a phone number.

Step Three: Test above and beyond the Acceptance Criteria
Even though you helped create the Acceptance Criteria, there are probably many more things to test beyond those AC! You’ll want to test on a variety of different browsers and devices, you’ll want to test how the new feature works with other features, and you’ll want to discover what happens in rare edge cases, such as clicking the back button several times or losing your internet connection during a transaction. When you discover anything that could be a potential problem, discuss it with both the developer and the Product Owner to see if it’s important to fix before the release.

Step Four: Have your Product Owner do Acceptance Testing
You’ve now tested the feature extensively, and you’re feeling good about it. Because you attended the planning meetings, you probably also understand very well how the feature will be used. But before you release the feature, it’s important to have the Product Owner do some testing to make sure they are really getting what they want. Once I was testing a new email feature, and the emails being sent were not formatted in the way the Product Owner was expecting. The developer on the team was then able to re-format the email so that it looked much more professional before the feature was released.

One of the great things about working on a software team is that all the team members have different skills. As a tester, you know the application really well and you can think of great edge cases to test. Your Product Owner understands the business needs of your application and how to craft user journeys. Working together, the two of you can make sure that your users have a great experience using your software!

One Button

As software testers, we have a lot of different things to think about.  We need to test new features and existing features.  We need to make sure different features work correctly together.  We need to run manual tests and make sure that our test automation is running correctly.  And we need to test on different browsers and platforms.  

Because of this, it’s often easy to get bogged down in our day-to-day work and forget that the whole point of our testing is to make sure that our end users have a good experience with our product.  This point was really driven home to me recently when I had an experience as a customer.  Here’s my sad story.

I was making a change to a financial account, and the change required that I fill out an online form that was several pages long.  I took the time to fill out the entire form, and when I got to the end, there was a Submit button.  I clicked the button, and…nothing happened.

I was hoping this was just a fluke, so I waited a day and tried submitting my form again.  No luck.  Then I tried a different browser.  No luck.  Then I submitted a bug report on the company’s website and waited a few more days.  I tried the Submit button again, and still nothing happened. 

Then I called customer service.  I spent an hour and a half on the phone talking with four different representatives: two were customer service reps and two were tech service reps.  The service reps told me to try all sorts of things, including rebooting my router, which seemed like a very odd suggestion to me.  Finally the last customer service rep told me there was nothing they could do; I was going to have to print out the form and fill it out manually, which was a further waste of my time.

Ultimately, it turned out that the problem was that I was filling out the wrong form for my account type.  But where was the validation for this?  When I started to fill out the form and I added my account number, there should have been some validation that determined that I was using the wrong form and returned an error telling me that.  Instead I wasted hours of my time trying to submit the wrong form. 
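Here’s a rough sketch of the kind of early check I mean (the endpoint and field names are invented for illustration; I have no idea how the real system is built):

    // Validate the account type as soon as the account number is entered,
    // instead of waiting until the user submits the entire multi-page form
    async function checkAccountType(accountNumber, formType) {
      // Hypothetical endpoint that returns details for an account
      const response = await fetch(`/api/accounts/${accountNumber}`);
      const account = await response.json();
      if (account.type !== formType) {
        return `This form cannot be used for ${account.type} accounts.`;
      }
      return null; // no error; the user can keep going
    }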

So what can we learn from my experience?  The testing team at that financial institution probably has hundreds of tests that exercise the functionality of this form and other forms. The testers might be tempted to say, “It’s just one button”.   But to me, the end user, this one button represented hours of aggravation and wasted time.

The moral of the story is to remember to think about customer workflows when you are testing.  What are the customers doing when they interact with your application?  If you don’t know, talk to the Product Owners at your company and find out!  What kinds of things do the customers typically do wrong?  How can you help the customer via the application when they do something wrong?

One button seems like a little thing when you have so many things to test, but to your customer, that button could mean everything.

The Ideal Tester-Developer Ratio

People often ask me, “What’s the ideal tester-developer ratio?” My answer is always, “It depends.” There are a number of factors that determine what a good tester-developer ratio should be. Things to consider are: whether you are working on cutting-edge technology or a legacy product, the talent and experience of your team members, and what kind of release cadence you are expected to maintain. The truth is that there are all different kinds of ratios that can work, but there are pros and cons to each. Let’s take a look at some examples.

1 tester: 1 developer

The 1:1 ratio is great when you have developers who don’t know much about testing and testers who don’t know much about development. A developer-tester pair can work together to release a new feature, and because they are both so focused on that one feature, they may be able to find and fix all the bugs. However, the developer probably won’t contribute to any test automation, and the tester will likely be the only one who knows how to run and fix the automation. This will mean that with any future development on the feature, the tester will become a bottleneck, slowing down the work.

1 tester: 2 developers
This ratio is good for a situation where there is a front-end and a back-end developer working on a feature. The tester can be responsible for testing the integration between the front and back ends. Like the 1:1 ratio, these three will become the experts on the feature. However, this can result in silos, where it’s difficult later in the project for someone else to come in and assist with the work.

2 testers: A team of developers
This is a very common scenario. The testers can divide up the stories to test according to their skillset and availability. If both testers are talented and organized, they will usually be able to keep up with the manual testing and the test automation work. They can also swap features to see if one tester has missed a bug that the other finds. However, this ratio will occasionally result in bottlenecks when there’s a feature that needs a lot of testing or if one tester goes on vacation.

1 tester: A team of developers
In this scenario, the tester becomes a “quality coach”. They are not responsible for all of the testing or all of the test automation. They guide and train the developers to understand what should be tested and automated. In this way, the whole team owns quality. Any time the tester is not available, the developers are able to fill in the gaps by creating test plans and testing each other’s work. Because the developers are contributing to and helping to maintain the automated tests, test automation never becomes a bottleneck.

0 testers: A team of developers
Some might cringe in horror at this idea, but a team of very well-trained software developers is capable of doing all their own testing. In order for this to be successful, developers need to understand the importance of exploratory testing and how to create test plans. They need to understand what kinds of tests should be automated, and they should be committed to maintaining their test code as carefully as they maintain their production code. Although they will do some initial testing on their own features, they will also create “test buddy” pairs where one developer will act as the tester for another developer’s work. In this way, they’ll have two sets of eyes on each feature and will be more likely to catch bugs.

These ratios all have a few things in common. First, and most importantly, at least one person on the team has excellent testing skills. Those skills are necessary to find elusive bugs. Next, good communication skills are needed. There is no “throwing software over the wall to be tested”; testers and developers are working together. Finally, there is the willingness to be a team player. Testers and developers alike need to be willing to step forward and do testing tasks whether or not it’s part of their assigned role. When all three things are present on a team, any of these ratios can bring success.

How to Hire a Software Tester

For this month’s post, I’m doing something a little different! Usually my posts are aimed at software testers who want to improve their skills and sharpen their thinking about what to test. But this month, I want to address the people who hire the software testers!

Making the right hiring decisions is crucial. Being saddled with an ineffective tester is often worse than having no tester, for the following reasons:
• They may fail to automate any tests, meaning that all testing will be manual and will be a huge bottleneck to releasing software
• They may automate their tests poorly, resulting in flaky tests that no one trusts and that require tons of maintenance
• They may fail to grasp technical concepts, meaning that other team members will have to waste time explaining those concepts to them again and again
• They may be inept at creating test strategies, resulting in the wrong things tested and the right things not tested at all

So, to avoid hiring an ineffective tester, I recommend that you look for the following ten skills:

Able to find bugs
If the tester can’t find bugs, there’s really no reason to employ them. Any developer on the team can do “happy-path” testing and put together automated tests for their code. What sets testers apart from the others is their ability to think of things to test that the developers might miss: negative tests, strange edge cases, interactions between features, and so on. To determine whether a candidate can find bugs, give them a buggy application and ask them to find and report on the bugs. You will be surprised how many “experienced” testers can’t do it! And these are the people you will not want to hire.

Makes good decisions about what to test
A tester that can’t prioritize what should be tested will not be an effective worker on your team. Testers should be able to determine what should go in a smoke test, what areas of an application should have regular regression tests, when it’s the right time to search for an elusive bug, and when it’s time to save the search for later. To determine whether your candidate can think strategically, present them with an example application or system and ask them to design a test plan for it. One easy example is a soda vending machine. While your candidate may not know the mechanisms involved in delivering the soda, they should be able to identify the different tasks of the machine and come up with systematic ways to test them.

Understands API testing
If your application uses APIs at all (and most modern applications do), you’ll want to make sure that your candidate understands how to test them. I continue to be surprised at how many testers still don’t understand how to test APIs (a problem that could be easily solved by taking my Postman course or reading my new book) when they are so prevalent today. A poorly developed and tested API can result in serious security holes and a poor user experience. To make sure that your candidate knows how to execute API tests, give them a sample API to test. See if they are capable of creating both positive and negative test scenarios.
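For instance, given a hypothetical /api/users endpoint (the URL, payloads, and status codes below are illustrative assumptions), a candidate might sketch tests like these:

    describe('POST /api/users', () => {
      it('creates a user with valid data (positive test)', () => {
        cy.request('POST', '/api/users', {
          name: 'Test User',
          email: 'test@example.com',
        }).its('status').should('eq', 201);
      });

      it('rejects a request with no email (negative test)', () => {
        cy.request({
          method: 'POST',
          url: '/api/users',
          body: { name: 'Test User' },
          failOnStatusCode: false, // we want to assert on the error ourselves
        }).its('status').should('eq', 400);
      });
    });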

Communicates clearly
A tester who can find bugs but can’t explain to anyone how to reproduce them is not going to be particularly helpful. Software testers need to be excellent communicators. To check if your candidate is a clear communicator, ask them to explain a complicated bug they’ve found in their current position. This actually accomplishes two things: it verifies that they actually find bugs in their current position, and it also gives you a sense of how the candidate explains a complicated situation. If you can’t follow along with their explanation, this might not be the right tester for you.

Writes good test automation
According to Robert C. Martin, the author of the widely respected book Clean Code, test code is just as important as production code. This is because test code is used to validate that changes in production haven’t broken anything. If the test code is unreliable, then all changes to production code have to be manually tested in addition to being tested with automation, which slows the entire development process down. Because of this, you want a software tester who writes clean automated tests: common tasks should be separated out into methods or classes, and the code should be well-organized. To check whether the candidate can write good test automation, ask them to write some automated tests for a simple application. Then ask them to run the tests for you and explain why they wrote the tests they did, and why they organized the code the way they did.
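As a small sketch of what “well-organized” means here (the page, selectors, and credentials are hypothetical), common steps can be pulled into a reusable class so that individual tests stay short and readable:

    // A simple page object: the login steps live in one place,
    // so every test that needs to log in can reuse them
    class LoginPage {
      visit() {
        cy.visit('/login');
      }

      logIn(username, password) {
        cy.get('[data-test="username"]').type(username);
        cy.get('[data-test="password"]').type(password);
        cy.get('[data-test="submit"]').click();
      }
    }

    it('logs in and lands on the dashboard', () => {
      const loginPage = new LoginPage();
      loginPage.visit();
      loginPage.logIn('testuser', 'not-a-real-password');
      cy.url().should('include', '/dashboard');
    });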

Understands databases
Whether you use relational databases like SQL Server or non-relational databases like MongoDB, you’ll want your candidate to be able to interact with the databases to get the information they need for testing. If you use relational databases, ask them to write a simple query or a join. For non-relational databases, you can ask them how they would get a specific record from the database.

Understands the challenges of mobile testing
If you have a mobile application, or if your application has a mobile version, you’ll want to make sure that the candidate understands the challenges of mobile testing. Ask them what those challenges are; you should hear things like differences in screen size, operating system, version, carrier, and so on. If your application is primarily mobile, you’ll also want to make sure that your candidate has experience with mobile automated testing.

Understands the basics of security and performance testing
Even if you have security and performance test teams at your organization, you’ll still want to make sure that your candidate understands some basic security concepts like privilege escalation and IDOR, and some simple performance concepts like measuring page load times and API response times. For smaller companies that don’t have security and performance test teams, understanding these basics is even more important.
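For example, a basic API response-time check might look like this sketch (the endpoint and the 1000-millisecond threshold are illustrative assumptions):

    it('responds within a reasonable time', () => {
      cy.request('GET', '/api/kittens').then((response) => {
        expect(response.status).to.eq(200);
        // Cypress records how long the request took, in milliseconds
        expect(response.duration).to.be.lessThan(1000);
      });
    });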

Understands the importance of visual and accessibility testing
The candidate should be able to identify some reasons why visual automated testing might be needed. For example, ordinary UI test automation does not validate that an image is appearing correctly in a browser, but a visual testing tool can do that. They should also understand that accessibility is an important aspect of web applications today; they should be able to give you some examples of accessibility needs, such as the ability to zoom in on text, use a screen reader, and view videos with captions.

Can be an advocate for quality on your team
Finally, you will want a candidate who is able to speak up when the situation calls for it. A tester who can find bugs, but who can’t advocate for those bugs to be fixed, will not be much help to your team. In today’s Agile software teams, the tester acts as more of a quality coach, helping all the team members to think like testers, do exploratory testing, and contribute to the automation code.

A poor software tester can be a drag on the entire team, but a good software tester can spur the team on to new heights of quality and productivity! With these skills in mind, you will be able to hire testers you will be grateful to work with.

Adventures in Node: Npm Scripts

When I was first introduced to JavaScript automated testing, I was working with a test framework that one of the developers I worked with had set up. I started the tests the way he told me to, with this command: npm run protractor. Later, when I was working with a different project, the command to run the tests was npm run test. Why was it “test” sometimes and “protractor” other times? It’s because these commands referred to npm scripts!

Npm scripts can be set in the package.json file of your project. They are like shortcuts that you can use to execute commands to minimize the amount of typing you need to do. Let’s say you want to run some Cypress tests, and you have a couple of different configuration files to choose from when you run your tests that represent your development and production environments. To run your tests in your development environment, you could type in this command: npx cypress run -C cypress/config/dev-config.json, or you could type npm run dev. Chances are you’ll be running your tests over and over again; which method would you prefer?
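Here’s what that might look like in the package.json file (the config file paths are just examples, and the prod script is my own addition for symmetry):

    {
      "scripts": {
        "dev": "npx cypress run -C cypress/config/dev-config.json",
        "prod": "npx cypress run -C cypress/config/prod-config.json"
      }
    }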

Let’s do a simple exercise to see how npm scripts work. You’ll need to have Node installed for this exercise; you can download and install it here. We’ll be working with the command line; if you need some command line basics, you can find them in this post.

Step One: Set up a Node project
In your command window, navigate to the folder where you want to save your project. Then type mkdir npm-script-project. A new folder should be created with this name. Next, navigate into that new folder by typing cd npm-script-project. Now initialize the project as a Node project by typing npm init -y.

Step Two: Open your Node project in a code editor
Now that your Node project has been initialized, open it up in your favorite code editor. My favorite is Visual Studio Code, which is available for free on Windows and Mac. When you open the npm-script-project folder in your editor, you will see that a package.json file has been generated for you. This is the file where you will add your script.

Step Three: Add your script
Open the package.json file in your code editor. You will see that there is a “scripts” section in this file, and that there is currently a “test” command listed in the script. You can run the test command by opening up a command window (you can do this within VS Code by choosing “New Terminal” from the “Terminal” menu) and typing npm run test. You’ll get the error message that is specified in the script, which is expected.
Let’s change the “test” script to do something different! Change the name “test” to “hello”, and change the script’s value to echo \"This is my test script!\" (the inner quotation marks need to be escaped with backslashes to keep the JSON valid), as shown below. Now execute the script with this command: npm run hello. You will see the message “This is my test script!” returned in the command window.
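After the change, the “scripts” section of your package.json should look like this:

    "scripts": {
      "hello": "echo \"This is my test script!\""
    }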

Step Four: Make your script do something useful
Now that you know how to make npm scripts, it’s time to make them do something useful! For example, if you install Cypress in your project, you can create a “test” command to run your Cypress tests. Let’s try this out. In your command line, type npm i cypress. This will install Cypress in your project. Next, start Cypress by typing npx cypress open. You’ll see the Cypress test window open, and a cypress folder with some example tests will be installed in your project. You can run the tests from the Cypress window to watch them work and close the windows when they have finished, or you can simply close the test window.
To create an npm script to run those Cypress tests, add a new line to the “scripts” section of the package.json file: "test": "npx cypress run". (Be sure to add a comma at the end of your “hello” script line so that the JSON will be correct.) Now try running the Cypress tests by using your new script: npm run test. You should see the Cypress tests run in the command window!
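The finished “scripts” section should look like this:

    "scripts": {
      "hello": "echo \"This is my test script!\"",
      "test": "npx cypress run"
    }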

This is a pretty simple example, where the command we were replacing was just as short as the script we replaced it with. But hopefully this illustrates how easy it is to replace longer commands with a script that is easy to type. Happy scripting!