How to Be a QA Leader

The most frequent question I get from readers of my blog is one like this: “I’ve just been promoted to QA Lead. What do I need to do to be successful in this position?”

Whether you have been made a lead or a manager, it can be a bit daunting to be leading a group, especially if you have never been a leader before. Here are seven things you can do to be a successful QA leader, gleaned from both my experience as a QA Lead and QA Manager, and from leadership experience I’ve had in other areas of my life.

  1. Pay attention to the needs of your customer
    When we are testing software, it’s easy to get lost in the weeds of the day-to-day testing, without stopping to think about who our end user is and what they need. As a QA leader, it’s important to pay attention in product design meetings and look at the feedback you are receiving from your customers, and pass that information on to your team. When your testers know why a feature is being created and how it is being used, they can make better decisions about what to test.

  2. Communicate company information to your team
    As a leader, you will be invited to attend meetings that your team may not be invited to. This means that you have information about what’s going on in the company, such as whether there will be hiring or restructuring, or what the development strategy will be for the coming year. You should communicate this information to your team so that they will feel “in the loop” and won’t be worried about the company’s future.

  3. Solve problems for your team
    Testers have all kinds of annoyances that keep them from doing their job, for example: test environments that keep crashing, inaccurate test data, and incomplete Acceptance Criteria in stories. The more of these problems you can solve for your team, the happier and more productive they will be.

  4. Provide growth opportunities
    When you are a leader and already know the “right” way to do things, it’s easy to take on all the challenging work for yourself, and give the simpler tasks to your team. But if you do this, your team will never grow! You want your team to improve their testing skills, and the best way to do that is to give them challenges. Identify the next step in the growth of each team member and think of a task they can do to take on that next step. For example, if you have a team member who has been updating existing automated tests, but has never written one herself, challenge her to write a test for a new feature. Provide guidance and feedback when she needs it, and celebrate her success when she accomplishes the task. It’s also possible that your team member might discover a better way to do things than the way you were doing them, which will make your team even more effective!

  5. Express appreciation for your team
    Be sure to publicly praise your testers whenever they do something great, like find an important bug, create reliable test automation, or meet a crucial deadline. And make sure that you express your appreciation for them privately as well, for example: “Thanks for working late on Friday to test that new release in Production. I really appreciate your hard work.” People who feel appreciated are more likely to approach their work with a good attitude, which helps with team cohesion and productivity.

  6. When things go right, give credit to your team
    As a leader, you will probably be praised when your team has a successful software release. Make sure when you get that praise to give credit to your team. For example, you could say, “Well, I’m really grateful for Sue for the test harness she created. It enabled us to test through many more scenarios than we could have done if we were doing all manual testing.” Or, “Joe gets the credit for chasing down that last tricky bug. Customers would have been impacted if he hadn’t been so persistent.” When you do this, your team will see you as their cheerleader, and not as someone who takes all the glory for their hard work.

  7. When things go wrong, accept the blame yourself
    When a crucial mistake is made, such as a bug that made it into production, or important customer requirements that weren’t added to the product, it’s very tempting to play “the blame game”. No one wants to look bad, and if you feel like the mistake wasn’t your fault, you might want to explain whose fault it was. But don’t do this! Take the blame on behalf of the team, and don’t specifically name others. For example, if it was Matt’s job to do the mobile testing, and he only tested on Android, don’t publicly blame Matt. Say: “We forgot to test the new feature on iOS devices. It’s my fault for not checking that the team did this.” After you explain the failure, talk about how you will prevent it in the future: “We now have a feature checklist that the team will be using for each release, which will include line items for both Android and iOS testing.” This is a great way to build team loyalty; when people know that they won’t be publicly shamed for their mistakes, they are more likely to innovate and take on new challenges, and they’ll also be very loyal to you and to the company.

Leadership is such an important skill, and so important in the area of software testing, where we can often be misunderstood or taken for granted. By following these seven steps, you’ll ensure that you have a happy, productive, accurate, and loyal team!

Fix Your Automation Hourglass

You have no doubt heard about the Test Automation Pyramid, where the recommendation is that your code has many unit tests, a smaller number of integration tests, and an even smaller number of UI tests. You may have also heard of the anti-pattern called the Test Automation Ice Cream Cone: this is where the code has many UI tests, a smaller number of integration tests, and an even smaller number of unit tests.

But have you heard of the Test Automation Hourglass? As you can probably guess, this is a situation where the developers have written a lot of unit tests and the testers have written a lot of UI tests, but nobody has written any integration tests. This is often a symptom of having test automation silos, which I wrote about a few weeks ago. Usually the Test Automation Hourglass means that you have too many UI tests. And as I’ve written about before, having too many UI tests means slower test runs. Below are three ways to fix this issue.

Step One: Look at your entire test suite
The first step in correcting your Test Automation Hourglass is to meet with your developers and take a look at the entire test suite. What unit tests are you running? What UI tests are you running? Do you have any integration tests, and if so, what are they testing?

Identify any duplicated efforts. For example, do you have a UI test that’s already being covered by a unit test? For any duplicated tests, delete the test that’s closest to the UI.

Take a look at your UI tests. Are there any that are testing code logic that would be better tested at the unit or integration level? Make a list of those tests and begin to consider them as tech debt. You can address that tech debt by creating a replacement unit or integration test. When the new test is created, you can delete the old test.

Identify missing tests. Are you fully exercising your business logic? There may be many API tests you can write to fix this. Think about what you could do with Create, Read, Update, and Delete requests that you may not be currently testing. Make a list of these missing tests and add them to your product backlog.

Step Two: Identify ways you can run integration tests
There are many different types of integration tests, and many different ways to run them. Here are some examples:

Testing directly in the code: this is a good place for tests that just want to validate that it’s possible to connect to a datastore or to a dependency such as a third-party API.

Using unit test tools such as Jest, Mocha, or Jasmine: these tools are commonly used by developers for unit tests, but they are great for integration tests as well. Calls to create, update, or delete data can be tested here.

Using API test tools such as Postman: APIs can be tested directly in the code or through unit testing tools, but it’s especially easy to set up API tests with tools specifically designed for that purpose. API test tools typically come with some kind of command-line runner, which makes it easy to automate the tests. For example, Postman uses Newman to automate API test execution.

Step Three: Start converting your tests
Once you have a list of the tests that you need to convert or write, you can start working on them. If you have chosen a test tool that both developers and testers feel comfortable with, you can all work on your test backlog.

Don’t try to change everything at once! Set some goals for the sprint, the month, or the quarter. Order your test backlog by importance, just as you would for other tech debt. As you gradually move your tests from the UI level to the integration level, make sure to celebrate your successes! You’ll be rewarded with faster-running, more reliable tests.

UI Unit Testing

Are you confused by this title? We generally think of unit testing as something that’s done deep in the code, and we think of UI testing as something that’s done with the browser once an application has been deployed. But recently I learned that there’s such a thing as UI unit testing, and I’m really excited about it!

Wikipedia defines unit testing in this way: “a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use”. We can think of individual UI components as units of source code as well. Two popular front-end frameworks make it easy to do UI unit testing: React and Angular. In this post, I’ll walk through a simple example for each.

Fortunately the good people who write documentation for React and Angular have made it really easy to get set up with a new project! You can find the steps for setting up a React project in the React documentation, and the steps for setting up an Angular project in the Angular documentation; I’ll be walking you through both sets of steps in this post. Important note: both React and Angular need Node installed in order to run.

From the command line, navigate to the folder where you would like to create your React project. Then type the following:
npx create-react-app my-react-app
cd my-react-app

This will create a new React project called my-react-app, and then change directories to that project.
Next you’ll open that my-react-app folder in your favorite code editor (I really like VS Code).
Open up the src folder, then the App.js file. In that file you can see that there is some HTML which includes a link to a React tutorial with the text “Learn React”.
Now let’s take a look at the App.test.js file. This is our test file, and it has one test, which checks to see that the “Learn React” link is present.

Let’s try running the test! In the command line, type npm test and press Enter, then press a to run all tests. The test should run and pass.

If you open the App.js file and change the link text on line 19 to something else, like Hello World! and save, you’ll see the test run again and fail, because the link now has a different text. If you open the App.test.js file and change the getByText on line 7 from /learn react/ to /Hello World!/ and save, you’ll see the test run again, and this time it will pass.

To create an Angular app, you first need to install Angular. In the command line, navigate to the folder where you’d like to create your new project, and type npm install -g @angular/cli.
Now you can create your new app by typing ng new my-angular-app. Then change directories to your new app by typing cd my-angular-app.

Let’s take a look at the app you created by opening the my-angular-app folder in your favorite code editor. Open the src folder, then the app folder, and take a look at the app.component.ts file: it creates a web page with the title ‘my-angular-app’. Now let’s take a look at the test file: app.component.spec.ts. It has three tests: it tests that the app has been created, that it has my-angular-app as the title, and that the title is rendered on the page.

To run the tests, simply type ng test in the command line. This will compile the code and launch a Karma window, which is where the tests are run. The tests should run and pass.

Let’s make a change to the app: in the app.component.ts file, change the title on line 9 to Hello Again!. The Karma tests will run again, and now you will see that two tests are failing, because we changed the title. We can update our tests by going to the app.component.spec.ts file and changing lines 23, 26, and 33 to have Hello Again! instead of my-angular-app. Save, and the tests should run again and pass.

Did you notice something interesting about these tests? They didn’t actually require the application to be deployed! The Angular application did launch a local browser instance, but it wasn’t deployed anywhere else. And the React application didn’t launch a browser at all when it ran the tests. So there you have it! It’s possible to run unit tests on the UI.

Why We’ll Always Need Software Testers

Are you familiar with the Modern Testing Principles, created by Alan Page and Brent Jensen? I first heard of the principles about a year ago, and I was really excited about the ideas they contained. But I was uncomfortable with Principle #7, which is “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.” Eliminating a testing specialist? This seemed wrong to me! I thought about it over several months and realized that yes, it’s possible for a team to develop such a good testing mindset and skillset that a dedicated testing expert wouldn’t be needed. But don’t hang up your QA hat just yet! Here are three reasons we will always need software testers.

Reason One: Teams Change

Did you hear about the software team that was so good that it never changed personnel? Nope, neither did I. Life brings about changes, and even a perfect team doesn’t last forever. A member could retire, take another job, or be moved to another team. New members could join the team. I’ve heard that every time a team changes even by one person, that team is brand new. This means that there will be new opportunities to coach the team to build in quality.

Even if the team members don’t change, there can be changes that are challenging for a team, such as a new technical problem to solve, a big push towards a deadline, or a sudden increase in the number of end users. All of these changes might result in new testing strategies, which means the team will need a software tester’s expertise.

Reason Two: Times Change

When I started my first software testing job, I had never heard of the Agile model of software development. But soon it became common knowledge, and now practically no one is using the old Waterfall model. Similarly, there was a time when I didn’t know what CI/CD was, and now it’s a goal for most teams. There are pushes to shift testing to the left (unit and integration tests running against a development build), and to shift testing to the right (testing in production).

Some of these practices may prove to be long-lasting, and some will be replaced by new ideas. Whenever a new idea emerges, new ways of thinking and behaving are necessary. Software testing experts will be needed to determine the best ways to adapt to these new strategies.

Reason Three: Tech Changes

Remember when Perl was a popular scripting language? Or when Pascal was the language of choice? The tools and languages that are in use today won’t be in use forever. Someone will come along and create a newer, more efficient tool or language that will replace the previous one. For example, Cypress recently emerged as an alternative to Selenium WebDriver, which has been the tool of choice for years. And companies are frequently moving toward cloud providers such as Amazon Web Services or Microsoft Azure to reduce the processing load on their own servers.

When a team adopts a new technology, there will always be some uncertainty. As a feature is developed, there may be changes made to the configuration or coding strategy, so it may be unclear at first how to design test automation. It can take a team a while to adapt and learn what’s best. A testing expert will be very valuable in this situation.

Change happens, and teams must adapt to change, so it is helpful to have a team member who understands the best way to write a test plan, conduct exploratory testing, or evaluate risk. So don’t go looking for a software development job! Your testing expertise is still needed.

Book Review: Performance Testing - A Practical Guide and Approach

It’s book review time once again, and this month I read “Performance Testing - A Practical Guide and Approach” by Albert Witteveen. I’ve been looking for a resource on performance testing for a long time, because I’ve found that most of the articles and presentations on performance testing either assume a lot of prior knowledge, or focus on using a specific tool without explaining the reasoning behind what is being tested.

This book was definitely what I needed! It explains very clearly why we should be doing performance testing, what kinds of tests we should run, how we should set the tests up, how to run them, and how to report on the results.

Here are some of the things I learned in this book:

Performance testing simply means testing a system to see if it performs its functions within given constraints such as processing time and throughput rate.

Load testing is measuring the performance of an application when we apply a load that is the same as what we would expect in the production environment.

Stress testing is finding out at what point an application will break when we apply an increasing load, and determining what will happen when it breaks.

Endurance testing is about testing with a load over an extended period of time. This can be helpful in discovering problems such as memory leaks.

Virtual users refers to the number of simulated users we are using to test the system.

Performance tests need to be planned thoughtfully; it’s not just a matter of throwing load on every web page or API call. The tester needs to think about where the potential risks of load are and design tests to investigate those risks.

How to create a load test:

  • Generally load tests are created by recording an activity in order to create a test script. Next, you’ll need to add a data set for use in testing.
  • You should run your load test first with just one user. You’ll need to build in some kind of check in your script to make sure that the script is working as you are expecting it to. For example, if you are load testing the login page, you’ll want to see that the user was able to log in successfully instead of getting an error message.
  • Once your script is working well with one user, try it with three users, and make sure that it’s using unique values from your test data instead of using the same values three times.
  • When you have validated that your script is working correctly, you can execute your load test by adding the number of virtual users and ramp-up time that are appropriate to what you would expect your application to be able to handle in production.

It’s very important to monitor things such as CPU usage, memory usage, database activities, and network activity when you are running a load test. Just measuring response times isn’t going to give you the whole picture.

It’s also important to know what kind of queuing your application is using so you can locate performance bottlenecks. The author uses an easy-to-understand supermarket analogy:

  • A small market with just one checkout lane is like a system with a single CPU. Every customer has to go through this queue, and it’s first come, first served.
  • A larger market with more than one checkout lane is like a system with multiple web servers. Every customer (or in the case of the web servers, the load balancer) picks one checkout lane, and waits to go through that lane.
  • A deli where the customer takes a number and waits their turn is like Web server software where multiple workers can process the request. The customer waits their turn and can be picked up by any one of the worker processes.

Load testing tools themselves generate load when they are running! For that reason, it’s best to keep test scripts simple.

This is just a small sampling of what I learned from this book. It’s a short book, but filled with great explanations and advice. One thing worth mentioning is that there are a number of grammatical errors in the book, and even a few chapters where the final sentence is missing its last words. It makes reading a little slower, since the reader sometimes has to guess at what was meant.

But in spite of these issues, it’s a great book for getting started with performance testing! I recommend it to anyone who would like to understand performance testing or run load tests on their application.

Adventures in Node: Async/Await

As I’ve mentioned in previous posts, I’ve been taking an awesome course on Node for the last several months. This course has really helped me learn JavaScript concepts that I knew about but didn’t understand.

Last month I explained promises. The basic idea is that Node functions execute asynchronously, so a promise acts as a placeholder that waits for either a resolve or a reject response.

Here’s an example of a function with a promise:

const add = (a, b) => {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(a + b)
        }, 2000)
    })
}

In this function, we’re simply adding two numbers. The setTimeout for two seconds is added to make the function take a little longer, as a real function would.

Here’s what calling the function might look like:

add(1, 2).then((sum) => {
    console.log(sum)
})

The add function is called, and then we use then() to log the result. Using then() means that we’re waiting for the promise to be resolved before we go on.

But what if we needed to call the function a second time? Let’s say we wanted to add two numbers, then we wanted to take that sum and add a third number to it. For this we’d need to do promise chaining. Here’s an example of promise chaining:

add(1, 2).then((sum) => {
    add(sum, 3).then((sum2) => {
        console.log(sum2)
    })
})

The add function is called, and we use the then() command with the sum that is returned. Then we call the function again, and use the then() command once again with the new sum that is returned. Finally, we log the new sum.

This works just fine, but imagine if we had to chain a number of function calls together. It would start to get pretty tricky with all the thens, curly braces, and indenting. And this is why async/await was invented!

With async/await, you don’t have to chain promises together. You create a new async function, and then you use the await command when you are calling a function that returns a promise.

Here’s what the chained promise call would look like if we used async/await instead:

const getSum = async () => {
    const sum = await add(1, 2)
    const sum2 = await add(sum, 3)
    console.log(sum2)
}

getSum()
We’re creating a new async function called getSum by using this command: const getSum = async () =>. In that function, we’re first calling the add function with an await, and we’re setting the result of that call to the variable called sum. Then we’re calling the add function again with an await, and we’re setting the result of that call to the variable called sum2. Finally, we’re logging the value of sum2.

Now that the async function has been created, we’re calling it with the getSum() command.

It’s pretty clear that this code is easier to read with async/await! Keep in mind that promises are still being used here; the add() function still returns a promise. But async/await provides a way to call a promise function without having to add in a then() statement.
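One thing the examples above don’t show is what happens when the promise is rejected. Here’s a sketch of the same add function, modified to reject non-numeric input (the error message is made up for this example), with the rejection handled by a try/catch inside the async function:

```javascript
// The same add function, but rejecting non-numeric input
const add = (a, b) => {
    return new Promise((resolve, reject) => {
        if (typeof a !== 'number' || typeof b !== 'number') {
            return reject(new Error('Numbers only, please'))
        }
        resolve(a + b)
    })
}

const getSum = async () => {
    try {
        const sum = await add(1, 'two')
        console.log(sum)
    } catch (e) {
        console.log(e.message)   // logs "Numbers only, please"
    }
}

getSum()
```

With promise chaining, the same handling would require a .catch() at the end of the chain; try/catch keeps it looking like ordinary synchronous code.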

Now that I understand about async/await, I plan to use it whenever I am doing development or writing test code in Node. I hope you’ll take the time to try out these examples for yourself to see how they work!

Tear Down Your Automation Silos!

On many software teams, developers are responsible for writing unit and component tests, and software testers are responsible for writing API and UI tests.  It’s great that teams have so much test coverage, but problems can arise when test automation is siloed in this way.  For one thing, developers and software testers often don’t know how each other’s tests work, which means if a developer makes a change that breaks a test, they don’t know how to fix it.  And if only one person on the team knows how the deployment smoke tests work, then that person will need to be on call for every single deployment.  

I recommend that every developer and software tester on the team know how to write and maintain every type of test automation for their product.  Here are three good reasons to break down automation silos: 

No more test overlap: If automated tests are siloed between developers and testers, it’s possible that there is work that is duplicated.  Why have several UI tests that exercise business logic when there are already integration tests that do this?

No more bottlenecks: Testers are often required to create and maintain all the UI automation while at the same time doing all the testing.  If a developer pushes a change that breaks a UI test, it’s often up to the tester to figure out what’s wrong.  If developers know how the UI automation works, they can fix any tests they break, and even add new tests when needed, allowing testers to finish testing new features.  

Knowledge sharing: Software testers have a very special skill set; they can look at application features and think of ways to test the limits of those features.  By learning from testers, developers will become better at testing their own code.  Developers have a very special skill set as well; they are very familiar with good coding patterns.  Many software testers arrive at QA from diverse backgrounds, and don’t always have formal training in coding.  They can benefit from learning clean coding skills from developers.  

By breaking down automation silos and taking responsibility for test automation together, software developers and software testers can benefit from and help each other, speeding up development and improving the quality of the application.  

Book Review: Unit Testing Principles, Practices, and Patterns

It’s book review time once again, and this month I read Unit Testing Principles, Practices, and Patterns by Vladimir Khorikov.  I thought that a book about unit testing would be pretty dry, but it was really interesting!

Since I am not a developer I don’t usually write unit tests, although I have done so occasionally when a developer asks me to help.  Being a good tester, I knew to do things like mock out dependencies and keep my tests idempotent, but through this book I discovered lots of things I didn’t know about unit testing.

The author has a background in mathematics, and it shows.  He is very systematic in his process of explaining good unit test patterns, and each chapter builds upon the previous one.  Here are some of the important things I learned from this book:

  • There are two schools of thought about unit testing: the classical school and the London school.  In the classical school, unit tests are not always limited to a single class.  The tests are more concerned with units of behavior.  Dependencies, such as other classes, don’t need to be mocked if they aren’t shared.  In the London school, unit tests are limited to a single class, and calls to other classes are always mocked, even if they are part of the same code base and not shared with any other code.
  • Unit tests should always follow this pattern:
    • Arrange: where the variables, mocks, and system under test (SUT) are set up
    • Act: where something is done to the SUT
    • Assert: where we assert that the result is what we expect
  • The Act section of the unit test should only have one line of code.  If it has more than one line of code, that probably means that we are testing more than one thing at a time.
  • A good unit test has the following characteristics:
    • It protects against regressions: it catches the bugs that appear when the code is changed
    • It’s resistant to refactoring: refactoring the code shouldn’t break the test
    • It provides fast feedback
    • It’s maintainable: it’s easy for someone to look at the test, see what it’s supposed to do, and make changes to it when needed
  • Mocks and stubs are both types of test doubles: faked dependencies in tests which are used instead of calling the real dependencies in order to keep the tests fast, resilient, and focused only on the code being tested.
    • Mocks emulate outgoing interactions, such as putting a message on a service bus
    • Stubs emulate incoming interactions, such as receiving data from a database
  • Test doubles should only be used with inter-system communications: calls to something outside the code, like a shared database or an email server.  For intra-system communications, where a datastore or class is solely owned by the code, the call shouldn’t be mocked or stubbed.

The most interesting thing I learned from the book was that it’s really hard to write good unit tests when the code is bad.  The author provides lots of examples of how to refactor code in order to make tests more robust.  These practices also result in better code!  Reading through the examples, I now understand how to better organize my code by separating it into two groups: code that makes a decision, such as a function that adds two numbers, and code that acts upon a decision, such as writing a sum to a database.

The author doesn’t just write about unit tests in this book; he also describes how to write integration tests, and provides examples of writing tests for interacting with databases.  

I learned much more than I was expecting to from this book!  Software test engineers will find many helpful ideas for all types of automation code in this book. Software developers will not only improve their unit test writing, but also their coding skills.  I recommend it to anyone who would like to improve their test automation.

Managing Test Data

It’s never fun to start your work day and discover that some or all of your nightly automated tests failed. It’s especially frustrating when you discover that the reason why your tests failed was because someone changed your test data.

Test data issues are a very common source of aggravation for software testers. Whether you are testing manually or running automation, if you think your data is set the way you want it, and it has been changed, you will waste time trying to figure out why your test results aren’t right.

Here are some of the common problems with test data:

Users overwrite each other’s data
I was on a team that had an API I’ll call API 1. I wrote several automated tests for this API using a test user. API 1 was moved to another team, and my team started working on API 2. I wrote several automated tests for API 2 as well. Unfortunately, I used the same test user for API 2, and this test user needed to have a different email address for API 2 than it did for API 1. This meant that whenever automated tests were run on API 1, it changed the address of the test user, and then my API 2 tests would fail.

Configuration is changed by another team
When teams need to share a test environment, changes to the environment configuration made by one team can impact another team. This is especially common when using feature toggles. One team might have test automation set up with the assumption that a feature toggle will be on, but another team might have automation set up with the expectation that the feature toggle is off.

Data is deleted or changed by a database refresh
Companies that use sensitive data often need to periodically scramble or overwrite that data to make sure that no one is testing with real customer information. When this happens, test users that have been set up for automation or manual testing can be renamed, changed, or deleted, causing tests to fail.

Data becomes stale
Sometimes data that is valid at one point in time becomes invalid as time passes. A great example of this is a calendar date. If an automated test needs a date in the future, the test writer might choose a date a year or two from now. Unfortunately, in a year or two, that future date will become a past date, and then the test will fail.

What can we do about these problems? Here are some suggestions:

Use Docker
Using a containerized environment with a tool like Docker means that you have complete control over your test environment, including your application configuration and your database. To run your tests, you spin up a fresh container, run the tests, and destroy the container when the tests have completed.
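As a rough sketch, a disposable environment like this might be described with a Docker Compose file. The service names, images, and credentials below are all hypothetical placeholders:

```yaml
# docker-compose.test.yml -- a hypothetical throwaway environment for one test run
services:
  app:
    image: mycompany/myapp:latest    # the application under test (placeholder image)
    environment:
      DATABASE_URL: postgres://tester:secret@db:5432/testdb
    depends_on:
      - db
  db:
    image: postgres:16               # a fresh database that exists only for this run
    environment:
      POSTGRES_USER: tester
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: testdb
```

You would bring the environment up with `docker compose -f docker-compose.test.yml up -d`, run the tests, and then tear everything down with `docker compose -f docker-compose.test.yml down -v`, so no state survives to pollute the next run.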

Create a fresh database for testing
It’s possible to create a brand-new database for the sole purpose of running your test automation. With Microsoft SQL Server, this can be accomplished by creating a DACPAC (data-tier application package). You can set your database schema, add in only the data that you need for testing, create the database, point your tests to that database, and destroy the database when you are finished.
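The same idea works with any database engine. Here is a minimal sketch using Python’s built-in sqlite3 module: create a throwaway in-memory database, seed it with only the data the tests need, and destroy it when testing is done. The table and data are hypothetical:

```python
import sqlite3

def fresh_test_database():
    """Create a brand-new in-memory database containing only seeded test data."""
    db = sqlite3.connect(":memory:")  # exists only for this test run
    db.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    db.execute(
        "INSERT INTO customers (name, email) VALUES (?, ?)",
        ("Test Customer", "test.customer@example.com"),
    )
    db.commit()
    return db

# Point the tests at the fresh database...
db = fresh_test_database()
row = db.execute("SELECT name FROM customers").fetchone()
print(row[0])  # only the data we seeded is present
db.close()     # ...and destroy it when testing is finished
```

Because every run starts from the same seeded state, no other team or test can have changed the data out from under you.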

Give each team their own test space
Even if teams have to share the same test environment, they might be able to divide their testing up by account. For example, if your application has several test companies, each team can get a different test company to use for testing. This is especially helpful when dealing with toggles; one team’s test company can have a feature toggled on while another team’s test company has that feature toggled off.

Give each team their own users
If you have a situation where all teams have to use the same test environment and the same test account, you can still assign each team a different set of test users. This way teams won’t accidentally overwrite one another’s data. You can give your users names specific to your team, such as “Sue GreenTeamUser”. 
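A tiny helper can enforce that naming convention so no one creates an ambiguous user by accident. This sketch, with a made-up naming scheme matching the example above, shows the idea:

```python
def team_test_user(first_name: str, team: str) -> str:
    """Build a test user name that makes its owning team obvious at a glance."""
    return f"{first_name} {team}TeamUser"

print(team_test_user("Sue", "Green"))  # -> Sue GreenTeamUser
```

When a test fails because a user’s data looks wrong, the name immediately tells you which team owns that user.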

Create new data each time you test
One great way to manage test data is to create the data you need at the beginning of the test. For example, if you need a customer for your test, you create the new customer at the beginning of your test suite, use that customer for your tests, and then delete the customer at the end of your tests. This ensures that your test data is always exactly the way you want it, and it doesn’t add bloat to the existing database.
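The create-use-delete pattern can be sketched with a context manager, which guarantees the cleanup step runs even when a test fails partway through. The data store and helper functions below are hypothetical stand-ins for your application’s real API:

```python
from contextlib import contextmanager

# A stand-in for the real application's data store (hypothetical).
customers = {}

def create_customer(name):
    customer_id = len(customers) + 1
    customers[customer_id] = {"name": name}
    return customer_id

def delete_customer(customer_id):
    customers.pop(customer_id, None)

@contextmanager
def temporary_customer(name):
    """Create a customer before the tests and delete it afterward."""
    customer_id = create_customer(name)
    try:
        yield customer_id
    finally:
        delete_customer(customer_id)  # runs even if a test raises an error

# Usage: the customer exists only for the duration of the tests.
with temporary_customer("Test Customer") as cid:
    assert customers[cid]["name"] == "Test Customer"
assert cid not in customers  # no bloat left behind in the database
```

Test frameworks offer the same guarantee through fixtures or setup/teardown hooks; the point is that the data is born and dies with the test.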

Use “today+1” for dates in the future
Rather than choosing an arbitrary date in the future, you can get today’s date, and then use an operation like DateAdd to add some interval, like a day, month, or year, to today’s date. This way your test date will always be in the future. 
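In Python, for instance, the same trick looks like this, using the standard library’s date arithmetic instead of a hard-coded date:

```python
from datetime import date, timedelta

def future_date(days_ahead: int = 1) -> date:
    """Return a date guaranteed to be in the future, no matter when tests run."""
    return date.today() + timedelta(days=days_ahead)

print(future_date())     # always tomorrow, never a stale hard-coded date
print(future_date(365))  # roughly a year from today
```

Because the date is computed at run time, the test can never go stale the way a literal like "2026-01-01" eventually will.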

Working with test data can be very frustrating. But with some planning and strategy, you can ensure that your data will be correct whenever you run a test. 

Why We Test

Most software testers, when asked why they enjoy testing, will say things like:

  • I like to think about all the ways I can test new features
  • It’s fun to come up with ways to break the software
  • I like the challenge of learning how the different parts of an application work

I certainly agree with all of those statements!  Testing software is fun, creative, and challenging. 

But this is not WHY we are testing.  We test to find out things about an application in order to ensure that our end users have a good experience with it.  Software is built in order to be used for something; if it doesn’t work well or correctly, it is not accomplishing its purpose.  

For example:

  • If a mobile app won’t load quickly, users will stop using it or delete the app from their phone
  • If a financial app has a security breach, the company will lose customers and may even be sued for damages
  • If an online store has a bug that keeps shoppers from completing their purchases, the company will lose out on sales

There are even documented cases of people losing their lives because of problems with software!  

So while it’s fun to find bugs, it’s also critically important.  And it’s even more important to remember that the true test of software is how it behaves in production with real users.  Often testers keep their focus on their test environment, because that’s where they have the most control over the software under test, but it’s crucial to test in production as well.  

I have seen situations where testers only tested a new feature in their test environment, and then were totally surprised when users reported that the feature didn’t work at all in production!  This was because there were environment variables that were hard-coded to match the test environment.  The feature was released to production, and the testers didn’t bother to check it.

Having things “work” in production is only one facet of quality, however.  We also need to make sure that pages load within a reasonable amount of time, that data is saved correctly, and that the system behaves well under heavy load.

Take a moment to think about the application you test.  In production:

  • Is it usable?
  • Is it reliable?
  • Is the user’s data secure?
  • Do the pages load quickly?
  • Are API response times quick?
  • Do you monitor production use, and are you alerted automatically if there’s a problem?
  • Can you search your application’s logs for errors?

Saying “But it worked in the test environment” is the tester’s equivalent of the developer saying “But it worked on my machine”.  It’s fun to test and find bugs.  It’s fun to check items off in test plans.  It’s fun to see test automation run and pass. But none of those things matter if your end user has a poor experience with your application.