How to Use Breakpoints in Dev Tools - Part I

The modern software development team is beginning to blur the traditional lines between developer and tester. This means that developers are starting to get more involved with writing test automation at all levels of the pyramid, and it also means that testers are starting to get more involved with the inner workings of an application. Wouldn’t it be great if you were able to find the causes of JavaScript bugs rather than just reporting on them? Using breakpoints in Chrome’s Dev Tools is a great way to do this!

In this post, I’ll be leading you through a hands-on tutorial on using Chrome Dev Tools for debugging. I learned this information from this helpful Google blog post, but I’m using an example application so you can try the various breakpoint methods yourself.

It’s important to note that in order to use these breakpoints, you’ll need to have your application installed locally. You can’t just go to a deployed version of your company’s application and debug it. This post shows you how to run a sample React application locally; you can work with your developers to get your team’s application running locally as well.

Getting Started

To follow along with this post, you’ll need to have Node.js and Git installed. Then you’ll need to clone this repository. If you’ve never cloned a repository before, you can find instructions here. This repo is a very simple counter application that could be used for something like adding items to a shopping cart. It doesn’t do much, but it does enough that we’ll be able to try out some breakpoints with it.

Once you’ve cloned the repository, you’ll need to do the following:
  • cd counter-app: this moves you to the application folder.
  • npm install: this gets all the packages needed to run the application and builds it.
  • npm start: this starts the application.
You should see the application open up in your Chrome browser. It will look like this:

Next you’ll need to open up Dev Tools. You can do this by using the three-dot menu in the upper-right corner of the Chrome browser. From that menu, choose “More Tools”, and then “Developer Tools”. The Dev Tools will open at the side of the browser, and will look something like this:

We are now nearly ready to begin! With each type of breakpoint, we’re going to set it, watch it work, press the “Resume script execution” button, then clear the breakpoint. The “Resume script execution” button is found at the top of the right-most panel, and looks like this:

To clear a breakpoint, simply right-click on it, and choose “Remove breakpoint”. Let’s get started!

Line of Code Breakpoint
The line of code breakpoint is the most commonly used breakpoint. When the application code is executing and it arrives at this line, the code execution will pause. You can use this pause to analyze the current state of the application, from what is visible to what the variable values are.

If the “Sources” tab is not already selected at the top of the Dev Tools pane, select it now. Then look in the left section of Dev Tools to find the src folder, and inside that folder, click on the App.js file. The App.js file will open up in the center pane of Dev Tools.

Now find line 17, and click on the line number. The number will be highlighted with a blue arrow like this:

This line of code is in the handleIncrement function, which is the function that is called when you press one of the plus (+) buttons in the application. Press a plus button now. You should see Line 17 highlighted, and a message in the rightmost pane that says “Paused on breakpoint”. While you are paused, you can look at the Call Stack and Scope sections to learn more about what’s happening in the application at this moment.
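For reference, an increment handler in a simple React counter app often looks something like the sketch below. This is just a hypothetical illustration so you know what kind of code you’re pausing on; the actual App.js in the repo will differ in its details and exact line numbers.

handleIncrement = (counter) => {
  // Hypothetical sketch only; the real handler in the repo may be different
  const counters = [...this.state.counters]    // copy the array of counters
  const index = counters.indexOf(counter)      // find the counter that was clicked
  counters[index] = { ...counter }             // copy the counter object
  counters[index].value++                      // increment its value
  this.setState({ counters })                  // update state and re-render
}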
Don’t forget to remove the breakpoint on line 17, and click the Resume button before going on to the next section!

Conditional Line of Code Breakpoint
Now let’s try out a conditional line of code breakpoint. This is a breakpoint that only pauses execution if a certain condition is met. Right-click on line 19 of the App.js code, and choose “Add conditional breakpoint”. A breakpoint will be added, and a little popup window will appear where you can add your condition. Add this condition: index === 0 and press the Return or Enter key.

Now that the conditional breakpoint has been set, let’s see it in action. If you haven’t clicked any other buttons since the previous exercise, you should have a counter that is currently set to 1. Click the plus button again. The code execution won’t stop at the breakpoint, because the counter is currently set to 1. Click the minus button (-) to return the counter to Zero. Now click the plus button again. This time the code execution will stop at the breakpoint, because the counter at line 19 is currently at Zero.
Don’t forget to remove the breakpoint on line 19, and click the Resume button before going on to the next section!

DOM Breakpoint

It’s also possible to set a breakpoint directly in the DOM. To do this, we’ll first click on the Elements tab in the top bar of Dev Tools. Change the topmost counter from 1 back to Zero. Now right-click on that Zero counter and choose “Inspect”. The center pane of Dev Tools will navigate to and highlight the section of the DOM where this counter exists. Right-click on that highlighted section and choose “Break on” and then “Attribute modifications”. When this counter moves from Zero to 1, its attributes change (namely, the color), so by selecting “Attribute modifications” we are telling the code to stop whenever an attribute of this element is about to change.
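Why “Attribute modifications”? In a React app like this one, the counter’s CSS class is usually computed from its value, so crossing zero changes the element’s class attribute in the DOM. Here’s a hypothetical sketch (not the repo’s exact code) of that kind of logic:

getBadgeClasses() {
  // Hypothetical sketch: yellow ("warning") badge at Zero, blue ("primary") otherwise
  let classes = "badge m-2 badge-"
  classes += this.state.value === 0 ? "warning" : "primary"
  return classes
}

render() {
  // The displayed text is "Zero" when the value is 0, so the text changes as well
  return <span className={this.getBadgeClasses()}>{this.state.value === 0 ? "Zero" : this.state.value}</span>
}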

Click the plus button and you’ll see the execution stop at the breakpoint, because the color of the counter is about to change from yellow to blue. Click the “Resume script execution” button and you’ll see the code execute and the counter change to 1 (and turn blue). Click the plus button again, and the execution won’t stop at the breakpoint, because when going from 1 to 2, the color doesn’t change. But if you click the minus button twice to go back to Zero, you’ll see the code execution pause at the breakpoint again, because the color of the counter is about to change from blue to yellow.
Don’t forget to remove the breakpoint and click the Resume button when you are done with this section!

This is becoming quite a long post, and we’ve got four more breakpoints to learn about! So I’m going to stop here and continue next week. Until next time!

A Tale of Two Testers

Meet Derek and Emma. They are both Software Test Engineers. Derek works for a company called ContactCo, which is building a web application to allow users to add and manage their contacts. Emma works for a competitor of ContactCo, called ContactsRUs. ContactsRUs is building a similar application to the one ContactCo is building.

Emma is very proud of her ability to create test automation frameworks. As soon as development begins on the new app, she gets to work on a UI automation suite. She writes dozens of well-organized automated tests, and sets them to run with every build that the developers check in. The tests are all passing, so she feels really good about the health of the application. She also creates a set of smoke tests that will run with every deploy to every environment. If the smoke tests pass, the deployment will automatically continue to the next environment, all the way through Production; if the tests fail, the deployment will be rolled back and the deployment process will stop. After just three weeks, she’s got a CI/CD system in place, and everyone praises her for the great job she’s done.

Derek begins his involvement with ContactCo’s new app by attending the product design meetings and asking questions. He reads through the user stories so he understands the end user and knows what kinds of actions they’ll be taking. As the application takes shape, he does lots of manual exploratory testing, both with the API and the UI. He tries out the application on various browsers and with various screen sizes. At the end of the first two weeks of development, he’s found several UI and API bugs that the developers have fixed.

Next, Derek works with developers to find out what unit and integration tests they currently have running, and suggests some tests that might be missing. He talks with the whole team to determine what the best automated framework would be for API and UI testing, and works with them to get it set up. He spends a lot of time thinking about which tests should run with the build, and which should run with the deployment; and he thinks about which tests should be run solely against the API in order to minimize the amount of UI automation. Once he has a good test strategy, he starts writing his automated tests. At the end of the third week of development, he’s got some automated tests written, but he’s planning to add more, and he doesn’t quite have the CI/CD process set up yet.

At the end of the three weeks, both ContactCo and ContactsRUs release their applications to Production. Which application do you think will be more successful? Read on to find out!

**********

Derek’s application at ContactCo is a big hit with users. They comment on how intuitive the user interface is, and by the end of the first week, no bugs have been reported. Customers have suggestions for features they’d like to see added to the application, and the team at ContactCo gets started with a new round of product design meetings, which Derek attends. When he’s not in meetings, he continues to work on adding to the automated test framework and setting up CI/CD.

Emma’s application at ContactsRUs was released to Production, and the very same day the company started to get calls from customers. Most of the ContactsRUs customers use the Edge browser, and it turns out there are a number of rendering issues on that browser that Emma didn’t catch. Why didn’t she catch them? Because she never tested in Edge!

The next day the company receives a report that users are able to see contacts belonging to other customers. Emma thinks that this can’t be possible, because she has several UI tests that log in as various users, and she’s verified that they can’t see each other’s data. It turns out that there’s a security hole; if a customer makes an API call to get a list of contacts, ALL of the contacts are returned, not just the contacts associated with their login. Emma never checked out the API, so she missed this critical bug.

Developers work late into the night to fix the security hole before anyone can exploit it. They’ve already lost some of their customers because of this, but they release the fix and hope that this will be the last of their problems. Unfortunately, on the third day, Emma gets an angry message from the team’s Product Owner that the Search function doesn’t work. “Of course it works,” replies Emma. “I have an automated test that shows that it works.” When Emma and the Product Owner investigate, they discover that the Search function works fine with letters, but doesn’t work with numbers, so customers can’t search their contacts by phone number. This was a critical use case for the application, but Emma didn’t know that because she didn’t attend the product meetings and didn’t pay attention to the feature’s Acceptance Criteria. As a result, they lose a few more customers who were counting on this feature to work for them.

The Moral(s) of the Story

Were you surprised by what happened to ContactsRUs? It might have seemed that they’d be successful because they implemented CI/CD for their application so quickly. But CI/CD doesn’t matter if you neglect these two important steps:

  1. Understand the product you are testing. Know who your end users are, what they want from the product, and how they will be using it. Pay attention in planning meetings and participate in the creation of Acceptance Criteria for development stories.
  2. Look for bugs in the product. Many software test engineers jump right to automation without remembering that their primary role is to FIND THE BUGS. If there are bugs in your product, the end users aren’t going to care about your really well-organized code!

Every good fable deserves a happy ending! Hopefully you have learned from Derek and Emma and will make sure that you understand and test your software before writing good automation.

Adventures in Node: Shorthand and Destructuring

One of the things I struggled with when I started learning object-oriented programming (OOP) was the way objects were created and referenced. Things seemed so much simpler to me years ago when a computer program was in one file and you declared all your variables at the top!

But the more I’ve worked with OOP, the more I’ve been able to understand it. And the work I’ve done while taking this awesome Node.js course has helped me even more. I’ve already written a few posts about the great things I’ve learned in Node.js: Async/Await functions, Promises, and Arrow Functions. In this post, I’ll talk about two great ways to write less code: Object Property Shorthand and Object Destructuring.

First, let’s take a look at how Object Property Shorthand works. We’ll start with a few variables:
var name = 'Fluffy'
var type = 'Cat'
var age = 2

Now we’ll use those variables to create a pet object:
const pet = {
  name: name,
  type: type,
  age: age
}

When we create this object, we are saying that we want the pet’s name to equal the name variable that we set above, the pet’s type to equal the type variable we set above, and the pet’s age to equal the age variable we set above.

But doesn’t it seem kind of silly to have to repeat ourselves in this way? Imagine if the pet object had several more properties: we’d go on adding things like ownerName: ownerName, address: address, and so on. Wouldn’t it be nice to save ourselves some typing?

Good news: we can! If the property name in our object matches the variable name we are setting it to, we can save typing time by doing this:
const pet = {
  name,
  type,
  age
}

That’s a lot easier to type, and to read! If you have Node.js installed, you can try it for yourself by creating a simple file called pet.js. Begin by declaring the variables, then create the object the old way, then add console.log(pet). Run the program with node pet.js, and you should get this response: { name: 'Fluffy', type: 'Cat', age: 2 }. Now delete the object creation and create the pet the new way. Run the program again, and you should get the same response!
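Here’s what the whole pet.js file might look like with the shorthand version, if you’d like something to copy and paste:

// pet.js
var name = 'Fluffy'
var type = 'Cat'
var age = 2

// Property names match the variable names, so we can use the shorthand
const pet = {
  name,
  type,
  age
}

console.log(pet)    // { name: 'Fluffy', type: 'Cat', age: 2 }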

Now let’s take a look at Object Destructuring. We’ll start by creating a new book object:
const book = {
  title: 'Alice in Wonderland',
  author: 'Lewis Carroll',
  year: '1865'
}

The old way of referencing the properties of this book object is to refer to them by starting with “book.” and then adding on the property name. For example:
const title = book.title
const author = book.author
const year = book.year

But just as we saw with Object Property Shorthand, this seems a little too repetitive. If the property name is going to match the variable name we want to use, we can set all of the variables like this:
const {title, author, year} = book

What this command is saying is “Take these three properties from book, and assign them to variables that have the same name.”

You can try this out yourself by creating the book object, setting the variables the old way, then adding a log command: console.log(title, author, year). Run the file, and you should see Alice in Wonderland Lewis Carroll 1865 as the response. Then replace the old variable assignments with the new Destructuring way, and run the file again. You should get the same response.
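And here’s the complete book.js with the destructuring version, for reference:

// book.js
const book = {
  title: 'Alice in Wonderland',
  author: 'Lewis Carroll',
  year: '1865'
}

// Pull all three properties out into variables with matching names
const {title, author, year} = book

console.log(title, author, year)    // Alice in Wonderland Lewis Carroll 1865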

Keep in mind that Shorthand and Destructuring only work if the property name and the variable name match! In our pet example, if we wanted to set a pet property called petAge instead of age, we’d need to do this:
const pet = {
  name,
  type,
  petAge: age
}

And in our book example, if we wanted to use a variable called authorName instead of author, we’d need to do this:
const {title, author:authorName, year} = book
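With that version, the variable we get is authorName, so we’d log it like this:

console.log(title, authorName, year)    // Alice in Wonderland Lewis Carroll 1865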

Even if we don’t have a perfect match between all our object properties and variables, it’s still easy to see how much less typing we need to do when we use Shorthand and Destructuring! I hope you’ll find this helpful as you are writing automation code.

Book Review: Explore It!

I’ve been recommending the book Explore It! by Elisabeth Hendrickson for years, because I had heard the author interviewed in a podcast, and because I was familiar with her Test Heuristics Cheat Sheet. This month, I decided it was about time that I actually read her book, and I am so glad I did!

This book should be required reading for anyone who wants to test software. It contains the most thorough descriptions I have ever read of how to think about what to test. Developers will find the book extremely helpful as well; if they would use its ideas to test their own software, they’d find bugs long before anyone else did!

The author begins the book with a discussion of the difference between exploring and checking. Checking is what a tester does when they want to make sure that the software does what it’s supposed to do. Exploring is what a tester does when they want to find out what happens if the user or the system doesn’t behave as expected. In order to fully test a system, you need to use both methods. I’ve met many software testers who just check the software. Then they (and their team) are surprised when bugs are discovered in production!

My favorite thing about the book is the testing heuristics that are suggested. These are concepts that you can apply to virtually any application, and they’d make a great mental (or physical) checklist when you are testing. Here are some of my favorites:

  • Zero, One, Many: what happens if you have zero of something that is expected, or just one of something where there are supposed to be many, or many of something where there is supposed to be one?
  • Beginning, Middle, End: what if you do something at the beginning of a list or a text block? What if you do it at the end? What if you do it somewhere in the middle?
  • Nesting: for a file structure, XML, or JSON that is nested, how deep can the nesting go? Is there a point where nesting is so deep that it breaks the system?
  • Nouns and Verbs: think about all the nouns in your application, such as “file”, “message”, “record”. Then think about all the verbs in your system, such as “create”, “send”, “delete”. Try pairing the nouns and verbs in new ways and doing those actions to see if you can find any action that is unsupported and causes errors.
  • Reverse: think about the typical user workflow and try it all in reverse. See if you can confuse the system or generate unexpected behavior.
  • Files: delete a file that is expected by the application, or replace it with a corrupted or empty file. See how the system responds.
  • Stress the system: leave the application running overnight and see what happens. Or try chaos testing, either through using chaos testing software or by putting a shoe (or a toddler or cat) on the keyboard.
  • Interrupt the state: see what happens if you disconnect from the network or put the device to sleep in the middle of an operation.

Another thing I really enjoyed about the book was the stories the author told of real-world bugs she found and situations she encountered. These are always so entertaining and instructive for testers! My favorite story was the one where she had a product owner and a lead developer who she was absolutely sure were misunderstanding each other. When she voiced her concerns to each of them, they dismissed her and said they didn’t need another product meeting; they knew they were both clear about the product expectations. Finally, she got them to come to a “test planning” meeting, and when she outlined the most basic use case that the product owner was expecting, the developer said that the use case would be impossible. It was eye-opening for both of them!

Talking with product designers and developers is an important theme in the book. The author advises that it’s important to meet with designers and developers to determine what the Never and Always rules are for a feature or application. For example, if you are testing a financial app, it’s fairly certain that you would Never want a monetary calculation to be rounded incorrectly. And when you are working with a medical device, you would Always want it to work, even in the event of a power failure. When you learn what the Never and Always rules are for your application, you can use that knowledge to design appropriate tests.

Explore It! was clearly written, entertaining, and filled with great advice for exploratory testing and for creating test plans. I recommend it for anyone who wants to make sure they are releasing high-quality software!

How to Be a QA Leader

The most frequent question I get from readers of my blog is one like this: “I’ve just been promoted to QA Lead. What do I need to do to be successful in this position?”

Whether you have been made a lead or a manager, it can be a bit daunting to be leading a group, especially if you have never been a leader before. Here are seven things you can do to be a successful QA leader, gleaned from both my experience as a QA Lead and QA Manager, and from leadership experience I’ve had in other areas of my life.

  1. Pay attention to the needs of your customer
    When we are testing software, it’s easy to get lost in the weeds of the day-to-day testing, without stopping to think about who our end user is and what they need. As a QA leader, it’s important to pay attention in product design meetings and look at the feedback you are receiving from your customers, and pass that information on to your team. When your testers know why a feature is being created and how it is being used, they can make better decisions about what to test.

  2. Communicate company information to your team
    As a leader, you will be invited to attend meetings that your team may not be invited to. This means that you have information about what’s going on in the company, such as whether there will be hiring or restructuring, or what the development strategy will be for the coming year. You should communicate this information to your team so that they will feel “in the loop” and won’t be worried about the company’s future.

  3. Solve problems for your team
    Testers have all kinds of annoyances that keep them from doing their job, for example: test environments that keep crashing, inaccurate test data, and incomplete Acceptance Criteria in stories. The more of these problems you can solve for your team, the happier and more productive they will be.

  4. Provide growth opportunities
    When you are a leader and already know the “right” way to do things, it’s easy to take on all the challenging work for yourself, and give the simpler tasks to your team. But if you do this, your team will never grow! You want your team to improve their testing skills, and the best way to do that is to give them challenges. Identify the next step in the growth of each team member and think of a task they can do to take on that next step. For example, if you have a team member who has been updating existing automated tests, but has never written one herself, challenge her to write a test for a new feature. Provide guidance and feedback when she needs it, and celebrate her success when she accomplishes the task. It’s also possible that your team member might discover a better way to do things than the way you were doing them, which will make your team even more effective!

  5. Express appreciation for your team
    Be sure to publicly praise your testers whenever they do something great, like find an important bug, create reliable test automation, or meet a crucial deadline. And make sure that you express your appreciation for them privately as well, for example: “Thanks for working late on Friday to test that new release in Production. I really appreciate your hard work.” People who feel appreciated are more likely to approach their work with a good attitude, which helps with team cohesion and productivity.

  6. When things go right, give credit to your team
    As a leader, you will probably be praised when your team has a successful software release. Make sure when you get that praise to give credit to your team. For example, you could say, “Well, I’m really grateful to Sue for the test harness she created. It enabled us to test through many more scenarios than we could have done if we were doing all manual testing.” Or, “Joe gets the credit for chasing down that last tricky bug. Customers would have been impacted if he hadn’t been so persistent.” When you do this, your team will see you as their cheerleader, and not as someone who takes all the glory for their hard work.

  7. When things go wrong, accept the blame yourself
    When a crucial mistake is made, such as a bug that made it into production, or important customer requirements that weren’t added to the product, it’s very tempting to play “the blame game”. No one wants to look bad, and if you feel like the mistake wasn’t your fault, you might want to explain whose fault it was. But don’t do this! Take the blame on behalf of the team, and don’t specifically name others. For example, if it was Matt’s job to do the mobile testing, and he only tested on Android, don’t publicly blame Matt. Say: “We forgot to test the new feature on iOS devices. It’s my fault for not checking that the team did this.” After you explain the failure, talk about how you will prevent it in the future: “We now have a feature checklist that the team will be using for each release, which will include line items for both Android and iOS testing.” This is a great way to build team loyalty; when people know that they won’t be publicly shamed for their mistakes, they are more likely to innovate and take on new challenges, and they’ll also be very loyal to you and to the company.

Leadership is such an important skill, and so important in the area of software testing, where we can often be misunderstood or taken for granted. By following these seven steps, you’ll ensure that you have a happy, productive, accurate, and loyal team!

Fix Your Automation Hourglass

You have no doubt heard about the Test Automation Pyramid, where the recommendation is that your code has many unit tests, a smaller number of integration tests, and an even smaller number of UI tests. You may have also heard of the anti-pattern called the Test Automation Ice Cream Cone: this is where the code has many UI tests, a smaller number of integration tests, and an even smaller number of unit tests.

But have you heard of the Test Automation Hourglass? As you can probably guess, this is a situation where the developers have written a lot of unit tests and the testers have written a lot of UI tests, but nobody has written any integration tests. This is often a symptom of having test automation silos, which I wrote about a few weeks ago. Usually the Test Automation Hourglass means that you have too many UI tests. And as I’ve written about before, having too many UI tests means slower test runs. Below are three ways to fix this issue.

Step One: Look at your entire test suite
The first step in correcting your Test Automation Hourglass is to meet with your developers and take a look at the entire test suite. What unit tests are you running? What UI tests are you running? Do you have any integration tests, and if so, what are they testing?

Identify any duplicated efforts. For example, do you have a UI test that’s already being covered by a unit test? For any duplicated tests, delete the test that’s closest to the UI.

Take a look at your UI tests. Are there any that are testing code logic that would be better tested at the unit or integration level? Make a list of those tests and begin to consider them as tech debt. You can address that tech debt by creating a replacement unit or integration test. When the new test is created, you can delete the old test.

Identify missing tests. Are you fully exercising your business logic? There may be many API tests you can write to fix this. Think about what you could do with Create, Read, Update, and Delete requests that you may not be currently testing. Make a list of these missing tests and add them to your product backlog.

Step Two: Identify ways you can run integration tests
There are many different types of integration tests, and many different ways to run them. Here are some examples:

Testing directly in the code: this is a good place for tests that just want to validate that it’s possible to connect to a datastore or to a dependency such as a third-party API.

Using unit test tools such as Jest, Mocha, or Jasmine: these tools are commonly used by developers for unit tests, but they are great for integration tests as well. Calls to create, update, or delete data can be tested here.
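As a quick illustration, here’s a minimal sketch of what an integration test in Jest might look like. The service module and functions here are hypothetical; the point is simply that the test exercises the data layer directly, with no browser or UI involved.

// contacts.integration.test.js - a hypothetical Jest integration test
// Assumes your application exposes a data-access module like this one
const { createContact, getContact } = require('./contactService')

test('a created contact can be read back', async () => {
  const created = await createContact({ name: 'Amy Adams', phone: '555-1234' })
  const fetched = await getContact(created.id)
  expect(fetched.name).toBe('Amy Adams')
})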

Using API test tools such as Postman: APIs can be tested directly in the code or through unit testing tools, but it’s especially easy to set up API tests with tools specifically designed for that purpose. API test tools typically come with some kind of command-line runner, which makes it easy to automate the tests. For example, Postman uses Newman to automate API test execution.
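For example, Newman can also be called from a Node script, which makes it easy to drop an exported Postman collection into a build pipeline. Here’s a minimal sketch; the collection file name is just a placeholder for your own exported collection.

// run-api-tests.js - run an exported Postman collection with Newman
const newman = require('newman')

newman.run({
  collection: require('./contacts-api.postman_collection.json'),  // placeholder path
  reporters: 'cli'
}, function (err) {
  if (err) { throw err }
  console.log('Postman collection run complete')
})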

Step Three: Start converting your tests
Once you have a list of the tests that you need to convert or write, you can start working on them. If you have chosen a test tool that both developers and testers feel comfortable with, you can all work on your test backlog.

Don’t try to change everything at once! Set some goals for the sprint, the month, or the quarter. Order your test backlog by importance, just as you would for other tech debt. As you gradually move your tests from the UI level to the integration level, make sure to celebrate your successes! You’ll be rewarded with faster-running, more reliable tests.

UI Unit Testing

Are you confused by this title? We generally think of unit testing as something that’s done deep in the code, and we think of UI testing as something that’s done with the browser once an application has been deployed. But recently I learned that there’s such a thing as UI unit testing, and I’m really excited about it!

Wikipedia defines unit testing in this way: “a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use”. We can think of individual UI components as units of source code as well. Two popular front-end frameworks make it easy to do UI unit testing: React and Angular. In this post, I’ll walk through a simple example for each.

Fortunately the good people who write documentation for React and Angular have made it really easy to get set up with a new project! You can find the steps for setting up a React project here: https://create-react-app.dev/docs/getting-started/, and the steps for setting up an Angular project here: https://angular.io/guide/setup-local. I’ll be walking you through both sets of steps in this post. Important note: both React and Angular will need to have Node installed in order to run.

React:
From the command line, navigate to the folder where you would like to create your React project. Then type the following:
npx create-react-app my-react-app
cd my-react-app

This will create a new React project called my-react-app, and then change directories to that project.
Next you’ll open that my-react-app folder in your favorite code editor (I really like VS Code).
Open up the src folder, then the App.js file. In that file you can see that there is some HTML which includes a link to a React tutorial with the text “Learn React”.
Now let’s take a look at the App.test.js file. This is our test file, and it has one test, which checks to see that the “Learn React” link is present.
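At the time of writing, the generated App.test.js looked something like this (the exact contents vary between create-react-app versions, so your file may be slightly different):

import React from 'react';
import { render } from '@testing-library/react';
import App from './App';

test('renders learn react link', () => {
  const { getByText } = render(<App />);
  const linkElement = getByText(/learn react/i);
  expect(linkElement).toBeInTheDocument();
});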

Let’s try running the test! In the command line, type npm test, then the Enter key, then a. The test should run and pass.

If you open the App.js file and change the link text on line 19 to something else, like Hello World!, and save, you’ll see the test run again and fail, because the link now has different text. If you open the App.test.js file and change the getByText on line 7 from /learn react/ to /Hello World!/ and save, you’ll see the test run again, and this time it will pass.

Angular:
To create an Angular app, you first need to install Angular. In the command line, navigate to the folder where you’d like to create your new project, and type npm install -g @angular/cli.
Now you can create your new app by typing ng new my-angular-app. Then change directories to your new app by typing cd my-angular-app.

Let’s take a look at the app you created by opening the my-angular-app folder in your favorite code editor. Open the src folder, then the app folder, and take a look at the app.component.ts file: it creates a web page with the title ‘my-angular-app’. Now let’s take a look at the test file: app.component.spec.ts. It has three tests: it tests that the app has been created, that it has my-angular-app as the title, and that the title is rendered on the page.

To run the tests, simply type ng test in the command line. This will compile the code and launch a Karma window, which is where the tests are run. The tests should run and pass.

Let’s make a change to the app: in the app.component.ts file, change the title on line 9 to Hello Again!. The Karma tests will run again, and now you will see that two tests are failing, because we changed the title. We can update our tests by going to the app.component.spec.ts file and changing lines 23, 26, and 33 to have Hello Again! instead of my-angular-app. Save, and the tests should run again and pass.

Did you notice something interesting about these tests? They didn’t actually require the application to be deployed! The Angular application did launch a local browser instance, but it wasn’t deployed anywhere else. And the React application didn’t launch a browser at all when it ran the tests. So there you have it! It’s possible to run unit tests on the UI.

Why We’ll Always Need Software Testers

Are you familiar with the Modern Testing Principles, created by Alan Page and Brent Jensen? I first heard of the principles about a year ago, and I was really excited about the ideas they contained. But I was uncomfortable with Principle #7, which is “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.” Eliminating a testing specialist? This seemed wrong to me! I thought about it over several months and realized that yes, it’s possible for a team to develop such a good testing mindset and skillset that a dedicated testing expert wouldn’t be needed. But don’t hang up your QA hat just yet! Here are three reasons we will always need software testers.

Reason One: Teams Change

Did you hear about the software team that was so good that it never changed personnel? Nope, neither did I. Life brings about changes, and even a perfect team doesn’t last forever. A member could retire, take another job, or be moved to another team. New members could join the team. I’ve heard that every time a team changes even by one person, that team is brand new. This means that there will be new opportunities to coach the team to build in quality.

Even if the team members don’t change, there can be changes that are challenging for a team, such as a new technical problem to solve, a big push towards a deadline, or a sudden increase in the number of end users. All of these changes might result in new testing strategies, which means the team will need a software tester’s expertise.

Reason Two: Times Change

When I started my first software testing job, I had never heard of the Agile model of software development. But soon it became common knowledge, and now practically no one is using the old Waterfall model. Similarly, there was a time when I didn’t know what CI/CD was, and now it’s a goal for most teams. There are pushes to shift testing to the left (unit and integration tests running against a development build), and to shift testing to the right (testing in production).

Some of these practices may prove to be long-lasting, and some will be replaced by new ideas. Whenever a new idea emerges, new ways of thinking and behaving are necessary. Software testing experts will be needed to determine the best ways to adapt to these new strategies.

Reason Three: Tech Changes

Remember when Perl was a popular scripting language? Or when Pascal was the language of choice? The tools and languages that are in use today won’t be in use forever. Someone will come along and create a newer, more efficient tool or language that will replace the previous one. For example, Cypress recently emerged as an alternative to Selenium Webdriver, which has been the tool of choice for years. And companies are frequently moving toward cloud providers such as AWS or Microsoft Azure to reduce the processing load on their servers.

When a team adopts a new technology, there will always be some uncertainty. As a feature is developed, there may be changes made to the configuration or coding strategy, so it may be unclear at first how to design test automation. It can take a team a while to adapt and learn what’s best. A testing expert will be very valuable in this situation.

Change happens, and teams must adapt to change, so it is helpful to have a team member who understands the best way to write a test plan, conduct exploratory testing, or evaluate risk. So don’t go looking for a software development job! Your testing expertise is still needed.

Book Review: Performance Testing - A Practical Guide and Approach

It’s book review time once again, and this month I read “Performance Testing - A Practical Guide and Approach” by Albert Witteveen. I’ve been looking for a resource on performance testing for a long time, because I’ve found that most of the articles and presentations on performance testing either assume a lot of prior knowledge, or focus on using a specific tool without explaining the reasoning behind what is being tested.

This book was definitely what I needed! It explains very clearly why we should be doing performance testing, what kinds of tests we should run, how we should set the tests up, how to run them, and how to report on the results.

Here are some of the things I learned in this book:

Performance testing simply means testing a system to see if it performs its functions within given constraints such as processing time and throughput rate.

Load testing is measuring the performance of an application when we apply a load that is the same as what we would expect in the production environment.

Stress testing is finding out at what point an application will break when we apply an increasing load, and determining what will happen when it breaks.

Endurance testing is about testing with a load over an extended period of time. This can be helpful in discovering problems such as memory leaks.

Virtual users refers to the number of simulated users we are using to test the system.

Performance tests need to be planned thoughtfully; it’s not just a matter of throwing load on every web page or API call. The tester needs to think about where the potential risks of load are and design tests to investigate those risks.

How to create a load test:

  • Generally load tests are created by recording an activity in order to create a test script. Next, you’ll need to add a data set for use in testing.
  • You should run your load test first with just one user. You’ll need to build in some kind of check in your script to make sure that the script is working as you are expecting it to (there’s a small sketch of this after the list). For example, if you are load testing the login page, you’ll want to see that the user was able to log in successfully instead of getting an error message.
  • Once your script is working well with one user, try it with three users, and make sure that it’s using unique values from your test data instead of using the same values three times.
  • When you have validated that your script is working correctly, you can execute your load test by adding the number of virtual users and ramp-up time that are appropriate to what you would expect your application to be able to handle in production.
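To make that single-user validation step concrete, here’s a very small Node sketch that logs in once and checks the response before any real load is applied. The URL and credentials are placeholders, and it assumes a recent Node version with built-in fetch; a real load test would normally be built in a dedicated tool, but the principle of building a check into the script is the same.

// load-check.js - single-user validation before adding virtual users
// Placeholder URL and credentials; assumes Node 18+ (built-in fetch)
async function loginOnce(user) {
  const response = await fetch('https://example.com/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(user)
  })
  const body = await response.json()
  // The check: verify a real, successful login rather than just any response
  if (response.status !== 200 || !body.token) {
    throw new Error('Login did not succeed - fix the script before applying load')
  }
}

loginOnce({ username: 'testuser1', password: 'placeholder-password' })
  .then(() => console.log('Single-user check passed'))
  .catch((err) => console.error(err.message))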

It’s very important to monitor things such as CPU usage, memory usage, database activities, and network activity when you are running a load test. Just measuring response times isn’t going to give you the whole picture.

It’s also important to know what kind of queuing your application is using so you can locate bottlenecks in performance. The author uses an easy-to-understand supermarket analogy:

  • A small market with just one checkout lane is like a system with a single CPU. Every customer has to go through this queue, and it’s first come, first served.
  • A larger market with more than one checkout lane is like a system with multiple web servers. Every customer (or in the case of the web servers, the load balancer) picks one checkout lane, and waits to go through that lane.
  • A deli where the customer takes a number and waits their turn is like Web server software where multiple workers can process the request. The customer waits their turn and can be picked up by any one of the worker processes.

Load testing tools themselves generate load when they are running! For that reason, it’s best to keep test scripts simple.

This is just a small sampling of what I learned from this book. It’s a short book, but filled with great explanations and advice. One thing worth mentioning is that there are a number of grammatical errors in the book, and even a few chapters that are missing the last words in the last sentence. It makes reading the book a little slower, since the reader sometimes has to guess at what was meant.

But in spite of these issues, it’s a great book for getting started with performance testing! I recommend it to anyone who would like to understand performance testing or run load tests on their application.

Adventures in Node: Async/Await

As I’ve mentioned in previous posts, I’ve been taking an awesome course on Node for the last several months. This course has really helped me learn JavaScript concepts that I knew about but didn’t understand.

Last month I explained promises. The basic idea of promises is that node functions happen asynchronously, so promises are like place-savers that wait for either a resolve or reject response.

Here’s an example of a function with a promise:

const add = (a, b) => {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(a+b)
        }, 2000)
    })
}

In this function, we’re simply adding two numbers. The setTimeout for two seconds is added to make the function take a little longer, as a real function would.

Here’s what calling the function might look like:

add(1, 2).then((sum) => {
    console.log(sum)
})

The add function is called, and then we use the then command to log the result. Using then() means that we’re waiting for the promise to be resolved before we go on.

But what if we needed to call the function a second time? Let’s say we wanted to add two numbers, then we wanted to take that sum and add a third number to it. For this we’d need to do promise chaining. Here’s an example of promise chaining:

add(1, 2).then((sum) => {
    add(sum, 3).then((sum2) => {
        console.log(sum2)
    })
})

The add function is called, and we use the then() command with the sum that is returned. Then we call the function again, and use the then() command once again with the new sum that is returned. Finally, we log the new sum.

This works just fine, but imagine if we had to chain a number of function calls together. It would start to get pretty tricky with all the thens, curly braces, and indenting. And this is why async/await was invented!

With async/await, you don’t have to chain promises together. You create a new async function call, and then you use the await command when you are calling a function with a promise.

Here’s what the chained promise call would look like if we used async/await instead:

const getSum = async () => {
    const sum = await add(1, 2)
    const sum2 = await add(sum, 3)
    console.log(sum2)
}

getSum()

We’re creating a new async function called getSum, by using this command:
const getSum = async () =>. In that function, we’re first calling the add function with an await, and we’re setting the result of that call to the variable called sum. Then we’re calling the add function again with an await, and we’re setting the result of that call to the variable called sum2. Finally, we’re logging the value of sum2.

Now that the async function has been created, we’re calling it with the getSum() command.

It’s pretty clear that this code is easier to read with async/await! Keep in mind that promises are still being used here; the add() function still returns a promise. But async/await provides a way to call a promise function without having to add in a then() statement.

Now that I understand about async/await, I plan to use it whenever I am doing development or writing test code in Node. I hope you’ll take the time to try out these examples for yourself to see how they work!