New Year’s Resolutions for Testers

There’s nothing like a shiny new year to make people think about how they can improve their careers! Here are five ideas for software testers in 2025:

I. Learn Test Automation

Will this be the year that you finally, FINALLY, learn to create automated tests? Coding skills make you more employable and can qualify you for better-paying roles. I’ve made learning test automation easy through my completely FREE YouTube course, Monday Morning Automation! The lessons start with the command line, then move into Git commands, then teach JavaScript basics, and finally get you started on automating tests with Cypress. Each lesson takes about ten minutes, and when you have completed the lessons, you will know enough to create automated tests for your own company’s product.

II. Read a Book

Books are a great way to dive deeply into understanding a topic. Here are three books I recommend:

  • The Complete Software Tester: I wrote this book to help beginning and mid-level testers improve their testing in all areas, including API testing, security testing, and performance testing. It’s available in Kindle and paperback formats, and is also available in Russian!
  • Logical Fallacies for Testers: This is a super-short book that you could read in one weekend. I wrote it to help testers think critically when evaluating their test projects.
  • Explore It! by Elisabeth Hendrickson: I recommend this book to every tester I know, and to everyone who is considering a career in testing. Exploratory testing is so important, because this is where the tricky bugs are found.

III. Take a Course

Whenever I’m struggling to understand something, I know it’s time for me to take a hands-on course. Following along with the course and actually doing the activities is a great way to solidify your understanding of concepts. Here are four courses I recommend:

  • Postman Essential Training: I created this LinkedIn Learning course to help people understand how to create and execute API tests. API tests are faster and more reliable than UI automation and also provide a great way to test business logic and find potential security issues.
  • Playwright: Web Automation Testing From Zero to Hero by Artem Bondar: This Udemy course is a great introduction to Playwright. Playwright is a powerful test automation tool, but it’s a little trickier than Cypress. I found this course extremely helpful for understanding Playwright concepts.
  • The Complete Node.js Developer Course by Andrew Mead: I took this Udemy course when I needed to learn how to create a test application. It is super clear and provides exercises that force you to think about what you’re doing. Understanding Node and JavaScript is very useful for Web testing.
  • Android 14 App Development Bootcamp 2024 by Vin Norman: This is another Udemy course, which I took when I was doing mobile testing and wanted to understand more about how mobile apps are developed. I was surprised to find that the course was incomplete when I took it, but it still provided me with a good understanding of how Kotlin development works.

IV. Attend a Meetup or Conference

Meetups and conferences can be really helpful in introducing you to new concepts. It’s so great that modern technology allows us to attend virtual meetups and conferences anywhere in the world! Here are a few meetup and conference ideas:

  • Next Wednesday (January 8, 2025), I’ll be presenting at the Testomat.io Test Automation Meetup. My topic will be “Configuring Playwright for Test Automation Success”. If you are just getting started with Playwright, or are thinking about using it, this presentation should be very informative!
  • Automation Guild ’25: This online conference is coming up in February and is a great way to improve your test automation skills. Once you have registered, the training sessions are available for you to attend in real time, or to view anytime via recordings and transcripts.
  • Testflix: I love this conference because each presentation lasts less than 15 minutes, which is great for my short attention span! The conference features presenters from around the world on a wide variety of themes, such as API Testing, Career Development, and AI and ML in Testing. The conference usually happens in October, so be sure to watch for it later this year!

V. Learn a New Skill

Is there an area of testing that you have always wanted to know more about, but you haven’t had either the time or the will to learn it? For me in 2024, that was API Contract Testing. I had always wanted to learn it, but reading about it in blog posts wasn’t providing me with enough understanding. Fortunately, Marie Cruz and Lewis Prescott came out with a great book in 2024 called Contract Testing in Action! Between the examples in that book, and the hands-on learning project at pact.io, I was able to finally figure it out.

Learning new skills can be daunting, and sometimes frustrating! But it’s worth the journey. You will become a better thinker and a better tester in the process, and the more you learn, the more your skills will be in demand! Let’s make 2025 our best year ever!

What Does Test Coverage Mean?

We live in a time where it is easy to measure things. Websites measure visits from users all over the world; YouTube videos measure views and likes; mobile apps measure crash statistics. So it makes sense that software managers would want to measure quality activities.

Unfortunately, we don’t always have clear language to describe what is being measured. You may have heard a manager talk about getting to “100% Test Coverage”. But what do they mean by this statement? Here are a few things this could mean, and one thing that this cannot mean.

Test Coverage can actually mean Code Coverage

Sometimes when people refer to test coverage, what they really mean is code coverage. Code coverage is the percentage of lines of code that are executed when automated tests run against the product or feature. Areas that are not touched by automated tests could indicate gaps in testing. Unit tests are a great way to increase code coverage, because they are designed to test the code directly. And there are a number of tools available to measure code coverage, such as dotCover, Coverlet, Cobertura, and SonarQube.

Test Coverage does not mean the percent of every possible test

A common mistake made by those who have never tested software is thinking that there are a finite number of things that can be tested in an application, so it is possible to have 100% test coverage. This is simply impossible: there is no limit to the number of tests that could be created, even for the simplest applications. Consider a simple web form with five fields, none of which are required. A user could fill in just one field (five different possibilities), just two fields (ten different possibilities), three fields (ten different possibilities), four fields (five different possibilities), or all five fields (one possibility). That’s 31 test cases, and that isn’t even considering negative testing. What if one field has an error and two fields are correct? What if three fields have an error and one field is correct? It’s easy to see how the number of test scenarios can quickly climb into the millions. So no team will ever reach 100% test coverage as defined here.
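
To make the arithmetic concrete, here’s a minimal sketch in TypeScript; the choose() helper is hypothetical, not part of any testing library:

// Binomial coefficient: the number of ways to pick k fields out of n
function choose(n: number, k: number): number {
    let result = 1
    for (let i = 1; i <= k; i++) {
        result = (result * (n - i + 1)) / i
    }
    return result
}

let totalCombinations = 0
for (let k = 1; k <= 5; k++) {
    totalCombinations += choose(5, k) // 5, 10, 10, 5, 1
}
console.log(totalCombinations) // 31 positive cases, before any negative testing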

Test Coverage can refer to Automation Coverage

It’s possible to use the term test coverage to refer to the percent of manual test cases that have been automated. This is useful for showing the status of an automation project. If you have 500 documented manual test cases, and you have automated 200 of them, your automation coverage is 40%. However, this meaning is less useful if you don’t already have a documented suite of manual tests. If only a handful of manual tests have been documented, a team could claim to have 100% automation coverage after automating those tests. Meanwhile, hundreds of other test cases could be neglected.

Test Coverage can refer to Feature Coverage

Another possible meaning for test coverage could be the measurement of which application features have existing tests. If a product has ten features, and the tester executes tests for five of those features, it would be possible to say that 50% of the features have been tested. In this usage, it’s also important to distinguish what is meant by “tested”. Tested could mean that exploratory testing has been conducted and documented, but no regression test suite has been created. Or it could mean that manual regression tests have been created, or that test automation has been written. Also keep in mind that it’s not possible to say that a feature has 100% coverage, for the same reason listed above.

What can Test Coverage tell us?

Test coverage can tell us how many lines of code are executed by tests, how much testing has been done on a feature, how many regression tests have been created, or how many automated tests have been written. But it is only useful if all members of the team or organization have agreed upon what the term means. If your team is asked to start measuring “Test Coverage”, share this blog post with the requester and ask them to clarify what information they are looking for.

Five Quality Changes to Make When Your Company Grows

Working for a startup is fun, because a small group of people get to make all the decisions. You know the whole team well, and decisions can be made with a few simple conversations.

As a company grows, you don’t know all your colleagues as well, but it’s still fairly easy to think of them as part of one big family. Each team can have its own autonomy to decide things, and teams can operate in spite of cross-team differences.

But when a company grows to have 50 or 100 software teams, team differences can start to take their toll. It can become very cumbersome to determine who is responsible for which product, and which team’s deployment broke your team’s product.

At this point it’s time to make some quality changes. Here are five you should consider:

Naming

When you’re at a small company, it can be fun to give your teams whimsical names: The Jetsons! The Hobbits! It’s easy to know who’s who because there are so few teams. But the more teams you have, the harder it’s going to be to figure out who does what. Which team should you talk to about authentication? Is it Team Magneto, or The Borg? Which team pushed a change to their API which broke your entire backend: Team Fluffy Bunnies, or Team Doge?

At a large company, team names should reflect either the product the team works on or the team’s function. Yes, having serious names is less fun, but trying to remember whether you should page Team Black Widow or Team Hawkeye when your site goes down is also not fun.

Documenting

At a small company, knowledge can be passed down tribally. If a new hire has questions, they can just ask you, and you can spend a few minutes with them telling them what they need to know. You can even keep this going for a while at a medium-sized company, if you have some motivated people who are happy to spend time answering others’ questions. But at a large company, trying to spread tribal knowledge becomes an incredible time suck.

Documenting information not only helps with cross-team communication, it can also lessen your workload. If you are the only person who knows how to do something, who’s going to get called when that something needs to be done? You, that’s who! Providing training to a few others can be helpful, but that just means the workload has been spread to a handful of people, who will all be overwhelmed with inquiries. But if you can create clear documentation so that anyone can read exactly what to do and do it without asking any questions, you’ve just freed up a ton of time for yourself and for your fellow experts.

Monitoring

At a small company, everyone knows what’s going on. A deployment is common knowledge, because most of the company has been working on it. Even at a medium-sized company, it’s fairly easy to keep track of what other teams are doing. But at a large company, a team that you don’t even know about can do a deployment that you don’t know is coming, and that deployment can have ripple effects on your team’s product!

This is why monitoring is so important. It’s not just the changes that your team is making that can result in failures of your product. It’s not even just the teams adjacent to you that can affect your product. Having monitoring in place can alert you the second things go wrong with your application. You may even be able to pinpoint the issue, identify the team that caused the failure, and have them fix the problem before any of your end users notice.

Standardizing

Can you imagine a scenario where each developer on a team decides to code in their favorite language? One might choose Python, and another C#. Of course it’s easy to see that this makes no sense. How could the team share their code? But for some reason, teams don’t realize that this is also a problem with test automation code. In large companies, the different systems are dependent upon one another. One team creating a chat feature will be dependent upon another team’s authentication software, and yet another team’s notification software. Having automation code that is in the same language and follows the same standards from team to team makes it easy to understand all the automated tests. Moreover, it makes it easy to set up systems where a build in one team’s product can trigger tests in another team’s product. It also makes it easier on the team setting up the CI/CD environment, because they don’t have to set up systems to accommodate many different test frameworks.

Measuring

In a small or medium-sized company, it’s pretty easy to notice when there is a quality problem. When a product is unreachable or when a customer complains about a critical bug, everyone hears about it. But the larger the company gets, the more likely it is that some problems won’t be common knowledge. For this reason, it is a great idea to set up metrics to measure team success. Metrics to measure could include bugs found by the team, bugs found by customers, average response time, and average error rate.

As I mentioned in a previous post, you often can’t compare the metrics of one team to the metrics of another team. But you can compare a team’s metrics to their previous metrics. What was the team-to-customer bug ratio six months or a year ago? Was it better or worse than it is today? And if one team’s metrics seem wildly different from all the other teams’ metrics, that might be a data point worth looking into.

Change is not often easy, especially when it comes to companies that have become used to their own way of doing things. But as your company grows, it’s important to grow in your quality maturity as well, and this can mean adopting new standards for naming, documentation, monitoring, tool standardization, and success metrics.

Testathon

Recently, I was talking with some technology leaders at my company about how we needed to encourage testers to do more exploratory testing. We were getting ready to hold our second annual Hackathon, and one of our group suggested, “Why not have a competition for testing as well?” And thus, Testathon was born!

To prepare for Testathon, we invited people to sign up in teams of three to five. Testathon was open to everyone, not just software testers. We had 48 participants sign up in ten teams; most of the participants were testers, but we had a few software engineers, managers, and product owners as well.

Once I knew how many teams we had, I chose ten products for the teams to test. I made sure that no person was testing their own product. I worked with the product teams to create documentation and user logins that the test teams could use during Testathon.

Testathon lasted two days; during those days the teams tested their assigned products in as many ways as they could think of. They logged the bugs they found in a special Jira board, and marked them with their team name.

Once Testathon was over, five judges, all Principal Software Test Engineers, divided up the bugs and worked for days to attempt to reproduce them. This showed the importance of being able to write up good repro steps in a bug; if a judge wasn’t able to reproduce the bug with the steps provided, they moved the bug to Closed.

Bugs that were reproduced were assigned a severity and labeled by bug type, such as API, Performance, Security, Usability, and so on.

Next, each of the judges nominated which bugs they thought should win for each category. Our categories included: Most Unique Bug, Best Bug Report, Most Critical Bug, Best Security Bug, Best API Bug, Best Usability Bug, and Best Performance Bug.

The judges convened and voted on a winner for each category, and then announced the winners to the whole Engineering group. The winners received cash prizes.

We received lots of positive feedback from the Testathon participants. They really enjoyed working with testers from other teams, testing new products, and learning test strategies from each other. Moreover, some of the bugs logged were great finds! A serious security bug was identified and fixed in two days. Another tricky bug that a product team had known about, but couldn’t consistently reproduce, was finally given good repro steps so the team could fix it. Many API bugs were found where the response code was incorrect. And several small UI bugs were found that, when fixed, will improve the user experience.

After Testathon, all of the bugs were presented to the product teams for analysis. They sorted through the bugs for duplicates, and moved any new bugs to their Jira board.

Testathon was a great way for our company to learn the value of exploratory testing! We are already looking forward to our next Testathon. Perhaps you might find a Testathon valuable for your company!

What Does a Bug Count Mean?

As software companies grow, they often find it necessary to start measuring things. When a company has just a couple of dozen people, it’s pretty easy to see whether a software tester is performing well, or whether they are struggling. But when a company grows to several hundred or several thousand people, it becomes more difficult to keep track of how everyone is doing.

At this point, the management of the growing company will conclude that they need to look at metrics to gauge how a software team is performing. And often, it will be suggested that teams start counting their bugs. But what does a bug count mean? This post will discuss what a bug count will and will not tell you.

A Bug Count By Itself Tells You: Absolutely Nothing
Saying that a team found fifty bugs this month means nothing. It makes as much sense as trying to determine whether a book is good by counting how many pages it has. “Was it a good book?” “Well, I haven’t read it, but it has five hundred pages!!!”

Comparing Bug Counts Between Teams Tells You: Absolutely Nothing
The number of bugs a team finds is dependent on many things. Each product tested is different. One team’s product might be very simple or very mature, while another team’s product might be quite complex or brand new. Further, each team might have a different way of logging bugs. One team might have a practice of verbally reporting bugs to the developer if the developer is still working on the feature. Another team might have a practice of logging every single bug they find, down to a single misplaced pixel. Finally, one person’s single bug might be three bugs to someone else. Consider two teams who test their product on iOS and Android. One team might log a single bug for a flaw they’ve found on both operating systems, while another team might log two bugs for the flaw, one for each operating system.

Comparing Bug Counts by the Same Team Over Time Tells You: Maybe Something
If you are tracking the number of bugs a team finds each month, and there is a big change, that might mean something. For example, an increase in the number of bugs found could mean:

  • The testers are getting better at finding bugs
  • The developers are creating more bugs
  • There’s a new complicated feature that’s just been added

But it could also mean:

  • There’s been a change in the procedures for logging bugs
  • Last month, half the team went on vacation so there was less work done, and now everyone is back

Analyzing Who Found The Bug, the Customer or the Tester, Tells You: Likely Something
When bugs are logged, it’s important to know who found them and when. Obviously, the best-case scenario is for the tester to find a bug in the test environment, long before the feature goes to production. The worst-case scenario is for the customer to find a bug in production. So if you pay attention to what percentage of logged bugs are found by testers and what percentage are found by customers, this could tell you something, especially if you look at the metric over time.

If your team started to track this metric, and the first month you tracked it, 75% of the bugs were found by the testers and 25% of the bugs were found by the customers, you’d have a baseline to compare to. Then in the second month, if 85% of the bugs were found by the testers and 15% of the bugs were found by the customers, you could surmise that your team is getting better at finding the bugs before the customers find them.
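
As a rough sketch (the data shape and values here are hypothetical, just to illustrate the calculation), tracking this metric could look something like this:

// Hypothetical monthly bug counts, just to illustrate the calculation
interface MonthlyBugCounts {
    foundByTesters: number
    foundByCustomers: number
}

function testerFoundPercentage(month: MonthlyBugCounts): number {
    const total = month.foundByTesters + month.foundByCustomers
    return (month.foundByTesters / total) * 100
}

console.log(testerFoundPercentage({ foundByTesters: 75, foundByCustomers: 25 })) // 75
console.log(testerFoundPercentage({ foundByTesters: 85, foundByCustomers: 15 })) // 85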

Metrics are a double-edged sword: on the one hand, they can be used to illustrate a point, such as how well a team is performing, whether more hiring is needed, or whether there’s a problem in your software release processes. But on the other hand, metrics can be weaponized, manipulated, and gamed. By considering all the possibilities of what the number of bugs could mean in a specific context, you’ll be more likely to find metrics that will be helpful and instructive.

SOLID Principles for Testers: The Dependency Inversion Principle

It’s time for the last SOLID principle! The Dependency Inversion Principle has two parts, and we’ll take a look at them one at a time. First, the principle states that “High-level modules should not depend on low-level modules, but should instead depend on abstractions.”

In order to understand this, we first need to know the difference between “high-level modules” and “low-level modules”. A low-level module is something that handles one specific task, such as making a request to a database or sending a file to a printer. For the first example in this post, we’ll use a class called “AddText” that will clear a text field and enter new text into it:

// These examples assume Selenium WebDriver, e.g.
// import { By } from 'selenium-webdriver', with a driver instance already created
class AddText {
    clearAndEnterText(id: string, value: string) {
        driver.findElement(By.id(id)).clear().sendKeys(value)
    }
}

A high-level module is something that provides core functionality to an application or a test, such as generating a report or creating a new user. In this example, we’ll use a class called “UpdatePerson” that will update a value in a person record:

class UpdatePerson {
    private addText: AddText
    constructor(addText: AddText) {
        this.addText = addText
    }
    update(updateId: string, updateValue: string) {
        this.addText.clearAndEnterText(updateId, updateValue)
    }
}

The way we would update a record in this example is by first initializing an instance of the AddText class, then initializing an instance of the UpdatePerson class, and then calling the update function to make the update:

const addText = new AddText()
const updateUser = new UpdatePerson(addText)
updateUser.update('lastName', 'Smith')

But this example violates the Dependency Inversion Principle! The “UpdatePerson” class, a high-level module, depends directly on the “AddText” class, a low-level module. If the signature (parameters and return type) of the “clearAndEnterText” function changes in the “AddText” class, the “UpdatePerson” class will have to change as well.

So let’s update our code to comply with the principle. Instead of creating an “AddText” class, we’ll create an “AddText” interface:

interface AddText {
  clearAndEnterText(id: string, value: string): void
}

Then we’ll create a class called “PersonForm” that will implement the interface:

class PersonForm implements AddText {
    clearAndEnterText(id: string, value: string) {
        driver.findElement(By.id(id)).clear().sendKeys(value)
    }
}

And finally, we’ll update our UpdatePerson class so that it depends on the AddText interface rather than on any concrete class:

class UpdatePerson {
    private form: AddText
    constructor(form: AddText) {
        this.form = form
    }
    update(updateId: string, updateValue: string) {
        this.form.clearAndEnterText(updateId, updateValue)
    }
}

Now we can update the person’s value by first creating an instance of the PersonForm class, and then creating and using the UpdatePerson class:

const userForm = new PersonForm()
const updateUser = new UpdatePerson(userForm)
updateUser.update('lastName', 'Smith')

Now both the PersonForm class and the UpdatePerson class depend on the AddText interface instead of on each other. If the way PersonForm implements “clearAndEnterText” changes, we’ll need to update the PersonForm class, but as long as the method still matches the interface, we won’t need to make changes to the UpdatePerson class.

The second part of the Dependency Inversion Principle states that “Abstractions should not depend on details; details should depend on abstractions”. An abstraction is an interface or abstract class that defines a set of behaviors without providing specific implementations. Both high-level and low-level modules should depend on abstractions, and if the details of their implementation change, they should not affect the abstraction.

In other words, the PersonForm class can make any changes to the “clearAndEnterText” function, and it will have no effect on the “AddText” interface. For example, we could change the PersonForm class to have a log statement, but that won’t have any impact on the “AddText” interface:

class PersonForm implements AddText {
    clearAndEnterText(id: string, value: string) {
        driver.findElement(By.id(id)).clear().sendKeys(value)
        console.log('Element updated')
    }
}

This concludes my five-post examination of SOLID principles! I’d like to extend a special thank you to my colleague Monica Standifer, who helped me better understand the principles. I definitely learned a lot in this process, and I hope that you have found these simple examples that use methods commonly found in software testing to be helpful!

SOLID Principles for Testers: The Interface Segregation Principle

We’re over halfway done learning about the SOLID principles! Today it’s time to learn about the “I”: the Interface Segregation Principle.

In order to understand this principle, we first need to understand what an interface is. An interface is a definition of a set of methods that can be implemented by a class. Each class that implements the interface must use all of the methods included in the interface. Because the interface only defines the method signature (name, parameters, and return type), the methods can vary in each implementation.

The Interface Segregation Principle states that no class should be forced to depend on methods that it does not use. In order to understand this, let’s look at an example. Imagine you have a website to test that has two different forms: an Employee form and an Employer form. So you decide to create a Form interface that has methods for interacting with the various objects that can be found in a form:

interface Form {
    fillInTextField(id: string, value: string): void;
    selectRadioButton(id: string): void;
    checkCheckbox(id: string): void;
}

When you create an EmployeeForm class, you set it to implement the Form interface:

class EmployeeForm implements Form {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      driver.findElement(By.id(id)).click()
    }

    checkCheckbox(id: string): void {
      driver.findElement(By.id(id)).click()
    }
}

This works great, because the Employee form has text fields, radio buttons, and checkboxes.

Next, you create an EmployerForm class, which also implements the Form interface. But this form only has text fields and no radio buttons or checkboxes. So you implement the interface like this:

class EmployerForm implements Form {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      // no radio button
      throw new Error("No radio button exists");
    }

    checkCheckbox(id: string): void {
      // no checkbox
      throw new Error("No checkbox exists");
    }
}

You’ll never call the selectRadioButton and checkCheckbox methods in the EmployerForm class because there are no radio buttons or checkboxes in that form, but you need to create methods for them anyway because of the interface. This violates the Interface Segregation Principle.

So, how can you use interfaces with these forms without violating the principle? You can create separate interfaces for text fields, radio buttons, and checkboxes, like this:

interface TextField {
    fillInTextField(id: string, value: string): void;
}

interface RadioButton {
    selectRadioButton(id: string): void;
}

interface Checkbox {
    checkCheckbox(id: string): void;
}

Then when you create the EmployeeForm class you can implement the three interfaces like this:

class EmployeeForm implements TextField, RadioButton, Checkbox {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }

    selectRadioButton(id: string): void {
      driver.findElement(By.id(id)).click()
    }

    checkCheckbox(id: string): void {
      driver.findElement(By.id(id)).click()
    }
}

Now when you create the EmployerForm class, you only need to implement the TextField interface:

class EmployerForm implements TextField {
    fillInTextField(id: string, value: string): void {
      driver.findElement(By.id(id)).sendKeys(value)
    }
}
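
To see the benefit in action, here’s a small sketch (the fillInName helper and the 'name' id are hypothetical): any code that only needs text entry can accept either form through the TextField interface alone.

// Hypothetical helper: works with any form that implements TextField
function fillInName(form: TextField, value: string): void {
    form.fillInTextField('name', value)
}

fillInName(new EmployeeForm(), 'Amy Adams')  // works
fillInName(new EmployerForm(), 'Acme Corp')  // also works, with no unused methods required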

When each class implements only the methods it needs, classes become easier to maintain. And having smaller interfaces means that they can be used in a greater variety of scenarios. This is the benefit of the Interface Segregation Principle.

SOLID Principles for Testers: The Liskov Substitution Principle

It’s time to learn about the “L” in SOLID!  The Liskov Substitution Principle is named for Barbara Liskov, a computer scientist who introduced the concept in 1987.  The principle states that you should be able to replace objects in a superclass with objects of a subclass with no alterations in the program. 

In order to understand this, let’s use an example that is very familiar to testers: waiting for an element. Here’s a class called WaitForElement that has two methods, waitForElementToBeVisible and waitForElementToBeClickable:

// Assumes Selenium WebDriver, e.g. import { By, until } from 'selenium-webdriver',
// with a driver instance already created
class WaitForElement {
  constructor() {}
  waitForElementToBeVisible(locator) {
    // Returning the wait gives callers a WebElementPromise,
    // so element methods like click() can be chained directly
    return driver.wait(
      until.elementIsVisible(driver.findElement(locator)), 10000)
  }
  waitForElementToBeClickable(locator) {
    return driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
  }
}

This class could be used to locate all kinds of elements. Now imagine that the tester has created a class specifically for clicking on elements in dropdown lists that extends the existing WaitForElement class:

class WaitForDropdownSelection extends WaitForElement {
  constructor() {
    super()
  }
  async waitForElementToBeClickable(locator) {
    // Unlike the parent method, this override also clicks the element
    let selection = await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
    selection.click()
  }
}

If we were going to use the WaitForElement class to select a city from a dropdown list of cities, it would look like this:

let waitForInstance = new WaitForElement()
waitForInstance.waitForElementToBeVisible(By.id('cities')).click()
waitForInstance.waitForElementToBeClickable(By.id('New York')).click()

But if we were going to use the WaitForDropdownSelection class to select a city instead, it would look like this:

let waitForDropdownInstance = new WaitForDropdownSelection()
waitForDropdownInstance.waitForElementToBeVisible(By.id('cities')).click()
waitForDropdownInstance.waitForElementToBeClickable(By.id('New York'))

Do you see the difference?  When we use the waitForElementToBeClickable method in the WaitForDropdownSelection class, the method includes clicking on the element:

selection.click()

But when we use the waitForElementToBeClickable method in the WaitForElement class, the method does not include clicking. The subclass has changed the behavior of a superclass method, so we can’t substitute one class for the other without altering the calling code. This violates the Liskov Substitution Principle.

To fix the problem, we could update the waitForElementToBeClickable method in the WaitForDropdownSelection class so that it no longer includes the click() command, and then add a second method that waits and clicks:

class WaitForDropdownSelection extends WaitForElement {
  constructor() {
    super()
  }
  waitForElementToBeClickable(locator) {
    // Now behaves exactly like the parent method: wait, but don't click
    return driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
  }
  async waitForElementAndClick(locator) {
    let selection = await driver.wait(
      until.elementIsEnabled(driver.findElement(locator)), 10000)
    selection.click()
  }
}

We’ve now adjusted things so that the classes can be used interchangeably. 

With the WaitForElement class:

let waitForInstance = new WaitForElement()
waitForInstance.waitForElementToBeVisible(By.id('cities')).click()
waitForInstance.waitForElementToBeClickable(By.id('New York')).click()

With the WaitForDropdownSelection class:

let waitForDropdownInstance = new WaitForDropdownSelection()
waitForDropdownInstance.waitForElementToBeVisible(By.id('cities')).click()
waitForDropdownInstance.waitForElementToBeClickable(By.id('New York')).click()

Or we could use the new method in the WaitForDropdownSelection class instead:

let waitForDropdownInstance = new WaitForDropdownSelection()
waitForDropdownInstance.waitForElementToBeVisible(By.id('cities')).click()
waitForDropdownInstance.waitForElementAndClick(By.id('New York'))

Using extended classes is a great way to avoid duplicating code while adding new functionality. But when you extend a class, make sure that the subclass’s methods remain interchangeable with the parent’s.

SOLID Principles for Testers: The Open-Closed Principle

This month we are continuing our investigation of SOLID principles with the “O”: the Open-Closed Principle. This principle states the following: a class should be open for extension, but closed for modification.

What does this mean? It means that once a class is used by other code, you shouldn’t change the class. If you do change the class, you risk breaking the code that depends on the class. Instead, you should extend the class to add functionality.

Let’s see what this looks like with an example. We’ll use a Login class again, because software testers encounter login pages so frequently. Imagine that there’s a company with a number of different teams that all need to write UI test automation for their features. One of their test engineers, Joe, creates a Login class that anyone can use. It takes a username and password as variables and uses them to complete the login:

class Login {
    constructor(username, password) {
        this.username = username
        this.password = password
    }
    login() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
    }
}

Everybody sees that this class is useful, so they call it for their own tests.

Now imagine that a new feature has been added to the site, where customers can opt to include a challenge question in their login process. Joe wants to add the capability to handle this new feature:

class Login {
    constructor(username, password, answer) {
        this.username = username
        this.password = password
        this.answer = answer
    }
    login() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
    }
    loginWithChallenge() {
        driver.findElement(By.id('username'))
            .sendKeys(this.username)
        driver.findElement(By.id('password'))
            .sendKeys(this.password)
        driver.findElement(By.id('submit')).click()
        driver.findElement(By.id('answer'))
            .sendKeys(this.answer)
        driver.findElement(By.id('submitAnswer')).click()
    }
}

Notice that the Login class now expects a third parameter: an answer variable. If Joe makes this change, he risks breaking everyone’s tests, because they aren’t currently including that variable when they create an instance of the Login class. Joe won’t be very popular with the other testers now!

Instead, Joe should create a new class called LoginWithChallenge that extends the Login class, leaving the Login class unchanged:

class LoginWithChallenge extends Login {
    constructor(username, password, answer) {
        super(username, password)
        this.answer = answer
    }
    loginWithChallenge() {
        this.login()
        driver.findElement(By.id('answer'))
            .sendKeys(this.answer)
        driver.findElement(By.id('submitAnswer')).click()
    }
}
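
A quick usage sketch (with hypothetical credentials) shows both paths side by side:

// Existing tests keep working against the unchanged Login class
const login = new Login('amy.adams', 'secret123')
login.login()

// Tests for the new feature opt in to the extended class instead
const challengeLogin = new LoginWithChallenge('amy.adams', 'secret123', 'Rover')
challengeLogin.loginWithChallenge()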

Now the testers can continue to call the Login class without issues. And when they are ready to update their tests to use the new challenge question functionality, they can modify their tests to call the LoginWithChallenge class instead. The Login class was open to being extended but it was closed for modification.

SOLID Principles for Testers: The Single Responsibility Principle

Those who have been reading my blog for several years have probably figured out that when I want to learn something, I challenge myself to write a blog post about it. In 2020, I read one book on software testing each month and wrote a book review. In 2023, I learned about one Logical Fallacy each month, and wrote a post explaining it (which eventually turned into my book, Logical Fallacies for Testers).

For the next five months, I’ll be taking on a new challenge: learning about the SOLID principles of clean code. I’ve wanted to understand these for years, but I’ve always felt intimidated by the terminology (“Liskov Substitution” just sounds so complicated!). But I’m going to do my best to learn these principles, and explain them with examples that will be useful to software testers. So let’s begin with the Single Responsibility Principle, the “S” in SOLID.

The Single Responsibility Principle says that a class should have only one responsibility. Let’s take a look at this Login class:

class Login {
     constructor() {}
     login(username, password) {
          driver.findElement(By.id('username'))
               .sendKeys(username)
          driver.findElement(By.id('password'))
               .sendKeys(password)
          driver.findElement(By.id('submit')).click()
     }

     navigate(url) {
          driver.get(url)
     }
}

The Login class has two methods: login and navigate. For anyone writing test automation this appears to make sense, because login and navigation often happen at the beginning of a test. But logging in and navigating are actually two very different things.

Let’s imagine that multi-factor authentication is now an option for users of the application. The test automation engineer will now want to add a new login method that includes multi-factor auth. But what will the impact be on the navigation method? It could be that there is no impact at all, but if new package imports need to be added to the class, it’s possible that they could break the navigation method.

Any time a class changes, there’s a chance of breaking existing functionality. The test automation engineer will need to check every test that has navigation in addition to checking every test that has logging in.

This is why it’s best to give classes just one single responsibility. The navigation method should be moved to its own class. Then when new login methods are added, the navigation method will be completely unaffected. And it will also be possible to add new navigation methods to the navigation class without affecting the login methods!
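
Here’s a minimal sketch of that split, reusing the methods from the example above:

// Navigation now has a single responsibility...
class Navigation {
     navigate(url) {
          driver.get(url)
     }
}

// ...and so does Login
class Login {
     constructor() {}
     login(username, password) {
          driver.findElement(By.id('username'))
               .sendKeys(username)
          driver.findElement(By.id('password'))
               .sendKeys(password)
          driver.findElement(By.id('submit')).click()
     }
}

Now a change to the way logging in works can’t break navigation, and a new navigation method can’t break logging in.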