Healthy Testing Habits

Anyone who has focused on improving their health has learned that health begins with good habits: taking care of one’s teeth, exercising regularly, eating a healthy diet, and so on. Quick fixes like diet pills and one jog around the block might provide temporary improvement, but for permanent results, healthy habits are the way to go.

It occurred to me recently that this is true for software quality as well! It’s not enough to splurge on the latest test case management system or adopt the newest test framework; real software quality is the result of healthy testing habits! Below are six healthy testing habits you’ll want your team to adopt.

Habit 1: Check your overnight tests and fix any failures

Many software testers create automated tests that are designed to run overnight. But how many testers take the time to check and fix any failing tests? Too often our nightly test runs become repositories of mediocrity; if most of the tests pass, then we assume everything is OK and dismiss the flaky tests. With a test suite like that, how will you be alerted to actual problems?

Make a commitment to having zero flaky tests. Check your overnight test runs in the morning and rerun any failures. If you notice that you are getting false failures in specific tests, fix those tests so they won’t be flaky. Then you will have overnight tests that your team can truly rely upon.
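
To make this concrete, here’s a minimal sketch of one common fix, using JavaScript and the selenium-webdriver package (the element id is invented for the example). Replacing a hard-coded sleep with an explicit wait means the test waits exactly as long as it needs to, instead of failing whenever the page is a little slow:
const { By, until } = require('selenium-webdriver');

// Flaky version: a fixed sleep that sometimes isn't long enough
// await driver.sleep(2000);
// const button = await driver.findElement(By.id('save-button'));

// More reliable version: wait explicitly for the element, up to a timeout
// (assumes an existing selenium-webdriver session named "driver")
const button = await driver.wait(
  until.elementLocated(By.id('save-button')), // poll until the element exists
  10000 // give up and fail after ten seconds
);
await button.click();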

Habit 2: Run unit and integration tests with every build

Hopefully the developers on your team have created unit tests for their code! When unit tests are run with every code commit, they provide incredibly fast feedback. Integration tests are also extremely helpful because they can alert the team to problems such as a lost connection to the database or to another team’s API.

Set up your build system so that every build will run your unit and integration tests and fail if any tests fail. You’ll be pleased to see how well this practice keeps bugs from escaping to production.
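
If your team uses Node.js, a build-time integration test might look something like this sketch, which assumes Jest, Supertest, and a hypothetical /contacts endpoint:
// contacts.integration.test.js - runs with every build via "npm test"
const request = require('supertest');
const app = require('../app'); // the Express app (this path is an assumption)

test('GET /contacts returns 200 and a list', async () => {
  // exercises the real route, which in turn exercises the database connection
  const response = await request(app).get('/contacts');
  expect(response.status).toBe(200);
  expect(Array.isArray(response.body)).toBe(true);
});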

Habit 3: Set up regular security checks

Security problems can pop up at unexpected times. Recently it’s become more common for malicious users to find and exploit package vulnerabilities. One way to avoid having vulnerable packages or other security holes in your code is to scan your code periodically. There are many automated scanning tools, both free and paid, available for this purpose. You can set your automated scanning tools to run once a week and alert you to any possible vulnerabilities. And don’t forget that you can also set up your own security automation to check for things like access control and security misconfiguration.
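
As one hedged example for a Node.js project, a weekly scheduled CI job could run a small script like this one, which parses the npm audit report and fails the job when high or critical vulnerabilities are found:
// weeklyAudit.js - a sketch; run it from a weekly CI scheduled job
const { execSync } = require('child_process');

let output;
try {
  // npm audit exits non-zero when vulnerabilities exist, so catch and keep the report
  output = execSync('npm audit --json').toString();
} catch (err) {
  output = err.stdout.toString();
}

const { vulnerabilities } = JSON.parse(output).metadata;
const serious = (vulnerabilities.high || 0) + (vulnerabilities.critical || 0);
if (serious > 0) {
  console.error(`npm audit found ${serious} high/critical vulnerabilities`);
  process.exit(1); // fail the job so the team is alerted
}
console.log('No high or critical vulnerabilities found');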

Habit 4: Run load tests before releasing

Do you know how your application behaves under load? Hopefully you have run load tests on your application at some point. But remember that every time there is a change to your application’s behavior (for example, adding a new API or altering a database query), the performance of your application may change. A healthy load testing habit involves load testing your application before you release any changes. If there is a significant slowdown in your app’s performance, you’ll want to investigate it and make whatever changes are needed.
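
If you’re not sure where to start, here’s a minimal load test sketch using k6, whose test scripts happen to be written in JavaScript (the URL and numbers are placeholders to adjust for your own app):
// loadTest.js - run with "k6 run loadTest.js"
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 50,          // simulate 50 concurrent virtual users
  duration: '2m',   // for two minutes
  thresholds: {
    // fail the run if the 95th-percentile response time reaches 500ms
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  http.get('https://your-app.example.com/api/contacts'); // placeholder endpoint
  sleep(1); // each virtual user pauses one second between requests
}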

Habit 5: Ping your APIs at regular intervals

Just because your API was working correctly last night, and the last time you deployed your software, doesn’t mean that it’s working correctly right now. All kinds of things can cause your API to be unresponsive. Perhaps someone in IT accidentally changed a firewall rule. Maybe a third-party API that your API relies upon has gone down. When things like this happen, you’ll want to find out about it before your customers do.

A great healthy habit is to set up a ping check on your APIs. You can set up a specific health endpoint that returns positive or negative results, or you can use an existing GET request. Whichever method you use, you’ll want to set your check to run every few minutes and alert you if something is wrong.
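
Here’s a sketch of what a dedicated health endpoint might look like in an Express application (the db.ping() dependency check is an invented placeholder):
// a health endpoint sketch for an Express app
app.get('/health', async (req, res) => {
  try {
    await db.ping(); // placeholder: substitute a real check on a critical dependency
    res.status(200).json({ status: 'ok' });
  } catch (err) {
    // any non-2xx status lets the ping check raise an alert
    res.status(503).json({ status: 'unavailable' });
  }
});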

Habit 6: Set up monitors and alerts

How many problems has your team encountered with your application that could have been prevented if you were alerted to them ahead of time? Perhaps you ran into an issue where a server had reached maximum CPU and stopped responding to requests. Or maybe a database was filled to capacity and started rejecting all additions of new data. Setting up monitoring and alerting means that you can be notified of a problem before it becomes a disaster and affects your customers.

It’s important to make sure that you aren’t alerting too often, though! If you and your team get too many alerts, you’ll begin to suffer from “alert fatigue” and stop paying attention to them entirely. Work with your team to determine exactly what you should alert on, and make sure that team members take turns responding to the alerts so that no one person bears all the responsibility.

It’s easy to see how healthy testing habits like these can greatly contribute to software quality! They aren’t as exciting as buying a new testing tool, but they are cheaper and more effective.

Getting Started With Accessibility Testing, Plus Two Easy Fixes

Web Accessibility means making a website easier to use and understand for people with visual, auditory, physical, or cognitive difficulties. Did you know that there are specific guidelines for how to make a website accessible? The guidelines are called the Web Content Accessibility Guidelines, or WCAG for short. The guidelines were created by the Web Accessibility Initiative, which is part of the World Wide Web Consortium (W3C). You can see all of the Accessibility Guidelines and learn how to meet them in this Quick Reference Guide.

When you first look at the guidelines, they can seem daunting because there are so many of them! But don’t despair: it’s really easy to get started with accessibility testing. In this post, I’ll show you how to check a web page for accessibility and how to make two quick fixes to your application’s code to make your page more accessible!

The easiest way to audit your website for accessibility is by using the WAVE tool. This extension is available for Chrome, Firefox, and Edge, and it’s completely free. To get the extension, simply go to https://wave.webaim.org/extension/ and click on the browser you’d like to use. Once the extension is installed, it will be available in your browser’s toolbar. To use the extension, begin by navigating to the page you’d like to check. Then click on the extension to turn it on. Instantly you will get a series of icons on your page that will show you where you are complying with WCAG and where you are in violation of the guidelines. If you click on the icons, you’ll get more information about what guideline is being violated or complied with, and there will be a link to the WCAG reference page and a link that will take you directly to your code.

Fixing accessibility violations is often very easy as well! Here are two easy fixes that anyone who is familiar with HTML can make:

Adding an alt-text to an image

When people with impaired vision use the Web, they usually rely on a screen reader. The screen reader tells them what elements are on the page. When there is an image on the page, if it doesn’t have an alt-text, the screen reader doesn’t know how to describe it. All that’s required is to add an alt-text that describes what’s in the image. Here’s an example of an image that’s missing an alt-text:
<img src="/img/thinkingTesterLogo.png">
And here’s an example of the same image with an alt-text added:
<img src="/img/thinkingTesterLogo.png" alt="Thinking Tester logo">

Adding a “for” attribute to a label
When a user who relies on a screen reader fills out a web form, they need to know what all of the input fields on the form represent. It’s not enough just to have a label; adding a “for” attribute that matches the input’s id makes it very clear what the purpose of the input field is. Here’s an example of an input with a label that is missing the “for” attribute:
<label>Email</label>
<input id="email" placeholder="Email">

And here’s what that same field would look like with the “for” attribute added (its value must match the input’s id):
<label for="email">Email</label>
<input id="email" placeholder="Email">

These simple changes have no impact on the visual aspects of the page, but they have a big impact on someone who relies on a screen reader! And they are so easy to do that anyone who can type and submit a pull request can make the changes. Many other fixes, such as making sure that your heading elements are in the proper order, are easy to do as well.

Over 7 million people in the United States regularly use a screen reader, and of course they are also used frequently throughout the world. With just a few simple changes, it’s possible to give people with visual impairments a much better user experience.

What Makes a Good Automated Test?

Recently I was meeting with some coworkers who were looking to improve our Continuous Delivery practices. They were thinking of ways to measure our progress with automated testing, and one of the first suggestions was to measure code coverage.

“Nope,” I said. “Measuring code coverage isn’t helpful because it doesn’t indicate whether we have good tests; only that the tests execute certain parts of the code.”

The next suggestion was measuring the lines of test code. “No,” I said. “That won’t help either. Just as the number of pages in a book doesn’t indicate how good it is, the number of lines of test code won’t indicate how good the tests are.”

“OK,” they said. “How about the number of tests? Surely that would indicate our progress.”

“Nope, that won’t do it either,” I responded. “Because you could have hundreds of tests, and every one of them could be unreliable or testing the wrong things.”

At this point, they asked: “Well, then what DOES indicate a good test?” The answer to that question is the topic of this blog post! Below are six indications that you have a good automated test.

1. It tests something important

It’s possible to create automated tests to do all kinds of things, but you want to make sure that what you are actually testing is worth it. Code of any type requires maintenance, so it’s not a good idea to create tests just for the sake of having them.

Here’s an example of something you might not want to automate: let’s say that when you are testing a new page in your application, you discover that the “Last Name” header is misspelled “Lsat Name”. You log a bug, the developer fixes the issue, and you verify the fix. What are the chances that bug will appear again? Probably fairly low; so it makes no sense to write an automated test to check the spelling.

2. It fails when it should

Test engineers new to automation often forget to check that their automated test fails when it’s supposed to. This happened to me when I first started writing automation in JavaScript. I wrote a test to check an uploaded record and was so happy that it was passing; then my developer asked me to check if it would fail if the record didn’t match. It didn’t fail! It turned out that I didn’t understand promises; I was actually validating that the promise existed rather than validating the value of the web element. Having tests that pass 100% of the time regardless of the circumstances might look good in a test report, but they provide no value whatsoever.
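
Here’s a simplified reconstruction of that mistake (not my original code; the assertions are Chai-style and the expected value is made up):
// Wrong: getText() returns a promise, and a promise object is always truthy,
// so this assertion passes no matter what text the element contains
const namePromise = element.getText();
expect(namePromise).to.be.ok;

// Right: await the resolved value, then assert on it
const nameText = await element.getText();
expect(nameText).to.equal('Amber Doe'); // now the test fails when the record doesn't match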

3. It’s reliable

A test should provide accurate information 100% of the time. This level of perfection is probably not attainable, but the creator of the test should strive for this ideal. Flaky tests should be fixed or eliminated. A flaky test cannot provide reliable information: did it fail because there’s a bug present, or because of the flakiness?

4. It’s maintainable

An automated test suite filled with spaghetti code will be an incredible chore to maintain. Every time a change is made to the feature code, the test suite will take hours or days to update. Test code should be clean code that makes sense to read, is well-organized, and doesn’t repeat itself.

5. It runs quickly

An automated regression suite that takes eight hours to run is not going to provide fast feedback to a development team. We want tests that run as quickly as possible to let the team know if there’s a problem. There are a lot of ways to speed up test automation; my favorite way is to run more API tests than UI tests. I like to have as few UI tests as possible, testing all of the feature logic through the API instead. Other ways to speed up automation include running tests in parallel, and running a test setup once before running the test suite instead of running it before every test.
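
For example, in Mocha (most frameworks have equivalent hooks), moving expensive setup from a beforeEach hook to a before hook runs it once per suite instead of once per test; the helper functions in this sketch are invented:
describe('contact API tests', function () {
  let token;

  // before() runs once for the whole suite...
  before(async function () {
    token = await logInAndGetToken(); // hypothetical, expensive setup
  });

  // ...whereas a beforeEach() here would repeat the login before every test
  it('gets the contact list', async function () {
    const response = await getContacts(token); // hypothetical helper
    expect(response.status).to.equal(200);
  });
});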

6. It runs at appropriate times

As software testers, we often want to test as much as possible, as often as possible. But this isn’t always the most efficient thing to do. Smoke tests are a good example of efficient testing. A smoke test is often used during a deployment. We want very few smoke tests so that we can get a clear indication immediately of how the deployment is behaving. It’s OK to run a longer suite of regression tests at some other time, perhaps overnight.

As you can see, determining whether a test is a good test is not simply a matter of looking at metrics. You need to be familiar with the product, understand what the product’s most critical features are, and be able to read the test for code quality and validity.

Creating a Quality Strategy

In my last post, I introduced the concept of the Quality Maturity Model: a series of behaviors that help teams attain various attributes of quality in their software. One of the things it’s important to note is that adopting a Quality Maturity Model requires the whole team to contribute to quality. Quality is not something to be thrown “over the wall” to testers; rather it is a goal that both developers and testers share.

But how can you get the whole team to own quality? One way is through the creation of a Quality Strategy. A Quality Strategy is a document that the whole team agrees on together. It’s like a contract that describes how quality software will be developed, tested, and released by the team. In this post, I’ll discuss some of the questions you may want to answer in your team’s Quality Strategy.

Creating and Grooming Stories:
How does the team decide what stories to work on? This could be a decision by the whole team, or by the product owner only. Or the prioritization could be done by someone outside the team.

Who grooms the stories to get them ready for development? This could be the whole team or a subset of the team. Ideally, you’d want to have at least the product owner, one developer, and one tester participating.

Development Process:
How does the team decide who works on which story? It could be that the developers can pick any story from the board, or it could be that developers each have stories in specific feature areas that they can choose from. On some teams, software testers work on simple development stories as well, such as changing words or colors on a webpage or adding in automation ids to make automation easier.

What does “Done” look like for the story? Is it measured by meeting all of the acceptance criteria in the story? Is the developer required to add unit tests before the story can be considered done? How do you know that a feature is ready for testing? On many teams, it’s expected that the developer will do some initial testing to verify that what they coded is ready for further testing.

Feature Handoff:
How will a story be handed off for testing? On some teams this is done by simply moving the story into the “Testing” column on the story board. On other teams, a more formal handoff ceremony is required, where the developer demonstrates the working story and provides suggestions for further testing.

Who deploys the code to the test environment? This seems like a trivial thing, but it can actually be the cause of many misunderstandings and much wasted time. If the developer thinks it’s the tester’s job to deploy the code to the test environment, and the tester assumes that the developer has done it, the tester could begin testing and not realize that the new code is missing until after they have spent a significant amount of time working with the application.

Who will be doing the testing? On some teams, developers can pick up simple testing stories to help improve product velocity, while the more complex stories are left for the testing experts.

Test Plan Creation:
Who will create the test plans? How will they be created? Where will the test plans be stored? Some teams might prefer to do ad-hoc exploratory testing with minimal documentation. Other teams might have elaborate test case management systems that document all the tests for the product. And there are many other options in between. Whatever you choose should be right for your team and right for your product.

Who will write the test automation? On some teams, the developers write the unit tests, and the testers write the API and UI tests. On other teams, the developers write the unit and API tests, and the testers create the UI tests. Even better is to have both the developers and the testers share the responsibility for creating and maintaining the API and UI tests. In this way, the developers can contribute their code management expertise, while the testers contribute their expertise in knowing what should be tested.

Who will be doing other types of testing, such as security, performance, accessibility, and user experience testing? Some larger companies may have dedicated security and performance engineers who take care of this testing. Small startups might have only one development team that needs to be in charge of everything.

Test Tools:
What tools will be used for manual and automated testing? Selecting test tools is very important when you want the whole team to own testing. Developers will most likely want to use tools that use the same language they are using for development, because this minimizes how much context switching they’ll need to do.

Test Maintenance:
Who is responsible for maintaining the tests? It’s amazing how fast test automation can become out of date. One word change on a page can mean a failed test. Ideally, a team should have a “you break it, you fix it” policy where tests are fixed by the person who checked in the code that broke them. If that’s not possible, at least make sure that everyone on the team understands how the tests work and how to fix them in a situation where a fix is needed quickly.

Bugs and Tech Debt:
How are bugs handled when found in testing? Are they discussed by the developer and the tester, triaged by the whole team, or logged on a backlog to be looked at later? It’s often a good idea to fix bugs as soon as they are found, because the developer is already working in that section of code.

How will the team deal with tech debt? Does the team have an agreement to take on a certain amount of tech debt per sprint? Some teams have a policy that when a developer has run out of stories to work on, they pick up tech debt items from the backlog.

Releases:
What kind of testing will you do before a release? Will there be a regression test plan that the whole team can execute together? How about exploratory testing? One high-performing team I know gets together for exploratory testing right before they release. Using this strategy, they’ve uncovered tricky bugs and fixed them before they were released to production.

How will the software be released? In some companies there is a release manager who takes care of executing the release. In other companies, the team members take turns releasing the software. One very helpful technique is Continuous Deployment, where the software is automatically deployed and tests automatically run to verify the deployment to each environment, saving everyone time and effort.

Maintenance:
How will you measure the success of the release? Once software has been released, it’s easy for development teams to forget about it; but this is the time the users begin working with it. What kinds of metrics might you use to measure how well your product is working? You could keep track of defects reported by customers, or look at logs for unexpected errors.

How will you monitor the health of your application? It would be a good idea to have alerts set up so that you can find out about problems with your application before your users do. What kinds of behaviors should you be looking for?

Quality Strategies can be as varied as snowflakes. Imagine the differences between a small startup of ten people who are making a mobile chat app and a company of twenty thousand people who are designing software that flies airplanes. These two companies will need very different strategies! You can design a Quality Strategy that works well for your team by discussing these questions together and drafting a strategy that you can all agree upon.

The Quality Maturity Model

One year ago, my company adopted something we call the “Quality Maturity Model”. We created the model to help teams measure how they are doing with behaviors that support creating quality applications. The project has been a big success, so I’ve decided to share some details about it with the world!

We started out by coming up with a definition of quality. Using this excellent blog post as a jumping-off point, we defined the seven Attributes of Quality at Paylocity.
A Quality application is:
Valuable: It meets the customer’s needs
Functional: It does what we say it does, and we can measure those interactions
Reliable: It is available when needed
Secure: It protects customer and company information
Performant: It responds within an acceptable time
Usable: It is easy and intuitive to use
Maintainable: It is easy to test, deploy, automate, monitor, update, and scale

After we had defined these attributes, we created a list of behaviors development teams could do that would help ensure those attributes were part of our products. For each of those behaviors, we determined what a minimum version of that behavior would look like, what a standard version would look like, and what excellent would look like. From there, we created the Quality Maturity Model.

Here are some examples of the behaviors defined in the Quality Maturity Model:
Valuable: Team identifies and investigates customer needs
Functional: Team creates, executes, monitors, and maintains reliable test automation
Reliable: Team actively monitors the health of their applications and takes appropriate action as needed
Secure: Team creates and adheres to a security strategy following security best practices
Performant: Team consistently meets SLO standards for their product
Usable: Team ensures the product is usable on multiple devices and supported browsers/platforms when applicable
Maintainable: Team manages and owns their deployments following the release management process

We rolled out the Quality Maturity Model to all the teams and asked them to identify which of these behaviors they were already demonstrating. From there, we asked teams to create quarterly goals to adopt more of the behaviors. Quality Leaders were each assigned a group of teams to meet with monthly to help answer any questions and hold teams accountable.

After a year of working on adoption of the model, we’ve made significant progress! Here are some examples:

A team committed to having the whole team own the test automation. The team works together to make sure that tests aren’t duplicated; for example, if a unit test already covers what’s needed to test a feature, there’s no need to write a UI test. This saves the team significant time in test creation and maintenance, freeing them up to focus on new features.

Another team made sure that the whole team knew how to use the UI automation framework. A developer was able to do a complete regression test on the UI work he was doing and fix all the bugs he found without involving anyone else on the team.

One tester on a team created a reusable test plan so the developers would be able to determine what to test. When both the testers on the team were on leave at the same time, the developers were able to carry on with feature development and testing with no problems.

A team was able to use the progress they made in test automation to speed up their release times from once a month to twice a month.

If you are looking for a way to enhance the quality of your product, minimize escaped defects, and speed up your delivery time, the Quality Maturity Model may be a great way to help. I recommend starting a discussion with the leaders at your company about what quality behaviors you’d like to see in your teams!

Software Tester or Lazy Developer?

Most software companies would agree that in order to release quality software in a timely fashion, good test automation is necessary. There’s simply not enough time to wait around for manual testers to do a complete pass of regression tests before releasing new features; progress would slow to a point that the company would not be competitive.

While it’s possible for software developers to write good automated tests, what’s even better is to have a software tester guide the automation projects to make sure that the right things are being tested, and to think of the edge cases that developers might not think of.

But there are many software testers out there who are not actually testers! They can write test automation, but they don’t know how to think like a tester. I call these folks “lazy developers”. They enjoy writing code, and they also enjoy not having to feel responsible for writing good feature code. They can quietly work on their automation projects without worrying about the quality of their code. Unfortunately, their poor automation code produces tests that don’t give the company the information it needs about the product, and that leads to poor-quality software.

How can you tell if you have a lazy developer rather than a software tester on your team? Here are five key signs:

  1. They are not particularly interested in thinking about what to test
    In story grooming sessions, the lazy developer doesn’t have any questions for the team. They don’t participate in the conversation, and they don’t point out any possible problem areas of new features or areas of regression for bug fixes. This can result in many missed opportunities to find bugs.
  2. They refuse to do any manual or exploratory testing
    The lazy developer thinks of themselves as an automation engineer and nothing else. They are insulted by suggestions that they do exploratory or manual testing on a new feature. Rather than get to know the product, they’d prefer to jump right into writing tests. Unfortunately, this means that they really don’t understand how the product works, and they may be automating the wrong things.
  3. They are protective of their code and don’t want to share it with others
    Real developers understand that the product code base is a shared code base. No one has exclusive claim to any section of the code. But lazy developers like to have their own little corner of code that no one else looks at. They’ve discovered that by writing automated tests, they can write code that the other developers won’t be interested in, so they won’t interfere with the lazy dev’s pet project. See my post on lone wolves for more information.
  4. They have no real interest in the quality of software releases
    The lazy developer doesn’t pay much attention when the team releases software. They are only interested in their automated tests. If the tests run and mostly pass, that’s good enough for them. They don’t make the connection between escaped defects and their tests, because they’re not writing tests to check the quality of the product; they’re writing tests because writing them is fun.
  5. They aren’t good at finding bugs
    The lazy developer isn’t good at finding bugs, because they don’t actually care about finding the bugs. If their automation finds a bug, that’s great; they feel like they’ve done their job. But if a bug can’t be found by their automation, they feel it’s probably not that important anyway.

Are YOU a lazy developer?

If any of the above descriptions sound like you, then you might be in the wrong career! Take some time to do the following soul-searching:

  • Pay attention to what you enjoy spending time on
    You can definitely be a good software tester and enjoy writing automation code! But if you find yourself sighing inwardly whenever you need to write a test plan, you might really be a developer.
  • Notice what kind of stories catch your interest
    When you are reading stories about software online, what gets you excited? Is it the story of an elusive bug that took days of searching to find, or is it a new coding language or an engineering challenge? If development stories excite you more than testing stories, you might be in the wrong role.
  • Are you envious of the developers on your team because it seems like they are having more fun?
    Ask your manager if you can do some simple development stories for the team. See if you actually enjoy working on those stories more than you enjoy testing them.
  • Is fear holding you back from becoming a developer?
    Perhaps you actually trained to become a software developer, but got stuck in the testing role and never got the courage to leave. Would you be willing to face your fear and take on some additional training? Would you be willing to ask for some coaching from a developer you trust?
  • Explore what it would take to become a developer at your company
    If you’ve decided that you are really a developer, talk to your manager. See if there is an established path at your company for making the transition from tester to developer. If there isn’t a path, look for someone at your company who made the transition and ask them how they did it.
  • Find a mentor
    Find a developer at your company who has good communication skills, and ask them if they would be willing to mentor you. Let them suggest projects that you can work on that will develop your skills and fill in any technical gaps.

I personally love software testing, and I want to see as many great testers out there as possible! We need people who care about quality so much that they are willing to go the extra mile. But we also need passionate developers. Our software teams will be more effective if everyone is doing a job that they love!

What I Learned When I Developed a Web App

Last week, I released a web application for people to use to practice UI and API automation. It took me about four months to develop, working on weeknights and weekends, and it was quite an adventure! Here are five lessons I learned while developing the app.

Lesson One: Software development is hard; really hard
You’ve probably heard this before, and probably from the developers you work with. But unless you’ve actually tried to develop an app yourself, you really don’t know how hard it is. I remember in my early days of software testing, when I would find bugs where field validations weren’t working correctly, and I would assume that the developer just didn’t care. But now that I know just how hard it is to get validations to even work at all, I am really sympathetic!

Lesson Two: We have product owners for a reason
Take a moment to appreciate your product owner this week; the work they do is more important than you realize. When I started to work on the front end of my application I didn’t think it would be very complicated because it only has a few simple screens. But as I was coding the pages, I kept coming up with questions about how each page should be connected to the others. Finally I had to stop what I was doing and sketch out wireframes. This is the kind of thing that product owners do for us so that we don’t have to!

Lesson Three: Even the best coding courses won’t teach you everything
As I’ve mentioned in previous posts, last year I took a really great Node.js course to prepare me for this project. I learned a ton about how to create and validate APIs, how to create a web page, how to work with a database, and how to publish an application. But there were some details missing about working with web pages and JavaScript files. Even though the course was comprehensive, it couldn’t teach me absolutely everything. I didn’t have time to take another course, so I wound up doing a lot of web searching.

Lesson Four: There are a lot of different ways to solve a problem
All that web searching brought me to my fourth realization: there are a lot of different ways to do things with code, and this makes software development very confusing. One of the things I struggled with was how to get authentication working on my site. I had it working in the back end (thanks to the Node.js course), but I had no idea how to get it working in the front end. My searches would bring up several different blog posts, each with a different solution. I’d start trying to follow one tutorial, and then hit a roadblock and decide that it wouldn’t work for me. Finally I found this post, which came the closest to meeting my needs.

Lesson Five: Testing and developing are different skill sets
There has been a lot of focus recently on making sure that the whole development team owns quality, and this is a really good thing! Test automation is more reliable and scalable when developers help with the automation frameworks and add new tests. But working on this project, it was so clear to me that testing and developing need really different mindsets. While I was developing the app, I kept thinking of all the different things I could do to test it, and I had to tell myself to stop thinking like that so I could focus and get the development work done! When I’d finished creating the API, I took a break and created all the tests I could think of. And when I finished the UI, I had a lot of fun doing exploratory testing on the app. There may be some people out there in the world who enjoy both developing and testing, but I’ve never met anyone who is truly passionate about both.

Developing my app made me understand the big picture of web applications better, and made me appreciate the developers and product owners I work with even more! If you have some time to spare, I recommend going on an adventure like this yourself.

New API and UI Test App!

I am writing a book on software testing, which I’m hoping to publish by the end of the year. In order to help my readers learn about manual testing, API testing, and API and UI automation, I decided I’d like to have an example application to accompany the book.

There are a lot of great practice sites out there for API testing and UI automation, but not a lot of sites that offer both, and some of those sites are a little too complex for someone learning the basics. So I decided to create my own!

The Contact List App is a simple web application that allows testers to create an account, log in, and add, edit, and delete a list of contacts. The web elements are easy to access, so it’s great for getting started with UI automation. The application includes an API for testers to practice GET, POST, PUT, PATCH, and DELETE operations.

This was my first time creating a complete application, and it was quite a learning experience! I’ll share some of the things I learned in a future post. I hope that you will find the application helpful as you hone your testing skills, and please email me with any feedback.

If you’d like to stay informed about my upcoming book, and perhaps even preview chapters, you can sign up for my mailing list here. In the meantime, enjoy the Contact List App!

HTTP Standards

Have you ever been testing an API and gotten involved in a dispute about what a response code should be?  Perhaps you witnessed a disagreement between two developers, or perhaps a developer was insisting the response code should be one thing and you thought it should be something different.  You might have gone to Stack Overflow to see what others think, and discovered people referencing something called an RFC.  

RFC stands for “Request for Comments”. The RFCs related to HTTP protocols are produced by the IETF (Internet Engineering Task Force), an open standards body that operates under the auspices of the Internet Society (ISOC), which consists of tens of thousands of members, a staff, and a board of trustees. The IETF produces the standards that most developers use when developing websites and web applications, and those standards are written up in the form of RFCs.

The first HTTP RFC was created in 1996, and RFC 2616, which defines the HTTP/1.1 response codes discussed below, was created in 1999. (It has since been superseded by newer RFCs, but its guidance on response codes remains a good starting point.) These RFCs are surprisingly easy to read! Let’s take a look at some of the contents of RFC 2616 and see how we can apply them to real-life scenarios.

Responses to POST requests:

In section 9.5, the RFC states “The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.” So when you are doing a POST that doesn’t create a new resource, such as a request that checks for the existence of a resource, you could use a 200 or a 204 response code. But if your POST is returning a response body, you must use the 200 response code rather than the 204 response code, because the 204 response code cannot include a response body.
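
Here’s a sketch of what that decision might look like in an Express route handler (the route and helper are hypothetical):
// a POST that performs an action but creates no new resource
app.post('/contacts/check', (req, res) => {
  const result = checkContactExists(req.body); // made-up helper
  if (result.details) {
    // we have an entity describing the result, so return it with a 200
    res.status(200).json(result.details);
  } else {
    // nothing to describe: a 204 must not include a response body
    res.sendStatus(204);
  }
});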

Responses to PUT requests:

In section 9.6, the RFC says “If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request.” So as with the POST request, if you are modifying a resource and not returning anything in the body of the response, you could use a 200 or a 204 response code. But if you were returning something in the response body, you’d definitely want to use a 200.

Responses where the user is not allowed access to a resource:

Section 10.4.4 says of a 403 “Forbidden” response code: “The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity.  If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.” So if your user does not have permission to view a resource, and you don’t mind if the user knows that the resource exists, then you can use a 403 response code. But if you don’t want the user to even know that the resource exists, because perhaps that might give a malicious user too much information, then you’d want to return a 404 so that all they’d know was that the resource was not found.
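
And here’s a sketch of that choice in an Express handler (again, the route and helpers are hypothetical):
app.get('/contacts/:id', (req, res) => {
  const contact = findContact(req.params.id); // made-up lookup
  if (!contact) {
    return res.sendStatus(404); // the resource genuinely doesn't exist
  }
  if (!req.user.canView(contact)) { // made-up permission check
    // pick one, depending on how much you want to reveal:
    return res.sendStatus(403); // "this exists, but you can't see it"
    // return res.sendStatus(404); // hide the resource's existence entirely
  }
  res.status(200).json(contact);
});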

Hopefully these examples have shown how useful the HTTP RFC documents can be. So the next time you are testing an API and you’re wondering if you’re seeing the right response code, try going to the source!

Your Lone Wolf Days Are Over!

The very first test automation job I had was for a company that had no QA Engineer before I was hired. I’d never done automation before, but I convinced the company that with my rudimentary knowledge of Java, I’d be able to figure it out. This was long before there were awesome online resources like Test Automation University, so it took me a long time and a lot of trial and error before I had automated tests that would run and pass. My tests were long, flaky, hard to maintain and filled with implicit waits and duplicated code, but they were my tests, and I had really enjoyed solving the problem of test automation on my own.

Then the company hired a new software developer, and our manager thought it would be a great idea for him to learn about our software by looking at my tests. Without consulting me, the new developer completely rewrote my tests. I was really annoyed, until I looked at the tests and saw that he had reorganized them using page object models and methods so that no code was duplicated. It would be so much easier to maintain the tests now! That week I learned that it’s always best to work with others rather than going it alone, because others often have skills or information that we don’t.
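
To illustrate the kind of reorganization he did, here’s a bare-bones page object sketch in JavaScript with selenium-webdriver (the page and its locators are invented). The locators live in one place, and tests call named methods instead of duplicating element-finding code:
// loginPage.js - all of the login page's locators and actions in one place
const { By } = require('selenium-webdriver');

class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }

  async logIn(username, password) {
    await this.driver.findElement(By.id('username')).sendKeys(username);
    await this.driver.findElement(By.id('password')).sendKeys(password);
    await this.driver.findElement(By.id('submit')).click();
  }
}

// a test now reads as intent:
const loginPage = new LoginPage(driver);
await loginPage.logIn('testUser', 'testPassword');

Now a word change on the login page means updating one file instead of dozens of tests.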

Software developers know this already, because they are required to collaborate with others through feature planning and code reviews. But often testers aren’t required to do this. Test automation code is just as important as production code because of the value it provides, and for that reason, we shouldn’t be lone wolves, even if we enjoy it! Here are some lone wolves that you may encounter or recognize in yourself.

The Gollum: In The Lord of the Rings, Gollum loved his ring so much he called it his “precious”. For the test automation Gollum, her automated tests are her “precious”. She’s worked long and hard at those tests and is very proud of them! Unfortunately, because she was the only one who worked on the tests, they make sense only to her. Nobody wants to help maintain the tests because they are so hard to understand. As a result, she is the only one who can fix the tests when they break, and therefore she has become the bottleneck for automated testing on her team.

How to stop being The Gollum: Share your test automation with others and get feedback on how they could be more useful and maintainable. Implement the feedback and repeat.

The Frank Sinatra: Just as the real Frank Sinatra sang about doing it “My Way”, the test automation Frank wants to do it all his way. He is convinced that he alone knows the right way to do test automation. And unsurprisingly, the right way is with his favorite tool! Every other tool falls short in his estimation, so he’s going to stick with his tool even if the rest of the company is using something different. As a result, he can’t collaborate and share ideas with testers on other teams, and his test automation never improves.

How to stop being The Frank Sinatra: Try some new test automation tools! You don’t have to love every single one, just give yourself enough time to really understand their strong points. You may be surprised at how much test tools have improved in recent years!

The Magpie: This species of bird is attracted to shiny objects. The test automation Magpie is attracted to new test automation frameworks! If it’s new, she wants to try it out, and she loves writing test automation from scratch. In her opinion, any problems with her existing automation must be problems with the framework, and the problems become an excuse to scrap the project and start over again with a new framework. This means her team never has a complete test suite that they can run and rely upon. It also means that the team has to keep up with all the framework changes, which will make them reluctant to contribute to the automation.

How to stop being The Magpie: Get input from your team about what framework would be best for your application, then stick with it. When you run into trouble, ask your team for help, or ask other test automation engineers. You don’t have to keep using the framework forever, but use it at least long enough to see it working in CI/CD.

The Hermit: The Hermit simply loves working alone. He’s very busy working on his automation all by himself, so he doesn’t want to take the time to explain what he’s doing to the rest of his team or to any other testers. He hates asking for help and sees it as a sign of weakness; he’d rather figure everything out on his own. As a result, his automation never improves, and no one at the company ever benefits from his expertise.

How to stop being The Hermit: Find a developer or test automation engineer that you admire and trust, and ask them for their opinion on your automation work. Implement some of their suggestions. Volunteer to lead a workshop on something you’re really good at. Try to help one person a week who is stuck on a thorny automation problem.

Software is a collaborative adventure! Building software is complicated: there are many facets of software quality to consider, all while delivering features that will put your company ahead of its competitors. That’s why we don’t have room for lone wolves anymore. Software testers need to work together to contribute to test automation projects that deliver fast, accurate results. And software testers need to work with software developers to make sure that quality goes both ways: we need quality production code and quality test code.