This week we are looking at my favorite type of automated tests: services tests. The reason I love services tests is that they test so much of the application without the hassle of the UI. There are many types of services, but the most widely used service is the REST API, and my favorite way to test a REST API is with Postman.
Easy Free Automation Part II: Component Tests
Last week, I started an eight-part series to demonstrate in a free and very easy way how to write automation for each test type in the Automation Test Wheel. This week, we’re taking a look at component tests.
As with every term in software testing, component tests mean different things to different people. I like to define a component test as a test for a service that an application is dependent on. For example, an application might need to make calls to a database, so I would have a component test that made a simple call to that database and verified that it received data in response. Another example of a component would be an API that the application doesn’t own. In this scenario, I would make a simple call to the API and verify that I got a 200-level response.
Coming up with a free and easy example of automated component tests was, unfortunately, not that easy! But I have created a very simple Node.js application, which you can download from GitHub here.
In order to run this application and have my two tests pass, you’ll need to have Node, npm, and MongoDB installed, and you’ll need to create a very simple Mongo database with just one item in it. I used most of the instructions in this really awesome tutorial by Chris Buecheler at CloseBrace.com to create this application. You can use my application along with the instructions in Part 3 of the tutorial to make your Mongo database. Or you can just clone my application and run it, with the understanding that one of the tests will fail. Or if this seems like too much work, you can just read on and look at my test code!
My extremely simple app is dependent on two things: my Mongo database, and an external API (the really great Restful-Booker API, which I’ll be using in next week’s blog). For my component tests, I want to test those two dependencies: that I can make a request to the database and get a positive response, and that I can make a call to the Restful-Booker API and get a positive response.
I am using Jest and Supertest for these tests. I have limited experience with Jest, but from what I have seen so far, it is very easy to set up. Supertest is a library that enhances JavaScript testing by making it easier to call APIs.
I’ve put my tests in a file called index.test.js, and this is what it looks like:
const request = require('supertest');

describe('Database Connection Test', () => {
  it('Returns a 200 with a call to the DB', async () => {
    const res = await request('http://localhost:3000')
      .get('/userlist')
      .expect(200);
  });
});

describe('Restful-Booker Connection Test', () => {
  it('Returns a 201 with a health check', async () => {
    const res = await request('https://restful-booker.herokuapp.com')
      .get('/ping')
      .expect(201);
  });
});
The first line of my file is invoking Supertest, so I will be able to do HTTP requests. Let’s take a look at each part of the first test so we can see what it’s doing:
- The "describe" section comprises the entire test, and 'Database Connection Test' is the title of the test.
- The "it" section is where the assertion is called, and 'Returns a 200 with a call to the DB' is the title of the assertion.
- The request('http://localhost:3000') and .get('/userlist') lines are where we are making a GET request to 'http://localhost:3000/userlist'.
- The .expect(200) line is where we are expecting that we will get a 200 response.
If you’d like to try to run the tests on your own, assuming you have cloned the application, you can do so with these commands:
- cd easyFreeComponentTests (this will move you to the directory where you cloned the application)
- npm install (this will install all the dependencies you need to run the application)
- npm start (this will start the application)
Then go to a new command-line window, cd to where the application is located, and run:
- npm test (this will run the tests)
That was a lot of information for just two component tests! But remember it will be easier to get started when there is already a project to test. How many tests you have in this area will depend on how many external systems your application is dependent on. It may be a good idea to create a “health check” that will run your component tests whenever your code is deployed. That way you will be alerted if there is any error when calling your external systems.
Next week, we’ll move on to my favorite test type: services tests!
Easy Free Automation Part I: Unit Tests
This post is the beginning of an eight-part series on easy, free ways to automate each area of the Automation Test Wheel. It’s been my experience that there are a number of barriers to learning test automation. First, the team you are on might not need certain types of automation. For example, my team has been solely API-focused, so for the last two years I haven’t had much reason to do UI automation. Second, your company may already have invested in specific automation tools, so when you want to learn to use a different tool, you need to do it on your own. Third, there are many tools that have barriers to using them, such as a high cost or a complicated setup. And finally, there is not always good documentation out there to help people get started.
In this series, I’m hoping to provide simple, free examples that will demonstrate each area of the Automation Test Wheel in practice, which you can use as a jumping-off point in your own automation journey. We’ll begin with unit tests.
Automation Wheel Strategy: Moving from What to How to When to Where
Last week, we talked about how I would decide what to test in a simple application in terms of testing every segment of the Automation Test Wheel. I find it’s very helpful to answer the question “What do I want to test?” before I think about how I’m going to test it. This week we’ll look at how to take the “What” of automated testing and continue on with how I want to test, when I want to test it, and where (what environment) I’m going to test it in. As a reminder, my hypothetical application is a simple web app called Contact List, which allows a user to add, edit, and delete their contacts.
- What do we want to test?
- How are we going to test it?
- When will we run our tests?
- Where will we run them?
The Automation Test Wheel in Practice
Last week’s blog post, “Rethinking the Pyramid: The Automation Test Wheel“, sparked many interesting discussions on LinkedIn, Twitter, and in the comments section of this blog! The general consensus was that the Test Pyramid is still useful because it reminds us that tests closest to the code are the fastest and most reliable to run, and that the Automation Test Wheel reminds us to make sure to include categories such as security, accessibility, and performance testing. Also, a reader pointed us to Abstracta’s Software Testing Wheel, which looks at the definition of quality from a number of different perspectives.
This week I’m talking about how to put the Automation Test Wheel into practice. Let’s imagine that I have a simple web app called Contact List. It allows a user to log in, view a list of their contacts, and add new contacts. I want to design a complete automation strategy for this application that will enable my team to deploy all the way up to production confidently. In order to feel confident about the quality of my application, I’ll want to be sure to include tests from every segment of the Automation Test Wheel.
Unit Tests: I will make sure that every function of my code has at least one unit test. I’ll run these tests using mock objects. For example, I will create a list of mock contacts and a mock new contact, add the new contact, and verify that the new contact has been added to the list of mock contacts. I’ll update a contact with new data and verify that the contact has been updated in the list. I’ll create a mock contact with invalid data and verify that attempting to add the contact results in an appropriate error. These are just some examples; for each function in my app, I’ll want to have several tests which exercise all possible code paths.
Component Tests: My application is very simple and relies on just one database. The database is used for both authentication and for retrieving the contact data. I will include one test for each function; I’ll send an authentication request for a valid user and verify that the user is authenticated, and I’ll make one request to the database to retrieve a known contact, and verify that the contact is retrieved.
Services Tests: My application has an API which allows me to do CRUD operations (Create, Read, Update, Delete) on my contacts. I have a GET endpoint which allows me to retrieve the list of contacts, and a GET endpoint which allows me to retrieve one specific contact. I have a POST endpoint which allows me to add a contact to the contact list. I have a PUT endpoint which allows me to update the data for an existing contact, and I have a DELETE endpoint which allows me to delete an existing contact. For each one of these endpoints, I will have a series of tests. The tests will include both happy paths and error paths. I’ll verify that in each request, the response code is correct and the response body is correct. For example, with the GET endpoint where I retrieve one contact, I’ll verify that a GET on an existing contact returns a 200 response and the correct data for the contact. I’ll also verify that a GET on a contact that doesn’t exist returns a 404 Not Found response.
User Interface (UI) Tests: This is where I will be testing in the browser, doing activities that a real user would do. A real user will want to fetch their list of contacts, add a new contact, update an existing contact, and delete a contact. I will have one test for each of these activities, and each test will have a series of assertions. To take one example, when I add a new contact, I will navigate to the new contact page, fill in all the form fields, and click the Save button. Then I will navigate to the list page and verify that my new contact appears on the page.
Visual Tests: This is where I will verify that elements are actually appearing on the page the way I want them to. I will navigate to the list page and verify that all of the columns are appearing on the page. I will navigate to the add contact page and verify that all of the form fields and their labels are appearing appropriately on the page. I will trigger all possible error messages (such as the one I would receive if I entered an invalid zip code), and verify that the error appears correctly on the screen. And I will verify that all of the buttons needed to use the application are rendering correctly.
Security Tests: I will run security tests at both the Services layer and the UI layer. I will test the API operations relating to authenticating a user, verifying that only a user with the correct credentials will be authenticated. I will test every request endpoint to make sure that only those requests with a valid token are executing; requests without a valid token should return a 401. For the UI layer, I will conduct a series of login tests that validate that only a user with correct credentials is logged in, and I will verify that I cannot navigate to the list page or the add contact page without being logged in.
Performance Tests: I will set benchmarks for both the server response time and the web page load time. To measure the server response, I will add assertions to my existing Services tests that will verify that the response was returned within that benchmark. To measure the web page load time, I will run a UI test that will load each page and assert that the page was loaded within the benchmark time.
Accessibility Tests: I want to make sure that my application can be used by those with visual difficulties. So I will run a set of UI and Visual tests on each page where I validate that I can zoom in and out on the text and that scroll bars appear and disappear depending on whether they are needed. For example, if I zoom in on the contact list I will now need a vertical scrollbar, because some of the contacts will now be off the page.
With this series of automated tests, I will feel confident that I’ll be able to deploy changes to my application and discover any problems quickly.
I’ve received a few questions over the last week about what percentage of total tests each of the spokes in the Automation Test Wheel should have. The answer will always be “It depends”. It will depend on these and many other considerations:
- How many other services does your application depend on? If it depends on many external services, you’ll need more Component tests.
- How complicated is your UI? If it has just a page or two, you’ll need fewer UI and Visual tests. If it has several pages with many images, you’ll need more UI and Visual tests.
- How complicated is your data structure? If you are dealing with large data objects, you’ll need more Services tests to validate that CRUD operations are being handled correctly.
- How secure does your application need to be? An application that handles personal banking will need many more Security tests than an application that saves pictures of kittens.
- How performant does your application need to be? A solitaire game doesn’t need to be as responsive as a heart monitor.
Rethinking the Pyramid: The Automation Test Wheel
Anyone who has spent time working on test automation has likely heard of the Test Automation Pyramid. The pyramid is typically made of three horizontal sections: UI Tests, API Tests, and Unit Tests. The bottom section is the widest section, and is for the unit tests. The idea is that there should be more unit tests run than any other kind of tests. The middle section is for the API tests, and the idea is that fewer API tests should be run than unit tests. Finally, the top section is for the UI tests, and the idea is that the least number of tests run should be UI tests, because they take the most time and are the most fragile.
In my opinion there are two things wrong with the pyramid: it leaves out many types of automated tests, and it assumes that the number of tests is the best indicator of appropriate test coverage. I propose a new way of thinking about automated testing: the Automation Test Wheel.
Each of these test types can be considered as spokes in a wheel; none is more important than another, and they are all necessary. The size of each section of the wheel does not indicate the quantity of the tests to be automated; each test type should have the number of tests that are needed in order to verify quality in that area. Let’s take a look at each test type.
Unit Tests: A unit test is the smallest automated test possible. It tests the behavior of just one function or method. For example, if I had a method that tested whether a number was zero, I could write these unit tests:
- A test that passes a zero to the method and validates that it is identified as a zero
- A test that passes a one to the method and validates that it is identified as non-zero
- A test that passes a string to the method and validates that the appropriate exception is thrown
You may have noticed that the above descriptions often overlap each other. For example, security tests might be run through API testing, and visual tests might be run through UI testing. What is important here is that each area is tested thoroughly, efficiently, and accurately. If there is a spoke missing from the wheel, you will never be comfortable relying on your automation when you are doing continuous deployment.
Next week, I’ll discuss how we can fit all these tests into a real-world application testing scenario!
Fifteen Free Tools to Help With Testing
There are a great many articles, blog posts, and presentations that discuss automation frameworks and strategies. But even the most robust automation framework won’t eliminate the need to do exploratory testing. There will always be situations where we need to generate a large amount of text to test a text field or where we need to encode a string in HTML to test for cross-site scripting. In this week’s post, I share fifteen of my favorite free tools that make testing faster and easier.
Six Steps to Writing an Effective Test Report
As testers, we know how important it is to test our software thoroughly and document our findings meticulously. But all of our talent will be useless if we can’t effectively communicate our test results to others! If your test results are written in a giant, poorly organized spreadsheet with tiny text and lots of unnecessary details, even the most dedicated test manager will find her eyes glazing over with boredom when she looks at it. In this post, I’ll describe six steps to take to make sure that you can communicate your findings to others efficiently and effectively.
Rules | Amy | Bob | Carol | Doug |
Amy-small blue balls; Doug- large green balls | small blue ball; small blue ball; small blue ball; small blue ball | large red ball; small orange ball; large yellow ball; small purple ball | large purple ball; small green ball; large yellow ball; small red ball | large green ball; large green ball; large green ball; large green ball |
Bob- large red balls; Carol- small yellow balls | large orange ball; small purple ball; large yellow ball; small green ball | large red ball; large red ball; large red ball; large red ball; large red ball | small yellow ball; small yellow ball; small yellow ball; small yellow ball | small blue ball; small green ball; large purple ball; small orange ball |
Rules | Rules respected? |
Amy-small blue balls; Doug- large green balls | Yes |
Bob- large red balls; Carol- small yellow balls | Yes |
Number of Rules | Pass/Fail |
0 | |
1 | |
2 | |
3 | |
4 | |
Test Case | Result |
None of the children have rules | The balls are sorted evenly amongst the children |
One child has a rule | The child’s rule is respected |
Two children have rules | The two children’s rules are respected |
Three children have rules | The three children’s rules are respected |
Four children have rules | None of the balls are sorted |
Rules | Rules respected? |
Amy- small blue; Bob- large blue; Carol- small purple | Amy gets only small blue balls, and Bob gets only large blue balls, but Carol gets balls other than the small purple balls |
Amy- large blue; Bob- small purple; Carol- small yellow | Amy gets only large blue balls, Bob gets only small purple balls, and Carol gets only small yellow balls |
Rules | Amy | Bob | Carol |
Amy- small blue; Bob- large blue; Carol- small purple | PASS | PASS | FAIL |
Amy- large blue; Bob- small purple; Carol- small yellow | PASS | PASS | PASS |
Rules | Result |
A-SB; B-LO; C-L; D-S | A-Y; B-Y; C-Y; D-N |
A-L; B-S; C-Y; D-P | A-Y; B-N; C-Y; D-Y |
A-LY; B-L; C-S; D-SG | A-Y; B-Y; C-N; D-Y |
This report conveys exactly the same information:
Test One | Amy- small blue | Bob- large orange | Carol- large | Doug- small |
Rule respected? | Yes | Yes | Yes | No |
Test Two | Amy- large | Bob- small | Carol- yellow | Doug- purple |
Rule respected? | Yes | No | Yes | Yes |
Test Three | Amy- large yellow | Bob- large | Carol- small | Doug- small green |
Rule respected? | Yes | Yes | No | Yes |
It’s easy to see exactly what rules each child was given for each test. Through the use of color, the report demonstrates very clearly where the bug is: whenever a child is given a rule that they should get only small balls, that rule is not respected.
Conclusion:
In today’s fast-paced world, we all have vast amounts of information coming to us every day. If we are going to make a difference with our testing and influence decision-making where we work, we need to be able to convey our test results in ways that clearly show what is going on with our software and what should be done to improve it.
How to Reproduce a Bug
Have you ever seen something wrong in your application, but you haven’t been able to reproduce it? Has a customer ever reported a bug with a scenario that you just couldn’t recreate? It is tempting to just forget about these bugs, but chances are if one person has seen the issue, other people will see it as well. In this post I’ll discuss some helpful hints for reproducing bugs and getting to the root cause of issues.
Gather Information
The first thing to do when you have a bug to reproduce is to gather as much information as you can about the circumstances of the issue. If it’s a bug that you just noticed, think about the steps that you took before the bug appeared. If it’s a bug that someone else has reported, find out what they remember about the steps they took, and ask for details such as their operating system, browser, and browser version.
One Step at a Time
Next, see if you can follow the steps that you or the reporter of the bug took. If you can reproduce the bug right away, you’re in luck! If not, try making one change at a time to your steps, and see if the bug will appear. Don’t just thrash around trying to reproduce the issue quickly; if you’re making lots of disorganized changes, you might reproduce the bug and not know how you did it. Keep track of the changes you made so you know what you’ve tried and what you haven’t tried yet.
Logs and Developer Tools
Application logs and browser developer tools can be helpful in providing clues to what is going on behind the scenes in an application. A browser’s developer tools can generally be accessed in the menu found in the top right corner of the browser; in Chrome, for example, you click on the three-dot menu, then choose “More Tools”, then “Developer Tools”. This will open a window at the bottom of the page where you can find information such as errors logged in the console or what network requests were made.
Factors to Consider When Trying to Reproduce a Bug
- User- what user was utilized when the bug was seen? Did the user have a specific permission level? You may be dealing with a bug that is only seen by administrators or by a certain type of customer.
- Authentication- was the user logged in? There may be a bug that only appears when the user is not authenticated, or that only appears when the user is authenticated.
- State of the data- what kind of data does the user have? Can you try reproducing with exactly the same data? The bug might only appear when a user has a very long last name, or a particular type of image file.
- Configuration issues- There may be something in the application that isn’t set up properly. For example, a user who isn’t getting an email notification might not be getting it because email functionality is turned off for their account. Check all of the configurable settings in the application and try to reproduce the issue with exactly the same configuration.
- Actions taken before the issue- Sometimes bugs are caused not by the current state where they are seen, but by some event that happened before the current state. For example, if a user started an action that used a lot of memory, such as downloading a very large file, and then continued on to other activities while the file was downloading, an error caused by running out of memory might affect their current activity.
- Back button- the Back button can be the culprit in all kinds of mysterious bugs. If a user navigates to a page through the Back button, the state of the data on the page might be different from what it would be through standard forward navigation.
- Caching- caching can result in unexpected behavior as well. For example, it might appear as if data is unchanged when it in fact has been changed. If a cache never expires or takes too long to expire, the state of the data can be very different from what is displayed.
- Race conditions- these issues are very difficult to pin down. Stepping through the code probably won’t help, because when the program is run one step at a time the problem doesn’t occur. The best way to determine if there is a race condition is to run your tests several times and document the inconsistent behavior. You can also try clicking on buttons or links before a page has loaded in order to speed up input, or throttling your internet connection in order to slow down input.
- Browser/Browser Version- Browsers are definitely more consistent in their behavior than they used to be, and most browsers are now updated to the latest version automatically, but it’s still possible to find bugs that only appear in certain browsers or versions. If your end user is using IE 8 on an old Windows desktop, for example, it’s extremely likely that your application will behave differently for them.
- Browser Size- if a customer is saying that they don’t see a Save button or a scrollbar in their browser window, ask them to run a browser size tool in another tab on their browser. Set your browser to be the same size as theirs, and see if you now have the same problem.
- Machine or Device- Mobile devices are highly variable, so it’s possible that a user is seeing an issue on an Android device that you are not seeing on an iOS device, or that a user is seeing a problem on a Motorola device when you are not seeing it on a Samsung. Laptops and desktop computers are less variable, but it is still possible that there is an issue that a Mac owner is experiencing that you don’t see in Windows. Tools like BrowserStack, Perfecto, and Sauce Labs can be helpful for diagnosing an issue on a machine or device that you don’t own.
What to Test When There’s Not Enough Time to Test
In last week’s post, I discussed the various things we should remember to test before we consider our testing “done”. This prompted a question from a reader: “How can I test all these things when there is very limited time for testing?” In today’s agile world, we often don’t have as much time as we feel we need to fully test our software’s features. Gone are the days when testers had weeks or months to test the upcoming release. Because software projects usually take longer than estimated, we may be asked to test things at the last minute, just a day or two before the release. Today I’ll discuss what to test when there’s not enough time to test, and I’ll also suggest some tips to avoid this problem in the first place.
The Bare Minimum: What to Test When There’s Almost No Time
Let’s use our hypothetical Superball Sorter as an example. For those who haven’t read my series of posts on this feature, the feature takes a number of superballs and sorts them among four children using a set of defined rules. What would I do if I was asked to test this feature for the first time, and it was due to be released tomorrow?
1. Test the most basic case
The first thing I would do would be to test the most basic use case of the feature. In this case, it would be running the Superball Sorter with no rules at all. I would test this first because it would give me a very clear indication whether the feature was working at all. If it wasn’t, I could raise the alarm right away, giving the developer more time to fix it.
2. Test the most typical customer scenario
In the case of the Superball Sorter, let’s say that we’ve been told by the product owner that in the most typical scenario, two of the children will be assigned a rule, and the rule will be by size rather than color. So the next test I would run would be to assign one child a rule that she only accepts large balls, and another child a rule that he only accepts small balls. I would run the sorter with these rules and make sure that the rules were respected.
3. Run a basic negative test
We all know how frustrating it can be to make a mistake when we try to do an activity online, such as filling out a form, and we have an error on the page but we haven’t been given any clear message about what it is. So the next thing I would test would be to make a common mistake that a user would make and ensure that I got an appropriate error message. For the Superball Sorter, I would set four rules that resulted in some balls not being able to be sorted, and I would verify that I got an error message that told me this was the case.
4. Test with different users or accounts
Just because something is working correctly for one user or account doesn’t mean it’s going to work correctly for everyone! Developers sometimes check their work with only one test user if they are in a big hurry to deliver their feature. So I would make sure to run the Superball Sorter with at least two users, and I would make sure that those users were different from the one the developer used.
After running these four tests, I would be able to say with some certainty that:
- the feature works at its most basic level
- a typical customer scenario will work correctly
- the customer will be notified if there is an error