Automating CRUD Testing
There are a number of different patterns we can use to automate CRUD testing. At the very minimum, we want to test one operation of each: Create, Read, Update, and Delete. For the purposes of this discussion, let’s assume the following:
1. We are testing the simple form used in this post
2. We are doing UI automation (API automation will be discussed in a future post)
This is the pattern I like to use when testing CRUD. I’m writing the scenarios in SpecFlow/Cucumber syntax for simplicity:
Scenario: Adding a user
Given I am adding a new user
When I add a first name and save
Then I navigate to the Users page
And I verify that the first name is present
Scenario: Updating a user
Given I am updating a user
When I change the first name and save
Then I navigate to the Users page
And I verify that the first name has been updated
Scenario: Deleting a user
Given I am deleting a user
When I delete the first name and save
Then I navigate to the Users page
And I verify that the first name is not present
These three tests have tested Create, Update, and Delete. The first two tests are also testing Read, because we are retrieving the user for our assertions. Therefore, with these three tests I’m testing the basic functionality of CRUD.
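Under the hood, each Gherkin step is bound to a step definition. As a minimal sketch, here is what the verification step might look like in Java with Cucumber and Selenium; DriverFactory and TestData are hypothetical helpers standing in for however your framework manages the browser and shares data between steps:

import io.cucumber.java.en.Then;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class UserSteps {

    // DriverFactory is a hypothetical helper that supplies the shared browser session.
    private final WebDriver driver = DriverFactory.getDriver();

    @Then("I verify that the first name is present")
    public void iVerifyThatTheFirstNameIsPresent() {
        // TestData.firstName is a hypothetical holder for the name added earlier in the scenario.
        By userCell = By.xpath("//td[text()='" + TestData.firstName + "']");
        if (driver.findElements(userCell).isEmpty()) {
            throw new AssertionError("Expected first name was not found on the Users page");
        }
    }
}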
Some may argue that these tests are not idempotent. Idempotency means that a test can be run again and again with the same results. I can’t run the third test over and over again with the same results, for example, because after the first time I run it, the user is no longer there to delete.
If we wanted to solve the idempotency issue, we could write a test like this:
Scenario: CRUD testing of user
Given I am testing CRUD
When I add a first name and save
Then I verify that the first name is present
When I change the first name and save
Then I verify that the first name has been updated
When I delete the first name and save
Then I verify that the first name has been deleted
This one scenario tests all the CRUD functionality of the form. But it also has three different assertions. I prefer to keep my UI tests down to one or two assertions.
Looked at as a group, though, my original three scenarios are idempotent: together, they are responsible for creating and deleting their own data.
It would also be a good idea to test some negative scenarios with our CRUD testing, such as creating a user with an invalid first name, and updating a user with an invalid first name. These tests could look like this:
Scenario: Creating a user with an invalid first name
Given I am adding a new user
When I enter an invalid first name and save
Then I verify that I receive the appropriate error message on the page
And I navigate to the Users page
And I verify that the user has not been added
Scenario: Updating a user with an invalid first name
Given I am updating an existing user
When I update the first name with an invalid value and save
Then I verify that I receive the appropriate error message on the page
And I navigate to the Users page
And I verify that the existing first name has not been updated
The first scenario is idempotent, because nothing is actually being added to the database. The second scenario is also idempotent, but it requires an existing user. We could assume that our user will always be present in the database, but if someone inadvertently deletes it, our test will fail. In this case, it would be good to add in a Before step and an After step that will create the user at the beginning of the test suite and delete it when the suite is over.
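As a sketch of what those suite-level hooks could look like in Cucumber-JVM (version 7 or later), the code below creates and deletes the user once per suite; UserApi is a hypothetical helper that manipulates the user directly rather than through the UI:

import io.cucumber.java.AfterAll;
import io.cucumber.java.BeforeAll;

public class SuiteHooks {

    // Runs once, before the first scenario in the suite.
    @BeforeAll
    public static void createExistingUser() {
        // UserApi is a hypothetical helper that creates the user
        // directly, without going through the UI.
        UserApi.createUser("Edith");
    }

    // Runs once, after the last scenario in the suite.
    @AfterAll
    public static void deleteExistingUser() {
        UserApi.deleteUser("Edith");
    }
}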
These five scenarios (the three happy-path tests and the two negative tests) would make a great regression suite for our simple form. A form with just one field is not exactly a real-world scenario, but it is a good way to start thinking about automated UI test patterns.
CRUD Testing Part II- Update and Delete
In last week’s post, we started looking at CRUD testing. As you recall, CRUD stands for Create, Read, Update, and Delete. Last week we discussed testing Create and Read operations, and now we will continue by looking at Update and Delete.
In our discussion of the Read operation last week, I mentioned how important it is to test scenarios where the data in the database is invalid. This is also true for Update operations. Just because a text field is supposed to be required and have a certain number of characters doesn’t mean that’s how it is in the database!
Below is a matrix of testing scenarios for editing a text field. Remember from last week that our hypothetical text field has the following validation rules: it is a required field, it must have at least two characters, it must have 40 or fewer characters, and it should only have alphanumeric characters or hyphens and apostrophes; no other symbols are allowed. As with the Create operation, be sure to test that the newly edited field is correct in the UI and in the database after the update.
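Edit to the First Name field                     Expected result
Valid new name (e.g. Mary-Jo or O'Brien)         Saved; new value appears in the UI and the database
Two-character name (the minimum)                 Saved; new value appears in the UI and the database
40-character name (the maximum)                  Saved; new value appears in the UI and the database
Field cleared before saving                      Rejected with a required-field error; original value unchanged
One-character name (below the minimum)           Rejected with a length error; original value unchanged
41-character name (above the maximum)            Rejected with a length error; original value unchanged
Name with a disallowed symbol (e.g. Jo$)         Rejected with an invalid-character error; original value unchanged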
CRUD Testing Part I- Create and Read
In spite of its unappealing name, CRUD testing is extremely important! CRUD stands for Create, Read, Update, and Delete. As any tester knows, much of our testing involves these operations. Today we’ll discuss the best ways to test Create and Read.
The most important thing to know about testing CRUD is that it’s not enough just to rely on what you are seeing in your UI to confirm that a field’s value has been created or changed. This is because the UI will sometimes cache a value for more efficient loading in the browser. What you need to do to be absolutely sure that the value has changed is to check the database where your value is stored. So you’ll be confirming that your value is set in two places: the UI and the database. If you are doing API testing as well, you can actually confirm in three places, but we’ll save discussing API testing for another post.
Testing a Create Operation:
This text field looks similar to the one we looked at last week, but now we know what it does! This is a form to add a user. We’ll enter the first name of the user into the text field and click Submit. Next, we’ll take a look at the Users page of our imaginary application and verify that the new user is present:
And there it is! Finally, we need to query our database to make sure that the value has saved correctly there. In our imaginary database, this can be done by running
SELECT * FROM Users
This will give us a result that looks a lot like what we are seeing in the UI, so I won’t include a screenshot here.
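If you are automating this check, you can narrow the query to the specific user and assert on the result. Here is a minimal JDBC sketch; the connection string, credentials, and table schema are placeholders for our imaginary database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDbCheck {

    // Returns true if a user with the given first name exists in the Users table.
    public static boolean userExists(String firstName) throws SQLException {
        // The URL and credentials below are placeholders for our imaginary database.
        String url = "jdbc:postgresql://localhost:5432/appdb";
        try (Connection conn = DriverManager.getConnection(url, "tester", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT 1 FROM Users WHERE FirstName = ?")) {
            stmt.setString(1, firstName);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next(); // at least one matching row means the value was saved
            }
        }
    }
}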
To thoroughly test the Create function, you can use some of the same ideas that we talked about in last week’s post. Verify that valid entries of all kinds are saved to the database.
Testing a Read Operation:
We actually started testing the Read operation when we checked the Users page to verify that our new user was added. But there is something else that is important to test! We need to find out what happens when bad data is in the database and we are trying to view it in the UI.
Let’s take a look at what some bad data might look like in the database:
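Here is a representative slice of our imaginary Users table:

UserId    FirstName
1         David
2         NULL
3         ''
4         B
5         Annabelle-Annabelle-Annabelle-Annabelle-Annabelle
6         S@lly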
In our imaginary application, there are some constraints in the UI for the First Name field. It is a required field, it must have at least two characters, it must have 40 or fewer characters, and it should only have alphanumeric characters or hyphens and apostrophes; no other symbols are allowed. As we can see in our table, we’ve got lots of bad data:
- User 2 has no entry for First Name
- User 3 has an empty string for a First Name
- User 4 is violating the rule that the name must have at least two characters
- User 5 is violating the rule that the name must have 40 or fewer characters
- User 6 is violating the rule that only hyphens and apostrophes are allowed for symbols
Testing a Text Field
A text field in an application seems so ordinary, and yet it is one of the most important things we can test. Why? Because text fields provide an entryway into an application and its underlying database. Validation on a text field is what keeps lousy data out of the database, where it can cause all sorts of problems for end users and engineers. Validation can also prevent cross-site scripting attacks and SQL injection attacks.
There are a myriad of ways to test a text field, and I will be outlining several in this post. First, let’s imagine that we are testing the text field with absolutely no information about what it does:
- Click Submit without filling in the text field
- Press the space bar several times in the text field and then click Submit
- See how many characters you can fit in the text field and then click Submit (an excellent tool to count characters is https://lettercount.com)
- Fill the field with as many numbers as you can and then click Submit
- Add a negative sign, fill the field with as many numbers as you can and then click Submit
- Enter every non-alphanumeric character on the keyboard and click Submit. If you get an error, see if you can narrow down which character (or characters) is causing the error.
- Enter in non-ASCII characters and emojis and click Submit. If you get an error, see if you can narrow down which symbol (or symbols) is causing the error.
- Try cross-site scripting by entering this script: <script>alert("I hacked this!")</script> If you get a popup message on Submit, then you know the field is vulnerable to cross-site scripting.
- Try a SQL injection attack, such as FOO'); DROP TABLE USERS; -- (Don’t try this on your Production database!)
- Try putting in a value that is a different data type from what is expected; for example, if this text field is expecting a value of currency, try putting in a string or a date
- If the field is expecting a string, try putting in a string with one character fewer than the minimum, one character more than the maximum, exactly the minimum number of characters, exactly the maximum number of characters, and twice the maximum number of characters
- If the field is expecting a numeric value, try putting the maximum value, the minimum value, a value above the maximum, a value below the minimum, and a value twice the maximum value
- If the field is expecting an integer, try submitting a value with a decimal point
- If the field is expecting a float, try submitting a value with two decimal points and a value that begins with a decimal point
- If the field is expecting a value of currency, try submitting a value with more than two digits after the decimal point
- If the field is expecting a date, try putting in the maximum date, the minimum date, one day over the maximum date, one day before the minimum date, and a date one hundred years above or below the limit
- For date fields, try entering a date that doesn’t make sense, such as 6/31/17 or 13/13/17 (There are many more ways to test date fields; I’ll touch on this in a later post)
- If the field is expecting a time, try entering a time that doesn’t make sense, such as 25:15
- If the field is expecting a phone number, try entering a number that doesn’t conform to the expected format (There are many, MANY more ways to test phone numbers; I’ll touch on this in a later post as well)
Whatever the field is expecting, be sure to cover these basic cases:
- submitting a null value
- submitting an empty string
- submitting a value that meets the criteria (the “happy path”)
- submitting the maximum number of characters or maximum value
- submitting the minimum number of characters or minimum value
- submitting just above the maximum characters or value
- submitting just below the minimum characters or value
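To see how those cases translate into an automated check, here is a minimal Java sketch. It assumes a hypothetical field with the validation rules used elsewhere in these posts (required, two to 40 characters, alphanumeric plus hyphens and apostrophes); the validator itself is illustrative, not any particular library’s API:

import java.util.regex.Pattern;

public class FirstNameValidator {

    // Hypothetical rules: required, 2-40 characters,
    // alphanumeric plus hyphens and apostrophes only.
    private static final Pattern VALID = Pattern.compile("[A-Za-z0-9'-]{2,40}");

    public static boolean isValid(String input) {
        return input != null && VALID.matcher(input).matches();
    }

    public static void main(String[] args) {
        // Boundary and negative cases from the checklist above.
        String[][] cases = {
            { null, "null value" },
            { "", "empty string" },
            { "Jo", "happy path at the minimum length" },
            { "J", "just below the minimum" },
            { "A".repeat(40), "maximum length" },
            { "A".repeat(41), "just above the maximum" },
            { "Mary-Jo", "happy path with a hyphen" },
            { "O'Brien", "happy path with an apostrophe" },
            { "Jo$", "disallowed symbol" },
        };
        for (String[] c : cases) {
            System.out.printf("%-32s -> %s%n", c[1], isValid(c[0]) ? "accepted" : "rejected");
        }
    }
}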
Think Like a Tester
Beginning with this week’s post, my blog will be taking on a new focus!
I have renamed it from Fearless Automation to Think Like a Tester (for the moment, the URL will remain the same). There were three recent events that made me decide to shift my focus:
- I attended a large international computing conference where there was not a single workshop or presentation focused on software testing.
- At this conference, I met computer science students who asked me if there were any college classes to learn to be a tester.
- I interviewed a QA engineer who was able to create a great automated testing solution for a website, but could not think of simple manual tests for the site.
All of these things made me realize the following:
- There aren’t enough people talking about testing software
- There aren’t enough resources to learn about testing software
- The testing community has been focused for so long on how to test software that we haven’t been thinking about what to test and why we are testing it
Testing is truly a craft, and one that requires a different skill set from software development:
- Rather than thinking of ways to make software work, testers think of ways to make software break
- Rather than designing things to go right, testers think of all the ways that things can go wrong
- Rather than focusing deeply on one feature, testers focus on how all those features integrate
- Rather than solving a problem and moving on, testers come up with ways to continually verify that features are working
In the weeks and months to come, I will be getting back to basics and discussing all areas of software testing, manual and automated, that require thinking like a tester. Hopefully testing newbies and seasoned testers alike will find this knowledge helpful!
What the Sinking of the Vasa can Teach Us About Software Development
In Stockholm, Sweden, there is a museum that displays the ship called the Vasa, which sank on its maiden voyage in 1628. I’ve never been there, but I’ve heard that the museum is fascinating for both architectural and historical reasons. The Vasa took three years to build, and was supposed to be the flagship for Sweden’s growing navy. The ship was built with 72 guns on two decks, and was adorned with elaborately painted carvings to reflect its majesty.
On the day of its maiden voyage, in full view of thousands of people, including ambassadors from other countries, the ship sailed only 1400 yards before tilting, capsizing, and sinking. It was a calm day, but a simple gust of wind caused the ship to list too much to one side, and water began pouring in through the gunports. The primary reason for the loss of the Vasa was the simple fact that the ship’s center of gravity was too high. How did this crucial error happen? The answers can be helpful to us even now, nearly 400 years later!
Make sure you have solid, updated plans
The shipwright in charge of building the Vasa became seriously ill (and eventually died) in the beginning stages of the project. His assistant was in charge of completing the project, which had changed significantly since its inception. After the initial plans were drawn, the number of guns it was expected to carry doubled, and the length of the ship was increased from 111 feet to 135 feet. Yet the shipwright’s assistant never created a new set of plans to account for these changes.
Our lesson today: Working in an agile environment means the specifications for our software projects will frequently change. We need to be mindful of this, and remember to re-evaluate our plans and communicate them clearly to the entire team.
Communicate with other teams
Archeologists who have studied the Vasa and the remains of the wreckage discovered that one team of builders was using Swedish rulers, which had the modern-day 12 inches in a foot, while another team was using Amsterdam rulers, which only had 11 inches in a foot. This resulted in the ship’s mass being distributed unevenly, compounding the problem of the high center of gravity.
Our lesson today: Most of us don’t enjoy having meetings and writing documentation, but they can be crucial in making sure that we are all on the same page. We don’t want to waste time accidentally duplicating another team’s work, or using the wrong version of our tools.
Pay attention to your test results
Shortly before the Vasa’s first and final voyage, the captain supervising construction of the ship arranged for a demonstration of the ship’s stability. He had thirty men run back and forth across the deck. He stopped the test after the men had crossed the deck just three times, because the ship was rocking so much he feared it would capsize! Rather than conduct further tests, plans continued for the launch.
Our lesson today: Test results that don’t show us what we want to see can be disheartening, but to see a software release launch and fail feels even worse! It’s important that testers keep digging when we see results that are different from what we expected, and it’s important that we listen to what our testers are telling us, even when it’s bad news.
Learning about the Vasa made me marvel at just how much engineering principles have remained the same over hundreds of years. Even though our projects are built from code rather than timber, the fundamental principles of having solid plans, communicating with everyone in the project, and getting valuable feedback through testing are still crucial to creating a great product.
What “Passengers” Can Teach Us About Quality Assurance
Last weekend, I watched the movie Passengers. The basic plot of the movie is that two passengers in hibernation on a flight from Earth to another planet are awakened ninety years too early. As a QA engineer, I came away from the movie thinking about two valuable lessons for developing and testing software.
Lesson One: “And Yet It Did”
In Passengers, when Jim’s hibernation pod fails, he tells the ship’s computer, the android bartender, and even another human what has happened. The response of all three is “Well, that’s impossible. The hibernation pods never fail.” Jim’s response is “Then how do you explain the fact that I’m here?” Many times in my testing career I have been told by developers that the behavior I am observing in our software is impossible. And I always respond with “And yet, here is the behavior that I’m seeing”.

In one particular instance at a previous company, I was testing that information entered into the third-party software we integrated with was making it into our software. This testing was going well, until one entry didn’t travel to our software. I told the developer about it. He said, “That’s impossible. I’ve tested this, and you’ve been testing this for days.” I said, “Yes, and yet, this record wasn’t saved.” He said, “Look at the logs; you can see that the information was sent.” I said, “Yes, and yet, it wasn’t saved.” He said, “I entered more information just now, and you can see that it was saved.” I said, “Yes, and yet, the information I entered was not saved.” After much investigation, it was finally discovered that there was a bug where the database was not saving any record after the 199th record. Because I was testing in a different table than he was, and he didn’t have as many records, he didn’t see the error. The moral of the story: Even if something is impossible, it might still happen.
Lesson Two: “But What If It Did?”
One of the scariest parts of Passengers for me was that there was no way for Jim to reboot his hibernation pod and return to hibernation. Also, there were no spare pods. Even worse, there was no way for him to wake up the captain or any human who could help him. I found myself yelling at the screen, “How is this even possible? Why didn’t they put in contingency plans?” The answer, of course, is that the designers of the system were SO SURE that nothing could ever go wrong with the pods. But something did go wrong, and due to their false confidence there was no way to make it right. QA engineers are always thinking about all the possible ways that software can fail. I have often heard the argument “But no sane user would do that.” And I always respond with “But what if they did?” While we may not have time to account for every possible way that our software might fail, we should create appropriate responses for as many ways as we can, and log the rest for future fixes.
I like to think that somewhere on Earth in the Passengers universe, a QA engineer is saying to her product owners at the spaceship design company, “See, I TOLD you the pods could fail!”
Ask Your Way to Success
I asked a lot of stupid questions.
Most people are reluctant to ask questions, because they are afraid to look ignorant. But I maintain that the best way to learn anything quickly is to ask questions when you don’t understand what’s going on.
Here are six ways that asking questions improves your knowledge and the health of your company:
1. Questions give others an opportunity to help you, which helps them get to know you better and establishes a rapport. At my first official QA job, I was working with hotshot developers, all of whom were at least a decade and a half younger than me. It was embarrassing having to admit that I didn’t know how to reset a frozen iPhone or find the shared drive in File Explorer, but I asked those questions anyway, I remembered the answers, and I showed my co-workers that I was a fast learner.
2. Questions help developers discover things they may have missed. On countless occasions where a developer has been demonstrating a feature to me I’ll ask a question like “But what if there are no records for that user?”, or “What if GPS isn’t on?”, and they will suddenly realize that there is a use case they haven’t handled.
3. Questions keep everyone honest. I have worked with other QA engineers who bandy about terms like “back-end call” or “a different code path” without actually knowing what they are talking about. Asking them what they mean makes sure that they do the work to find out what they are actually testing. And when they get their answers, I get my answers as well.
4. Questions give you an opportunity to clear things up in your head. You may have heard the expression “Rubber duck debugging”; I think this method works well when you’re asking questions. I have found that sometimes just formulating the question out loud while I’m asking it clears things up for me.
5. Questions clarify expectations. Yes, sometimes I have felt dumb saying things like “You want me to test this on the latest build, right?”, but every now and then I discover that there’s been a miscommunication, and I’d much rather find out about it before I start testing rather than after I’ve been testing the wrong thing for an hour.
6. Questions clarify priorities. There have been many times where I’ve asked “Why are we adding this feature?” There is almost always a good reason, but the discussion helps the team understand what the business use case is, which helps the developers decide how to design their solution.
A caveat: Don’t ask questions that you can find the answers to by using a search engine (example: “How do I find the UDID of a device using iTunes?”) or by going back and reading your email (example: “What day did we decide on for code freeze?”). Asking these types of questions results in wasted time for everyone!
In summary, asking might make you feel silly in the short run, but it will make you and your team much smarter in the long run. And hopefully it will create an atmosphere in which others feel comfortable asking questions as well, improving teamwork for everyone!