API Testing vs. UI Testing

Recently someone asked me “If you have API testing, you don’t need UI testing, right?”  I said “No, because you need to have tests that make sure that elements such as buttons are present and working correctly.”  

Then he asked, “Then if you have UI testing, you don’t need API testing?”  I said, “No, because UI tests tend to be slow and flaky.  You can get more tested in less time with API testing.”

Inspired by that conversation, I thought I’d share my thoughts on when you should do API testing and when you should do UI testing.

First, test as much as you can with API testing.  Take a look at all of your possible endpoints and  create a suite of tests for each.  Be sure to test both the happy path and the possible error paths.  On every test, assert that you are getting the correct response code.  For GET requests, assert that you receive the correct results.  If there are filtering parameters you can pass in with the request, be sure to test scenarios with and without those parameters.  For POST, PUT, and PATCH tests, test that the changes you made have been written to the database; you can do this with a GET.  Be sure to test scenarios where you are entering invalid data; assert that any message returned in the body of the response is the correct message.  For DELETE requests, test that the resource has been deleted from the database; this can be verified with a GET. 
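
To make this concrete, here’s a minimal sketch of what two of these tests could look like in Java, using REST Assured and JUnit (just one of many possible tool choices).  The /addresses endpoint, its fields, and the error message are all hypothetical- substitute the details of your own API.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.everyItem;

import org.junit.Test;

public class AddressApiTest {

    // Hypothetical base URL; point this at your own service
    private static final String BASE = "https://api.example.com";

    @Test
    public void getAddressesFilteredByCityReturnsOnlyThatCity() {
        // Happy path: correct response code and correct results for a filter parameter
        given()
            .baseUri(BASE)
            .queryParam("city", "Springfield")
        .when()
            .get("/addresses")
        .then()
            .statusCode(200)
            .body("addresses.city", everyItem(equalTo("Springfield")));
    }

    @Test
    public void postAddressWithInvalidZipReturns400AndMessage() {
        // Error path: invalid data should be rejected with the correct response code
        // and message; a follow-up GET (not shown) would confirm nothing was written
        given()
            .baseUri(BASE)
            .contentType("application/json")
            .body("{\"street\": \"10 Main St\", \"city\": \"Springfield\", \"zip\": \"not-a-zip\"}")
        .when()
            .post("/addresses")
        .then()
            .statusCode(400)
            .body("message", equalTo("zip must be a five-digit number"));
    }
}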

Once you have tested all the scenarios you can think of with API testing, then it’s time to think about UI testing.  First consider your most common user story.  For example, if you are testing an address book, the most likely scenario for a user would be adding in a new address.  You could create a UI test that would navigate to the address book, click a button to add a new address, add the address, save it, and then search the address book to verify that it has been saved.  
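
As a rough sketch, that scenario might look something like this in Java with WebDriver.  The URL and every element id below are hypothetical, and a real suite would probably use page objects, but it shows the shape of the test:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AddAddressUiTest {

    @Test
    public void newAddressCanBeAddedAndFound() {
        // Assumes a local chromedriver on the PATH
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/address-book");

            // Add and save a new address
            driver.findElement(By.id("add-address")).click();
            driver.findElement(By.id("name")).sendKeys("Jane Tester");
            driver.findElement(By.id("street")).sendKeys("10 Main St");
            driver.findElement(By.id("city")).sendKeys("Springfield");
            driver.findElement(By.id("save")).click();

            // Search the address book and verify the new entry was saved
            driver.findElement(By.id("search")).sendKeys("Jane Tester");
            driver.findElement(By.id("search-button")).click();
            String results = driver.findElement(By.id("search-results")).getText();
            assertTrue(results.contains("Jane Tester"));
        } finally {
            driver.quit();
        }
    }
}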

Now that your most common user story has been added, you have probably touched a number of the elements that you would want to verify in your UI.  Next, think about other elements on the page that you might want to check.  For instance, there may be a cancel button on the page where you are adding a new address.  A cancel button cannot be tested with an API test; therefore, you should add in a UI test for it.  Another example would be an error message that appears to the user; you may want to add in a test where you try to add an invalid address, and verify that the correct error message is displayed.  

Once you have tests that verify all of the important elements on your page, you can stop writing UI tests.  It’s not necessary to create lots of scenarios where each field is validated for various incorrect entries, because a) you already created those scenarios in your API tests, and b) you already have one UI test that verifies that the error message is displayed.  

If you already have an automated suite of UI tests, it may be a good idea to take a look at your tests and see which scenarios could be covered by API testing.  Converting your UI tests to API tests will make your regression suites faster and more reliable!

What the Sinking of the Vasa can Teach Us About Software Development

In Stockholm, Sweden, there is a museum that displays the ship called the Vasa, which sank on its maiden voyage in 1628.  I’ve never been there, but I’ve heard that the museum is fascinating for both architectural and historical reasons.  The Vasa took three years to build, and was supposed to be the flagship for Sweden’s growing navy.  The ship was built with 72 guns on two decks, and was adorned with elaborately painted carvings to reflect its majesty. 

On the day of its maiden voyage, in full view of thousands of people, including ambassadors from other countries, the ship sailed only 1400 yards before tilting, capsizing, and sinking.  It was a calm day, but a simple gust of wind caused the ship to list too much to one side, and water began pouring in through the gunports.  The primary reason for the loss of the Vasa was the simple fact that the ship’s center of gravity was too high.  How did this crucial error happen?  The answers can be helpful to us even 400 years later!

Make sure you have solid, updated plans

The shipwright in charge of building the Vasa became seriously ill (and eventually died) in the beginning stages of the project.  His assistant was in charge of completing the project, which had changed significantly since its inception.  After the initial plans were drawn, the number of guns it was expected to carry doubled, and the length of the ship was increased from 111 feet to 135 feet.  Yet the shipwright’s assistant never created a new set of plans to account for these changes.
Our lesson today: Working in an agile environment means the specifications for our software projects will frequently change.  We need to be mindful of this, and remember to re-evaluate our plans and communicate them clearly to the entire team. 

Communicate with other teams

Archeologists who have studied the Vasa and the remains of the wreckage discovered that one team of builders was using Swedish rulers, which had the modern-day 12 inches in a foot, while another team was using Amsterdam rulers, which only had 11 inches in a foot.  This resulted in the ship’s mass being distributed unevenly, compounding the problem of the high center of gravity.
Our lesson today: Most of us don’t enjoy having meetings and writing documentation, but they can be crucial in making sure that we are all on the same page.  We don’t want to waste time accidentally duplicating another team’s work, or using the wrong version of our tools.

Pay attention to your test results

Shortly before the Vasa’s first and final voyage, the captain supervising construction of the ship arranged for a demonstration of the ship’s stability.  He had thirty men run back and forth across the deck.  He stopped the test after the men had crossed the deck just three times, because the ship was rocking so much he feared it would capsize!  Rather than conducting further tests, the team simply continued with plans for the launch.
Our lesson today: Test results that don’t show us what we want to see can be disheartening, but to see a software release launch and fail feels even worse!  It’s important that testers keep digging when we see results that are different from what we expected, and it’s important that we listen to what our testers are telling us, even when it’s bad news. 

Learning about the Vasa made me marvel at just how much engineering principles have remained the same over hundreds of years.  Even though our projects are built from code rather than timber, the fundamental principles of having solid plans, communicating with everyone in the project, and getting valuable feedback through testing are still crucial to creating a great product. 

What “Passengers” Can Teach Us About Quality Assurance

Last weekend, I watched the movie Passengers. The basic plot of the movie is that two passengers in hibernation on a flight from Earth to another planet are awakened ninety years too early. As a QA engineer, I came away from the movie thinking about two valuable lessons for developing and testing software.

Lesson One: “And Yet It Did”
In Passengers, when Jim’s hibernation pod fails, he tells the ship’s computer, the android bartender, and even another human what has happened. The response of all three is “Well, that’s impossible. The hibernation pods never fail.” Jim’s response is “Then how do you explain the fact that I’m here?” Many times in my testing career I have been told by developers that the behavior I am observing in our software is impossible. And I always respond with “And yet, here is the behavior that I’m seeing”. In one particular instance at a previous company, I was testing that information entered into the third-party software we integrated with was making it into our software. This testing was going well, until one entry didn’t travel to our software. I told the developer about it. He said, “That’s impossible. I’ve tested this, and you’ve been testing this for days.” I said, “Yes, and yet, this record wasn’t saved.” He said, “Look at the logs- you can see that the information was sent.” I said, “Yes, and yet, it wasn’t saved.” He said, “I entered more information just now, and you can see that it was saved.” I said, “Yes, and yet, the information I entered was not saved.” After much investigation, it was finally discovered that there was a bug where the database was not saving any record after the 199th record. Because I was testing in a different table than he was, and he didn’t have as many records, he didn’t see the error. The moral of the story: Even if something is impossible, it might still happen.

Lesson Two: “But What If It Did?”
One of the scariest parts of Passengers for me was that there was no way for Jim to reboot his hibernation pod and return to hibernation. Also, there were no spare pods. Even worse, there was no way for him to wake up the captain or any human who could help him. I found myself yelling at the screen, “How is this even possible? Why didn’t they put in contingency plans?” The answer, of course, is that the designers of the system were SO SURE that nothing could ever go wrong with the pods. But something did go wrong, and due to their false confidence there was no way to make it right. QA engineers are always thinking about all the possible ways that software can fail. I have often heard the argument “But no sane user would do that.” And I always respond with “But what if they did?” While we may not have time to account for every possible way that our software might fail, we should create appropriate responses for as many ways as we can, and log the rest for future fixes.

I like to think that somewhere on Earth in the Passengers universe, a QA engineer is saying to her product owners at the spaceship design company, “See, I TOLD you the pods could fail!”

Ask Your Way to Success

Ten years ago, I didn’t know how to use a Windows computer.  I didn’t know how the file system worked.  I didn’t know what right-clicking on a mouse did.  Today I am a QA Engineer doing both manual and automated testing at a great company.  How did I get here from there?

I asked a lot of stupid questions.

Most people are reluctant to ask questions, because they are afraid to look ignorant.  But I maintain that the best way to learn anything quickly is to ask questions when you don’t understand what’s going on.
Here are six ways that asking questions improves your knowledge and the health of your company:

1. Questions give others an opportunity to help you, which helps them get to know you better and establishes a rapport.  At my first official QA job, I was working with hotshot developers, all of whom were at least a decade and a half younger than me.  It was embarrassing having to admit that I didn’t know how to reset a frozen iPhone or find the shared drive in File Explorer, but I asked those questions anyway, I remembered the answers, and I showed my co-workers that I was a fast learner.

2. Questions help developers discover things they may have missed.  On countless occasions where a developer has been demonstrating a feature to me I’ll ask a question like “But what if there are no records for that user?”, or “What if GPS isn’t on?”, and they will suddenly realize that there is a use case they haven’t handled.

3. Questions keep everyone honest.  I have worked with other QA engineers who bandy about terms like “back-end call” or “a different code path” without actually knowing what they are talking about.  Asking what they mean by what they are saying makes sure that they do the work to find out what they are actually testing.  And when they get their answers, I get my answers as well.

4. Questions give you an opportunity to clear things up in your head.  You may have heard the expression “Rubber duck debugging”; I think this method works well when you’re asking questions.  I have found that sometimes just formulating the question out loud while I’m asking it clears things up for me.

5. Questions clarify expectations.  Yes, sometimes I have felt dumb saying things like “You want me to test this on the latest build, right?”, but every now and then I discover that there’s been a miscommunication, and I’d much rather find out about it before I start testing rather than after I’ve been testing the wrong thing for an hour.

6. Questions clarify priorities.  There have been many times where I’ve asked “Why are we adding this feature?”  There is almost always a good reason, but the discussion helps the team understand what the business use case is, which helps the developers decide how to design their solution.

A caveat: Don’t ask questions that you can find the answers to by using a search engine (example: “How do I find the UDID of a device using iTunes?”) or by going back and reading your email (example: “What day did we decide on for code freeze?”).  Asking these types of questions results in wasted time for everyone!

In summary, asking might make you feel silly in the short run, but it will make you and your team much smarter in the long run.  And hopefully it will create an atmosphere in which others feel comfortable asking questions as well, improving teamwork for everyone!

Fix All the Things

It’s very easy when you are rushing to complete features to let some bugs slide.  This article will show why in most cases it’s better to fix all the bugs now rather than later.  The following scenario is hypothetical, but is based on my experience as a tester.  

NewTech Inc. is very excited about offering a new email editor to their customers.  Customers will be able to compose emails to their clients and schedule when they should be sent from within the NewTech app.  NewTech’s service reps will also have the ability to add or change a company logo for the customers.  
Because the feature is on a deadline, developers are rushing to complete the work.  The QA engineer finds an issue: when changing the company logo, the logo doesn’t appear to have changed unless the user logs out and back in again.  The team discusses this issue and decides that because customers won’t see the issue (since it’s functionality that only NewTech employees can use), it’s safe to let this issue go on the backlog to be fixed at another time.
The feature is released, and customers begin using it.  The customers would all like to add their company logo to their emails, so they begin calling the NewTech service reps asking for this service.  The service reps add the logo and save, but they don’t see the logo appear on the email config page.  The dev team has forgotten to let them know that there’s a bug here, and that the workaround is to log out and back in again.  

Let’s assume that each time a service rep encounters the issue and emails someone on the dev team about it, five minutes is wasted.  If there are ten service reps on the team, that’s fifty wasted minutes.
Total time wasted to date: Fifty minutes

But now everyone knows about the issue, so it won’t be a problem anymore, right?  Wrong!  Because NewTech has hired two new QA engineers.  Neither one of them knows about the issue.  They encounter it in their testing, and ask the original QA engineer about it.  “Oh, that’s a known issue,” he replies.  “It’s on the backlog.”  Time wasted: ten minutes for the new QA engineers to investigate the problem, and ten more minutes asking the original QA engineer about it.
Total time wasted to date:  One hour and ten minutes

Next, a couple of new service reps are hired.  At some point, they each get a request from a company to change the company’s logo.  When they go to make the change, the logo doesn’t update.  They don’t know what’s going on, so they ask their fellow service reps.  “Oh yeah, that’s a bug,” say the senior service reps.  “You just need to log out and back in again.”  Time wasted: ten minutes for the new service reps to figure out what’s going on, and ten more minutes in conversation with the senior service reps.
Total time wasted to date: One hour and thirty minutes

It’s time to add new features to the application.  NewTech decides to give their end users the option of adding a profile picture to their account.  A new dev is tasked with adding this functionality.  He sees that there’s an existing method to add an image to the application, so he chooses to call that method to add the profile picture.  He doesn’t know about the login/logout bug.  When one of the QA engineers tests the new feature, she finds that profile picture images don’t refresh unless she logs out and back in again.  She logs a new bug for the issue.  Investigating the problem and reporting it takes twenty minutes.
Total time wasted to date:  One hour and fifty minutes

The dev team meets and decides that because customers will see the issue, it’s worth fixing.  The dev who is assigned to fix the issue is a different dev from the one who wrote the image-adding method (who has since moved on to another company), so it takes her a while to familiarize herself with the code.  Time spent fixing the issue: two hours.  
Total time wasted to date: Three hours and fifty minutes

The dev who fixed the issue didn’t realize that the bug existed for the company logo as well, so didn’t mention it to the QA engineer assigned to test her bug fix.  The QA engineer tests the bug fix and finds that it works correctly, so she closes the issue.  Time spent testing the fix: thirty minutes.
Total time wasted to date: Four hours and twenty minutes

When it’s time for the new feature to be released, the QA team does regression testing.  They discover that there is now a new issue on the email page: because of the fix for the profile images, now the email page refreshes when customers make edits, and the company logo disappears from the page.  One of the QA engineers logs a separate bug for this issue.  Time spent investigating the problem and logging the issue: thirty minutes.
Total time wasted to date: Four hours and fifty minutes

The dev team realizes that this is an issue that will have significant impact on customers, so the developer quickly starts working on a fix.  Now she realizes that the code she is working on affects both the profile page and the email page, so she spends time checking her fix on both pages.  She advises the QA team to be sure to test both pages as well.  Time spent fixing the problem: one hour.  Time spent testing the fixes: one hour.
Total time wasted to date: Six hours and fifty minutes

How much time would it have taken for the original developer to fix the original issue?  Let’s say thirty minutes, because he was already working with the code.  How much time would it have taken to test the fix?  Probably thirty minutes, because the QA engineer was already testing that page, and the code was not used elsewhere.  
So, by fixing the original issue when it was found, NewTech would have saved nearly six hours in work that could have been spent on other things.  This doesn’t seem like a lot, but when considering the number of features in an application, it really adds up.  And this scenario doesn’t account for lost productivity from interruptions.  If a developer is fielding questions from the service reps all day about known issues that weren’t fixed because they weren’t customer-facing bugs, it’s hard for him to stay focused on the coding he’s doing.

The moral of the story is: unless you think that no user, internal or external, will ever encounter the issue, fix things when you find them!

How to Train Your Dev

Training your Dev is really about training yourself.  A more accurate (but much less catchy) title for this blog post would be “How to work and communicate effectively in order to facilitate a productive relationship with developers”.  
There are two steps to having a good working relationship with your dev: 1) developing good work habits, and 2) communicating clearly.  We’ll take a look at these two steps in detail.
Good Work Habits:
  • Make sure you have completely read a feature’s Acceptance Criteria and all available notes and documents.  This can help prevent unnecessary and time-consuming misunderstandings.
  • Ask questions if there is anything in the feature that you don’t understand.  Don’t make potentially incorrect assumptions.
  • Document your work.  This is especially helpful when you have found an issue and the developer needs to know what browser you were using or what server you are pointing to.
  • Check twice to make sure that you really have a bug.  Perhaps what you are seeing is a configuration problem, a connection problem, or simply user error.
Communicating Clearly:
  • Learn the preferred communication style of your dev and use it.  For example, some developers like to hear about issues immediately, and testing and bug fixing become a collaboration.  Other developers prefer to hear right away only if the issues are big ones, and would rather have you document the smaller issues for a later conversation.
  • Ask your dev to walk you through any confusing features.  He or she will be happy to explain things to you, because they know that any information they give you at the outset of testing will save misunderstandings later.
  • Be kind when reporting issues.  Your dev has worked hard on the feature he or she has delivered, and we all know it’s no fun to have our work criticized.
  • Give feedback in the form of a question.  This can soften the blow of finding a bug.  For example, “I noticed that when I clicked the Save button, I wasn’t taken to the next page.  Is this as designed?”
  • Let your dev know what he or she can do to help you do your job more efficiently.  A good example of this is asking them to not assign an issue to you until it is actually in the test environment, so you won’t inadvertently start testing it before the code is there.
A good working relationship with your dev is all about trust!  You trust that your dev has completed the work they’ve assigned to you, that they’ve done some of their own testing before the handoff, that the work is in the QA environment and ready for testing, and that your dev has let you know about any potential areas of regression to test.  
In turn, your developer trusts that you have tested everything in the Acceptance Criteria, that you’ve done regression testing, that you’ve tested with various security levels and on various browsers, that the issues you’ve found are legitimate, and that you will clearly communicate what you tested and what issues you found.  

Train yourself to work effectively and communicate clearly, and you will find this level of trust in your relationship with all the developers you work with!

The Trouble With Toggles

Some of you may have seen the classic Star Trek episode, “The Trouble with Tribbles”.  In the episode, Lt. Uhura gets a gift of a cute little ball of fur from a passing trader.  It’s a huge hit with the crew, and when the tribble has babies, everyone who wants one can enjoy these little fuzzy creatures.  The trouble begins when they discover that the tribbles are born pregnant and quickly give birth, increasing the tribble population exponentially until they are taking over the entire ship. 

Today I would like to talk about the trouble with toggles, and what they have in common with tribbles.  Toggles can be helpful to developers in many ways.  They make it easier for development of a new feature to continue in the main branch, without disturbing the work of others.  They enable us to do beta testing, where a handful of customers try out a feature or layout and provide their feedback while most customers continue using the software without the changes.  And they enable us to quickly remove a feature if it’s discovered that it has a critical error.  But toggles can be problematic for two very important reasons, which I will explain using our tribble friends.

First, toggles make more work for QA engineers.  Just as the tribbles multiplied exponentially, the number of test passes we need to do when using toggles also multiplies exponentially.  Consider the following scenario: Team One adds a new feature with a toggle.  In order to make sure that the feature hasn’t broken any existing functionality, they need to do two passes of smoke tests: one with the toggle on, and one with the toggle off.  Next they add a second feature with a toggle.  Now there are four different combinations of scenarios that their software can have: both toggles on, toggle A on and toggle B off, toggle B on and toggle A off, and both toggles off.  So they’ll need to do four different passes of smoke tests to make sure that all their features are working correctly.  Now let’s add in Team Two.  They have two toggles of their own, C and D, so they’ll need to do four passes of smoke tests for all the combinations of those two toggles.

As release day approaches, Team One and Team Two are merging their code into the main branch.  Now there are SIXTEEN different possible combinations for toggle states.  Even if the teams decide that it’s a low risk to skip smoke testing in all sixteen scenarios, there will still be a lot of investigative work involved for every bug found.  Let’s say someone finds a bug in an instance where Toggles A and C are on and Toggles B and D are off.  They’ll need to answer questions like “What if just Toggle C is on?  Do we still see the bug? What about if Toggle D is turned on as well?  Does the bug go away?”  Research like this can be really time-consuming.  This is time that could be better spent doing exploratory testing.  If all the features were enabled in all scenarios, testers would only need to do one pass of smoke tests and spend the rest of their time digging deeper into each new feature.
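
To see how quickly the numbers grow, here’s a tiny Java sketch (the toggle names are made up) that simply enumerates every on/off combination a team would need to cover with a smoke pass:

import java.util.List;

public class TogglePermutations {

    public static void main(String[] args) {
        List<String> toggles = List.of("A", "B", "C", "D");
        int combinations = 1 << toggles.size();  // 2^n on/off combinations

        // Each integer from 0 to 2^n - 1 encodes one combination in its bits
        for (int mask = 0; mask < combinations; mask++) {
            StringBuilder pass = new StringBuilder("Smoke pass with");
            for (int i = 0; i < toggles.size(); i++) {
                boolean on = (mask & (1 << i)) != 0;
                pass.append(" Toggle ").append(toggles.get(i)).append(on ? " ON" : " OFF");
            }
            System.out.println(pass);
        }
        System.out.println(combinations + " smoke passes in total");
    }
}

With four toggles it prints sixteen passes; add a fifth toggle and you’re up to thirty-two.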

Secondly, toggles can give software teams a false sense of security, which can lead to buggy software and tech debt.  In “The Trouble with Tribbles”, the tribbles’ purring sound had a soothing effect on humans’ nervous systems, which made the crew pay less attention to the fact that in a couple of days the tribble population would fill the Enterprise from deck to ceiling.  Similarly, engineers can be soothed by the knowledge that they can always turn the feature off, which can tempt them to test the feature less carefully. If bugs are found, they can be dismissed with the statement, “Well, it’s Beta, and the Beta users will understand.  We’ll log it and take a look at it later.” Bugs like these can compound, especially when mixed with all the other bugs in all the other Beta features.  And the bugs can take a lot longer to fix six months later when the original developers don’t remember what they did or may not even be with the company any more. Alternatively, if the teams tested every feature with the knowledge that all users would be seeing it on first release, they would test more carefully and fix bugs right away, ensuring much greater code quality from the outset.

Like tribbles, toggles can be fun, in that they enable us to add more pizazz more quickly to an application.  But as the crew of the Enterprise found, too much of a good thing has its consequences!  


Designing a Complete Automation Suite

In my last two positions, I have been the sole Software Engineer in Test for a brand-new software platform.  This means that I have been able to set up an entire arsenal of automated tests for both the backend and the front-end.  This article will describe the suite of tools I used to test a web application.

API Tests:
First, I set up tests that checked that every API call worked correctly.  I did this through Runscope, an API testing tool I highly recommend.  I was fortunate enough to have engineers who set up Swagger, a framework that lists API calls in a really easy-to-read format.  I used the information from Swagger to set up my Runscope tests.  For testing against our development environment, I created tests for every GET, POST, PUT, and DELETE request.  I set this suite of tests to run once a day as a general health check for the development environment.  For testing against our production environment, I set up tests for every GET request that ran hourly.  This way we would be notified quickly if any of our GET requests stopped working.  Runscope is also good for monitoring the response times of API calls.

Backend Tests:
The backend of our platform was written in Java, and we developed a suite of integration tests using JUnit and AssertJ.  Generally the developers would create a few basic tests outlining the functionality of whatever new feature they had coded, and I would create the rest of the tests, using the strategy that I outlined in my previous post.
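
Our real tests were built on the application’s own fixtures and domain objects, but as a rough sketch of the JUnit and AssertJ shape, a test against a hypothetical local endpoint might look something like this:

import static org.assertj.core.api.Assertions.assertThat;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class ContactResourceIntegrationTest {

    // Hypothetical test-environment URL; a real suite would read this from configuration
    private static final String BASE = "http://localhost:8080/api";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void getContactsReturns200AndAContactList() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/contacts")).GET().build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // AssertJ's fluent assertions keep the intent readable
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.body()).contains("\"contacts\"");
    }
}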

Front-End Tests:
The front-end of our platform was written in Angular, so I used Protractor to write a series of end-to-end tests.  The UI developers were responsible for writing unit tests, so I focused on testing typical user interactions.  For example, in my Contact-Info tests, I tested adding and editing a phone number, adding and editing an email address, and adding and editing a mailing address.  If it was possible to delete contact info in our UI, I would have added those tests to the suite as well.  I set up two suites of tests: a smoke suite and a full suite.  The smoke suite was for basic feature testing and was integrated into the build process, running whenever a developer pushed new code.  If they pushed a change that broke a feature, they would be notified immediately and the code would not be added to the build.  The full suite was set to run every test, and was configured in Jenkins to run overnight against the development environment using SauceLabs.  If anything happened to cause the tests to fail, I’d be notified by email immediately.

Since the UI also included uploaded images, I integrated Applitools visual testing into my Protractor tests.  Their platform is extremely easy to integrate and use.  I wrote tests to check that the image I added to a page appeared on the screen, and tests to check that if I changed an image, the new image would appear on the screen.

Load Tests:
In my previous position, I set up a series of load tests using SoapUI.  This tool is similar to Runscope in that it tests API calls, but it has the added functionality of being able to configure load testing.  I set up the following test scenarios: a Baseline test, where a small group of simulated users would run through a series of API requests; a Load test, where a small group of users would make a more frequent series of requests; a Stress test, where a large group of users would make a frequent series of requests; and a Soak test, where a small group of users would make a frequent series of requests for a long test time.  We were able to use this suite of tests to monitor response times, check for memory leaks, and test our load balance setup.
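
SoapUI configures all of this through its UI rather than in code, but to make the shape of the scenarios concrete, here’s a very rough Java sketch of the same idea- the endpoint, user count, pacing, and duration are all made up, and a real tool would also collect proper statistics:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadScenario {

    public static void main(String[] args) throws Exception {
        // Baseline: a few users, gentle pacing, short duration.  A Load, Stress,
        // or Soak run would change the user count, pacing, and duration.
        runScenario("Baseline", 5, Duration.ofSeconds(1), Duration.ofMinutes(1));
    }

    static void runScenario(String name, int users, Duration pause, Duration length) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/api/contacts")).GET().build();  // hypothetical endpoint

        ExecutorService pool = Executors.newFixedThreadPool(users);
        Instant end = Instant.now().plus(length);

        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                while (Instant.now().isBefore(end)) {
                    try {
                        long start = System.nanoTime();
                        HttpResponse<Void> response =
                                client.send(request, HttpResponse.BodyHandlers.discarding());
                        long millis = (System.nanoTime() - start) / 1_000_000;
                        System.out.println(name + ": " + response.statusCode() + " in " + millis + " ms");
                        Thread.sleep(pause.toMillis());
                    } catch (Exception e) {
                        System.out.println(name + ": request failed - " + e.getMessage());
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(length.toMinutes() + 1, TimeUnit.MINUTES);
    }
}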

Every application is different, so you may find that you need a different set of tools for your test toolbox!  But I hope that this blog post gives you a jumping-off point to think about how to test your application thoroughly.

Organizing Java Integration Tests

In this post, I’ll be switching gears from discussing automated tests with Webdriver to discussing integration tests that are included as part of an application’s back-end Java code. 

In my new Software Engineer in Test position, I have been writing integration tests to test every API call we have.  This means that I have dozens of test classes, and over a thousand possible individual tests.  Today I’m sharing with you the strategy I came up with for organizing the tests. 

I sorted out the tests into four test types: 

Resource Tests:  these are the “Happy Path” tests- the tests where you expect things to work correctly.  Examples include:  POST requests with valid parameters, GET requests where the correct object is returned, PUT requests where the new parameters are valid, and DELETE requests where the correct object is deleted.

Security Tests:  this is where I put all the tests that should fail because the user is not logged in.  For every method that is tested in the Resource Tests, there should be a similar test where the method is called, but the user is not logged in.

Validation Tests:  this is where I put all the tests that should fail because a value being sent is invalid.  Examples include: sending a null value where a value is required, sending an empty value where a value is required, sending a value with invalid characters (such as letters in a numeric field), and sending a value with too many or too few characters.  It’s important to include tests for both Create and Update requests. 

Conflict Tests:  this is where I put tests that should fail because the request has been incorrectly formed.  Examples include: 
any POST, GET, PUT, or DELETE request where the required id is null
any POST, GET, PUT, or DELETE request where the required id is non-existent
any POST request where an id of the object about to be created is included
any PUT request where the id in the request does not match the id of the existing object

I created a package for each of these test types.  Then for each domain object, I created four test classes: one for each package.  For example, our code has a domain object called Phone.  Here are the test classes I created:

PhoneResourceTest- verifies that it is possible to add, get, edit, and delete a phone number

PhoneSecurityTest- verifies that it is not possible to add, get, edit, or delete a phone number if the user is not logged in

PhoneValidationTest- verifies that the phone number being added or edited is in a valid format (ten digits, no letters, etc.)

PhoneConflictTest- verifies that it is not possible to add or edit a phone number if the id of the associated contact is missing or incorrect, that it is not possible to add a phone number if there is a phone number id in the request, that it is not possible to edit a phone number if the phone number id in the request does not match the id of the phone number being edited, etc.
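
As a rough illustration of the Validation category (the endpoint, payload, and expected error message here are hypothetical- the real tests used our application’s own test fixtures), a single test might look something like this:

import static org.assertj.core.api.Assertions.assertThat;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class PhoneValidationTest {

    // Hypothetical test-environment URL
    private static final String BASE = "http://localhost:8080/api";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void createPhoneWithLettersIsRejected() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/contacts/42/phones"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"number\": \"555-CALL-NOW\"}"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // A validation failure should come back as a 400 with a helpful message
        assertThat(response.statusCode()).isEqualTo(400);
        assertThat(response.body()).contains("ten digits");
    }
}

The Resource, Security, and Conflict classes follow the same pattern, each in its own package.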

I hope this post will help you think about how to best organize your own integration tests!

Finding an Element With Webdriver: Part III

Today we will tackle the most difficult ways to find an element: by css and xpath.  These are the most difficult because it’s so easy to miss a slash or a period in the description.  They are also the most difficult because they are so brittle- every time a developer makes a change to the webpage structure, the navigation path will change and your findElement command will no longer work.

However, finding an element by css or xpath is often necessary, because developers often don’t take the time to give their elements simple, unique ids or names.  I would recommend using these strategies as a last resort to find the element you need, and I would try css first before resorting to xpath.

First, let’s find an element with css.  Take a look at the table that was described in the last blog post:

<table border="2">
<tr>
<td>Row 1, Column 1</td>
<td>Row 1, Column 2</td>
</tr>
<tr>
<td>Row 2, Column 1</td>
<td>Row 2, Column 2</td>
</tr>
</table>

Let’s say you want to find the td element that is in the first column of the second row.  The html doesn’t give you much to work with.  You can’t just do a findElement by tagName, because there are four different elements with the td tag name.  CSS selectors give you a way to navigate to the precise element you want.

In this example, we first want to select the correct row.  We can do this using the nth-of-type selector.  Nth-of-type finds a particular child element when a parent has more than one child of that type.  In this case, the parent element is the table, and the child elements are the rows (the tr elements).  Because the second row is the second tr in the table, we can describe it as tr:nth-of-type(2).
Next, we’ll want to find the first td element in that row.  Because it is the first td element in the row, we can just refer to it as td.  (If we needed the second element, we could describe it as td:nth-of-type(2).)

Now we’ll put these two descriptions together:
driver.findElement(By.cssSelector("tr:nth-of-type(2) td"));

This instruction tells Webdriver to first find a tr element that is the second tr child of its parent, and then within that element, to look for the first td element.

Another way to use css is to find classes.  Let’s say for example that the row we are looking for has this in it: class="rowwewant".  We can find classes with css by using a period:
driver.findElement(By.cssSelector(".rowwewant td"));

This instruction tells Webdriver to first find the element with the class name “rowwewant” and then within that element, to look for the first td element.

Now let’s find the same element with xpath.  In order to find the element, we will need to traverse through the DOM to get there.  First we will find the table, then find the second row, then find the first td element:

driver.findElement(By.xpath("//table/tr[2]/td[1]"));

First the table will be located, then the second row within the table, then the first td element within the row.

The two slashes at the beginning of the xpath indicate that it is a relative xpath- the element can be located anywhere in the document, rather than at a fixed path from the root.  If this is not effective, an xpath can be created from the root element (see the previous blog post for the html for the entire webpage):

driver.findElement(By.xpath("/html/table/tr[2]/td[1]"));
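
Putting the pieces together, here’s a minimal, self-contained sketch of how these locators might be used.  It assumes a local chromedriver and a hypothetical page containing the table above (with the rowwewant class added to the second row):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class TableLocatorExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/table-page");  // hypothetical page with the table

            // The same cell located three different ways
            WebElement byCss = driver.findElement(By.cssSelector("tr:nth-of-type(2) td"));
            WebElement byClass = driver.findElement(By.cssSelector(".rowwewant td"));
            // Note: if the browser inserts a tbody element when rendering the table,
            // the xpath may need to be //table/tbody/tr[2]/td[1]
            WebElement byXpath = driver.findElement(By.xpath("//table/tr[2]/td[1]"));

            // All three should print "Row 2, Column 1"
            System.out.println(byCss.getText());
            System.out.println(byClass.getText());
            System.out.println(byXpath.getText());
        } finally {
            driver.quit();
        }
    }
}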

There are many more ways that findElement by css or xpath can be used, but hopefully this will provide a jumping-off point for finding elements that can’t be found by easier means.