Localization Testing

If your app is used anywhere outside your country of origin, chances are it uses some kind of localization strategy.  Many people assume that localization simply means translation to another language, but this is not the case.  Here are some examples of localization that your application might use:

Language: different countries speak different languages, but different regions within a single country can speak different languages as well.  An example of this is Canada: in the province of Quebec, the primary language spoken is French, while in the other provinces, the primary language spoken is English.

Spelling: even when two areas speak the same language, the spelling of words can be different.  For example, “color” in the US, as opposed to “colour” in Canada and the UK.

Words and idioms: words can vary even in a common language.  In the UK, a truck is a lorry, and a car’s trunk is a boot.  In the US, to “table” a topic means to stop talking about it until a later meeting.  But in the UK and Canada, to “table” a topic means to start talking about it in the current meeting- the complete opposite of what it means in the US!

Currency: different countries will use different currencies.  But this doesn’t just mean using a different symbol in front of the currency, like $ or £.  The currencies can also be formatted differently.  In the US, fractions of a dollar are separated with a dot, and amounts over one thousand are separated with a comma.  In many European countries, such as Germany, it’s the opposite.  So what would be written as 1,000.00 in the US would be written as 1.000,00 in Germany.

Date and Time Formats:  in the US, dates are written as month/day/year, but in the UK, dates are written as day/month/year.  The US generally writes times using AM and PM, but many other countries use 24-hour time, so what would be 1:00 PM in the US would be 13:00 elsewhere.

Units of Measure: the US usually uses US Customary units, such as pounds for weight and feet and inches for height.  Most other countries will use the metric system for these measurements.  Most countries measure air temperature in Celsius, while the US uses Fahrenheit.

Postal Codes and Phone Numbers: these vary widely from country to country.  See my posts on international phone numbers and postal codes for some examples.

Images: pictures in an application might need to vary from country to country, and there are some considerations.  For example, if your application were to be used internationally, you might not want to include a picture of a building with an American flag in front of it.  Or if your app were to be used in religiously conservative countries, you might not want a picture of a person in a sleeveless shirt.

Testing for Localization

The first step in localization testing is to determine exactly what will be localized.  Your company may decide to localize for date and time, postal codes, and phone numbers, but not for language.  Or a mobile app may choose to rely only on the languages built into the device, so that the text of the app would be in one language while the device’s own buttons and dialogs would be in the user’s language.

If your app will be using other languages, gather all the text you will need to check.  For example, if your app has menu items such as “Home”, “Search”, “Your Account”, and “About Us”, and your app will be localized for French and Spanish, find out what those menu items should be in French and Spanish.  It goes without saying that whoever has done the translations should have consulted with a native speaker to make sure that the translations are correct.

Next, create a test plan.  The simplest way to do this would be to create a spreadsheet, where the left column lists the different localization types you need to test and the top row lists the different countries.  Here is a very basic example (the entries are illustrative):
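                     US                    France                Germany
Date format          MM/DD/YYYY            DD/MM/YYYY            DD.MM.YYYY
Currency             $1,000.00             1 000,00 €            1.000,00 €
Postal code          12345                 75001                 10115
“Save” button        Save                  Enregistrer           Speichern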

Once your matrix is created, it should be very simple to run through your tests.  If you are testing on mobile, here’s a helpful hint: When you switch your mobile device to a different language, make sure you know exactly how to switch it back if you don’t recognize the words in the language you are switching to.  When I was testing localization for Mandarin, this was especially important; since I didn’t know any of the characters, I had no idea what any of the menu items said.  I memorized the order of the menu items so I knew which item I needed to click on to get back to English.

Another important thing to watch for as you are testing is that translated items fit well in the app.  For example, your Save button might look perfectly fine in English, but in German, where “Save” can become the much longer “Speichern” or “Änderungen speichern”, the text could overflow the button’s borders.

Once you have completed your localization testing, you’ll want to automate it.  This could be done with UI automation tools such as Selenium.  You could have a separate test suite for each language, where the setup step would be to set the desired country on the browser or device, and each test would validate one aspect of localization, such as verifying that button texts are in the correct language, or validating that you can enter a postal code in the format of that country.  It would be very helpful to use a tool like Applitools to validate that buttons are displaying correctly or that the correct flag icon is displaying for the location.
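As a minimal sketch, a single check like this might look as follows in Python with Selenium (the URL, menu locator, and expected translations are illustrative assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch Chrome with the browser language set to French
options = webdriver.ChromeOptions()
options.add_argument("--lang=fr")
driver = webdriver.Chrome(options=options)

driver.get("https://www.example.com")  # illustrative URL

# Verify that each menu item displays the expected French text
expected = ["Accueil", "Recherche", "Votre compte", "À propos de nous"]
actual = [item.text for item in driver.find_elements(By.CSS_SELECTOR, "nav .menu-item")]
assert actual == expected, f"Expected {expected} but found {actual}"

driver.quit()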

Localization is a tricky subject, and just like software, it’s hard to make it perfect.  But if you and your development team clarify exactly what you want to localize, and if you are methodical in your testing, you’ll ensure that a majority of your users will be satisfied with your application.

Usability and Accessibility Testing

Two often-overlooked types of application testing are Usability Testing and Accessibility Testing.  Usability testing refers to the user experience: whether the app is intuitive and easy to use.  Accessibility testing refers to how easy it is for users with limited ability to use the application.  I’ll be discussing both in today’s post.  

Usability Testing

Usability testing is often called User Experience (UX) testing, and some larger companies have dedicated UX designers whose goal is to make their company’s application pleasing to customers.  Even if you have UX people at your company, it’s still a good idea to test your application with the user experience in mind.  Here are four ways to do that:
  • Learn what the expected “User Journeys” are.  Usually when we are testing an application, we use it in ways that users won’t, focusing on one feature or page at a time.  A good strategy is to find out how real users will be using the application and run through those scenarios.  For example, if you had an application that allowed users to order tickets for a movie theater, a user journey might be to log in, browse through the movies, look at the show times for one movie, select a show time, click the button to place the order, add the credit card information, and complete the sale.  By running through scenarios like this, you’ll discover which areas might not be offering the best user experience.
  • As you run through your user journeys, look for tasks that require many clicks or steps.  Could those tasks be done with fewer clicks?  This week, my husband was searching online for a new car.  He went to a website for a car manufacturer and was browsing through the different models.  Every time he changed to a new page, he was prompted to enter his Zip Code again.  That’s not a great user experience!
  • Test a new feature before you know what it’s supposed to do.  This is a strategy that doesn’t get used much any more, now that Test-Driven Development (TDD) is popular.  Even if your company isn’t using TDD, you are probably present for the feature planning.  But I have found that it is sometimes helpful to take a look at a feature while knowing very little about it.  That is what your customers will be doing, so anything you find frustrating or complicated will probably also be frustrating or complicated for your customers.  An alternative to testing without knowing anything about the feature is to get someone who has never used the application to try it out.  Spouses, roommates, friends, and people from non-technical teams in your company are good candidates for this testing.  By watching them navigate through your site, you will find the tasks that might not be intuitive.  
  • When you are testing, see if you can do everything you want to do in the application with the keyboard alone, or with the mouse alone.  People who use applications extensively want to be able to use them as quickly as possible.  A good example of this would be a customer service representative who has to fill out a form for every person who calls them.  If they are typing into each field, and they have to keep moving their hand over to the mouse to click the Submit button, this is a waste of their time.  If they can instead submit the form with the Enter key, their hands don’t need to leave the keyboard.  
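Keyboard-only journeys like this can even be scripted.  Here’s a minimal sketch in Python with Selenium (the URL and field ID are illustrative assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://www.example.com/contact")  # illustrative form page

# Fill in the form using only the keyboard: type, TAB to the next field, ENTER to submit
driver.find_element(By.ID, "name").send_keys("Test User" + Keys.TAB)
driver.switch_to.active_element.send_keys("test@example.com" + Keys.TAB)
driver.switch_to.active_element.send_keys(Keys.ENTER)

# Verify the form was submitted without ever touching the mouse
assert "Thank you" in driver.page_source

driver.quit()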

Accessibility Testing
Accessibility testing is important because about fifteen percent of the world’s population has some kind of disability, and we want our applications to be used by as many people as possible.  The three main types of accessibility testing you will want to do are Visual, Dexterity, and Auditory.  Here are some strategies for each. 
Visual Testing:
  • Is the text large enough for most users?  Could users zoom and enlarge the text if needed?
  • Do the images have text descriptions, so that visually impaired users who use text-to-speech features will know what the image is?  (A simple automated check for this is sketched after this list.)
  • Are the colors in the application distinctive enough that color-blind users won’t be confused by them?  There are helpful websites that allow you to load in a screenshot of your application and see what it will look like to a color-blind person.  I put a screenshot from an earlier blog post into one of these sites to see how the buttons would look to a red-green color-blind individual.
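Here is that alt-text check as a minimal sketch in Python with Selenium (the URL is an illustrative assumption):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.example.com")  # illustrative URL

# Report any image that has no alt text for a screen reader to announce
for image in driver.find_elements(By.TAG_NAME, "img"):
    if not image.get_attribute("alt"):
        print(f"Missing alt text: {image.get_attribute('src')}")

driver.quit()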
Dexterity Testing:
  • Does your app require any complicated click-and-drag or highlight-and-click scenarios?  Imagine how difficult those would be to a person who had only one arm, or who had limited use of their hand.  Can you change the app so that those tasks can be accomplished in a simpler way?
  • Are your buttons and links easy to click on?  If the buttons are too small, it may be difficult for someone with limited dexterity to click in the correct place.
Auditory Testing:
  • Are there any places in your application that have videos?  If so, do those videos have captions so the hearing impaired will know what is being said?
  • Are there places in your application that rely solely on sound effects in order for the user to know what is happening?  Try running the application with the sound turned off.  Do you miss any information while you are running through your test scenarios?
As software testers, we want our users to have as pleasant an experience as possible when using our application.  Usability and accessibility testing will help us ensure that our users will be able to accomplish what they want with our apps efficiently and easily.  

How to Design a Load Test

Last week, we talked about Performance Testing and various ways to measure the reliability and speed of your application.  This week we’ll be talking about Load Testing.  Load testing simply measures how your application performs in times of great demand.  This could mean testing scenarios of reasonable load, or it could mean testing with scenarios of extreme stress to find the limits of the application.

It’s easy to find a load testing tool, create a few tests, and run them with a few hundred users to create metrics.  But this isn’t particularly helpful unless you know why you are testing and how your results will help you.

So before you begin any load testing, it’s important to ask the following questions:

  • What kind of scenarios are you testing for?
  • What is the expected behavior in those scenarios?

Let’s imagine that you have a new website that sells boxes of chocolates.  You have noticed that your site is most active on Saturday mornings.  Also, Valentine’s Day is coming, and you anticipate that you will have many more orders in the week leading up to that day.  In March, your company will be reviewed by a popular cable TV show, and you are hopeful this will lead to thousands of visits to your site.

With this in mind, you come up with the following expected behaviors:

  • You would like your web pages to load in less than two seconds under a typical Saturday morning load
  • You would like to be able to process five hundred orders an hour, which will get you through the busy Valentine’s Day season
  • You would like your site to be able to handle ten thousand visitors in an hour, which is how many people you expect to visit right after the TV show airs
The next step is to figure out what test environment you will use.  Testing in Production would obviously provide the most realistic environment, but load testing there might be a bad idea if your testing results in crashing your site!  The next most realistic option would be a test environment that accurately mimics your Prod environment in terms of the number of servers used and the size of the back-end database.  An ideal situation would be one in which this test environment was only used for your load testing, but this is not always an option; you may need to share this environment with other testers, in which case you’ll need to be aware of what kinds of tests they are running and how they will impact you.  You’ll also want to let other testers know when you are conducting your load tests, so they won’t be surprised if response times increase.  
Once your test environment is ready, you can conduct some simple baseline tests.  You can use some of the strategies mentioned in last week’s post to find out how long it takes for your web pages to load, and what the typical response times are for your API endpoints.  Knowing these values will help you gauge how large an impact a load scenario will have on your application.
Now it’s time to design your tests.  There are a couple of different strategies to use in this process:
  • You can test individual components, such as loading a webpage or making a single request to an API
  • You can test whole user journeys, such as browsing, adding an item to a cart, and making a purchase
You’ll probably want to use both of these strategies, but not at the same time.  For instance, you could measure how long it takes to load the home page of your website when you have ten thousand requests for the page in an hour.  In a separate test, you could create a scenario where hundreds of users are browsing, adding items to their cart, and making a purchase, and you could monitor the results for any errors.  
For each test you design, you’ll want to determine the following:
  • How many users will be interacting with the application at one time?
  • Will those users be added all at once, or every few seconds?
  • Will the users execute just one action and then stop, or will they execute the action continuously for the duration of the test?
  • How long will the test last?
Let’s design a test for the Valentine’s Day scenario.  We’ll assume that you have created test steps that will load the webpage, browse through three product pages, add one product to the cart, and make a purchase.  We already mentioned above that you’ll want to be able to handle five hundred orders an hour, so we’ll set up the test to do just that.  It’s unlikely that in a real-world scenario all five hundred users would start the ordering process at the same time, so we’ll set the test to add a new user every five seconds.  Each user will run through their scenario once and then stop.  The test will run for one hour, or until all five hundred users have completed the scenario.

Before you run your test, be sure that your test steps will return errors if they don’t result in the expected response.  When I first got started with load testing, I ran several tests with hundreds of requests before I discovered that all of my requests were returning an empty set.  Because the requests were returning a 200 response, I didn’t notice that there was anything wrong!  Make sure that your steps have assertions that will validate that your application is really behaving as you want it to.
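As a minimal sketch, here’s how this Valentine’s Day scenario might be expressed in Locust, a popular Python load testing tool.  The host, endpoints, and expected page content are illustrative assumptions:

from locust import HttpUser, between, task
from locust.exception import StopUser

class ChocolateBuyer(HttpUser):
    host = "https://test.chocolates.example.com"  # illustrative test environment URL
    wait_time = between(1, 3)  # pause one to three seconds between steps, like a real user

    @task
    def order_chocolates(self):
        # Load the home page, and fail loudly if it doesn't contain what we expect
        with self.client.get("/", catch_response=True) as response:
            if "Chocolates" not in response.text:
                response.failure("Home page did not contain expected content")

        # Browse through three product pages
        for product_id in (1, 2, 3):
            self.client.get(f"/products/{product_id}")

        # Add one product to the cart and complete the purchase
        self.client.post("/cart", json={"productId": 1, "quantity": 1})
        with self.client.post("/checkout", json={"paymentToken": "test-token"}, catch_response=True) as response:
            if "Order confirmed" not in response.text:
                response.failure("Purchase did not complete")

        # Each simulated user runs through the scenario once and then stops
        raise StopUser()

You could run this sketch with a command like locust -f valentines_test.py --users 500 --spawn-rate 0.2 --run-time 1h, which adds one new user every five seconds (a spawn rate of 0.2 users per second) and stops the test after an hour.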

Once you have the test steps in place, you’ve made sure that the steps have good assertions, and you have your test parameters set up with the number of users, the ramp-up time (how frequently a new user will be added to the test), and the test duration, it’s time to run the test!  While the test is running, watch the response times and the CPU usage of your application.  If you start seeing errors or high CPU spikes, you can stop the test and make note of how high the load was when the spikes occurred.

Whether you need to stop the test early or whether the test completed successfully, you’ll want to run a few test passes to make sure that your data is fairly consistent.  At the end of your testing, you’ll be able to answer the question: can my website handle 500 orders in an hour?  If all of the purchases were completed with no errors, and if all of the response times were reasonable, then the answer is yes.  If you started seeing errors, or if the response times increased to several seconds, then the answer is no. If the answer is no, you can take the data you collected and share it with your developers, showing them exactly how many users it took to slow the system down.

Load testing should never be an activity you do to simply check it off of a list of test types.  When you take the time to consider what behaviors you want to measure, how you want your application to behave, what scenarios you can run to test those behaviors, and how you can analyze your results, you will ensure that load testing is a productive and informative activity.  

  

Introduction to Performance Testing

Performance Testing, like many other phrases associated with software testing, can mean different things to different people.  Some use the term to include all types of tests that measure an application’s behavior, including load and stress testing.  Others use the term to mean the general responsiveness of an application under ordinary conditions.  I will be using the latter definition in today’s post.

Performance Testing measures how an application behaves when it is used.
This includes reliability:

  • Does the page load when a user navigates to it?  
  • Does the user get a response when they make a request?

and speed:

  • How fast does the page load?  
  • How fast does the user get a response to their request?
Depending on the size of the company you work for, you may have performance engineers or DevOps professionals who are already monitoring for metrics like these.  But if you work for a small company, or if you simply like to be thorough in your testing, it’s worth learning how to capture some of this data to find out how well your application is behaving in the real world.  I know I have stopped using an application simply because the response time was too slow.  You don’t want your end users to do that with your application!

Here are five different ways that you can monitor the health of your application:
Latency- this is the time that it takes for a request to reach a server and return a response.  The simplest way to test this is with a ping test.  You can run a ping test from the command line on your laptop, simply by entering the word ping, followed by a website’s URL or an IP address.  For example, you could run this command:
ping www.google.com

And get a response something like this (the output below is representative; your addresses and times will vary):
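PING www.google.com (142.250.65.196): 56 data bytes
64 bytes from 142.250.65.196: icmp_seq=0 ttl=116 time=19.828 ms
64 bytes from 142.250.65.196: icmp_seq=1 ttl=116 time=20.197 ms
64 bytes from 142.250.65.196: icmp_seq=2 ttl=116 time=23.557 ms
64 bytes from 142.250.65.196: icmp_seq=3 ttl=116 time=19.043 ms
^C
--- www.google.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 19.043/20.656/23.557/1.726 ms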

(To stop a ping test, simply use CTRL-C)
Let’s take a look at the response times.  Each ping result shows how long it took in milliseconds to reach that server and return a response.  At the bottom of the test results, we can see the minimum response time, the average response time, the maximum response time, and the standard deviation in the response time.  In this particular example, the slowest response time was 23.557 milliseconds.
API Response Time- this is a really helpful measurement, because so many web and mobile applications are using APIs to request and post data.  Postman (my favorite tool for API testing) has response time measurements built right into the application.  When you run a request, you will see a Time entry right next to the Status of the response.  For example, one GET request I ran took 130 milliseconds to return a response.  You can include an assertion in your Postman tests which will verify that your response was returned in less than a selected time, such as 200 milliseconds.  The assertion will look like this:

pm.test("Response time is less than 200ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(200);
});

(If you would like to learn more about API testing with Postman, I have several blog posts on this topic.)

Another great tool for testing API response time is Runscope.  It is easy to use, and while it does not offer a free version, it has a very helpful feature: you can make API requests from locations all over the world, and verify that your response times are good no matter where your users are.  You can easily set up automated checks to run every hour or every day to make sure that your API is up and running.  Runscope also offers real-time monitoring of your APIs, so if your application suddenly starts returning 500 errors for some users, you will be alerted.
Web Response Time- Even if your API is responding beautifully, you’ll also want to make sure that your web page is loading well.  There’s nothing more frustrating to a user than sitting around waiting for a page to load!  There are a number of free tools that you can use to measure how long it takes your application’s pages to render.  With Pingdom, you can enter your website’s URL and it will crawl through your application, measuring load times.  I tried this with my own website’s URL, requesting that it be tested from Melbourne, Australia.
Pingdom also provided suggestions for improving my site’s performance, such as adding browser caching and minimizing redirects.  Paid customers can also set up monitoring and alerting, so you can be notified if your page loading times slow down.  
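If you’d like to capture a quick page-load number of your own, the browser’s Navigation Timing data can be read from an automated test.  Here’s a minimal sketch in Python with Selenium (the URL is an illustrative assumption):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.example.com")  # illustrative URL

# The browser records timestamps for key page-load events, in milliseconds
load_time_ms = driver.execute_script(
    "var t = window.performance.timing;"
    "return t.loadEventEnd - t.navigationStart;"
)
print(f"Page load time: {load_time_ms} ms")

driver.quit()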

Mobile Application Monitoring- If you have a native mobile application, you’ll want to make sure that it’s responding correctly and quickly.  Crashlytics is free software that can be added to your app to provide statistics about why your app crashed.  New Relic offers paid mobile monitoring for your app, allowing you to see data about which mobile devices are working well with your app and which might be having problems.

Application Performance Monitoring (APM) Tools- For more advanced monitoring of your application, you can use an APM tool such as Elasticsearch or SharePath.  These tools track every single transaction your application processes and can provide insights on CPU usage, server response times, and request errors.  

Whether you work for a big company with a web application that has millions of users, or a small startup with one little mobile app, performance testing is important.  It may mean the difference between happy customers who keep using your application, and disaffected users who uninstall it.

Mobile Testing Part IV: An Introduction to Mobile Security Testing

Mobile security testing can be problematic for a software tester, because it combines the challenges of mobile with the challenges of security testing.  Not knowing much about mobile security testing, I did some research this week, and here are some of the difficulties I discovered:

  • Mobile applications are designed to be more locked down than traditional web applications, because mobile devices are personal to the user.  Because of this, it’s harder to look “under the hood” to see how an application works.  
  • Because of the above difficulty, mobile security testing often requires tools that the average tester might not have handy, such as Xcode Tools or Android Studio.  Security testing on a physical device usually means using a rooted or jailbroken phone.  (A rooted or jailbroken phone is one that is altered to have admin access or user restrictions removed.  An Android phone can be rooted; an iPhone can be jailbroken. You will NOT want to do this with your personal device.)
  • It’s difficult to find instructions for mobile security testing when you are a beginner; most documentation assumes that you are already comfortable with advanced security testing concepts or developing mobile applications.
I’m hoping that this post will serve as a gentle introduction for testers who are not already security testing experts or mobile app developers.  Let’s first take a look at the differences between web application security testing and mobile app security testing: 
  • Native apps are usually built with the mobile OS’s development kit, which has built-in features for things like input validation, so SQL injection and cross-site scripting vulnerabilities are less likely.
  • Native apps often make use of the data storage capabilities on the device, whereas a web application will store everything on the application’s server.
  • Native apps will be more likely than web applications to use biometric data, such as a fingerprint, for authentication.

However, there are still a number of vulnerabilities that you can look for in a mobile app that are similar to the types of security tests you would run on a web application.  Here are some examples:

  • For apps that require a username and password to log in, you can check to make sure that a login failure doesn’t give away information.  For example, you don’t want your app to return the message “invalid password”, because that lets an intruder know that they have a correct username.
  • You can use a tool such as Postman to test the API calls that the mobile app will be using and verify that your requests use https rather than http.
  • You can test for validation errors. For example, if a text field in the UI accepts a string that is longer than what the database will accept, this could be exploited by a malicious user for a buffer overflow attack.
If you are ready for a bigger testing challenge, here are a couple of mobile security testing activities you could try:
  • You can access your app’s local data storage and verify that it is encrypted.  With Android, you can do this with a rooted phone or an Android emulator and Android’s ADB (Android Debug Bridge) command line tool; see the example commands after this list.  With iPhone, you can do this with Xcode Tools and a jailbroken phone or an iPhone simulator.  
  • You can use a security testing tool such as Burp Suite to intercept and examine requests made by the mobile app.  On Android, unless you have an older device running the Lollipop OS or earlier, you’ll need to do this with an emulator.  On iPhone, you can do this with a physical device or a simulator.  In both instances, you’ll need to install a CA certificate on the device that allows requests to be intercepted.  This CA certificate can be generated from Burp Suite itself.  
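As a minimal sketch of that first activity on Android, the ADB commands might look like this (the package and file names are illustrative assumptions):

adb root    # restart adb with root access (works on emulators and rooted devices only)
adb shell ls /data/data/com.example.myapp/shared_prefs
adb shell cat /data/data/com.example.myapp/shared_prefs/user_prefs.xml

If the file contents include readable values such as passwords or session tokens, then the app’s data is not being encrypted at rest.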
These two testing tasks can prepare you to be a mobile security testing champion!  If you are ready to learn even more, I recommend that you check out the online book OWASP Mobile Security Testing Guide.  This is the definitive guide to making sure that your application is free of the most common security vulnerabilities.  Happy hacking!

Mobile Testing Part III: Seven Automated Mobile Testing Tips (and Five Great Tools)

Walk into any mobile carrier store and you will see a wide range of mobile devices for sale.  Of course you want to make sure that your application works well on all of those devices, in addition to the older devices that some users have.  But running even the simplest of manual tests on a phone or tablet takes time.  Multiply that time by the number of devices you want to support, and you’ve got a huge testing burden!

This is where automated mobile testing comes in.  We are fortunate to be testing at a time where there are a whole range of products and services to help us automate our mobile tests.  Later in this article, I will discuss five of them.  But first, let’s take a look at seven tips to help you be successful with mobile automated testing.

Tip 1: Don’t test things on mobile that could be more easily tested elsewhere
Mobile automation is not the place to test your back-end services.  It’s also not the place to test the general logic of the application, unless your application is mobile only.  Mobile testing should be used for verifying that elements appear correctly on the device and function correctly when used.  For example, let’s say you have a sign-up form in your application.  In your mobile testing, you’ll want to make sure that the form renders correctly, that all fields can be filled in, that error messages display appropriately, and that the Save button submits the form when clicked on.  But you don’t want to test that the error message has the correct text, or that the fields have all saved correctly.  You can save those tests for standard Web browser or API automation.
Tip 2: Decide whether you want to run your tests on real devices or emulators
The advantage of running your tests on real devices is that the devices will behave like the devices your users own, with the possibility of having a low battery, connectivity issues, or other applications running.  But because of this, it’s more likely that your tests will fail because a phone in the device farm froze up or was being used by another tester.  Annoyances like these can be avoided by using emulators, but emulators can’t completely mimic the real user experience.  It’s up to you to decide which choice is more appropriate for your application.  
Tip 3: Test only one thing at a time
Mobile tests can be flaky, due to the issues found in real devices discussed above and other issues such as the variations found in different phones and tablets.  You may find yourself spending a fair amount of time taking a look at your failed tests and diagnosing why they failed.  Because of this, it’s a good strategy to keep your tests small.  For example, if you were testing a login screen, you could have one test for a successful login and a second test for an unsuccessful login, instead of putting both scenarios into the same test.
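For example, a pair of small login tests might look something like this with the Appium Python client (the capabilities, element IDs, and app path are illustrative assumptions):

from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

# Illustrative capabilities; point these at your own app and device
options = UiAutomator2Options()
options.device_name = "Pixel_4_Emulator"
options.app = "/path/to/app.apk"
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)

def test_successful_login():
    driver.find_element(AppiumBy.ID, "com.example.app:id/username").send_keys("validuser")
    driver.find_element(AppiumBy.ID, "com.example.app:id/password").send_keys("validpassword")
    driver.find_element(AppiumBy.ID, "com.example.app:id/login").click()
    assert driver.find_element(AppiumBy.ID, "com.example.app:id/welcome").is_displayed()

def test_unsuccessful_login():
    driver.find_element(AppiumBy.ID, "com.example.app:id/username").send_keys("validuser")
    driver.find_element(AppiumBy.ID, "com.example.app:id/password").send_keys("wrongpassword")
    driver.find_element(AppiumBy.ID, "com.example.app:id/login").click()
    assert driver.find_element(AppiumBy.ID, "com.example.app:id/error").is_displayed()

Each test verifies exactly one behavior, so when one fails, you know immediately which scenario to investigate.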
Tip 4: Be prepared to re-run tests
As mentioned in Tip 3, you will probably encounter some flakiness in your mobile tests.  A test can fail simply because the service that is hosting the emulator loses connectivity for a moment.  Because of this, you may want to set up a system where your tests run once and then re-run the failed tests automatically.  You can then set up an alert that will notify you only if a test has failed twice.
Tip 5: Don’t feel like you have to test every device in existence

As testers, we love to be thorough.  We love to come up with every possible permutation in testing and run through them all.  But in the mobile space, this can quickly drive you crazy!  The more devices you are running your automated tests on, the more failures you will have.  The more failures you have, the more time you have to spend diagnosing those issues.  This is time taken away from new feature testing or exploratory testing.  Do some research on which devices your users own and come up with a list of devices to test with that covers most, but not all, of those devices.  
Tip 6: Take screenshots
Nothing is more frustrating than seeing that a test failed and not being able to figure out why.  Screenshots can help you determine if you were on the correct screen during a test step and if all the elements are visible.  Some mobile testing companies take a screenshot of every test step as the test progresses.  Others automatically take a screenshot of the last view before a test fails.  You can also code your test to take screenshots of specific test steps.  
Tip 7: Use visual validation

Visual validation is essential in mobile testing.  Many of the bugs you will encounter will be elements not rendering correctly on the screen.  You can test for the presence of an element, but unless you have a way to compare a screenshot with one you have on file, you won’t really be verifying that your elements are visible to the user.  Applitools makes an excellent product for visual comparison.  It integrates with common test software such as Selenium, Appium, and Protractor.  With Applitools, you can build visual verification right into your tests and save a collection of screenshots from every device you test with to use for image comparison. 

Now let’s discuss some good tools for test automation.  I’ve already mentioned Applitools; below are four other tools that are great for mobile test automation.  The mobile landscape is filled with products for automated testing, both open-source and paid.  In this post, I am discussing only the products that I have used; there are many other great products out there. 

Visual Studio App Center:  A Microsoft product that allows you to test Android and iOS applications on real devices.  A screenshot is taken of every test step, which makes it easy to figure out where a test started to go wrong. 

Appium:  An open-source product that integrates with Selenium and provides the capability to test on device emulators (or real devices if you integrate with a device farm). 

Sauce Labs:  Sauce Labs is great for testing on both mobile devices and web browsers on all kinds of operating systems.  You can run tests on real devices or emulators, and you can run tests in parallel.  They integrate well with Selenium and Appium.  A screenshot is taken whenever a test fails, and you can even watch a video of your test execution.

Perfecto: Uses real devices and integrates with Visual Studio, Appium, and Selenium.  They can simulate real-world user conditions such as network availability and location.

Whatever automated test tools you choose to use, remember the tips above, and you will ensure that you are comprehensively testing your application on mobile without spending a lot of time debugging. 

I initially said this series on Mobile Testing was going to be three blog posts long.  On reflection, I’ve realized that we need a fourth post: Mobile Security Testing.  This is a topic I know very little about.  So I’ll be doing some research, and you can expect Mobile Testing Part IV from me next week!

Mobile Testing Part II: Manual Mobile Testing

I am a firm believer that no matter how great virtual devices and automated tests are, you should always do some mobile testing with a physical device in your hand.  But none of us has the resources to acquire every possible mobile device with every possible carrier.  So today’s post will discuss how to assemble a mobile device portfolio that meets your minimum testing criteria, and how to get your mobile testing done on other physical devices.  We’ll also talk about the manual tests that should be part of every mobile test plan.

Every company is different, and will have a different budget available for acquiring mobile devices.  Here is an example of how I would decide on which phones to buy if I were allowed to purchase no more than ten.  I am based in the United States, so I would be thinking about US carriers.  I would want to make sure that I had at least one AT&T, Verizon, T-Mobile, and Sprint device in my portfolio.  I would also want to have a wifi-only device.  I would want to make sure that I had at least one iOS device and at least one Android device.  For OS versions, I’d want to have both the latest OS version and the next-to-latest OS version for each operating system.  For Android devices, I’d want to have Samsung, LG, and Motorola represented, because these are the most popular Android devices in the US.  Finally, I would want to make sure that I had at least one tablet for each operating system.

With those stipulations in mind, I would create a list of devices like this (the specific models and carrier assignments are illustrative):
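Device                   Type         OS          Carrier
iPhone X                 Smartphone   iOS 11      Sprint
iPhone 8 Plus            Smartphone   iOS 11      AT&T
iPad                     Tablet       iOS 10      Wifi only
Samsung Galaxy S9        Smartphone   Android 8   Verizon
Samsung Galaxy Tab S3    Tablet       Android 8   T-Mobile
LG G6                    Smartphone   Android 7   Sprint
LG G Pad X               Tablet       Android 7   AT&T
Motorola Moto G6         Smartphone   Android 8   T-Mobile
Motorola Moto Z2         Smartphone   Android 7   Verizon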

In this portfolio, we have three iOS devices and six Android devices.  All four carriers I wanted are represented, and we have one wifi only device.  We have three tablets and six smartphones.  We have the latest iOS and Android versions, and the next-to-latest versions.  And we also have a variety of screen sizes.  It’s easy to modify a device plan like this if for some reason devices aren’t available.  For example, if I went to purchase these devices and found that Sprint wasn’t carrying the iPhone X, I could easily switch my plan around so I could get an iPhone X from AT&T and an iPhone 8 Plus from Sprint instead. 

The benefit of having a physical device portfolio is that you can add to it every year as your budget allows. Each year you can purchase a new set of devices with the latest OS version, and you can keep your old devices on the older OS versions, thus expanding the range of OS versions you can test with.

Once you have a device portfolio, you’ll need to make sure you are building good mobile tests into your test plans.  You can add the following tests:

  • Test the application in the mobile browser, in addition to testing the native app
  • Test in portrait and landscape modes, switching back and forth between the two
  • Change from using the network to using wifi, to using no service, and back again
  • Test any in-app links and social media features
  • Set the phone or device timer to go off during your testing
  • Set text messages or low battery warnings to come in during your testing
What about testing on the dozens of devices that you don’t have?  This is where device farms come in.  A device farm is made of many physical devices housed in one location that you can access through the Web.  From your computer, you can access the device controls such as the Home or Back buttons, swipe left and right on the screen, and click on the controls in your application.  You may even be able to do things like rotate the device and receive a phone call.  With a device farm, you can expand the range of devices you are testing on.  Good ideas for expanding your test plan would be adding devices with older OS versions, and adding devices from manufacturers that you don’t have in your portfolio.  In my case above, this might mean adding in HTC and Huawei devices.  
For manual device farm testing, I have had good experiences with Perfecto.  Other popular device farms with manual testing capabilities are AWS, Sauce Labs, and Browserstack.
You may be saying to yourself at this point, “You’ve got some great devices and carriers for US testing, but my users come from all over the world.  How can I make sure that they are all having a good user experience with my app?”  This is where crowd-testing comes in!  There are testing companies that specialize in using testers from many countries, who are using devices with their local carriers.  They can test your application in their own time zone on a device in their own language.  Popular global test companies include Testlio and Global App Testing.  Another good resource is uTest, which matches up independent testers with companies who are looking for testing on specific devices in specific countries.  
With a mobile device portfolio, a mobile test plan, a device farm, and a crowd-testing service in place, you will be able to execute a comprehensive suite of tests on your application and ensure a great user experience worldwide.  But all of this manual testing takes a lot of time!  Next week, we’ll discuss how to save time and maximize your mobile test coverage through automated mobile testing.  

Mobile Testing Part I: Twelve Challenges of Mobile

Just over a decade ago, the first iPhone was released.  Now we live in an age where smartphones are ubiquitous.  Our smartphones are like our Swiss Army knives- they are our maps, our address books, our calendars, our cameras, our music players, and of course our communication devices.  Testing software would not be complete without testing on mobile.  There are so many considerations when testing on mobile that I’ve decided to make this into a three part series: this week I’ll discuss the challenges of mobile testing, next week I’ll discuss mobile manual testing, and the following week I’ll talk about mobile automated testing. 

First, the challenges!  Below are twelve reasons why testing on mobile is difficult.  I thought it would be fun to illustrate just what can go wrong with mobile software by describing a bug I’ve found in each area.  Some of these bugs were found in the course of my testing career, and some were found on my personal device as an end user.
1. Carriers: Mobile application performance can vary depending on what carrier the device is using.  In the US, the two major carriers are Verizon and AT&T, and we also have smaller carriers like Sprint and T-Mobile.  In Europe some of the major carriers are Deutsche Telekom, Telefónica, Vodafone, and Orange; in Asia some of the carriers are China Mobile, Airtel, NTT, and Softbank.  When testing software on mobile, it’s important to consider what carriers your end users will be using, and test with those carriers.
Example bug: I once tested a mapping function within an application, and discovered that while the map would update based on my location when I was using one carrier, it did not update when I was using a different carrier.  This had something to do with the way the location was cached after a cell tower ping.  
2. Network or Wifi: Device users have the choice of using their applications while connected to the carrier’s network, or while on wifi.  They can even make a choice to change how they are connecting in the middle of using the application; or their connection can be cut completely if they go out of network range.  It’s important to test an application when connected to a network and when connected to wifi, and to see what happens when the connection changes or is lost completely.
Example bug: I have a wifi extender in my house. When I switch my phone’s wifi connection to use the extender’s IP, Spotify thinks I am offline.  I have to force-close the app and reopen it in order for Spotify to recognize that I am online.
3. Application Type: Mobile applications can be Web-based, native, or a hybrid of the two (developed like a Web app, but installed like a native app).  Some of your end users will choose not to use a native or hybrid app and will prefer to interact with your application in their phone’s browser.  There are also a variety of mobile browsers that could be used, such as Safari, Chrome, or Opera.  So it’s important to make sure that your web application works well on a variety of mobile browsers.
Example bug: There have been many times where I’ve gone to a mobile website and their “mobile-optimized” site doesn’t have the functionality I need.  I’ve had to choose to go to the full site, where all of the text is tiny and navigation is difficult.  
4. Operating System: Mobile applications will function differently depending on the operating system.  The two biggest operating systems are iOS and Android, and there are others, such as Windows Mobile and Blackberry.  It’s important to test on whatever operating systems your end users will be using, to make sure that all of the features in the application are supported in all systems.
Example bug: This is not a bug, but a key difference between Android and iOS: Android devices have a back button, while iOS devices do not.  Applications written for iOS need to have a back button included on each page so users will have the option to move back to the previous page.
5. Version: Every OS updates their version periodically, with new features designed to entice users to upgrade.  But not every user will upgrade their phone to the latest and greatest version.  It’s important to use analytics to determine which versions your users are most likely to have, and make sure that you are testing on those versions.  Also, every version update has the potential to create bugs in your application that weren’t there before.  
Example bug: Often when the version is updated on my phone, I can no longer use the speaker function when making phone calls.  I can hear the voice on the other end, but the caller can’t hear me.
6. Make: While all iOS devices are manufactured by Apple, Android devices are not so simple.  Samsung is one of the major Android device manufacturers, but there are many others, such as Huawei, Motorola, Asus, and LG.  It’s important to note that not every Android user will be using a Samsung device, and test on other Android devices as well.
Example bug: I once tested a tablet application where the keyboard function worked fine on some makes but not others. The keyboard simply wouldn’t pop up on those devices, so I wasn’t able to type in any form fields.
7. Model: Similar to versioning, new models of devices are introduced annually.  While some users will upgrade every year or two to the latest device, others will not.  Moreover, some devices will not be able to upgrade to the latest version of the OS, so they will be out-of-date in two ways.  Again, it’s important to find out what models your end users are using so you can make decisions about which models to test on and to support.
Example bug: This is not a bug, but it was an important consideration: when Apple released a new model of the iPad that would allow a signature control for users to sign their name in a form, the software I was testing included this feature.  But older versions of the iPad weren’t able to support this, so the application needed to account for this and not ask users on older versions to sign a document.
8. Tablet or Smartphone:  Many of your end users will be interacting with your application on a tablet rather than a smartphone.  Native applications will often have different app versions depending on whether they are designed for tablet or phone.  An application designed for smartphone can often be downloaded to a tablet, but an application designed for a tablet cannot be installed on a smartphone.  If a web app is being used, it’s important to remember that tablets and smartphones sometimes have different features.  Test your application on both tablets and phones.
Example bug: I have tested applications that worked fine on a smartphone and simply gave me a blank screen when I tried to test them on a tablet.  
9. Screen Size:  Mobile devices come in many, many different sizes.  While iOS devices fit into a few sizing standards, Android devices have dozens of sizes. Although it’s impossible to test every screen size, it’s important to test small, medium, large, and extra large sizes to make sure that your application draws correctly in every resolution.  
Example bug: I have tested applications on small phones where the page elements were overlapping each other, making it difficult to see text fields or click on buttons.
10. Portrait or Landscape: When testing on smartphones, it’s easy to forget to test in landscape mode, because we often hold our phones in a portrait position.  But sometimes smartphone users will want to view an application in landscape mode, and this is even more true for tablet users.  It’s important to not only test your application in portrait and landscape modes, but also to be sure to switch back and forth between modes while using the application.  
Example bug: I tested an application once that looked great in a tablet when it was in portrait mode, but all of the fields disappeared when I moved to landscape mode.  

11. In-App Integration: One of the great things about mobile applications is that they can integrate with other features of the device, such as the microphone or camera.  They can also link to other applications, such as Facebook or Twitter.  Whatever integrations the application supports, be sure to test them thoroughly.  
Example bug:  I tested an application where it was possible for a user to take a picture of an appliance in their home and add it to their home’s inventory.  When I chose to take a picture, I was taken to the camera app correctly, and I was able to take the picture, but after I took the picture I wasn’t returned to the application.  
12. Outside of App Integration: Even if your application isn’t designed to work with any other apps or features, it’s still possible that there are bugs in this area.  What happens if the user gets a phone call, a text, or a low battery warning while they are using your app?  It’s important to find out.
Example bug:  For a while, there was a bug on my phone where if my device timer went off while I was in a phone call, as soon as I got off the phone, the timer sounded and wouldn’t stop.  
I hope that the above descriptions and examples have shown just how difficult it is to test mobile applications!  It may seem overwhelming at first, but in my next two blog posts, I’ll discuss ways to make testing simpler.  Next week, we’ll be taking a look at writing mobile test plans and assembling a portfolio of physical devices to test on.  

Cross-Browser Testing

In today’s Agile world, with two-week sprints and frequent releases, it’s tough to keep on top of testing.  We often have our hands full with testing the stories from the sprint, and we rely on automation for any regression testing.  But there is a key component of testing that is often overlooked, and that is cross-browser testing.

Browser parity is much better than it was just a few years ago, but every now and then you will still encounter differences with how your application performs in different browsers.  Here are just a few examples of discrepancies I’ve encountered over the years:

  • A page that scrolls just fine in one browser doesn’t scroll at all in another, or the scrolling component doesn’t appear
  • A button that works correctly in one browser doesn’t work in another
  • An image that displays in one browser doesn’t display in another
  • A page that automatically refreshes in one browser doesn’t do so in another, leaving the user feeling as if their data hasn’t been updated
Here are some helpful hints to make sure that your application is tested in multiple browsers:
Know which browser is most popular with your users
Several years ago I was testing a business-to-business CRM-style application.  Our team’s developers tended to use Chrome for checking their work, and because of this I primarily tested in Chrome as well.  Then I found out that over 90% of our end users were using our application in Internet Explorer 9.  This definitely changed the focus of my testing!  From then on, I made sure that every new feature was tested in IE 9, and that a full regression pass was run in IE 9 whenever we had a release.  
Find out which browsers are the most popular with your users and be sure to test every feature with them.  This doesn’t mean that you have to do the bulk of your testing there; but with every new feature and every new release you should be sure to validate all of the UI components in the most popular browsers.
Resize your browsers

Sometimes a browser issue isn’t readily apparent because it only appears when the browser is using a smaller window.  As professional testers, we are often fortunate to be issued large monitors on which to test.  This is great, because it allows us to have multiple windows open and view one whole webpage at a time, but it often means that we miss bugs that end users will see.  
End users are likely not using a big monitor when they are using our software.  Issues can crop up such as: a vertical or horizontal scrollbar not appearing, or not functioning properly; text not resizing, so that it goes off the page and is not visible; or images not appearing or taking too much space on the page.  
Be sure to build page resizing into every test plan for every new feature, and build it into a regression suite as well.  Find out what the minimum supported window size should be, and test all the way down to that level, with variations in both horizontal and vertical sizes.  
Assign each browser to a different tester

When doing manual regression testing, an easy way to make sure that all browsers you want to test are covered is by assigning each tester a different browser.  For example, if you have three testers on your team (including yourself), you could run your regression suite in Chrome and Safari, another tester could run the suite in Firefox, and a third tester could run the suite in Internet Explorer and Edge.  The next time the suite is run, you can swap browsers, so that each browser will have a fresh set of eyes on it.  
Watch for changes after browser updates

It’s possible that something that worked great in a browser suddenly stops working correctly when a new version of the browser is released.  It’s also possible that a feature that looks great in the latest version of the browser doesn’t work in an older version.  Many browsers like Chrome and Firefox are set to automatically update themselves with every release, but some end users may have turned this feature off, so you can’t assume that everyone is using the latest version.  It’s often helpful if you have a spare testing machine to keep browsers installed with the next-to-last release.  That way you can identify any discrepancies that may appear between the old browser version and the new.  
Use visual validation in your automated tests

Generally automated UI tests focus on the presence of elements on a web page.  This is great for functional testing, but the presence of an element doesn’t tell you whether or not it is appearing correctly on the page.  This is where a visual testing tool like Applitools comes in.  Applitools coordinates with UI test tools such as Selenium to add a visual component to the test validation.  In the first test run, Applitools is “taught” what image to expect on a page.  Then in all subsequent runs, it will take a screenshot of the image and compare with the correct image that it has saved.  If the image fails to load or is displaying incorrectly, the UI test will fail.  Applitools is great for cross-browser testing, because you can train it to expect different results for each browser type, version, and screen size.  
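As a minimal sketch, adding Applitools to a Python Selenium test might look something like this (the app name, test name, URL, and API key are illustrative assumptions):

from selenium import webdriver
from applitools.selenium import Eyes

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder

try:
    # Start a visual test; Applitools keeps a separate baseline per browser, version, and viewport size
    eyes.open(driver, "My Web App", "Home page renders correctly")
    driver.get("https://www.example.com")  # illustrative URL
    # Compare a screenshot of the current page against the saved baseline
    eyes.check_window("Home page")
    eyes.close()  # fails the test if any checkpoint didn't match its baseline
finally:
    eyes.abort()  # cleans up if the test ended before close() was called
    driver.quit()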
Browser differences are something that can greatly impact the user experience!  If you build in manual and automated systems to check for discrepancies, you can easily ensure a better user experience with a minimum of extra work.  
You may have noticed that I didn’t discuss the mobile experience at all in this post.  That’s because next week, I’ll be focusing solely on the many challenges of mobile testing!

A Gentle Introduction to Session Hijacking

We all know that Session Hijacking is bad, and that we should protect ourselves and our applications against it.  But it’s difficult to get easy-to-understand information about what it is, and how to test for it.  In this post, I’ll first describe the different types of session hijacking, and then I’ll provide a walkthrough on how to test for session hijacking using the OWASP Juice Shop and Burp Suite.

Session Hijacking refers to when a malicious user gets access to authentication information, and uses it to impersonate another user or gain access to information they should not have.  There are several types of Session Hijacking:

Predictable Session Token- this happens when the access tokens being generated by an application follow some kind of pattern.  For example, if a token granted for a login for a user was “APP123”, and a token granted for a second user was “APP124”, a malicious user could assume that the next token granted would be “APP125”.  This is a pretty obvious vulnerability, and there are many tools in use today that create non-sequential tokens, so it’s not a very common attack.

Session Sniffing- this takes place when a malicious user finds a way to examine the web traffic that is being sent between a user and a web server, and copies the token for their own use.  This is classified as a passive attack, because the malicious user is not interfering with the operation of the application or with the request.

Client-Side Attack- in this scenario, the malicious user is using XSS to cause an application to display a user’s token.  Then they copy the token and use it.

Man-in-the-Middle Attack- this attack is similar to Session Sniffing in that the malicious user gains access to web traffic.  But this is an active attack, because the malicious user uses a tool such as Burp Suite or Fiddler to intercept the request and then manipulate it for their purposes.

Man-in-the-Browser Attack- this takes place when a malicious user has managed to get code into another user’s computer.  The implanted code will then send all request information to the malicious user.

Now we are going to learn how to test for Session Hijacking by using a Man-in-the-Middle Attack!  But first, it’s important to note that we are going to be intercepting requests by using the same computer where we are making the requests.  In a real Man-in-the-Middle Attack, the malicious user would be using some sort of packet-sniffing tool to gain access to requests that someone was making on a different computer.

The instructions provided here will be for Firefox.  Since I generally do my browsing (and blog-writing) on Chrome, I like to use a different browser for intercepting requests.  But Burp Suite will work with Chrome and Internet Explorer as well.

First, we’ll need to download and install Burp Suite.  You can install the free version, which is the Community Edition.  Don’t launch it yet.

Next, we’ll set up proxy settings in Firefox.  Click on the hamburger menu (a button with three horizontal lines) in the top right corner of the browser.  Choose “Preferences” and then scroll down to the bottom of the page and click on the “Settings” button.  Select the “Manual Proxy Configuration” radio button, and in the HTTP Proxy field, type “127.0.0.1”.  In the Port field, type “8080”.  Click the checkbox that says “Use this proxy server for all protocols”.  If there is anything in the “No proxy for” window, delete it.  Then click “OK”.

Now it’s time to start Burp Suite.  Start the application, then click “Next”, then click “Start Burp”.

Next we’ll set up the certificate that will give Firefox the permission to allow requests to be intercepted.  In Firefox, navigate to http://burp.  You’ll see a page appear that says “Burp Suite Community Edition”.  Click on the link in the upper right that says “CA Certificate”.  A popup window will appear; choose “Save File”.  The certificate will download, probably into your Downloads folder.

Click on the hamburger menu again, and choose “Preferences”.  In the menu on the left, choose “Privacy and Security”.  Scroll to the bottom of the page, and click the “View Certificates” button.  A popup window with a list of certificates will appear.  Click on the “Import” button, navigate to the “Downloads” folder, choose the recently downloaded certificate, and click the “Open” button.  A popup window will appear; check the “Trust this CA to identify websites” checkbox, and then click “OK”.  The certificate should now be installed.  Restart Firefox to ensure that the new settings are picked up.

Next, return to Burp Suite and turn the intercept function off.  We’re doing this so that we’re not intercepting web requests until we’re ready.  To turn off the intercept function, click on the Proxy tab in the top row of tabs, then click on the Intercept tab in the second row of tabs, then click on the “Intercept is on” button.  The button should now read “Intercept is off”.

Navigate to the Juice Shop, and create an account.  Once your account has been created, you’ll be prompted to log in.  Before you do so, go back to Burp Suite and turn the intercept function back on, using that “Intercept is off” button.

Now that Burp Suite is set to intercept your requests, log into the Juice Shop.  You will see nothing happen initially in your browser; this is because your request has gone to Burp Suite.  In Burp Suite, click on the Forward button; this will forward the intercepted request on to the server.  Continue to click the Forward button until the Search page of the Juice Shop has loaded completely.

In Burp Suite, click the HTTP History tab, in the second row of tabs.  You will see all the HTTP requests that were made when the Search page was loaded.  Scroll down until you see a POST request with the /rest/user/login endpoint.  Click on this request.  In the lower panel of Burp Suite, you will see the details of your request.  Notice that your username and password are displayed in clear text!  This means that if anyone were to intercept your login request, they would obtain your login credentials, and would then be able to log in as you at any time.

Next, return to the Juice Shop, and click on the cart symbol for the first juice listed in order to add the juice to your shopping cart.  Return to Burp Suite and click on the Intercept tab, and then click the Forward button to forward the request.  Continue to click the Forward button until no more requests are intercepted.

Return to the HTTP History tab in Burp Suite, and scroll down through the list of requests until you see a POST request with the api/BasketItems endpoint.  Right-click on this request and choose “Send to Repeater”.  This will send the request to the Repeater module where we can manipulate the request and send it again.  Return to the Intercept tab and turn the Intercept off.

Click on the Repeater tab, which is in the top row of tabs.  The request we intercepted when we added a juice to the shopping cart is there.  Let’s try sending the request again, by clicking on the Go button.  In the right panel of the page, we get a 400 response, with the message that the Product Id must be unique.  So, let’s return to the request in the left panel.  We can see that the body of the request is {"ProductId":1,"BasketId":"16","quantity":1} (the number will vary by your basket id).  Let’s change the ProductId from 1 to 2, and send the request again by clicking the Go button.  We can see in the response section that the request was successful.

Let’s return to the Juice Shop and see if we were really able to add an item to the cart by sending the request in Burp Suite.  Click on the Your Basket link.  You should see two juices in your cart.  This means that if someone were to intercept your request to add an item to your cart, they could manipulate the request and use it to add any other item they wanted to your cart.

How else might we manipulate this request?  Return to the Repeater tab in Burp Suite.  The request we intercepted is still there.  This time, let’s change the BasketId to a different number, such as 1.  Click Go to send the request again.  The response was successful, which means that we have just added a juice to someone else’s cart!

So, we can see that if a malicious user would be able to intercept a request to add an item to a shopping cart, they might be able to manipulate that request in all kinds of ways, to add unwanted items to the carts of any number of users.  They are able to do this because the request they intercepted has the Authorization needed to make more successful requests.  When you set up Burp Suite to intercept requests in your own application, you will be able to test for Session Hijacking vulnerabilities like this one.

This concludes my series of posts on Security Testing, although I will probably write new ones at some point in the future.  In the next few weeks, we’ll take a look at browser testing, mobile testing, and visual validation!