In today’s Agile world of two-week sprints and frequent releases, it’s tough to stay on top of testing. We often have our hands full testing the stories in the sprint, and we rely on automation for regression testing. But one key component of testing is often overlooked: cross-browser testing.
Browser parity is much better than it was just a few years ago, but every now and then you will still encounter differences in how your application behaves in different browsers. Here are just a few examples of discrepancies I’ve encountered over the years:
- A page that scrolls just fine in one browser doesn’t scroll at all in another, or the scrolling component doesn’t appear
- A button that works correctly in one browser doesn’t work in another
- An image that displays in one browser doesn’t display in another
- A page that automatically refreshes in one browser doesn’t do so in another, leaving the user feeling as if their data hasn’t been updated
Here are some helpful hints to make sure that your application is tested in multiple browsers:
Know which browser is most popular with your users
Several years ago I was testing a business-to-business CRM-style application. Our team’s developers tended to use Chrome for checking their work, and because of this I primarily tested in Chrome as well. Then I found out that over 90% of our end users were using our application in Internet Explorer 9. This definitely changed the focus of my testing! From then on, I made sure that every new feature was tested in IE 9, and that a full regression pass was run in IE 9 whenever we had a release.
Find out which browsers are the most popular with your users and be sure to test every feature with them. This doesn’t mean you have to do the bulk of your testing there, but with every new feature and every new release you should validate all of the UI components in the most popular browsers.
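If you don’t already have an analytics tool in place, even a quick pass over your web server’s access logs can give you a rough breakdown. Here’s a minimal Python sketch, assuming the common “combined” log format where the user-agent string is the last quoted field on each line; the log path and the simple family matching are illustrative only.

```python
# Rough tally of browser families from a web server access log.
# Assumes the "combined" log format, where the user-agent string
# is the last quoted field on each line; the path is hypothetical.
import re
from collections import Counter

UA_PATTERN = re.compile(r'"([^"]*)"\s*$')  # last quoted field on the line

def browser_family(user_agent: str) -> str:
    # Order matters: Chrome's UA string also contains "Safari",
    # and Edge's also contains "Chrome".
    if "Edg" in user_agent:
        return "Edge"
    if "Trident" in user_agent or "MSIE" in user_agent:
        return "Internet Explorer"
    if "Firefox" in user_agent:
        return "Firefox"
    if "Chrome" in user_agent:
        return "Chrome"
    if "Safari" in user_agent:
        return "Safari"
    return "Other"

counts = Counter()
with open("access.log") as log:  # hypothetical log path
    for line in log:
        match = UA_PATTERN.search(line)
        if match:
            counts[browser_family(match.group(1))] += 1

for browser, hits in counts.most_common():
    print(f"{browser}: {hits}")
```

Even a crude count like this would have surfaced the IE 9 surprise from the story above long before release day.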
Resize your browsers
Sometimes a browser issue isn’t readily apparent because it only appears when the browser window is small. As professional testers, we are often fortunate to be issued large monitors on which to test. This is great, because it allows us to have multiple windows open and view a whole webpage at once, but it also means that we can miss bugs that end users will see.
End users are likely not using a big monitor when they use our software. Issues can crop up such as a vertical or horizontal scrollbar not appearing or not functioning properly; text not resizing, so that it runs off the page and isn’t visible; or images not appearing or taking up too much space on the page.
Be sure to build page resizing into every test plan for every new feature, and build it into a regression suite as well. Find out what the minimum supported window size should be, and test all the way down to that level, with variations in both horizontal and vertical sizes.
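If you automate any of your UI checks, the resizing is easy to script. Here’s a short Selenium sketch in Python; the URL, the element id, and the list of sizes (down to an assumed 800x600 minimum) are placeholders for your own application’s supported sizes.

```python
# Smoke-check a page at several window sizes with Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder sizes, down to an assumed minimum supported size of 800x600.
SIZES = [(1920, 1080), (1366, 768), (1024, 768), (800, 600)]

driver = webdriver.Chrome()
try:
    for width, height in SIZES:
        driver.set_window_size(width, height)
        driver.get("https://example.com/dashboard")  # placeholder URL
        # A simple layout check: the main content should still be visible.
        content = driver.find_element(By.ID, "main-content")  # hypothetical id
        assert content.is_displayed(), f"Main content hidden at {width}x{height}"
finally:
    driver.quit()
```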
Assign each browser to a different tester
When doing manual regression testing, an easy way to make sure that all the browsers you want to test are covered is to assign each tester a different browser. For example, if you have three testers on your team (including yourself), you could run your regression suite in Chrome and Safari, another tester could run the suite in Firefox, and a third tester could run the suite in Internet Explorer and Edge. The next time the suite is run, you can swap browsers so that each browser gets a fresh set of eyes on it.
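If you want the swap to be systematic rather than ad hoc, a few lines of code can generate the rotation. This is a trivial sketch with made-up names; the point is just that shifting the browser groups by one slot each cycle guarantees every tester eventually covers every browser.

```python
# Rotate browser assignments between regression cycles.
# Names and browser groupings here are examples only.
testers = ["You", "Tester B", "Tester C"]
browser_groups = [["Chrome", "Safari"], ["Firefox"], ["Internet Explorer", "Edge"]]

def assignments(cycle: int) -> dict:
    # Shift the groups by the cycle number so assignments rotate each run.
    offset = cycle % len(browser_groups)
    rotated = browser_groups[offset:] + browser_groups[:offset]
    return dict(zip(testers, rotated))

for cycle in range(3):
    print(f"Cycle {cycle + 1}: {assignments(cycle)}")
```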
Watch for changes after browser updates
It’s possible that something that worked great in a browser suddenly stops working correctly when a new version of the browser is released. It’s also possible that a feature that looks great in the latest version of the browser doesn’t work in an older version. Many browsers, like Chrome and Firefox, are set to update themselves automatically with every release, but some end users may have turned this feature off, so you can’t assume that everyone is on the latest version. If you have a spare testing machine, it’s helpful to keep the next-to-last release of each browser installed on it. That way you can identify any discrepancies between the old browser version and the new one.
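For automated runs, it’s also worth recording exactly which browser version you tested against, so a sudden failure can be traced back to a browser update. With Selenium’s Python bindings, the driver’s capabilities expose this; the snippet below is a minimal sketch.

```python
# Record the browser name and version at the start of an automated run,
# so failures can later be correlated with a browser update.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    caps = driver.capabilities
    name = caps.get("browserName", "unknown")
    # W3C-compliant drivers report "browserVersion"; older ones used "version".
    version = caps.get("browserVersion", caps.get("version", "unknown"))
    print(f"Regression run against {name} {version}")
    # ... run the actual regression tests here ...
finally:
    driver.quit()
```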
Use visual validation in your automated tests
Automated UI tests generally focus on the presence of elements on a web page. This is great for functional testing, but the presence of an element doesn’t tell you whether it is rendering correctly on the page. This is where a visual testing tool like Applitools comes in. Applitools works with UI test tools such as Selenium to add a visual component to the test validation. On the first test run, Applitools is “taught” what the page should look like by saving a baseline image. On every subsequent run, it takes a screenshot and compares it with the saved baseline. If an image fails to load or displays incorrectly, the UI test will fail. Applitools is great for cross-browser testing, because you can maintain a separate baseline for each browser type, version, and screen size.
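Here’s a rough sketch of what that looks like with Applitools’ Selenium SDK for Python (the eyes-selenium package). The API key, app and test names, and URL are placeholders, and method names may vary slightly between SDK versions.

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder

driver = webdriver.Chrome()
try:
    # Start a visual test: app name, test name, and viewport size.
    driver = eyes.open(driver, "My App", "Login page", {"width": 1024, "height": 768})
    driver.get("https://example.com/login")  # placeholder URL
    # Capture the window and compare it against the saved baseline.
    eyes.check("Login window", Target.window())
    # close() fails the test if any visual differences were found.
    eyes.close()
finally:
    driver.quit()
    eyes.abort()  # ends the test cleanly if close() was never reached
```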
Browser differences can greatly impact the user experience! If you build manual and automated checks for discrepancies into your process, you can ensure a better user experience with a minimum of extra work.
You may have noticed that I didn’t discuss the mobile experience at all in this post. That’s because next week, I’ll be focusing solely on the many challenges of mobile testing!
Thank you Kristin for writing this up, and I totally agree on all points. Especially as a testing team, we should be aware of which browsers real users are on in production and plan our test activities accordingly. One thing we do at our company is use a third-party tool called ‘Pendo’ (which basically helps us put help content in our web app for users). It also gives us information about user activity, like which application pages are most commonly used in production, which browsers are used the most, their metrics details, etc. That helps us understand things better and plan our testing activities… 🙂
Thanks for sharing this suggestion! 🙂