There are a great many articles, blog posts, and presentations that discuss automation frameworks and strategies. But even the most robust automation framework won’t eliminate the need to do exploratory testing. There will always be situations where we need to generate a large amount of text to test a text field or where we need to encode a string in HTML to test for cross-site scripting. In this week’s post, I share fifteen of my favorite free tools that make testing faster and easier.
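The tools in this post are standalone apps and websites, but in a pinch, the two examples above can even be covered with a few lines of Python’s standard library. Here’s a quick sketch (not one of the fifteen tools, just an illustration):

```python
# Quick stand-ins for two common exploratory-testing needs, using only the
# Python standard library. (A sketch -- the tools in this post do far more.)
import html

# A 10,000-character string for probing text-field limits
long_text = "A" * 10_000

# An HTML-encoded script tag for cross-site scripting checks
xss_probe = html.escape("<script>alert('xss')</script>")

print(len(long_text))  # 10000
print(xss_probe)       # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```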
Six Steps to Writing an Effective Test Report
As testers, we know how important it is to test our software thoroughly and document our findings meticulously. But all of our talent will be useless if we can’t effectively communicate our test results to others! If your test results are written in a giant, poorly organized spreadsheet with tiny text and lots of unnecessary details, even the most dedicated test manager will find her eyes glazing over with boredom when she looks at it. In this post, I’ll describe six steps to take to make sure that you can communicate your findings to others efficiently and effectively.
Rules | Amy | Bob | Carol | Doug |
Amy- small blue balls; Doug- large green balls | small blue ball; small blue ball; small blue ball; small blue ball | large red ball; small orange ball; large yellow ball; small purple ball | large purple ball; small green ball; large yellow ball; small red ball | large green ball; large green ball; large green ball; large green ball |
Bob- large red balls; Carol- small yellow balls | large orange ball; small purple ball; large yellow ball; small green ball | large red ball; large red ball; large red ball; large red ball; large red ball | small yellow ball; small yellow ball; small yellow ball; small yellow ball | small blue ball; small green ball; large purple ball; small orange ball |
Rules | Rules respected? |
Amy- small blue balls; Doug- large green balls | Yes |
Bob- large red balls; Carol- small yellow balls | Yes |
Number of Rules | Pass/Fail |
0 | Pass |
1 | Pass |
2 | Pass |
3 | Pass |
4 | Pass |
Test Case | Result |
None of the children have rules | The balls are sorted evenly amongst the children |
One child has a rule | The child’s rule is respected |
Two children have rules | The two children’s rules are respected |
Three children have rules | The three children’s rules are respected |
Four children have rules | None of the balls are sorted |
Rules | Rules respected? |
Amy- small blue; Bob- large blue; Carol- small purple | Amy gets only small blue balls, and Bob gets only large blue balls, but Carol gets balls other than the small purple balls |
Amy- large blue; Bob- small purple; Carol- small yellow | Amy gets only large blue balls, Bob gets only small purple balls, and Carol gets only small yellow balls |
Rules | Amy | Bob | Carol |
Amy- small blue; Bob- large blue; Carol- small purple | PASS | PASS | FAIL |
Amy- large blue; Bob- small purple; Carol- small yellow | PASS | PASS | PASS |
Rules | Result |
A-SB; B-LO; C-L; D-S | A-Y; B-Y; C-Y; D-N |
A-L; B-S; C-Y; D-P | A-Y; B-N; C-Y; D-Y |
A-LY; B-L; C-S; D-SG | A-Y; B-Y; C-N; D-Y |
This report conveys exactly the same information:
Test One | Amy- small blue | Bob- large orange | Carol- large | Doug- small |
Rule respected? | Yes | Yes | Yes | No |
Test Two | Amy- large | Bob- small | Carol- yellow | Doug- purple |
Rule respected? | Yes | No | Yes | Yes |
Test Three | Amy- large yellow | Bob- large | Carol- small | Doug- small green |
Rule respected? | Yes | Yes | No | Yes |
It’s easy to see exactly what rules each child was given for each test. Through the use of color, the report demonstrates very clearly where the bug is: whenever a child is given a rule that they should get only small balls, that rule is not respected.
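If you’d like to generate this kind of color-coded report automatically, a few lines of Python can produce an HTML version. This is just a minimal sketch; it assumes your results live in a simple list of (rule, respected) pairs, so adapt the data structure to your own report:

```python
# A minimal sketch that renders pass/fail cells in green/red HTML.
# The results list below is an assumed data structure, not a real API.
results = [
    ("Amy- small blue", True), ("Bob- large orange", True),
    ("Carol- large", True), ("Doug- small", False),
]

rows = ["<tr><th>Rule</th><th>Respected?</th></tr>"]
for rule, respected in results:
    color = "#c8e6c9" if respected else "#ffcdd2"  # green = pass, red = fail
    rows.append(f'<tr><td>{rule}</td>'
                f'<td style="background-color:{color}">'
                f'{"Yes" if respected else "No"}</td></tr>')

with open("test_report.html", "w") as f:
    f.write("<table border='1'>" + "".join(rows) + "</table>")
```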
Conclusion:
In today’s fast-paced world, we all have vast amounts of information coming to us every day. If we are going to make a difference with our testing and influence decision-making where we work, we need to be able to convey our test results in ways that clearly show what is going on with our software and what should be done to improve it.
How to Reproduce a Bug
Have you ever seen something wrong in your application, but you haven’t been able to reproduce it? Has a customer ever reported a bug with a scenario that you just couldn’t recreate? It is tempting to just forget about these bugs, but chances are if one person has seen the issue, other people will see it as well. In this post I’ll discuss some helpful hints for reproducing bugs and getting to the root cause of issues.
Gather Information
The first thing to do when you have a bug to reproduce is to gather as much information as you can about the circumstances of the issue. If it’s a bug that you just noticed, think about the steps that you took before the bug appeared. If it’s a bug that someone else has reported, find out what they remember about the steps they took, and ask for details such as their operating system, browser, and browser version.
One Step at a Time
Next, see if you can follow the steps that you or the reporter of the bug took. If you can reproduce the bug right away, you’re in luck! If not, try making one change at a time to your steps, and see if the bug will appear. Don’t just thrash around trying to reproduce the issue quickly; if you’re making lots of disorganized changes, you might reproduce the bug and not know how you did it. Keep track of the changes you made so you know what you’ve tried and what you haven’t tried yet.
Logs and Developer Tools
Application logs and browser developer tools can be helpful in providing clues to what is going on behind the scenes in an application. A browser’s developer tools can generally be accessed in the menu found in the top right corner of the browser; in Chrome, for example, you click on the three-dot menu, then choose “More Tools”, then “Developer Tools”. This will open a window at the bottom of the page where you can find information such as errors logged in the console or what network requests were made.
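If you’d rather capture the console output programmatically while trying to reproduce a bug, Selenium can pull Chrome’s console log for you. A sketch, assuming Selenium 4 with Chrome (the URL is a placeholder):

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/page-with-the-bug")  # placeholder URL
for entry in driver.get_log("browser"):  # console errors, warnings, etc.
    print(entry["level"], entry["message"])
driver.quit()
```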
Factors to Consider When Trying to Reproduce a Bug
- User- what user was utilized when the bug was seen? Did the user have a specific permission level? You may be dealing with a bug that is only seen by administrators or by a certain type of customer.
- Authentication- was the user logged in? There may be a bug that only appears when the user is not authenticated, or that only appears when the user is authenticated.
- State of the data- what kind of data does the user have? Can you try reproducing with exactly the same data? The bug might only appear when a user has a very long last name, or a particular type of image file.
- Configuration issues- There may be something in the application that isn’t set up properly. For example, a user who isn’t getting an email notification might not be getting it because email functionality is turned off for their account. Check all of the configurable settings in the application and try to reproduce the issue with exactly the same configuration.
- Actions taken before the issue- Sometimes bugs are caused not by the current state where they are seen, but by some event that happened before the current state. For example, if a user started an action that used a lot of memory, such as downloading a very large file, and then continued on to other activities while the file was downloading, an error caused by running out of memory might affect their current activity.
- Back button- the Back button can be the culprit in all kinds of mysterious bugs. If a user navigates to a page through the Back button, the state of the data on the page might be different from what it would be through standard forward navigation.
- Caching- caching can result in unexpected behavior as well. For example, it might appear as if data is unchanged when it in fact has been changed. If a cache never expires or takes too long to expire, the state of the data can be very different from what is displayed.
- Race conditions- these issues are very difficult to pin down. Stepping through the code probably won’t help, because when the program is run one step at a time the problem doesn’t occur. The best way to determine if there is a race condition is to run your tests several times and document the inconsistent behavior. You can also try clicking on buttons or links before a page has loaded in order to speed up input, or throttling your internet connection in order to slow down input (see the sketch after this list).
- Browser/Browser Version- Browsers are definitely more consistent in their behavior than they used to be, and most browsers are now updated to the latest version automatically, but it’s still possible to find bugs that only appear in certain browsers or versions. If your end user is using IE 8 on an old Windows desktop, for example, it’s extremely likely that your application will behave differently for them.
- Browser Size- if a customer is saying that they don’t see a Save button or a scrollbar in their browser window, ask them to run a browser size tool in another tab on their browser. Set your browser to be the same size as theirs, and see if you now have the same problem (this is also covered in the sketch after this list).
- Machine or Device- Mobile devices are highly variable, so it’s possible that a user is seeing an issue on an Android device that you are not seeing on an iOS device, or that a user is seeing a problem on a Motorola device when you are not seeing it on a Samsung. Laptops and desktop computers are less variable, but it is still possible that there is an issue that a Mac owner is experiencing that you don’t see in Windows. Tools like Browserstack, Perfecto, and Sauce Labs can be helpful for diagnosing an issue on a machine or device that you don’t own.
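Two of these factors, browser size and connection speed, are easy to reproduce in an automated way. Here’s a sketch using Selenium with Chrome; note that set_network_conditions is Chrome-specific, and the URL is a placeholder:

```python
from selenium import webdriver

driver = webdriver.Chrome()

# Match the customer's reported browser size exactly
driver.set_window_size(1366, 768)

# Throttle the connection to flush out race conditions
driver.set_network_conditions(
    offline=False,
    latency=500,                     # extra latency in ms
    download_throughput=256 * 1024,  # bytes per second
    upload_throughput=256 * 1024,
)

driver.get("https://example.com/page-with-the-bug")  # placeholder URL
driver.quit()
```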
What to Test When There’s Not Enough Time to Test
In last week’s post, I discussed the various things we should remember to test before we consider our testing “done”. This prompted a question from a reader: “How can I test all these things when there is very limited time for testing?” In today’s agile world, we often don’t have as much time as we feel we need to fully test our software’s features. Gone are the days when testers had weeks or months to test the upcoming release. Because software projects usually take longer than estimated, we may be asked to test things at the last minute, just a day or two before the release. Today I’ll discuss what to test when there’s not enough time to test, and I’ll also suggest some tips to avoid this problem in the first place.
The Bare Minimum: What to Test When There’s Almost No Time
Let’s use our hypothetical Superball Sorter as an example. For those who haven’t read my series of posts on this feature, the feature takes a number of superballs and sorts them among four children using a set of defined rules. What would I do if I was asked to test this feature for the first time, and it was due to be released tomorrow?
1. Test the most basic case
The first thing I would do would be to test the most basic use case of the feature. In this case, it would be running the Superball Sorter with no rules at all. I would test this first because it would give me a very clear indication whether the feature was working at all. If it wasn’t, I could raise the alarm right away, giving the developer more time to fix it.
2. Test the most typical customer scenario
In the case of the Superball Sorter, let’s say that we’ve been told by the product owner that in the most typical scenario, two of the children will be assigned a rule, and the rule will be by size rather than color. So the next test I would run would be to assign one child a rule that she only accepts large balls, and another child a rule that he only accepts small balls. I would run the sorter with these rules and make sure that the rules were respected.
3. Run a basic negative test
We all know how frustrating it can be to make a mistake when we try to do an activity online, such as filling out a form, and we have an error on the page but we haven’t been given any clear message about what it is. So the next thing I would test would be to make a common mistake that a user would make and ensure that I got an appropriate error message. For the Superball Sorter, I would set four rules that resulted in some balls not being able to be sorted, and I would verify that I got an error message that told me this was the case.
4. Test with different users or accounts
Just because something is working correctly for one user or account doesn’t mean it’s going to work correctly for everyone! Developers sometimes check their work with only one test user if they are in a big hurry to deliver their feature. So I would make sure to run the Superball Sorter with at least two users, and I would make sure that those users were different from the one the developer used.
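If the Superball Sorter exposed a programmatic interface, these four checks might look something like the pytest sketch below. Everything imported here (sort_balls, Rule, UnsortableBallsError, make_test_balls) is a hypothetical name standing in for whatever your feature actually exposes, as is the as_user parameter:

```python
import pytest
from superball_sorter import (  # hypothetical module -- use your feature's real API
    Rule, UnsortableBallsError, make_test_balls, sort_balls,
)

CHILDREN = ["Amy", "Bob", "Carol", "Doug"]
BALLS = make_test_balls()  # a representative mix of sizes and colors

def test_most_basic_case_no_rules():
    """1. With no rules at all, every ball should be handed out."""
    result = sort_balls(BALLS, rules=[])
    assert sum(len(result[child]) for child in CHILDREN) == len(BALLS)

def test_typical_scenario_two_size_rules():
    """2. The most typical customer scenario: two children with size rules."""
    result = sort_balls(BALLS, rules=[Rule("Amy", size="large"),
                                      Rule("Bob", size="small")])
    assert all(ball.size == "large" for ball in result["Amy"])
    assert all(ball.size == "small" for ball in result["Bob"])

def test_unsortable_balls_give_clear_error():
    """3. Basic negative test: four rules that leave some balls unsortable."""
    too_strict = [Rule(child, size="large", color="red") for child in CHILDREN]
    with pytest.raises(UnsortableBallsError):
        sort_balls(BALLS, rules=too_strict)

@pytest.mark.parametrize("user", ["qa_user_1", "qa_user_2"])
def test_works_for_more_than_one_user(user):
    """4. Repeat a simple scenario as two different users."""
    result = sort_balls(BALLS, rules=[Rule("Amy", size="large")], as_user=user)
    assert all(ball.size == "large" for ball in result["Amy"])
```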
After running these four tests, I would be able to say with some certainty that:
- the feature works at its most basic level
- a typical customer scenario will work correctly
- the customer will be notified if there is an error
- the feature works for more than one user
The One Question to Ask to Improve Your Testing Skills
We’ve all been in this situation: we’ve tested something, we think it’s working great, and after it goes to Production a customer finds something obvious that we missed. We can’t find all the bugs 100% of the time, but we can increase the number of bugs we find with this one simple question: “What haven’t we tested yet?”
I have asked this question of myself many times; I make a habit of asking it before I move any feature to Done. It almost always results in my finding a bug. The conversation with myself usually goes like this:
Good Tester Me: “What haven’t we tested yet? Well, we haven’t tested with an Admin user.”
Lazy Tester Me: “Why should that make a difference? This feature doesn’t have anything to do with user privileges.”
Good Tester Me: “That may be the case, but we should really test it anyway, to be thorough.”
Lazy Tester Me: “But I’ve been testing this feature ALL DAY! I want to move on to something else.”
Good Tester Me: “You know that we always find the bugs in the last things we think of to test. TEST IT!”
And I’m always happy I did. Even if I don’t find a bug, I have the peace of mind that I tested everything I could think of, and I’ve gained valuable product knowledge that I can share with others.
When I ask myself this question, here are twelve follow-up questions I ask:
Did I test with more than one user?
It seems so obvious, but we are often so embroiled in testing a complicated feature that we don’t think to test it with more than our favorite test user. Even something as simple as the first letter of a last name could be enough to trigger different behavior in a feature.
Did I test with different types of users?
Users often come with different privileges. When I was first starting out in testing, I would often test with an admin user, because it was the easiest thing to do. Finding out I’d missed a bug where a regular user didn’t have access to a feature they should have taught me a valuable lesson!
Did I test with more than one account/company?
For those of us testing B2B applications, we often have customers from different accounts or companies. I missed a bug once where the company ID started with a 0, and the new feature hadn’t been coded to handle that.
Did I test this on mobile?
Anyone who has ever tested an application on mobile or tablet knows that it can behave very differently from what is seen on a laptop. You don’t want your users to be unable to click a “Submit” button because it’s off-screen and can’t be accessed.
Did I test this on more than one browser?
Browsers have more parity in behavior than they did a few years ago, but even so, you will be occasionally surprised by a link that will work in some browsers but not others.
Did I try resizing the browser?
I often forget to do this. One thing I’ve discovered when resizing is that the scroll bar can disappear, making it impossible for users to scroll through records.
Did I test with the Back button?
This seems so simple, but a lot of bugs can crop up here! Also be sure to test the Cancel button on a form.
Is this feature on any other pages, and have we tested on those pages?
This one recently tripped up my team. We forgot to test our feature on a new page that’s currently in beta. Be sure to mentally run through all the pages in your application and ask yourself if your feature will be on those pages. If you have a really large application, you may want to ask testers from other teams in your organization.
Did I test to make sure that this feature works with other features?
Always think about combining your features. Will your search feature work with your notification feature? Will your edit feature work with your sorting feature? And so on.
Have I run negative tests on this feature?
This is one that’s easy to forget when you are testing a complicated feature. You may be so focused on getting your application configured correctly for testing that you don’t think about what happens when bad data is passed in. For UI tests, be sure to test the limits of every text field, and verify that the user gets appropriate error messages. For API tests, be sure to pass in invalid data in the test body, and try using bad query parameters. Verify that you get 400-level responses for invalid requests rather than a generic 500 response.
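For the API side of this, a couple of negative tests might look like the following sketch using pytest and requests; the endpoint and payload are placeholders for whatever your API actually accepts:

```python
import requests

BASE = "https://example.com/api"  # placeholder endpoint

def test_invalid_body_returns_4xx_not_500():
    bad_payload = {"email": "not-an-email", "age": -5}
    response = requests.post(f"{BASE}/users", json=bad_payload)
    # A well-behaved API rejects bad input explicitly with a 400-level code
    assert 400 <= response.status_code < 500

def test_bad_query_parameter_returns_4xx():
    # ...and doesn't fall over with a generic 500 on a bad query parameter
    response = requests.get(f"{BASE}/users", params={"page": "zero"})
    assert 400 <= response.status_code < 500
```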
Have I run security tests on this feature?
It’s a sad fact of life that not all of our end users will be legitimate users of our application. There will be bad actors looking for security flaws to exploit. This is especially true for financial applications and ones with a lot of personally identifiable information (PII). Protect your customers by running security scans on your features.
Have I checked the back-end database to make sure that data is being saved as I expected?
When you fill out and submit a form in your application, a success message is not necessarily an indication that the data’s been saved. There could be a bug in your software that causes an error when writing to the database. Even if the data has been saved, it could have been saved inaccurately, or there may be an error when retrieving the data. For example, a phone number might be saved with parentheses and dashes, but when the data is retrieved the front-end doesn’t know how to parse those symbols, so the phone number isn’t displayed. Always check your back-end data for accuracy.
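Using the phone number example, a back-end check might look like this sketch. It assumes a SQL database you can reach from your test environment; the database file, table, and column names are invented for illustration:

```python
import sqlite3  # substitute your real database driver

def fetch_saved_phone(customer_id):
    conn = sqlite3.connect("app.db")  # placeholder database
    try:
        row = conn.execute(
            "SELECT phone FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()

# After submitting "(555) 123-4567" through the form:
saved = fetch_saved_phone(42)
assert saved is not None, "Success message shown, but nothing was saved!"
assert saved == "5551234567", f"Saved in a format the front end can't parse: {saved}"
```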
How is the end user going to use this feature? Have I run through that scenario?
It’s so easy to get wrapped up in our day-to-day tasks of testing, writing automation, and working with our team that we forget about the end user of our application. You should ALWAYS understand how your user will be using your feature. Think about what journey they will take. For example, in an e-commerce app, if you’re testing that you can pay with PayPal, make sure you also run through a complete journey where you add a product to your cart, go to the checkout page, and then pay with PayPal.
Missing a bug that then makes it to Production can be humbling! But it happens to everyone. The good news is that every time this happens, we learn a new question to ask ourselves before we stop testing, making it more likely that we’ll catch that bug next time.
What questions do you ask yourself before you call a feature Done? Let me know in the comments section!
Five Strategies for Managing Test Automation Data
Has this ever happened to you? You arrive at work in the morning to find that many of your nightly automated tests have failed. Upon investigation, you discover that your test user has been edited or deleted. Your automation didn’t find a bug, and your test isn’t flaky; it simply didn’t work because the data you were expecting wasn’t there. In this week’s post, I’ll take a look at five different strategies for managing test data, and when you might use each.
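As a preview of the kind of strategy I mean, here is a sketch of one approach: creating uniquely named data for each run instead of relying on a shared, long-lived test user. The create_user and delete_user helpers are hypothetical stand-ins for your application’s real setup API:

```python
import uuid
import pytest
from my_test_helpers import create_user, delete_user  # hypothetical helpers

@pytest.fixture
def fresh_user():
    """Create a brand-new user for each test instead of sharing one."""
    # A unique name means no other test (or person) can have edited this user
    username = f"qa-user-{uuid.uuid4().hex[:8]}"
    user = create_user(username=username)
    yield user
    delete_user(user)  # clean up so test data doesn't accumulate
```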
What to Put in a Smoke Test
The term “smoke test” is usually used to describe a suite of basic tests that verify that all the major features of an application are working. Some use the smoke test to determine whether a build is stable and ready for further testing. I usually use a smoke test as the final check in a deploy to production. In today’s post, I’ll share a cautionary tale about what can happen if you don’t have a smoke test. Then I’ll continue that tale and talk about how smoke tests can go wrong.
Early in my testing career, I worked for a company that had a large suite of manual regression tests, but no smoke test. Each software release was difficult, because it was impossible to run all the regression tests in a timely fashion. With each release, we picked which tests we thought would be most relevant to the software changes and executed those tests.
One day, in between releases, we heard that there had been a customer complaint that our Global Search feature wasn’t working. We investigated and found that the customer was correct. We investigated further and discovered that the feature hadn’t worked in weeks, and none of us had noticed. This was quite embarrassing for our QA team!
To make sure that this kind of embarrassment never happened again, one of our senior QA engineers created a smoke test to run whenever there was a release to production. It included all the major features, and could be run fairly quickly. We felt a lot better about our releases after that.
However, the tester who created the test kept adding test steps to the smoke test. Every time a new feature was created, a step was added to the smoke test. If we found a new bug in a feature, even if it was a small one, a step checking for the bug was added to the smoke test. As the months went on, the smoke test took longer and longer to execute and became more and more complicated. Eventually the smoke test itself took so much time that we didn’t have time to run our other regression tests.
Clearly there needs to be a happy medium between having no smoke test at all, and having one that takes so long to run that it’s no longer a smoke test. In order to decide what goes in a smoke test, I suggest asking these three questions:
1. What would absolutely embarrass us if it were broken in this application?
Let’s use an example of an e-commerce website to consider this question. For this type of website, it would be embarrassing or even catastrophic if a customer couldn’t:
- search for an item they were looking for
- add an item to their cart
- log in to their account
- edit their information
- add an item to their wish list
- read product reviews
- view their recommendations
How to Log a Bug
Last week, we talked about all the things you should check before you log a bug, in order to make sure that what you are seeing is really a bug. Once you have run through all your checks and you are sure you have a bug, it’s time to log it. But just throwing a few sentences in your team’s bug-tracking software is not a good idea! The way you log a bug and the details that you include can mean the difference between a bug being prioritized or left on the backlog; or the difference between a developer being able to find the problem, or closing the bug out with a “cannot repro” message. In this post, I’ll outline some best practices for logging a bug, with examples of what to do and what not to do.
Let’s take an example bug from the hypothetical Superball Sorter that I described a few weeks ago. Imagine that when you tested the feature, you discovered that if three of the children have a rule where they accept only large balls of some color, the small purple ball is never sorted.
Here are the components of a well-logged bug:
Title: The title of the bug should begin with the area of the application it is referring to. For example, if a bug was found in the Contacts section of your application, you could begin the title with “Contacts”. In this case, the area of the application is the Superball Sorter. After the application area, you should continue the title with a concise statement of the problem.
RIGHT: Superball Sorter: Small purple ball is not sorted when three children have large ball rules
WRONG: Small purple ball not sorted
While the second example gives at least a clue as to what the bug is about, it will be hard to find among dozens of other bugs later, when you might try to search by “Superball”. Moreover, it doesn’t state what the conditions are when the ball is not sorted, so if there is another bug found later where this same ball isn’t sorted, there could be confusion as to which bug is which.
Description: The first sentence of the bug should describe the issue in one sentence. This sentence can provide a bit more detail than the title. I often start this sentence with “When”, as in “when I am testing x, then y happens”.
RIGHT: When three children have sorting rules involving large balls, the small purple ball is not sorted.
WRONG: Doug doesn’t get the small purple ball
There are a number of things wrong with this second example. Firstly, the issue happens regardless of which three children have rules where they get only large balls, so referring to Doug here could be misleading. Secondly, the statement doesn’t describe what rules have been set up. A developer could read this sentence and assume that the small purple ball is never sorted, regardless of what rules are set up.
Environment and browser details: These can be optional if it’s assumed that the issue was found in the QA environment and if the issue occurs on all browsers. But if there’s any chance that the developer won’t know what environment you are referring to, be sure to include it. And if the issue is found on one browser but not others, be sure to mention that detail.
RIGHT: This is happening in the Staging environment, on the Edge browser only
Steps to reproduce: The steps should include any configuration and login information, and clearly defined procedures to display the problem.
RIGHT:
1. Log in with the following user:
username: foobar
password: mebs47
2. Assign Amy a rule where she gets large red balls only
Assign Bob a rule where he gets large orange balls only
Assign Carol a rule where she gets large yellow balls only
3. Create a set of superballs to be sorted, and ensure that there is at least one small purple ball
4. Run the superball sorter
5. Check each child’s collection for the small purple ball or balls
WRONG:
Everyone has a rule but Doug, and no one is getting the small purple ball
The above example doesn’t provide nearly enough information. The developer won’t know the login credentials for the user, and won’t know that the three rules should be for large balls.
ALSO WRONG:
1. Open the application
2. Type “foobar” in the username field
3. Type “mebs47” in the password field
4. Click the login button
5. Go to the Superball Sorter rules page
6. Click on Amy’s name
7. Click on the large ball button
8. Click on the red ball button
9. Click the save button
10. Click on Bob’s name
etc. etc. etc.
Before You Log That Bug…
Have you heard the ancient fable, “The Boy Who Cried Wolf”? In the tale, a shepherd boy repeatedly tricks the people of his village by crying out that a wolf is about to eat his sheep. The villagers run to his aid, only to find that it was a prank. One day, the boy really does see a wolf. He cries for help, but none of the villagers come because they are convinced that it must be another trick.
Similarly, we do not want to be “The Tester Who Cried Bug”. As software testers, if we repeatedly report bugs that are really user error, our developers won’t believe us when we really find a bug. To keep this from happening, let’s take a look at the things we should check before we report a bug. These tasks fall into two categories: checking for user error and gathering information for developers.
Check for User Error
- The first thing you should always do is verify that the code you should be testing has actually been deployed. When I was a new tester, I constantly made this mistake. It’s such a waste of time to keep investigating an issue with code when the problem is actually that the code isn’t there!
- Are you testing in the right environment? When you have multiple environments to test in, and they all look similar, it’s easy to think you are testing in one environment when you are actually in another. Take a quick peek at your URL if you are testing a web app, or at your build number if you are testing a mobile app.
- Do you understand the feature? In a perfect world, we would all have great documentation and really well-written acceptance criteria. In the real world, this often isn’t the case! Check with your developer and product owner to make sure that you understand exactly how the feature is supposed to behave. Maybe you misunderstood something when you started to test.
- Have you configured the test correctly? Maybe the feature only works when certain settings are enabled. Think about what those settings are and go back and check them.
- Are you testing with the right user? Maybe this feature is only available to admin users or paid users. Verify the criteria of the feature and check your user.
- Does the back-end data support the test? Let’s say you are testing that a customer’s information is displayed. You are expecting to see the customer’s email address on the page, but the email is not there. Maybe the problem is actually that the email address is null, and that is why it is not displaying.
- Are you able to reproduce the issue? You should be able to reproduce the issue at least once before logging the bug. This doesn’t mean that you shouldn’t log intermittent issues, as they are important as well; but it does mean that you should have as much information as possible about when the issue occurs and when it doesn’t.
- Do you have clear, reproducible steps to demonstrate the issue? It is incredibly frustrating to a developer to hear that something is wrong with their software, but have only vague instructions available to use for investigation. For best results, give the developer a specific user, complete with login credentials, and clear steps that they can use to reproduce the problem.
- Is this issue happening in Production? Maybe this isn’t a new bug; maybe the issue was happening already. This is especially possible when you are testing old code that no one has looked at or used in a while. See last week’s post, The Power of Pretesting, for ideas on testing legacy software.
- Does the issue happen on every browser? This information can be very helpful in narrowing down the possible cause of an issue.
- Does the issue happen with more than one user? It’s possible that the user you are testing with has some kind of weird edge case in their configuration or their data. This doesn’t mean that the issue you are seeing isn’t a bug; but if you can show that there are some users where the issue is not happening, it will help narrow the scope of the problem.
- Does the issue happen if the data is different? Try varying the data and see if the issue goes away. Maybe the problem is caused by a data point that is larger than the UI is expecting, or a field that is missing a value. The more narrowly you can pinpoint the problem, the faster the developer can fix it.
The Power of Pretesting
Having been in the software testing business for a few years now, I’ve become accustomed to various types of testing: Acceptance Testing, Regression Testing, Exploratory Testing, Smoke Testing, etc. But in the last few weeks, I’ve been introduced to a type of testing I hadn’t thought of before: Pretesting.
On our team, we are working to switch some automatically delivered emails from an old system to a new system. When we first started testing, we were mainly focused on whether the emails in the new system were being delivered. It didn’t occur to us to look at the content of the new emails until later, and then we realized we had never really looked at the old emails. Moreover, because the emails contained a lot of detail, we found that we kept missing things: some extra text here, a missing date there. We discovered that the best way to prevent these mistakes was to start testing before the new feature was delivered, and thus, Pretesting was born.
Now whenever an email is about to be converted to the new system, we first test it in the old system. We take screenshots, and we document any needed configuration or unusual behavior. Then when the email is ready for the new system, it’s easy to go back and compare with what we had before. This is now a valuable tool in our testing arsenal, and we’ve used it in a number of other areas of our application.
When should you use Pretesting?
- when you are testing a feature you have never tested before
- when no one in your company seems to know how a feature works
- when you suspect that a feature is currently buggy
- when you are revamping an existing feature
- when you are testing a feature that has a lot of detail
Why should you Pretest?
- you’ll have a baseline to compare against when the new feature is delivered
- you’ll notice details in the new feature that you might otherwise miss
- you may find existing bugs in the old feature
- your documentation becomes a ready-made test plan for the new feature
How to Pretest:
1. Conduct exploratory testing on the old feature. Figure out what the Happy Path is.
2. Document how the Happy Path works and include any necessary configuration steps
3. Continue to test, exploring the boundaries and edge cases of the feature
4. Document any “gotchas” you may find; these are not bugs, but are instead areas of the feature that might not work as someone would expect
5. Log any bugs you find and discuss them with your team to determine whether they should be fixed with the new feature or left as is
6. Take screenshots of any complicated screens, such as emails, messages, or screens with a lot of text, images, or buttons. Save these screenshots in an easily accessible place such as a wiki page.
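For web features, capturing the step 6 screenshots can be scripted so that they’re reproducible on demand. A sketch with Selenium; the URL and file name are placeholders:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/old-feature")         # placeholder URL
driver.save_screenshot("old_feature_happy_path.png")  # attach this to your wiki page
driver.quit()
```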
When the New Feature is Ready:
1. Run through the same Happy Path scenario with the new feature, and verify that it behaves the same way as the old feature
2. Test the boundaries and edge cases of the feature, and verify that it behaves in the same way as the old feature
3. Verify that any bugs the team has decided to fix have indeed been fixed
4. Compare the screenshots of the old feature with the screenshots of the new feature, and verify that they are the same (with the exception of anything the team agreed to change); a comparison sketch follows this list
5. Do any additional testing needed, such as testing new functionality that the old feature didn’t have or verifying that the new feature integrates with other parts of the application
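For step 4, you don’t have to rely on eyeballing alone: Pillow can flag any pixel-level difference between the old and new screenshots. A sketch, assuming both screenshots are the same size and the file names match the capture sketch above:

```python
from PIL import Image, ImageChops

old = Image.open("old_feature_happy_path.png").convert("RGB")
new = Image.open("new_feature_happy_path.png").convert("RGB")

diff = ImageChops.difference(old, new)
box = diff.getbbox()  # None means the images are pixel-identical
print("Screenshots match" if box is None else f"Screenshots differ in region {box}")
```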
The power of Pretesting is that it helps you notice details you might otherwise miss in a new feature, and as a bonus, find existing bugs in the old feature. Moreover, testing the new feature will be easy, because you will have already created a test plan. Your work will help the developer do a better job, and your end users will appreciate it!