What to Test When There’s Not Enough Time to Test

In last week’s post, I discussed the various things we should remember to test before we consider our testing “done”.  This prompted a question from a reader: “How can I test all these things when there is very limited time for testing?”  In today’s agile world, we often don’t have as much time as we feel we need to fully test our software’s features.  Gone are the days when testers had weeks or months to test the upcoming release.  Because software projects usually take longer than estimated, we may be asked to test things at the last minute, just a day or two before the release.  Today I’ll discuss what to test when there’s not enough time to test, and I’ll also suggest some tips to avoid this problem in the first place.

The Bare Minimum: What to Test When There’s Almost No Time

Let’s use our hypothetical Superball Sorter as an example.  For those who haven’t read my series of posts on this feature, it takes a number of superballs and sorts them among four children using a set of defined rules. What would I do if I were asked to test this feature for the first time, and it was due to be released tomorrow?

1. Test the most basic case

The first thing I would do is test the most basic use case of the feature.  In this case, that would be running the Superball Sorter with no rules at all.  I would test this first because it would give me a very clear indication of whether the feature was working at all.  If it wasn’t, I could raise the alarm right away, giving the developer more time to fix it.
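
As a rough illustration, here’s what that first check might look like as an automated test.  This is only a sketch: the SuperballSorter class, the Child objects, and the make_balls helper are hypothetical names standing in for whatever the real feature’s code exposes.

```python
# A sketch of the most basic test: run the sorter with no rules at all.
# SuperballSorter, Child, and make_balls are hypothetical names used for illustration.
from superball_sorter import SuperballSorter, Child, make_balls

def test_sorter_runs_with_no_rules():
    # No child has a rule, so this is the simplest possible run
    children = [Child(name) for name in ["Amy", "Ben", "Cara", "Dave"]]
    sorter = SuperballSorter(children=children)

    sorter.sort(make_balls(count=20))

    # With no rules, every ball should end up with one of the children
    total_received = sum(len(child.balls) for child in children)
    assert total_received == 20
```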

2. Test the most typical customer scenario

In the case of the Superball Sorter, let’s say that we’ve been told by the product owner that in the most typical scenario, two of the children will be assigned a rule, and the rule will be by size rather than color.  So the next test I would run would be to assign one child a rule that she only accepts large balls, and another child a rule that he only accepts small balls.  I would run the sorter with these rules and make sure that the rules were respected.
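
If that typical scenario were automated, it might look something like the sketch below.  Again, the Rule and Size names are hypothetical; the real feature may define its rules quite differently.

```python
# A sketch of the most typical customer scenario: two children with size rules.
# Rule and Size are hypothetical names used only for illustration.
from superball_sorter import SuperballSorter, Child, Rule, Size, make_balls

def test_two_children_with_size_rules():
    alice = Child("Alice", rule=Rule(size=Size.LARGE))  # only accepts large balls
    bob = Child("Bob", rule=Rule(size=Size.SMALL))      # only accepts small balls
    carol = Child("Carol")                              # no rule
    dan = Child("Dan")                                  # no rule

    sorter = SuperballSorter(children=[alice, bob, carol, dan])
    sorter.sort(make_balls(count=40))

    # The size rules should be respected
    assert all(ball.size == Size.LARGE for ball in alice.balls)
    assert all(ball.size == Size.SMALL for ball in bob.balls)
```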

3. Run a basic negative test

We all know how frustrating it can be to make a mistake in an online activity, such as filling out a form, and be given no clear message about what went wrong.  So the next thing I would test is a common mistake a user might make, to ensure that I got an appropriate error message.  For the Superball Sorter, I would set four rules that resulted in some balls not being able to be sorted, and I would verify that I got an error message telling me this was the case.
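
A sketch of that negative test might look like the one below.  It assumes, purely hypothetically, that the sorter raises an UnsortableBallsError with a helpful message when the rules leave some balls with nowhere to go; the real feature might report the problem differently.

```python
# A sketch of a basic negative test: rules that leave some balls unsortable
# should produce a clear error.  UnsortableBallsError and Color are hypothetical names.
import pytest
from superball_sorter import (
    SuperballSorter, Child, Rule, Color, make_balls, UnsortableBallsError
)

def test_unsortable_balls_raise_a_clear_error():
    # Every child only accepts purple balls, so balls of any other color can't be placed
    children = [Child(name, rule=Rule(color=Color.PURPLE))
                for name in ["Amy", "Ben", "Cara", "Dave"]]
    sorter = SuperballSorter(children=children)

    with pytest.raises(UnsortableBallsError) as excinfo:
        sorter.sort(make_balls(count=20))  # a mix of colors, so some can't be sorted

    # The message should tell the user which balls could not be sorted
    assert "could not be sorted" in str(excinfo.value)
```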

4. Test with different users or accounts

Just because something is working correctly for one user or account doesn’t mean it’s going to work correctly for everyone!  Developers sometimes check their work with only one test user if they are in a big hurry to deliver their feature.  So I would make sure to run the Superball Sorter with at least two users, and I would make sure that those users were different from the one the developer used.
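
One lightweight way to do this is to parameterize the same basic scenario over more than one test account, as in the sketch below.  The login_as helper and the account names are hypothetical placeholders; substitute whatever test accounts your application actually uses.

```python
# A sketch of running the same basic scenario as two different test users.
# login_as and the account names are hypothetical placeholders.
import pytest
from superball_sorter import SuperballSorter, Child, make_balls, login_as

@pytest.mark.parametrize("username", ["test_user_a", "test_user_b"])
def test_basic_sort_works_for_multiple_users(username):
    with login_as(username):
        children = [Child(name) for name in ["Amy", "Ben", "Cara", "Dave"]]
        sorter = SuperballSorter(children=children)
        sorter.sort(make_balls(count=20))

        # The basic sort should behave the same regardless of which user runs it
        assert sum(len(child.balls) for child in children) == 20
```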

After running these four tests, I would be able to say with some certainty that:

  • the feature works at its most basic level
  • a typical customer scenario will work correctly
  • the customer will be notified if there is an error
  • the feature works correctly for more than one user or account

If I had time left over after this testing, I would move on to test more Happy Path scenarios, and then to the other tests I mentioned in last week’s post.

Remember that it will never be perfect, and things will never be completely done

When software is released monthly, weekly, or even daily, there’s no way to test everything you want to test.  Even if you could get to everything, there will always be some sneaky bug that slips through.  This is just a fact of life in software development.  The good news is that because software is released so frequently, a bug fix can be released very shortly after the bug is found.  So relax, and don’t expect things to be perfect.

Speak up, in person and in writing, if disaster is about to strike

Early in my testing career, I was on a team where we were asked to test a large number of new features for a release in a short amount of time.  When we were asked whether we felt confident in the new release, every single one of us said no.  We each delineated the things we hadn’t been able to test yet, and why we were concerned about the risks in those areas.  Unfortunately, management went ahead and released the software anyway, because there was a key customer who was waiting for one of the features.  As a result, the release was a failure and had to be recalled after many customer complaints.  

If you believe that your upcoming software release is a huge mistake, speak up!  Outline the things you haven’t tested and some of the worst-case scenarios you can envision.  Document what wasn’t tested, so that the key decision-makers in your company can see where the risks are.  If something goes wrong after the release, your documentation can serve as evidence that you had concerns.

Enlist the help of developers and others in your testing

While testers possess a valuable set of skills that help them find bugs quickly, remember that all kinds of other people can run through simple test scenarios.  If everyone on your team understands that you have been given too little time to test, they will be happy to help you out.  If I were asking my teammates to test the Superball Sorter, I might ask one person to test scenarios with just one rule, one person to test scenarios with three rules, and one person to test scenarios with four rules, while I continued to test scenarios with two rules.  In this way, we could test four times as many Happy Path scenarios as I could test by myself.

Talk with your team to find out how you can start testing earlier

To prevent last-minute testing, try to get involved with feature development earlier in the process.  Attend meetings about how the feature will work, and ask questions about integration with other features and possible feature limitations.  Start putting together a test plan before the feature is ready.  Work with your developer to write some automated tests that he or she can use during development.  Ask your developer to commit and push some of their code so you can test basic scenarios, with the understanding that the feature isn’t completely done.  In the case of the Superball Sorter, I could ask the dev to push some code once the sorter was capable of sorting without any rules, just to verify that the balls were being passed to each child evenly.
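
As a sketch, that early check against partially finished code could be as small as the test below, again using hypothetical names for the sorter’s API.

```python
# A sketch of an early test against partially finished code: with no rules,
# the balls should be passed out evenly among the four children.
from superball_sorter import SuperballSorter, Child, make_balls

def test_balls_are_distributed_evenly_with_no_rules():
    children = [Child(name) for name in ["Amy", "Ben", "Cara", "Dave"]]
    sorter = SuperballSorter(children=children)

    sorter.sort(make_balls(count=20))  # 20 balls among 4 children should be 5 each

    assert [len(child.balls) for child in children] == [5, 5, 5, 5]
```
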
Automate as much as possible

In sprint-based development, there’s often a lull for testers at the beginning of a sprint while the developers are still working on their assigned features.  This is the perfect time to automate features that you have already tested.  When release day looms, much or all of your regression testing can run automatically, freeing you up to do more exploratory testing on the new features.
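
One simple way to organize this is to tag the scenarios you’ve already verified as regression tests, so the whole suite can run automatically on release day.  The sketch below uses a pytest marker for this; the marker name and the sorter API are my own placeholder choices, not part of any real project.

```python
# A sketch of tagging an already-verified scenario as a regression test.
# The "regression" marker is a common pytest convention (registered in pytest.ini).
import pytest
from superball_sorter import SuperballSorter, Child, make_balls

@pytest.mark.regression
def test_no_rules_sort_regression():
    children = [Child(name) for name in ["Amy", "Ben", "Cara", "Dave"]]
    sorter = SuperballSorter(children=children)
    sorter.sort(make_balls(count=20))
    assert sum(len(child.balls) for child in children) == 20

# On release day, run just the regression suite while you explore the new features:
#   pytest -m regression
```
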
As testers, we want our users to have a completely bug-free experience.  Because of that, we always want more time for testing than we are given.  With the strategies above, we can ensure that the most important things are tested and that with each sprint we are automating more tests, freeing up our valuable time.  
