Last week, we talked about how I would decide what to test in a simple application, in terms of testing every segment of the Automation Test Wheel. I find it’s very helpful to answer the question “What do I want to test?” before I think about how I’m going to test it. This week we’ll take the “What” of automated testing and continue on to how I want to test, when I want to test, and where (in which environment) I’m going to test. As a reminder, my hypothetical application is a simple web app called Contact List, which allows a user to add, edit, and delete their contacts.
How I’m Going to Test:
I’m going to run my unit and component tests directly in the code. Unit tests are designed to run in the code itself, because they are the smallest possible unit and exercise the code directly. My component tests are very simple (just one call to the database and one call for authentication), so I will run those directly from the code as well.
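To make that concrete, here’s a minimal sketch of what one of those in-code unit tests might look like in Jasmine. The `addContact` function, its module path, and its validation rules are hypothetical stand-ins, not the real Contact List code.

```javascript
// Minimal Jasmine unit test sketch for a hypothetical addContact function.
// The contactList module and its validation rules are assumptions for illustration.
const { addContact } = require('./contactList');

describe('addContact', () => {
  it('adds a contact with a name and email', () => {
    const contacts = addContact([], { name: 'Prunella', email: 'prunella@example.com' });
    expect(contacts.length).toBe(1);
    expect(contacts[0].name).toBe('Prunella');
  });

  it('rejects a contact with no name', () => {
    expect(() => addContact([], { email: 'nobody@example.com' })).toThrow();
  });
});
```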
For my services tests, I’m going to use Postman, which is my favorite API testing tool. I’ll run the Postman tests with Newman, Postman’s command-line runner. I’ll also include some security tests in Postman, validating that requests without valid authentication return the expected error, and some performance checks, verifying that the response times of my API requests are within acceptable limits.
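In Postman, checks like these live in a request’s Tests tab as small JavaScript snippets, and Newman runs them unchanged from the command line. Here’s a rough sketch of the kinds of assertions I mean; the 500 ms threshold and the expected status codes are illustrative choices, not requirements.

```javascript
// Postman test script (Tests tab of a request) - runs in Postman and in Newman.
// Thresholds and expected status codes are illustrative assumptions.

// Functional check: the request succeeded.
pm.test('Status code is 200', () => {
  pm.response.to.have.status(200);
});

// Performance check: response time stays within an acceptable limit.
pm.test('Response time is under 500 ms', () => {
  pm.expect(pm.response.responseTime).to.be.below(500);
});

// Security check (for the variant of this request sent without an auth token):
// the API should refuse the request rather than return data.
pm.test('Unauthenticated request is rejected', () => {
  pm.expect(pm.response.code).to.be.oneOf([401, 403]);
});
```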
For my UI tests, I’m going to use Selenium and Jasmine, because I like the assertion style that Jasmine uses. I’ll add a few security tests here, making sure that pages do not load when the user doesn’t have access to them. I’ll also integrate my visual tests into my Selenium tests using Applitools, and I’ll use both Selenium and Applitools to run my accessibility tests.
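As a sketch of how those pieces might fit together, here’s a Jasmine spec that drives Selenium WebDriver and includes one Applitools visual check and one security check. The URLs and page names are hypothetical, and the Applitools calls follow the `@applitools/eyes-selenium` SDK as I understand it, so verify them against the current docs.

```javascript
// Sketch of a Jasmine + Selenium UI suite with an Applitools visual check.
// URLs and page names are hypothetical assumptions, not the real app.
const { Builder } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

describe('Contact List UI', () => {
  let driver;
  const eyes = new Eyes();

  beforeAll(async () => {
    driver = await new Builder().forBrowser('chrome').build();
    eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
  });

  afterAll(async () => {
    await driver.quit();
  });

  it('shows the contact list to a logged-in user (visual check)', async () => {
    // (Login steps omitted for brevity.)
    await driver.get('https://qa.contactlist.example.com/contacts');
    await eyes.open(driver, 'Contact List', 'Contact list page');
    await eyes.check('Contact list', Target.window());
    await eyes.close();
  });

  it('redirects an anonymous user away from the contact list (security check)', async () => {
    await driver.get('https://qa.contactlist.example.com/contacts');
    // With no session, the app should send the user to the login page
    // instead of rendering contact data.
    expect(await driver.getCurrentUrl()).toContain('/login');
  });
});
```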
Finally, I would set up a performance monitoring tool such as Pingdom to continually monitor my web page load times and alert me when they slow down.
When I’m Going to Test:
Now that I’ve figured out how I’m going to test, it’s time to think about when I’m going to run my tests. I’m going to organize my test runs into four cadences:
With every build: every time new code is pushed, I’m going to run my unit tests, component tests, and Newman tests. These tests will give me very fast feedback. I’m not going to run any UI tests at this time, because I don’t want to slow my feedback down.
With every deploy: every time code is deployed to an environment, I’m going to run all my Newman tests, and a small subset of my Jasmine tests. My Jasmine tests will include at least one visual check and one security check. This will ensure that the API is running exactly as it should and that there are no glaring errors in the UI.
Daily: I’ll want to run all of my Newman tests and all of my Jasmine tests early in the morning, before I start my workday, so that when I begin work I’ll have a clear picture of the health of my application.
Ongoing: As mentioned above, I’ll have Pingdom monitoring my page load times throughout the day to alert me of any performance problems. I’ll also set up a job to run a small set of Newman tests periodically throughout the day to alert me of any server downtime.
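That periodic Newman run doesn’t need much machinery. Newman ships as an npm library as well as a CLI, so a small Node script (or, more realistically, a scheduled CI job) can run the smoke collection and raise an alert on failure. The collection and environment file names and the 15-minute interval below are assumptions for illustration.

```javascript
// Sketch of a periodic smoke run using the newman npm library.
// File names and the 15-minute interval are assumptions; in practice this
// would more likely be a scheduled CI job than a long-running script.
const newman = require('newman');

function runSmokeTests() {
  newman.run(
    {
      collection: require('./collections/contact-list-smoke.postman_collection.json'),
      environment: require('./environments/prod.postman_environment.json'),
      reporters: ['cli'],
    },
    (err, summary) => {
      if (err || summary.run.failures.length > 0) {
        // Hook up email/Slack/paging here so a failed run actually alerts someone.
        console.error('Smoke tests failed - investigate possible downtime.');
      }
    }
  );
}

runSmokeTests();
setInterval(runSmokeTests, 15 * 60 * 1000); // every 15 minutes
```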
Where I’m Going to Test:
Now that I’ve decided how and when to test, I need to think about where to test. Let’s imagine that my application has four different environments: Dev, QA, Stage, and Prod. My Dev environment is solely for developers. My QA environment is where code will be deployed for manual and exploratory testing. My Stage environment is where a release candidate will be prepared for Production. Let’s look at what I will test in each environment.
Dev: My unit and component tests will run here with every build, and my Newman and Jasmine deploy tests will run here with every deploy.
QA: I’ll run my full daily Newman and Jasmine suites here, and I’ll run my full Newman suite and a smaller Jasmine suite with a deploy.
Stage: I’ll run the full sets of Newman and Jasmine tests when I deploy. This is because the Stage environment is the last stop before Prod, and I’ll want to make sure we haven’t missed any bugs. I’ll also run my Pingdom monitoring here, to catch any possible performance issues before we go to Prod.
Prod: I’ll run a small set of daily Newman and Jasmine tests here. I’ll also point my Pingdom tests to this environment, and I’ll have those tests and a set of Newman tests running periodically throughout the day.
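One way to keep this where-and-when matrix honest is to write it down as data that the pipeline reads, so the plan in prose and the plan that actually runs can’t drift apart. Here’s a hypothetical sketch; the suite names and structure are my own invention, and the real wiring would live in whatever CI/CD tool is in use.

```javascript
// Hypothetical map of environment -> which suites run on which trigger.
// Suite names are illustrative; the actual orchestration belongs in CI/CD config.
const testPlan = {
  dev:   { build: ['unit', 'component'], deploy: ['newman-full', 'jasmine-smoke'] },
  qa:    { deploy: ['newman-full', 'jasmine-smoke'], daily: ['newman-full', 'jasmine-full'] },
  stage: { deploy: ['newman-full', 'jasmine-full'], ongoing: ['pingdom'] },
  prod:  { daily: ['newman-smoke', 'jasmine-smoke'], ongoing: ['pingdom', 'newman-smoke'] },
};

module.exports = testPlan;
```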
Putting it All Together:
When viewed in prose form, this all looks very complicated. But we have actually managed to simplify things down to four major test modalities tested at four different times. Are we covering all the areas of the Automation Test Wheel? Let’s take a look:
We are covering each different area with one or more testing modalities. Now let’s visualize our complete test plan:
Viewed in a grid like this, our plan looks quite simple! By considering each question in turn:
- What do we want to test?
- How are we going to test it?
- When will we run our tests?
- Where will we run them?
We’ve been able to come up with a comprehensive plan that covers all areas of the testing wheel and tests our application thoroughly and efficiently.