A Gentle Introduction to Git

For a software tester who has just started writing test automation, using version control software such as Git can seem daunting and confusing.  But being able to pull down the latest code, update it, and submit a pull request is very important for any team project!  In this week’s post, I’ll provide a gentle introduction to the basics of Git.

What is Git?

Git is a version control system: a tool that allows a group of people to collaborate on code without accidentally overwriting each other's work.  It also keeps track of who changed the code and when it was changed, so it's easy to trace a problem back to its source.

Why is Git needed?

Consider what file editing is like when you don’t use a version control system.  Let’s say you have a recipe for brownies.  You send the recipe to your friend, and he decides to change the amount of cocoa in the recipe.  When he makes that change, it is only in his version of the file, not yours.  Your files are now different.  If you make a change to add more vanilla to the recipe, now your versions have diverged even further. 

You can see how this would be unacceptable for software code!  In a version control system, there is one “master version” which is the accepted version of the code.  This master version lives in GitHub (or another version control hosting service), and can be “pulled” down by any user.  When someone wants to make a change to the code, they pull down the master version of the code, create a “branch” that is a copy of the master version, make their changes to the branch, push the branch up to GitHub, and then do a “pull request”, which is asking for someone to review their code and merge it into the master branch.
Confused?  Don’t worry, this will look much simpler with an example.  Let’s imagine that we have a source code repository called “The Thinking Tester Guestbook”.  We’ll take a look at what would happen if Prunella Prunewhip wanted to add her name to the guestbook.

(These instructions assume that Prunella has already installed Git on her computer and has already created a GitHub account.)



Step One: Prunella clones the source code repository

This is often called “cloning the repo” or “pulling down the repo”.  Prunella does this by going to the URL in GitHub that has the source code, and clicking the green “Clone or download” button.  A dropdown appears with the URL she will need to clone the source code.  She clicks the little clipboard button to the right of the URL to copy the URL text.

Prunella opens up a command window and navigates to the folder where she would like to put the source code.  Once she's there, she types git clone and then pastes the URL text next to those words.  The repository is copied from GitHub into a new folder.

Now that the repository is in a folder on her computer, she can open the folder up in her file browser and take a look at what’s in there.  She sees that there is one text file, called “guestBook.txt”.  The text file reads:

Kristin Jackvony was here on May 11, 2019

Step Two: Prunella makes a new branch and adds her changes to that branch

Before Prunella makes any changes to guestBook.txt, she should create a new branch and switch to it.  So in the command line, she navigates to the new folder that was cloned earlier by typing
cd ThinkingTesterGuestBook.

She can verify that she’s in the master branch by typing git status, and she will get a response like this: On branch master.

Now she can create a new branch and switch to it by typing git checkout -b NewEntry.  The "-b" flag tells Git to create a new branch, and "NewEntry" is the name Prunella has chosen for her branch.  The checkout command is what causes Git to switch to the new branch.

If Prunella types git status at this point, she will get On branch NewEntry as a response.

Now that Prunella is in the correct branch, she’s going to make a change to the guestBook.txt file, by adding one line, so that the file now reads:

Kristin Jackvony was here on May 11, 2019
Prunella Prunewhip was here on May 13, 2019

Step Three: Prunella commits her changes and pushes them to GitHub

Now that Prunella has made the change she wanted, she needs to commit and push her change.  First, she can run git status and she’ll get this response: 

On branch NewEntry
modified: guestBook.txt

This shows that the guestBook.txt file has been modified. Next, Prunella needs to add the file to the commit, by typing git add guestBook.txt.  Now if she types git status, she’ll see this response:

On branch NewEntry
Changes to be committed:
     modified: guestBook.txt

Next, Prunella commits her change by typing git commit -m "Adding a new entry".  The "-m" in this command stands for "message".  The "Adding a new entry" text is the message that she is adding to explain what she is committing.  The command line will respond with how many files and lines were changed.

Once the change has been committed, Prunella can push the change up to the GitHub repository by typing git push origin NewEntry.  The “NewEntry” value explains that the code should go up to the NewEntry branch, which doesn’t exist yet in the GitHub repository, but it will be created with this command.  “Origin” refers to the GitHub repository (this is also referred to as “remote”).  The command line will respond with several lines, the final line of which will be
* [new branch] NewEntry -> NewEntry, which shows that a new branch called NewEntry has been created in the origin, and that it was copied from the local branch Prunella created, which was also called NewEntry.

Step Four: Prunella creates a pull request in GitHub

Now that her new branch has been pushed up to GitHub, Prunella can submit a pull request to ask that her changes are merged with the master branch.  She does this by going to the GitHub repository and clicking the “New Pull Request” button.  This takes her to the “Compare” page.  She makes sure that the left side of the comparison is the master branch, and then she chooses the NewEntry branch from the branch dropdown.  She can see how the guestBook.txt file has changed; the new line she added is highlighted in green, illustrating the difference between the two files.  (If she had deleted a line, the line she removed would be highlighted in red.)  Finally, she clicks the “Create Pull Request” button.

Step Five: Prunella’s pull request is approved and merged

The final step in the file change process is that the owner of the repository (or any teammates who have approval permissions) will review the change, approve it, and merge it.  Now if Prunella switches to the master branch with git checkout master, pulls down the changes with git pull origin master, and takes a look at the guestBook.txt file, she will see that her entry has been added:

Kristin Jackvony was here on May 11, 2019
Prunella Prunewhip was here on May 13, 2019
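
To recap, here is the full sequence of commands Prunella used, from cloning the repository to pulling down the merged changes (the repository URL is the one she copied from the "Clone or download" button):

git clone (repository URL)
cd ThinkingTesterGuestBook
git checkout -b NewEntry
(edit guestBook.txt)
git add guestBook.txt
git commit -m "Adding a new entry"
git push origin NewEntry
(pull request created, reviewed, and merged in GitHub)
git checkout master
git pull origin master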

And that’s all there is to it!  In my next post, I’ll add a few more Git tips and tricks, but these steps should be enough to get you started with committing your own code to your team’s repository.

Seven Excuses Software Testers Need to Stop Making

Last summer, I read an interesting book called Extreme Ownership.  Written by two Navy SEAL officers, it describes the concept of taking responsibility for every facet of your job, even those things that you feel that you have no control over.  If one of their soldiers made a mistake, the officers would take responsibility, because they could have trained the soldier better.  If their commander made a poor decision, the officers would take responsibility for that as well, because they could have “managed up” and provided information that would have led to a better decision.  When everyone exercises Extreme Ownership, a culture of excellence and achievement is the result.

Extreme Ownership can be applied to any career, including software testing!  Yet, there are a number of excuses that I often hear software testers make.  Excuses keep us from taking full ownership over our work, and keep us from being taken seriously.  Below are seven excuses that software testers need to stop making.

Excuse #1: I don’t know how the feature works

All too often, testers simply follow the meager directions left for them by the developer in the software story, without having any idea what they are doing.  For example, "Run this SQL query and verify the result is 1".  Why?  What information is this query obtaining?  How do you know that this answer is the right one?  If it turns out there is a bug related to this feature, how can you possibly say that you've tested it?

When you are presented with a story to test that you don’t understand, start asking questions.  If the developer can’t explain the feature to you, find someone who can.  Restate the information that you are given to make absolutely sure that you understand it correctly.  Ask for the information to be presented in a way that makes sense to you.  I am a visual learner, and the developers on my team know that if I don’t understand what they are trying to explain to me, it’s time for them to draw me a diagram.

There have been many times where I have uncovered bugs in a feature even before I’ve started testing, simply by asking questions about how the feature works!

Excuse #2: There’s no way to test the feature

Really?  There’s NO way to test the feature?  How does the developer know that the feature is working then?  Are they just sending it to you and hoping for the best?  There must be SOME way for your developer to know that their code is working.  What is that way?  Can they show it to you?

There have been some features that I was unable to test myself because I didn’t have access to the back-end system that was being used in the feature.  When this is the case, I make sure to work with the developer and have them show me that the feature is working.  Then I can ask them to try various test cases while I watch, so we are effectively pair testing the feature.  In this way, we can uncover any bugs that may exist.

Excuse #3: The developer coded it wrong

I have sometimes seen instances where a developer misunderstood the requirements of the feature to be built and created it incorrectly.  This is why it’s important for everyone on the team to understand the requirements and to see to it that acceptance criteria are included in the story.  If you test the story based solely on what the developer tells you, and don’t verify exactly what was supposed to be built, then the fault lies with you.  You are the tester- usually the last line of defense before the product goes to the customer.  Make sure the customer is getting the right thing!

Excuse #4: The other tester on my team missed the bug

Even the best of software testers misses a bug now and then.  That’s why it’s important to have at least two sets of eyes on every feature.  It’s the policy on my team that when one tester is finished testing a feature, another tester tests the feature in the next environment.  The week after I instituted this policy, one of my co-workers found two bugs I missed!

If you are the only tester in your company, set up a “bug hunt” where everyone in the company looks for bugs.  Don’t be embarrassed if someone finds something you missed; when we test the same thing over and over again, we can sometimes develop inattentional blindness.

Excuse #5: There wasn’t enough time to test

Let’s face it: there will never be enough time to test everything that you want.  Software developers have time constraints too; they would probably really like to refactor their code a few more times before they hand it over to you, but they are working with a deadline just as you are.  So instead of making excuses, test the most important things, and manage your time wisely.

Excuse #6: If I log the bug I found in Production, I’ll be asked why I didn’t find it sooner

This was a new one to me when I heard it a couple of months ago.  There may be some managers who blame testers for finding bugs, but these managers are misguided.  It’s up to us to educate our team about what testers do.  We simply can’t find every bug; there are just too many ways that software can go wrong.  What we can do is report what we find as soon as we find it, and keep an eye out for similar bugs next time. If you don’t report that bug in Production, it will go unfixed, and the next person to find it will be a customer, or the CEO of your company!

Excuse #7: I don’t know how to code

Software development has changed significantly over the last two decades.  Companies used to release software every six months, so testers had tons of time to do regression testing.  Now software is released every week or two.  It’s simply not possible to manually test an entire application during that time frame.  This is why automation is necessary, and why you need to learn how to automate!

You don’t have to take a college course in Java to learn how to code.  All coding languages run on some very simple logical principles that are easy to understand.  The only tricky thing is the syntax of whatever language is being used, and the more you expose yourself to the code, the more you will understand.

If there’s no test automation at your company, see if you can get one of your developers to write some tests.  If you have software testers who are already writing automation at your company, ask them to walk you through their tests.  Learn how to make a simple change to an automated test, such as changing an assertion that says “true” to one that says “false”.  Copy a test that verifies the value of a text field, and see if you can change it so it verifies the value of a different text field.  Learn how your company’s version control system works, and see if you can submit a code change for your team’s approval.

Take small steps!  You don’t have to learn it all at once.  Think of learning code as learning a new language.  When you learn a new language, no one expects you to be fluent right away.  You learn a few phrases and keep using them, and you gradually add more.

Software testing is such a valuable profession, but too often companies take testers for granted.  By applying the principles of Extreme Ownership and eliminating excuses from your vocabulary, you will come to be seen as an indispensable asset to your company.

Time Management for Testers (and Everyone)

It’s a perennial problem: there’s so much testing to be done and not enough time in which to do it.  I’ve already written one post about this issue: What to Test When There’s Not Enough Time to Test, which talks about how to prioritize your testing and how to work with your team to avoid getting into situations where there’s not enough testing time.  But this week I’d like to take a more general view of time management: how can we structure our days so we don’t feel continually stressed by the many projects we work on?  Here are eight time management strategies that work for me:

Strategy One:  Know Your Priorities

I have a bi-weekly one-on-one meeting with my manager, and in each meeting he asks me “What’s the most important priority for you right now?”  I love this question, because it helps me focus on what’s most important.  You may have ten different things on your to-do list, but if you don’t decide which things are the most important, you will always feel like you should be working on something else, which keeps you from focusing on the task at hand.  I like to think about my first, second, and third priorities when I am planning what to work on next.

How do you decide what’s most important?  One good way is to think about impacts and deadlines.  If there is a release to Production that is going out tonight and it requires some manual testing, preparing for that release is going to be my top priority, because customers will be impacted by the quality of the release.  If I am presenting a workshop to other testers in my company, and that presentation is tomorrow, I’m going to want to make my preparation a priority.  If you evaluate each of your tasks in terms of what its impact is and what its due date is, it will become clearer how your priorities should be ordered.

Strategy Two: Keep a To-Do List

Keeping a to-do list means that you won’t forget about any of your tasks.  This does not mean that all of your tasks will get done.  When I finally realized that my to-do list would never be completed, I was able to stop worrying about how many items were on it.  I have found that the less I worry about how many things are in my to-do list, the more items I can accomplish.

I use a Trello board for my to-do list, and this is what I use for columns:  Today, This Week, Next Week, Soon, and Someday.  Whenever I have a new task, I add it to one of these columns.  If it's urgent, I put it in Today or This Week.  Something less urgent will go in Next Week or Soon; the task will eventually move to This Week or Today.  Projects I'd like to work on or tech debt I'd like to address will go in Someday.  With this method, I don't lose track of any of my tasks, and I always have something to do in those slow moments when I'm waiting for new features to test.

Strategy Three: Use Quiet Times for Your Deepest Work

One of the best things about working for a remote-friendly company is that my team is spread over four time zones.  Since I am in Eastern Time, our daily meetings don’t start until what is late morning for me.  This means that the first couple of hours of my workday are quiet ones without interruption.  So I use those hours to work on projects that require concentration and focus.  My teammates on the West Coast do the opposite, using the end of their workday for their deepest work because the rest of us have already stopped for the day.

Even if you are in the same time zone as your co-workers, you can still carve out some quiet time to get your most challenging work done.  Maybe your co-workers take a long lunch while you choose to eat at your desk.  Or maybe you are a morning person and get into the office before they do.  Take advantage of those quiet times!

Strategy Four: Minimize Interruptions

It should come as no surprise to anyone that we are interrupted with notifications on our phones and laptops several times every hour.  Every time one of those notifications comes through, your concentration is broken as you take the time to look at the notification to see if it's important.  But how many of those notifications do you really need?  I've turned off all notifications but text messages and work-related messages on my phone.  Any other notifications, such as LinkedIn, Facebook, and email, are not important enough to cause me to break my concentration.

On my laptop, I’ve silenced all of my notification sounds but one: the notification that I’m about to have a meeting.  I’ve set my Slack notifications to pop up, but they don’t make a sound.  That way I have fewer sounds disrupting my concentration.

Another helpful hint is to train your team to send you an entire message all at once, instead of sending messages like:

9:30 Fred: Hi
9:31 Fred: Good Morning
9:31 Fred: I have a quick question
9:33 Fred: Do you know what day our new feature is going to Production?

If you were to receive the messages in the above example, your work would be interrupted FOUR TIMES in the course of four minutes.  Instead, ask your co-workers to do this:

9:33  Fred: Good Morning!  I have a quick question for you: do you know what day our new feature is going to Production?

In this way, you are only interrupted ONCE.  You can answer the question quickly, and then get back to work.

Strategy Five: Set Aside Time for the Big Things

Sometimes projects are so big that they feel daunting.  You may have wanted to learn a new test automation platform for a long time, but you never seem to find the time to work on it.  While you know that learning the new platform will save you and your colleagues time in the long run, it’s not urgent, and the course you’d like to take will take you ten hours to complete.

Rather than trying to find a day or two to take the course, why not set aside a small amount of time every day to work on it?  When I have a course to take, I usually set aside fifteen or twenty minutes at the very beginning of my workday to work on it.  Each day I chip away at the coursework to be done, and if I keep at it consistently, I can finish a ten hour course in six to eight weeks.  That may seem like a long time, but it’s much better than never starting the course at all!

Strategy Six:  Ask For Help

We testers have a sense of personal pride when it comes to the projects we work on.  We want to make sure that we are seen as technologically savvy, and not “just a tester”.  We take pride in the automation we write.  But the fact is that developers usually have more experience working with code than we do, and they might have ideas for better or more efficient ways of doing things.

Recently, I was preparing an example project that I was going to use to teach some new hires how to write unit tests.  It was just a simple app that compared integers.  I knew exactly how to write the logic, but when I went to compile my program, I ran into a “cannot instantiate class” error.  I knew that the cause of the error was probably something tiny, but since I don’t often write apps on my own, I couldn’t remember what the issue was.  I had a choice to make at this point- I could save my pride and spend the next two hours figuring out the problem by myself, or I could ask one of my developers to look at it and have him tell me what the problem was in less than ten seconds.  The choice was obvious: I asked my developer, and he instantly solved the problem.

However, there is one caveat to this strategy: sometimes we can get into the habit of asking for information that we could easily find ourselves.  Before you interrupt a coworker and ask them for information, ask yourself if you could find it through a simple Google search or looking through your company’s wiki.  If you can find it yourself faster than it would take to ask your coworker and wait for his or her response, you’ve just saved yourself AND your coworker some time!

Strategy Seven:  Take Advantage of Your Energy Levels

I am a morning person; I am the most energetic at the start of my workday.  As the day moves along, my energy levels drop.  By the end of the work day, it’s hard for me to focus on difficult tasks.  Because of this, I organize my work so that I do my more difficult tasks in the morning, and save the afternoon for more repetitive tasks.

Your energy levels might be different; if you think about what times of the day you do your best and worst work, you will be able to figure out when you have the most energy.  Plan your most challenging and creative work for those times!

Strategy Eight: Adjust Your Environment

I am very fortunate in that I am able to work remotely.  This means I have complete control over the cleanliness, temperature, and sound in my office.  You may not be so lucky, but you can still find ways to adjust your environment so that you work more efficiently.  If your office is so warm that you find yourself falling asleep at your desk, you can bring in a small fan to cool the air around you.  If you are distracted by back and shoulder pain that comes from slumping in your chair, you can install a standing desk and stand for part of your workday.  If your coworkers are distracting you with their constant chit-chat, you can buy a pair of noise-cancelling headphones.

Experiment with what works best for you.  What works for some people might not be right for you.  You might work most efficiently with total silence, with white noise, with ambient music, with classical music, or with heavy metal playing in your ears.  You might find that facing your desk so that you can look out the window helps you relax your mind and solve problems, or you might find it so distracting that you are better off facing empty white walls.  Whatever your formula, once you’ve found it, make it work for you!

The eight strategies above can make it easier for testers to manage their time and work more efficiently.  You may find that these strategies help in other areas of your life as well: paying bills and doing house work, home improvement projects, and so on.  If these tips work for you, consider passing them on to others in your life to help them work more efficiently too!

Get Organized for Testing Success

Before I discovered the joy of software testing, I had a brief career as a professional organizer.  I organized homes, small businesses, and non-profit organizations.  I’ve always loved getting organized because it helped me to accomplish my goals more quickly.  The same is true with software testing!  Being organized as a tester means that you have easy access to your tools, test plans, and resources, which frees you up to do more creative thinking and exploratory testing.  In this post, I’ll outline four of my strategies for organizing.

Strategy One: Avoid Reinventing the Wheel

At various times in my testing career, I’ve needed to test a file upload feature.  I made sure to test with different file types: pdf, jpg, png, and so on.  Sometimes it was hard to find the file type I was looking for; for instance, it took me a long time to locate a tiff file.  After I had tested file uploading a couple of times, I realized that it would be a good idea to save all the files I’d found in a folder called “File Types for Testing”.  That way, the next time I needed to test file uploads I would have all my files ready to go.  Recently I expanded my “File Types for Testing” folder to include some very large files.  Now when I need to test file size limits I don’t have to waste a second looking for files to use.

Similarly, I have a folder of bookmarked web pages that contains all the tools I use regularly, such as a character count tool and a GUID generator.  This way, I don’t need to spend valuable time conducting a search for the tool, or asking a co-worker to remind me where the tool is.

Strategy Two: Be Consistent with Naming and Filing

Every now and then someone on my team will ask me about how I tested a feature, or I’ll ask the question of myself, because I’ll need to do some regression testing.  If I don’t remember what I named my test plan when I saved it, or what folder I saved it to, I’ll waste a lot of time looking for it.  For this reason, I name all of my test plans consistently: the name begins with the JIRA ticket number, and then I include a brief description of the feature.  For example: “W-246- File resizing”. 

When I first started naming my test plans consistently, I just named them with the description, but that made them difficult to find because I could never remember what verbiage I used: was it “Resizing files” or “File resizing”?  Then I named them with just the JIRA ticket number, but locating them required two steps: first I needed to remember the ticket number by searching through JIRA, and then I needed to look up the test plan.  Naming the test plan with both the number and the description gives me two ways to find the plan, which speeds up the process. 

I also organize my test plans by feature.  For example, all of my test plans associated with messaging go in a Messaging folder.  And all of my test plans associated with file uploads go in a File Upload folder. 

Strategy Three: Have a Place for Shared Tests

As much as I love avoiding reinventing the wheel myself, I also enjoy helping others avoid it.  My team does a lot of API testing, and we use Postman for that purpose.  We have a shared workspace where I put all of our saved collections.  The collections are organized by API so they are easy to find.  Really long collections are organized in sub-folders by endpoint or by topic.  This is helpful not just for our testers, but also for our developers; they have mentioned to me that it’s much faster for them to reproduce and fix an issue when they can use saved requests instead of setting up their own. 

We save all of our regression test plans in Confluence.  They are organized by version number for major releases, and by API and date for smaller releases.  We use Confluence because it’s easy to collaborate on a test plan; we each add our name to the tests we run so we can see who is working on each section and which tests have been completed.  Saving the test plans this way makes it easy to go back and see what we tested, and it also makes it easy to duplicate and edit a plan for the next release. 

Strategy Four:  Leave Yourself Notes

Whenever I get a new piece of information, such as a test user’s credentials or a URL for a test environment, I say to myself “Am I likely to need this information again?” If it is likely, I make sure to add it to my notes.  I used to use a notebook for notes like this, but now I use Notepad++.  Keeping this information in saved files makes it easier to locate, instead of searching back through pages of a notebook.  I keep all my Notepad++ files in the same folder, and I name them things that will be recognizable, such as “Test Users” or “Email Addresses for Testing”. 

As in any company with more than one employee, we share files, and sometimes other people don’t file things in the places where I would expect them.  After getting really frustrated trying to find the same information over and over again, I created a spreadsheet for myself called “File Locations”. This spreadsheet has a column for what I would have named the file, and then a column with a link to get to the file.  This has saved me valuable time searching for files, and freed me from frustration.

When I have a piece of information that I need to save, but I know I will only need it temporarily, I save it in a Notepad++ file called “Random Notes”.  I periodically delete information that is no longer needed from the file to keep it from getting too long and hard to read. 

Organizing files, test plans, and information takes a little bit of time at first, but with practice it becomes second nature.  And it saves you the time and frustration of constantly searching for the information you need.  With the time you save, you can do more exploratory testing, which will help find new bugs; and you can write more test automation, which will free you up to do even more exploratory testing!

Logging, Monitoring, and Alerting

This week I’m writing about three things not often associated with testing: logging, monitoring, and alerting.  Perhaps you’ve taken advantage of logging in your testing, but monitoring and alerting seem like a problem for IT or DevOps.  However, a bug-free application doesn’t mean a thing if your users can’t get to it because the server crashed!  For this reason, it’s important to understand logging, monitoring, and alerting so that we as testers can participate in ensuring the health of our applications.

Logging:

Logging is simply recording information about what happens in an application.  This can be done through writing to a file or a database.  Often developers will include logging statements in their code to help determine what’s going on with the application below the UI.  This is especially helpful in applications that make calls to a number of servers or databases.

Recently I tested a notification system that passed a message from a function to a number of different channels.  Logging was so helpful in testing because it enabled me to follow the message through the channels.  If I hadn’t had good logging, I wouldn’t have had any way to figure out where the bug was when I didn’t get a message I was expecting.

Good log messages should be easy to access and easy to search.  You shouldn’t have to log on to some obscure remote desktop and sift through tens of thousands of entries with no line breaks.  One helpful tool for logging is Kibana– an open-source tool that lets you search and sort through logs in an easy-to-read format.

Good log messages should also be easy to understand and provide helpful information.  It’s so frustrating to find a log message about an error and discover that it says “An unknown error occurred”, or “Error TSGB-45667”.  Ask your developer if he or she can provide log messages that make it clear what went wrong and where in the code it happened.

Another helpful tactic for logging is to give each event a specific GUID as an identifier.  The GUID will stay associated with everything that happens with the event, so you can follow it as it moves from one area of an application to another. 
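
For example, here is a minimal sketch in Python of how a GUID could be attached to every log message for a single event; the event names and messages are made up for illustration:

import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def process_notification(message):
    # Generate one GUID for this event and include it in every log entry,
    # so the event can be followed from one area of the application to another.
    event_id = str(uuid.uuid4())
    logging.info("[%s] Notification received: %s", event_id, message)
    logging.info("[%s] Routing notification to the email channel", event_id)
    logging.info("[%s] Notification delivered", event_id)

process_notification("Your report is ready")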

Monitoring:

Monitoring means setting up automatic processes to watch the health of your application and the servers that run it.  Good monitoring ensures that any potential problems can be discovered and dealt with before they reach the end user.  For example, if it becomes clear that a server's disk space is reaching maximum capacity, space can be freed up or added before the server fails.

Things to monitor include:

  • server response times
  • load on the server
  • server errors, such as 500-level response errors
  • CPU usage
  • memory usage
  • disk space

One way to monitor application health is with a periodic health check or a ping.  A job is set up to make a request to the server every few minutes and record whether the response was positive or negative.  Monitoring can also happen through a tool that watches the number of requests to the server and records whether those requests were successful.  Data points such as response times and CPU usage can also be recorded and examined to see if there are any trends that might indicate that the application is unhealthy.  One example of a tool that monitors application and server health is AppDynamics.
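
To make the health check idea concrete, here is a minimal sketch in Python using the requests library; the URL and the five-minute interval are placeholders, not a recommendation:

import time

import requests

HEALTH_URL = "https://qa.example.com/health"  # placeholder health-check endpoint
CHECK_INTERVAL_SECONDS = 300                  # ping every five minutes

def check_health():
    # Record whether the application responded positively or negatively.
    try:
        response = requests.get(HEALTH_URL, timeout=10)
        healthy = response.status_code == 200
    except requests.RequestException:
        healthy = False
    print(time.strftime("%Y-%m-%d %H:%M:%S"), "healthy" if healthy else "UNHEALTHY")
    return healthy

while True:
    check_health()
    time.sleep(CHECK_INTERVAL_SECONDS)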

Alerting:

All the logging and monitoring in the world won’t be helpful if no one is watching to see if there are problems!  This is where alerting comes in.  Alerts can be set to notify the appropriate people so that immediate action can be taken when there is a problem.  

Some situations that might call for an alert would be:
  • CPU or memory usage goes above a certain threshold
  • Disk space goes below a certain threshold
  • The number of 500 errors goes above a certain level
  • A health check fails twice in a row
  • Response times are slower than expected
  • Load is higher than normal

There are a number of ways to alert people of problems.  Alerts can be set up that will send emails, text messages, or phone calls.  PagerDuty is one service that provides this alerting functionality.  An important thing to consider, however, is to set off-hours alerts only for serious cases in which users might be affected.  No one wants to be woken up in the middle of the night by an alert that says that the QA servers are down!  However, a problem in the QA environment could indicate an issue that could be seen in the production environment in the future.  So a less invasive alert, such as a message to a team chat room, could be set up for this situation.
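
As a rough sketch of that idea, here is how an alert could be routed in Python so that only serious production problems page someone off-hours, while everything else goes to a team chat room; the notify_pager and notify_chat helpers are hypothetical stand-ins for a paging service and a chat integration:

def notify_pager(message):
    # Placeholder: in a real setup this would call a paging service such as PagerDuty.
    print("PAGE: " + message)

def notify_chat(message):
    # Placeholder: in a real setup this would post to the team chat room.
    print("CHAT: " + message)

def route_alert(message, environment, severity):
    # Only wake someone up when users might be affected;
    # everything else gets a less invasive alert.
    if environment == "production" and severity == "critical":
        notify_pager(message)
    else:
        notify_chat(message)

route_alert("Health check failed twice in a row", "qa", "critical")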

You may be saying to yourself at this point, "But I'm a software tester!  It's not my job to set up logging, monitoring, and alerting for the company."  The health of your application is the responsibility of everyone who works on the application, including you!  While you might not have the clout to purchase server monitoring software, you still have the power to ask questions of your team, such as:
  • How can we troubleshoot user issues?
  • How do we know that we have enough servers to handle our application’s load?
  • How will we know if our API is responding correctly?
  • How will we know if a DDoS attack is being attempted on our application?
  • How will we know if our end users are experiencing long wait times?
  • How will we know if we are running out of disk space?

Hopefully these questions will motivate you and your team to set up logging, monitoring, and alerting that will ensure the health and reliability of your application.

The Positive Outcomes of Negative Testing

As software testers and automation engineers, we often think about the “Happy Path”- the path that the user will most likely take when they are using our application.  When we write our automated UI tests, we want to make sure that we are automating those Happy Paths, and when we write API automation, we want to verify that every endpoint returns a “200 OK” or similar successful response.

But it’s important to think about negative testing, in both our manual and automated tests.  Here are a few reasons why:

Our automated tests might be passing for the wrong reasons.

When I first started writing automated UI tests in JavaScript, I didn't understand the concept of promises.  I just assumed that when I made a request to locate an element, it wouldn't return that element until it was actually located.  I was so excited when my tests started coming back with the green "Passed" result, until a co-worker suggested I try to make the test fail by asserting on a different value.  It passed again, because it was actually asserting on the promise object itself, which always evaluated as true.  That taught me a valuable lesson: never assume that your automated tests are working correctly just because they are passing.  Be sure to run some scenarios where your tests should fail, and make sure that they do so.  This way you can be sure that you are really testing what you think you are testing.

Negative testing can expose improperly handled errors that could impact a user.

In API testing, any client-related error should result in a 400-level response, rather than a 500-level server error.  If you are doing negative testing and you discover that a 403 response is now coming back as a 500, this could mean that the code is no longer handling that use case properly.  A 500 response from the server could keep the user from getting the appropriate information they need for fixing their error, or at worst, it could crash the application.
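
As a simple illustration, here is what a negative API test could look like in Python with the requests library; the endpoint and payload are hypothetical, and the point is simply to assert that invalid input comes back as a client error rather than a 500:

import requests

BASE_URL = "https://qa.example.com/api"  # placeholder test environment

def test_invalid_payload_returns_client_error():
    # Send a request that is missing required information.
    response = requests.post(BASE_URL + "/users", json={"email": ""}, timeout=10)
    # A client mistake should come back as a 400-level response;
    # a 500 would mean the server is no longer handling this case properly.
    assert 400 <= response.status_code < 500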

Negative testing can find security holes.

Just as important as making sure that a user can log in to an application is making sure that a user can’t log into an application when they aren’t supposed to.  If you only run a login test with a valid username and password, you are missing this crucial area!  I have seen a situation where a user could log in with anything as the password, a situation where a user could log in with a blank password, and a situation where if both the username and password were wrong the user could log in.

It’s also crucial to verify that certain users don’t have access to parts of an application.  Having a carefully tested and functional Admin page won’t mean much if it turns out that any random user can get to it.
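
A similar sketch, again with a hypothetical endpoint and token, checks that a regular user's credentials cannot reach the Admin page:

import requests

BASE_URL = "https://qa.example.com"        # placeholder test environment
REGULAR_USER_TOKEN = "regular-user-token"  # placeholder non-admin credentials

def test_regular_user_cannot_reach_admin_page():
    response = requests.get(
        BASE_URL + "/admin",
        headers={"Authorization": "Bearer " + REGULAR_USER_TOKEN},
        timeout=10,
    )
    # A non-admin user should be refused, not shown the page.
    assert response.status_code in (401, 403)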

Negative testing keeps your database clean.

As I mentioned in my post two weeks ago on input validation, having good, valid data in your database will help keep your application healthy.  Data that doesn’t conform to expectations can cause web pages to crash or fail to load, or cause information to be displayed incorrectly.  The more negative testing you can do on your inputs, the more you can ensure that you will only have good data.

For every input field I am responsible for testing, I like to know exactly which characters are allowed.  Then I can run a whole host of negative tests to make sure that entries with the forbidden characters are refused.

Sometimes users take the negative path.

It is so easy, especially with a new feature that is being rushed to meet a deadline, to forget to test those user paths where they will hit the “Cancel” or “Delete” button.  But users do this all the time; just think about times where you have thought about making an online purchase and then changed your mind and removed an item from your cart.  Imagine your frustration if you weren’t able to remove something from your cart, or if a “Cancel” button didn’t clear out a form to allow you to start again.  User experience in this area is just as crucial as the Happy Path.

Software testing is about looking for unexpected behaviors, so that we find them before a user does.  When negative testing is combined with Happy Path testing, we can ensure that our users will have no unpleasant surprises.

Three Ways to Test Output Validation

Last week, I wrote about the importance of input validation for the security, appearance, and performance of your application.  An astute reader commented that we should think about output validation as well.  I love it when people give me ideas for blog posts!

There are three main things to think about when testing outputs:

1. How is the output displayed?  

A perfect example of an output that you would want to check the appearance of is a phone number.  Hopefully when a user adds a phone number to your application’s data store it is being saved without any parentheses, periods, or dashes.  But when you display that number to the user, you probably won’t want to display it as 8008675309, because that’s hard to read.  You’ll want the number to be formatted in a way that the user would expect; for US users, the number would be displayed as 800-867-5309 or (800) 867-5309.

Another example would be a currency value.  If financial calculations are made and the result is displayed to the user, you wouldn’t want the result displayed as $45.655, because no one makes payments in half-pennies.  The calculation should be rounded or truncated so that there are only two decimal places.
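
Here is a small sketch in Python of both display checks: formatting a stored ten-digit phone number the way a US user would expect, and limiting a calculated currency value to two decimal places.  The formatting rules are just the examples above, not a universal standard:

def format_phone_number(digits):
    # Stored as "8008675309", displayed as "(800) 867-5309".
    return "({}) {}-{}".format(digits[0:3], digits[3:6], digits[6:10])

def format_currency(amount):
    # Limit a calculated value such as 45.655 to two decimal places for display.
    return "${:.2f}".format(amount)

print(format_phone_number("8008675309"))  # (800) 867-5309
print(format_currency(45.655))            # $45.65 -- no half-pennies

In a real financial application a decimal type would be a safer choice than floating-point values, but the idea is the same.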

2. Will the result of a calculation be saved correctly in the database?

Imagine that you have an application that takes a value for x and a value for y from the user, adds them together, and stores them as z.  The data type for x, y, and z is set to tinyint in the database.  If you're doing a calculation with small numbers, such as when x is 10 and y is 20, this won't be a problem.  But what happens if x is 255 (the upper limit of tinyint) and y is 1?  Now your calculated value for z is 256, which is more than can be stored in the tinyint field, and you will get a server error.

Similarly, you’ll want to make sure that your calculation results don’t go below zero in certain situations, such as an e-commerce app.  If your user has merchandise totaling $20, and a discount coupon for $25, you don’t want to have your calculations show that you owe them $5!

3. Are the values being calculated correctly?

This is especially important for complicated financial applications. Let’s imagine that we are testing a tax application for the Republic of Jackvonia.  The Jackvonia tax brackets are simple:

Income               Tax Rate
$0 – $25,000         1%
$25,001 – $50,000    3%
$50,001 – $75,000    5%
$75,001 – $100,000   7%
$100,001 and over    9%

There is only one type of tax deduction in Jackvonia, and that is the dependents deduction:

Number of Dependents   Deduction
1                      $100
2                      $200
3 or more              $300

The online tax calculator for Jackvonia residents has an income field, which can contain any dollar amount from 0 to one million dollars; and a dependents field, which can contain any whole number of dependents from 0 to 10.  The user enters those values and clicks the “Calculate” button, and then the amount of taxes owed appears.

If you were charged with testing the tax calculator, how would you test it?  Here’s what I would do:

First, I would verify that a person with $0 income and 0 dependents would owe $0 in taxes.

Next, I would verify that it was not possible to owe a negative amount of taxes: if, for example, a person made $25,000 and had three dependents, they should owe $0 in taxes, not -$50.

Then I would verify that the tax rate was being applied correctly at the boundaries of each tax bracket.  So a person who made $1 and had 0 dependents should owe $.01, and a person who made $25,000 and had 0 dependents should owe $250.  Similarly, a person who made $25,001 and had 0 dependents should owe $750.03 in taxes.  I would continue that pattern through the other tax brackets, and would include a test with one million dollars, which is the upper limit of the income field.

Finally, I would test the dependents calculation. I would test with 1, 2, and 3 dependents in each tax bracket and verify that the $100, $200, or $300 tax deduction was being applied correctly. I would also do a test with 4, 5, and 10 dependents to make sure that the deduction was $300 in each case.

This is a lot of repetitive testing, so it would definitely be a good idea to automate it. Most automation frameworks allow a test to process a grid or table of data, so you could easily test all of the above scenarios and even add more for more thorough testing.
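
Since the Jackvonia rules are spelled out above, here is a hedged sketch of that data-driven approach in Python with pytest.  The calculate_tax function is just an illustrative implementation of the fictional rules (whole income taxed at the bracket rate, dependents deduction capped at $300, tax never below $0), not anyone's production code:

import pytest

BRACKETS = [(25000, 0.01), (50000, 0.03), (75000, 0.05), (100000, 0.07)]

def calculate_tax(income, dependents):
    # The whole income is taxed at the rate of the bracket it falls into;
    # anything over $100,000 is taxed at 9%.
    rate = 0.09
    for upper_limit, bracket_rate in BRACKETS:
        if income <= upper_limit:
            rate = bracket_rate
            break
    deduction = min(dependents, 3) * 100   # $100 per dependent, capped at $300
    return max(round(income * rate - deduction, 2), 0)

@pytest.mark.parametrize("income, dependents, expected", [
    (0, 0, 0),             # no income, no tax
    (25000, 3, 0),         # deduction larger than the tax owed: $0, not -$50
    (1, 0, 0.01),          # near the bottom of the 1% bracket
    (25000, 0, 250),       # top of the 1% bracket
    (25001, 0, 750.03),    # bottom of the 3% bracket
    (1000000, 0, 90000),   # upper limit of the income field
    (50001, 2, 2300.05),   # 5% bracket with a $200 deduction
])
def test_tax_calculator(income, dependents, expected):
    assert calculate_tax(income, dependents) == expected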

Output validation is so important because if your users can’t trust your calculations, they won’t use your application!  Remember to always begin with thinking about what you should test, and then design automation that verifies the correct functionality even in boundary cases.

Four Reasons You Should Test Input Validation (Even Though It’s Boring)

When I first started in software testing, I found it fun to test text fields.  It was entertaining to discover what would happen when I put too many characters in a field.  But as I entered my fourth QA job and discovered that once again I had a contact form to test, my interest started to wane.  It's not all that interesting to input the maximum number of characters, the minimum number of characters, one too many characters, one too few characters, and so on for every text field in an application!

However, it was around this time that I realized that input validation is extremely important.  Whenever a user has the opportunity to add data in an application, there is the potential of malicious misuse or unexpected consequences.  Testing input validation is a critical activity for the following four reasons:

1. Security

Malicious users can exploit text fields to get information they shouldn’t have.  They can do this in three ways:

  • Cross-site scripting– an attacker enters a script into a text field.  If the text field does not have proper validation that strips out scripting characters, the value will be saved and the script will then execute automatically when an unsuspecting user navigates to the page.  The executed script can return information about the user’s session id, or even pop up a form and prompt the user to enter their password, which then gets written to a location the attacker has access to.
  • SQL injection– if a text field allows certain characters such as semicolons, it’s possible that an attacker can enter values into the field which will fool the database into executing a SQL command and returning information such as the usernames and passwords of all the users on the site.  It’s even possible for an attacker to erase a data table through SQL injection.
  • Buffer overflow attack- if a variable is configured to have enough memory for a certain number of characters, but it’s possible to enter a much larger number of characters into the associated text field, the memory can overflow into other locations.  When this happens, an attacker can exploit this to gain access to sensitive information or even manipulate the program.
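
As a small illustration of what automated negative tests for these attacks could look like, here is a sketch in Python with pytest.  The is_valid_username rule is purely an example (letters, digits, spaces, hyphens, and apostrophes, fifty characters at most); the important part is that known attack strings are refused:

import re

import pytest

def is_valid_username(value):
    # Example validation rule: allow only letters, digits, spaces,
    # hyphens, and apostrophes, and cap the length at 50 characters.
    return bool(re.fullmatch(r"[A-Za-z0-9 '\-]{1,50}", value))

@pytest.mark.parametrize("malicious_input", [
    "<script>alert('session')</script>",   # cross-site scripting attempt
    "Robert'); DROP TABLE Students;--",    # SQL injection attempt
    "A" * 10000,                           # oversized input, as in a buffer overflow attempt
])
def test_malicious_input_is_rejected(malicious_input):
    assert not is_valid_username(malicious_input)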

2. Stability

When a user is able to input data that the application is not equipped to handle, the application can react in unexpected ways, such as crashing or refusing to save.  Here are a couple of examples:
  • My Zip code begins with a 0.  I have encountered forms where I can’t save my address because the application strips the leading 0 off of the Zip code and then tells me that my Zip code has only four digits.  
  • I have a co-worker who has both a hyphen and an apostrophe in his last name.  He told me that entering his name frequently breaks the forms he is filling out.

3. Visual Consistency

When a field has too many characters in it, it can affect the way a page is displayed.  This can be easily seen when looking at any QA test environment.  For example, if a list of first names and last names is displayed on a page of contacts, you will often see that some astute tester has entered “Reallyreallyreallyreallyreallylongfirstname Reallyreallyreallyreallyreallylonglastname” as one of the contacts.  If a name like this causes the contact page to be excessively wide and need a horizontal scroll bar, then a real user in the production environment could potentially cause the page to render in this way.

4. Health of the Database

When fields are not validated correctly, all kinds of erroneous data can be saved to the database.  This can affect both how the application runs and how it displays information.

The phone number field is an excellent example of how unhealthy data can affect an application.  I worked for a company where for years phone numbers were not validated properly.  When we were updating the application, we wanted to automatically format phone numbers so they would display attractively in this format:  (800) 555-1000.  But because there were values in the database like "Dad's number", there was no way to format them, which caused an error on the page.

Painstakingly validating input fields can be very tedious, but the above examples demonstrate why it is so important.  The good news is that there are ways to alleviate the boredom.  Automating validation checks can keep us from having to manually run the same tests repeatedly.  Monkey-testing tools can help flush out bugs.  And adding a sense of whimsy to testing can help keep things interesting.  I have all the lyrics to "Frosty the Snowman" saved in a text file.  Whenever I need to test the allowed length of a text field, I paste all or some of the lyrics into the field.  When a developer sees database entries with "Frosty the Snowman was a j", they know I have been there!

Easy Free Automation Part VIII: Accessibility Tests

Accessibility in the context of a software application means that as many people as possible can use the application easily.  When making an application accessible, we should consider users with limited vision or hearing, limited cognitive ability, and limited dexterity.  Accessibility also means that users from all over the world can use the application, even if their language is different from that of the developers who created it.

In this final post in my “Easy Free Automation” series, I’ll be showing two easy ways to test for accessibility.  I’ll be using Python and Selenium Webdriver.  You can download the simple test here.

To run the test, you will need to have Python and Selenium installed.  You can find instructions for installing Python in Easy Free Automation Part I: Unit Tests.  To install Selenium, open a command window and type pip install selenium.  You may also need to have Chromedriver installed.  You can find instructions for installing it here.

Once you have downloaded the test file and installed all the needed components, navigate to the test folder in the command line and type python3 easyFreeAccessibilityTest.py.  (Depending on how Python is installed on your machine, you may need to type python instead of python3.)  The test should run, the Chrome browser should open and close when the test is completed, and in the command line you should see these two log entries:
Alt text is present
Page is in German

Let’s take a look at these two tests to see what they do.  The first test verifies that an image has an alt text.  Alt texts are used to provide a description of an image for any user who might not be able to see the image.  A screen-reading application will read the alt text aloud so the user will know what image is portrayed.

driver.get("https://www.amazon.com/s?k=goodnight+moon&ref=nb_sb_noss_1")
elem = driver.find_element_by_class_name("s-image")
val = elem.get_attribute('alt')
if val == 'Goodnight Moon':
    print('Alt text is present')
else:
    print('Alt text is missing or incorrect')

In the first line, we are navigating to an Amazon.com web page where we are searching for the children’s book “Goodnight Moon”.  In the next line, we are locating the book image.  In the third line, we are getting the ‘alt’ attribute of the web element and assigning it to the variable ‘val’.  If there is no alt text, this variable will remain null.

Finally we are using an if statement to assert that the alt text is correct.  If the alt text is not the title of the book, we will get a message that the text is missing.

The second test verifies that we are able to change the language of the Audi website to German.

driver.get("https://www.audi.com/en.html")
driver.find_element_by_link_text("DE").click()
try:
    elem = driver.find_element_by_link_text("Kontakt")
    if elem:
        print('Page is in German')
except:
    print('Page is not in German')

In the first line, we navigate to the Audi website.  In the second line, we find the button that will change the language to German, and we click it.  Then we look for the element with the link text of “Kontakt”.  If we find the element, we can conclude that we are on the German version of the page.  If we do not find the element, the page has not been changed to German.  The reason I am using a try-except block here is that if the element with the link text is not located, an error will be thrown.  I’m catching the error so that an appropriate error message can be logged and the test can end properly.

There are other ways to verify things like alt texts and page translations.  There are CSS scanning tools that will verify the presence of alt texts and rate how well a page can be read by a screen reader.  There are services that will check the internationalization of your site with native speakers of many different languages.  But if you are looking for an easy, free way to check these things, this simple test script provides you with a way to get started.

For the last eight weeks, we’ve looked at easy, free ways to automate each area of the Automation Test Wheel.  I hope you have found these posts informative!  If you missed any of the posts, I hope you’ll go back and take a look.  Remember also that each week has a code sample that can be found at my Github page.  Happy automating!

Easy Free Automation Part VII: Load Tests

Load testing is a key part of checking the health of your application.  Just because you get a timely response when you make an HTTP request in your test environment doesn’t mean that the application will respond appropriately when 100,000 users are making the same request in your production environment.  With load testing, you can simulate different scenarios as you make HTTP calls to determine how your application will behave under real-world conditions.

There are a wide variety of load testing tools available, but many of them require a subscription.  Both paid and free tools can often be confusing to use or difficult to set up.  For load testing that is free, easy to install, and fairly easy to set up, I recommend K6.

As with every installment of this “Easy Free Automation” series, I’ve created an automation script that you can download here.  In order to run the load test script, you’ll need to install K6, which is easy to do with these instructions.

Once you have installed K6 and downloaded your test script, open a command window and navigate to the location where you downloaded the script.  Then type k6 run loadTestScript.js.  The test should run and display a number of metrics as the result.

Let’s take a look at what this script is doing.  I’m making four different requests to the Swagger Pet Store.  (For more information about the Swagger Pet Store, take a look at this blog post and the posts that follow it.)  I’ve kept my requests very simple to make it easier to read the test script: I’m adding a pet with just a pet name, I’m retrieving the pet, I’m updating the pet by changing the name, and I’m deleting the pet.

import http from "k6/http";
import { check, sleep } from "k6";

In the first two lines of the script, I'm importing what the script needs: the http module that allows me to make HTTP requests, and the check and sleep functions that allow me to make assertions and put a wait in between requests.

export let options = {
  vus: 1,
  duration: "5s"
};

In this section, I’m setting the options for running my load tests.  The word “vus” stands for “virtual users”, and “duration” describes how long in seconds the test should run.

var id = Math.floor(Math.random() * 10000);
console.log("ID used: " + id);

Here I’m coming up with a random id to use for the pet, which will get passed through one complete test iteration. 

var url = "http://petstore.swagger.io/v2/pet";
var payload = JSON.stringify({ id: id, name: "Grumpy Cat" });
var params = { headers: { "Content-Type": "application/json" } };
let postRes = http.post(url, payload, params);

This is how the POST request is set up.  First I set the URL, then the payload, then the headers; then I do the POST request. 

check(postRes, {
  "post status is 200": (r) => r.status == 200,
  "post transaction time is OK": (r) => r.timings.duration < 200
});
sleep(1);

Once the POST has executed, and the result has been assigned to the postRes variable, I check to make sure that the status returned was 200, and that the transaction time was less than 200 milliseconds.  Finally, I have the script sleep for one second. 
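
The GET, PUT, and DELETE requests in the downloadable script follow the same pattern.  As a rough sketch (not a copy of the exact script), retrieving the pet could look like this:

let getRes = http.get(url + "/" + id);
check(getRes, {
  "get status is 200": (r) => r.status == 200,
  "get transaction time is OK": (r) => r.timings.duration < 200
});
sleep(1);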

Now let’s take a look at the load test output:

INFO[0005] ID used: 1067

This is the id created for the test, which I set up to be logged in line 12 of the script.  At the beginning of each iteration of the script, a new id will be created and logged. 

✓ put status is 200
✓ put transaction time is OK
✓ delete status is 200
✓ delete transaction time is OK
✓ post status is 200
✓ post transaction time is OK
✓ get status is 200
✓ get transaction time is OK

Here are the results of my assertions.  All the POSTs, GETs, PUTs, and DELETEs were successful.

http_req_duration……….: avg=27.56ms min=23.16ms med=26.68ms max=34.69ms p(90)=31.66ms p(95)=33.18ms  

This section shows metrics about the duration of each request.  The average request duration was 27.56 milliseconds, and the maximum request time was 34.69 milliseconds.

iterations…………….. : 1       0.199987/s
vus…………………… : 1         min=1 max=1
vus_max……………….. : 1    min=1 max=1

This section shows how many complete iterations were run during the test and at what frequency; how many virtual users there were; and what the maximum number of virtual users was.

Obviously, this wasn't much of a load test, because we only used one user and it only ran for five seconds!  Let's make a change to the script and see how our results change.  First we'll leave the number of virtual users at 1, but we'll set the test to run for a full minute.  Change line 6 of the script to duration: "1m", and run the test again with the k6 run loadTestScript.js command.

http_req_duration……….: avg=26.13ms min=22.3ms  med=25.86ms max=37.45ms p(90)=27.57ms  p(95)=33.56ms

The results look very similar to our first test, which isn’t surprising, since we are still using just one virtual user. 

iterations……………..: 14      0.233333/s

Because we had a longer test time, we went through several more iterations, at the rate of .23 per second.

Now let’s see what happens when we use 10 virtual users.  Change line 5 of the test to: vus: 10, and run the test again.

✓ delete transaction time is OK
✓ post status is 200
✓ post transaction time is OK
✗ get status is 200
     ↳  83% — ✓ 117 / ✗ 23
✓ get transaction time is OK
✓ put status is 200
✓ put transaction time is OK
✗ delete status is 200
     ↳  77% — ✓ 108 / ✗ 31

We are now seeing the impact of adding load to the test; some of our GET requests and DELETE requests failed.

http_req_duration……….: avg=27.8ms  min=21.17ms med=26.67ms max=63.74ms p(90)=33.08ms p(95)=34.98ms

Note also that our maximum duration was much longer than our duration in the previous two test runs.

This is by no means a complete load test; it's just an introduction to what can be done with the K6 tool.  It's possible to set up the test to have realistic ramp-up and ramp-down times, where there's less load at the beginning and end of the test and more load in the middle.  You can also create your own custom metrics to make it easier to analyze the results of each request type.  If you ever find yourself needing to do some quick free load testing, K6 may be the tool for you.

Next week, I’ll close out the “Easy Free Automation” series with a look at accessibility tests!