Why I’ll Be Using Cypress For UI Automation

I’ve mentioned in previous posts that I don’t do much UI automation.  This is because the team projects I am currently on have almost no UI, and it’s also because I’m a strong believer that we should automate as much as we can at the API level.  But I had an experience recently that got me excited about UI testing again!

I was working on a side project, and I needed to do a little UI automation to test it out.  I knew I didn’t want to use Selenium Webdriver, because every time I go to use Webdriver I have so much trouble getting a project going.  Here’s a perfect example: just one year ago, I wrote a tutorial, complete with its own GitHub repo, designed to help people get up and running with Webdriver really quickly.  And it doesn’t work any more.  When I try to run it, I get an error message about having the wrong version of Chrome.  And that is why I hate Webdriver: it always seems like I have to resolve driver and browser mismatches whenever I want to do anything.

So instead of fighting with Webdriver, I decided to try Cypress.  I had heard good things about it from people at my company, so I thought I’d try it for myself.  First I went to the installation page.  I followed the directions to install Cypress with npm, and in a matter of seconds it was installed.  Then I started Cypress with the npx cypress open command, and not only did it start right up, I also got a welcome screen that told me there were a whole bunch of example tests installed that I could try out!  And it automatically detected my browser, and set the tests to run on that version!  When I clicked the Run All Tests button, it started running through all the example tests.  Amazing!  In less than five minutes, I had automated tests running.  No more “Chrome version must be between 71 and 75” messages for me!

The difference between Cypress and Webdriver is that Cypress runs directly in the browser, as opposed to communicating with the browser from the outside.  So there is never a browser-driver mismatch; if you want to run your tests in Firefox, just type npx cypress run --browser firefox, and it will open up Firefox and start running the tests.  It’s that easy!  In comparison, think about the last time you set up a new Webdriver project, and how long it took to find the Firefox driver you needed, install it in the right place, make sure you had the PATH configured, and reference it in your test script.
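To show just how little code a first test needs, here’s a minimal Cypress spec.  It visits Cypress’s own public demo site; the test names and the asserted text are just placeholders you’d swap out for your own application’s values:

```javascript
// cypress/integration/first_test.spec.js
// A minimal Cypress spec. The site below is Cypress's public demo app;
// replace the URL and assertion with values from your own application.
describe('my first test', () => {
  it('loads the page and finds some expected text', () => {
    cy.visit('https://example.cypress.io'); // open the page in the test browser
    cy.contains('Kitchen Sink');            // assert this text appears somewhere on the page
  });
});
```

Save this file under the Cypress project folder and it will show up in the Cypress runner, ready to execute with a click.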

Here are some other great features of Cypress:

  • There’s a great tutorial that walks you through how to write simple tests.
  • Every test step has a screenshot associated with it, so you can scroll back in time to see what the browser looked like at each step.
  • Whenever you make a change to your test and save, the test automatically runs in the browser.  You don’t need to go back to the command line and rerun a command.
  • You don’t have to act like a user.  For example, you can make a simple HTTP request to get an authentication token instead of automating the typing of the username and password in the login fields.  
  • You can stub out methods.  If you wanted to test what happens when a certain request returned an error, you can create a stub method that always returns an error and call that instead of the real method.
  • You can mock HTTP requests.  You can set an HTTP request to return a 404 and see what that response looks like.
  • You can spy on a function to see how many times it was called and what values it was called with.
  • You can manipulate time using the cy.clock() method; for example, you can simulate that a long period of time has elapsed in order to test things like authentication timeouts.
  • You can run tests in parallel (although not on the same browser), and you can run tests in a Continuous Integration environment.
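A few of these features can be seen together in one short sketch.  This is illustrative only: the route, page URLs, and message text are hypothetical stand-ins for your own app, and cy.intercept() is the request-mocking command in Cypress 6 and later (older versions used cy.server() and cy.route()):

```javascript
// Illustrative sketch: mocking an HTTP request and manipulating the clock in Cypress.
// The /api/users route, page paths, and on-screen messages are made-up examples.
describe('error handling and timeouts', () => {
  it('shows a message when the users request fails', () => {
    // Force the request to return a 404 without touching the real server
    cy.intercept('GET', '/api/users', { statusCode: 404, body: {} }).as('getUsers');
    cy.visit('/dashboard');
    cy.wait('@getUsers');                // wait for the mocked request to fire
    cy.contains('Unable to load users'); // hypothetical error text in the UI
  });

  it('logs the user out after a long idle period', () => {
    cy.clock();                     // take control of the browser's clock
    cy.visit('/dashboard');
    cy.tick(30 * 60 * 1000);        // pretend 30 minutes have passed
    cy.contains('Session expired'); // hypothetical timeout message
  });
});
```

Notice that neither test needs a slow real server or a real 30-minute wait; that’s a big part of why Cypress tests tend to be fast and stable.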
In addition, the Cypress documentation is so clear!  As I was investigating Cypress, it was so easy to find what I was looking for.  
If you are tired of fighting with Webdriver and are looking for an alternative, I highly recommend that you try Cypress.  In less than ten minutes, you can have a simple automated test up and running, and that’s a small investment of time that can reap big rewards!


“Less” is More, Part II: Headless Browser Testing

Last week we talked about serverless architecture, and we learned that it’s not really serverless at all!  This week we’re going to be learning about a different type of “less”: headless browser testing.

Headless browser testing means testing the UI of an application without actually opening up the browser.  The program uses the HTML and CSS files of the app to determine exactly what is on the page without rendering it.

Why would you want to test your application headlessly?  Because without waiting for the browser to render each page, your tests will be faster and less flaky.  You might think that it’s impossible to actually see the results of your testing by using the headless method, but that’s not true!  Headless testing applications can use a webpage’s assets to determine what the page will look like and take a screenshot.  You can also find the exact coordinates of elements on the page.

Something important to note is that headless browser testing is not browserless testing.  When you run a headless test, you are actually running it in a specific browser; it’s just that the browser isn’t rendering.  Chrome, Firefox, and other browsers have added code that makes it possible to run the browser headlessly.  
To investigate headless browser testing, I tried out three different applications: Cypress, Puppeteer, and TestCafe.  All three applications can run in either regular browser mode or headless mode, although Puppeteer is set to be headless by default.  I found great tutorials for all three, and I was able to run a simple headless test with each very quickly.  
Cypress is a really great UI testing tool that you can get up and running in literally minutes.  (I’m so excited about this tool that it will be the subject of next week’s blog post!)  You can follow their excellent documentation to get started:  https://docs.cypress.io/guides/getting-started/installing-cypress.html.  Once you have a test up and running, you can try running it headlessly in Chrome by using this command:  cypress run --headless --browser chrome.
Puppeteer is a Node.js library that works specifically with Chrome.  To learn how to install and run it, I used this awesome tutorial by Nick Chikovani.  I simply installed Puppeteer with npm and tried out his first test example.  It was so fun to see how easy it is to take a screenshot headlessly!
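A first Puppeteer script looks something like this sketch.  It assumes you’ve run npm install puppeteer in your project folder; Puppeteer launches Chrome headlessly by default, so no extra flag is needed:

```javascript
// screenshot.js -- a minimal Puppeteer sketch: take a screenshot headlessly.
// Assumes `npm install puppeteer` has been run in this project.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();       // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');         // load the page without rendering it on screen
  await page.screenshot({ path: 'example.png' }); // save an image of what the page would look like
  await browser.close();
})();
```

Run it with node screenshot.js and you’ll find example.png in your working directory, even though no browser window ever appeared.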
Finally, I tried out TestCafe.  To install, I simply ran npm install -g testcafe.  Then I created a basic test file with the instructions on this page.  To run my test headlessly, I used this command: testcafe "chrome:headless" test1.js.
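For reference, here’s a sketch of what a basic TestCafe file like test1.js might contain.  It uses the sample form page from TestCafe’s own documentation; the name typed into the form is just a placeholder:

```javascript
// test1.js -- a minimal TestCafe test, runnable with:
//   testcafe "chrome:headless" test1.js
// The page and element ids come from TestCafe's example app.
import { Selector } from 'testcafe';

fixture('Getting Started')
  .page('https://devexpress.github.io/testcafe/example');

test('Submit the form with a name', async t => {
  await t
    .typeText('#developer-name', 'Jane Tester')                      // fill in the name field
    .click('#submit-button')                                         // submit the form
    .expect(Selector('#article-header').innerText).contains('Jane Tester'); // verify the thank-you page
});
```

TestCafe handles waiting for page loads and element availability automatically, which keeps tests like this short.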
With only three basic tests, I barely scratched the surface of what these applications can do.  But I was happy to learn just how easy it is to set up and start working with headless browser tests.  I hope you find this helpful as you explore running your UI tests headlessly.  

“Less” is More, Part I: Serverless Architecture

Have you heard of serverless architecture and wondered what it could possibly be?  How could an application be deployed without a server?  Here’s the secret: it can’t.

Remember a few years ago when cloud computing first came to the public, and it was common to say “There is no cloud, it’s someone else’s computer”?  Now we can say, “There is no serverless; you’re just using someone else’s server”.

Serverless architecture means using a cloud provider for the server.  Often the same cloud provider will also supply the database, an authentication service, and an API gateway.  Examples of serverless architecture providers include AWS (Amazon Web Services), Microsoft Azure, Google Cloud, and IBM Cloud Functions.

Why would a software team want to use serverless architecture?  Here are several reasons:

  • You don’t have to reinvent the wheel.  When you sign up to use serverless architecture, you get many features such as an authentication service, a backend database, and monitoring and logging directly in the service.  
  • You don’t have to purchase and maintain your own equipment.  When your company owns its own servers, it’s responsible for making sure they are safely installed in a cool place.  The IT team needs to make sure that all the servers are running efficiently and that they’re not running out of disk space.  But when you are using a cloud provider’s servers, that responsibility falls to the provider.  There’s less initial expense for you to get started, and less for you to worry about.  
  • The application can scale up and down as needed.  Most serverless providers automatically scale the number of servers your app is running on depending on how much demand there is for your app at that moment.  So if you have an e-commerce app and you are having a big sale, the provider will add more servers to your application for as long as they’re needed, then scale back down when the demand wanes.
  • With many serverless providers, you only pay for what you use.  So if you are a startup and have only a few users, you’ll only be paying pennies a month.  
  • Applications are really easy to deploy with serverless providers.  They take care of most of the work for you.  And because the companies that are offering cloud services are competing with each other, it’s in their best interest to make their development and deployment processes as simple as possible.  So deployments will likely get even easier in the future.  
  • Monitoring is usually provided automatically.  It’s easy to take a look at the calls to the application and gather data about its performance, and it’s easy to set up alarms that will notify you when something’s wrong.
Of course, nothing in life is perfect, and serverless architecture is no exception.  Here are some drawbacks to using a serverless provider:

  • There may be some things you want to do with your application that your provider won’t let you do.  If you set up everything in-house, you’ll have more freedom.
  • If your cloud provider goes down, taking your app with it, you are completely helpless to fix it.  Recently AWS was the victim of a DDoS attack.  In an effort to fight off the attack, AWS blocked traffic from many IP addresses.  Unfortunately some of those addresses belonged to legitimate customers, so the IP blocking rendered their applications unusable.  
  • Your application might be affected by other customers.  For example, a company that encodes video files for streaming received a massive upload of videos from one new customer.  It swamped the encoding company, which meant that other customers had to wait hours for their videos to be processed.  
How do you test serverless architecture?  The simplest answer is that you can test it the same way you would test an in-house application!  You’ll be able to access your web app through your URL in the usual way.  If your application has an API, you can make calls to the API using Postman or curl or your favorite API testing tool.  
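For example, an API behind a serverless gateway can be exercised exactly like any other HTTP endpoint.  Here’s a rough sketch using Node’s built-in fetch (Node 18+); the gateway URL, request body, and response field are all hypothetical, so substitute the values from your own application:

```javascript
// Sketch: calling a (hypothetical) serverless API endpoint just like any other API.
// Requires Node 18+ for built-in fetch; the URL and field names below are made up.
const BASE_URL = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod';

async function checkRideRequest() {
  const response = await fetch(`${BASE_URL}/rides`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ pickup: { lat: 47.6, long: -122.3 } }),
  });
  console.log('status:', response.status); // expect 200, or 401/403 if auth is required

  const body = await response.json();
  console.log('ride id:', body.rideId);    // hypothetical response field
}

checkRideRequest();
```

The point is that nothing about the serverless backend changes how you test from the outside; the same checks you’d write against an in-house API apply here.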
If you are given login access to the serverless provider, you can also do things like query the datastore, see how the API gateway is set up, and look at the logs.  You’ll probably have more insight into how your application works than you do with a traditionally hosted application.  
The best way to learn how serverless architecture works is to play around with it yourself!  You can sign up for a free AWS account, and do this fun tutorial.  The tutorial takes only two hours to complete, and in it you see how to create a web application with an authentication service, an API gateway and a back-end data store.  It’s a little bit out of date, so there are some steps where the links or buttons are a bit off from the instructions, but it’s not too hard to figure out.  When you get to the end, check out this Stack Overflow article to correct any authentication errors.  
After you get some experience with serverless architecture, you will have no trouble figuring out all kinds of great ways to test it.  Next week, I’ll talk about another important “Less”.  Be sure to watch for my next post to find out what it is!

Book Review: Agile Testing Condensed

I read a ton of books, and I’ve found that reading books about testing is my favorite way to learn new technical skills and testing strategies.  James Clear, an author and expert on creating good habits, says: “Reading is like a software update for your brain. Whenever you learn a new concept or idea, the ‘software’ improves. You download new features and fix old bugs.” As a software tester, I love this sentiment!



I thought it would be fun this year to review one testing-related book a month in my blog, and what better book to start with than Agile Testing Condensed by Janet Gregory and Lisa Crispin?  They literally “wrote the book” on agile testing a decade ago, then followed it up with a sequel called More Agile Testing in 2014.  Now they have a condensed version of their ideas, and it’s a great read!

This book should be required reading for anyone involved in creating or testing software.  It would be especially helpful for those in management, who might not have much time to read but want to understand the key components of creating and releasing software with high quality.  The book took only a couple of hours for me to read, and I learned a lot of new concepts in the process.  

One of my favorite things about the electronic version of the book is that it comes with a ton of hyperlinks.  So if the authors mention a concept that you aren’t familiar with, such as example mapping, it comes with a link that you can click to go to the original source of the concept and read the description.  But if you are familiar with the concept, you can just skip the link and read on.  What a great way to keep the text short and make reading more interactive!

The book is divided into four sections:

Foundations: This is where the term “Agile Testing” is defined, and where the authors explain how a whole software team can get involved in testing.  

Testing Approaches: In this section, the authors show how important it is to come up with examples when designing software, and how a tester’s role can be that of a question asker, bringing up use cases that no one else may have thought of.  They also define exploratory testing and offer up some great exploratory testing ideas, and they explain the difference between Continuous Delivery and Continuous Deployment.  

Helpful Models: This section discusses two models that can be used to help teams design a good testing strategy: the Agile Testing Quadrants and the Test Automation Pyramid.  There’s also a great section on defining “Done”; “Done” can mean different things in different contexts.

Agile Testing Today: This was my favorite part of the book!  The authors asked several testing thought leaders what they saw as the future of the software tester’s role.  I loved the ideas that were put forth in the responses.  Some of the roles we can play as agile testers (suggested by Aldo Rall) are: 

  • Consultant
  • Test engineering specialist
  • Agile scholar
  • Coach 
  • Mentor
  • Facilitator
  • Change agent
  • Leader
  • Teacher
  • Business domain scholar

I found myself nodding along with each of these descriptions, thinking “Yes, I do that, and so do all the great testers I know.” 

I recommend that you purchase this book, read it, put its ideas to use on your team, and then share those ideas with everyone in your company, especially those managers who wonder why we still need software testers!  In just 101 pages, Agile Testing Condensed shows us how exciting it is to use testing skills to help create great software.  

Your Future Self Will Thank You

Recently I learned a lesson about the importance of keeping good records.  I’ve always kept records of what tests I ran and whether they passed, but I have now learned that there’s something else I should be recording.  Read the story below to find out what it is!

I have mentioned in previous posts that I’ve been testing a file system.  The metadata used to access the files is stored in a non-relational database.  As I described in this post, non-relational databases store their data in document form rather than in the table form found in SQL databases.

Several months ago, my team made a change to the metadata for our files. After deploying the change, we discovered that older files couldn’t be downloaded.  It turned out that the metadata change meant the older files were no longer recognized, because their metadata was different.  The bug was fixed, making the change backwards-compatible with the older files.

I added a new test to our smoke test suite that would request a file with the old metadata. Now, I thought, if a change was ever made that would affect that area, the test would fail and the problem would be detected.

A few weeks ago, my team made another change to the metadata.  The code was deployed to the test environment, and shortly afterwards, someone discovered that there were files that couldn’t be downloaded anymore.

I was perplexed!  Didn’t we already have a test for this?  When I met with the developer who investigated the bug, I found out that there was an even older version of the metadata that we hadn’t accounted for.

Talking this over with the developers on my team, I learned that a big difference between SQL databases and non-relational databases is that when a schema change is made to a relational database, it goes through and updates all the records.  For example, if you had a table with first names and last names, and someone wanted to update the table to now contain middle names, every existing record would be modified to have a null value for the middle name:

FirstName    MiddleName    LastName
Prunella     NULL          Prunewhip
Joe          Bob           Schmoe

With non-relational databases, this is different.  Because each entry is its own document, a field that was never set isn’t stored as a null; the name-value pair simply doesn’t exist at all.  To use the above example, in a non-relational database, Prunella wouldn’t have a “MiddleName” name-value pair: 

{
    "FirstName":"Prunella",
    "LastName":"Prunewhip"
},
{
    "FirstName":"Joe",
    "MiddleName":"Bob",
    "LastName":"Schmoe"
}

If the code relies on retrieving the value for MiddleName, that code could throw an exception, because there’d literally be nothing to retrieve.
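A small JavaScript sketch makes the failure mode concrete.  Reading a field that doesn’t exist yields undefined, and code that assumes the field is always there blows up on the older document shape; the function names here are made up for illustration:

```javascript
// Documents as they might come back from a non-relational store:
const prunella = { FirstName: 'Prunella', LastName: 'Prunewhip' };
const joe = { FirstName: 'Joe', MiddleName: 'Bob', LastName: 'Schmoe' };

// Naive code that assumes MiddleName always exists:
function middleInitialUnsafe(person) {
  return person.MiddleName.charAt(0); // throws a TypeError when MiddleName is absent
}

// Defensive code that tolerates the older document shape:
function middleInitialSafe(person) {
  return person.MiddleName ? person.MiddleName.charAt(0) : '';
}

console.log(middleInitialSafe(joe));      // "B"
console.log(middleInitialSafe(prunella)); // ""
```

Testing with documents in both the old and new shapes is exactly what catches the unsafe version before it reaches production.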

The lesson I learned from this situation is that when we are using non-relational databases, it’s important to keep a record of what data structures are used over time.  This way whenever a change is made, we can test with data that uses the old structures as well as the new structure.

And this lesson is applicable to situations other than non-relational databases!  There may be other times where an expected result changes after the application changes.  Here are some examples:

  • A customer listing for an e-commerce site used to display phone numbers; now it’s been decided that phone numbers won’t be displayed on the page
  • A patient portal for a doctor’s office used to display social security numbers in plain text; now the digits are masked
  • A job application workflow used to take the applicant to a popup window to add a cover letter; now the cover letter is added directly on the page and the popup window has been eliminated
In all these situations, it may be useful to remember how the application used to behave in case you have users who are using an old version, or in case there’s an unknown dependency on the old behavior that now results in a bug, or in case a new product owner asks why a feature is behaving in the new way.  
So moving forward, I plan to document the old behavior of my applications.  I think my future self will be appreciative!  

The Command Line, Demystified: Part II

In last week’s blog post, we started taking a look at the command line, and how it’s possible to navigate through your computer’s file system using some very simple commands.  This week we’ll learn how to use the command line to create and remove folders and files.  We’ll be building upon the knowledge in last week’s post, so if you haven’t read that, I recommend starting there.

Let’s start by creating a new folder.  Open up the command window, and enter mkdir MyNewFolder.  You won’t get any kind of response, just a new command prompt.  But if you type ls (on Mac) or dir (on Windows), you’ll now see MyNewFolder listed in the contents of your home directory.  
To navigate to your new directory, type cd MyNewFolder.  Your command prompt will now look like MyNewFolder$ on Mac, or MyNewFolder> on Windows.  If you type ls or dir now, you’ll get nothing in response, because your folder is empty.

Let’s put something in that new folder.  If you are using a Mac, type nano MyNewFile.  A text editor will open up in the command window.  If you are using Windows, type notepad NewFile.txt.  Notepad will open up in a separate window.

Type This is my new file in the text editor (Mac) or in Notepad (Windows).  If you are in Windows, just save and close the Notepad file.  If you are in Mac, press Control-X, type Y when asked if you want to save the file, then press the Return key to save with the file name you had specified.

If you are in Windows, return to the command line; Mac users should already be there and the text editor should have closed.  Your working directory should still be MyNewFolder.  Now when you type ls (Mac) or dir (Windows), you should get this response:  MyNewFile (Mac) or MyNewFile.txt (Windows).  You have successfully created a new file from the command line.

We can now read the contents of this file from the command line.  If you are in Mac, type cat MyNewFile.  If you are in Windows, type type MyNewFile.txt.  You should see This is my new file as the response.

Now let’s learn how to delete a file.  Simply type rm MyNewFile if you are in Mac, or del MyNewFile.txt if you are in Windows.  If you’ve deleted correctly, an ls or dir command will now give you an empty result.

Finally, let’s delete the folder we created.  You can’t have the folder you want to delete as your working directory, so we need to move one level up by typing cd ..  Now you should be in your home directory.  If you are in Mac, type rm -r MyNewFolder.  If you are in Windows, type rmdir MyNewFolder.  You won’t get any response from the command line, but if you do ls or dir, you’ll see that the folder has been deleted.

Now you know how to create and delete files and folders from the command line.  I’ll close by adding two bonus commands: one for Mac and one for Windows.

For Mac users:  the term sudo (which stands for superuser do) allows you to run a command as an administrator.  There are some times where you may need to do an installation or edit which requires administrator access.  By putting sudo before the command, you can run the command as the system admin.  For example, if you typed sudo rm -r MyNewFolder, you’d be removing the folder as the system admin.  Think carefully before you use this command, and make sure you know what you are doing.  There are many commands that require a superuser to execute them because they are dangerous.  You don’t want to delete your entire filesystem, for example!

For Windows users:  a handy command is explorer.  Typing this in your command window will bring up the File Explorer.  This is useful when you want to switch from navigating the folder structure in the command line to navigating in the Explorer window.  For example, if you knew that a folder called MyPictures had images in it, you might want to open up the Explorer to take a look at the thumbnails of those images.

I hope that these two blog posts have gotten you more comfortable using the command line.  Have fun using your newly-learned skills!

The Command Line, Demystified: Part I

When I first started out as a software tester, I would always get nervous when I had to do anything with the command line, and I was so impressed when my coworkers could type tiny commands and get dozens of lines of text in response.  The one thing that helped me when learning the command line was a course I took in Linux.  I was confused for much of the course, but I did manage to learn some commands, and over the years I’ve been able to gradually expand my knowledge.

The command line is hugely helpful when you want to navigate through your system’s folder structure, create new folders or files, or execute runnable files.  In this post, I’ll be walking you through some simple commands that can help you get started using the command line like a pro.  Most of the commands I’ll be sharing will work in both Mac and Windows; when there are differences between the two, I’ll point them out.

First, let’s look at some useful keys:

The up arrow
The up arrow copies whatever command you just ran, and if you click on the up arrow more than once, you can cycle back through all of the commands you have run so far.  For example, if you ran these three commands:
ls
cd Documents
cd ..
and then you were to click the up arrow, you’d see cd ..  If you clicked the up arrow again, you’d see cd Documents, and if you were to click it a third time, you’d see ls.

The up arrow is really helpful for those times when you ran a complicated command and you need to run it again, but you don’t feel like typing it all over again.  Simply click the up arrow until you’ve returned to the command you want, then click Return to run the command again.

The tab key
The tab key has auto-complete functionality.  To see how this works, let’s imagine that you have a folder called MyFolder that contains three sub-folders:
LettersToDad
LettersToMom
graduationPics
If you wanted to navigate from MyFolder to graduationPics using the cd command (more on this later), you could simply type:
cd grad
and then click the tab key.  The folder name will auto-complete to graduationPics.

This command is helpful when you don’t feel like typing out an entire folder name.  Typing just the first few letters of the folder and hitting tab, then Return, is a really fast way to navigate.

In order for the auto-complete to work, you need to type enough letters that there’s only one possible option left when you click the tab key.  For example, when you type
cd LettersTo
and then click the tab key, the command line doesn’t know if you mean LettersToDad or LettersToMom.  The Windows command line will cycle through the possible options as you repeatedly click the tab key.  In Mac, if you click the tab key a second time, it will return your possible options.

Next, let’s learn some navigation skills:

The command prompt:  The command prompt is a symbol that indicates that the command line is ready to receive commands.  In Mac, the command prompt looks like this: $.  In Windows, the command prompt looks like this: >.  The command prompt is preceded by the working directory.

Working directory:
The term working directory refers to whatever directory (folder) you are in when you are using the command line.  When you first open the command line window, you’ll be in your home directory.  This is your own personal directory.  For example, in Windows, my home directory is C:\Users\kjackvony.  In Mac, my home directory is /Users/K.Jackvony, but the location will display only as ~, which means the home directory.

ls or dir
This command – ls in Mac and dir in Windows – will list all the files and folders in your working directory.

cd <folder name>
This command will change your working directory to the folder you specify.  For example, if you said cd Documents, your working directory would change to the Documents folder.

cd ..
This command moves you up one level in the directory.

Let’s look at an example.  I am using a Mac, so I’ll use ls rather than dir.

1. I begin in my home directory:
~$

2. I type ls to see what’s in my home directory, and I get this response:
Desktop
Documents
Pictures
Projects

3. I type cd Documents, and my working directory is now the Documents folder:
Documents$

4. I type ls to see what’s in my Documents folder, and I get this response:
Blog Post Notes
Images for Testing
ProfilePhoto.jpg

5. I type cd “Blog Post Notes” (I’m using quote marks because the directory name has spaces in it), and my working directory is now the Blog Post Notes folder:
Blog Post Notes$

6. I type cd .. and I’m taken back to the Documents folder:
Documents$

7.  I type cd .. again and I’m taken back to my home folder:
~$

Now that you see how the example works, take some time to try it in your own command line, with your own folders!  Remember that if you are using Windows, your prompt will look like a > rather than a $, and you’ll want to type dir instead of ls to see what’s in your working directory.

Next week we’ll continue the command line adventure by learning some more navigation skills and how to create our own files and folders.

New Year’s Resolutions for Software Testers

I love New Year’s Day!  There’s something exciting about getting a fresh start and imagining all that can be accomplished in the coming year.  The new year is an opportunity to think about how we can be better testers, how we can share our knowledge with others, and how we can continue to improve the public perception of the craft of software testing.

Image by M Harris from Pixabay


Here are some suggestions for resolutions you could make to improve your testing and the testing skills of those around you:

Speak Up
Because testers are sometimes made to feel like second-class citizens compared to software developers, they might feel timid about voicing their opinions.  But testers often know more about the product they test than the developers, who are usually working in one small area of the application.  This year, resolve to speak up if you see an issue with the product that you think would negatively impact the end user, even if it isn’t a “bug”.  Similarly, speak up if you find a bug that the team has dismissed as unimportant, and state why you think it should be fixed.  Advocate for your user!  Make sure that the product your customers are getting makes sense and is easy to use.

Pay Attention in Product Meetings
I’m sure my Product Owner would be sad to read this (sorry, Brian!) but I find product meetings boring.  I know that the small details of the user’s experience are important, and I’m so glad that there are people who care about where a notification badge is displayed.  But listening to the discussion where that decision is being made is not very exciting to me.  However, I am so glad that I am included in these meetings, and every year I resolve to pay more attention to product decision-making than I did the year before, and to contribute when I have information that I think will be helpful.  Attending product meetings allows me to hear why certain choices are made, and also helps me think about what I need to test when a new feature comes available.

Do Some Exploratory Testing
I suspect that most of us have some area of the application we test where we have a sneaking suspicion that things aren’t working quite right.  Or there’s a really old area of the application that no one knows how to use, because the people who initially built and tested it have since left the company.  But we are often too busy testing new features and writing test automation to take the time to really get to know the old and confusing areas of an application.  This year, resolve to set aside a few hours to do exploratory testing in those areas and share your findings with the team.  You may find some long-buried bugs or features that no one knows about!

Streamline Your Operation
Are there things your team does that could be done more efficiently?  Perhaps you have test automation that uses three different standards to name variables, making the variable names difficult to remember.  Perhaps your method of processing work items isn’t clear, so some team members are assigning testing tickets while others are leaving them for testers to pick up.  Even if it seems like a small problem, these types of inefficiencies can keep a team from moving as quickly as it could.  Resolve to notice these issues and make suggestions for how they can be improved.

Learn Something New
This year, learn a new tool or a new language.  You don’t have to become a master user; just learn enough to be able to say why you are using your current language or tool over the new one you’ve learned.  Or you could discover that the new language or tool suits your needs better, in which case you can improve your test automation.  Either way, learning something new makes you more employable the next time you are looking for a new position.

Share Your Knowledge With Your Team
Don’t be a knowledge hoarder!  Your company and your software will be better when you share your knowledge about the product you are testing and the tools you are using to test it.  Sometimes misguided people hold on to knowledge thinking it will make them indispensable.  This will not serve to keep you employed.  In today’s world, sharing information so that the whole team can be successful is the best way to be noticed and appreciated.  Resolve to hold a workshop for the other testers on your team about the test automation you are writing, or create documentation that shows everyone how to set up a tricky test configuration.  Your teammates will thank you!

Share Your Knowledge With the Wider World
If I had one wish for software testers for the year 2020, it would be that we would be seen by the wider tech community as the valuable craftspeople we are.  If you are an awesome software tester, and I’m guessing you are because you are taking the time to read a blog about testing, share your skills with the world!  Write a blog post, help someone on Stack Overflow, or present at a local testing meetup.  You don’t have to be the World’s Most Authoritative Expert on whatever it is you are talking about, nor do you have to be the Best Speaker in the World.  Just share the information you have freely!  We will all benefit from your experience.

What New Year’s resolutions do you have for your software testing?  Please share in the comments below!




A Question of Time

Time is the one thing of which everyone gets the same amount.  Whether we are the CEO of a company or we are the intern, we all have 1440 minutes in a day.  I’ve often heard testers talk about how they don’t have enough time to test, and that can certainly happen when deadlines are imposed without input from everyone on the team.  I’ve written a blog post about time management techniques for testers, but today I’m going to tackle the question:

Is it worth my time to automate this task?

Sometimes we are tempted to create a little tool for everything, just because we can.  I usually see this happen with developers more than testers, but I do see it with some testers who love to code.  However, writing code does not always save us time.  When considering whether to do a task manually or to write automation for it, ask yourself these four questions:

1. Will I need to do this task again?

Recently my team was migrating files from one system to another system.  I ran the migration tool manually and did manual checking that the files had migrated properly.  I didn’t write any automation for this, because I knew that I was never going to need to test it again.

Contrast this with a tester from another team who is continually asked to check the UI on a page when his team makes updates.  He got really tired of doing this again and again, so he created a script that will take screenshots and compare the old and new versions of the page.  Now he can run the check with the push of a button.
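A bare-bones version of that kind of check might look like the sketch below.  Real visual-testing tools do perceptual diffs with tolerances, but even a byte-level comparison of two saved screenshots (the file paths here are made up for illustration) will flag any change:

```python
import hashlib
from pathlib import Path

def screenshots_match(old_path: str, new_path: str) -> bool:
    """Return True if the two screenshot files are byte-identical.

    A real visual-regression tool would do a perceptual comparison with
    a tolerance; an exact hash comparison is the simplest possible check.
    """
    old_hash = hashlib.sha256(Path(old_path).read_bytes()).hexdigest()
    new_hash = hashlib.sha256(Path(new_path).read_bytes()).hexdigest()
    return old_hash == new_hash
```

Once something like this is wired up to a button (or a CI job), re-checking the page costs almost nothing.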

2. How much time does this task take me, and how much time will it take me to write the code?

Periodically my team’s test data gets refreshed, and that means that the information we have for our test users sometimes gets changed.  When this happens, it takes about eight hours to manually update all the users.  It took me a few hours to create a SQL script that would update the users automatically, but it was totally worth my time, because now I save eight hours of work whenever the data is refreshed.
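I can’t share the real script, but a sketch of the idea looks something like this, using SQLite and made-up table and column names since the actual schema isn’t shown here:

```python
import sqlite3

def refresh_test_users(conn: sqlite3.Connection) -> int:
    """Reset every test account's data after a refresh.

    The table and column names are hypothetical; the real script would
    target the team's actual schema.  Returns the number of rows updated.
    """
    cur = conn.execute(
        """
        UPDATE users
        SET email = 'testuser' || id || '@example.com',
            password_reset_required = 1
        WHERE is_test_account = 1
        """
    )
    conn.commit()
    return cur.rowcount
```

A few hours writing something like this pays for itself the very next time the data is refreshed.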

But there have been other times where I’ve needed to set up some data for testing, and a developer has offered to write a little script to do it for me.  Since I can usually set up the data faster than they can create the script, I decline the offer.

3. How much time will it take to maintain the automation I’m writing?

At a previous job, I was testing email delivery and I wanted to write an automated test that would show that the email had actually arrived in the Gmail test account.  The trouble was that there could be up to a ten-minute delay for the email to appear.  I spent a lot of time adjusting the automated test to wait longer, to have retries, and so on, until finally I realized it was just faster for me to take that assertion out of the test, and manually check the email account from time to time.
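The retry logic I kept tuning amounted to a generic polling loop.  Here’s a sketch, where the condition function stands in for whatever code actually queries the mail account:

```python
import time

def wait_for(condition, timeout_seconds=600, poll_interval=15):
    """Poll `condition` until it returns a truthy value or time runs out.

    Returns the truthy result, or None on timeout.  The ten-minute
    default matches the worst-case email delivery delay; in practice
    that's a long time for a test suite to sit and wait.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    return None
```

The loop itself is easy; the maintenance cost comes from forever adjusting the timeout and interval to chase a delay you don’t control.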

However, my team’s automated API smoke tests take very little time to maintain, because the API endpoints change so infrequently that the tests rarely need to change.  The first API smoke test I set up took a few days; but once we had a working model it became very easy to set up tests for our other APIs.

4. Does the tool I’m creating already exist?

At a previous company, the web team was porting over many customers’ websites from one provider to another.  I was asked to create a tool that would crawl through the sites and locate all the pages, and then crawl through the migrated site to make sure all the pages had been ported over.  It was really fun to create this tool, and I learned a lot about coding in the process.  However, I discovered after I made the tool that web-crawling software already exists!
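The heart of a tool like that is just collecting the links on each page.  Here’s a minimal sketch using Python’s built-in HTML parser; a real crawler would also fetch each URL and follow the links it finds:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on a page.

    Crawl each page of the old site, collect its links, then compare
    the set of pages found against the migrated site.
    """
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```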

But in that particular month I did have the time to create the tool, and the things I learned helped me with my other test automation.  So sometimes it may be worth “reinventing the wheel” if it will help you or your team.

The Bottom Line: Are you saving or wasting time?

All of these questions come down to one major consideration, and that is whether your task is saving or wasting time.  If you are a person who enjoys coding, you may be tempted to write a fun new script for every task you need to do; but this might not always save you time.  Similarly, if you don’t enjoy coding, you might insist on doing repetitive tasks manually; but using a simple tool could save you a ton of time.  Always consider the time-saving result of your activities!

Measuring Quality

The concept of measuring quality can be a hot-button topic for many software testers.  This is because metrics can be used poorly; we’ve all heard stories about testers who were evaluated based on how many bugs they found or how many automated tests they wrote.  These measures have absolutely no bearing on software quality. A person who finds a bug in three different browsers can either write up the bug once or write up a bug for each browser; having three JIRA tickets instead of one makes no difference in what the bug is!  Similarly, writing one hundred automated tests where only thirty are needed for adequate test coverage doesn’t ensure quality and may actually slow down development time.

But measuring quality is important, and here’s why: software testers are to software what the immune system is to the human body.  When a person’s immune system is working well, they don’t think about it at all.  They get exposed to all kinds of viruses and bacteria on a daily basis, and their immune system quietly neutralizes the threats.  It’s only when a threat gets past the immune system that a person’s health breaks down, and then they pay attention to the system.  Software testers have the same problem: when they are doing their job really well, there is no visible impact in the software.  Key decision-makers in the company may see the software and praise the developers that created it without thinking about all the testing that helped ensure that the software was of high quality.

Measuring quality is a key way that we can demonstrate the value of our contributions.  But it’s important to measure well; a metric such as “There were 100 customer support calls this month” means nothing, because we don’t have a baseline to compare it to.  If we have monthly measurements of customer support calls, and they went from 300 calls in the first month, to 200 calls in the second month, to 100 calls in the third month, and daily usage statistics stayed the same, then it’s logical to conclude that customers are having fewer problems with the software.
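To make that comparison concrete: normalize the calls by usage so the months are comparable, then look at the trend.  A tiny sketch, with a made-up daily-active-user figure for illustration:

```python
def calls_per_thousand_users(support_calls: int, daily_active_users: int) -> float:
    """Normalize monthly support calls by usage so months are comparable."""
    return support_calls / daily_active_users * 1000

def trend(values):
    """Return 'improving' if each value is lower than the last, else 'mixed'."""
    if all(b < a for a, b in zip(values, values[1:])):
        return "improving"
    return "mixed"
```

With usage held steady at, say, 50,000 daily active users, the 300-200-100 series above normalizes to 6.0, 4.0, and 2.0 calls per thousand users: a clear improvement.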

With last week’s post about the various facets of quality in mind, let’s take a look at some ways we could measure quality.

Functionality:
How many bugs are found in production by customers?
A declining number could indicate that bugs are being caught by testers before going to production.
How many daily active users do we have? 
A rising number probably indicates that customers are happy with the software, and that new customers have joined the ranks of users.

Reliability:
What is our percentage of uptime?  
A rising number could show that the application has become more stable.
How many errors do we see in our logs?  
A declining number might show that the software operations are generally completing successfully.

Security:
How many issues were found by penetration tests and security scans?  
A declining number could show that the application is becoming more secure.

Performance:
What is our average response time?
A stable or declining number can show that the application is operating within accepted parameters.

Usability:
What are our customers saying about our product?
Metrics like survey responses or app store ratings can indicate how happy customers are with an application.
How many customer support calls are we getting?
Increased support calls from customers could indicate that it’s not clear how to operate the software.

Compatibility:
How many support calls are we getting related to browser, device, or operating system?
An increased number of support calls could indicate that the application is not working well in certain circumstances.
What browsers/devices/operating systems are using our software?
When looking at analytics related to app usage, a low participation rate by a certain device might indicate that users have had problems and stopped using the application.

Portability:
What percentage of customers upgraded to the new version of our software?
Comparing upgrade percentages with statistics of previous upgrades could indicate that the users found the upgrade process easy.
How many support calls did we get related to the upgrade?
An increased number of support calls compared to the last upgrade could indicate that the upgrade process was problematic.

Maintainability:
How long does it take to deploy our software to production?
If it is taking longer to deploy software than it did during the last few releases, then the process needs to be evaluated.
How frequently can we deploy?
If it is possible to deploy more frequently than was possible six months ago, then the process is becoming more streamlined.
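Many of these metrics are simple ratios computed against a baseline.  For example, uptime percentage is just available minutes over total minutes in the reporting period; here is a sketch, assuming downtime is tracked in minutes:

```python
def uptime_percentage(downtime_minutes: float, days_in_period: int = 30) -> float:
    """Percentage of a reporting period that the service was available."""
    total_minutes = days_in_period * 24 * 60
    up_minutes = total_minutes - downtime_minutes
    return round(up_minutes / total_minutes * 100, 3)
```

For a thirty-day month, about 43 minutes of downtime works out to roughly 99.9% uptime, which is why small changes in this number matter so much.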

There’s no one way to measure quality, and not every facet of quality can be measured with a metric.  But it’s important for software testers to be able to use metrics to demonstrate how their work contributes to the health of their company’s software, and the above examples are some ways to get started.  Just remember to think critically about what you are measuring, and establish good baselines before drawing any conclusions.