Adventures in Node: Arrow Functions

This year I’ve been feeling an urge to really learn a programming language.  There are lots of languages I know well enough to write automation code in- C#, Java, JavaScript, and so on- but I decided I wanted to dive deep into one language and learn to really understand it.

I decided to go deep with Node.js.  Node is essentially JavaScript with a server-side runtime environment.  It’s possible to write complete applications in Node, because you can code both the front-end and the back-end of the application in JavaScript.  And I was fortunate enough to find this awesome course by Andrew Mead.  Andrew does a great job of making complicated concepts really simple, so as I take the course, I’m finding that things that used to confuse me about Node finally make sense!  And because I like sharing things I’ve learned, I’ll be periodically sharing my new-found understanding in my blog posts.

I’ll start with arrow functions.  Arrow functions have been around for a few years now, but I’ve always been confused by them, because they weren’t around when I was first learning to write code.  You may have seen these functions, which use the symbol =>.  They seem so mysterious, but they are actually quite simple!  Arrow functions are simply a way to notate a function to save space and make code easier to read.  I’ll walk you through an example.  We’ll start with a simple traditional function:

const double = function(x) {
     return x + x
}

double is the name of the function.  When x is passed into the function, x + x is returned.  So if I called the double function with the number 3, I’d get 6 in response.

Now we’re going to replace the function with an arrow:

const double = (x) =>  {
    return x + x
}

Note that the arrow comes after the (x), rather than before.  Even though the order is different, function(x) and (x) => mean the same thing.

Now we’re going to replace the body of the function { return x + x } with something simpler:

const double = (x) => x + x

When arrow functions are used, it’s assumed that what comes after the arrow is what will be returned.  So in this case, x + x means the same thing as { return x + x }.  This shorthand is only used when the body of the function is relatively simple.
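The shorthand isn’t limited to one parameter, either.  Here’s a quick example of my own (not from the course) that adds two numbers:

const add = (a, b) => a + b

console.log(add(2, 3))    // prints 5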

See?  It’s simple!  You can try running these three functions for yourself if you have Node installed.  Simply create an app.js file with the first version of the function, and add a logging command:

console.log(double(3))

Run the file with node app.js, and the number 6 will be returned in the console.

Then replace version 1 of the function with version 2, run the file, and you should get a 6 again.  Finally, replace version 2 with version 3, and run the file; you should get a 6 once again.

It’s even possible to nest arrow functions!  Here’s an example:

const doublePlusTen = (x) => {
    const double = (x) => x + x
    return double(x) + 10
}

The line const double = (x) => x + x is our original function.  It’s nested inside a doublePlusTen function.  doublePlusTen uses curly braces and a return statement, because there’s more than one line inside the function (including the nested double function).  If we were to translate this nested function into plain English, it would look something like this:

“We have a function called doublePlusTen.  When we pass a number into that function, first we pass it into a nested function called double, which takes the number and doubles it.  Then we take the result of that function, add 10 to it, and return that number.”  

You can try out this function by calling it with console.log(doublePlusTen(3)), and you should get 16 as the response.

Hopefully this information will help you understand what an arrow function is doing the next time you encounter it in code.  You may want to start including arrow functions in your own automation code as well.  Stay tuned in the coming weeks for more Adventures in Node posts!

How I Would Have Tested the Iowa Caucus App

About six weeks ago, the Iowa Democratic Party held its caucus.  For those who don’t live in the United States, this event is one of the first steps in the presidential primaries, which determine who will be running for president in the next presidential election. 

In 2016, the Iowa Caucus used a mobile app created by a company called Interknowlogy in partnership with Microsoft to allow each precinct to report its results, and the app worked successfully.  But this year the Iowa Democratic Party chose to go with a different company to create a new app, which proved disastrous.  Incorrect tallies were reported, and precincts that tried to report via phone often couldn't get through or found that their calls were disconnected.
From reading this assessment, it appears that the biggest problem with the 2020 app was that the software company didn’t have adequate time to create the application, and certainly didn’t have enough time to test it.  But as a software tester, I found myself thinking about what I would have done if it had been my responsibility to test the app, assuming that there had been enough time for testing.  Below is what I came up with:
Step One: Consider the Use Case

The interesting thing about this application is that unlike an app like Twitter or Uber, it has a finite number of users.  There are only about 1700 precincts in Iowa, including a few out-of-state precincts for Iowans who are in the military or working overseas.  So the app wouldn't need to handle tens of thousands of users.  
The users of the application will be the precinct leaders, who will own a wide variety of mobile phones, such as iPhone, Galaxy, or Motorola, and each of those devices could have one of several carriers, such as AT&T, Verizon, or Sprint.  Mobile service might be spotty in some rural areas, and wifi might be unavailable in some locations as well.  So it will be important to test the app on a wide variety of operating systems and devices, with a variety of carriers and connection scenarios.  
Moreover, the precinct leaders will probably vary widely in their technical ability.  Some might be very comfortable with technology, while others might have never installed an app on their phone.  So it will be imperative to make sure that the app is on both the Apple App Store and Google Play, and that the installation is simple.
Some leaders may choose to call in their election results instead of entering them in the app.  So the application should allow an easy way to do this with a simple button click.  This will also be useful as a backup plan in case other parts of the app fail.
Finally, because this is an event of high political importance, security must be considered.  The app should have multi-factor authentication, and transmissions should be secured using https with appropriate security headers.  
Step Two: Create an In-House Test Plan
Now that the users and the use case have been considered, it's time to create an in-house test plan.  Initial testing should begin at least six months before the actual event.  Here is the order in which I would conduct the testing:
  • Usability testing: the application should be extremely easy to install and use.
  • Functional testing: does the application actually do what it’s supposed to do?  Testers should test both the happy path- where the user does exactly what is expected of them- and every possible sad path- where the user does something odd, like cancel the transaction or back out of the page.
  • Device and carrier testing: testers should test on a wide variety of devices, with a wide variety of carriers, and with a wide variety of connection scenarios, including scenarios such as a wifi connection dropping in the middle of a transmission.  Testers should also ensure that the application will work correctly overseas for the remote precincts.  They can do this by crowd-sourcing a test application that has the same setup as the real application.  
  • Load and performance testing: testers should make sure that the application can handle 2500 simultaneous requests, which is much higher than the actual use case.  They should also make sure that page response times are fast enough that the user won't be confused and think that there's something wrong with the application.  (See the sketch after this list for one way such a load test could be scripted.)  
  • Security testing: testers should run through penetration tests of the application, ensuring that they can’t bypass the login or hijack an http request.  
  • Backup phone system testing: testers should validate that they can make 2500 simultaneous calls to the backup phone system and be able to connect.  Since there probably won’t be 2500 phone lines available, testers should make sure that wait times are appropriate and that callers are told how many people are in the queue in front of them.  
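To give an idea of what the load testing might look like, here's a rough sketch using k6, a load testing tool whose scripts are written in JavaScript.  The endpoint and payload are made up purely for illustration:

import http from 'k6/http';
import { check } from 'k6';

// Simulate 2500 precinct leaders reporting their results at the same time
export const options = { vus: 2500, duration: '1m' };

export default function () {
    // Hypothetical endpoint and payload, just for illustration
    const payload = JSON.stringify({ precinctId: 42, tallies: { candidateA: 55, candidateB: 45 } });
    const res = http.post('https://example.com/reportResults', payload, {
        headers: { 'Content-Type': 'application/json' },
    });
    // Flag any response that isn't a success
    check(res, { 'status is 200': (r) => r.status === 200 });
}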

Step Three: External Security Audit

Because of the sensitive nature of the application, the app should be given to an external security testing firm at least four months before the event.  Any vulnerabilities found by the analysis should be addressed and retested immediately.
Step Four: Submit to the Apple App Store and Google Play
As soon as the application passes the security audit, it should be submitted to the app stores for review.  Once the app is in app stores, precinct leaders should be given instructions for how to download the app, log in with a temporary password, and create a new password, which they should save for future use.  
Step Five: End User Testing
Two months before the caucus, precinct leaders will be asked to do a trial run on the application.  Instead of using actual candidates, the names will be temporarily replaced by something non-political, like pizza toppings.  The leaders will all report a fictitious tally for the pizza toppings using the app, and will then use the backup phone number to report the tally as well.  This test will accomplish the following:
  • it will teach the leaders how to use the app
  • it will validate that accurate counts are reported through the app
  • it will help surface any issues with specific devices, operating systems, or carriers
  • it will validate that the backup phone system works correctly
By two weeks before the caucus, any issues found in the first pizza test should have been fixed.  Then a final trial run (again with pizza toppings rather than candidates) will be conducted to find any last-minute issues.  The precinct leaders will be strongly encouraged to make no changes to their device or login information between this test and the actual caucus.
Monday Morning Quarterbacking
There's a term in the US called "Monday Morning Quarterbacking", where football fans take part in conversations after a game and state what they would have done differently if they had been the quarterback.  Of course, most people don't have the skill to be a professional quarterback, and they probably don't have access to all the information that the team had.  
I realize that what I’m doing is the software tester equivalent of Monday Morning Quarterbacking.  Still, it’s an interesting thought exercise.  I had a lot of fun thinking about how I would test this application.  The next time you see a software failure, try this thought exercise for yourself- it will help you become a better tester!

API Contract Testing Made Easy

As software becomes increasingly complex, more and more companies are turning to APIs as a way to organize and manage their application's functionality.  Instead of one monolithic application where all changes are released at once, software can now be made up of multiple APIs that depend on each other but can be released separately at any time.  This makes it possible for one API to release new functionality that breaks a second API, because the second API was relying on the first and something has changed.

The way to mitigate this risk is with API contract tests.  These can seem confusing: which API sets up the tests, and which API runs them?  Fortunately, after watching this presentation, I understand the concept a bit better.  In this post I'll create a very simple example to show how contract testing works.

Let's imagine that we have an online store that sells superballs.  The store sells superballs of different colors and sizes, and it uses three different APIs to accomplish its sales tasks:

Inventory API:  This API keeps track of the superball inventory, to make sure that orders can be fulfilled.  It has the following endpoints:

  • /checkInventory, which passes in a color and size and verifies that that ball is available
  • /remove, which passes in a color and size and removes that ball from the inventory
  • /add, which passes in a color and size and adds that ball to the inventory

Orders API:  This API is responsible for taking and processing orders from customers.  It has the following endpoints:
  • /addToCart, which puts a ball in the customer’s shopping cart
  • /placeOrder, which completes the sale

Returns API:  This API is responsible for processing customer returns.  It has the following endpoint:
  • /processReturn, which confirms the customer’s return and starts the refund process
Both the Orders API and the Returns API are dependent on the Inventory API in the following ways:
  • When the Orders API processes the /addToCart command, it calls the /checkInventory endpoint to verify that the type of ball that’s been added to the cart is available
  • When the Orders API processes the /placeOrder command, it calls the /remove command to remove that ball from the inventory so it can’t be ordered by someone else
  • When the Returns API runs the /processReturn command, it calls the /add command to return that ball to the inventory
In this example, the Inventory API is the producer, and the Orders API and Returns API are the consumers.  
It is the consumer’s responsibility to provide the producer with some contract tests to run whenever the producer makes a code change to their API.  So in our example:
The team who works on the Orders API would provide contract tests like this to the team who works on the Inventory API:
  • /checkInventory, where the body contained { "color": "purple", "size": "small" }
  • /remove, where the body contained { "color": "red", "size": "large" }
The team who works on the Returns API would provide an example like this to the team who works on the Inventory API:
  • /add, where the body contained { "color": "yellow", "size": "small" }
Now the team that works on the Inventory API can take those examples and add them to their suite of tests.  
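To make this concrete, here's a rough sketch of what the Orders API team's /checkInventory contract test might look like, written with Node's built-in test runner (Node 18+).  The base URL is a placeholder I made up, and the only thing the test asserts is the contract itself: a valid color/size pair gets a successful response.

const test = require('node:test');
const assert = require('node:assert');

// Consumer contract test supplied by the Orders API team
test('checkInventory accepts a color and size', async () => {
    const resp = await fetch('https://inventory.example.com/checkInventory', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ color: 'purple', size: 'small' }),
    });
    // The contract: a valid color/size request must succeed
    assert.strictEqual(resp.status, 200);
});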
Let's imagine that the superball store has just had an update to its inventory.  Balls now come in two different bounce levels: medium and high.  So the Inventory API team needs to make some changes to the API to reflect this.  Now a ball can have three properties: color, size, and bounce.  
The Inventory API team modifies the /checkInventory, /add, and /remove endpoints to accept the new bounce property.  But the developer accidentally makes "bounce" a required field for the /checkInventory endpoint.  
After the changes are made, the contract tests are run.  The /checkInventory test contributed by the Orders API fails with a 400 error, because there’s no value for “bounce”.  When the developer sees this, she finds her error and makes the bounce property optional.  Now the /checkInventory call will pass.  
Without these contract tests in place, the team working on the Inventory API might not have noticed that their change was going to break the Orders API.  If the change went to production, no customer would be able to add a ball to their cart!
I hope this simple example illustrates the importance of contract testing, and the responsibilities of each API team when setting up contracts.  I’d love to hear about how you are using contract testing in your own work!  You can add your experiences in the Comments section.

More Fun With Cypress

Two weeks ago, I wrote about my first experiences using Cypress.io.  I was intrigued by the fact that it was possible to do http requests using Cypress commands, so this week I decided to see if I could combine API commands with UI commands in the same test.  To be honest, it wasn’t as easy as I thought it would be, but I did manage to come up with a small proof-of-concept.

Part of the difficulty here may lie in the fact that there aren’t many websites around on which to practice both UI and API automation.  For my experimentation, I decided to use the OWASP Juice Shop, which is a great site for practicing security testing.  I wanted to log into the site using an HTTP command, and then use the token I retrieved from my login to navigate to the site as an authenticated user.

Setting up the HTTP command was pretty easy.  Here’s what it looks like:

var token;

describe('I can log in as a user', () => {
    it('Logs in', () => {
        cy.request({
            method: 'POST',
            url: 'https://juice-shop.herokuapp.com/rest/user/login',
            headers: { 'content-type': 'application/json' },
            body: {
                email: '[email protected]',
                password: '123456'
            }
        })
        .then((resp) => {
            const result = JSON.parse(JSON.stringify(resp.body));
            token = result.authentication.token;
            expect(resp.status).to.eq(200);
        });
    });
});

Let's take a look at what's happening here.  First I declare the token variable.  The 'I can log in as a user' and 'Logs in' strings are just the names of the test section and the test.  Then we have the cy.request section.  This is where the http request happens.  You can see the method, the url, the headers, and the body of the request.  Next, there's the .then((resp) => ...) section, which shows what the test does with the response.  With const result = JSON.parse(JSON.stringify(resp.body)), I'm converting the body of the response into a plain JSON object and saving it to a result variable.  Then I'm setting the token variable to result.authentication.token.  Finally, with expect(resp.status).to.eq(200) I'm doing a quick assertion that the status code of the response is 200, just to alert me if something didn't come back correctly.

Next, I loaded the web page, and included the token in the browser’s local storage so the web page would know I was authenticated:

describe('Is logged in', function() {
    it('Is logged in', function() {
        cy.visit('https://juice-shop.herokuapp.com/#/', {
            onBeforeLoad (win) {
                win.localStorage.setItem('token', token)
            },
        })
        cy.contains('Dismiss').click();
        cy.contains('Your Basket').should('be.visible');
    })
});

With this line: cy.visit('https://juice-shop.herokuapp.com/#/') I'm navigating to the web page.  With the next section:

    onBeforeLoad (win) {
        win.localStorage.setItem('token', token)
    },
})

I'm telling the browser to put the token I saved into local storage.  There was a popup window with a "Dismiss" button that appeared in the browser, so I closed it with cy.contains('Dismiss').click().  And finally with cy.contains('Your Basket').should('be.visible') I asserted that the link called "Your Basket" was visible, because that link doesn't appear unless the user is authenticated.

My code definitely wasn’t perfect, because I noticed that when I manually logged in, I saw my email address in the Account dropdown, but when I logged in through Cypress, the email address was blank.  I also tried doing some other UI tasks, like adding an item to my cart, but I had trouble simply because the application didn’t have good element identifiers.  (I so appreciate developers who put identifying tags on their elements!  If your developers do this, please thank them often.)  And there may be irregularities with this application because it was specifically designed to have security holes.

It would be very interesting to see how easy it would be to set up API and UI testing in Cypress when testing an application with normal authentication processes and good element identifiers!  However, I think my experiment showed that it’s fairly easy to integrate API and UI tests together in Cypress.

Book Review: The Unicorn Project

As I mentioned in a previous post, it’s my goal this year to read and review one tech-related book each month.  This month, I read The Unicorn Project, by Gene Kim.  The book is a work of fiction, and is the story of an auto parts supply company that is struggling to participate in the digital transformation of retail business.  I was a bit dubious about reading a work of fiction that aimed to tackle the common problems of DevOps; I assumed either the lessons or the story would be boring.  I’m happy to say that wasn’t the case!  I really enjoyed this tale and learned a lot in the process of reading it. 

The hero of the story, Maxine, goes through the same trials and tribulations that we all do in the workplace.  At the beginning of the book, Maxine has just been chosen to be the “fall guy” for a workplace failure, even though she had nothing to do with it and was actually on vacation at the time.  As a punishment, she is sent to a project that is critical for the company’s success, but has been bogged down for years in a quagmire of environments, permissions, and archaic processes. 

I could definitely relate to Maxine’s frustrations.  One day I was having a particularly tough day at work, and I was reading the book on my lunch break.  Maxine had been working for days to try to get a build running on her machine, and she had opened a ticket to get the appropriate login permissions.  She sees that there’s been progress on the ticket, but when she goes to look at it, it’s been closed because she didn’t have the appropriate approval from her manager.  So she opens a new ticket and gets her manager’s approval, only to have the ticket closed because the manager’s approval wasn’t allowed to be in the “Notes” field!  My troubles that day were different, but I too had been struggling with getting the build I needed; I felt like shouting into the book: “Honey, I feel your pain!”

The real magic in the story comes when a small band of people from various departments gathers together to try to make some huge process changes in the company.  They are aided by a surprise mentor, who tells them about the Five Ideals of the tech workplace:

1. Locality and Simplicity– having locality in our systems and simplicity in our processes
2. Focus, Flow, and Joy– people can focus on their work, flow through processes easily, and experience the joy of understanding their contributions
3. Improvement of Daily Work– continually improving processes so that day-to-day operations are simple
4. Psychological Safety– people feel comfortable suggesting changes, and the whole team owns successes and failures without playing the blame game
5. Customer Focus– everything is looked at from the lens of whether it matters to the customers

Using those Five Ideals, Maxine and her fellow rebels are able to start making changes to the systems at their company, sometimes with permission and sometimes without.  I don’t want to give away the ending, but I will say that the changes they make have a positive impact.

There were a couple of key things I learned from this book, which have given me a new understanding of just how important DevOps is.  The first is that when we create a new feature or verify that an important bug has been fixed, it means nothing until it’s actually in the hands of customers in Production.  I have fallen for the fantasy of thinking that something is “Done” when I see it working correctly in my test environment, but it’s important to remember that to the customer, it is totally not done!

The second thing I learned is the importance of chaos testing.  As companies move further toward using microservices models and cloud technologies, we need to make sure we know exactly what will happen if one of those services or cloud providers is unavailable.  Chaos testing is a great way to simulate those failures and help teams create ways to fail more elegantly; for example, by having a failover system, using cached data, or including a helpful error message.

I’ll be thinking about this book for a long time as I look at the systems I work with.  I definitely recommend this book for developers, testers, managers, and DevOps engineers!

Why I’ll Be Using Cypress For UI Automation

I’ve mentioned in previous posts that I don’t do much UI automation.  This is because the team projects I am currently on have almost no UI, and it’s also because I’m a strong believer that we should automate as much as we can at the API level.  But I had an experience recently that got me excited about UI testing again!

I was working on a side project, and I needed to do a little UI automation to test it out.  I knew I didn’t want to use Selenium Webdriver, because every time I go to use Webdriver I have so much trouble getting a project going.  Here’s a perfect example: just one year ago, I wrote a tutorial, complete with its own GitHub repo, designed to help people get up and running with Webdriver really quickly.  And it doesn’t work any more.  When I try to run it, I get an error message about having the wrong version of Chrome.  And that is why I hate Webdriver: it always seems like I have to resolve driver and browser mismatches whenever I want to do anything.

So instead of fighting with Webdriver, I decided to try Cypress.  I had heard good things about it from people at my company, so I thought I'd try it for myself.  First I went to the installation page.  I followed the directions to install Cypress with npm, and in a matter of seconds it was installed.  Then I started Cypress with the npx cypress open command, and not only did it start right up, I also got a welcome screen that told me that there were a whole bunch of example tests installed that I could try out!  And it automatically detected what my browser was, and set the tests to run on that version!  When I clicked on the Run All Tests button, it started running through all the example tests.  Amazing!  In less than five minutes, I had automated tests running.  No more "Chrome version must be between 71 and 75" messages for me!

The difference between Cypress and Webdriver is that Cypress runs directly in the browser, as opposed to communicating with the browser.  So there is never a browser-driver mismatch; if you want to run your tests in Firefox, just type npx cypress run --browser firefox, and it will open up Firefox and start running the tests.  It's that easy!  In comparison, think about the last time you set up a new Webdriver project, and how long it took to find the Firefox driver you needed, install it in the right place, make sure you had the PATH configured, and reference it in your test script.

Here are some other great features of Cypress:

  • There’s a great tutorial that walks you through how to write simple tests.
  • Every test step has a screenshot associated with it, so you can scroll back in time to see what the browser looked like at each step.
  • Whenever you make a change to your test and save, the test automatically runs in the browser.  You don’t need to go back to the command line and rerun a command.
  • You don’t have to act like a user.  For example, you can make a simple HTTP request to get an authentication token instead of automating the typing of the username and password in the login fields.  
  • You can stub out methods.  If you wanted to test what happens when a certain request returned an error, you can create a stub method that always returns an error and call that instead of the real method.
  • You can mock HTTP requests.  You can set an HTTP request to return a 404 and see what that response looks like.  (There's a sketch of this after the list.)
  • You can spy on a function to see how many times it was called and what values it was called with.
  • You can manipulate time using the Clock method- for example, you can set it to simulate that a long period of time has elapsed in order to test things like authentication timeouts.
  • You can run tests in parallel (although not on the same browser), and you can run tests in a Continuous Integration environment.
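To give a flavor of those last few features, here's a rough sketch of mocking a request and manipulating time.  cy.intercept is available in newer versions of Cypress, and the routes, pages, and messages here are made up for illustration:

describe('Mocking and time travel', () => {
    it('shows an error state when the products request returns a 404', () => {
        // Force this (hypothetical) request to fail
        cy.intercept('GET', '/api/products', { statusCode: 404 }).as('products');
        cy.visit('/shop');
        cy.wait('@products');
        cy.contains('Something went wrong').should('be.visible');
    });

    it('simulates an authentication timeout', () => {
        cy.clock();                 // take control of the browser clock
        cy.visit('/dashboard');
        cy.tick(30 * 60 * 1000);    // jump ahead thirty minutes
        cy.contains('Session expired').should('be.visible');
    });
});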
In addition, the Cypress documentation is so clear!  As I was investigating Cypress, it was so easy to find what I was looking for.  
If you are tired of fighting with Webdriver and are looking for an alternative, I highly recommend that you try Cypress.  In less than ten minutes, you can have a simple automated test up and running, and that’s a small investment of time that can reap big rewards!


“Less” is More Part II: Headless Browser Testing

Last week we talked about serverless architecture, and we learned that it’s not really serverless at all!  This week we’re going to be learning about a different type of “less”: headless browser testing.

Headless browser testing means testing the UI of an application without actually opening up the browser.  The program uses the HTML and CSS files of the app to determine exactly what is on the page without rendering it.

Why would you want to test your application headlessly?  Because without waiting for a webpage to load, your tests will be faster and less flaky.  You might think that it’s impossible to actually see the results of your testing by using the headless method, but that’s not true!  Headless testing applications can use a webpage’s assets to determine what the page will look like and take a screenshot.  You can also find the exact coordinates of elements on the page.

Something important to note is that headless browser testing is not browserless testing.  When you run a headless test, you are actually running it in a specific browser; it’s just that the browser isn’t rendering.  Chrome, Firefox, and other browsers have added code that makes it possible to run the browser headlessly.  
To investigate headless browser testing, I tried out three different applications: Cypress, Puppeteer, and TestCafe.  All three applications can run in either regular browser mode or headless mode, although Puppeteer is set to be headless by default.  I found great tutorials for all three, and I was able to run a simple headless test with each very quickly.  
Cypress is a really great UI testing tool that you can get up and running in literally minutes.  (I'm so excited about this tool that it will be the subject of next week's blog post!)  You can follow their excellent documentation to get started:  https://docs.cypress.io/guides/getting-started/installing-cypress.html.  Once you have a test up and running, you can try running it headlessly in Chrome by using this command:  cypress run --headless --browser chrome.
Puppeteer is a Node.js library that works specifically with Chrome.  To learn how to install and run it, I used this awesome tutorial by Nick Chikovani.  I simply installed Puppeteer with npm and tried out his first test example.  It was so fun to see how easy it is to take a screenshot headlessly!
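If you're curious what that looks like, here's a minimal version of that kind of test, essentially the getting-started example from the Puppeteer docs:

// Load a page headlessly and save a screenshot of it
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();    // headless by default
    const page = await browser.newPage();
    await page.goto('https://example.com');
    await page.screenshot({ path: 'example.png' });
    await browser.close();
})();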
Finally, I tried out TestCafe.  To install, I simply ran npm install -g testcafe.  Then I created a basic test file with the instructions on this page.  To run my test headlessly, I used this command: testcafe “chrome:headless” test1.js.
With only three basic tests, I barely scratched the surface of what these applications can do.  But I was happy to learn just how easy it is to set up and start working with headless browser tests.  I hope you find this helpful as you explore running your UI tests headlessly.  

“Less” is More, Part I: Serverless Architecture

Have you heard of serverless architecture and wondered what it could possibly be?  How could an application be deployed without a server?  Here’s the secret: it can’t.

Remember a few years ago when cloud computing first came to the public, and it was common to say “There is no cloud, it’s someone else’s computer”?  Now we can say, “There is no serverless; you’re just using someone else’s server”.

Serverless architecture means using a cloud provider for the server.  Often the same cloud provider will also supply the database, an authentication service, and an API gateway.  Examples of serverless architecture providers include AWS (Amazon Web Services), Microsoft Azure, Google Cloud, and IBM Cloud Functions.

Why would a software team want to use serverless architecture?  Here are several reasons:

  • You don’t have to reinvent the wheel.  When you sign up to use serverless architecture, you get many features such as an authentication service, a backend database, and monitoring and logging directly in the service.  
  • You don’t have to purchase and maintain your own equipment.  When your company owns its own servers, it’s responsible for making sure they are safely installed in a cool place.  The IT team needs to make sure that all the servers are running efficiently and that they’re not running out of disk space.  But when you are using a cloud provider’s servers, that responsibility falls to the provider.  There’s less initial expense for you to get started, and less for you to worry about.  
  • The application can scale up and down as needed.  Most serverless providers automatically scale the number of servers your app is running on depending on how much demand there is for your app at that moment.  So if you have an e-commerce app and you are having a big sale, the provider will add more servers to your application for as long as they’re needed, then scale back down when the demand wanes.
  • With many serverless providers, you only pay for what you use.  So if you are a startup and have only a few users, you’ll only be paying pennies a month.  
  • Applications are really easy to deploy with serverless providers.  They take care of most of the work for you.  And because the companies that are offering cloud services are competing with each other, it’s in their best interest to make their development and deployment processes as simple as possible.  So deployments will certainly get even easier in the future.  
  • Monitoring is usually provided automatically.  It’s easy to take a look at the calls to the application and gather data about its performance, and it’s easy to set up alarms that will notify you when something’s wrong.
Of course, nothing in life is perfect, and serverless architecture is no exception.  Here are some drawbacks to using a serverless provider:

  • There may be some things you want to do with your application that your provider won’t let you do.  If you set up everything in-house, you’ll have more freedom.
  • If your cloud provider goes down, taking your app with it, you are completely helpless to fix it.  Recently AWS was the victim of a DDoS attack.  In an effort to fight off the attack, AWS blocked traffic from many IP addresses.  Unfortunately some of those addresses belonged to legitimate customers, so the IP blocking rendered their applications unusable.  
  • Your application might be affected by other customers.  For example, a company that encodes video files for streaming received a massive upload of videos from one new customer.  It swamped the encoding company, which meant that other customers had to wait hours for their videos to be processed.  
How do you test serverless architecture?  The simplest answer is that you can test it the same way you would test an in-house application!  You’ll be able to access your web app through your URL in the usual way.  If your application has an API, you can make calls to the API using Postman or curl or your favorite API testing tool.  
If you are given login access to the serverless provider, you can also do things like query the datastore, see how the API gateway is set up, and look at the logs.  You’ll probably have more insight into how your application works than you do with a traditionally hosted application.  
The best way to learn how serverless architecture works is to play around with it yourself!  You can sign up for a free AWS account, and do this fun tutorial.  The tutorial takes only two hours to complete, and in it you see how to create a web application with an authentication service, an API gateway and a back-end data store.  It’s a little bit out of date, so there are some steps where the links or buttons are a bit off from the instructions, but it’s not too hard to figure out.  When you get to the end, check out this Stack Overflow article to correct any authentication errors.  
After you get some experience with serverless architecture, you will have no trouble figuring out all kinds of great ways to test it.  Next week, I’ll talk about another important “Less”.  Be sure to watch for my next post to find out what it is!

Book Review: Agile Testing Condensed

I read a ton of books, and I’ve found that reading books about testing is my favorite way to learn new technical skills and testing strategies.  James Clear, an author and expert on creating good habits, says: “Reading is like a software update for your brain. Whenever you learn a new concept or idea, the ‘software’ improves. You download new features and fix old bugs.” As a software tester, I love this sentiment!



I thought it would be fun this year to review one testing-related book a month in my blog, and what better book to start with than Agile Testing Condensed by Janet Gregory and Lisa Crispin?  They literally “wrote the book” on agile testing a decade ago, then followed it up with a sequel called More Agile Testing in 2014.  Now they have a condensed version of their ideas, and it’s a great read!

This book should be required reading for anyone involved in creating or testing software.  It would be especially helpful for those in management, who might not have much time to read but want to understand the key components of creating and releasing software with high quality.  The book took only a couple of hours for me to read, and I learned a lot of new concepts in the process.  

One of my favorite things about the electronic version of the book is that it comes with a ton of hyperlinks.  So if the authors mention a concept that you aren’t familiar with, such as example mapping, it comes with a link that you can click to go to the original source of the concept and read the description.  But if you are familiar with the concept, you can just skip the link and read on.  What a great way to keep the text short and make reading more interactive!

The book is divided into four sections:

Foundations: This is where the term “Agile Testing” is defined, and where the authors explain how a whole software team can get involved in testing.  

Testing Approaches: In this section, the authors show how important it is to come up with examples when designing software, and how a tester’s role can be that of a question asker, bringing up use cases that no one else may have thought of.  They also define exploratory testing and offer up some great exploratory testing ideas, and they explain the difference between Continuous Delivery and Continuous Deployment.  

Helpful Models: This section discusses two models that can be used to help teams design a good testing strategy: the Agile Testing Quadrants and the Test Automation Pyramid.  There’s also a great section on defining “Done”; “Done” can mean different things in different contexts.

Agile Testing Today: This was my favorite part of the book!  The authors asked several testing thought leaders what they saw as the future of the software tester’s role.  I loved the ideas that were put forth in the responses.  Some of the roles we can play as agile testers (suggested by Aldo Rall) are: 

  • Consultant
  • Test engineering specialist
  • Agile scholar
  • Coach 
  • Mentor
  • Facilitator
  • Change agent
  • Leader
  • Teacher
  • Business domain scholar

I found myself nodding along with each of these descriptions, thinking “Yes, I do that, and so do all the great testers I know.” 

I recommend that you purchase this book, read it, put its ideas to use on your team, and then share those ideas with everyone in your company, especially those managers who wonder why we still need software testers!  In just 101 pages, Agile Testing Condensed shows us how exciting it is to use testing skills to help create great software.  

Your Future Self Will Thank You

Recently I learned a lesson about the importance of keeping good records.  I’ve always kept records of what tests I ran and whether they passed, but I have now learned that there’s something else I should be recording.  Read the story below to find out what it is!

I have mentioned in previous posts that I've been testing a file system.  The metadata used to access the files is stored in a non-relational database.  As I described in this post, non-relational databases store their data in document form rather than in the table form found in SQL databases.

Several months ago, my team made a change to the metadata for our files.  After deploying the change, we discovered that older files couldn't be downloaded.  It turned out that the change to the metadata had resulted in older files not being recognized, because their metadata was different.  The bug was fixed so that the change was backwards-compatible with the older files.

I added a new test to our smoke test suite that would request a file with the old metadata. Now, I thought, if a change was ever made that would affect that area, the test would fail and the problem would be detected.

A few weeks ago, my team made another change to the metadata.  The code was deployed to the test environment, and shortly afterwards, someone discovered that there were files that couldn’t be downloaded anymore.

I was perplexed!  Didn’t we already have a test for this?  When I met with the developer who investigated the bug, I found out that there was an even older version of the metadata that we hadn’t accounted for.

Talking this over with the developers on my team, I learned that a big difference between SQL databases and non-relational databases is that when a schema change is made to a relational database, it goes through and updates all the records.  For example, if you had a table with first names and last names, and someone wanted to update the table to now contain middle names, every existing record would be modified to have a null value for the middle name:

FirstName    MiddleName    LastName
Prunella     NULL          Prunewhip
Joe          Bob           Schmoe

With non-relational databases, this is different.  Because each entry is its own document, a name-value pair can simply be absent instead of being set to null.  To use the above example, in a non-relational database, Prunella wouldn't have a "MiddleName" name-value pair: 

{
    "FirstName":"Prunella",
    "LastName":"Prunewhip"
},
{
    "FirstName":"Joe",
    "MiddleName":"Bob",
    "LastName":"Schmoe"
}

If the code relies on retrieving the value of MiddleName, it could throw an exception, because there'd literally be nothing to retrieve.
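Here's a quick JavaScript illustration of why that's dangerous (my own sketch):

const prunella = { FirstName: 'Prunella', LastName: 'Prunewhip' };

console.log(prunella.MiddleName);    // undefined, because the name-value pair doesn't exist

// Any code that assumes the value is there will throw:
console.log(prunella.MiddleName.toUpperCase());    // TypeError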

The lesson I learned from this situation is that when we are using non-relational databases, it’s important to keep a record of what data structures are used over time.  This way whenever a change is made, we can test with data that uses the old structures as well as the new structure.

And this lesson is applicable to situations other than non-relational databases!  There may be other times where an expected result changes after the application changes.  Here are some examples:

  • A customer listing for an e-commerce site used to display phone numbers; now it’s been decided that phone numbers won’t be displayed on the page
  • A patient portal for a doctor’s office used to display social security numbers in plain text; now the digits are masked
  • A job application workflow used to take the applicant to a popup window to add a cover letter; now the cover letter is added directly on the page and the popup window has been eliminated
In all these situations, it may be useful to remember how the application used to behave in case you have users who are using an old version, or in case there’s an unknown dependency on the old behavior that now results in a bug, or in case a new product owner asks why a feature is behaving in the new way.  
So moving forward, I plan to document the old behavior of my applications.  I think my future self will be appreciative!