Debugging for Testers

Wikipedia defines debugging as “the process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or a system”.  Often we think of debugging as something that only developers need to do, but this isn’t the case.  Here are two reasons why: first, investigating the cause of a bug when we find it can help the developer fix it faster.  Second, since we write automation code ourselves, and since we want to write code that is of high quality just as developers do, we ought to know how to debug our code.

Let’s take a look at three different strategies we can employ when debugging code.

Console output:
Code that is executing in a browser or on a device generally outputs some information to the console.  You can easily see this by opening up Developer Tools in Chrome or the Web Console in Firefox.  When something goes wrong in your application, you can look for error messages in the console.  Helpful error messages like “The file ‘address.js’ was not found” can tell you exactly what’s going wrong.  

Often an error in an application will produce a stack trace.  A stack trace is simply a series of error statements that go in order from the most recent file that was called all the way back to the first file that was called.  Here’s a very simple example: let’s say that you have a Node application that displays cat photos.  Your main app.js page calls a function called getCats which will load the user’s cat photos.  But something goes wrong with getCats, and the application crashes.  Your stack trace might look something like this:

        Error: cannot find photos
        at getCats.js:10:57
        at app.js:15:16
        at internal/main/run_main_module.js:17:47

  • The first line of the stack trace is the error- the main cause of what went wrong.  
  • The next line shows the last thing that happened before the app crashed: the code was executing in getCats.js, and when it got to line 10, column 57, it couldn’t find the photos.  
  • The third line shows which file called getCats.js: it was app.js, and it called getCats at line 15, column 16.  
  • The final line shows what file was called to run app.js in the first place: an internal Node file that called app.js at line 17, column 47. 

Stack traces are often longer, harder to read, and more complicated than this example, but the more you practice looking at them, the better you will get at finding the most important information.

Logging:
Much of what you see in the console output can be called logging, but there are often specific log entries set up in an application’s code that record everything that happens in the application.  I’m fortunate to work with great developers who are adept at creating clear log statements that make it easy to figure out what happened when things go wrong.

Log statements often come with different levels of importance, such as Error, Warning, Info, and Debug.  An application can sometimes be set to only log certain levels of statement.  For example, a Production version of an application might be set to only log Errors and Warnings.  When you’re investigating a bug, it may be possible to increase the verbosity of the logs so you can see the Info and Debug statements as well.
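
To make this concrete, here’s a minimal sketch of level-based logging in Node; the LOG_LEVEL environment variable and the ordering of the levels are my own made-up conventions for illustration:

const levels = ['error', 'warn', 'info', 'debug']
const threshold = levels.indexOf(process.env.LOG_LEVEL || 'warn')

function log(level, message) {
    // Only print messages at or above the configured verbosity
    if (levels.indexOf(level) <= threshold) {
        console.log('[' + level.toUpperCase() + '] ' + message)
    }
}

log('error', 'Cannot find photos')        // always logged
log('debug', 'getCats called with id 7')  // only logged when LOG_LEVEL=debug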

You can also make your own log statements, simply by writing code that will output information to the console.  I do this when I’m checking to make sure that my automation code is working like I’m expecting it to.  For example, if I had a do-while statement like this:

let counter = 0
do {
     counter++
}
while (counter < 10)

I might add a logging statement that tells me the value of counter as my program progresses:

let counter = 0
do {
     console.log("The value of counter right now is: " + counter)
     counter++
}
while (counter < 10)

The great thing about creating your own log statements is that you can set them up in a way that makes the most sense to you.

Breakpoints:
A breakpoint is a place that you set in the code that will cause the program to pause.  Software often executes very quickly and it can be hard to figure out what’s happening as you’re flying through the lines of code.  When you set a breakpoint, you can take a look at exactly what all your variable values are at that point in the program.  You can also step through the code slowly to see what happens at each line.

Debuggers are generally available in any language you can write code in.  For example, Chrome DevTools lets you set breakpoints in JavaScript, Visual Studio and VS Code have debuggers built in, and Python ships with pdb.
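
If you want a quick way to try this in Node, here’s a sketch: save it as app.js, run it with node inspect app.js, and type cont; execution will pause at the debugger statement so you can inspect the variables.

let total = 0

for (let i = 1; i <= 5; i++) {
    total += i
    debugger  // when run with node inspect, execution pauses here
}

console.log(total)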

I hope this post helps you get started with both debugging your code, and investigating someone else’s bugs!

The Joy of JWTs

Have you ever used a JWT before?  If you have tested anything with authentication or authorization, chances are that you have!  The term JWT is pronounced “jot” and it stands for JSON Web Token.  JWTs are an open standard (RFC 7519), and their purpose is to provide a method for an application to determine whether a user has the credentials necessary to request an asset.  Why are JWTs so great?  Because they allow an application to check for authorization without passing in a username and password or a cookie.  Requests of all kinds can be intercepted, but a JWT contains only non-sensitive data and is signed against tampering, so intercepting it doesn’t provide much useful information.  (For more information about the difference between tokens and cookies, see this post.)  Let’s learn about how JWTs are made!

A JWT has three parts, which are made up of a series of letters and numbers and are separated by periods.  One of the best ways to learn about JWTs is to practice using the official JWT Debugger, so go to jwt.io and scroll down until you see the Debugger section.

Part One: Header
The header lists the algorithm that is used for encrypting the JWT, and also lists the token type (which is JWT, of course):
{
  "alg": "HS256",
  "typ": "JWT"
}

Part Two: Payload
The payload lists the claims that the user has.  There are three types of claims:
  • Registered claims: These are standard claims that are predefined by the JWT specification, and they include:
        iss (issuer)- who issued the token
        iat (issued at)- what time, in Epoch time, the token was issued
        exp (expiration time)- what time, in Epoch time, the token will expire
        aud (audience)- the recipient of the token
        sub (subject)- who or what the token is about, such as a user ID
  • Public claims: These are other frequently-used claims, and they are registered in the IANA JSON Web Token Claims registry.  Some examples are name, email, and timezone.
  • Private claims: These are claims that are defined by the creators of an application, and they are specific to that company.  For example, a company might assign a specific userId to each of their users, and that could be included as a claim.

Here’s an example used in the jwt.io Debugger:
{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

Here the subject is user 1234567890 (not a very descriptive identifier, but enough to tell the application who the token is about), the user’s name is John Doe, and the token was issued at 1516239022 Epoch time.  Wondering what that time means?  You can use this Epoch time converter to find out!

Part Three: Signature
The signature takes the first two sections, encodes them in Base64, and joins them with a period.  Then it signs that string with a secret key, which is a long string of letters and numbers, using the HMAC SHA256 algorithm.  See my post from last week to understand more about encoding and encryption.
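
Here’s a sketch of that process in Node, using the same header and payload as the Debugger example above; the secret is a made-up placeholder:

const crypto = require('crypto')

// Base64url is Base64 with the characters +, /, and = swapped out
function base64url(input) {
    return Buffer.from(input).toString('base64')
        .replace(/=/g, '').replace(/\+/g, '-').replace(/\//g, '_')
}

const header = base64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }))
const payload = base64url(JSON.stringify({ sub: '1234567890', name: 'John Doe', iat: 1516239022 }))

const secret = 'my-placeholder-secret'  // in real life, a long random string
const signature = crypto.createHmac('sha256', secret)
    .update(header + '.' + payload)
    .digest('base64')
    .replace(/=/g, '').replace(/\+/g, '-').replace(/\//g, '_')

console.log(header + '.' + payload + '.' + signature)

If you paste the output into the jwt.io Debugger and supply the same secret, the signature should verify.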

Putting It All Together
The JWT is made up of the encoded Header, then a period, the encoded Payload, then another period, and finally the Signature.  The JWT Debugger helpfully color-codes these three sections so you can distinguish them.

If you use JWTs regularly in the software you test, try taking one and putting it in the JWT Debugger.  The decoded payload will give you insight into how your application works.

If you don’t have a JWT to decode, try making your own!  You can paste values like this into the Payload section of the Debugger and see how the encoded JWT changes:
{
     "sub": "userData",
     "userName": "kjackvony",
     "iat": 1516239022,
     "exp": 1586606340
}

When you decode a real JWT, the signature doesn’t decode into anything readable.  That’s because it’s a hash, and the secret used to create it is a secret!  But because the first and second parts of the JWT are encoded rather than encrypted, they can be decoded.

Using JWTs
How JWTs are used will vary, but a common usage is to pass them with an API request as a Bearer token: the request carries an Authorization header containing the word Bearer, a space, and then the JWT.  In Postman, you can set this on the request’s Authorization tab by choosing the Bearer Token type.
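
Outside of Postman, here’s a sketch of the same request in code; the endpoint URL and the token are placeholders:

const token = 'eyJhbGciOi...'  // your JWT goes here

fetch('https://example.com/api/cats', {
    headers: { 'Authorization': 'Bearer ' + token }
})
    .then((resp) => resp.json())
    .then((data) => console.log(data))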

Testing JWTs
Now that you know all about JWTs, how can you test them?

  • Try whatever request you are making without a JWT, to validate that data is not returned.  
  • Change or remove one letter in the JWT and make sure that data is not returned when the JWT is used in a request.
  • Decode a valid JWT in the Debugger, change it to have different values, and then see if the JWT will work in your request.  
  • Use a JWT without a valid signature and make sure that you don’t get data in the response.  
  • Make note of when the JWT expires, and try a request after it expires to make sure that you don’t get data back.  
  • Create a JWT that has an issued-at (iat) time in the future and make sure that you don’t get data back when you use it in your request.
  • Decode a JWT and make sure that there is no sensitive information, such as a bank account number, in the Payload.  
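
As an example, the expiration check from the list above might look something like this in an automated test, assuming a hypothetical /api/cats endpoint that returns a 401 for expired tokens:

const expiredToken = 'eyJhbGciOi...'  // a JWT whose exp time has passed

fetch('https://example.com/api/cats', {
    headers: { 'Authorization': 'Bearer ' + expiredToken }
})
    .then((resp) => {
        console.assert(resp.status === 401, 'expired JWT should be rejected')
    })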

Have fun, and happy testing!

Encryption and Encoding

We’ve all encountered hashed passwords and encrypted texts.  We’ve heard mysterious terms like “salted” and “SHA256” and wondered what they meant.  This week I decided it was finally time for me to learn about encryption!

The first distinction we need to learn is the difference between encryption and encoding.  Encoding simply means transforming data into a form that’s easier to transfer.  URL encoding is a simple type of encoding.  Here’s an example: the Coderbyte website has a challenge called “Binary Reversal”.  The URL for the page is https://coderbyte.com/information/Binary%20Reversal; the space between “Binary” and “Reversal” is replaced with “%20”.  There are other symbols, such as !, that are replaced in URL encoding as well.  If you’d like to learn more about URL encoding, you can play around with an encoding/decoding tool such as this one.

Another common type of encoding is Base64 encoding.  Base64 encoding is often used to send data; the encoding keeps the bytes from getting corrupted.  This type of encoding is also used in Basic authentication.  You may have seen a username and password encoded in this way when you’ve logged into a website.  It’s important to know that Basic authentication is not secure!  Let’s say a malicious actor has intercepted my login with Basic auth, and they’ve grabbed the authentication string: a2phY2t2b255OnBhc3N3b3JkMTIz.  That looks pretty secure, right?  Nope!  All the hacker needs to do is go to a site like this and decode my username and password.  Try it for yourself!
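
You can even do this decoding yourself in one line of Node:

console.log(Buffer.from('a2phY2t2b255OnBhc3N3b3JkMTIz', 'base64').toString('utf8'))
// prints kjackvony:password123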

Now that we know the difference between encoding and encryption, and we know that encoding is not secure, let’s learn about encryption.  Encryption transforms data in order to keep it secret.  

A common method of protecting passwords is hashing, which is a one-way mathematical transformation that is impossible to reverse.  This seems puzzling- if a hash is impossible to reverse, how will an application ever know that a user’s password is correct?  What happens is that the hashed password is saved in the application’s authentication database.  When a user logs in, their submitted password is hashed with the same algorithm that was used to store the password.  If the hashed values match, then the password is correct.

What about if two users have the same password?  If a user somehow was able to access the authentication database to view the hashed passwords and they saw that another user had the same hashed password as they did, that user would now know someone else’s password.  We solve this problem through salting.  A salt is a short string that is added to the end of a user’s password before it is hashed.  Each password has a different salt added to it, and that salt is saved in the database along with the hashed password.  This way if a hacker gets the list of stored passwords, they won’t be able to find any two that are the same.
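
Here’s a minimal sketch of salted hashing in Node; note that real authentication systems use purpose-built algorithms like bcrypt or Argon2 rather than plain SHA256:

const crypto = require('crypto')

function hashPassword(password) {
    const salt = crypto.randomBytes(16).toString('hex')  // a unique salt per user
    const hash = crypto.createHash('sha256').update(password + salt).digest('hex')
    return { salt, hash }  // both values are saved in the database
}

function verifyPassword(password, stored) {
    const hash = crypto.createHash('sha256').update(password + stored.salt).digest('hex')
    return hash === stored.hash
}

const stored = hashPassword('password123')
console.log(verifyPassword('password123', stored))  // true
console.log(verifyPassword('hunter2', stored))      // false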

A common hashing algorithm is SHA256.  SHA stands for “Secure Hash Algorithm”.  The 256 refers to the length of the hash the algorithm produces: 256 bits.

There are other types of encryption that can be decrypted.  Two examples are AES encryption and RSA encryption.  AES stands for Advanced Encryption Standard.  This type of encryption is called symmetric key encryption: the data is encrypted with a key, and the receiver of the data needs to have the same key to decrypt it.  AES encryption is commonly used to transfer data over a VPN.
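
As a sketch, here’s symmetric encryption and decryption with AES in Node; the key, initialization vector, and message are all made up for illustration:

const crypto = require('crypto')

const key = crypto.randomBytes(32)  // a 256-bit key shared by sender and receiver
const iv = crypto.randomBytes(16)   // an initialization vector

const cipher = crypto.createCipheriv('aes-256-cbc', key, iv)
const encrypted = cipher.update('meet me at noon', 'utf8', 'hex') + cipher.final('hex')

const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv)
const decrypted = decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8')

console.log(encrypted)  // unreadable without the key
console.log(decrypted)  // meet me at noon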

RSA stands for Rivest-Shamir-Adleman, the three inventors of this encryption method.  RSA uses asymmetric encryption, also called public key encryption, where there is a public key to encrypt the data and a private key to decrypt it.  This can work in a couple of ways: if the sender of the message knows the receiver’s public key, they can encrypt the message and send it; then the receiver decrypts the message with the private key.  Or the sender of the message can sign the message with their private key, and then the receiver of the message can verify it with the sender’s public key.  In the second example, the private key is used to show that the message is authentic.  How does the receiver know that the message is authentic if they don’t know what the private key is?  They know because if the message is tampered with, the signature will no longer verify against the sender’s public key.  A very common use of RSA encryption is TLS, which is what is used to send data to and from websites.  I wrote about TLS in this post if you’d like to learn more.

Encryption involves very complicated mathematical algorithms.  Fortunately, we don’t have to learn them to understand how encryption works!  In next week’s post, I’ll talk about how encoding and encryption are used in JWTs.  

Book Review: Enterprise Continuous Testing

As I’ve mentioned in previous posts, this year I’m reading one testing-related book a month and reviewing it in my blog.  This month I read Enterprise Continuous Testing, by Wolfgang Platz with Cynthia Dunlop.

This book aims to solve the problems often found in continuous testing.  Continuous testing is defined by the author as “the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release as rapidly as possible”.  Platz writes that there are two main problems that companies encounter when they try to implement continuous testing:

1. The speed problem
  • Testing is a bottleneck because most of it is still done manually
  • Automated tests are redundant and don’t provide value
  • Automated tests are flaky and require significant maintenance

2. The business problem
  • The business hasn’t performed a risk analysis on their software
  • The business can’t distinguish between a test failure that is due to a trivial issue and a failure that reveals a critical issue

I have often encountered the first set of problems, but I never really thought about the second set.  While I have knowledge of the applications I test and I know which failures indicate serious problems, it never occurred to me that it would be a good idea to make sure that product managers and other stakeholders can look at our automated tests and be able to tell whether our software is ready to be released.

Fortunately, Platz suggests a four-step solution to help ensure that the right things are tested, and that those tests are stable and provide value to the business.

Step One: Use risk prioritization

Risk prioritization involves calculating the risk of each business requirement of the software.  First, the software team, including the product managers, should make a list of each component of their software.  Then, they should rank the components twice: first by how frequently the component is used, and second by how bad the damage would be if the component didn’t work.  The two rankings should be multiplied together to determine the risk prioritization.  The higher the number is, the higher the risk; higher risk items should be automated first, and those tests should have priority.

An example of a lower-risk component in an e-commerce platform might be the product rating system: not all of the customers who use the online store will rate the products, and if the rating system is broken, it won’t keep customers from purchasing what’s in their cart.  But a higher-risk component would be the ability to pay for items with a credit card: most customers pay by credit card, and if customers can’t purchase their items, they’ll be frustrated and the store will lose revenue.
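
Here’s a sketch of that calculation with made-up components and ratings on a one-to-five scale:

const components = [
    { name: 'Product ratings', frequency: 2, damage: 2 },
    { name: 'Credit card payment', frequency: 5, damage: 5 },
    { name: 'Shipping estimate', frequency: 4, damage: 2 }
]

// Risk is frequency of use multiplied by severity of failure
components
    .map((c) => ({ ...c, risk: c.frequency * c.damage }))
    .sort((a, b) => b.risk - a.risk)
    .forEach((c) => console.log(c.name + ': risk ' + c.risk))

// Credit card payment: risk 25
// Shipping estimate: risk 8
// Product ratings: risk 4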

Step Two: Design tests for efficient test coverage

Once you’ve determined which components should be tested with automation, it’s time to figure out the most efficient way to test those components.  You’ll want to use the fewest tests possible to ensure good risk coverage.  This is because the fewer tests you have, the faster your team will get feedback on the quality of a new build.  It’s also important to make sure that each test makes it clear why it failed when it fails.  For example, if you have a single test that checks both that a password has been updated and that the user can log in, then when the test fails you won’t know immediately whether it failed on the password reset or on the login.  It would be better to have two separate tests in this case.

Platz advocates the use of equivalence classes: this is a term that refers to a range of inputs that will produce the same result in the application.  He uses the example of a car insurance application: if an insurance company won’t give a quote to a driver who is under eighteen, it’s not necessary to write a test with a driver who is sixteen and a driver who is seventeen, because both tests will test the same code path.
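
To see why, here’s a sketch of the insurance example: every age under eighteen falls into one equivalence class, so one representative value plus the boundary is enough.

function canGetQuote(age) {
    return age >= 18
}

console.log(canGetQuote(17))  // false - one representative of the under-18 class
console.log(canGetQuote(18))  // true  - the boundary of the eligible class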

Step Three: Create automated tests that provide fast feedback

Platz believes that the best type of automated test is the API test, for two reasons: one, while unit tests are very important, developers often neglect to update them as a feature changes, and two, UI tests are slow and flaky.  API tests are more likely to be kept current because they are usually written by the software testers, and they are fast and reliable.  I definitely agree with this assessment!

The author advises that UI tests should be used only in cases where you want to check the presence of or location of elements on a webpage, or when you want to check functionality that will vary by browser or device.

Step Four: Make sure that your tests are robust

This step involves making sure that your tests won’t be flaky due to changing test data or unreliable environments.  Platz suggests that synthetic test data is best for most automated tests, because you have control over the creation of the data.  In the few cases where it’s not possible to craft synthetic data that matches an important test scenario, masked production data can be used.

In situations where environments might be unreliable, such as a component that your team has no control over that is often unavailable, he suggests using service virtualization, where responses from the other environment are simulated.  This way you have more control over the stability of your tests.

Enterprise Continuous Testing is a short book, but it is packed with valuable information!  There are many features of the book that I didn’t touch on here, such as metrics and calculations that can help your team determine the business value of your automation.  I highly recommend this book for anyone who wants to create an effective test automation strategy for their team.

Adventures in Node: Arrow Functions

This year I’ve been feeling an urge to really learn a programming language.  There are lots of languages I know well enough to write automation code in- C#, Java, JavaScript, and so on- but I decided I wanted to dive deep into one language and truly understand it.

I decided to go deep with Node.js.  Node is essentially JavaScript with a server-side runtime environment.  It’s possible to write complete applications in Node, because you can code both the front-end and the back-end of the application.  And I was fortunate enough to find this awesome course by Andrew Mead.  Andrew does a great job of making complicated concepts really simple, so as I am taking the course, I’m finding that things that used to confuse me about Node finally make sense!  And because I like sharing things I’ve learned, I’ll be periodically sharing my new-found understanding in my blog posts.

I’ll start with arrow functions.  Arrow functions have been around for a few years now, but I’ve always been confused by them, because they weren’t around when I was first learning to write code.  You may have seen these functions, which use the symbol =>.  They seem so mysterious, but they are actually quite simple!  Arrow functions are simply a way to notate a function to save space and make code easier to read.  I’ll walk you through an example.  We’ll start with a simple traditional function:

const double = function(x) {
     return x + x
}

double is the name of the function.  When x is passed into the function, x + x is returned.  So if I called the double function with the number 3, I’d get 6 in response.

Now we’re going to replace the function with an arrow:

const double = (x) =>  {
    return x + x
}

Note that the arrow comes after the (x), rather than before.  Even though the order is different, function(x) and (x) => mean the same thing.

Now we’re going to replace the body of the function { return x + x } with something simpler:

const double = (x) => x + x

When arrow functions are used, it’s assumed that what comes after the arrow is what will be returned.  So in this case, x + x means the same thing as { return x + x }.  This shorthand is only used when the body of the function is relatively simple.

See?  It’s simple!  You can try running these three functions for yourself if you have Node installed.  Simply create an app.js file with the first version of the function, and add a logging command:

console.log(double(3))

Run the file with node app.js, and the number 6 will be returned in the console.

Then replace version 1 of the function with version 2, run the file, and you should get a 6 again.  Finally, replace version 2 with version 3, and run the file; you should get a 6 once again.

It’s even possible to nest arrow functions!  Here’s an example:

const doublePlusTen = (x) => {
    const double = (x) => x + x
    return double(x) + 10
}

The const double = (x) => x + x is our original function.  It’s nested inside a doublePlusTen function.  The doublePlusTen function uses curly braces and a return command, because there’s more than one line inside the function (including the double function).  If we were going to translate this nested function into plain English, it would look something like this:

“We have a function called doublePlusTen.  When we pass a number into that function, first we pass it into a nested function called double, which takes the number and doubles it.  Then we take the result of that function, add 10 to it, and return that number.”  

You can try out this function by calling it with console.log(doublePlusTen(3)), and you should get 16 as the response.

Hopefully this information will help you understand what an arrow function is doing the next time you encounter it in code.  You may want to start including arrow functions in your own automation code as well.  Stay tuned in the coming weeks for more Adventures in Node posts!

How I Would Have Tested the Iowa Caucus App

About six weeks ago, the Iowa Democratic Party held its caucus.  For those who don’t live in the United States, this event is one of the first steps in the presidential primaries, which determine who will be running for president in the next presidential election. 

In 2016, the Iowa Caucus used a mobile app created by a company called Interknowlogy in partnership with Microsoft to allow each precinct to report their results, and the app worked successfully.  But this year the Iowa Democratic Party chose to go with a different company to create a new app, which proved disastrous.  Incorrect tallies were reported, and precincts that tried to report via phone were often not able to get through or found that their calls were disconnected.

From reading this assessment, it appears that the biggest problem with the 2020 app was that the software company didn’t have adequate time to create the application, and certainly didn’t have enough time to test it.  But as a software tester, I found myself thinking about what I would have done if it had been my responsibility to test the app, assuming that there had been enough time for testing.  Below is what I came up with:

Step One: Consider the Use Case

The interesting thing about this application is that unlike an app like Twitter or Uber, the number of users is finite.  There are only about 1700 precincts in Iowa, including a few out-of-state precincts for Iowans who are in the military or working overseas.  So the app wouldn’t need to handle tens of thousands of users.

The users of the application will be the precinct leaders, who will own a wide variety of mobile phones, such as iPhone, Galaxy, or Motorola, and each of those devices could have one of several carriers, such as AT&T, Verizon, or Sprint.  Mobile service might be spotty in some rural areas, and wifi might be unavailable in some locations as well.  So it will be important to test the app on a wide variety of operating systems and devices, with a variety of carriers and connection scenarios.

Moreover, the precinct leaders will probably vary widely in their technical ability.  Some might be very comfortable with technology, while others might have never installed an app on their phone.  So it will be imperative to make sure that the app is on both the Apple App Store and Google Play, and that the installation is simple.

Some leaders may choose to call in their election results instead of entering them in the app.  So the application should allow an easy way to do this with a simple button click.  This will also be useful as a backup plan in case other parts of the app fail.

Finally, because this is an event of high political importance, security must be considered.  The app should have multi-factor authentication, and transmissions should be secured using https with appropriate security headers.

Step Two: Create an In-House Test Plan

Now that the users and the use case have been considered, it’s time to create an in-house test plan.  Initial testing should begin at least six months before the actual event.  Here is the order in which I would direct the testing:
  • Usability testing: the application should be extremely easy to install and use.
  • Functional testing: does the application actually do what it’s supposed to do?  Testers should test both the happy path- where the user does exactly what is expected of them- and every possible sad path- where the user does something odd, like cancel the transaction or back out of the page.
  • Device and carrier testing: testers should test on a wide variety of devices, with a wide variety of carriers, and with a wide variety of connection scenarios, including scenarios such as a wifi connection dropping in the middle of a transmission.  Testers should also ensure that the application will work correctly overseas for the remote precincts.  They can do this by crowd-sourcing a test application that has the same setup as the real application.
  • Load and performance testing: testers should make sure that the application can handle 2500 simultaneous requests, which is much higher than the actual use case.  They should also make sure that page response times are fast enough that the user won’t be confused and think that there’s something wrong with the application.  
  • Security testing: testers should run through penetration tests of the application, ensuring that they can’t bypass the login or hijack an http request.  
  • Backup phone system testing: testers should validate that they can make 2500 simultaneous calls to the backup phone system and be able to connect.  Since there probably won’t be 2500 phone lines available, testers should make sure that wait times are appropriate and that callers are told how many people are in the queue in front of them.  

Step Three: External Security Audit

Because of the sensitive nature of the application, the app should be given to an external security testing firm at least four months before the event.  Any vulnerabilities found by the analysis should be addressed and retested immediately.

Step Four: Submit to the Apple App Store and Google Play

As soon as the application passes the security audit, it should be submitted to the app stores for review.  Once the app is in app stores, precinct leaders should be given instructions for how to download the app, log in with a temporary password, and create a new password, which they should save for future use.

Step Five: End User Testing

Two months before the caucus, precinct leaders will be asked to do a trial run on the application.  Instead of using actual candidates, the names will be temporarily replaced by something non-political, like pizza toppings.  The leaders will all report a fictitious tally for the pizza toppings using the app, and will then use the backup phone number to report the tally as well.  This test will accomplish the following:
  • it will teach the leaders how to use the app
  • it will validate that accurate counts are reported through the app
  • it will help surface any issues with specific devices, operating systems, or carriers
  • it will validate that the backup phone system works correctly

By two weeks before the caucus, any issues found in the first pizza test should have been fixed.  Then a final trial run (again with pizza toppings rather than candidates) will be conducted to find any last-minute issues.  The precinct leaders will be strongly encouraged to make no changes to their device or login information between this test and the actual caucus.

Monday Morning Quarterbacking

There’s a term in the US called “Monday Morning Quarterbacking”, where football fans take part in conversations after a game and state what they would have done differently if they had been the quarterback.  Of course, most people don’t have the skill to be a professional quarterback, and they probably don’t have access to all the information that the team had.

I realize that what I’m doing is the software tester equivalent of Monday Morning Quarterbacking.  Still, it’s an interesting thought exercise.  I had a lot of fun thinking about how I would test this application.  The next time you see a software failure, try this thought exercise for yourself- it will help you become a better tester!

API Contract Testing Made Easy

As software becomes increasingly complex, more and more companies are turning to APIs as a way to organize and manage their application’s functionality.  Instead of being one monolithic application where all changes are released at once, now software can be made up of multiple APIs that are dependent upon each other, but which can be released separately at any time.  Because of this, it’s possible to have a scenario where one API releases new functionality which breaks a second API’s functionality, because the second API was relying on the first and now something has changed.

The way to mitigate the risk of this happening is through using API contract tests.  These can seem confusing: which API sets up the tests, and which API runs them?  Fortunately after watching this presentation, I understand the concept a bit better.  In this post I’ll be creating a very simple example to show how contract testing works.

Let’s imagine that we have an online store that sells superballs.  The store sells superballs of different colors and sizes, and it uses three different APIs to accomplish its sales tasks:

Inventory API:  This API keeps track of the superball inventory, to make sure that orders can be fulfilled.  It has the following endpoints:

  • /checkInventory, which passes in a color and size and verifies that a ball of that color and size is available
  • /remove, which passes in a color and size and removes that ball from the inventory
  • /add, which passes in a color and size and adds that ball to the inventory

Orders API:  This API is responsible for taking and processing orders from customers.  It has the following endpoints:
  • /addToCart, which puts a ball in the customer’s shopping cart
  • /placeOrder, which completes the sale

Returns API:  This API is responsible for processing customer returns.  It has the following endpoint:
  • /processReturn, which confirms the customer’s return and starts the refund process
Both the Orders API and the Returns API are dependent on the Inventory API in the following ways:
  • When the Orders API processes the /addToCart command, it calls the /checkInventory endpoint to verify that the type of ball that’s been added to the cart is available
  • When the Orders API processes the /placeOrder command, it calls the /remove command to remove that ball from the inventory so it can’t be ordered by someone else
  • When the Returns API runs the /processReturn command, it calls the /add command to return that ball to the inventory

In this example, the Inventory API is the producer, and the Orders API and Returns API are the consumers.

It is the consumer’s responsibility to provide the producer with some contract tests to run whenever the producer makes a code change to their API.  So in our example, the team who works on the Orders API would provide contract tests like this to the team who works on the Inventory API:
  • /checkInventory, where the body contained { “color”: “purple”, “size”: “small” }
  • /remove, where the body contained { “color”: “red”, “size”: “large” }

The team who works on the Returns API would provide an example like this to the team who works on the Inventory API:
  • /add, where the body contained { “color”: “yellow”, “size”: “small” }
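
As a sketch, the first of those contract tests might look like this when the Inventory API team adds it to their suite; the base URL, the Mocha-style test runner, and the use of the supertest library are my own assumptions:

const request = require('supertest')

// The Inventory API's location is a made-up placeholder
const baseUrl = 'https://inventory-api.example.com'

describe('Orders API contract', () => {
    it('accepts /checkInventory with only color and size', async () => {
        await request(baseUrl)
            .post('/checkInventory')
            .send({ color: 'purple', size: 'small' })
            .expect(200)
    })
})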

Now the team that works on the Inventory API can take those examples and add them to their suite of tests.

Let’s imagine that the superball store has just had an update to their inventory.  There are now two different bounce levels for the balls: medium and high.  So the Inventory API needs to make some changes to reflect this.  Now a ball can have three properties: color, size, and bounce.

The Inventory API modifies its /checkInventory, /add, and /remove commands to accept the new bounce property.  But the developer accidentally makes “bounce” a required field for the /checkInventory endpoint.

After the changes are made, the contract tests are run.  The /checkInventory test contributed by the Orders API fails with a 400 error, because there’s no value for “bounce”.  When the developer sees this, she finds her error and makes the bounce property optional.  Now the /checkInventory call will pass.

Without these contract tests in place, the team working on the Inventory API might not have noticed that their change was going to break the Orders API.  If the change went to production, no customer would be able to add a ball to their cart!

I hope this simple example illustrates the importance of contract testing, and the responsibilities of each API team when setting up contracts.  I’d love to hear about how you are using contract testing in your own work!  You can add your experiences in the Comments section.

More Fun With Cypress

Two weeks ago, I wrote about my first experiences using Cypress.io.  I was intrigued by the fact that it was possible to do http requests using Cypress commands, so this week I decided to see if I could combine API commands with UI commands in the same test.  To be honest, it wasn’t as easy as I thought it would be, but I did manage to come up with a small proof-of-concept.

Part of the difficulty here may lie in the fact that there aren’t many websites around on which to practice both UI and API automation.  For my experimentation, I decided to use the OWASP Juice Shop, which is a great site for practicing security testing.  I wanted to log into the site using an HTTP command, and then use the token I retrieved from my login to navigate to the site as an authenticated user.

Setting up the HTTP command was pretty easy.  Here’s what it looks like:

var token;

describe('I can log in as a user', () => {
    it('Logs in', () => {
        cy.request({
            method: 'POST',
            url: 'https://juice-shop.herokuapp.com/rest/user/login',
            headers: { 'content-type': 'application/json' },
            body: {
                email: '[email protected]',
                password: '123456'
            }
        })
        .then((resp) => {
            const result = JSON.parse(JSON.stringify(resp.body));
            token = result.authentication.token;
            expect(resp.status).to.eq(200);
        });
    });
});

Let’s take a look at what’s happening here.  First I declare the token variable.  The 'I can log in as a user' and 'Logs in' parts are just the names of the test section and the test.  Then we have the cy.request section.  This is where the http request happens.  You can see the method, the url, the headers, and the body of the request.  Next, there’s the .then((resp) => ...) section, which shows what the test is doing with the response.  With const result = JSON.parse(JSON.stringify(resp.body)), I’m parsing the body of the response into JSON format and saving it to a result variable.  Then I’m setting the token variable to result.authentication.token.  Finally, with expect(resp.status).to.eq(200) I’m doing a quick assertion that the status code of the response is 200, just to alert me if something didn’t come back correctly.

Next, I loaded the web page, and included the token in the browser’s local storage so the web page would know I was authenticated:

describe('Is logged in', function() {
  it('Is logged in', function() {
    cy.visit('https://juice-shop.herokuapp.com/#/', {
      onBeforeLoad (win) {
        win.localStorage.setItem('token', token)
      },
    })
    cy.contains('Dismiss').click();
    cy.contains('Your Basket').should('be.visible');
  })
});

With this line: cy.visit('https://juice-shop.herokuapp.com/#/') I’m navigating to the web page.  With the next section:

    onBeforeLoad (win) {
      win.localStorage.setItem('token', token)
    },
  })

I’m telling the browser to put the token I saved into local storage.  There was a popup window with a “Dismiss” button that appeared in the browser, so I closed it with cy.contains('Dismiss').click().  And finally with cy.contains('Your Basket').should('be.visible') I asserted that the link called “Your Basket” was visible, because that link doesn’t appear unless the user is authenticated.

My code definitely wasn’t perfect, because I noticed that when I manually logged in, I saw my email address in the Account dropdown, but when I logged in through Cypress, the email address was blank.  I also tried doing some other UI tasks, like adding an item to my cart, but I had trouble simply because the application didn’t have good element identifiers.  (I so appreciate developers who put identifying tags on their elements!  If your developers do this, please thank them often.)  And there may be irregularities with this application because it was specifically designed to have security holes.

It would be very interesting to see how easy it would be to set up API and UI testing in Cypress when testing an application with normal authentication processes and good element identifiers!  However, I think my experiment showed that it’s fairly easy to integrate API and UI tests together in Cypress.

Book Review: The Unicorn Project

As I mentioned in a previous post, it’s my goal this year to read and review one tech-related book each month.  This month, I read The Unicorn Project, by Gene Kim.  The book is a work of fiction, and is the story of an auto parts supply company that is struggling to participate in the digital transformation of retail business.  I was a bit dubious about reading a work of fiction that aimed to tackle the common problems of DevOps; I assumed either the lessons or the story would be boring.  I’m happy to say that wasn’t the case!  I really enjoyed this tale and learned a lot in the process of reading it. 

The hero of the story, Maxine, goes through the same trials and tribulations that we all do in the workplace.  At the beginning of the book, Maxine has just been chosen to be the “fall guy” for a workplace failure, even though she had nothing to do with it and was actually on vacation at the time.  As a punishment, she is sent to a project that is critical for the company’s success, but has been bogged down for years in a quagmire of environments, permissions, and archaic processes. 

I could definitely relate to Maxine’s frustrations.  One day I was having a particularly tough day at work, and I was reading the book on my lunch break.  Maxine had been working for days to try to get a build running on her machine, and she had opened a ticket to get the appropriate login permissions.  She sees that there’s been progress on the ticket, but when she goes to look at it, it’s been closed because she didn’t have the appropriate approval from her manager.  So she opens a new ticket and gets her manager’s approval, only to have the ticket closed because the manager’s approval wasn’t allowed to be in the “Notes” field!  My troubles that day were different, but I too had been struggling with getting the build I needed; I felt like shouting into the book: “Honey, I feel your pain!”

The real magic in the story comes when a small band of people from various departments gathers together to try to make some huge process changes in the company.  They are aided by a surprise mentor, who tells them about the Five Ideals of the tech workplace:

1. Locality and Simplicity– having locality in our systems and simplicity in our processes
2. Focus, Flow, and Joy– people can focus on their work, flow through processes easily, and experience the joy of understanding their contributions
3. Improvement of Daily Work– continually improving processes so that day-to-day operations are simple
4. Psychological Safety– people feel comfortable suggesting changes, and the whole team owns successes and failures without playing the blame game
5. Customer Focus– everything is looked at from the lens of whether it matters to the customers

Using those Five Ideals, Maxine and her fellow rebels are able to start making changes to the systems at their company, sometimes with permission and sometimes without.  I don’t want to give away the ending, but I will say that the changes they make have a positive impact.

There were a couple of key things I learned from this book, which have given me a new understanding of just how important DevOps is.  The first is that when we create a new feature or verify that an important bug has been fixed, it means nothing until it’s actually in the hands of customers in Production.  I have fallen for the fantasy of thinking that something is “Done” when I see it working correctly in my test environment, but it’s important to remember that to the customer, it is totally not done!

The second thing I learned is the importance of chaos testing.  As companies move further toward using microservices models and cloud technologies, we need to make sure we know exactly what will happen if one of those services or cloud providers is unavailable.  Chaos testing is a great way to simulate those failures and help teams create ways to fail more elegantly; for example, by having a failover system, using cached data, or including a helpful error message.

I’ll be thinking about this book for a long time as I look at the systems I work with.  I definitely recommend this book for developers, testers, managers, and DevOps engineers!