Introduction to SQL Injection

SQL Injection is another type of security attack that can do serious damage to your application.  It’s important to find SQL Injection vulnerabilities before a malicious user does.

In SQL Injection, a malicious user sends SQL code through a form field so that it interacts with the database in an unexpected way.  Here are four things that a malicious user might do with SQL Injection:

  • drop a table
  • change the records of another user
  • return records that the user shouldn’t have access to
  • log in without appropriate credentials
To understand how a SQL Injection attack is crafted, let’s take a look at an example.  Let’s say we have a form in our application with a username field.  When the username field is populated with a name such as ‘testerguy’ and submitted to the server, the following SQL query is run:
SELECT * from users where username = 'testerguy'
If this username exists in the database, the matching records from the users table are returned to the application.
A malicious user will try to trick the database by 
  1. making it think that the quoted username value has terminated, by passing in testerguy'
  2. adding an additional clause, such as OR 1=1
  3. adding a terminating statement such as ; to make sure no other SQL statement will be run
In the above example, what the user would add to the username field would be:
testerguy' OR 1=1;

And what the database will execute is:

SELECT * from users where username = 'testerguy' OR 1=1;

Take a moment to think about the 1=1 clause.  1=1 is always true, so the WHERE clause matches every row in the table.  This select statement is therefore asking for all values for all the users in the table.
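To see why this works, here is a minimal sketch (in JavaScript, and not the Juice Shop’s actual code) of the vulnerable pattern: the form value is concatenated directly into the SQL string, so whatever the user types, quotes and all, becomes part of the statement.  The request and db objects are placeholders for whatever framework and database client the application uses.

const username = request.body.username;   // whatever was typed into the form field ("request" is a placeholder)
const query = "SELECT * from users where username = '" + username + "'";
db.query(query);                           // "db" is a placeholder for the application's database client

Because nothing filters out quote characters, the input testerguy' OR 1=1; changes the structure of the query itself rather than being treated as a plain username.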

Let’s see some SQL Injection in action, using the OWASP Juice Shop.  Click the login button in the top left corner of the page.  We are going to use SQL injection to log in without valid credentials.

We’ll make the assumption that when the login request happens, a request like this goes to the database:

SELECT * from users where username = 'testerguy' AND password = 'mysecretpass'

If the request returns results, then it’s assumed that the user is valid, and the user is logged in.
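Under the hood, a vulnerable login check might look roughly like this hypothetical sketch; db.query and logUserIn stand in for whatever database client and session logic the application really uses:

async function login(username, password) {
  // Vulnerable pattern: both form values are concatenated straight into the SQL string
  const query = "SELECT * from users where username = '" + username +
                "' AND password = '" + password + "'";
  const rows = await db.query(query);
  // Any returned row is treated as a successful login
  if (rows.length > 0) {
    logUserIn(rows[0]);
    return true;
  }
  return false;
}

It’s this “any returned row means a valid user” assumption that we are about to exploit.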

What we want to do is terminate the quoted username value so that all usernames will be returned, and so that the password isn’t checked at all.

So we will send in:

  1. any username at all, such as foo
  2. a single quote to make it look like our entry has terminated
  3. the clause OR 1=1 to make the database return every username in the table
  4. a terminating string of -- (two dashes) to make the database ignore everything after our entry
Taken together, the string we will add to the username field is:
foo' OR 1=1--
You may notice that the submit button is not enabled yet.  This is because we haven’t added a password.  The UI expects both a username and a password in order to submit the login.  You can add any text at all into the password field, because we are ensuring that it will be ignored. Let’s add bar.
Now when you submit the login request, this is what will be executed on the database:
SELECT * from users where username = 'foo' OR 1=1--' AND password = 'bar'

The first part of the request is returning all users, because 1=1 is always true.  And the second part of the request will be ignored, because in SQL everything after the dashes is commented out.  So when the code sees that all users have been returned, it logs us in!
If you hover over the person icon at the top left of the screen, you will see that you have actually been logged in as the admin!  The admin’s email address was the first address in the database, so this is the credential that was used.  Because you are logged in as the admin, you now have elevated privileges on the website that you would not have as a regular user.
Obviously, this sort of scenario is one that you would want to avoid in your application!  Next week we’ll take a look at some other SQL Injection patterns we can use to test for this vulnerability.  

Automated Testing For XSS

Last week, we talked about three different ways to test for Cross-Site Scripting (XSS).  We looked at examples of manual XSS testing, and talked about how to use an application’s code to formulate XSS attacks.  Today we will look at the third way to test, which is to use automation.  For today’s testing, we’ll be using Burp Suite, which is an oddly-named but very helpful tool that is available for free (there is also a paid version with additional functionality).  We’ll also be using the Juice Shop and Postman.

First, let’s take a look at the field we will be testing in the Juice Shop.  Using the Chrome browser, navigate to the Juice Shop’s home page.  You’ll see a search window at the top of the page.  Open up the Chrome Developer Tools, by clicking on the three-dot menu in the upper right corner, then choosing “More Tools”, then “Developer Tools”.  Once the dev tools are open, click on the Network tab. 

Do a search for “apple” in the search field.  You’ll get your search results on the web page, and you should see the network request in the developer tools.  The request name will be “search?q=apple”.  Click on this request.  A window will open up with the full request URL, which should be https://juice-shop.herokuapp.com/rest/product/search?q=apple. 

Next, open up Postman.  Paste this URL into the URL window, and click the Send button.  You should get a 200 response, and you should see your search results.  Now we’ll set Postman to use a proxy.  Click on the wrench icon in the top navigation bar and select “Settings”.  Click on the Proxy tab, then turn the Global Proxy on.  In the first section of the Proxy Server window, type 127.0.0.1, which is the loopback address for your own machine.  In the second section, type 8080, which is the port Burp Suite listens on by default.  Postman is now set up to send requests to Burp Suite.  You may need to do one more step here, which is to turn SSL verification off.  In the Settings window, click on the General tab, and turn off the “SSL certificate verification” setting.

Once you have downloaded Burp Suite, start the application and click Next.  Then click Start Burp.  Now Burp Suite is ready to receive requests.  Go back to Postman, and click Send on the search request that you sent earlier.  It will appear that nothing happens; this is because the request has just been sent to Burp!  Go to Burp, and you will see that the Proxy tab is now in an orange font.  Click on the Proxy tab, and then click the Forward button.  Your request is now forwarded on to the Juice Shop server, and if you return to Postman to check, you will see your results.  It’s a good idea to turn off the Global Proxy in Postman now, because if you forget, the next time you make a Postman request you’ll wonder why you aren’t getting results!

Return to Burp Suite, and click on the HTTP tab.  (This is a sub-tab of the Proxy tab.  If you don’t see the HTTP tab, make sure the Proxy tab is selected.)  You should see your GET request listed here.  Right-click on your request, and click “Send to Intruder”.  You should see that the Intruder tab is now in an orange font. 

Click on the Intruder tab.  You can see that the attack target has already been set to the Juice Shop URL.  Now click on the Positions sub-tab.  This is where we choose which element of the request we want to replace. Burp Suite has guessed correctly that the element we’d like to replace in our testing is the search field value of “apple”, so we can leave this setting as-is.  Now click on the Payloads sub-tab.  Here is where we will try out a bunch of cross-site scripting payloads! 

Enter in a bunch of XSS attacks into the Payload Options window, using the Add button.  Here are some suggestions:

<script>alert("XSS here!")</script>
<script/src=data:,alert()>
<IMG SRC=javascript:alert('XSS')>
<IMG SRC=JaVaScRiPt:alert('XSS')>
<a onmouseover="alert(document.cookie)">xxs link</a>

You can find many more suggestions in this XSS Filter Bypass List.  It’s worth noting that if you sign up for the paid version of Burp Suite, a whole list of XSS attacks will be available to use, saving you from having to type them in manually. 

Let’s start our attack!  Click the “Start attack” button in the top right corner of the application.  You’ll get a warning that your requests may be throttled because you are using the free version.  Just click OK, and the attack will start.  You’ll see a popup window, and one by one your XSS attacks will be attempted.

Once the attacks are finished, we can look at the attack results. There are six requests in the popup window.  The first request, request 0, is simply a repeat of our original request.  Requests 1-5 are the requests that we added into the Payload Options window.  We can see that requests 1, 2, and 5 returned a code 200, while requests 3 and 4 returned a 500.  This means that requests 1, 2, and 5 were most likely successful!  You can try them yourself by pasting them into the search field of the Juice Shop page. 
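If you’d like to see the same idea outside of Burp Suite, here is a rough Node sketch that loops through a few payloads against the search endpoint we used earlier.  The “reflected” check is just a naive heuristic for illustration; a payload echoed back in a 200 response is a hint to investigate further, not proof of XSS.

const payloads = [
  '<script>alert("XSS here!")</script>',
  '<script/src=data:,alert()>',
  '<a onmouseover="alert(document.cookie)">xxs link</a>'
];

const base = "https://juice-shop.herokuapp.com/rest/product/search?q=";

(async () => {
  for (const payload of payloads) {
    const response = await fetch(base + encodeURIComponent(payload));   // fetch is built in to Node 18+
    const body = await response.text();
    console.log(response.status, body.includes(payload) ? "reflected" : "not reflected", payload);
  }
})();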

A few notes: 

  • It’s also possible, and more common, to use Burp Suite to intercept web browser requests directly, rather than going through Postman.  I chose to use Postman here because it’s so easy to set up the proxy.  
  • If you have set up Burp Suite to intercept browser requests directly, you may be able to replay your XSS attack responses directly in the browser to see them in action.  
  • Another feature of the paid version of Burp Suite is the Scanner tool, which will scan for a number of vulnerabilities, including XSS.

I hope that this blog post and the two previous ones have helped you to have a greater understanding of what Cross-Site Scripting is, why it’s dangerous, and how to test for it.  Next week, we’ll take a look at SQL Injection!

Three Ways to Test for Cross-Site Scripting

Last week, we explained what Cross-Site Scripting (XSS) is and demonstrated a couple of examples.  But knowing what it is isn’t enough; we need to be able to verify that our application is not vulnerable to XSS attacks!  Today we’ll discuss three different strategies to test for XSS.

Strategy One:  Manual Black-Box Testing

This is the strategy to use when you don’t have access to an application’s code, and when you want to manually try XSS.  To implement this strategy, you’ll need to think about the places where you could inject a script into an application:

  • an input field
  • a URL
  • the body of an HTTP request
  • a file upload area
You’ll also need to think about what attacks you will try.  You may want to use an existing list, such as this one:  https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet

This cheat sheet includes lots of different ways to get scripts past any validation filters, including:

  • onerror and onmouseover alerts
  • URL encoding
  • using upper and lower case letters (to evade a filter that’s just looking for “javascript” in lower case letters)
  • putting a tab or space into a script so it won’t be detected
  • using a character to end an existing script, and then appending your own
If you are testing manually, a systematic approach is best.  Locate all of the places where you could inject a script, choose a list of attacks you’d like to try, and try each attack in each place.  While you are testing, you may also gain some insight into how to adjust your attack based on the kind of response you get.  For example, if your script tag is stripped out by validation, you could try encoding it.

Strategy Two:  Look at the Code

This is the strategy to use if you want to test manually, and you have access to your application’s code.  By looking at the code, you can determine the best way to craft an attack script.  This also works well for testing file uploads; for example, if your application’s code lists file types that are not allowed, you may find some types that have not explicitly been blacklisted, and you can see if you can upload a script using one of those types.
Let’s take a look at how you can use an application’s code to craft an attack.  We’ll be using the XSS Game again, this time for Challenge 3.  In order to access it, you need to have solved Challenges 1 and 2, so take a look at last week’s post to see how to do that.
As you look at the website in Challenge 3, you see that there are three different tabs, each of which displays a different image when clicked on.  Take a look at what happens in the URL each time you click on one of the tabs.  When you click on the Image 2 tab, “#2” is appended to the URL.  When you click on the Image 3 tab, “#3” is appended to the URL.  What happens when instead of clicking on a tab, you type “#2” into the URL?  Unsurprisingly, you are taken to the Image 2 tab.  What happens when you type “#5” into the URL?  There is no Image 5, but you can see that the page displays the words “Image 5”.  What happens when you type “#FOO”?  The page displays “Image NaN” (short for “Not a Number”).  You have probably figured out now that the end of the URL is the place that you are going to inject your malicious script.  
Now, let’s take a look at the code!  Click on the “toggle” link next to the words “Target Code”.  This will display the code used for the webpage.  Look at line 17; it shows how the URL for the image tag is created:
"<img src='/static/level3/cloud" + num + ".jpg' />";
The “num” part of this image tag is a variable.  The value of the variable is taken from what we send in the URL.  If you send in a URL ending with “#3”, then the image source will be
cloud3.jpg
If you send in a URL ending with “#FOO”, then the image source will be
cloudFOO.jpg

Our task now is to see how we can inject a script using this “num” variable.  Recall that in last week’s post, we did some Cross-Site Scripting where we made it look like we were uploading an image, and we included an alert that would display when there was an error uploading the image. And we set things up so that there would always be an error, because we weren’t really uploading an image at all.  We are going to do the same thing here.  
Let’s craft our URL.  We will begin with
https://xss-game.appspot.com/level3/frame
because this is how the URL always starts.
Next, we’ll add
https://xss-game.appspot.com/level3/frame#3
because we want to make it look like we are following the pattern of choosing an image number.
Now we’ll add a single quote:
https://xss-game.appspot.com/level3/frame#3'
because we want to trick the code into thinking that the image URL is complete. This means that the code will try to load an image called “cloud3” instead of “cloud3.jpg”, which will generate an error.
Now we can add our on-error script:
https://xss-game.appspot.com/level3/frame#3' onerror='alert("Hacked!")'
When the alert is triggered, a popup window will appear with the “Hacked!” message.
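Putting it all together, the page’s code does something roughly like this (a paraphrase of the vulnerable pattern, not the game’s exact source; I’m assuming the tab number is read from the URL fragment):

var num = window.location.hash.substring(1);   // everything after the # in the URL
var html = "<img src='/static/level3/cloud" + num + ".jpg' />";
// With our injected fragment, html becomes:
// <img src='/static/level3/cloud3' onerror='alert("Hacked!")'.jpg' />
// The src of cloud3 fails to load, so the onerror handler fires and the alert appears.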
Let’s try it!  Take the entire URL:
https://xss-game.appspot.com/level3/frame#3' onerror='alert("Hacked!")'
Paste it into the URL window, and click the “Go” button.  
You should see the popup window appear, and you have solved the challenge!
Strategy Three: Use a Security Testing Tool
As you can see from the example above, crafting an XSS attack takes a little time.  You may have many places in your application that you need to test for XSS, and not much time to test them.  This is where automated tools come in!  With an automated tool, you can send hundreds of XSS attacks in less than a minute.  In next week’s blog post, we’ll take a look at how to use an oddly-named tool called Burp Suite to automate XSS attacks!


What is Cross-Site Scripting, and Why Should You Care?

In discussions about security testing, you have probably heard about Cross-Site Scripting (XSS), but you may not have a good definition of what it is.  Cross-Site Scripting is an attack in which a malicious user finds a way to execute a script on another user’s website.  Today we’ll learn about two different kinds of XSS attacks, do a hands-on demo of each, and discuss why they are harmful to the end user.

Reflected XSS:

Reflected XSS is an attack that is executed through the web server, but is not stored in the code or the database.  Because the attack is not stored, the owner of a site may have no idea that the attack is happening.

In order to demonstrate this attack, we’ll go to a great training site from Google called the XSS Game. This site has a series of challenges in which you try to execute XSS attacks.  The challenges become increasingly difficult as they progress.  Let’s try the first challenge.

On this page, you see a simple search field and button.  To execute the attack, all you need to do is type
<script>alert("XSS here!")</script>
into the text field, and click the button.  You will see your message, “XSS here!”, pop up in a new window.

What is happening here is that you are sending a script to execute a popup alert to the server.  The client-side code does not have appropriate safeguards in place to prevent a script from executing, so the site executes the script.

You might be thinking “This is a fun trick, but how could a malicious user use this to hack me?  I’m typing into my own search window.”  One way this is used is through a phishing link.  Let’s say that you are the owner of a website.  A malicious user could create a link that goes to your site, but appends a script to the end of the URL, such as
?query=%3Cscript%3Ealert%28%22XSS%22%29%3C%2Fscript%3E. 
(This is simply the attack we used earlier, with URL encoding.) The malicious user could send this link in an email to an unsuspecting visitor to your site, making the email look like it came from you.  When the person clicks on the link, it will take them to your site and then execute the popup script.  The malicious user will craft the script so that instead of containing the message “XSS here!”, it contains a message that encourages the visitor to interact with it, in order to obtain the user’s account number or other sensitive information.
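If you want to produce this kind of encoded payload yourself, JavaScript’s built-in encodeURIComponent function will do it; here is a quick sketch you can run in a browser console:

const payload = '<script>alert("XSS")</script>';
console.log(encodeURIComponent(payload));
// prints: %3Cscript%3Ealert(%22XSS%22)%3C%2Fscript%3E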

Stored XSS:

Stored XSS is an attack where the malicious script is actually stored in the database or code of a website, so it executes whenever a user navigates to the page or link. This can happen if the site does not adequately sanitize input before storing it in the back-end database.

We’ll take a look at how to craft this attack by looking at the second challenge of the XSS Game.  (In order to see this challenge, you’ll need to have solved the first challenge, so follow the instructions above.)

In the second challenge, you are presented with a chat app.  To solve the challenge, you need to add some text to the application that will execute a script.  You can do this by typing in
<img src='foobar' onerror='alert("xss")'>

As soon as you submit this entry, you should see a popup window with the “xss” message.  And not only that, if you navigate away from this page and return to it, you will see the popup window again.  The attack has been stored in your comment on the chat page, where it will cause a popup for any users who navigate to it.

Let’s parse through the script we entered to see what it’s doing:

<img src='foobar' onerror='alert("xss")'>

The img tag indicates that we are passing in an image element.

The src='foobar' attribute tells the browser what the source of the image should be.  And here’s the trick: there is no URL of foobar, so the image cannot load.

The onerror='alert("xss")' attribute tells the browser that if there is an error, a popup window should be generated with the “xss” text.  Because we have set things up so that there will always be an error, this popup will always execute.

One way that stored XSS might be used is to spoof a login window.  When a user navigates to a hacked site, they will be presented with a login window that has been crafted to look authentic.  When they enter their login credentials, their credentials will be sent to the malicious user, who can now use them to log in to the site, impersonating the victim.

Next week, we’ll discuss more ways to test for XSS attacks!

Testing for IDOR Vulnerabilities

In this week’s post, we will learn how to test for IDOR.  IDOR stands for Insecure Direct Object Reference, and it refers to a situation when a user can successfully request access to a webpage, a data object, or a file that they should not have access to.  We’ll discuss four different ways this vulnerability might appear, and then we’ll actually exploit this vulnerability in a test application using Chrome’s Developer Tools and Postman.

One easy way to look for IDOR is in a URL parameter.  Let’s say you are an online banking customer for a really insecure bank.  When you want to go to your account page, you log in and you are taken to this URL:  http://mybank/customer/27.  Looking at this URL, you can tell that you are customer number 27.  What would happen if you changed the URL to http://mybank/customer/28?  If you are able to see customer 28’s data, then you have definitely found an instance of IDOR!

Another easy place to look is in a query parameter.  Imagine that your name is John Smith, and you work for a company that conducts annual reviews for each of its employees.  You can access your review by going to http://mycompany/reviews?employee=jsmith.  You are very curious about whether your coworker, Amy Jones, has received a better review than you.  You change the URL to http://mycompany/reviews?employee=ajones, and voila!  You now have access to Amy’s review.

A third way to look for IDOR is by trying to get to a page that your user should not have access to.  If your website has an admin page with a URL of http://mywebsite/admin, which is normally accessed by a menu item that is only visible when the user has admin privileges, see what happens if you log in as a non-admin user and then manually change the URL to point to the admin page.  If you can get to the admin page, you have found another instance of IDOR.

Finally, it’s also possible to exploit an IDOR vulnerability to get files that a user shouldn’t have access to.  Let’s say your site had a file called userlist.txt with the names and addresses of all your users.  If you can log in as a non-admin user and navigate to http://mywebsite/files?file=userlist.txt, then your files are not secure.

Let’s take a look at IDOR in action, using Postman, Chrome Developer Tools, and an awesome website called the OWASP Juice Shop!  The OWASP Juice Shop is an application created by Bjorn Kimminich to demonstrate the most prevalent security vulnerabilities.  You can download it and run it locally by going to https://github.com/bkimminich/juice-shop, or you can access an instance of it by going to https://juice-shop.herokuapp.com.  For this tutorial, we’ll use the heroku link.  Once you have navigated to the site in Chrome, create a login for yourself.  Click the Login button in the top left, and then click the “Not yet a customer?” link. You can use any email address and password to register (don’t use any real ones!).  Log in as your new user, and click on any of the juices on the Search page in order to add it to your shopping basket.

Before you take a look at your basket, open up the Chrome Developer Tools by clicking on the three dots in the top right corner of the browser, selecting “More Tools”, and then “Developer Tools”.  A new window will open up on either the right or the bottom of your browser.  In the navigation bar of the tools, you should see a “Network” option.  Click on this.  This network tool will display all of the network requests you are making in your browser. 

Click on the “Your Basket” link in the top right of the Juice Shop page.  You will be taken to your shopping cart and you should see the juice that you added to the basket.  Take a look in the Network section of the Developer Tools.  The request that you are looking for is one that is named simply with a number, such as “6” or “7”.  Click on this request, and you should see that the request URL is https://juice-shop.herokuapp.com/rest/basket/<whateverYourAccountIdIs>, and that the request type is a GET.  Scrolling down a bit, you’ll see that in the Request Headers, the Authorization is set to Bearer and then there is a long string of letters and numbers.  This is the auth token.  Copy everything in the token, including the word “Bearer”. 

Next, we’ll go to Postman.  Click on the plus tab to create a new request.  The request should already be set to GET by default.  Enter https://juice-shop.herokuapp.com/rest/basket/<yourAccountId> into the URL.  Now go to the Headers section, and underneath the Key section, type “Authorization”, and underneath the Value section, paste the string that you copied.  Click to Send the request, and if things are set up correctly, you will be able to see the contents of your shopping basket in the response. 

Now for the fun part!  Change the account id in the URL to a different number, such as something between 1 and 5, and click Send.  You will see the contents of someone else’s basket!  Congratulations!  You have just exploited an IDOR vulnerability! 
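If you would rather check a range of basket ids quickly than edit the URL by hand, a small Node sketch like the one below would do it (the token value is the Bearer string you copied from the developer tools, and the id range is arbitrary):

const token = "Bearer <paste the long token string you copied here>";

(async () => {
  for (let id = 1; id <= 5; id++) {
    const response = await fetch("https://juice-shop.herokuapp.com/rest/basket/" + id, {
      headers: { Authorization: token }
    });
    // Any basket other than your own that comes back with a 200 is an IDOR finding
    console.log(id, response.status);
  }
})();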

Introduction to Security Testing

Until a few years ago, security testing was seen as something separate from QA; something that an InfoSec team would take care of.  But massive data breaches have demonstrated that security is everyone’s responsibility, from CEOs to product owners, from DBAs to developers, and yes, to software testers.  Testers already verify that software is working as it should so that users will have a good user experience; it is now time for them to help verify that software is secure, so that users’ data will be protected.

The great news is that much of what you already do as a software tester helps with security testing!  In this post, I will outline the ways that testers can use the skills they already have to start testing with security in mind, and I will discuss the new skills that testers can learn to help secure their applications.

Things you are probably already testing:

  • Field Validation: It’s important to make sure that fields only accept the data types they are expecting, and that limits on the number and type of characters are enforced. This helps ensure that SQL injection and cross-site scripting attacks can’t be entered through a data field.
  • Authentication: Everyone knows that it’s important to test the login page of an application. You are probably already testing to make sure that when login fails, the UI doesn’t provide any hints as to whether the username or password failed, and testing to make sure that the password isn’t saved after logout or displayed in clear text.  This serves to make it more difficult for a malicious user to figure out how to log in.
  • Authorization: You are already paying attention to which user roles have access to which pages.  By verifying that only authorized users can view specific pages, you are helping to ensure that data does not fall into the wrong hands.

    Things you can learn for more comprehensive security testing:

    • Intercepting and Manipulating Requests: It is easy to intercept web requests with free tools that are available to everyone online.  If attackers are doing this (and they are), then it is important for you to ensure that they can’t get access to information that they shouldn’t have.
    • Cross-site Scripting (XSS): This involves entering scripted code that will be executed when someone navigates to a page or retrieves data.  Any text field on a page, even any URL, represents a potential attack point for a malicious user to insert a script.
    • SQL Injection: This is exploiting potential security holes in communication with the database in order to retrieve more information than the application intended.  As with cross-site scripting, any text field or URL has the potential to be used to extract data.
    • Session Hijacking: It’s important to learn if usernames, passwords, tokens, or other sensitive information is displayed in clear text or poorly encrypted. Malicious users can take this information and use it to log in as someone else.  

      Security testing involves a shift in mindset from traditional testing.  When we test software, we are usually thinking like an end user.  For security testing, we need to think like a malicious user.  End users take the Happy Path, because they are using the software for its intended purpose, whereas hackers are trying to find any possible security holes and exploit them.  Because of this, security testing requires a bit more patience than traditional testing.  In the next few posts, I’ll be discussing the new skills we can learn, and the ways that we can Think Like a (Security) Tester!

      Understanding JSON Data

      New API testers will often be mystified by the assortment of curly braces, colons, and commas that they see in the body of the response to their GET requests.  Trying to create a valid JSON body for a POST request is even more puzzling.  In this week’s post, I’ll discuss how JSON data is formed and offer up some resources that will make working with JSON easier.

      JSON stands for JavaScript Object Notation.  It’s simply a way to organize data so that it can easily be parsed by the code.  The fundamental building block in JSON is the name-value pair.  Here are some examples of name-value pairs:
      "Name": "Dino"
      "Color": "Purple"
      A group of name-value pairs is separated by commas, like this:
      "FirstName": "Fred",
      "LastName": "Flintstone",
      "City": "Bedrock"
      Note that the final name-value pair does not have a comma.  This is because it’s at the end of the group.
      An object is simply a grouping of one or more name-value pairs.  The object is represented with curly braces surrounding the name-value pairs.  For example, we might represent a pet object like this:
      {
      "Name": "Dino",
      "Type": "Dinosaur",
      "Age": "5",
      "Color": "Purple"
      }
      An array is a group of values, most often objects.  The array is surrounded by square brackets, and the objects inside the array have curly braces.  For example:

      "residents": [
      {
      "FirstName": "Fred",
      "LastName": "Flintstone"
      },
      {
      "FirstName": "Barney",
      "LastName": "Rubble"
      },
      {
      "FirstName": "Wilma",
      "LastName": "Flintstone"
      }
      ]

      Notice that Fred Flintstone’s last name does not have a comma after it.  This is because the LastName is the last name-value pair in the object.  But, notice that the object that contains Fred Flintstone does have a comma after it, because there are more objects in the array.  Finally, notice that the object that contains Wilma Flintstone does not have a comma after it, because it is the last object in the array.

      Not only can an array contain objects, but an object can contain an array.  When you are sending JSON in the body of an API request, it will usually be in the form of an object, which means that it will begin and end with a curly brace.  Also, name-value pairs, objects, and arrays can be very deeply nested.  It would not be unusual to see something like this contained in a POST for city data:

      {
      "residents": [
      {
      "firstName": "Fred",
      "lastName": "Flintstone",
      "contactInfo": {
      "phoneNumber": "555-867-5309",
      "email": "[email protected]"
      }
      },
      {
      "firstName": "Wilma",
      "lastName": "Flintstone",
      "contactInfo": {
      "phoneNumber": "555-423-4545",
      "email": "[email protected]"
      }
      }
      ],
      "pets": [
      {
      "name": "Dino",
      "type": "dinosaur",
      "color": "purple"
      },
      {
      "name": "Hoppy",
      "type": "hopparoo",
      "color": "green"
      }
      ]
      }

      Notice that the contactInfo is deeply nested in the city object.  If we were testing this API and wanted to assert that Fred Flintstone’s phone number was correct, we would access it like this:

      residents[0].contactInfo.phoneNumber

      The first array in the city object is the residents array, and Fred is the first resident in the array, so we access him with residents[0].  Next, we move to the contactInfo, and since the contactInfo is an object rather than an array, we don’t need to specify an index in brackets.  Finally, we specify the phoneNumber as the name-value pair within the contactInfo object that we are looking for.
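      In a Postman test, for example, that path is exactly how you would reference the value in an assertion.  Here is a sketch that assumes the response body is the city object shown above:

      const city = pm.response.json();

      pm.test("Fred's phone number is correct", function () {
          pm.expect(city.residents[0].contactInfo.phoneNumber).to.eql("555-867-5309");
      });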

      Understanding this nested structure is also important when passing in query parameters in a URL.  For example, if we were to do a GET request on the city object, and we only wanted to have the residents of the city returned, we could use a URL like this:

      http://myapp/city/Bedrock?fields=residents

      If we wanted to narrow the results further, and only see the first names and email addresses of our residents, we could use a URL like this:

      http://myapp/city/Bedrock?fields=residents(firstName),residents(contactInfo(email))

      First we are asking for just the residents, and we specify only the firstName within the residents array.  Then we ask for the residents, and we specify only the contactInfo within the residents and only the email within the contactInfo.

      Even with the explanations above, you may find working with JSON objects frustrating.  Here are two great, free, tools that can help:

      JSONLint– paste any JSON you have into this page, and it will tell you whether or not it is valid JSON.  If it is invalid JSON, it will let you know at what line it becomes invalid.

      JSON Pretty Print– it’s sometimes hard to format JSON so that the indents are all correct.  Also, having correct indents will make it easier to interpret the JSON.  Whenever you have a JSON object that is not indented correctly, you can paste it into this page and it will format it for you.

      Over the last several weeks, we’ve covered everything you need to know to be successful with API testing.  If you have any unanswered questions, please mention them in the comments section of this post.  Next week, we’ll begin a discussion of application security!

      What API Tests to Automate, and When to Automate Them

      Last week, we talked about running API tests from the command line using Newman, and how to add Newman into your Continuous Integration system so that your API tests run automatically.  But knowing how to run your tests isn’t that helpful unless you make good choices about what tests to run, and when to run them.  Let’s first think about what to automate.

      Let’s imagine that we have an API with these requests:

      POST user
      GET user/{userId}
      PUT user/{userId}
      DELETE user/{userId}

      The first category of tests we will want to have are the simple Happy Path requests.  For example:

      POST a new user and verify that we get a 200 response
      GET the user and verify that we get a 200 response, and that the correct user is returned
      PUT an update to the user and verify that we get a 200 response
      DELETE the user, and verify that we get a 200 response

      The next category of tests we want to have are some simple negative requests.  For example:

      POST a new user with a missing required field and verify that we get a 400 response
      GET a user with an id that doesn’t exist and verify that we get a 404 response
      PUT an update to the user with an invalid field and verify that we get a 400 response
      DELETE a user with an id that doesn’t exist and verify that we get a 404 response

      You’ll want to test 400 and 404 responses on every request that has them.  It’s not necessary to test every single trigger of a 400 (for example, you don’t need automated tests for every single missing required field), but you will want to have at least one test where a required field is missing.
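      As a concrete example, the assertion on the “GET a user with an id that doesn’t exist” request could be as simple as this sketch in Postman’s test syntax:

      pm.test("Requesting a user id that doesn't exist returns a 404", function () {
          pm.response.to.have.status(404);
      });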

      The third category of tests we want to have are more Happy Path requests, with variations. For example:

      POST a new user with only the required fields, rather than all fields, and verify that we get a 200 response
      GET a user with query parameters, such as user/{userId}?fields=firstName,lastName, and verify that we get a 200 response, and the appropriate values in the response
      PUT a user where one non-required field is replaced with null, and one field that is currently null is replaced with a value, and verify that we get a 200 response

      It’s worth noting that we might not want to test every possible combination in this category.  For example, if our GET request allows us to filter by five different values: firstName, lastName, username, email, and city, there are dozens of possibilities of what you could filter on.  We don’t want to automate every single combination; just enough to show that each filter is working correctly, and that some combinations are working as well.

      Finally, we have the category of security tests.  For example, if each request needs an authorization token to run, verify that we get an appropriate error when the token is missing or invalid:

      POST a new user without an authorization token, and verify that we get a 401 response
      GET a user with an invalid token, and verify that we get a 403 response
      PUT an update to the user with an invalid token, and verify that we get a 403 response
      DELETE a user without an authorization token, and verify that we get a 401 response

      For each request, you’ll want to test for both a 401 response (an unauthenticated user’s request) and a 403 response (an unauthorized user’s request).

      There may be many more tests than the ones that have been listed here that are appropriate for an API you are testing.  But these examples serve to get you thinking about the four different types of tests.

      Now let’s take a look at how we might use these four types in automation!  First, we want to have some Smoke Tests that will run very quickly when code is deployed from one environment to another, up the chain to the Production environment.  What we want to do with these tests is simply verify that our endpoints can be reached.  So all we need to do is run the first category of tests: the simple Happy Path requests.  In our example API, we only have four request types, so we only need to run four tests.  This will only take a matter of seconds.

      We’d also like to have some tests that run whenever new code is checked in.  We want to make sure that the new code doesn’t break any existing functionality.  For this scenario, I recommend doing the first two categories of tests: the simple Happy Path requests, and the simple negative requests.  We could have one positive and one or two negative tests for each request, and this will probably be enough to provide accurate feedback to developers when they are checking in their code.  In our example API, this amounts to no more than twelve tests, so our developers will be able to get feedback in about one minute.

      Finally, it’s also great to have a full regression suite that runs nightly.  This suite can take a little longer, because no one is waiting for it to run.  I like to include tests of all four types in the suite, or sometimes I create two nightly regression suites: one that has the first three types of tests, and one that has just the security tests.  Even if you have a hundred tests, you can probably run your full regression suite in just a few minutes, because API tests run so quickly.

      Once you have your Smoke, Build, and Regression tests created and set to run automatically, you can relax in the knowledge that if something goes wrong with your API, you’ll know it.  This will free you up to do more exploratory testing!

      Next week we’ll take a look at API request formats: JSON structure and query parameters!

      Automating Your API Tests

      Running a test collection in Postman is a great way to test your APIs quickly.  But an even faster way to run your tests is to run them automatically!  In order to automate your Postman tests, we first need to learn how to run them from the command line.

      Newman is the command-line running tool for Postman.  It is a NodeJS module, and it is easy to install with npm (the Node Package Manager).  If you do not already have Node installed, you can easily install it by going to https://nodejs.org.  Node contains npm, so once Node is installed, it is extremely easy to install Newman.  Simply open a command line window and type:

      npm install -g newman

      The “-g” in this command is telling npm to install newman globally on your computer.  This means that it will be easy to run Newman from the command line no matter what folder you are in.

      Now that Newman is installed, let’s try running a collection!  We will use the Pet Store collection that we created in this blog post and updated with assertions in this post.  In order to run Newman, you will need to export your collection and your environment as .json files.

      Find your collection in the left panel of Postman, and click on the three dots next to it.  Choose “Export”, then choose Collection v.2.1 as your export option, then click on the “Export” button.  You’ll be prompted to choose a location for your collection.  Choose any location you like, but make a note of it, because you’ll need to use that location in your Newman command.  I have chosen to export my collection to my Desktop.

      Next, click on the gear button in the top right of the screen to open the Environments window.  Next to the Pet Store environment, click on the down arrow.  Save the exported environment to the same location where you saved your collection.

      Now we are ready to run the Newman command.  In your command window, type:

      newman run <pathToYourFile> -e <pathToYourEnvironment>

      Obviously, you will want to replace the <pathToYourFile> and the <pathToYourEnvironment> with the correct paths.  Since I exported my .json files to my Desktop, this is what my command looks like:

      newman run "Desktop/Pet Store.postman_collection.json" -e "Desktop/Pet Store.postman_environment.json"

      The -e in the command specifies what environment should be used with the collection.

      An alternative way of pointing to the .json files is to cd directly into the location of the files.  If I ran cd Desktop, then my command would look like:

      newman run "Pet Store.postman_collection.json" -e "Pet Store.postman_environment.json"

      Also note that the quote marks are not necessary if your file name doesn’t have any spaces.  If I were to rename my files so there was no space between Pet and Store, I could run my collection like this:

      newman run PetStore.postman_collection.json -e PetStore.postman_environment.json

      And, once you have exported your .json files, you can really name them whatever you want.  As long as you call them correctly in the Newman command, they will run.

      When you make the call to Newman, you will see your tests run in the command line, and you will wind up with a little chart that shows the test results.

      If you are running your tests against your company’s test environment, you may find that running Newman returns a security error.  This may be because your test environment has a self-signed certificate.  Newman is set by default to have strict SSL, which means that it is looking for a valid SSL certificate.  You can relax this setting by sending your Newman command with a -k option, like this:

      newman run "Desktop/Pet Store.postman_collection.json" -e "Desktop/Pet Store.postman_environment.json" -k

      Another handy option is the -n option.  This specifies the number of times you want the Newman collection to run.  If you’d like to put some load on your application, you could run:

      newman run "Desktop/Pet Store.postman_collection.json" -e "Desktop/Pet Store.postman_environment.json" -n 1000

      This will run your collection 1000 times.  In the results chart, you can see what the average response time was for your requests.  While this is not exactly a load test, since the requests are running one at a time, it will still give you a general idea of a typical response time.

      Once you have your Newman command working, you can set it to run in a cron job or a PowerShell script, put the command into a continuous integration tool such as Jenkins or Travis CI, or run it in a Docker container. There is also an npm module for reporting Newman results with TeamCity.
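      For example, a nightly cron entry could run the collection at 2:00 AM (the paths here are just examples; point them at wherever you exported your .json files):

      0 2 * * * newman run "/home/tester/Pet Store.postman_collection.json" -e "/home/tester/Pet Store.postman_environment.json"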

      Whatever tool you choose, you will have a way to run your Postman API tests automatically!  Next week, we’ll talk about which API tests to automate, and when to run them.

      Organizing Your API Tests

      One of the things that makes me happy about API testing is how easy it is to organize tests and environment variables.  I love having test suites ready at a moment’s notice; to run at the push of a button when regression testing is needed, or to run automatically as part of continuous integration.

      This week I’ll be talking about some organizational patterns you can use for your API tests.  I’ll be discussing them in the context of Postman, but the concepts will be similar no matter what API testing platform you are using.

      First, let’s discuss environments.  As you recall from last week’s post, an environment is a collection of variables in Postman.  There are two different ways I like to set up my Postman environments.  In order to explain them, I’ll use some scenarios.  For both scenarios, let’s assume that I have an application that begins its deployment lifecycle in Development, then moves to QA, then Staging, and then Production.

      In my first scenario, I have an API that gets and updates information about all the users on my website.  In each product environment (Dev, QA, Staging, Prod), the test users will be different.  They’ll have different IDs, and different first and last names.  The URLs for the product environments will each be different as well. However, my tests will be exactly the same; in each product environment, I’ll want to GET a user, and PUT a user update.

      So, I will create four different Postman environments:
      Users- Dev
      Users- QA
      Users- Staging
      Users- Prod

      In each of my four environments, I’ll have these variables:
      environmentURL
      userId
      firstName
      lastName

      Then my test collection will reference those variables.  For example, I could have a test request like this:

      GET https://{{environmentURL}}/users/{{userId}}

      Which environment URL is called and which userId is used will depend on which Postman environment I am using.  With this strategy, it’s easy for me to switch from the Dev environment to the QA environment, or any other environment.  All I have to do is change the Postman environment setting and run the same test again.
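      Because the test users’ data lives in the Postman environment as well, the test script can reference the same variables, which means one script works in every product environment.  Here’s a sketch using Postman’s scripting API, assuming the response body contains the user’s firstName and lastName:

      pm.test("The correct test user is returned", function () {
          const user = pm.response.json();
          pm.expect(user.firstName).to.eql(pm.environment.get("firstName"));
          pm.expect(user.lastName).to.eql(pm.environment.get("lastName"));
      });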

      The second scenario I use is for situations like this one: I have a function that delivers an email.  The function uses the same URL regardless of the product environment. I like to pass in a timestamp variable, and that stays the same (it shows the current time) regardless of what environment I am using.  But I like to change the language of the email depending on what product environment I am in.

      In this case, I am creating only one Postman environment:
      Email Test

      My one Postman environment has this variable:
      timestamp

      My test collection, however, has one test for each product environment.  So I have:
      Send Email- Dev
      Send Email- QA
      Send Email- Staging
      Send Email- Production

      Each request includes the timestamp variable, but has a variation in what is sent in the body of the email. For the Dev environment, I use a request that contains “Dev” in the message body, for the QA environment, I use a request that contains “QA” in the message body, and so on.

      When deciding which of these two environment strategies to use, consider the following:

      • what stays the same from product environment to product environment?
      • what changes from product environment to product environment?
      If there are many variables that change from product environment to product environment, you may want to consider setting up multiple Postman environments, as in my first scenario.  If there are only one or two things that change from environment to environment, and if the URL doesn’t change, you may want to use the second scenario, which has just one Postman environment, but different test requests for each product environment.
      Now let’s talk about ways to organize our tests.  First, let’s think about test collections. The most obvious way to organize collections is by API.  If you have more than one API in your application, you can create one collection for each of the APIs.  You can also create collections by test function.  For example, if I have a Users API, and I want to run a full regression suite, a nightly automated test, and a deployment smoke test, I could create three collections, like this:
      Users- Full Regression
      Users- Nightly Tests
      Users- Deployment Smoke
      Finally, let’s think about test folders.  Postman is so flexible in this area, in that you can use any number of folders in a collection, and you can also use sub-folders.  Here are some suggestions for how you can organize your tests into folders:
      By type of request: all your POST requests in one folder; all your GET requests in another
      By endpoint: GET myapp/users in one folder; GET myapp/user/userId in another
      By result expected: GET myapp/users Happy Path requests in one folder; GET myapp/users bad requests in another folder
      By feature: GET myapp/users with a Sort function in one folder, and GET myapp/users with a Filter function in another
      As with all organizing efforts, the purpose of organizing your tests and environments is to ensure that they can be used as efficiently as possible.  By looking at the types of tests you will be running, and what the variations are in the environment where you will be running them, you can organize the Postman environments, collections, and folders in such a way that you have all the tests you need immediately at your fingertips.
      Next week, we’ll be talking about running collections from the command line, and setting tests to run automatically!