UI Unit Testing

Are you confused by this title? We generally think of unit testing as something that’s done deep in the code, and we think of UI testing as something that’s done with the browser once an application has been deployed. But recently I learned that there’s such a thing as UI unit testing, and I’m really excited about it!

Wikipedia defines unit testing in this way: “a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use”. We can think of individual UI components as units of source code as well. Two popular front-end frameworks make it easy to do UI unit testing: React and Angular. In this post, I’ll walk through a simple example for each.

Fortunately the good people who write documentation for React and Angular have made it really easy to get set up with a new project! You can find the steps for setting up a React project here: https://create-react-app.dev/docs/getting-started/, and the steps for setting up an Angular project here: https://angular.io/guide/setup-local. I’ll be walking you through both sets of steps in this post. Important note: the React and Angular tooling both require Node, so you’ll need to have Node installed.

React:
From the command line, navigate to the folder where you would like to create your React project. Then type the following:
npx create-react-app my-react-app
cd my-react-app

This will create a new React project called my-react-app, and then change directories to that project.
Next you’ll open that my-react-app folder in your favorite code editor (I really like VS Code).
Open up the src folder, then the App.js file. In that file you can see some JSX markup (which looks like HTML) that includes a link to a React tutorial with the text “Learn React”.
Now let’s take a look at the App.test.js file. This is our test file, and it has one test, which checks to see that the “Learn React” link is present.
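
Depending on your version of create-react-app, the generated test file will look something like this (this is roughly what it contained at the time of writing):

import React from 'react';
import { render } from '@testing-library/react';
import App from './App';

test('renders learn react link', () => {
    const { getByText } = render(<App />);
    const linkElement = getByText(/learn react/i);
    expect(linkElement).toBeInTheDocument();
});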

Let’s try running the test! In the command line, type npm test and press Enter; this starts the test runner in watch mode. Then press a to run all the tests. The test should run and pass.

If you open the App.js file and change the link text on line 19 to something else, like Hello World!, and save, you’ll see the test run again and fail, because the link text no longer matches. If you open the App.test.js file, change the getByText argument on line 7 from /learn react/ to /Hello World!/, and save, you’ll see the test run again, and this time it will pass.

Angular:
To create an Angular app, you first need to install Angular. In the command line, navigate to the folder where you’d like to create your new project, and type npm install -g @angular/cli.
Now you can create your new app by typing ng new my-angular-app. Then change directories to your new app by typing cd my-angular-app.

Let’s take a look at the app you created by opening the my-angular-app folder in your favorite code editor. Open the src folder, then the app folder, and take a look at the app.component.ts file: it creates a web page with the title ‘my-angular-app’. Now let’s take a look at the test file, app.component.spec.ts. It has three tests: one checks that the app has been created, one checks that it has my-angular-app as the title, and one checks that the title is rendered on the page.
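
For example, the title test in the generated spec file looks something like this (trimmed; the exact contents vary by Angular version, and TestBed and AppComponent come from the imports at the top of the file):

it(`should have as title 'my-angular-app'`, () => {
    const fixture = TestBed.createComponent(AppComponent);
    const app = fixture.componentInstance;
    expect(app.title).toEqual('my-angular-app');
});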

To run the tests, simply type ng test in the command line. This will compile the code and launch a Karma browser window, which is where the tests are run. The tests should run and pass.

Let’s make a change to the app: in the app.component.ts file, change the title on line 9 to Hello Again!. The Karma tests will run again, and now you will see that two tests are failing, because we changed the title. We can update our tests by going to the app.component.spec.ts file and changing lines 23, 26, and 33 to have Hello Again! instead of my-angular-app. Save, and the tests should run again and pass.

Did you notice something interesting about these tests? They didn’t actually require the application to be deployed! The Angular application did launch a local browser instance, but it wasn’t deployed anywhere else. And the React application didn’t launch a browser at all when it ran the tests. So there you have it! It’s possible to run unit tests on the UI.

Why We’ll Always Need Software Testers

Are you familiar with the Modern Testing Principles, created by Alan Page and Brent Jensen? I first heard of the principles about a year ago, and I was really excited about the ideas they contained. But I was uncomfortable with Principle #7, which is “We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.” Eliminating a testing specialist? This seemed wrong to me! I thought about it over several months and realized that yes, it’s possible for a team to develop such a good testing mindset and skillset that a dedicated testing expert wouldn’t be needed. But don’t hang up your QA hat just yet! Here are three reasons we will always need software testers.

Reason One: Teams Change

Did you hear about the software team that was so good that it never changed personnel? Nope, neither did I. Life brings about changes, and even a perfect team doesn’t last forever. A member could retire, take another job, or be moved to another team. New members could join the team. I’ve heard that every time a team changes even by one person, that team is brand new. This means that there will be new opportunities to coach the team to build in quality.

Even if the team members don’t change, there can be changes that are challenging for a team, such as a new technical problem to solve, a big push towards a deadline, or a sudden increase in the number of end users. All of these changes might result in new testing strategies, which means the team will need a software tester’s expertise.

Reason Two: Times Change

When I started my first software testing job, I had never heard of the Agile model of software development. But soon it became common knowledge, and now practically no one is using the old Waterfall model. Similarly, there was a time when I didn’t know what CI/CD was, and now it’s a goal for most teams. There are pushes to shift testing to the left (unit and integration tests running against a development build), and to shift testing to the right (testing in production).

Some of these practices may prove to be long-lasting, and some will be replaced by new ideas. Whenever a new idea emerges, new ways of thinking and behaving are necessary. Software testing experts will be needed to determine the best ways to adapt to these new strategies.

Reason Three: Tech Changes

Remember when Perl was a popular scripting language? Or when Pascal was the language of choice? The tools and languages that are in use today won’t be in use forever. Someone will come along and create a newer, more efficient tool or language that will replace the previous one. For example, Cypress recently emerged as an alternative to Selenium WebDriver, which has been the tool of choice for years. And companies are frequently moving to cloud providers such as Amazon Web Services or Microsoft Azure rather than maintaining their own servers.

When a team adopts a new technology, there will always be some uncertainty. As a feature is developed, there may be changes made to the configuration or coding strategy, so it may be unclear at first how to design test automation. It can take a team a while to adapt and learn what’s best. A testing expert will be very valuable in this situation.

Change happens, and teams must adapt to change, so it is helpful to have a team member who understands the best way to write a test plan, conduct exploratory testing, or evaluate risk. So don’t go looking for a software development job! Your testing expertise is still needed.

Book Review: Performance Testing: A Practical Guide and Approach

It’s book review time once again, and this month I read “Performance Testing: A Practical Guide and Approach” by Albert Witteveen. I’ve been looking for a resource on performance testing for a long time, because I’ve found that most articles and presentations on performance testing either assume a lot of prior knowledge, or focus on using a specific tool without explaining the reasoning behind what is being tested.

This book was definitely what I needed! It explains very clearly why we should be doing performance testing, what kinds of tests we should run, how we should set the tests up, how to run them, and how to report on the results.

Here are some of the things I learned in this book:

Performance testing simply means testing a system to see if it performs its functions within given constraints such as processing time and throughput rate.

Load testing is measuring the performance of an application when we apply a load that is the same as what we would expect in the production environment.

Stress testing is finding out at what point an application will break when we apply an increasing load, and determining what will happen when it breaks.

Endurance testing is about testing with a load over an extended period of time. This can be helpful in discovering problems such as memory leaks.

Virtual users refers to the number of simulated users we are using to test the system.

Performance tests need to be planned thoughtfully; it’s not just a matter of throwing load on every web page or API call. The tester needs to think about where the potential risks of load are and design tests to investigate those risks.

How to create a load test:

  • Generally load tests are created by recording an activity in order to create a test script. Next, you’ll need to add a data set for use in testing.
  • You should run your load test first with just one user. You’ll need to build in some kind of check in your script to make sure that the script is working as you are expecting it to. For example, if you are load testing the login page, you’ll want to see that the user was able to log in successfully instead of getting an error message.
  • Once your script is working well with one user, try it with three users, and make sure that it’s using unique values from your test data instead of using the same values three times.
  • When you have validated that your script is working correctly, you can execute your load test by adding the number of virtual users and ramp-up time that are appropriate to what you would expect your application to be able to handle in production.
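
To make these steps concrete, here’s a minimal sketch of a login load test script in Node (version 18 or later, which has fetch built in). The login URL and test users are hypothetical placeholders, and a real load testing tool would add ramp-up, virtual user scheduling, and reporting on top of this:

const users = [
    { username: 'testuser1', password: 'password1' },
    { username: 'testuser2', password: 'password2' },
    { username: 'testuser3', password: 'password3' },
]

const loginOnce = async (user) => {
    const start = Date.now()
    // https://my-app.example.com is a placeholder; point this at your own application
    const response = await fetch('https://my-app.example.com/api/login', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(user),
    })
    // The check: verify that the login actually succeeded,
    // not just that some response came back
    if (!response.ok) {
        console.error(`Login failed for ${user.username}: ${response.status}`)
    } else {
        console.log(`Login for ${user.username} took ${Date.now() - start} ms`)
    }
}

// Each simulated user runs concurrently, with unique values from the test data
Promise.all(users.map(loginOnce))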

It’s very important to monitor things such as CPU usage, memory usage, database activities, and network activity when you are running a load test. Just measuring response times isn’t going to give you the whole picture.

It’s also important to know what kind of queuing your application is using so you can locate bottlenecks in performance. The author uses an easy-to-understand analogy with a supermarket:

  • A small market with just one checkout lane is like a system with a single CPU. Every customer has to go through this queue, and it’s first come, first served.
  • A larger market with more than one checkout lane is like a system with multiple web servers. Every customer (or in the case of the web servers, the load balancer) picks one checkout lane, and waits to go through that lane.
  • A deli where the customer takes a number and waits their turn is like Web server software where multiple workers can process the request. The customer waits their turn and can be picked up by any one of the worker processes.

Load testing tools themselves generate load when they are running! For that reason, it’s best to keep test scripts simple.

This is just a small sampling of what I learned from this book. It’s a short book, but filled with great explanations and advice. One thing worth mentioning is that there are a number of grammatical errors in the book, and even a few chapters where the final sentence is missing its last words. This makes reading a little slower, since the reader sometimes has to guess at what was meant.

But in spite of these issues, it’s a great book for getting started with performance testing! I recommend it to anyone who would like to understand performance testing or run load tests on their application.

Adventures in Node: Async/Await

As I’ve mentioned in previous posts, I’ve been taking an awesome course on Node for the last several months. This course has really helped me learn JavaScript concepts that I knew about but didn’t understand.

Last month I explained promises. The basic idea of promises is that Node functions execute asynchronously, so promises act as placeholders that wait for either a resolve or a reject response.

Here’s an example of a function with a promise:

const add = (a, b) => {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(a+b)
        }, 2000)
    })
}

In this function, we’re simply adding two numbers. The setTimeout for two seconds is added to make the function take a little longer, as a real function would.

Here’s what calling the function might look like:

add(1, 2).then((sum) => {
    console.log(sum)
})

The add function is called, and then we use the then command to log the result. Using then() means that we’re waiting for the promise to be resolved before we go on.

But what if we needed to call the function a second time? Let’s say we wanted to add two numbers, then we wanted to take that sum and add a third number to it. For this we’d need to do promise chaining. Here’s an example of promise chaining:

add(1, 2).then((sum) => {
    add(sum, 3).then((sum2) => {
        console.log(sum2)
    })
})

The add function is called, and we use the then() command with the sum that is returned. Then we call the function again, and use the then() command once again with the new sum that is returned. Finally, we log the new sum.

This works just fine, but imagine if we had to chain several function calls together. It would start to get pretty tricky with all the thens, curly braces, and indenting. And this is why async/await was invented!

With async/await, you don’t have to chain promises together. You create a new async function call, and then you use the await command when you are calling a function with a promise.

Here’s what the chained promise call would look like if we used async/await instead:

const getSum = async () => {
    const sum = await add(1, 2)
    const sum2 = await add(sum, 3)
    console.log(sum2)
}

getSum()

We’re creating a new async function called getSum with the command const getSum = async () =>. In that function, we first call the add function with an await and set the result to the variable sum. Then we call add again with an await and set the result to the variable sum2. Finally, we log the value of sum2.

Now that the async function has been created, we’re calling it with the getSum() command.

It’s pretty clear that this code is easier to read with async/await! Keep in mind that promises are still being used here; the add() function still returns a promise. But async/await provides a way to call a promise function without having to add in a then() statement.
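
One thing the examples above don’t show: if the promise rejects, the await will throw an error. Instead of chaining a .catch(), you can wrap the awaits in an ordinary try/catch block. Here’s a small sketch using the same add function (getSumSafely is just a name I made up):

const getSumSafely = async () => {
    try {
        const sum = await add(1, 2)
        console.log(sum)
    } catch (error) {
        console.log('Error!', error)
    }
}

getSumSafely()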

Now that I understand about async/await, I plan to use it whenever I am doing development or writing test code in Node. I hope you’ll take the time to try out these examples for yourself to see how they work!

Tear Down Your Automation Silos!

On many software teams, developers are responsible for writing unit and component tests, and software testers are responsible for writing API and UI tests.  It’s great that teams have so much test coverage, but problems can arise when test automation is siloed in this way.  For one thing, developers and software testers often don’t know how each other’s tests work, which means if a developer makes a change that breaks a test, they don’t know how to fix it.  And if only one person on the team knows how the deployment smoke tests work, then that person will need to be on call for every single deployment.  

I recommend that every developer and software tester on the team know how to write and maintain every type of test automation for their product.  Here are three good reasons to break down automation silos: 

No more test overlap: If automated tests are siloed between developers and testers, it’s possible that there is work that is duplicated.  Why have several UI tests that exercise business logic when there are already integration tests that do this?

No more bottlenecks: Testers are often required to create and maintain all the UI automation while at the same time doing all the testing.  If a developer pushes a change that breaks a UI test, it’s often up to the tester to figure out what’s wrong.  If developers know how the UI automation works, they can fix any tests they break, and even add new tests when needed, allowing testers to finish testing new features.  

Knowledge sharing: Software testers have a very special skill set; they can look at application features and think of ways to test the limits of those features.  By learning from testers, developers will become better at testing their own code.  Developers have a very special skill set as well; they are very familiar with good coding patterns.  Many software testers arrive at QA from diverse backgrounds, and don’t always have formal training in coding.  They can benefit from learning clean coding skills from developers.  

By breaking down automation silos and taking responsibility for test automation together, software developers and software testers can benefit from and help each other, speeding up development and improving the quality of the application.  

Book Review: Unit Testing Principles, Practices, and Patterns

It’s book review time once again, and this month I read Unit Testing Principles, Practices, and Patterns by Vladimir Khorikov.  I thought that a book about unit testing would be pretty dry, but it was really interesting!

Since I am not a developer I don’t usually write unit tests, although I have done so occasionally when a developer asks me to help.  Being a good tester, I knew to do things like mock out dependencies and keep my tests idempotent, but through this book I discovered lots of things I didn’t know about unit testing.

The author has a background in mathematics, and it shows.  He is very systematic in his process of explaining good unit test patterns, and each chapter builds upon the previous one.  Here are some of the important things I learned from this book:

  • There are two schools of thought about unit testing: the classical school and the London school.  In the classical school, unit tests are not always limited to a single class.  The tests are more concerned with units of behavior.  Dependencies, such as other classes, don’t need to be mocked if they aren’t shared.  In the London school, unit tests are limited to a single class, and calls to other classes are always mocked, even if they are part of the same code base and not shared with any other code.
  • Unit tests should always follow this pattern:
    • Arrange- where the variables, mocks, and system under test (SUT) are set up
    • Act- where something is done to the SUT
    • Assert- where we assert that the result is what we expect
  • The Act section of the unit test should only have one line of code.  If it has more than one line of code, that probably means that we are testing more than one thing at a time (see the sketch after this list for an example).
  • A good unit test has the following characteristics:
    • It’s protected against regressions- the test catches bugs when a change to the code breaks existing functionality
    • It’s resistant to refactoring- refactoring the code shouldn’t break the test
    • It provides fast feedback
    • It’s maintainable- it’s easy for someone to look at the test, see what it’s supposed to do, and make changes to it when needed
  • Mocks and stubs are both types of test doubles: faked dependencies in tests which are used instead of calling the real dependencies in order to keep the tests fast, resilient, and focused only on the code being tested.
    • Mocks emulate outgoing interactions, such as putting a message on a service bus
    • Stubs emulate incoming interactions, such as receiving data from a database
  • Test doubles should only be used with inter-system communications: calls to something outside the code, like a shared database or an email server.  For intra-system communications, where a datastore or class is solely owned by the code, the call shouldn’t be mocked or stubbed.
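
To make these ideas concrete, here’s a minimal Jest-style sketch of my own (it’s not from the book) showing the Arrange-Act-Assert structure, with a stub supplying incoming data and a mock recording an outgoing interaction.  The sendWelcomeEmail function and its dependencies are hypothetical stand-ins:

function sendWelcomeEmail(database, emailGateway, customerId) {
    const customer = database.getCustomer(customerId)
    emailGateway.sendEmail(customer.email)
}

test('welcome email is sent to the customer address', () => {
    // Arrange: a stub supplies incoming data; a mock records outgoing interactions
    const database = { getCustomer: jest.fn().mockReturnValue({ email: 'sue@example.com' }) }
    const emailGateway = { sendEmail: jest.fn() }

    // Act: exactly one line that exercises the behavior under test
    sendWelcomeEmail(database, emailGateway, 'customer-1')

    // Assert: verify the outgoing interaction happened as expected
    expect(emailGateway.sendEmail).toHaveBeenCalledWith('sue@example.com')
})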

The most interesting thing I learned from the book was that it’s really hard to write good unit tests when the code is bad.  The author provides lots of examples of how to refactor code in order to make tests more robust.  These practices also result in better code!  Reading through the examples, I now understand how to better organize my code by separating it into two groups: code that makes a decision, such as a function that adds two numbers, and code that acts upon a decision, such as code that writes a sum to a database.

The author doesn’t just write about unit tests in this book; he also describes how to write integration tests, and provides examples of writing tests for interacting with databases.  

I learned much more than I was expecting to from this book!  Software test engineers will find many helpful ideas for all types of automation code in this book. Software developers will not only improve their unit test writing, but also their coding skills.  I recommend it to anyone who would like to improve their test automation.

Managing Test Data

It’s never fun to start your work day and discover that some or all of your nightly automated tests failed. It’s especially frustrating when you discover that the reason why your tests failed was because someone changed your test data.

Test data issues are a very common source of aggravation for software testers. Whether you are testing manually or running automation, if you think your data is set the way you want it, and it has been changed, you will waste time trying to figure out why your test results aren’t right.

Here are some of the common problems with test data:

Users overwrite each others’ data
I was on a team that had an API I’ll call API 1. I wrote several automated tests for this API using a test user. API 1 was moved to another team, and my team started working on API 2. I wrote several automated tests for API 2 as well. Unfortunately, I used the same test user for API 2, and this test user needed to have a different email address for API 2 than it did for API 1. This meant that whenever automated tests were run on API 1, they changed the test user’s email address, and then my API 2 tests would fail.

Configuration is changed by another team
When teams need to share a test environment, changes to the environment configuration made by one team can impact another team. This is especially common when using feature toggles. One team might have test automation set up with the assumption that a feature toggle will be on, but another team might have automation set up with the expectation that the feature toggle is off.

Data is deleted or changed by a database refresh
Companies that use sensitive data often need to periodically scramble or overwrite that data to make sure that no one is testing with real customer information. When this happens, test users that have been set up for automation or manual testing can be renamed, changed, or deleted, causing tests to fail.

Data becomes stale
Sometimes data that is valid at one point in time becomes invalid as time passes. A great example of this is a calendar date. If an automated test needs a date in the future, the test writer might choose a date a year or two from now. Unfortunately, in a year or two, that future date will become a past date, and then the test will fail.

What can we do about these problems? Here are some suggestions:

Use Docker
Using a containerized environment like Docker means that you have complete control over your test environment, including your application configuration and your database. To run your tests, you spin up a container, run the tests, and destroy the container when the tests have completed.

Create a fresh database for testing
It’s possible to create a brand-new database for the sole purpose of running your test automation. With SQL Server, this can be accomplished by creating a DACPAC (a data-tier application package). You can set your database schema, add in only the data that you need for testing, create the database, point your tests to that database, and destroy the database when you are finished.

Give each team their own test space
Even if teams have to share the same test environment, they might be able to divide their testing up by account. For example, if your application has several test companies, each team can get a different test company to use for testing. This is especially helpful when dealing with toggles; one team’s test company can have a feature toggled on while another team’s test company has that feature toggled off.

Give each team their own users
If you have a situation where all teams have to use the same test environment and the same test account, you can still assign each team a different set of test users. This way teams won’t accidentally overwrite one another’s data. You can give your users names specific to your team, such as “Sue GreenTeamUser”. 

Create new data each time you test
One great way to manage test data is to create the data you need at the beginning of the test. For example, if you need a customer for your test, you create the new customer at the beginning of your test suite, use that customer for your tests, and then delete the customer at the end of your tests. This ensures that your test data is always exactly the way you want it, and it doesn’t add bloat to the existing database.
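
Here’s a sketch of what this can look like in a Jest-style API test suite. The createCustomer, deleteCustomer, and placeOrder functions are hypothetical helpers you would write against your own application’s API:

let customerId

// Create the data this suite needs instead of relying on existing records
beforeAll(async () => {
    const customer = await createCustomer({ name: 'Automation Test Customer' })
    customerId = customer.id
})

// Clean up afterwards so the test data doesn't bloat the database
afterAll(async () => {
    await deleteCustomer(customerId)
})

test('customer can place an order', async () => {
    const order = await placeOrder(customerId, { item: 'widget' })
    expect(order.status).toBe('created')
})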

Use “today+1” for dates in the future
Rather than choosing an arbitrary date in the future, you can get today’s date, and then use an operation like DateAdd to add some interval, like a day, month, or year, to today’s date. This way your test date will always be in the future. 
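
JavaScript doesn’t have a built-in DateAdd, but a small helper like this one does the same job:

// Returns a new date the given number of days after the original
const addDays = (date, days) => {
    const result = new Date(date)
    result.setDate(result.getDate() + days)
    return result
}

const tomorrow = addDays(new Date(), 1)    // always one day in the future
const nextYear = addDays(new Date(), 365)  // always about a year in the future
console.log(tomorrow.toISOString().slice(0, 10))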

Working with test data can be very frustrating. But with some planning and strategy, you can ensure that your data will be correct whenever you run a test. 

Why We Test

Most software testers, when asked why they enjoy testing, will say things like:

  • I like to think about all the ways I can test new features
  • It’s fun to come up with ways to break the software
  • I like the challenge of learning how the different parts of an application work

I certainly agree with all of those statements!  Testing software is fun, creative, and challenging. 

But this is not WHY we are testing.  We test to find out things about an application in order to ensure that our end users have a good experience with it.  Software is built in order to be used for something; if it doesn’t work well or correctly, it is not accomplishing its purpose.  

For example:

  • If a mobile app won’t load quickly, users will stop using it or delete the app from their phone
  • If a financial app has a security breach, the company will lose customers and may even be sued for damages
  • If an online store has a bug that keeps shoppers from completing their purchases, the company will lose out on sales

There are even documented cases of people losing their lives because of problems with software!  

So while it’s fun to find bugs, it’s also critically important.  And it’s even more important to remember that the true test of software is how it behaves in production with real users.  Often testers keep their focus on their test environment, because that’s where they have the most control over the software under test, but it’s crucial to test in production as well.  

I have seen situations where testers only tested a new feature in their test environment, and then were totally surprised when users reported that the feature didn’t work at all in production!  This was because there were environment variables that were hard-coded to match the test environment.  The feature was released to production, and the testers didn’t bother to check it.

Having things “work” in production is only one facet of quality, however.  We also need to make sure that pages load within a reasonable amount of time, that data is saved correctly, and that the system behaves well under times of high use.

Take a moment to think about the application you test.  In production:

  • Is it usable?
  • Is it reliable?
  • Is the user’s data secure?
  • Do the pages load quickly?
  • Are API response times quick?
  • Do you monitor production use, and are you alerted automatically if there’s a problem?
  • Can you search your application’s logs for errors?

Saying “But it worked in the test environment” is the tester’s equivalent of the developer saying “But it worked on my machine”.  It’s fun to test and find bugs.  It’s fun to check items off in test plans.  It’s fun to see test automation run and pass. But none of those things matter if your end user has a poor experience with your application.  

Adventures in Node: Promises

Have you ever written an automated UI test that uses JavaScript, and when you went to assert on a response, you got Promise { <pending> } instead of what you were expecting?  This really frustrated me when I first encountered it!  A developer I was working with explained that this is because JavaScript processes commands asynchronously through the use of promises.  I sort of understood what he meant, so I tried to work with it as best I could, but I didn’t really get it.

As I mentioned in this post, I’ve been taking a really awesome course on Node.js.  It’s much more extensive than any programming language course I’ve ever taken, even the ones I took in college.  So I’m starting to understand Node concepts more clearly, and one of those concepts is promises!  In this post I’ll explain why JavaScript and Node need promises and show an example of how they work.

JavaScript needs promises because it is a single-threaded language, meaning it can only do one thing at a time.  If we had a program where we needed to do three things, such as make an http request, alphabetize a list, and update a record in a database, we wouldn’t want to wait for each of those tasks to finish before starting the next one, because our program would be very slow!  So JavaScript is designed to be asynchronous: it can start a task, and then while it’s waiting for that task to complete, it can start the next task.

Our program with three things might actually run like this:
start the http request
start alphabetizing the list
start updating the record in the database
finish alphabetizing the list
finish updating the record in the database
finish the http request

The way that JavaScript and Node manage this is through the use of promises.  Let’s take a look at a promise:

const sumChecker = new Promise((resolve, reject) => {
    if (a+b==c) {
        resolve('You are correct!')
    }
    else {
        reject('Sorry, your math is wrong.')
    }
})


This sumChecker constant is a promise.  It’s going to have two possible results: resolve and reject.  If the sum is correct, the promise resolves, and if it’s incorrect, it rejects.  All promises behave in this way; there will be an option to resolve the promise and an option to reject the promise.

When the promise runs, either resolve or reject will be called; it can’t ever be both.  Let’s look at an example of handling the promise:

sumChecker.then((result) => {
    console.log('Success!', result)
}).catch((error) => {
    console.log('Error!', error)
})


The promise will settle with either the resolve value or the reject value.  If the promise resolves, the then() handler runs and logs the resolve message.  If it rejects, the catch() handler runs and logs the reject message.

You can try this for yourself if you have Node installed!  Simply copy the promise and the call and paste them into your favorite code editor.  Then at the beginning of the file, add these lines:

var a = 1
var b = 2
var c = 3


Save the file with the name myfile.js, navigate in the command line to the file’s location, and run the file with the command node myfile.js.  You should see this response: Success! You are correct!

If you make a change to the c variable and set it to 4, save and run the command again, you’ll see this response: Error! Sorry, your math is wrong.

Let’s put a log statement in between the promise and the call to the promise that looks like this: console.log(sumChecker), so we can see the state of sumChecker before we’ve called it. If we change the value of c back to 3 so we’ll get a positive result, save the file, and run the program with node myfile.js now, we’ll get the result Promise { 'You are correct!' } in addition to the response we got earlier.  That seems easy!  But the reason why we can get the promise resolved so quickly is because the sumChecker promise executes really fast.  Let’s see what happens if we make sumChecker work more slowly, like a real promise would.

Update the sumChecker promise to look like this:

const sumChecker = new Promise((resolve, reject) => {
    if (a+b==c) {
        setTimeout(() => {
            resolve('You are correct!')
        }, 2000)
    }
    else {
        reject('Sorry, your math is wrong.')
    }
})


All we’re doing here is adding a two-second timeout to the resolved promise.  Save the file, and run the program again with node myfile.js.  This time you’ll first get the result Promise { <pending> }, and after two seconds, you’ll get the result Success! You are correct!  

Now it should be clear why you get Promise { <pending> } when you are making JavaScript or Node calls.  It’s because the promise hasn’t completed yet.  This is why we use the .then() command.  We wait for the response to the call to come back, then we do something with the response.  If we’re writing a test, at that point we can assert on our result.

I hope you’ll take the time to try running this file with Node, because there’s nothing quite like doing hands-on work to generate understanding.  You can try changing the variables or any of the response messages to get a feel for how it’s working.  Here’s the final version of the file if you’d like to copy and paste it:

var a = 1
var b = 2
var c = 3

const sumChecker = new Promise((resolve, reject) => {
    if (a+b==c) {
        setTimeout(() => {
            resolve('You are correct!')
        }, 2000)
    }
    else {
        reject('Sorry, your math is wrong.')
    }
})

console.log(sumChecker)

sumChecker.then((result) => {
    console.log('Success!', result)
}).catch((error) => {
    console.log('Error!', error)
})


Enjoy your new-found understanding of promises!

Book Review: Perfect Software and Other Illusions About Testing

“Perfect Software and Other Illusions About Testing”, by Gerald Weinberg, is the best book on testing I have ever read.  It is a must-read for anyone who works with software: CEOs, CTOs, scrum masters, team leads, developers, product owners, business analysts, and software testers.

Before I get into why this book is so great, I’ll first acquaint you with the author.  Gerald “Jerry” Weinberg (1933-2018) was involved in the creation of software for over fifty years.  Early in his career he worked for NASA on Project Mercury, which created the spacecraft that allowed a human to orbit the Earth.  For decades he consulted with companies about building quality software, and over those years he gained a great deal of wisdom about software testing.  “Perfect Software”, which was published in 2014, seems to me to be the culmination of his years of experience.

The book is divided into several chapters, each of which looks at a particular aspect of software testing. Many examples are given from Jerry’s consulting experience, and each chapter closes with a summary and a list of common mistakes that companies make.  Rather than summarizing the lessons he imparts, I think it would be best to include Jerry’s own words here.  Here are some of my favorite quotes from the book:

“Before you even begin to test, ask yourself: What questions do I have about this product’s risks?  Will testing help answer these questions?”

“There are an infinite number of possible tests…Since we can’t test everything, any set of real tests is some kind of sample- a portion, piece, or segment that is in some way representative of a whole set of possible tests.”

“Knowing about the structure of the software you’re testing can help you to identify special cases, subtle features, and important ranges to try- all of which can help narrow the inference gap between what the software can do and what it will do during actual use.”

“Testing gathers information about a product; it does not fix things it finds that are wrong.”

“If you’re going to ignore information or go ahead with predetermined plans despite what the tests turn up, don’t bother testing.”

“If you blame messengers for bringing news you don’t want to hear, you’ll be rewarded by not hearing the news you should hear.”

“Quality is a product of the entire development process.  Poor testing can lead to poor quality, but good testing won’t lead to good quality unless all other parts of the process are in place and performed properly.”

“Testing starts at project conception, or before. If you don’t know this, you don’t understand testing at all.”

“Without a process that includes regular technical reviews, no project will rise above mediocrity, no matter how good its machine-testing process.”

“No developer is good enough to consistently do it alone, and do it right.”

“Data are meaningless until someone determines their meaning.  Different people give different meanings to the same data.  Gather data, then sit down and ponder at least three possible meanings.”

“When someone says, ‘The response should be very fast’, what does that mean, exactly?  What meanings do ‘should’, ‘very’, and ‘fast’ give to the stated information?”

“Numbers can be useful, but only if they’re validated by personal observation and set in context by a story about them.”

“Garbage arranged in a spreadsheet is still garbage.”

Jerry uses many great hypothetical scenarios to illustrate his points, and he also uses real-world examples from his years of consulting.  Here are some of my favorites:

  • The tester who didn’t log a bug he found because it wasn’t in “his” component
  • The manager who thought that the project was ready to ship because they ran 600,000 test cases and “nothing crashed the system”
  • The team who thought their biggest problem was their bug-tracking system, because the system couldn’t handle their 140,000 open bugs
  • The team whose bug triage took so long that they couldn’t make a decision on any of the bugs, resulting in 129 undiscussed and unfixed bugs
  • The tester who assumed that her new automated test tool was working correctly because all the tests displayed in green at the end
  • The developer-tester team who were gaming the bug bounty system by having the developer add bugs to the code, the tester find the bugs quickly, and the developer fix them just as quickly, resulting in rewards for both
  • The VP of Development who wanted a really big written test plan so he could have something big to slam down on a desk to “prove” that they had tested well

If you would like to think about what role testing plays in your software development project, what constitutes a good test, how to plan testing for a project, or how to interpret test data in order to make management decisions, then “Perfect Software” is the book for you.  I plan to re-read this book every year to make sure that I have fully retained all the lessons it offers.