A Tale of Two Testers

Meet Derek and Emma. They are both Software Test Engineers. Derek works for a company called ContactCo, which is building a web application to allow users to add and manage their contacts. Emma works for a competitor of ContactCo, called ContactsRUs, which is building a similar application.

Emma is very proud of her ability to create test automation frameworks. As soon as development begins on the new app, she gets to work on a UI automation suite. She writes dozens of well-organized automated tests and sets them to run with every build triggered by a developer check-in. The tests are all passing, so she feels really good about the health of the application. She also creates a set of smoke tests that will run with every deploy to every environment. If the smoke tests pass, the deployment will automatically continue to the next environment, all the way through Production; if the tests fail, the deployment will be rolled back and the deployment process will stop. After just three weeks, she’s got a CI/CD system in place, and everyone praises her for the great job she’s done.

Derek begins his involvement with ContactCo’s new app by attending the product design meetings and asking questions. He reads through the user stories so he understands the end user and knows what kinds of actions they’ll be taking. As the application takes shape, he does lots of manual exploratory testing, both with the API and the UI. He tries out the application on various browsers and with various screen sizes. At the end of the first two weeks of development, he’s found several UI and API bugs that the developers have fixed.

Next, Derek works with the developers to find out what unit and integration tests they currently have running, and suggests some tests that might be missing. He talks with the whole team to determine what the best automation framework would be for API and UI testing, and works with them to get it set up. He spends a lot of time thinking about which tests should run with the build and which should run with the deployment, and about which tests should be run solely against the API in order to minimize the amount of UI automation. Once he has a good test strategy, he starts writing his automated tests. At the end of the third week of development, he’s got some automated tests written, but he’s planning to add more, and he doesn’t quite have the CI/CD process set up yet.
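
To make that strategy concrete, here’s a minimal sketch, in Python with pytest and requests, of the kind of API-level check Derek might write instead of a UI test. The base URL, endpoints, and credentials are invented for illustration; the point is that the same behavior a UI test would cover can be verified at the API layer without spinning up a browser.

```python
import requests

# All URLs, endpoints, and credentials below are illustrative assumptions,
# not a real ContactCo API.
BASE_URL = "https://api.contactco.example.com"


def get_token(username: str, password: str) -> str:
    """Log in and return a bearer token (hypothetical /login endpoint)."""
    resp = requests.post(f"{BASE_URL}/login",
                         json={"username": username, "password": password})
    resp.raise_for_status()
    return resp.json()["token"]


def test_created_contact_appears_in_list():
    headers = {"Authorization": f"Bearer {get_token('derek', 'secret')}"}
    new_contact = {"name": "Prunella Prunewhip", "phone": "555-0100"}

    # Create a contact through the API (no browser required).
    create = requests.post(f"{BASE_URL}/contacts",
                           json=new_contact, headers=headers)
    assert create.status_code == 201

    # Verify the same behavior a UI test would cover, in a fraction of the time.
    listing = requests.get(f"{BASE_URL}/contacts", headers=headers)
    assert listing.status_code == 200
    assert any(c["name"] == "Prunella Prunewhip" for c in listing.json())
```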

At the end of the three weeks, both ContactCo and ContactsRUs release their applications to Production. Which application do you think will be more successful? Read on to find out!

**********

Derek’s application at ContactCo is a big hit with users. They comment on how intuitive the user interface is, and by the end of the first week, no bugs have been reported. Customers have suggestions for features they’d like to see added to the application, and the team at ContactCo gets started with a new round of product design meetings, which Derek attends. When he’s not in meetings, he continues to work on adding to the automated test framework and setting up CI/CD.

Emma’s application at ContactsRUs is also released to Production, and that very same day the company starts to get calls from customers. Most of the ContactsRUs customers use the Edge browser, and it turns out there are a number of rendering issues on that browser that Emma didn’t catch. Why didn’t she catch them? Because she never tested in Edge!

The next day the company receives a report that users are able to see contacts belonging to other customers. Emma thinks that this can’t be possible, because she has several UI tests that log in as various users, and she’s verified that they can’t see each other’s data. It turns out that there’s a security hole; if a customer makes an API call to get a list of contacts, ALL of the contacts are returned, not just the contacts associated with their login. Emma never checked out the API, so she missed this critical bug.
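
For anyone wondering what an automated check for that hole might look like, here’s a minimal sketch in Python with requests, assuming a hypothetical /login endpoint and two seeded test accounts. An API-level authorization test like this would have caught the leak long before Production:

```python
import requests

# Hypothetical base URL, endpoints, and test accounts, for illustration only.
BASE_URL = "https://api.contactsrus.example.com"


def login(username: str, password: str) -> dict:
    """Return auth headers for a user (assumes a /login endpoint)."""
    resp = requests.post(f"{BASE_URL}/login",
                         json={"username": username, "password": password})
    resp.raise_for_status()
    return {"Authorization": f"Bearer {resp.json()['token']}"}


def test_user_only_sees_their_own_contacts():
    # Two independent accounts, each with its own seeded contact data.
    alice = login("alice", "alice-password")
    bob = login("bob", "bob-password")

    alice_contacts = requests.get(f"{BASE_URL}/contacts", headers=alice).json()
    bob_contacts = requests.get(f"{BASE_URL}/contacts", headers=bob).json()

    # The API must scope results to the authenticated user: no overlap allowed.
    alice_ids = {c["id"] for c in alice_contacts}
    bob_ids = {c["id"] for c in bob_contacts}
    assert alice_ids.isdisjoint(bob_ids), "Contacts leaked across accounts!"
```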

Developers work late into the night to fix the security hole before anyone can exploit it. They’ve already lost some of their customers because of this, but they release the fix and hope that this will be the last of their problems. Unfortunately, on the third day, Emma gets an angry message from the team’s Product Owner that the Search function doesn’t work. “Of course it works,” replies Emma. “I have an automated test that shows that it works.” When Emma and the Product Owner investigate, they discover that the Search function works fine with letters, but doesn’t work with numbers, so customers can’t search their contacts by phone number. This was a critical use case for the application, but Emma didn’t know that because she didn’t attend the product meetings and didn’t pay attention to the feature’s Acceptance Criteria. As a result, they lose a few more customers who were counting on this feature to work for them.
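
The fix for this kind of miss isn’t more automation, it’s automation driven by the Acceptance Criteria. As a rough sketch (again in Python with pytest and requests, with an invented search endpoint and test data), a criterion like “users can search contacts by phone number” translates directly into a parametrized test case:

```python
import pytest
import requests

BASE_URL = "https://api.contactsrus.example.com"  # illustrative only


@pytest.fixture
def auth_headers():
    # Assumes a /login endpoint and a seeded account whose contacts include
    # "Prunella Prunewhip" with phone number 555-0100.
    resp = requests.post(f"{BASE_URL}/login",
                         json={"username": "emma", "password": "secret"})
    resp.raise_for_status()
    return {"Authorization": f"Bearer {resp.json()['token']}"}


def search(term: str, headers: dict) -> list:
    """Call the hypothetical search endpoint and return matching contacts."""
    resp = requests.get(f"{BASE_URL}/contacts/search",
                        params={"q": term}, headers=headers)
    resp.raise_for_status()
    return resp.json()


# One check per Acceptance Criterion: search by name AND by phone number.
@pytest.mark.parametrize("term, expected_name", [
    ("Prunella", "Prunella Prunewhip"),   # search by letters
    ("555-0100", "Prunella Prunewhip"),   # search by numbers (the case Emma missed)
])
def test_search_finds_contact(term, expected_name, auth_headers):
    results = search(term, auth_headers)
    assert any(c["name"] == expected_name for c in results)
```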

The Moral(s) of the Story

Were you surprised by what happened to ContactsRUs? It might have seemed that they’d be successful because they implemented CI/CD for their application so quickly. But CI/CD doesn’t matter if you neglect these two important steps:

  1. Understand the product you are testing. Know who your end users are, what they want from the product, and how they will be using it. Pay attention in planning meetings and participate in the creation of Acceptance Criteria for development stories.
  2. Look for bugs in the product. Many software test engineers jump right to automation without remembering that their primary role is to FIND THE BUGS. If there are bugs in your product, the end users aren’t going to care about your really well-organized code!

Every good fable deserves a happy ending! Hopefully you have learned from Derek and Emma and will make sure that you understand and test your software before writing good automation.

18 thoughts on “A Tale of Two Testers”

  1. Shashank Shah

    This story was pretty awesome and helpful in understanding that automation is not the only part of a successful product launch; instead, one needs to understand the product, its workflow, and the end users or customers.

    And of course, not to forget meetings are always very important….!!!

    Cheers.

    Have a happy weekend..!!

    1. kristinjackvony Post author

      I’m glad you enjoyed my post, Shashank! Feature meetings can be boring, but they are very critical to understanding the product and the needs of the customers.

  2. Karlo

    Hi Kristin, an excellent example of when a test plan is driven by risk analysis.

    I would like to comment on point 2:

    “Look for bugs in the product. Many software test engineers jump right to automation without remembering that their primary role is to FIND THE BUGS. If there are bugs in your product, the end users aren’t going to care about your really well-organized code!”

    In introductions to software testing, when the primary goal is to find bugs, the organization is at the second of Beizer’s five software testing levels, where the fifth level is:

    Level 5
    Testing is a mental discipline that helps the whole team produce high-quality software. Testers and developers are one team, and they share a mindset focused on software quality. Testers define what quality is and how to measure it. They teach developers what quality is and how to test software with quality in mind. Beizer uses a spell checker analogy: a spell checker identifies errors, but it also teaches us how to spell correctly.

    I wrote more on that topic here: https://blog.tentamen.eu/what-are-beizer-testing-levels/

    It seems that point 2 is the opposite of the way they work at ContactCo. What is your opinion/experience on Beizer testing levels?

    Thanks!

    Karlo.

    1. kristinjackvony Post author

      Hi Karlo- I had never heard of Beizer testing levels before you mentioned them. I’ll have to read about them! But based on what you said Level 5 is, I totally agree that the whole team should be looking for the bugs. In an ideal world, developers would check their own work more carefully, and the role of the tester would be more of a coach who helps think of exploratory scenarios and designs test automation plans. I wrote this fable because I have encountered scenarios where I interview testers and give them a challenge to both find bugs in a sample application and write test automation. Many testers find only a few bugs and spend their whole time working on the automation. This seems backwards to me.

  3. Pingback: Five Blogs – 14 September 2020 – 5blogs

  4. Nimish Bhuta

    Good post. I agree that as a tester the primary job is to find bugs, which requires a good understanding of the product and how the customer is going to use it. Then come the test automation and the CI/CD pipeline.
    It’s also important to bring value to each and every aspect of developing the product, such as meetings, defining acceptance criteria (putting yourself in the customer’s shoes), asking relevant questions, designing the test strategy, implementing the test automation framework, etc.
    I really like the way you have written it.

  5. Daksha Sosa

    Nice article Kristin, enjoyed it, completely agree. Unfortunately, automation testers often jump straight to automating without knowing the product features.

  6. Enrico Milani

    A simple story that tells a lot about the common “legends” and beliefs about testing.
    An automation-only approach might look cheap and efficient per se, but if it isn’t coupled with exploratory testing and questioning of the requirements, the implementation, etc., it will rapidly turn into an unjustified, low-value task in the product pipeline.

    Thanks Kristin for sharing this interesting article.

  7. Khurram W Bhatti

    While these two companies are working their best to launch the product, there is another company, YourContacts. Simba works for this company, which is building a similar application. Simba attends all the planning, grooming, and design meetings and has a very good understanding of what the business needs are. While the developers are implementing their design and busy programming, Simba is working closely with the business to understand the Acceptance Criteria and building his automation specifications on that basis. Simba is very proactive and collaborative, and with the help of the right POCs, he is able to set up a pipeline, where he has tested his setup (not running any tests, but making sure the pipeline itself works). He is also working closely with the developers to understand the unit test coverage, so he can advise them on what they are missing and also make sure that he does not write duplicate automated tests. By the time the code gets to the QA environment, he has many tests automated (though some of them might need a bit of tweaking), which include many API tests, some GUI tests, a few end-to-end tests, and some load tests that get triggered since he has the pipeline set up. He gets a notification as soon as his tests pass or fail. I won’t get into the details of what he does when any of the tests fail. Assuming that all went well, he starts exploring the application’s tiers. Simba has a very good understanding of the back end, the APIs, and the GUI. At the end of the three weeks, YourContacts also releases their application. Do you think this application would be successful at this point?
    All three companies are implementing new features, enhancements, and bug fixes. When anything is deployed to the QA environment for testing, Emma is busy sorting out all the issues from the previous releases/sprints, and Derek is busy performing regression checks manually since he does not have enough automation coverage; only then does he start his exploratory testing. With each deployment, the number of regression checks keeps adding up, and since he did not have enough automated checks and the pipeline set up in the beginning, he still has not been able to sort that issue out either. But Derek is a hard-working individual and he wants his product to be successful, so he works day and night whenever there is a deployment. Unfortunately, they are in an environment where the code gets deployed to the QA environment multiple times during each two-week sprint, and he has to perform his regression checks each time. Despite his efforts, either he misses some bugs because of the lack of time or the release date has to be pushed.
    On the other hand, Simba has been keeping his regression suite up to date, and he has the pipeline set up. His automated tests get triggered while he does his exploratory testing, and the team is able to release their product on time. Six months have passed, ContactCo has lost many of their customers, and it seems that they will be out of business soon. ContactsRUs has a quality product (which is starting to lose its quality), but they cannot keep up with their clients’ demands. How do you think YourContacts is doing at this point?
    Oh, I forgot this is “A Tale of Two Testers”, and not “A Tale of Three Testers”. Also, many people have this understanding of automation test engineers: that they only care about automation and writing their little automation scripts, and that they do not care about or understand the product itself. I know this because of articles like this everywhere, and comments on LinkedIn. We work in a very agile environment and the clients’ needs are continuously changing, and if we do not keep up with these automated checks, which do not seem important to many, it would not take long before we run into these issues. Hope I did not offend anyone. Thanks.

    1. kristinjackvony Post author

      What an awesome addition to the story, Khurram! You are right that Simba and YourContacts will be more successful than Derek and Emma and their companies. The main point of my story was that the purpose of testing was to find bugs, but finding bugs isn’t necessary when they are prevented in the first place. Because Simba is involved in the product’s development from the very beginning, he’s making sure that unit tests and other tests are built in that will provide all the coverage they need. It would be great if every tester did this! But it can take time for teams and testers to evolve.

  8. Lucas Gobitta Maglio

    Hi Kristin!
    I just have to say, what an amazing tale you wrote.
    I work at SVLabs, a QA consulting company in Brazil, and in the past few years lots of new clients have come to us asking for automation. Well, that’s totally reasonable, but we always do a brief evaluation of their testing maturity level (we use the TMAP methodology here), and most of the time we see that automation is not what they need, at least not yet. Your tale exemplifies perfectly what may happen if they choose to go on with only automation (which in some cases they do).
    I also liked the third part added by Khurram, but I believe that is good for showing what they’ll get when they achieve level 5, which can be a long and hard road depending on the involvement of the client’s company, in our case.
    If you allow me, I’d like to translate your tale to Portuguese and publish it on our blog (https://blog.svlabs.com.br/), with your page credited as the source. Just let me know.

    Anyway, thanks for the read, and sorry for any mistakes in my English; it’s been a while since I last wrote anything in a language other than Portuguese.

    1. kristinjackvony Post author

      Hi Lucas- I’m glad you enjoyed this post! You can translate it to Portuguese and publish it on your blog, as long as you credit me as the author and add a link to my site. And your English is great!
