Why The Manual vs. Automation Debate is Wrong

I don’t generally editorialize in my blog- I prefer to focus on what to test rather than theories of testing- but I feel compelled to say that I’m tired of the whole “manual vs. automated testing” discussion. Some people describe automated testing as the cure for bad code everywhere, and others lament the poor manual tester who has no technical skills. Meanwhile, there are advocates who say that automation is no panacea, and that automation code should be used only for simple tools that will aid the manual tester, who is the one who really knows the product.

In my opinion, this debate is unnecessary for two reasons:
1) “Manual” and “automated” are arbitrary designations that don’t really mean anything. If I write a Python script that will generate some test data for me (something like the sketch after this list), am I now an automation engineer? If I log into an application and click around for a while before I write a Selenium test, am I now a manual tester?
2) The whole point of software testing- to put it bluntly- is to do as much as we can to ensure that our software doesn’t suck.  We often have limited time in which to do this.  So we should use whatever strategies we have available to test as thoroughly as we can, as quickly as possible.  
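To make the first point concrete, here’s roughly what such a test-data script might look like. This is a minimal sketch- the record fields and output file name are invented for illustration:

```python
# Minimal sketch of a test-data generator. The record shape and the
# output file name are hypothetical.
import json
import random
import string


def random_user():
    """Build one fake user record with a random name, email, and age."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"name": name, "email": f"{name}@example.com", "age": random.randint(18, 90)}


if __name__ == "__main__":
    users = [random_user() for _ in range(10)]
    with open("test_users.json", "w") as f:
        json.dump(users, f, indent=2)
    print(f"Wrote {len(users)} test users to test_users.json")
```

A dozen lines of Python like this hardly make someone an “automation engineer”- which is exactly the point.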
Let’s take a look at three software testers: Marcia, Cindy, and Jan.  Each of them is asked to test the Superball Sorter (a hypothetical feature I created, described in this post).  
Marcia is very proud of her role as a “Software Developer in Test”.  When she’s asked to test the Superball Sorter, she thinks it would be really great to create a tool that would randomly generate sorting rules for each child.  She spends a couple of days working on this, then writes a Selenium test that will set those generated rules, run the sorter, and verify that the balls were sorted as expected.  Then she sets her test to run nightly and with every build.  
Unfortunately, Marcia didn’t take much time to read the acceptance criteria, and she didn’t do any exploratory testing. She completely missed the fact that it’s possible to have an invalid set of rules, so there are times when her randomly generated rules are invalid. When this happens, the sorter returns an error, and because she didn’t account for this, her Selenium test fails. Moreover, the test takes a long time to run, because the rules need to be set through the UI with each run, and she had to build in many explicit waits for the browser to respond to her requests.
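For readers who haven’t worked with explicit waits: each one pauses the test (up to some timeout) until the browser is ready for the next step, and those pauses add up quickly. Here’s a generic Selenium sketch- not Marcia’s actual test; the URL and element IDs are invented:

```python
# Generic sketch of a Selenium test that sets rules through the UI.
# The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("http://localhost:8080/superball-sorter")

# Every UI step waits (up to 10 seconds) for the browser to catch up.
wait = WebDriverWait(driver, 10)
rule_field = wait.until(EC.element_to_be_clickable((By.ID, "rule-input")))
rule_field.send_keys("Amy gets all red balls")
wait.until(EC.element_to_be_clickable((By.ID, "add-rule"))).click()
wait.until(EC.element_to_be_clickable((By.ID, "run-sorter"))).click()

# Wait for the sorted results to render before asserting on them.
results = wait.until(EC.presence_of_element_located((By.ID, "results")))
assert "Amy" in results.text
driver.quit()
```

Multiply those waits by dozens of randomly generated rule permutations and the runtime balloons.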
Cindy is often referred to as a “Manual Tester”.  She doesn’t have any interest in learning to code, but she’s careful to read the acceptance criteria for the Superball Sorter feature, and she asks good questions of the developers.  She creates a huge test plan that accounts for many different variations of the sorting rules, and she comes up with a number of edge cases to test.  As a result, she finds a couple of bugs, which the developers then fix.
After she does her initial testing, she creates a regression test plan, which she faithfully executes at every software release.  Unfortunately, the test plan takes an hour to run, and combined with the other features that she is manually testing, it now takes her three hours to run a full regression suite.  When the team releases software, they are often held up by the time it takes for her to run these tests.  Moreover, there’s no way she can run these tests whenever the developers do a build, so they are often introducing bugs that don’t get caught until a few days later.
Jan is a software tester who doesn’t concern herself with what label she has.  She pays attention during feature meetings to understand how the Superball Sorter will work long before it’s ready for testing.  Like Cindy, she creates a huge test plan with lots of permutations of sorting rules.  But she also familiarizes herself with the API call that’s used to set the sorting rules, and she starts setting up a collection of requests that will allow her to create rules quickly.  With this collection, she’s able to run through all her manual test cases in record time, and she finds a couple of bugs along the way.
She also learns about the API call that triggers the sorting process, and the call that returns data about what balls each child has after sorting.  With these three API calls and the use of environment variables, she’s able to set up a collection of requests that sets the rules, triggers the sorting, and verifies that the children receive the correct balls.  
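Since the Superball Sorter is a hypothetical feature, the endpoints below are assumptions, but the flow Jan builds might look something like this in Python with the requests library:

```python
# Hypothetical sketch of Jan's three API calls chained into one check:
# set the rules, trigger the sort, verify the results. Every endpoint
# and payload shape here is an assumption.
import requests

BASE_URL = "http://localhost:8080/api"  # invented; would live in an environment variable

# 1. Set the sorting rules.
rules = [{"child": "Amy", "color": "red"}, {"child": "Bob", "color": "blue"}]
requests.post(f"{BASE_URL}/rules", json=rules).raise_for_status()

# 2. Trigger the sorting process.
requests.post(f"{BASE_URL}/sort").raise_for_status()

# 3. Fetch the results and verify each child got the right balls.
results = requests.get(f"{BASE_URL}/results").json()
assert all(ball["color"] == "red" for ball in results["Amy"])
assert all(ball["color"] == "blue" for ball in results["Bob"])
print("Sorting rules verified")
```

Because everything happens at the API level, a check like this runs in seconds rather than the minutes a browser-driven version would take.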
She now combines features from her two collections to create test suites for build testing, nightly regression testing, and deployment testing.  She sets up scripts that will trigger the tests through her company’s CI tool.  Finally, she writes a couple of UI tests with Selenium that will verify that the Sorter’s page elements appear in the browser correctly, and sets those to run nightly and with every deployment.
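Because the logic is already covered at the API level, those UI tests can stay thin: they only need to confirm that the page renders its elements. A sketch, again with an invented URL and element IDs:

```python
# Thin UI smoke test: confirm the Sorter's page elements appear.
# The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # allow elements time to render
try:
    driver.get("http://localhost:8080/superball-sorter")
    # The page should expose a rule input, an add button, and a run button.
    for element_id in ("rule-input", "add-rule", "run-sorter"):
        assert driver.find_elements(By.ID, element_id), f"missing #{element_id}"
    print("All Sorter page elements present")
finally:
    driver.quit()
```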
With Jan’s work, the developers are able to discover quickly if they’ve made any changes in logic that cause the Superball Sorter to behave differently.  With each deployment, Jan can rest assured that the feature is working correctly as long as her API and UI tests are passing.  This frees up Jan to do exploratory testing on the next feature.
Which of these testers came up with the most efficient process for testing the software? Which one is most likely to catch any bugs that come up in the future? My money’s on Jan! Jan isn’t simply a “manual tester”, but she isn’t a “software developer in test” either. Jan spends time learning about the features her team is writing, and about the best tools for testing them. She doesn’t code for coding’s sake, but she doesn’t shy away from code either. The tools and skills she uses are a means to ensure the highest-quality product for her team.

12 thoughts on “Why The Manual vs. Automation Debate is Wrong”

  1. Nilanjan

    In your example for Jan, the two activities she does, the testing part and the automation, have almost nothing to do with each other. The purposes of the two are very different.

    In addition, the reality in the industry is that very few people know how to do 'manual' testing. The industry is dominated by test automation. It bites that automation proponents console 'manual' testers with, 'automation helps you explore'. You can prove me wrong by showing me an example of a test automation expert demonstrating a credible example of 'manual' testing.

  2. Dmitry Shyshkin

    I just wonder when Jan has the time to do all this.
    Or is "Jan" a team name consisting of Cindy and Marcia, where Cindy knows the product and helps Marcia with test cases, while Marcia spends her time converting those test cases into API and UI automated tests?

  3. Kristin Jackvony

    Hi Nilan- I agree that the manual/exploratory testing that Jan does and the automation that she sets up are different activities. But both of them have the same purpose in that they serve the end goal of making sure the software isn't buggy.

    I work with people who are good automation engineers and good manual testers. Perhaps on social media it doesn't look like there are people who are good at both, because bloggers and conference speakers often have a specific focus.

  4. Kristin Jackvony

    Hi Dmitry- Jan is her own person in this example! I do all the activities that Jan does in my position (with the exception of UI automation, which my colleague is doing), and I don't have trouble finding the time to do them. It is possible to be very familiar with your product while at the same time contributing to test automation.
