Equivocation is a technique used to mislead others through the use of imprecise language. There are many words in the English language that have more than one meaning, such as the word “light”, which could mean “bright”, or it could mean “not heavy”. It’s also possible to use equivocation by being deliberately ambiguous about time or quantity. Children are excellent at equivocation, as you will see in the example below.
When I was a much younger woman, I taught piano lessons, mostly to young students. I gave each student an assignment book in which I would assign their lesson for the week. I expected each student to practice for fifteen minutes a day, six days a week, and the assignment book included a little chart where they could enter their practice time.
I discovered all kinds of ways that children equivocated about their piano practice! When a child’s mother asked her, “Did you practice piano?”, she might answer “Yes”. But upon further examination, it became clear that what she had actually done was practice the day before. Other students would mark down the time they spent sitting at their piano bench looking out the window as “practice”. Still others would play the piano, but not play their assigned music, and call that “practice”. And one creative young man recorded one fifteen-minute practice session and replayed it on a tape player every day so his mother would hear him “practicing”.
The same thing happens in software testing. Many terms are used in an equivocating fashion to convey that extensive testing has been done, when in fact it hasn’t. Consider the following examples:
- “Code coverage”: a team could boast that they have 95% code coverage, when many of their unit tests simply execute code and pass regardless of the application’s behavior
- “Automation coverage”: saying that a team has 100% automation coverage could mean that they only ever run ten manual tests and they’ve automated all ten
- “Test plan”: this could refer to anything from a pages-long document to an idea for a few tests to run that the tester thought up while in the shower
- “Test results dashboard”: is this a tool that shows successes and failures over time, highlighting flaky tests, or a colorful page that doesn’t convey any meaningful data?
- “Continuous Deployment”: for some teams, this could mean “when I commit code, it is automatically deployed to production”, or it could mean “after I submit a change control request and it is evaluated, approved, and scheduled, then it goes to production”
I could go on and on. Even the word “testing” has been highly debated. Are automated scripts that exercise an application’s functionality “tests”, or merely “checks”? My point here is NOT to arrive at common definitions for everyone. My point is that it is very easy to equivocate to give an impression of software quality that is simply not true.
As software testers, we owe it to our end users to be honest about our testing practices. This means reporting our activities to our team with clear definitions and metrics.