I talked before about tradition and dogma in the testing field. It’s often interesting to see how the ideas that get passed off as wisdom in the testing world come about. Let’s take one example and break it down.

I should note this will be one of my “bring it all together” posts. I often like to look back at my past thoughts and see if I still believe what I said, if I feel I was in error, or being too simplistic, etc. This is my own way of doing what the title of this post suggests. Here I’m questioning what I’ve passed off in my own mind as my “wisdom” and showing how I believe it should be broadened rather than reduced.

Case In Point

I recently saw a well-known test practitioner say this on Twitter:

“Fundamental asymmetries in testing: we can’t verify that the product DOES work; only that it DID work. We can’t verify that it WILL work; only that it CAN work. We can’t verify that there are no problems; only that we’re not aware of problems.”

The Wisdom Begins

This, right there, could be the start of a bunch of people unreflectively quoting something that gets further reinforced simply because it gets quoted more and more.

And on the surface this quote sounds great. It sounds very zen-like while having just enough substance to probably be meaningful because, in fact, those are definite asymmetries. So it is meaningful so far as it goes. But it’s also exactly why many people simply stop taking testing seriously. Why? Because they see practitioners engaging in this kind of “word play” (note the quotes!) rather than actually helping people get things done.

And that’s a pity because that idea, as described in that quote, is important. It is a constraint and a sensitivity to be aware of. But there’s something inherent in that idea. It’s about the past and the future. (And thus subject to the project singularity I talked about.)

These are thus dimensions. And I talked about the importance of the dimensionality of testing. I also talked about telling good stories and not being such a tester. The quote above is a perfect example of: don’t be such a tester.

Is Now; Was Then

I’ve talked about how testers have to think like archaeologists or historians. And, relevant to that point, testers also have to think like meteorologists. As we all hopefully know, meteorology is a science. Its basis is in the empirical method. Practitioners of meteorology study aspects of an ecosystem. They look at properties of that ecosystem — like atmosphere, wind speed, pressure, moisture, etc — and they have a mechanism made up of a series of techniques (physics) that help them describe and model those properties.

So the point here is that meteorology could be seen as understanding the weather and seeing historical trends. Relevant to the above quote: “It DID work. It CAN work.” But, in fact, there’s also a component of meteorology that is probably more familiar to many of us: forecasting the weather. That’s the future. Relevant to the above quote: “It DOES work. It WILL work.”

The point is that what was once about understanding the past now becomes an understanding (and perhaps a prediction) for the future. Prediction, of course, does not imply certainty.

And so, actually, we can verify that the product DOES work, if we constrain the temporal and spatial boundaries of what we mean. If I’m testing the product right now in production and it works, then it DOES work. And it DID work. I can say the same thing if I’m testing the product in a staging environment. It’s when my spatial and temporal boundaries change that my statements have to become provisional. In this case, when we move the product from the staging environment to the production environment, I can speak in terms of likelihood: a prediction.

But I can compare the operation in production to that in staging. And if all is good, I can say the product DOES work, it DID work, it CAN work, and it WILL work. So these aren’t “fundamental asymmetries” at all, as it turns out. Fundamental would imply they are built into the very structure of what we are studying; an essential part that acts as the basis for everything.
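To make the bounded-claim idea concrete, here is a minimal sketch of that reasoning. Everything in it (the `Verdict` class, the environment names, the phrasing of the claims) is hypothetical and purely illustrative: the point is that a passing result only licenses a present-tense “DOES work” claim inside the same spatial and temporal bounds where it was observed, and weakens to a prediction once those bounds shift.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical model: a test verdict is only meaningful within the
# spatial (environment) and temporal (timestamp) bounds in which it
# was observed.
@dataclass(frozen=True)
class Verdict:
    check: str
    environment: str
    passed: bool
    observed_at: datetime

    def claim(self, now_environment: str) -> str:
        """Phrase what this verdict lets us assert in a given environment."""
        if not self.passed:
            return f"{self.check}: DID NOT work in {self.environment}"
        if now_environment == self.environment:
            # Same bounded context: we can say it DOES (and DID) work.
            return f"{self.check}: DOES work in {self.environment}"
        # Shifted context: the claim weakens to a prediction.
        return (f"{self.check}: DID work in {self.environment}, "
                f"so it CAN work; it LIKELY works in {now_environment}")

v = Verdict("login flow", "staging", True, datetime.now(timezone.utc))
print(v.claim("staging"))     # same bounds: a present-tense claim
print(v.claim("production"))  # shifted bounds: only a prediction
```

The asymmetry isn’t built into the verdict itself; it only appears when the context you’re asserting into differs from the one you observed in.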

Think More Broadly; Not More Binary

Instead what these dimensions represent are shifting polarities. They are bounded contexts whose meaning can shift a bit.

As mentioned and referenced earlier, I talked before about tradition and dogma in testing. Implicit in that is that a lot of the words of wisdom we pass around sometimes need to be questioned a bit. Things that sound good on the surface, even if — and especially if — they contain a large kernel of truth, need to be examined in a way that tells a story about what we’re talking about.

That ability to craft a narrative, rather than sound bites, is important. Equally important — crucially important, in fact — is the ability for that narrative to actually matter to people. Try bouncing the above quote off of teams that are, let’s say, in sprints and that want to get things delivered and that have data integrity concerns across Hadoop clusters with eventual persistence and that use containerization as their strategy. Or any other context you can imagine.

Test consultants — and keep in mind here, I was one — can do that and get away with firing off those quotes. Can you, working as part of a team? Even if you can, what does it really do to help things along?

Focusing On the Dichotomies

I just spent a lot of words describing what instinctively many of us know: when we move from one context to another, our confidence about what DID or COULD work does not necessarily translate into equal confidence about what DOES or WILL work. But spouting that above quote at someone would likely do much to diminish my perceived value as a tester because it’s essentially descriptive, not prescriptive.

And it basically comes off as a statement of me trying to absolve myself of responsibility for anything going wrong. Or, at least, that’s how people could see it. It sounds like the virtuoso wavering that politicians like to do.

All this being said, I do believe we need both kinds of sentiment, descriptive and prescriptive, in the industry. But I’m finding the test industry tends to focus on the sound bites, the alleged dichotomies (“testing / checking”), the alleged “fundamental asymmetries.” And since the test industry has been doing that, we’ve seen the industry as a whole conflate, marginalize, or dismiss testing as a discipline. We’ve seen more testing relegated to “farms” of testers and so-called “crowd testing” services.

A Dangerous Trend

This trend has been on an upward glide since 2009. Demonstrably so. We are at a period of time where we have some of our most vocal and committed practitioners in the testing field — I might be one of them, I guess — and yet we are seeing a continued decline in the perception of testing. I believe there’s a correlation here, folks.

It’s easy to dismiss a discipline if its most active and vocal practitioners are essentially coming off as irrelevant at best, a hindrance at worst. These are the same people who are often generating much of the future angst I talked about. They are often applying a “tyranny of the or” type of thinking, framing much of the industry as a threat to testing when, in fact, they (we? I?) may be the actual threat.

So, as I said, likely many testers who saw the quote I started with are going to run around quoting it.

And it’s going to pass into the traditions and dogma of the testing discipline. At the risk of sounding a bit hubristic, what many of those testers aren’t going to do is what I did here: break down that statement and see the truth and falsity in it.

Betwixt Truth and Falsity

In between the truth and falsity there is a vast chasm of thought possible. I’ve tried to show that here by referencing some of my past thoughts, without any indication that my thoughts are wrong or right. They are simply my thoughts. But I’ve found that, when suitably distilled, these thoughts — broad-ranging over disciplines as they are — get people more excited about the possibilities of testing as a discipline and an activity.

I feel like this broad-angle viewpoint of testing is critical for its survival in the future. Coupling that idea to the idea that testers (probably) should be developers is part of my current focus for how testing is a unique discipline in its own right. I want us to be part of a deep discipline and, as practitioners, I want us to be cross-discipline associative.

This is an exciting time for testing in the industry. But this is also a time where testers are, in a very real sense, struggling for their survival. I believe part of that struggle is figuring out how to get the broad discipline of testing associated with a broader definition of what a “developer” means … and then promote our wisdom in that context.