A thread by Aaron Hodder

A fork in the road.

This thread is through a #testing lens because that’s the domain I’m most familiar with. But I believe it applies to most domains in IT. There is a lot of pressure on testers to be “more technical”. The term “technical” is ambiguous and means different things to different people. Let’s explore that briefly:

In many places testing looks like this:

A tester executing step-by-step scripts, and marking them pass or fail.

When confronted with this idea of testing, it’s natural to want to replace these expensive human executors with seemingly cheaper robotic executors:

In these situations, “Testers need to be more technical” means “Testers need to learn to code so they can get a machine to execute their test cases.” It’s these situations that are the birthplaces of the “Testing is dead” mantra. I’m not going to talk about this.

I define “technical testing” as testing the structural aspects of the software and the delivery infrastructure. It is inward-focussed, and helps discover risks to how value is delivered to end users. Things like performance, security, automated functional checks, etc. Regardless of what the product /does/, it’s good that it’s performant, securable, maintainable, extendable, testable, etc.

In other words, I am defining “technical testing” as the activities that support testing the quality of the technology underlying the product solution. A non-exhaustive collection of the skills, knowledge, and activities in this domain might look like this:
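To make “automated functional checks” concrete, here is a minimal sketch of what turning a step-by-step script into machine-executed checks might look like. The function under test (`apply_discount`) and its expected values are hypothetical, purely for illustration:

```python
# A scripted, step-by-step test case translated into code a machine
# can execute and mark pass/fail, like a human executor would.
# apply_discount is a hypothetical function under test.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def check(description: str, actual, expected) -> str:
    """Execute one scripted step and record PASS or FAIL."""
    result = "PASS" if actual == expected else "FAIL"
    print(f"{result}: {description}")
    return result

# The same step-by-step script a human tester would follow:
results = [
    check("10% off $100.00 is $90.00", apply_discount(100.00, 10), 90.00),
    check("0% off leaves the price unchanged", apply_discount(59.99, 0), 59.99),
    check("100% off is free", apply_discount(25.00, 100), 0.00),
]
print("Suite:", "PASS" if all(r == "PASS" for r in results) else "FAIL")
```

This kind of check confirms the code does what the script says; it says nothing about whether the product is valuable, which is the point the rest of this thread makes.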

As organisations look to continuously deliver value in a way that’s rapid, scalable, and agile, these technical testing skills are critical in helping teams deliver stuff. These skills are specialist skills, and are important. We now know we can continuously deliver /something/ that is performant, secure, and beautifully architected.

But…

It’s only one part of the picture. It doesn’t mean that what the teams are delivering is actually valuable.

#Siri is a wonderful piece of engineering. It still has its quirks though:

Sometimes the quirks border on irresponsible:

And sometimes they just walk right into being dangerous:

(For more information on why this might be a problem, start here: medium.com/talking-microc…)

Here’s a message someone received from a smart scale. Pretty delightful.

Unless you have cancer.

Unless you’ve just lost a pregnancy.

Unless you have an eating disorder.

One more quick example: a racist algorithm that sends black people back to prison more often than white people, all else being equal: (propublica.org/article/machin…) (theatlantic.com/technology/arc…)

As software becomes more deeply ingrained in our daily lives, as it is now integrated into our homes, cars, and hospitals, and as it becomes applied to increasingly complex domains, the need for deep technical testing ability increases. But… what is a tester’s role again?

To analyse risk and uncover unintended consequences? All risks? All unintended consequences?

I think so. This tweet sums it up wonderfully:

“Silicon Valley is run by people [who] want to be in the tech business, but are in the people business. They are way, way in over their heads.”

Sometimes people say that testers break things.

I think testers ought not just try to break the technology.

Testers should be breaking the design decisions, and discovering the consequences of those decisions on /all/ people (not just your “target” users). Who is going to question and analyse the consequences of the algorithm in the autonomous vehicle that decides it’s better to save the driver rather than the bystander? Who questions the decision to incorporate cupcakes into Google Maps? (washingtonpost.com/news/morning-m…)

It is absolutely a tester’s job! But this sounds complex and difficult. You’d need to understand ethics and psychology, and you’d probably need skills from anthropology and UX. A non-exhaustive collection of the skills, knowledge, and activities in this domain might look like this:

As organisations look to continuously deliver value in a way that’s safe and ethical, these humanistic testing skills are critical in helping teams deliver stuff. These skills are specialist skills, and are important.

We now have two broad domains of important specialist testing skills and activities: those that focus inwards on the technology, and those that focus outwards on the people. If the technology-focussed skills are called “engineering” skills, shall we call the people-focussed skills “#humaneering” skills? I’m not sure. But here’s a diagram illustrating what I mean:

At the moment, there is a lot of value placed on the engineering domain, but I suggest that we give the humaneering domain the value it deserves too. Each domain requires a lifetime of study, so I see a future where dedicated testers will need to choose one fork or another in order to specialise.