Back in October I was invited by IBM to attend their World of Watson event in Las Vegas – I wrote a little about it at the time.

Now that I have had some time following the event, I’ve been able to let things percolate and put my thoughts to paper, as it were. In the interest of you, dear reader, I’ve split these thoughts into three different posts: Technology, Business and Philosophy.

This post is the third and final, talking a little bit about Philosophy. You can find my first post, discussing the Technology and my experience of it, here; the second post, discussing the Business of IBM and the conference, is here.

Engaging with Engagement

A word, an idea, that was thrown around an awful lot at the World of Watson conference was engagement. I spoke some in an earlier post about the import of language, and trying to suss out just what engagement is took up a lot of my brainpower at the conference.

It was, of course, a conference centered around practical uses of machine learning and artificial (augmented) intelligence, so when they said engagement, they mostly meant contacting a customer, or potential customer, with the right information at the right time.

I am reminded of the central struggle in Zen and the Art of Motorcycle Maintenance, the question of quality, or possibly meta-quality: the larger properties of an instance of quality that define its own goodness or badness.

That is to say, if a trusted friend reaches out to me with a recommendation, say, that the new Tribe Called Quest album is fantastic (and it is), is that the same thing as when Amazon reaches out in the same way with the same recommendation? Is engagement defined solely by the end result (the consumption of the good or service)? Or is there a quality-defining characteristic of the means by which we arrive at the end result?

Put another way, is algorithmic collection, aggregation and prediction in some important way just like a close friend’s recommendation? Or is it different? Does intent count here?

I would posit that a recommendation from a dear friend is different because their recommendation is motivated by a genuine interest in my finding enjoyment – they have no real stake in the outcome financially.

Whereas any revenue-seeking organization, be it the ACLU or Honda, necessarily has at its core a need to continue existing to fulfill its own mission – for the ACLU that mission is broad, social, and noble. For Honda it is probably more shareholder value driven – but either way, both organizations are driven by their end goals first. As a consumer, even an engaged one, you are a means to their larger end.

This doesn’t make it immoral; if you’re a member of the ACLU, very likely you agree that their ends are ends toward which you are happy to be a means.

Engagement I suspect is one of the words that is going to come to mean very different things, with different desired outcomes, depending on who you talk to – like Customer Success or Flat White.

Artificial Intelligence as Replacement

I discussed this a little in the first Post in this series, on the technology being showcased at World of Watson. Here’s a link.

The focus on computers and artificial intelligence as aids to workers today, as assistants or another tool in the toolbox, rings very hollow for me.

We don’t have to look far to see an easy analog: automation of manufacturing jobs in the United States. When human labor can be replaced with machines that run more cheaply, don’t require health insurance, and can be overseen by mechanics rather than managers, the tide of capitalism demands that the human labor is in fact replaced by machines, posthaste.

If a call center is replaceable by an algorithmic chat bot, it seems to be only a matter of time before the humans face full replacement. I would speculate that a canny business owner would retain an algorithmic solution as an augmentation (rather than as a replacement of humans) only as long as absolutely necessary to train it to be at least as good as the humans it is assisting.

This is the kind of cold logic that success in a capitalist economy requires – surely, some members of the call center will remain, to handle especially difficult cases or customers, to audit the algorithm’s behavior for aberrations or potential improvements. But the company that retains its humans will lose in the medium to long term, because it can’t compete with the companies that have replaced theirs.

Let’s not brush this reality aside: let’s take a moment to sit with it.

Think on this prediction from another angle: what if I told you that all of the boring, repetitive pieces of your job, the pieces that do not require creative thought or analysis or decision-making, disappeared tomorrow – would that be desirable?

Not to channel Engels too deeply here, but if we are able to realize the same or greater value while putting in less human effort, spending less of our (precious, irreplaceable) time on boring, repetitive work – isn’t that a good thing?

Put another way: consider the legend of John Henry. (Here’s the Wikipedia article if you’re not familiar.) When we think about the steam drill, when we think about what it is essentially replacing, is it replacing John Henry the man, or is it replacing the pick and shovel?

That is, what is the tool here? Why do we feel anxiety and dismay at the idea that the parts of our work that make us behave like machines may in fact be done by machines in the near future?

I’d posit that we feel anxiety because we are failing to recognize that we are not the tool. We are operators, we are humans above and beyond the work that we do. A world in which we’re all able to pursue poetry or hiking or learning Spanish rather than filling out another spreadsheet, well, that seems like an awfully lovely world, doesn’t it?

The problem, and a realistic and very appropriate problem, is that we don’t have any reasonable expectation of that world coming to pass. As humans in auto factories are replaced by conveyor belts and robots, those folks don’t then continue to receive a paycheck while they pursue the perennial garden they’ve always wished for – they lose their jobs!

(You see, Engels as a Socialist had an easy out here: Just have a revolution.)

In this way, we can see automation, and machines replacing humans in their boring, repetitive work, as a movement not toward the society described above, where humans spend less time behaving like machines and more time behaving like humans – but rather toward a future that is even more divided than it is today.

We can very easily look at automation and machine replacement as a wealth consolidation mechanism – as fewer paychecks are required for the displaced workers, that value is funneled back to the top, and to the shareholders, consolidating wealth upward in what looks like a very bleak spiral, indeed.

One piece that I hadn’t considered, and will touch on only briefly here since I want to give it some time to percolate, is the concept of the engineer as radical organizer. That is, we want to get humans out of machine-like work, out of coal mines and out of steel mills. The sooner we get to a place where all of those jobs are in the hands of machines, the sooner we, as a society, will be forced to figure out how to live together, equitably.

Like I said: it’s half-formed. Maybe more on that in a future Post.

A Story About Ethics and Algorithms

Can I tell you all a story?

There’s a bar way up at the very top of Mandalay Bay, with an outdoor deck, and you can see the whole of the Strip from up there. It feels like something out of a movie. As is my wont, I had no expectation of ever being in Mandalay Bay again, so I gave it a shot.

I spent some time chatting with a few folks more professional and more experienced in all things Watson, all things financial and algorithmic – in these moments I try very much to channel Fitzgerald’s Nick Carraway, the observer, the peripheral outsider.

I mentioned, foolishly, casually, that with all of the talk of machine learning and actionable algorithmic insights, it felt as though we’d all given algorithms an ethical pass, like we’d ushered them into an amoral world, a post-morality world. They sort of looked askance, and asked me to say more.

Being who I am, I didn’t recognize this as my opportunity to change course; instead, I put together an ad-hoc ethics of algorithmic life, positing that while a mathematical function itself does not think or behave in a way that makes sense to examine as a moral actor, its results can surely still be examined through a moral lens.

“You see any algorithm is bookended by deeply moral activities; the first is, naturally, creating the algo – how do you keep your own biases, conscious or otherwise, out of it? Or, or, how can we be certain that folks who build our algos have not intentionally built in biases to their own benefit? And, of course, the second bookend is our interpretation of the algorithm’s output – we’re all tempted to interpret information in our own best interest, after all.”

This was months ago, I’m paraphrasing from memory and my notes. You get the picture.

This topic gained no traction; if it were a ’90s sitcom, this would be the moment where they dubbed over the record-screech sound and everyone would stop talking and the bartender would look at me over his neon Ray-Bans. They simply weren’t interested. Conversation started again, flowed around me. It was as though the topic of morality in this, our most important emergent technology, was not even on the table for discussion.

In that surreal moment, with night fallen and the Strip shining like the new stars of the sky, a sweating bottle of Bud Light in my hand, I felt very far from home.

As any philosopher can tell you: silence and disinterest are far worse than disagreement and debate. We thrive on disagreement; it’s how we sharpen our tools. Disengagement, a refusal to recognize the import of a topic – that smothers a philosopher, it sucks all of the wind from the sails.

As much an outsider as I felt at World of Watson, I did take heart that I was not alone – Shannon Vallor, Professor of Philosophy at Santa Clara University, gave a really outstanding Innovation Talk – if you have a chance to hear her speak, I do recommend it.

If this Post rings your bell even a little, I’d also recommend her latest book.

She said:

"Without careful design, intelligent systems are likely to duplicate existing harmful human biases." – @ShannonVallor at #ibmwow — Simon Ouderkirk (@saouderkirk) October 26, 2016

…and I felt less like a madman.

There isn’t a recording available of that talk, but she did a shorter format interview in IBM’s “Cube,” which you can watch here:

Between Dr. Vallor and the singular Cathy O’Neil (blog, book, definitely follow, definitely purchase), the morality of algorithms has been discussed in much higher quality and greater quantity than I ever could. Read their books – I’m certainly going to.

I’d like to close this three-part report on my experience at World of Watson with a quote from the Wendell Berry poem, Manifesto: The Mad Farmer Liberation Front:

As soon as the generals and the politicos
can predict the motions of your mind,
lose it. Leave it as a sign
to mark the false trail, the way
you didn’t go. Be like the fox
who makes more tracks than necessary,
some in the wrong direction.
Practice resurrection.

Thanks for reading.

About Simon

I’m Simon Ouderkirk, and I write about small data, remote work, and leadership at s12k.com. If you liked the third part of my World of Watson recap, please do follow my blog via your favorite RSS reader – I’m also on LinkedIn & Twitter.

Disclosure

IBM paid for me to attend World of Watson and provide unbiased coverage of the event. They did not provide content for me to publish, but asked that I publish about the event on blogs and social media in exchange for free admission and travel expenses. My thanks to the Watson Analytics team for inviting me.