Who serves whom?

Artificial Intelligence is a complex riddle for all sorts of experts. It’s full of magic, mystery, money, mind-boggling techno-ethical paradoxes and sci-fi dilemmas that may or may not affect us in some far or near future. Meanwhile, it already shapes our everyday life. Things already go wrong. And no one is responsible. What can we do?

Pocket calculators have been beating us at math for a couple of decades now. Bots programmed to influence human dialogue on social media are showing the middle finger to the everyday Turing tests called bot-or-not. Propagandists and marketers running fake accounts are feigning the voices of millions, in a calculated attempt to take advantage of our good faith. Some machines may still take cauliflower for poodles, owls for apples, and cats for ice cream, but they beat us hands down at speech and face recognition, they beat world champions at chess, Go, and poker, and now mostly compete with themselves. There is little doubt that machines will out-simulate and outperform human intelligence at any specific contest. Just like mechanical robots outperform us physically in every way, bots will be bigger, faster, stronger than any natural being. When the rules are set, machines kick our ass.

“It’s about setting the rules,” Kasparov said. “And setting the rules means that you have the perimeter. And as long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that’s enough for machines to reach the level that is impossible for humans to compete.”

The point in time at which machines outperform the human brain in totality doesn't matter. If they beat us at defined disciplines, networked machines will outdo us as individuals and as a species. We could just wait and see what happens. But leaving our future to the experts when we can already see how quickly we are losing control is comically lazy.

Technology is made to make our life easier, to serve us. Bots, robots and ultimately the Wizards of Oz that hold their strings are flipping this around. Computers don’t need to become self-conscious, they feed on our cognition. We fuel Facebook with our experiences. We see, think, write, speak and live for Google. Amazon runs the biggest Mechanical Turk that the world has seen—and it calls it just that. The ultimate secret of who we are is not in our hearts, it is in our iPhones. The big five are robbing the data bank with fire trucks.

Some claim that we should treat machines like our children. Treat them well, so they treat us well in return. Since our machines will be smarter than us, we should make sure not to anger them or “inflict[…] existential trauma” on them. Right, let’s not hurt their potential feelings. Is your head already spinning?

What may we hope?

What matters foremost, is not how far we can go but where we are going. Why do we build artificial intelligence? What are machines supposed to do? What are machines for?

Do machines serve us as much as we serve those who own them?

Should humans serve machines or should they serve us?

May we give machines the technical, legal and political power to make decisions in our place, subjecting us to their processes?

Technology can be described as an amplifier or an extension of the human body. The hammer is an extension of the fist, the knife is an extension of our teeth, TVs are exaggerated eyes and ears. Marshall McLuhan claimed that every extension results in an auto-amputation of another part.

“Every extension of mankind, especially technological extensions, has the effect of amputating or modifying some other extension […] The extension of a technology like the automobile ‘amputates’ the need for a highly developed walking culture, which in turn causes cities and countries to develop in different ways. The telephone extends the voice, but also amputates the art of penmanship gained through regular correspondence. These are a few examples, and almost everything we can think of is subject to similar observations…We have become people who regularly praise all extensions, and minimize all amputations.”

The idea that machines might become an equal or a superior is discussed as exciting and scary at the same time. It’s exciting if machines stay extensions and help us; it becomes scary if they turn against us. If we imagine the human existence as a biological entity in constant search of equilibrium, both scenarios are disconcerting. Every extension of a body part logically leads to a decreased sensitivity of another. The harder the hammer, the less we feel the nail. As long as this is intended, it’s cool.

“Medical researchers like Hans Selye and Adolphe Jonas hold that all extensions of ourselves, in sickness or in health, are attempts to maintain equilibrium. Any extension of ourselves they regard as ‘autoamputation,’ and they find that the self-amputation power or strategy is resorted to by the body when the perceptual power cannot locate or avoid the cause of irritation. Our language has many expressions that indicate this self-amputation that is imposed by various pressures. We speak of ‘wanting to jump out of my skin’ or of ‘going out of my mind,’ being ‘driven batty’ or ‘flipping my lid.’ And we often create artificial situations that rival the irritations and stresses of real life under controlled conditions of sport and play.”

What amputation do we risk with artificial intelligence? Cognition? Thought? Intelligence itself? Freedom?

What should we do?

The debate about when machines will be more intelligent than us, and whether they can become truly intelligent at all, is hypnotising. The question of which people a self-driving machine should kill in an either-or situation is shockingly entertaining. Talking science fiction, debating logical paradoxes and ethical trick questions makes for great small talk, promising business plans, lucrative promises, cheap marketing, fantastic hoaxes, fun games, entertaining illustrations, catchy headlines, half knowledge, spectacular cock fights, world record bullshit, and great clickbait. Without a thorough reflection on the ethical principles of human/machine action, these discussions remain small talk.

Ethics may irritate a predominantly scientific mind. Natural science looks at what is and tries to describe it in a way that can be reproduced at any point in time. Ethics looks at what should be and tries to make sure that everybody can take part in making it real. Humans are, to a big extent, defined by nature. At the same time, we also have the miraculous power to define reality through a series of normative processes. Human sciences, language, art, literature, politics, law, and economy define how we perceive and process things. Strangely enough, natural science itself wouldn’t make sense without human sciences.

Now, ethically, you can’t just go and tell a machine whom to kill as if it were a pragmatic, mathematical or merely logical problem. If you cannot say who will take responsibility for those who die—as a result of an algorithmic calculation, a bug or an unforeseen malfunction—you are in trouble, ethically. Ethically, we’d first need to find a way to decide how much power we want machines to have over our lives before we tell them whom to kill.

Should civil machines be given the power to calculate whom to kill?

Can moral values be measured, weighed, quantified and thus “processed” at all?

What are the moral core values these calculations are based on? The greatest good for the greatest number of people? Duty? Maximized happiness? Economic profitability?

Who decides which ethical principles are relevant for human-machine interaction?

Most of us would not feel comfortable following a machine’s calculation of whom to date. Ironically, Facebook might already be better at calculating whom we should date than our wet bodies high on mad hormones. But no matter how statistically well machines would fare against our faulty instincts, most humans feel uneasy putting their freedom, imagined or real, in the electric hands of machines.

“Human beings don’t want to be controlled by machines. And we are increasingly being controlled by machines. We are addicted to our phones, fed information by algorithms we don’t understand, at risk of losing our jobs to robots. This is likely to be the narrative of the next thirty years.”

No one knows the future. Crazy things will happen. Imagine a future where Facebook decides whom we marry because Facebook marriages then have a 45% lower divorce rate than natural marriages. Who will be responsible for the failed marriages and the time lost? Those who trusted the machines? Those who make the machines? No one?

Let’s imagine that it will be scientifically and morally obvious that machines make better political decisions than humans. Who runs those machines that sit in parliament? Who monitors them? And aren’t we ultimately subjecting ourselves to those who build, manage, run and own the machines rather than the machines themselves? Who decides that machines make better decisions? The people who voted the machines into power? The smarter machines? The market? The lobbyists? A group of programmers on Slack? The machines autonomously? Whom would you like to make such decisions?

As crazy as this may sound, none of this is science fiction. It is happening right now. Machines already filter, sort and choose the information we base our decisions upon. They count our votes. They sort the tasks we spend our time on, they choose the people we talk to and meet. More and more key aspects of our lives are decided by information technology. And things go wrong. Machines are made by humans. As long as we make mistakes, our machines make mistakes.

When things go wrong, both parties—those who use machines and those who build, manage and own information technology—decline responsibility. Users hide behind their lack of power, owners hide behind “the algorithm”. They sell artificial intelligence as Deus ex Machina and when it fails they blame the machine as a mere machine.

The question “Who serves whom?” is not a topic for experts in 2047. It is a key question for all of us, today, right here and now. Whether or not machines can be intelligent is not just technically or scientifically relevant, it is existential.

What can we know?

Intelligence comes from “intellegere”, to understand. Natural intelligence, as opposed to artificial intelligence, requires understanding. We “naturally understand” when someone else’s words make sense to us. When we physically recognize what we hear and see, when we feel through language what someone else felt. When we know what other people’s words mean. When we share our feelings, words, sentiments, and positions. Your brain is not a computer.

Up until today, machines don’t understand. They don’t sense, feel, recognize, share or mean. They have no intentions, positions or perspectives. They follow fuzzy orders without having a conscious notion of what an order is. They receive and match patterns, they process, calculate and simulate. They don’t feel, think, or comprehend, they do not understand, they don’t even know what they are doing. Your computer is not a brain.

How can a being or thing qualify as intelligent when it doesn’t know itself, doesn’t understand others or even realize what it is doing?

Machines are not intelligent, but they already excel at simulating intelligence. They are really good at making us believe that they understand, that they know us, that they comprehend, that they play chess. Well, it is not hard to fool us. Without any manipulation, we readily believe that toasters have feelings, that cars have a personality, that ketchup bottles have intentions. It is in our very nature to project our inner life into the world outside.

With all the projections, we still do not understand our own minds. Few engineers playing with AI even care to ask what human intelligence may be. Algorithms have become so complex that even those who build them do not understand how they work. One could say it’s magic. Or one could say it’s trial and error. But it is a profound mistake to equate a bricolage formula with human intelligence just because both are not understood.

A human-made machine can produce similar results as a human brain without us knowing why. This doesn’t mean that we are equal, or that we have reproduced or surmounted our own intelligence. It means that we like tinkering.

In order to get an idea of what is happening inside the machines, engineers write reverse algorithms that explain themselves. This is cool stuff, but, again, it doesn’t make a machine self-conscious and it does not put us in charge of what we built. On the contrary: it forces us to trust something we built without proper understanding.

One might fantasize and claim that maybe, probably, surely in 3, 30 or 300 years machines will understand so much more than we do that they can explain everything. A more passionate advocate of artificial intelligence may counter by putting human understanding itself in doubt, claiming that “maybe ‘understanding’ is nothing but an illusion on top of bioelectric processing, calculations, simulations.” What if our understanding is an illusion? Dude, just cut it out. Understanding is philosophically and biologically complicated, but it is not black magic. In essence, it’s feeling what you think. Your thinking might be wrong. Your understanding might be wrong. Your feelings might mislead you. No matter how wrong you may be, your understanding becomes real at the very moment you feel that there is meaning in what you perceive.

Human reality is impure. It is a mix of fact and fiction, nature, and norm. History, art, philosophy, poetry, economy, culture and morals are not fully measurable and they never will be. Human sciences constantly evolve and they define us as they describe us. As long as natural science needs human language to express and discuss its findings it depends on human science. Natural science itself has discovered that there are limits to what can be measured. There is no pure measuring, no absolute measurable pure reality that can free us from the impurity of human science. Math itself is as much human as natural science. There is no pure measurement, there is no measurement without norms, no norm without relation, there is no reality without perception, no language without interpretation, hell, Kant said it long ago: we have no access to the ‘thing in itself’.

Whether humans should serve machines or machines should serve humans cannot be decided scientifically. Whom a car should kill cannot be decided logically. These are not clear-cut factual questions but normative ones. Ethics doesn’t just ask “What is?”; it asks “What is?” in relation to “What should be?” What can be done needs to be discussed with regard to what should be done.

“Everything human not only means the generally human in the sense of the characteristics of the human species in contrast to other types of living beings, especially animals, but also comprises the broad view of the variety of the human essence. […] All practical or political decisions which determine the actions of people are normatively determined and exert in their turn a norm-determining effect.”

Should we cede power to anyone or anything that lacks a proper understanding of the realm it controls? Should people without understanding run things? Should things without understanding run our lives? Is it okay to let mindless things decide what we think?

With power comes responsibility. Without understanding the potential effect of our actions, without the ability to realize the actual result of our decisions machines cannot take responsibility for what they do or make us do. Who cannot take responsibility should not be given power. Who or what cannot take responsibility for itself shouldn’t be responsible for others.

Whether or not machines can become intelligent is interesting not just as a party topic. It becomes relevant when we debate how much power we should give them. Without any doubt, machines can simulate and amplify specific forms of human intelligence. But in order to make existential decisions for us, they would need to understand us, to relate to us, to be like us, and for that, they need a human body. Understanding is feeling what you think.

What is human and who is machine?

Simulation of human intelligence is not a replication of real intelligence, and it does not suddenly become real intelligence. Artificial intelligence is by definition not real; “artificial” can mean man-made as opposed to natural, but to this day it means simulated, pretended, fake, feigned intelligence. Artificial intelligence is intelligent the way artificial leather is leather; it is self-aware the way a pocket calculator knows math. To become self-conscious you need a reference point, you need to feel what you think, you need a body.

History teaches us that no one knows exactly what is possible or impossible. Neither the pessimists nor the optimists have the prophetic powers to make that call. We walk backwards through time: ahead of us lies the past, and in the corner of our eyes emerges the present. No one can see the future; all we know from looking at the past is that it will be crazy and dirty and complicated. Looking at the present from the corner of our eyes gives us hints though…

Humans already feed machines with cognition, we speak to machines without being aware of it, and sometimes we repeat what machines told us without knowing so. We can keep things a bit simpler if we make sure that technology stays a discernible extension and amplifier and doesn’t continue to increase the entropy between human and artificial information. Let’s keep things as simple as possible. The information technology we have built to date—and the chaos of data it generates—are already hard enough to control.

It is fair to assume that human intelligence cannot be reproduced without reproducing the human body and the natural and cultural history it grew out of. Theoretically, one could imagine a Blade Runner future where machines manage to produce and reproduce human intelligence to a point where human and machine become as good as indiscernible. Whether such a future is desirable is another question.

“If it would be possible to build artificial wet brains using human-like grown neurons, my prediction is that their thought will be more similar to ours. The benefits of such a wet brain are proportional to how similar we make the substrate. The costs of creating wetware is huge and the closer that tissue is to human brain tissue, the more cost-efficient it is to just make a human. After all, making a human is something we can do in nine months. Furthermore, as mentioned above, we think with our whole bodies, not just with our minds. We have plenty of data showing how our gut’s nervous system guides our ‘rational’ decision-making processes and can predict and learn. The more we model the entire human body system, the closer we get to replicating it.”

We need to make sure now that we do not grow into a future where we cannot discern human from artificial, fake from factual, where we have no basis to decide what existence we want to lead. Right now, we need to make sure that the distinction between human and machine stays clear. We need to make sure that we, and not those who own information technology, decide what future we want. How?

The first step is to make sure that machines make themselves recognizable when they talk to us. Processed speech needs to be discernible from human speech. Legally and visually. We need technology that protects the human condition and our right to choose rather than exploiting our cognition and privacy.

We need to know whether we talk to machines or humans, whether we devote our time to computers or to living beings. Whether we try to re-feel what another human being has felt, or whether we die a little, offering our time and cognition to a robot that uses our mind and time as a natural resource. We need to know who runs these robots. And we need to know how they work. Bots have no right to anonymity. Algorithms that influence human existence on the deepest level shouldn’t be trade secrets.

We need to add verification mechanisms—iris scanning, fingerprints, blockchain verification for publishing platforms—that offer us some security that the information we read has been created by humans with body and mind. There is no fully safe way to do this. And like all technology, it can be abused and it will be abused, but these measures add expensive hurdles for crooks and send an important message: “There are rules.”
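The principle behind such verification mechanisms can be sketched in a few lines of Python. This is only an illustrative toy under assumed names (`sign`, `verify`) and an assumed shared secret between author and platform; real provenance systems would rely on public-key signatures rather than an HMAC, but the idea is the same: bind a text to a credential so that impersonation and tampering become detectable.

```python
import hashlib
import hmac

# Hypothetical illustration: a publishing platform attaches an
# authentication tag to each post, so readers can check that it was
# issued under a registered author credential rather than by an
# anonymous bot. (A real system would use public-key signatures;
# HMAC with a shared secret keeps this sketch dependency-free.)

def sign(message: str, author_key: bytes) -> str:
    """Return a hex tag binding the message to the author's key."""
    return hmac.new(author_key, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, tag: str, author_key: bytes) -> bool:
    """Check the tag in constant time; altered text or a wrong key fails."""
    return hmac.compare_digest(sign(message, author_key), tag)

key = b"author-registered-secret"  # placeholder credential
post = "Written by a human, with body and mind."
tag = sign(post, key)

assert verify(post, tag, key)                     # authentic post passes
assert not verify("Bot-edited text.", tag, key)   # tampering is detected
```

The hurdle this raises is exactly the “expensive” kind mentioned above: forging a valid tag without the credential requires breaking the underlying cryptography, while checking one costs almost nothing.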

The imminent threat from technology today no longer lies just in its physical power. Information technology has the power to shape our perception and distort reality to the advantage of the invisible owners of machines. Ultimately, information technology has the potential to rob us of our basic ability to shape the world into what we’d like it to be. You may find that old-fashioned, sentimental, unscientific, or “just too bad but inevitable”. The laws of nature are given; human autonomy is a hard-fought achievement that transcends them. Our freedom to shape reality, to choose between right and wrong, the freedom to make mistakes and take responsibility is as real as gravitation. Let’s not cede that superpower to those who run the machines.