On their path to world domination, intelligent machines will steal all our jobs, but could they (or should they) ever steal the president's? The case for a computer-in-chief.

WATSON 2016: It's a placard you're unlikely to see on anyone's front lawn in the final months leading up to Election Day. You won't see the supercomputer in televised debates or speaking at national conventions. Watson doesn't kiss babies, court super PACs, or email campaign supporters for donations. Watson never even leaves its room.

None of this has stopped Watson's presidential campaign manager, Aaron Siegel, from stumping on behalf of IBM's hyper-intelligent, Jeopardy-winning supercomputer. An artist and design lecturer at the University of Southern California by day, Siegel began his campaign to elect an artificial intelligence to the highest office in the land last March, right around the time Republicans began announcing their candidacies en masse. Siegel was appalled by the general media spectacle surrounding the presidential race.

"So I started thinking: who would be an ideal candidate?" Siegel says. "After a while, I came to the conclusion that the ideal candidate was not a person, but a machine that could make decisions in a more objective way."

President AI offers one huge advantage over its fleshy competitors: the ability to make optimal decisions. As Siegel sees it, the modern world has gotten too complex for human candidates to possibly consider all the minutiae that go into making a decision, as well as the repercussions of every move. But not for an intelligent machine, which could make those decisions without the emotional politicking that defines American statecraft.

"I came to the conclusion that the ideal candidate was not a person, but a machine"

Why Watson? Like presumptive Republican nominee Donald Trump, the supercomputer has the notoriety that comes with a powerful personal brand, having catapulted into the national spotlight after trouncing Jeopardy champion Ken Jennings in 2011. Siegel also likes Watson for its transparency. For example: when it played Jeopardy, the machine displayed its top three candidate answers for each clue, ranked by how confident it was in each. Perhaps the world of presidential politics, where candidates are not allowed to appear anything less than absolutely certain, could use a little of Watson's honesty.

Watson 2016 is less a real campaign and more a rebuke of how crazy our political system has gotten. But his project raises some great questions: Are transparency, cold logic, and the ability to perform 80 trillion operations per second the ingredients for a good president? Despite Siegel's optimism about Watson's candidacy, many AI researchers aren't so sure. Still, given how many jobs the machines will be doing for us in the coming years—and looking at the less-than-sterling characters who run for higher office in this country—you have to ask yourself: Why not a computer-in-chief?

Programming a Politician

He shall take care that the laws be faithfully executed.—U.S. Constitution, Article II

One big knock on Barack Obama when he ran for president in 2008 was the candidate's lack of executive experience. The same criticism is likely to be leveled at any supercomputer candidate, which would lack not only executive experience but also the experience of being a living, breathing, feeling meatbag. AI has already proven itself capable at politics, having analyzed and written political speeches. But would Watson or future AI contenders be able to quell voter doubts on the whole "not being a human" thing? Thanks to recent advances in machine learning, the answer might be yes.

When we talk about a machine that could be commander-in-chief, we're talking about what computer science researchers call Strong AI: machines with a functional intelligence comparable to that of a human. And to have Strong AI, you really need machine learning. As AI pioneer Arthur Samuel once famously described it, machine learning is "the field of study that gives computers the ability to learn without being explicitly programmed." It's not enough to have a powerful computer that follows directions. To lead a nation of 300 million people, you've got to be able to learn and adapt.

Enter artificial neural networks (ANNs). This is a type of machine-learning algorithm modeled after the human brain, insofar as it uses a network of artificial neurons, or nodes, to process data. If you've ever run a picture through Google's DeepDream, the trippy computer-vision program that has been trained to recognize faces and other patterns in images, you've already encountered a neural net in action.

At the most basic level, there is an input layer and an output layer of neurons, connected to one another via a number of hidden layers, each of which contains its own clusters of artificial neurons. Fed a data set, the network learns to recognize patterns in the data by performing many computations at once across its nodes. The more data the machine is fed, the better its predictions become. It is, in effect, learning. This nonlinear style of data processing, in which a network performs many different computations in parallel, has revolutionized machine learning. Once trained, neural networks are remarkably fast and responsive. They adapt to changes in input data, and their ability to comprehend and solve complex, abstract problems is beginning to rival (and in some instances surpass) that of a human.
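
To make that concrete, here is a minimal sketch in Python of the forward pass just described—an input layer feeding a hidden layer feeding an output layer, with each layer's neurons computed in parallel as a single matrix operation. The layer sizes and weights are illustrative stand-ins, not anything resembling Watson's actual architecture:

```python
import numpy as np

def sigmoid(x):
    """Squash each neuron's weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative network: 4 input features, 5 hidden neurons, 3 outputs.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 5))  # weights: input layer -> hidden layer
W_output = rng.normal(size=(5, 3))  # weights: hidden layer -> output layer

def forward(features):
    """One forward pass: each layer's neurons fire in parallel
    (expressed here as a single matrix multiplication per layer)."""
    hidden = sigmoid(features @ W_hidden)
    return sigmoid(hidden @ W_output)

# One input vector in, one score per output neuron out.
print(forward(np.array([0.2, 0.7, 0.1, 0.9])))
```

Training—omitted here—consists of nudging W_hidden and W_output to shrink the gap between the network's outputs and the right answers; that adjustment is the "learning" in machine learning.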

Will artificial neural networks ever have what it takes to run a country? At first glance, there are some presidential tasks that such an AI could perform better than its fleshy forerunners. The way an AI uses a neural network to determine its output is very similar to the way a human president should make policy decisions. For example: The president begins with a set of data (say, the state of the economy as judged by unemployment rates and the historical trends of the stock market) and parses this data in accordance with a predetermined set of parameters for action (the ostensible aims of his or her party) to see if a particular plan is going to work.
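
As a toy illustration of that decision loop—every indicator, weight, and plan below is invented for the example, not drawn from any real policy model—the "predetermined parameters" become weights on the data, and the optimal decision is simply the highest-scoring plan:

```python
# Toy policy evaluator: data in, predetermined parameters applied, decision out.
# "Ostensible aims of the party," expressed as weights on each indicator
# (a negative weight means the party wants that number to go down).
party_priorities = {"unemployment_rate": -2.0, "stock_trend": 1.0}

# Each plan's projected effect on the indicators (all numbers invented).
plans = {
    "stimulus":   {"unemployment_rate": -0.6, "stock_trend": 0.3},
    "tax_cut":    {"unemployment_rate": -0.2, "stock_trend": 0.5},
    "do_nothing": {"unemployment_rate":  0.1, "stock_trend": 0.0},
}

def score(effects):
    """A plan scores well if it moves the indicators the way the party wants."""
    return sum(party_priorities[k] * v for k, v in effects.items())

best = max(plans, key=lambda name: score(plans[name]))
print(best)  # -> "stimulus", under these made-up numbers
```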

Presidents do not arrive at a policy decision alone—they rely on the input and opinions of hundreds of policy advisors, a chain that could begin with a lowly data-input intern and filter up, up, up until it reaches the president's inner circle of advisors: the cabinet. Each link in this chain is a person capable of statistical error or willful manipulation—perhaps someone accidentally or intentionally leaves out a piece of data that is crucial to arriving at an optimal policy decision. This is a problem an AI executive could potentially rectify. Data could be fed directly into President Watson at even the lowest levels of government. Supercomputers can parse far more information at far greater speeds than people can, ensuring that every relevant data point is considered when arriving at a policy decision. Eliminating the long data-filtering process also eliminates many of the opportunities for human error—intentional or otherwise.

Yet some AI experts are not so sure that a supercomputer would outpace humans in the Oval Office when it comes to policy decisions. Exhibit A is Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield and a noted skeptic of the possibility of Strong AI.

"AlphaGo beating a human in a game of Go is an incredible achievement, but even if the machine won, it still can't make you a cup of tea afterwards."

Sharkey is quick to point out that despite the impressive advances made in machine learning, computers like Watson are still just narrow AIs—they are trained to do one very specific task, like search through medical records or play a game of Go. These narrow AIs tackle problems with a "brute force" approach, filtering through every possible answer before determining a correct one. While this is helpful as a tool, it is not necessarily intelligence. Or as Sharkey put it, "AlphaGo beating a human in a game of Go is an incredible achievement, but even if the machine won, it still can't make you a cup of tea afterwards." This is a far cry from the versatility of a president, who must make policy decisions on everything from the economy to the environment to war. Today's AI can perform incredible numbers of calculations, but it cannot confront unfamiliar situations—a particularly human capacity a president is likely to draw on every single day in the Oval Office.

"Watson is sucking up and pumping out information and doing a bit of problem solving, but the job of the president is really very different from this," Sharkey says. "A lot of what the president does is a politics. It requires an understanding of the nature of humanity and humans, and sympathy for humans and those who have suffered. I don't think a machine would be able to do anything like that for a very long time."

Kristinn Thórisson, the founder of the Icelandic Institute for Intelligent Machines, is not nearly as skeptical as Sharkey about the possibility of a general artificial intelligence that rivals the capabilities of a human, but he's also not sure neural nets are the way to get there. To combat the limits of narrow AI, Thórisson has proposed a constructivist approach to artificial intelligence. The general idea: rather than hand-designing an AI from the top down to perform a fixed set of mental functions (which is how all narrow AIs are built), you build an AI from the ground up by giving it the tools to manage its own cognitive abilities. In short, you teach the machine how to acquire the data necessary for programming itself.

This may sound like a pretty futuristic idea, but Thórisson has already demonstrated his constructivist methodology with an incredibly convincing proof of concept. In a project known as HUMANOBS, Thórisson and his colleagues managed to have an autonomous AI system learn human communicative behavior in real time by observing humans participate in a TV-style interview.

According to Thórisson, after only a few minutes of observing the humans participate in an interview, the AI began to generalize some basic human communication behaviors and learned how to structure an interview, to the point where it could join the interview as either the interviewer or interviewee and continue the conversation. Yet the ability to teach a machine to hold a conversation isn't worth much if no one wants to speak with something as uncharismatic as a computer. But luckily for Thórisson, a personable machine may not be too far away.

Cult of Personality

He shall hold his office during the term of four years, and, together with the Vice President, chosen for the same term, be elected.—U.S. Constitution, Article II

Before a computer can be president, a computer must run for president. Politics is personal, and a machine's personality (or lack thereof) could be a major stumbling block both on the campaign trail and once it's in office. But with recent advances in affective computing, an artificially intelligent candidate that is as handsome as Kennedy, as articulate as Obama, and as moral as Lincoln may one day be a reality. Imagine meeting a presidential candidate once and knowing they'd never, ever forget your name, face, or voice. Or an android pol who exudes empathy or humor. (Sorry, Data. One day you'll get the joke.) Artificial neural networks could also surpass other machine-learning methods at these campaign-trail tasks once considered quintessentially human.

The art and science of teaching a machine to simulate human emotion starts with social signal processing. This is the process whereby machines learn to read certain behavioral cues—such as inflections in your voice, facial expressions, or other body language—as data. Once it learns to identify those tics, the machine can then infer the emotive state of people exhibiting these behaviors.

The way you teach a machine to tell a happy person from a sad one, a sincere tone of voice from a sarcastic one, actually isn't that far from Watson's training—specifically, how IBM taught the supercomputer to process natural language so it could parse the meaning of Jeopardy questions. A massive data set (in this case photos and videos of facial expressions or body language and recordings of people speaking emotionally) is fed into the computer. It then uses predetermined parameters to learn the social signals that correspond to emotions. This not only allows the machine to understand a human's emotional state, but also lets the computer respond in a way that makes sense given the context. That way, Candidate Computer doesn't spit out something glib and happy upon hearing that your pet just died.
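
A heavily simplified sketch of that training loop, in Python with scikit-learn—the behavioral-cue features and labels here are invented toy stand-ins for the massive photo, video, and audio data sets the real systems require:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "social signal" data set (all numbers invented for illustration):
# each row is one observed person; the columns are measured cues:
# [smile_intensity, brow_furrow, voice_pitch_variance]
X = np.array([
    [0.9, 0.1, 0.7],   # broad smile, relaxed brow, animated voice
    [0.8, 0.2, 0.6],
    [0.1, 0.9, 0.2],   # no smile, furrowed brow, flat voice
    [0.2, 0.8, 0.3],
])
y = ["happy", "happy", "sad", "sad"]  # human-supplied labels

# Training = finding parameters that map cues to emotional states.
model = LogisticRegression().fit(X, y)

# A new face/voice reading gets classified, with a confidence score.
print(model.predict([[0.15, 0.85, 0.25]]))        # -> ['sad']
print(model.predict_proba([[0.15, 0.85, 0.25]]))  # class probabilities
```

The confidence scores matter as much as the label: a machine that knows it's only 55 percent sure you're sad can hedge its response accordingly.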

Simulating human emotion in a machine brings with it a very particular set of stumbling blocks, though. As weird as it sounds to say this, there still isn't much scientific consensus on just what human emotion is. Is it something that can be reduced to a number of physiological reactions and behavioral cues, or is it more nuanced and bound by cultural contexts?

"Imagine teaching a machine the simple difference between crying for joy and crying out of anguish"

Given the startling range and complexity of human emotion, the data sets necessary to really teach a machine to understand human emotions would be massive. Imagine teaching a machine the simple difference between crying for joy and crying out of anguish—it's a gargantuan task. There are researchers at Affectiva who are trying to make it happen. A spinoff from MIT's renowned affective computing department (they basically invented the discipline), Affectiva has compiled the world's largest "emotion database," comprising some 40 billion data points derived from analyzing 4 million faces in an attempt to teach an AI to recognize human emotion.

Of course, we must also consider the possibility that even if machines get good at simulating and recognizing emotions, they will never really have them. This is a question that has haunted the pages of science fiction for decades. Her, Spike Jonze's 2013 film wherein Joaquin Phoenix falls in love with an affective AI, is an especially good example. Scarlett Johansson's Samantha AI is a fantastic conversation partner and listener. Yet the ability to simulate an emotion without actually feeling that emotion is the definition of a psychopath—probably not the person (or machine) you want leading your country.

Watson and Samantha are just computer programs with avatars. Where things really get weird is when robots that walk like us start pretending to have emotions. Known as social robotics, this area of artificial intelligence is where you'll find robots that are beginning to look so humanoid that they are just downright creepy—and that's exactly the problem. Although social robotics has its benefits, particularly when it comes to things like assistance for the elderly, it is also plagued by a side effect that becomes more prevalent as social robots are perfected. The phenomenon is known as the uncanny valley: that sense of revulsion you feel when confronted by a robot or animation whose features look and move almost—but not quite—like those of a real human.

Given that we're unlikely to have robots that move with the natural human grace of Ava in Ex Machina any time soon, the uncanny valley remains a major challenge for roboticists and AI researchers who are seeking to design emotionally cognitive robots without freaking out the humans.

This is another reason Siegel thinks Watson is just the supercomputer for the Oval Office. It makes no pretension to being a person. The computer lacks anything resembling a human face; its mug is more like a planet with debris in orbit. The IBM crew programmed Watson with just enough "personality" to make for easy human–machine interaction and a few entertaining rounds of Jeopardy, but not enough character to make you wonder if there is really a human inside Watson's server.

Computer-in-Chief

The President shall be commander in chief of the Army and Navy of the United States, and of the militia of the several states, when called into the actual service of the United States.—U.S. Constitution, Article II

In addition to being the head of state, the POTUS is also commander-in-chief of the armed forces. Which means that a computer president would be a killer robot.

Stephen Hawking and Elon Musk like to express their concern about the coming of the killer machines, a concern that recently culminated in an open letter detailing just why we all need to be worried about the rise of AI and its use in autonomous weapons. This was no fringe position. Dozens of researchers on the cutting edge of AI development, such as Google DeepMind's CEO Demis Hassabis, signed the letter, too.

We already have drones that can fly over enemy territory and drop death from the skies. But somewhere there's a human at the controls—even if she's in a beige office on the other side of the world. The real worry is that "truly autonomous weapons" could be "practically if not legally feasible … within years, not decades." That's when you cross the bridge from machines that kill to machines that decide who to kill, and when.

What if the commander of the armed forces was a computer? Does this count as one of the "offensive autonomous weapons" cited in the letter? According to Sharkey, who serves on the International Committee for Robot Arms Control, this largely depends on the authority we cede to President AI.

Sharkey and his colleagues on the committee believe the decision to kill another human in battle should always be left to a human. Although an AI president wouldn't be on the ground deciding whether to shoot a combatant, it would be making presidential-level decisions about whether to engage an enemy abroad. If the final decision to send human soldiers into battle were left to an AI commander-in-chief, it would violate the committee's principle of never leaving the final decision about life and death to a machine.

One way around this thorny issue: a president isn't supposed to have unilateral authority to make war in the first place. Under the War Powers Resolution of 1973, the president needs congressional authorization to commit U.S. troops to sustained combat. Nevertheless, a number of presidents have been accused of violating this protocol, such as Clinton during the bombing of Kosovo in the '90s and, most recently, Obama for his decision to intervene in Libya and for routinely making the final, unilateral call on whether to target a terrorist with a drone strike. For Sharkey, ceding such power to an AI president is a nonstarter.

And here's a national security issue for the far-off future if there ever was one: What if our president gets hacked?

"It is hard to reprogram humans, but it's really easy to redirect an AI, because all its code is available," says Thórisson. "It may not be quite as easy as writing a new program in Basic, but it's certainly easier than getting humans to do things they otherwise wouldn't do. That's the real danger of a human-level intelligence in a machine. It could be a very powerful weapon in the hands of the wrong people."

The thing about intelligent machines, and those that use artificial neural networks in particular, is that they are something of a black box. There is an input and an output, but figuring out what happened in between to arrive at the output can be a very difficult and labor-intensive process. This could make discovering a hack of the president almost impossible. And Thórisson's concerns are far from unfounded: as many information-security experts agree, a perfectly secure computer is something of a fantasy. For every lock, there must be a key, which means that (barring a revolution in cryptography) an AI president will likely always be vulnerable to security breaches.
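
The black-box problem is visible even at toy scale. A minimal sketch (reusing the kind of tiny numpy network from earlier; all values illustrative): inspecting a trained network's parameters yields only anonymous grids of numbers, so a maliciously nudged weight looks exactly like a legitimate one.

```python
import numpy as np

# Parameters of a small "trained" network (values stand in for real ones).
rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(4, 5))
W_output = rng.normal(size=(5, 3))

print(W_hidden)  # rows of floats; nothing here says what the network "believes"

# An attacker's tampering is just one more float among millions or billions:
W_hidden[2, 3] += 0.25  # which decisions did this change? The matrix won't say.
```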

"A machine like Watson is a giant calculator," Thórisson says. "No matter how good of information it has, I would be concerned about how it would know whether people were gaming it and how to know it was making good decisions. All of these systems have their limits and faults and I'll bet that other politicians would be very quick at finding out how to use the machine to their advantage."

Make AI Great Again

The executive power shall be vested in a President of the United States of America.—U.S. Constitution, Article II

Predicting whether we're 10 or 20 or 100 years from putting a robot on the ticket is a fool's errand. The future never comes at the pace you expect. But the capacity of AI has exploded in just the last two decades, and with innovations in areas such as quantum computing poised to accelerate its development further, a computer capable of carrying out the tasks necessary to lead our country might not be as far away as we think.

"No wonder Trump and Sanders are doing so well … people will go for anything that promises escape from the political machines."

While at least some AI researchers concede that creating a presidential computer is within the realm of possibility, that is an entirely different question from whether we should elect an AI president. Siegel, for his part, is still convinced Watson would make a stand-up president.

Yet Sharkey and many others who are actually tasked with developing artificial intelligences have a far different outlook.

In the words of Alex Pentland, a renowned computer scientist at MIT's Media Lab, electing an AI to be president "sounds like a very, very bad idea." To put it bluntly, there's a difference between being a bureaucrat and being a leader. To Pentland, an AI would make the ultimate bureaucrat. But at this point, AI would be terrible at what we humans consider to be leadership. (An interesting caveat: A recent study showed that humans are more likely to trust an AI "leader" in times of crisis—until the AI makes its first mistake, that is.)

Despite the potential gains in efficiency promised by an AI president, Pentland is not sure a world with mechanical politicians is a world we'd want to live in. He cites the use of big data in political campaigns today—where candidates use sophisticated algorithms to build voter profiles and then send hyper-individualized campaign messages to these voters—as an example of the kind of creepy, soft dystopia that could arise from an AI presidency.

When you're polling behind death, maybe it isn't your year.

"We are already in a world that is disturbingly close to that envisioned in 1984," Pentland said. "The candidates' 'machines' watch our every peep. No wonder Trump and Sanders are doing so well … people will go for anything that promises escape from the political machines."

Where Siegel is optimistic that voters faced with a Trump vs. Clinton election might prefer anything else, other metrics suggest that the last thing the U.S. electorate wants is a robo-president. Case in point: a recent survey out of Chapman University showed that Americans are generally still terrified of machines taking over. In fact, respondents were more scared of AI and related technologies than they were of death itself. When you're polling behind death, maybe it isn't your year.

Open the Pod-Bay Doors, Mr. President

The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.—U.S. Constitution, Article II

Finally, we might want to reconsider the foundation upon which the promise of a robot president rests: the idea of AI as an impartial, unbiased machine. According to Pentland, it's little more than wishful thinking. Coding is political—or at the very least, biased—and just because a machine can extrapolate from the original algorithm without further input does not mean it doesn't carry that bias, its original sin.

"People have always been drawn to the unbiased, omnipotent father figure," Pentland says. "Unfortunately AI is nowhere near that. AIs are essentially unable to extrapolate beyond what they have seen already, and feeding them different perspectives doesn't really help, because they can't judge context and motives competently."

As machines become more creative, judicious, rational, and social, it is not inconceivable that one might at least become the president's right-hand computer—more a necessary tool than a leader in its own right. For Sharkey, though, if speculative discussions about computers-in-chief do anything, they distract us from the ways in which AI is already beginning to take over the small, banal tasks of everyday life.

"It's very difficult to predict technology," said Sharkey. "I don't like to speculate too much because I think there are so many things of danger on the current horizon. We could think about a super-intelligent president, but overlook the fact that artificial intelligence is already taking over. We're ceding control to AI at a low level, and this is all very dangerous to me."

Perhaps Sharkey and the other #NeverAI researchers are right: While it's not only improbable but also ill-advised to cede total control to a computer by electing a machine to the Oval Office, AI is already invading the political sphere—and every other part of our lives—in ways we're only beginning to understand. Maybe we should be more worried about those very real problems than voting for a hypothetical candidate computer.

On the other hand, given the way that Election 2016 is going, we'd certainly understand if you wanted to write in Watson.

Photo research by Jennifer Newman; Photo Illustrations by Michael Stillwell