The HBO series pits machines with morals against man — but could it really happen?

Picture this: robots enslaved in a Wild West-themed park gain consciousness and revolt against their tormenting programmers. Could any of this happen in real life?

The complicated answer lies within the intricacies of Artificial Intelligence — and the stakes could not be higher. “If we try, it seems we will one day build intelligence that matches our own,” says Pieter Abbeel, a professor in the robotic learning lab at UC Berkeley and the cofounder of covariant.ai, a Silicon Valley company. “Then, the next minute, the robot will be smarter than us, probably able to make us do whatever it wants, and shutting it down will not be an option. So, the big question becomes — ‘What happens next?’”

In the second season of Westworld, returning this Sunday on HBO, AI-enhanced robots have achieved independence and are moving beyond the confines of their theme-park world, a futuristic scenario rooted in research only now beginning to be explored. The science of AI, first theorized in 1950 by British scientist and mathematical genius Alan Turing, has only recently become a reality, due to modern advances in computer technology.



Through algorithms that tell computers what functions to perform, AI enables robots to solve problems on their own, achieving goals they have been given by researchers. In the lab, a scientist will come up with a specific objective (often called the “reward function”), program the robot’s algorithm to achieve that objective (its “optimization”), and then let the machine’s AI work through countless approaches until it accomplishes its goal — the correct process then remembered so it can be repeated more quickly (the “learning”).
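That reward-function/optimization/learning loop can be sketched in a few lines of code. This is a loose illustration, not any lab's actual system — the task, the numbers and every name here are invented for the example:

```python
import random

# The researcher's objective (the "reward function"): a toy task where
# a robot takes 9 moves of +1 or -1 and should end up at position 7.
TARGET = 7

def reward(moves):
    # Higher is better; 0 means the goal was reached exactly.
    return -abs(sum(moves) - TARGET)

def optimize(num_attempts=5000, horizon=9):
    """Work through many candidate approaches (the "optimization"),
    remembering the best one found so far (the "learning")."""
    best_moves, best_score = None, float("-inf")
    for _ in range(num_attempts):
        candidate = [random.choice([1, -1]) for _ in range(horizon)]
        score = reward(candidate)
        if score > best_score:
            best_moves, best_score = candidate, score
    return best_moves, best_score
```

Real systems use far more sophisticated optimizers than this random search, but the shape is the same: a human-written score, a machine trying approaches, and the best approach remembered.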

For now, researchers have been able to program AI robots to figure out basic tasks, such as folding laundry, working on assembly lines or using voice commands to retrieve information (most home assistant devices on the market rely on AI). The ultimate goal, however, is to write algorithms that will enable robots to deal with the real world — and all its millions of variables. “The problem is there are so many things system designers don’t take into account,” says Dylan Hadfield-Menell, a researcher in human-robot interaction at UC Berkeley. “Let’s say you program a robot to go find gold, factor in grass and dirt, and then set it in Hawaii, where there is lava. The robot will treat the [harmful] lava as perfectly acceptable — and bad things will happen.” He pauses. “We’ve been so focused on building systems to do anything at all, we haven’t been making sure we’re doing the right thing. So as automated systems become more powerful, we have to ensure their objectives are what we actually want — or we’re going to see large-scale harm.”
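Hadfield-Menell's lava scenario is easy to reproduce in miniature. In this hypothetical sketch (the terrain costs and routes are made up), the designer only modeled grass and dirt, so anything unanticipated silently falls back to a harmless default cost — and the optimizer cheerfully routes the robot through lava:

```python
# Terrain costs the designer anticipated. Anything unmodeled — like
# lava — falls through to the default, i.e. "perfectly acceptable."
KNOWN_COSTS = {"grass": 1, "dirt": 2}
DEFAULT_COST = 1  # the unstated assumption that causes the trouble

def path_cost(path):
    return sum(KNOWN_COSTS.get(terrain, DEFAULT_COST) for terrain in path)

# Two routes to the gold: one safe, one through lava.
safe_route = ["grass", "dirt", "dirt", "grass"]  # cost 1+2+2+1 = 6
lava_route = ["grass", "lava", "lava", "grass"]  # cost 1+1+1+1 = 4

best = min([safe_route, lava_route], key=path_cost)
# The optimizer picks the lava route — bad things happen.
```

The failure isn't in the optimization, which works exactly as intended; it's in an objective that never mentioned the thing that mattered most.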

For scientists working to create sophisticated AI robots, the goal is to give machines the human-like ability to pause and reflect before taking action, an objective recently leading to a new advance — the meta algorithm. Essentially a safeguard to monitor robot behavior, the meta algorithm oversees the robot’s actions and can step in to force it to halt and take measure when it encounters an unknown variable, prompting it to send a wireless message to a programmer for guidance. For researchers, the meta algorithm is a means for man and machine to work together more closely, a trigger mechanism to ensure robots navigate the world safely. But the meta algorithm also conjures a new vision of robot intelligence, one in which they have the ability to pause, deliberate and make more human-like nuanced choices — a step closer to a possible real-life Westworld. “What humans achieved through evolution — a theory of mind — computers, in principle, can achieve through programming,” Abbeel says. “It might need to happen through simulating evolutionary paths similar to what we’ve experienced in the real world — but it doesn’t seem fundamentally out of scope.”
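The meta-algorithm idea, as described, amounts to a supervisory layer wrapped around the robot's normal behavior. A minimal sketch of that pattern follows — every name here is illustrative, not a real robotics API:

```python
def meta_supervisor(observation, known_states, act, ask_programmer):
    """Sketch of a 'meta algorithm' safeguard: let the robot act on
    situations it knows, but halt and escalate anything unknown."""
    if observation in known_states:
        return act(observation)
    # Unknown variable: pause and send the observation to a human
    # for guidance (standing in for the wireless message to a programmer).
    guidance = ask_programmer(observation)
    known_states.add(observation)  # remember the ruling for next time
    return guidance

# Usage: a robot that knows grass and dirt encounters lava and escalates.
known = {"grass", "dirt"}
walk = lambda terrain: f"walk on {terrain}"
result = meta_supervisor("lava", known, walk, ask_programmer=lambda t: "halt")
```

The interesting design choice is the last line of the unknown branch: by adding the programmer's ruling to its known states, the system only interrupts a human once per novel situation.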

In order to create a true Westworld — a theme park in which humans interact with lifelike androids — AI robots must be able to realistically interact with people. For now, the first stumbling block is literal: the ability for robots to walk. At the moment, one of the most advanced two-legged walking robots is Atlas, a 5’, 165-pound humanoid created by Boston Dynamics. Incredibly, Atlas can walk, jump and even do backflips, but maintaining a lifelike gait on limited battery power (another hurdle for developers, along with realistic hand movement) is difficult. “These things are cool and progressing but still clunky and energy inefficient — the mechanics of running are a problem,” Abbeel says. “But the biggest obstacle is visual capability. You want these robots to be able to sense their environment and do things based on what they see. It’s the same thing you need for self-driving cars — the ability to recognize things.”

Despite the challenges, scientists believe they will soon create a robot with lifelike agility, perhaps in the next 10 years, leading to the more profound hurdle to a real-life Westworld: the ability for AI androids to interact with humans in a realistic manner.

“The next step is to have a back-and-forth conversation with a robot that is programmed to a set script,” says Omar Abdelwahed, the Head of Studio at SoftBank Robotics. “That’s possible, but where it starts to fall down is speech recognition, like when you ask your home assistant something and she responds with something crazy; getting a robot to a high level of understanding is difficult.” A former video-game designer, Abdelwahed believes a future Westworld park will operate in the same way that video games do today: the robots give the illusion of ultimate choice, their scripted answers and movements performed by AI in conjunction with distant handlers who can override the robot if necessary. “The trick in video games is to give the players choices and then create branching paths that ultimately converge again at a central point,” Abdelwahed says. “You feel like you’ve explored and discovered stuff, but you’re still on a curated path. So, in a real-life Westworld park, if you have robots with hardware that could hear really well and the content is 100% curated by teams of writers who are overseeing various scenarios, it could be the best experience of your life.”

At the moment, scientists are working to create increasingly advanced visual, hearing and walking capabilities for AI robots, innovations to allow more humanlike interaction — but is it possible for the machines to one day gain self-awareness and morals, to essentially become human as they do in Westworld? While there is plenty of disagreement in the field, many AI researchers and leading tech visionaries warn of AI one day evolving beyond our intentions, even perhaps causing a large-scale disaster like World War III. In his 2014 bestseller Superintelligence, Oxford professor Nick Bostrom argues that we may one day create AI robots smarter than ourselves, their super-intelligent capabilities giving them the power to overtake the world — not with violence, but with WiFi. “If it takes humans 20 years of growing up to learn a bunch of stuff, and another system can do it in a day, then who knows where that system will be the day after that?” asks Abbeel, who previously worked for OpenAI, a non-profit organization funded in part by Elon Musk. “The computer might prove every unknown theorem in math, plan a mission to Mars and solve a bunch of diseases. But it can also probably influence us to do whatever it wants because it can tell us stories. And the idea of shutting it down is naive because it will just convince us to do otherwise. So then what else is it going to do?” He pauses. “Just look at the robot Dolores on Westworld. She uses manipulation to get the job done.”

While a real-life Westworld seems possible, most researchers can agree on one thing — we won’t see it anytime soon. “Not in our lifetime,” says Anca Dragan, a professor at UC Berkeley who runs the InterACT lab, a research group focused on enabling robots to work with people. “It might not happen for hundreds of years. You’re talking about creating robots that can basically do everything a human can do — and at this point, we’re still working on ones that can handle your laundry.” While some companies like Hanson Robotics are creating lifelike androids, many researchers are instead focusing on building AI robots that look nothing like people but can convey human-like communication through gestures — more Wall-E than Terminator. In the near future, these AI robots will be programmed to perform single specific tasks, such as helping around the house, aiding in surgery or, most visibly, driving our cars. “We are working on robots that look like animated movie characters you can put in your home that will help you with day-to-day tasks,” says Dragan. “[Things like] lifting things or reaching high shelves, that will also move with grace. If you make a robot that looks like a human then you just raise expectations of its capability — expectations we can’t deliver on for a long time.” In fact, moving away from lifelike robots is a growing trend — humans are often disturbed by androids that are close to, but not exactly like, them. “Machines with lifelike skin creep me out,” says Abdelwahed, whose Pepper humanoid robot looks like an animated film creation, helping people with specific questions in airports and coffee shops. “No one’s going to buy a realistic-looking robot. It’s weird.” Among themselves, researchers even talk of people becoming so upset by AI robots that they destroy them. “There are stories of people putting robots out there and having them come back with broken fingers,” Hadfield-Menell says. “People are generally not cooperative. They like to break stuff — and they also expect these systems to be smarter than they are right now.”

Still, despite all these hurdles, advancing technology and increasingly interactive destinations like hotels suggest we may be moving towards a real-life Westworld — perhaps sooner than we think. “It’s just really hard to predict,” Abbeel says. “I think the fairest thing to say is that I’ve been very surprised by all the developments over the past five years — and I expect to continue to be surprised.”