At the Star Wars: The Last Jedi Hollywood premiere this week, Radiohead frontman Thom Yorke sat down cross-legged on the red carpet to speak with BB-8, an intelligent robotic character from the Star Wars galaxy. He looked intently at the robot and pointed his finger as he spoke, as if in animated conversation with the famous space droid.

This was, of course, a light mockery of modern robots. Machines — while capable of extremely impressive automated feats — lack true intellectual and emotional development. Still, in the past year, we were inundated with reports of artificial intelligence seeping into our homes and cars. But while smarter machines may have entered our lives in 2017, humanity's AI is still in its elementary stages.

Today's mainstream AI programs typically complete simple tasks, like telling you the weather. They aren't sophisticated minds capable of evolving and growing. Rather, they're sophisticated machines.

"You deploy them and that’s the way they are," Arend Hintze, an artificial intelligence researcher at Michigan State University, told Mashable. "It sounds derogative, but they are amazingly complicated machines."

One of these complicated machines, Google's AlphaGo Zero, made big news in 2017. DeepMind, Google's AI research lab, endowed the AlphaGo Zero program with self-learning: It taught itself to become the world's most dominant Go player, defeating both the top human and AI programs without previous knowledge of how to win the game. It developed "superhuman" capabilities "which humans don't even know about or play at the moment," according to lead AlphaGo researcher David Silver.

"AlphaGo Zero is a marvel," Hintze told Mashable. "It crunched an insane amount of numbers just to accomplish this one thing."

A top Chinese Go player, Ke Jie, ponders his next move during a match against Google's artificial intelligence program AlphaGo in 2017. Image: AFP/Getty Images

But just that one thing.

The AI we dream of — and Hintze researches — thinks, acts, and understands humans. Take for instance R2-D2, the Star Wars droid BB-8 is modeled after. R2-D2 could perceive Luke Skywalker's wants, and even his sadness.

AlphaGo Zero is just a board game champion. "R2-D2 is not AlphaGo Zero," said Hintze, who studies what he calls neuro-evolution in hopes of developing an artificial mind that mimics the cognition of the human mind.

"At the end of our research, we want to have R2-D2," explained Hintze. "Something that understands human thinking."

"Every time we’re on the news it's about AI being better than humans—not collaboration."

Nowhere is this limited understanding of humans more noticeable than in our increasingly ubiquitous digital assistants, like Amazon's Alexa, Apple's Siri, and the Google Assistant.

"Alexa is good at weather," Subbarao Kambhampati, Arizona State University AI researcher and president of the Association for the Advancement of Artificial Intelligence, told Mashable. "But can Alexa handle a high-level request?"

In 2017, explained Kambhampati, the answer is no. For instance, Alexa can't yet piece together three or four operations. This means not just putting together travel plans when asked to do so, but recognizing when you like to travel and what type of hotel you want to stay at. "It needs to have an understanding of my beliefs and desires as a human," said Kambhampati.

Truly understanding humanity means AI must cooperate with humans, rather than simply following commands to play a certain song or look up a definition. This requires a reasoning ability, because how does one cooperate if one can't understand what the other party wants? "The reasoning parts are still lagging behind," said Kambhampati, who noted that it's difficult to predict how many years away this might be, but as digital assistants progress, "it's suddenly close enough."

In popular culture, the need for machine and human cooperation, unfortunately, is often overshadowed by competitions between the two entities — as was highlighted by the successes of AlphaGo Zero. "Every time we’re on the news it's about AI being better than humans — not collaboration," said Kambhampati.


Our large human minds evolved to collaborate, he says, not for primitive, deer-like instinctual functions, like fleeing from danger. "We needed the brain we have not to run away from tigers, but to deal with each other," said Kambhampati.

Truly intelligent machines, then, must collaborate too. Fictitious AI machines, like BB-8, are built to work with humans, not beat them at games or flee from danger (although BB-8, like R2-D2, can do that too).

This sort of advanced AI capability, however, is more than a century away, says Hintze, the neuro-evolutionist. In the coming few years, we'll be fortunate if personal assistants learn to distinguish the humans interacting with them, and to provide answers, choose music, or make plans based on each person's preferences.

"We are very much at the beginning of this," said Hintze, noting that we're nowhere near building empathic space robots. In the great realm of AI evolution, our machines are like primordial fish trying to leave the ocean.

"If you think of the tree of life," said Hintze, "we are just crawling to land, at best."