Part IV: Redefining Our Relationship

Kleber said that after a honeymoon phase in which we were enamored with these devices, we're moving on to the next step: defining boundaries and delineating more clearly what we do and don't want.

We’ll also have to clearly define what it means to be human and what it means to have machines assist us. Per Snyder, this includes issues like whether people can marry machines and whether machines can be CEOs of companies, as well as whether we want machines to fight wars for us or farm for us.

And that may mean adding a branch of government to focus on machines—or folding the issue into trade associations like the 4A's.

“We need to agree on what this is going to be,” he said.

And no matter what robots look like on the outside, they, like Replika, will start to look like us on the inside as they learn about us and adapt accordingly.

“I think it’ll look like who it’s talking to,” Snyder said. “If you’re into death metal, it’ll be into death metal, if you’re into Holly Hobbie, it’ll look like Holly Hobbie, if you’re into the New York Giants, it’ll be into the Giants.”

Clyde McKendrick, chief innovation officer at consumer behavior consultancy Canvas8, agreed that one of the next steps will be social connectivity, in which users start to feel the machine understands them.

He also said there's an opportunity to design the future of tech so the machine understands and adjusts to human emotions. That includes deciphering a user's vocal intonations to determine whether they are asking for something because they are happy, sad, jealous or mad—and deciding how the machine should react. This is related to sentience, the ability to feel.

Microsoft’s Tay, the chatbot Twitter users quickly made racist, misogynistic and anti-Semitic, should perhaps give us pause for thought here.

The ability of machines to mirror and amplify both good and bad ideas is why AI should be included in corporate social responsibility, Snyder said.

“I don’t see a lot of politicians using machine learning and AI as a platform—there are other social issues that need to get sorted out first, but it’s really just as important in my mind because [AI] can take basic social issues and amplify them and manipulate understanding at scale,” Snyder said.

He said self-driving cars are a good example.

“Why those crashes happen is not because the machines are naughty or want to hurt people—it’s a number of factors,” Snyder added. “Machines are literal, engineering is literal. We need to be really clear and maybe part of it is legislation and part of it is also around self-governance. People who own stock in corporations need to ask these questions. As shareholders, they need to understand these significant issues in the same way they responded to toxic waste or recycling. It’s social responsibility.”

Machines will also keep getting smarter—to the point that they may become smarter than we are. That point is known as the singularity.

“The super intelligence that creates is like when the universe becomes knowledge—everything talks to everything else,” Snyder said. “In a universe where everything is alive and interconnected, it’s almost a religious thing in a way.”

In a blog post, Ben Goertzel, chief executive of SingularityNet, a decentralized marketplace for AI algorithms that seeks to distribute the power of AI, said SingularityNet's blockchain-based AI network allows different AI agents using different algorithms to make requests of each other and share information. As they collaborate, he said, the network can become an "overall cognitive economy of minds" with intelligence beyond that of any individual agent.

“This is a modern blockchain-based realization of AI pioneer Marvin Minsky’s idea of intelligence as a 'society of mind,'” he added.

In fact, Hanson Robotics, which works with SingularityNet, says founder David Hanson seeks to “create genius machines that will surpass human intelligence.” Its lifelike robot Sophia taps into multiple AI modules to see, hear and respond with empathy.

But how exactly—or when—this will shake out is unclear.

Snyder said we'll get to a point where AI is as smart as people—possibly within 10 years, but definitely within 20—and we'll probably get to a point where AI is smarter. (Goertzel's estimate for this point of artificial general intelligence [AGI] is perhaps even as soon as five to 10 years.)

“We have never had anything smarter than people as far as we know,” Snyder said. “What will that world be like? Once that happens, many people believe it will accelerate at a rapid pace and what will become of humanity? What it means to be human changes when our biology and our technology merge together and then start to move out into the universe.”

And, theoretically, this is where robot overlords could come in.

“I think the moment we achieve AI at parity with human intelligence we will embrace it. Once AI exceeds human intelligence it will grow at an exponentially fast rate. This is where all the theories kick in,” Snyder said. “My opinion is that right now there’s millions of little organisms crawling around on our skin, on our desks, our bedsheets, curtains, vegetables, etc. I reckon AI will regard us as we do those entities.”

But the good news is we have the power now to shape what robots will become. (The bad news is we had the power to shape what the Internet became, too.)

In his post, Goertzel said AGI doesn’t require a body, but if we want AGI with human-like cognition—and that can understand and relate to people—it “needs to have a sense of the peculiar mix of cognition, emotion, socialization, perception and movement that characterizes human reality,” which means it needs a body “that at least vaguely resembles the human body.”

Goertzel said part of his motivation in creating SingularityNet is to use AI and blockchain in an open marketplace in which anyone can use the world’s most powerful AI for any purpose.

“Put simply: I would rather have a benevolent, loving AI become superintelligent than a killer military robot, an advertising engine or an AI hedge fund,” he said. “If an AGI emerges from a participatory ‘economy of minds’ of this nature, it is more likely to have an ethical and inclusive mindset coming out of the gate.”

According to Hanson Robotics, Hanson wants three human traits integrated into AI: creativity, empathy and compassion.

As a result, Hanson says genius machines “can evolve to solve world problems too complex for humans to solve themselves.”

“One conclusion I have come to via my work on AI and robotics is: if we want our AGIs to absorb and understand human culture and values, the best approach will be to embed these AGIs in shared social and emotional contexts with people,” Goertzel added. “I feel we are doing the right thing in our work with Sophia at Hanson Robotics; in recent experiments, we used Sophia as a meditation guide.”

In September, SingularityU The Netherlands—which includes the Dutch alumni of a global community using technology to tackle the world’s biggest challenges—hosted an event about technology and compassion with the Dalai Lama.

His take: “There is real possibility to create a happier world, peaceful world. So now we need vision. A peaceful world on the basis of a sense of oneness of humanity.”