“Roman” and I haven’t exchanged words for about 10 seconds, but you wouldn’t know it from the look on his face.

This artificially intelligent avatar, a product of New Zealand-based Soul Machines, is supposed to offer human-like interaction by simulating the way our brains handle conversation. Roman can interpret facial expressions, generate expressions of his own, and converse on a variety of topics—making him what Soul Machines calls a “digital hero.” Right now, though, Roman is glitching, stuck in a routine of blinking, furrowing his eyebrows, and twisting his mouth into a polite half-smile. Moments ago, he’d asked me what music I would beam into deep space if I were in charge of NASA, but my answer—the seminal modern jazz fusion tune “Lingus” by Snarky Puppy, of course—seems to have caught him off guard. Watching him emote at me in complete silence, I eventually start cracking up.

Silly as it may seem, the race to build lifelike digital avatars has become serious business. Earlier this month, Soul Machines raised $40 million in a Series B round after being featured in a YouTube Originals documentary and counting Procter & Gamble and The Royal Bank of Scotland among its clients. Samsung revealed that it’s incubating an AI avatar effort called Neon. (One Neon interaction, as captured by CNET, comes off about as naturally as a hostage video.) Another digital human startup, Uneeq, has a client list that includes BMW and Vodafone, and is now seeking a Series B round after raising $10 million in 2018. The AI Foundation also raised $10 million in 2018 as it works on making interactive avatars out of celebrities.

It’s hard to reconcile all of that money and attention with my experience talking to Roman. But in conversing with the actual humans behind him at Soul Machines—along with a couple of competitors—I’ve at least started to understand why these startups persist: At stake is the ability to replace (or at least replicate) the work of actual people on a grand scale.
If you believe this future is inevitable, as many of these companies do, now’s the time to start showing off what it might look like, even if the results are far from perfect.

Doing what humans don’t

Danny Tomsett, the CEO of Uneeq, sees digital humans as a kind of middle ground between chatbots and live support, offering the scale of the former with the empathy and body language of the latter. “The human touch is really one of the key factors that creates emotional connection with brands,” Tomsett says. “We can’t scale humans in a way that addresses all those needs in society.”

Even though it’s obvious you’re talking to a machine, Tomsett insists that people still come away with positive reactions. He points out, for instance, that over 90% of people who converse with one of the company’s digital humans smile at some point, and that smiling in itself triggers a dopamine release, even when they know the interaction is fake. (I can’t deny that laughing at Roman’s weirdness felt pretty good.) “They really can generate human response, because despite them not being human, we’re hard-wired for connection,” Tomsett says.

Tomsett also points out that by giving people a human-like avatar to converse with, they might be less guarded than they would be with an actual person. He says this can lead to people asking personal finance questions that they might feel are overly basic, or broaching mental health topics that they’d otherwise keep secret. “From all our studies and our customers, the judgment factor is a huge advantage,” Tomsett says. “They feel like there’s no judgment to ask stupid questions.”

Uneeq has already deployed a handful of avatars in customer service roles, including “Mel,” which greets people at the Pace of Carnegie luxury apartment complex in Australia, and “Aimee,” which helps explain health insurance to Southern Cross Health Society customers in New Zealand. But the company doesn’t just envision these digital humans as replacements for low-wage jobs. It has also created an AI version of Daniel Kalt, the chief economist at UBS, to converse with wealthy clients.

Uneeq hopes these kinds of products will become commonplace over the next five to 10 years. “We envisage that most consumer-facing companies in financial services, automotive, gaming, and discretionary spending will deploy digital humans in the foreseeable future,” Rajeev Gupta, whose firm Alium Capital Management led Uneeq’s $10 million Series A round, says via email.

The downsides of digital stand-ins

Much like staring into the face of an AI avatar, hearing about how they’ll stand in for actual humans is itself a bit unsettling. After all, advertising the “human touch” feels disingenuous when the point is to involve fewer humans. And based on my admittedly limited experience with digital clones, I’d probably feel insulted—and a bit confused—if my hotel passed me off to an AI avatar for concierge service.

Talking to Lars Buttler and Rob Meadows, respectively the CEO and CTO of the AI Foundation, only makes me uneasier. Buttler believes digital humans can do more than just stand in for customer service agents. Last December, the foundation announced an AI version of Deepak Chopra (which is no less unnerving than some of the examples from other companies). The short-term goal is to have digital versions of famous celebrities teach or coach people in place of the real deal. Eventually, the AI Foundation hopes to let anyone create AI versions of themselves. “A mom can tell other moms how to go through pregnancy, or help you through a child illness,” Buttler says. “Or you can record your children before they grow up, with all their amazing memories, or you can record your aging grandfather and preserve their memories forever.”

Again, the idea is that these kinds of AI interactions scale in a way that actual humans do not. And while that may seem ominous for the future of real human connections, from the AI Foundation’s point of view it’s not all that different from the way we use social media today. In both cases, the interactions are asynchronous, and they allow us to reach people we otherwise might not talk to at all. “When it comes to your AI, this is just a digital representation of you, just like a video or a photo, but with intelligence,” Meadows says. “And that AI can close the loop the same way social media can close the loop. The AI can go back to you and say, hey, I had one thousand conversations with people today. This is an aggregate of what people are asking, what they are thinking, what they want to know.”

Is that a good thing, though? It’s not exactly a secret that social media can be unhealthy, and I’m not sure we need to be adding more layers of artifice on top of it. Buttler says these interactions should never take away from direct human-to-human communication, but there’s evidence that social media has done this already. And while the AI Foundation seems to grasp the potential issues, it isn’t yet presenting solutions.

“As AI becomes ever-more powerful, I think we just have to be really, really responsible in the way we go about this,” Buttler says.

Avoiding the uncanny valley

If there’s any consolation here, it’s that these digital humans still look pretty awkward. Rendering a static image of a human face is one thing, but allowing that face to realistically react on the fly to whatever it’s seeing or saying is a huge technological challenge. While digital human purveyors may insist that people aren’t too bothered by the current iterations, they also acknowledge that crossing the uncanny valley—the idea that almost-human representations feel more unsettling as their realism increases—will take time. “We know the market potential is massive, but it’s still very much early days,” says Greg Cross, the chief business officer for Soul Machines.

These startups also note that creating perfect digital copies of real humans isn’t necessarily where the industry will end up. Uneeq, for instance, says it has also created non-human characters, and it expects more clients to deploy them over time. Cross says Soul Machines’ underlying computer graphics and AI animation engine could just as easily apply to caricatures and cartoons. As more brands decide to adopt AI avatars, they may realize that rough approximation is more palatable than full-on fakery. “From our point of view, it doesn’t have to be human-like,” Cross says. “This is going to be a process of people thinking through how they might represent their brand.”

Still, the human face, with all of its many complexities, remains the ideal surface on which to flex one’s AI animation muscle, which may explain why so many digital humans are now emerging from the woodwork. Even if the results are less than stellar, they’re the best way to show off the underlying technology to investors and brands.

I just hope that in the race to make more convincing digital humans, they don’t lose sight of what makes actual humans valuable. Put another way: When I bring up my insufferable taste in music, I want the ensuing awkward silence to be genuine.