The first time Alex Acero saw Her, he watched it like a normal person. The second time, he didn't watch the movie at all. Acero, the Apple executive in charge of the tech behind Siri, sat there with his eyes closed, listening to how Scarlett Johansson voiced her artificially intelligent character Samantha. He paid attention to how she talked to Theodore Twombly, played by Joaquin Phoenix, and how Twombly talked back. Acero was trying to discern what about Samantha could make someone fall in love without ever seeing her.

When I ask Acero what he learned about why the voice worked so well, he laughs because the answer is so obvious. "It is natural!" he says. "It was not robotic!" This hardly counts as a revelation for Acero. Mostly, it confirmed that his team at Apple has spent the last few years on the right project: making Siri sound more human.

This fall, when iOS 11 hits millions of iPhones and iPads around the world, the new software will give Siri a new voice. The update doesn't include many new features or better jokes, but you'll notice the difference. Siri now pauses more often mid-sentence, elongates syllables just before a pause, and its voice lilts up and down as it speaks. The words sound more fluid, and Siri speaks more languages, too. It's nicer to listen to, and to talk to.

Apple spent years re-architecting the technology behind Siri, transforming Siri from a stand-alone virtual assistant into the catch-all term for all the artificial intelligence powering your phone. It has relentlessly expanded into new countries and languages (for all its faults, Siri is by far the most worldly assistant on the market). And slowly at first, then more quickly, Apple has worked to make Siri available anywhere and everywhere. Siri now falls under the control of Craig Federighi, Apple's head of software, a sign that Siri matters as much to Apple as iOS itself.

It'll still be a while before the tech's good enough to make you fall in love with your virtual assistant. But Acero and his team think they've taken a giant leap forward. And they believe firmly that if they can make Siri sound less like a robot and more like someone you know and trust, they can make Siri great even when it fails. And that, in these early days of AI and voice technology, might be the best-case scenario.

Siri Grows Up

If you want a good example of why Apple likes to control everything about its products, just look at Siri. Six years after its launch, Siri has by most accounts fallen behind in the virtual assistant race. Amazon's Alexa has more developer support; Google Assistant knows more stuff; both are available on devices from many different companies.

Apple says it's not its fault. When Siri first launched, another company provided the back-end technology for voice recognition. All signs point to Nuance as that company, though neither Apple nor Nuance ever confirmed a partnership. Whoever it was, Apple happily blames them for Siri's early issues. "It was like running a race and, you know, somebody else was holding us back," says Greg Joswiak, Apple's VP of product marketing. Joswiak says Apple always had big plans for Siri, "this idea of an assistant you could talk to on your phone, and have it do these things for you in a more easy way," but the tech just wasn't good enough. "You know, garbage in, garbage out," he says.

A few years ago, the team at Apple, led by Acero, took control of Siri's back-end and revamped the experience. It's now based on deep learning and AI, and has improved vastly as a result. Siri's raw voice recognition rivals all its competitors, correctly identifying 95 percent of users' speech. The AI works in two distinct and critical parts of the system: speech-to-text, in which Siri tries to figure out what you said; and text-to-speech, in which Siri speaks back.
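The shape of that two-part system can be sketched in a few lines. This is purely illustrative: Apple's internals are not public, and every function and data value here is a made-up stand-in, not a real Siri API.

```python
# Toy sketch of the two-stage pipeline described above:
# stage 1 turns audio into text, stage 2 turns the reply back into speech.
# All names and data are hypothetical placeholders.

def speech_to_text(audio_frames):
    # A real system runs deep acoustic and language models over the audio.
    # Here we fake recognition with a lookup keyed on the toy "audio" bytes.
    transcripts = {b"\x01\x02": "what's the weather"}
    return transcripts.get(bytes(audio_frames), "")

def text_to_speech(text):
    # A real system synthesizes a waveform, deciding on pauses, syllable
    # length, and pitch. Here we just return a placeholder string.
    return f"<synthesized: {text}>"

def assistant_turn(audio_frames):
    heard = speech_to_text(audio_frames)   # stage 1: figure out what you said
    reply = "Here's the forecast." if "weather" in heard else "Sorry?"
    return text_to_speech(reply)           # stage 2: speak back
```

The point of the sketch is only the division of labor: recognition and synthesis are separate problems, and the new voice work described in this article lives entirely in the second stage.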