In the last six months, every major tech company has unveiled its vision for the future of computing. And funnily enough, they’re all saying the same thing: in the future, we’re going to talk to our computers — and they’re going to answer back.

Microsoft calls it "conversation as a platform." Google says it wants computers to have an "ongoing two-way dialogue" with their users. This both is and isn’t a metaphor. Digital assistants that we talk to — Siri, Alexa, Cortana — are already commonplace, but creating computers that talk back will mean something extra: using machine learning to offer users prompts and suggestions. "Our goal with artificial intelligence is to build systems that are better than people at perception," said Facebook’s Mark Zuckerberg. "[At] seeing, hearing, language and so on."

How do we get to machines talking? With machine learning

Language is key, though. Talking to computers has been a sci-fi trope for decades, but it’s only in the last few years that we’ve been able to take the prospect seriously. Advances in artificial intelligence — deep learning in particular — have massively improved natural language processing, while the combination of the cloud and ever-more powerful smartphones has provided an infrastructure for these speaking assistants. If you want a measure of how ubiquitous talking to computers has become, consider the fact that Domino’s has had its own chatty, pizza-ordering assistant named Dom for years. ("He’s fun," said Domino’s head of marketing recently. "But very focused on the pizza ordering experience.")

Dom’s pizza-focused existence also illustrates a key feature of the world of talking computers right now: it’s confusing as hell. All sorts of fundamentally complementary tools and technologies are being built by different companies, but as with the advent of any new computing paradigm — be that desktop, web, or mobile — there’s no shared game plan or grand strategy. There’s just machine learning and human imagination, creating exciting new pieces for an incomplete puzzle. How everything is going to fit together in 20 years’ time is anyone’s guess, but we can at least take a look at where things stand right now.

Say 'hello' to the new chattering classes

So. The most important players in this new world are the "digital assistants": Siri, Alexa, Cortana, Facebook M, Google Assistant, and a handful of third-party players. These, say their makers, will be our computing familiars, the programs that we’ll spend most of our time talking to. They’ll be accessible on different platforms (phones, watches, cars, home hubs) while keeping tabs on our personal data, schedules, and location across an entire network. And thanks to machine learning, they’ll understand human speech better than any computers before, able to grok context and slang, and, eventually, emotion and intent.

Ambient computing means never talking to yourself again

Assistants like this will offer us ambient computing. We won’t necessarily access them through a screen or a console. Instead, they’ll be hanging in the air — just speak and get an answer. But before we get assistants everywhere, we’ll have to access them in particular places. Amazon has dug deep into the home with the Echo, Dot, and Tap, while both Google and Apple are multi-platform: available on your phone, in your car, and on your wrist. Microsoft and Facebook’s strongholds are less well defined, but again, there’s a lot of overlap and lots of space to expand into.

And filling the gaps between assistants’ spheres of influence (and sitting a couple of rungs below them on the ladder of artificial intelligence) are the bots and the apps. These are more simplistic than Siri, and much more utilitarian. At their most basic, they’re simply a replacement for graphical user interfaces: you type out your instructions rather than navigate a set of dialog boxes (there’s a bare-bones sketch of this below). Many are little more than gimmicks (do you really need to buy cinema tickets from a bot?), but the more complex models have a lot of potential — like a chatbot in Slack or Gmail that can find every document your boss sent you in the last month. The Verge’s Casey Newton explained the rise of chatbots back in January, and even since then, they’ve made significant gains. In March, Microsoft launched a set of AI-powered tools to let anyone build their own, and in April, Facebook opened up Messenger as a bot platform. "You never have to call 1-800-FLOWERS again," was Mark Zuckerberg’s pitch, while Microsoft’s Satya Nadella described chatbots as the "new applications."

Nadella went on to say that in the future, personal assistants like Cortana and Alexa will act as bot bosses, interacting with one another on a user’s behalf. That’s an over-simplistic summary, though — things are going to be much messier. Will assistants talk to bots the way they talk to APIs? How will these different programs exchange user data securely? What software is going to harvest the slang in your texts with friends and feed it into the program that writes your emails to your work colleagues? And how do you tell it to stop? It’s tricky territory, but here are the pieces we know about so far:
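First, that promised sketch. To make "type out your instructions rather than navigate dialog boxes" concrete, here is a minimal, hypothetical Python example of the kind of command a simple document-finding bot might handle. Everything in it — the in-memory document store, the handle_message function, the command grammar — is invented for illustration; a real Slack or Gmail bot would query those services’ actual APIs and use far better language understanding than a regular expression.

```python
import re
from datetime import datetime, timedelta

# Hypothetical in-memory "document store" standing in for a Slack or Gmail search API.
DOCUMENTS = [
    {"sender": "boss@example.com", "title": "Q2 roadmap", "sent": datetime.now() - timedelta(days=12)},
    {"sender": "boss@example.com", "title": "Budget draft", "sent": datetime.now() - timedelta(days=45)},
]

def handle_message(text):
    """Turn one typed instruction into an action, the way a dialog box's fields would."""
    match = re.match(r"find documents from (\S+) in the last (\d+) days", text.lower())
    if not match:
        return "Sorry, I didn't understand that."
    sender, days = match.group(1), int(match.group(2))
    cutoff = datetime.now() - timedelta(days=days)
    hits = [d["title"] for d in DOCUMENTS
            if d["sender"].startswith(sender) and d["sent"] >= cutoff]
    return "Found: " + ", ".join(hits) if hits else "No matching documents."

print(handle_message("Find documents from boss in the last 30 days"))
# -> Found: Q2 roadmap
```

Swap the regular expression for a natural language model and you have the basic shape of the smarter bots described above — which is exactly where the machine learning advances come in.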

Amazon

Amazon’s Alexa has become the benchmark for digital assistants in your house. While the functionality of the original Echo was as much about playing music as voice commands, Alexa’s skill set has quickly expanded. It now works with more than 1,000 services, and Amazon says its developer tools mean companies can integrate their software in just 60 minutes (a rough sketch of what that integration looks like follows at the end of this section).

Amazon is putting Alexa in more and more devices

The Echo has done so well because it got the basics right. Alexa responds quickly and understands your query, even from across the room. Amazon is now capitalizing on this, widening Alexa’s availability with the release of the hockey-puck-sized Echo Dot (which works with any speaker) and allowing other companies to make their own Alexa-powered hardware.

But while it nailed the essentials, Alexa isn’t as "smart" as other assistants. It has limited personal information, and to access one of its connected services you have to know the right keywords. In that respect, it’s more command line than natural language. Amazon boss Jeff Bezos says the company is "deeply committed" to AI, though, and that Amazon has more than a thousand people working on Alexa. The company is even reportedly working on building emotional intelligence into the digital assistant next, so it’ll know when you’re irritated and temper its responses accordingly.
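As for that 60-minute integration claim: an Alexa "skill" is essentially a web endpoint — often an AWS Lambda function — that receives JSON describing the intent Alexa recognized in the user’s speech and returns JSON telling Alexa what to say back. The sketch below is hypothetical (the OrderPizzaIntent name, the Size slot, and the pizza copy are all invented), and a real skill also needs an interaction model of sample utterances and slot types defined in Amazon’s developer console, but the request and response shapes follow the Alexa Skills Kit format.

```python
# Minimal, hypothetical Alexa custom-skill handler (e.g., an AWS Lambda entry point).
# "OrderPizzaIntent" and its "Size" slot are invented for illustration; the JSON
# request/response shapes follow the Alexa Skills Kit format.

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # The user opened the skill without asking for anything specific yet.
        speech, end_session = "Welcome. What would you like to order?", False
    elif request["type"] == "IntentRequest" and request["intent"]["name"] == "OrderPizzaIntent":
        # Slots carry the values Alexa extracted from the user's utterance.
        size = request["intent"]["slots"]["Size"]["value"]
        speech, end_session = "Okay, ordering a {} pizza.".format(size), True
    else:
        speech, end_session = "Sorry, I can't help with that.", True

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```

The "right keywords" problem is visible here too: unless a request maps onto a defined intent and its sample utterances, the skill never even sees it.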



Apple

Despite being an early frontrunner in voice interfaces, Apple has fallen behind. Siri is well known, but its voice recognition abilities are spotty and its functionality limited. At WWDC this year, the company made some promising changes, porting Siri to macOS and allowing integration with third-party services (messaging apps, ride-hailing services, and fitness trackers were all used as examples), but the story until now has mainly been one of squandered potential. The big question is whether Apple’s approach of infusing its own apps with AI smarts (which are being branded as "Siri" assistive features) will be more successful than creating generic "bots." Apple has done very well by apps in the past — will it be able to do so again?

Siri has the potential to be everywhere — in your computer, your phone, your car

Thanks to CarPlay, the Apple Watch, and the iPhone, Apple’s personal assistant has the potential to be a constant companion, but the company is still working on integrating everything it knows about you (your location, your schedule) in a truly useful fashion. Similarly, although Apple has put Siri on Macs, it hasn’t given the assistant any new skills to take advantage of its new home. And while there have been rumors of an Echo-rival Siri Speaker (you can already use Siri to control automated tech in your house via HomeKit), that has yet to materialize.

Oddly enough, Apple might be able to turn what has been seen as one of its major weaknesses in this area into a strength. The company’s development of AI has been hamstrung by its stance on privacy: it doesn’t collect as much user data as its rivals, so it has less information to feed its machine learning programs. However, at WWDC the company turned this around, announcing new methods of data collection that it claims will preserve users’ privacy, along with new support for on-device AI. For users who want the benefit of smart assistants without the privacy sting, Apple’s approach could pay off in the long run. Assuming, of course, it works as well as Apple says it does.
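The technique Apple name-checked for that privacy-preserving data collection is differential privacy. Its details are beyond this piece, but the basic flavor — learn about a population without learning about any individual — can be shown with the classic randomized-response trick. This is a generic, hypothetical sketch, not Apple’s implementation:

```python
import random

def randomized_response(truth):
    """Report a sensitive yes/no value with plausible deniability:
    half the time answer honestly, half the time answer at random."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

# No single report reveals much, but the aggregate still does:
# observed_rate = 0.5 * true_rate + 0.25, so true_rate = 2 * observed_rate - 0.5.
reports = [randomized_response(True) for _ in range(100000)]
observed = sum(reports) / len(reports)
print("estimated true rate:", 2 * observed - 0.5)  # close to 1.0 for this all-True population
```

The trade-off is statistical noise: Apple needs many users’ reports before the aggregate signal emerges, which is one reason the bet is a long-run one.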