There wasn’t any one big product announcement at the Google I/O keynote on Wednesday, the annual event where thousands of programmers gather to learn about Google’s software platforms. Instead, it was a steady trickle of incremental improvements across Google’s product portfolio. And almost all of the improvements were driven by breakthroughs in artificial intelligence — the software’s growing ability to understand complex nuances of the world around it.

Companies have been hyping artificial intelligence for so long — and often delivering such mediocre results — that it’s easy to tune it out. AI is also easy to underestimate because it’s often used to add value to existing products rather than to create new ones.

But even if you’ve dismissed AI technology in the past, there are two big reasons to start taking it seriously. First, the software really is getting better at a remarkable pace. Problems that artificial intelligence researchers struggled with for decades are suddenly getting solved.

“Our software is going to get superpowers” thanks to AI, says Frank Chen, a partner at the venture capital firm Andreessen Horowitz. Computer programs will be able to do things that “we thought were human-only activities: recognizing what's in a picture, telling when someone's going to get mad, summarizing documents.”

Second, and more importantly, Chen says, AI capabilities are about to be everywhere. Until recently, big companies focused on adding AI capabilities to their own products — think of your smartphone transcribing your voice and Facebook identifying the faces in your photos. But now big companies are starting to open up their powerful AI capabilities to third-party developers.

And often, this is the moment when a new technology has a really big impact. The iPhone didn’t become truly revolutionary until Apple created the App Store, allowing third parties to build apps like Uber and Instagram. Soon every company and every ambitious kid in a dorm room is going to have access to the same powerful AI tools as the world’s leading technology companies.

AI is getting a lot more powerful

Primitive forms of AI have been around for a long time. Back in the 1990s, for example, you could get voice-to-text software that would transcribe your words into a word processor.

But these products used to be terrible. Speech-to-text software would make so many errors that it wasn’t much faster than typing a document on a keyboard. The handwriting recognition feature on Apple’s 1990s tablet computer, the Newton, was so bad it became a punchline. As recently as the early 2010s, I remember the voice-to-text feature of my smartphone making a lot of mistakes.

Then AI technology suddenly started working better. A couple of years ago, I noticed that my smartphone’s transcription hardly ever made mistakes. Photo apps from Apple, Google, and Facebook got good at recognizing faces. In his Wednesday keynote, Google CEO Sundar Pichai offered some data on just how rapid this progress has been: Google’s smart speaker, Google Home, has gotten dramatically better at understanding user speech in a noisy room, with the error rate falling by almost half in less than a year.

Touting this rapid progress in voice recognition, Pichai told an audience of hundreds of developers that “the pace even since last year has been pretty amazing to see.”

And there are more impressive breakthroughs on the way. For example, Pichai showed a photo of a child playing baseball, shot through a chain-link fence, and said that you’ll soon be able to use Google technology to remove the fence automatically, producing a clean, unobstructed image.

The two-hour keynote featured demonstrations from across Google’s product portfolio, from Android to YouTube. And seemingly every product got a significant AI-based improvement.

Google’s photo app will soon be able to recognize your best photos, figure out who is in them, and then offer to send copies to the people in the photos with one click.

Google Home is getting smart enough to distinguish between different users in a household. If you say, “Call Mom,” Google’s software will be smart enough to know — just based on your voice — to call your own mother and not your spouse’s mother.

AI is the next platform war

The machine learning algorithms that underpin the AI revolution place extreme demands on conventional computing hardware. At last year’s Google I/O, the search giant announced that it had designed a custom chip called a tensor processing unit for machine learning applications. Tests show that these chips can execute machine learning code up to 30 times faster than conventional computer chips.

Over the past year, Google has installed racks and racks of these chips in its vaunted data centers to support the growing AI capabilities of various Google products. On Wednesday, Google announced that it will soon be opening up these chips for anyone to use as part of Google’s cloud computing platform. Google has already released its powerful machine learning software, called TensorFlow, as an open source project that anyone can use.

Google isn’t just being nice, of course. The larger goal is to establish Google’s AI platform as the industry standard thousands of other companies rely on for their own AI software. Once you build software on top of one platform, it’s very expensive to switch, so becoming an industry standard could make Google billions in the coming years.

Of course, Google’s rivals aren’t going to accept this without a fight. Amazon currently leads the cloud computing market with its Amazon Web Services, and it is offering developers a rival suite of machine learning tools. Microsoft offers machine learning tools on its own Azure cloud computing platform.

Consumers don’t care which tech giant’s cloud computing platform powers their favorite app or website. But this platform war will have big indirect benefits for them: in their rush to win the cloud computing market, these technology giants are making ever more powerful AI capabilities available to anyone who wants to use them. That means we’re about to see an explosion of experimentation with AI.

Google showed off a small example of what this might look like with its voice-based assistant. On the I/O stage, a Google executive said, “I'd like delivery from Panera,” and this started a conversation with the app that worked a lot like a conversation you’d have with a human Panera cashier. The executive said she wanted to order a sandwich. The virtual assistant asked if she wanted to add a drink. After she chose a drink, the assistant told her the total price and asked if she wanted to place the order.
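The back-and-forth in that demo follows a simple pattern: the assistant asks a series of questions, fills in the details of the order from each answer, and confirms the total at the end. Here's a minimal sketch in Python of that ordering flow — a hypothetical illustration with made-up menu items and function names, not Google's actual Assistant interface:

```python
# Sketch of the question-and-answer ordering flow from the Panera demo.
# Hypothetical illustration only -- not Google's actual Assistant API.

MENU = {"sandwich": 6.50, "soup": 4.25}
DRINKS = {"lemonade": 2.00, "iced tea": 1.75}

def run_order(answers):
    """Walk through the ordering dialogue, pulling the user's replies
    from `answers` in order. Returns the transcript and the total."""
    transcript = []
    replies = iter(answers)

    def ask(prompt):
        transcript.append(("assistant", prompt))
        reply = next(replies)
        transcript.append(("user", reply))
        return reply

    item = ask("What would you like to order?")
    total = MENU[item]

    drink = ask("Would you like to add a drink?")
    if drink in DRINKS:
        total += DRINKS[drink]

    ask(f"Your total is ${total:.2f}. Should I place the order?")
    return transcript, total

transcript, total = run_order(["sandwich", "lemonade", "yes"])
print(f"${total:.2f}")  # -> $8.50
```

The hard part, of course, isn't this scripted back-and-forth — it's turning free-form speech into the structured answers the script consumes, which is exactly the piece the big cloud platforms are now offering as a service.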

The remarkable thing about this exchange wasn’t so much the ability to carry on a simple conversation — something virtual assistants like Apple’s Siri have been able to do for a few years. It was the promise that every retail establishment in America could build a similar capability without having to hire a bunch of computer science PhDs.

Google’s promise is that creating this kind of sophisticated AI experience will soon be as simple as building a website or a smartphone app is today. Google’s own engineers will do most of the hard work, creating powerful tools that allow non-software companies to build services that would have been beyond the reach of even the most sophisticated technology companies a decade ago. It might take a few years for this vision to be realized — the first websites and smartphone apps were often terrible — but eventually customers will expect every app to offer these kinds of capabilities.

At the same time, more sophisticated developers will be able to use the tools provided by Google, Amazon, Microsoft, and their competitors to push the envelope even further. Chen believes that machine learning techniques will lead to improvements in medical care — for example, helping radiologists identify cancerous cells. In the past, you needed a huge team of AI experts to even attempt to build something like this. Today, the basic tools are within reach of high school kids. It’s a safe bet that this will spawn totally new kinds of apps, just as the invention of the smartphone made Uber possible.

Disclosure: My brother works at Google.