AI is everywhere. At CES 2017 next week, it will become even more prevalent.

Without giving away any secrets, it feels like there is something in the air already. Looking over my schedule, almost every meeting and test has an element related to artificial intelligence and machine learning. I know of one smart home company that is announcing a new AI system that knows when you are home and can adjust the lighting, security cameras, front door alarm, and heat automatically — no more opening an app and punching in a bunch of options. Sure, the Nest Smart Thermostat has some of these features already, but having your entire home benefiting from an AI that works in the background? That’s new to me.

There’s also a robotic vacuum that will respond to voice commands, which sounds like an innovation that will allow me to watch even more Golden State Warriors games on the sofa. One of the more famous robotics companies, iRobot, already has a bot that uses machine learning to figure out how to clean your carpet more thoroughly, and I’m expecting more announcements. Meanwhile, I wouldn’t be surprised if Dyson announces some more AI-powered gadgets.

What really got me thinking about automation, though, is that the car companies are making a much bigger splash than usual. Ford has already announced it will have its new Fusion self-driving car at the show, capable of driving without direct driver interaction. In fact, after 13 straight years of attending this show in Las Vegas, I’ve never had so many opportunities to test out a driverless car. I have three rides set up in one morning next week.

I haven’t even mentioned all of the robots, voice assistants, and chatbots. It’s a little crazy. I’ve scheduled 60 meetings next week, many of them hands-on tests, and almost every one is related to AI in some way. I’m interviewing my first robot in “person” — and that’s literal, not a diss against an entrepreneur with no social skills. I’ve certainly tested many bots, but never interviewed one in a way that wasn’t just a proof of concept. More on that another time.

You might wonder: Why the big change? As we all know, the platforms for building AI came to fruition in 2016, which is what sparked the momentum initially. Machine learning is now convincing and helpful on a daily basis, not a future concept. Using the Google Home speaker the other day, I was able to have a conversation about the Warriors, asking about stats and records in a way that seemed fluid and helpful, not awkward and frustrating. Bots now understand us, thanks to natural language processing and speech recognition tech that has been learning how we say things for the past few decades. The Assistant on Google Home didn’t quite understand what I meant by “the Dubs,” but that’s an easy one to teach it.

My view is that consumer adoption is high because there is now a perceived value. AI tech has been around for decades, of course. You could talk to a chatbot in AOL Instant Messenger way back in 2001. Before that, cars made by Cadillac and others could tell you, with an audible voice, to fasten your seatbelt. Adaptive cruise control systems — which adjust the speed of your car automatically based on the car in front of you — have been around since the ’90s. The tech has advanced, the platforms are more widely available, the costs are lower, but perceptions have also changed dramatically.

We’ve adjusted as a society to the fact that these bots are mostly helpful, mostly safe, and mostly valuable, even if there are quite a few quirks and problems left to resolve. Bots are finally here, and they are proliferating. Now, they need to improve.