Google clearly went into today with some key concepts they wanted to hammer home. Among their multiple product launches and updates, the major takeaway they wanted to leave people with was clearly this: Google is the pioneer, industry leader, and all-around expert in machine learning, and they want everyone to know it.

Today, Google announced multiple new products: the Google Home Mini (a competitor to the Echo Dot), the Google Home Max, the new Google Pixelbook with the Google Pixelbook Pen, the 5" Pixel 2 and 6" Pixel 2 XL, the (somewhat terribly named) Google Pixel Buds, and finally, the Google Clips. And with each of these announcements, machine learning was front and center.

Every product was improved by machine learning in some way. Every product implemented machine learning in some way. Every product was better than the competition because of machine learning. Mixed in between subtle jabs at Apple and plugs for YouTube Music, machine learning was mentioned seemingly every few minutes.

Although Google was definitely lacking subtlety, it cannot be said that they lacked flair. Every product impressed, and the machine learning they so often touted was definitely a shining quality. So let’s take a quick look at the upcoming releases.

The Google Home Max

The Max features Smart Sound, which uses machine learning to dynamically match the audio output to your environment. The Max can adjust the volume based on background noise (they mentioned the Max automatically turning up the volume when the dishwasher turned on) but, more importantly, it can also adjust the levels to fit the room.

Put the Max in a corner, and the levels will be adjusted to make sure the output isn't muddy. Move it to the backyard, and within seconds the levels are adjusted to make use of the ample space and lack of walls.

Touting dual 4.5" woofers and dual custom tweeters for an output 20 times more powerful than the current Google Home, the advertised audio quality seems impressive. Combining that output with adaptive equalization sounds like a great match. And it had better be, for $399.

The Google Pixelbook

The new Pixelbook had the fewest out-of-the-box integrations with the Assistant. The real killer features come from the accompanying Pen (sold separately) and the fact that the Pixelbook is finally capable of running Android apps (hopefully all of them).

The Pen is what really brings this computer to life. Circle content on the screen with it, and the Assistant will show you more information about whatever you selected. They repeatedly showed the ability to circle a face in a picture to pull up information about that person. It seems to be a more robust and controllable version of Android's "what's on my screen" feature.

The Pixelbook was also made with an on-the-go mindset. Google showed the laptop leaving Wi-Fi range and automatically tethering to the owner's Pixel for continued data use. It wasn't clear whether this will require you to work within your carrier's tethering limitations, however.

The Pixel 2

The most anticipated announcement today was the Pixel 2. While the removal of the headphone jack is still puzzling, this is an all-around promising phone.

The camera features Portrait Mode with a single lens (other phones accomplish this with two cameras), made possible by — you guessed it — machine learning. This feature is available on both the front- and rear-facing cameras. They showed off portraits you would expect an SLR to produce, with the background blurred and the subject popping. It'll be interesting to see how the camera handles complicated arrangements, but from the examples of a person with curly hair and a jagged flower, the feature seems ready for prime time.

The phones also feature optical and electronic image stabilization (OIS and EIS) for video, which looked great; the two combined for a very stable video of a downhill motorcycle ride. I believe they mentioned the stabilization was enhanced by machine learning, but I might have misheard.

The Pixel Buds

With the Pixel 2 line getting rid of the headphone jack, I think it was obvious Google would be delivering wireless headphones to compete with the AirPods. The design is sleek, and the cord keeping the two buds together is a nice touch.

However, the knockout feature here is the inclusion of the Google Assistant and Google Translate. Google showed two presenters, one speaking Swedish and the other English, conversing via the Pixel Buds and the Pixel 2, and it was very impressive.

The conversation was translated in real time: the English was translated to Swedish via the Pixel Buds, and the Swedish to English via the Pixel 2's front-facing speakers. It will be interesting to see how comfortably the translations work in fluid conversations, but even just seeing this technology come to life is exciting in and of itself.

The Google Clips

This was the most unexpected product announcement: a hands-free camera with integrated machine learning capabilities. Point the Clips at your family reunion, and it will capture photos the whole night. But unlike a camera on a timer, this camera is always on.

Set it up while you're playing with your puppy, or watching your baby try to take her first steps, and the Clips will capture the moments that matter, or so they say.

Machine learning was apparently implemented in every aspect of this device. From recognizing smiles to identifying important moments, the Clips attempts to bring you into your candid moments without needing a photographer. And because the machine learning runs on the device itself, it can improve its shots without having to share them with Google.

The feature list is impressive here. They weren’t very specific, but they suggested that the camera can:

Recognize smiles for perfect shots

Identify important moments for great candids

Implement facial recognition so it captures pictures of people that matter to you, not strangers

Do this all without any input from you. Simply set it and forget it.

The market for this device might be narrow, but the technology here is promising.