Google I/O 2018 kicked off today with an uptempo keynote from CEO Sundar Pichai. The tech giant’s annual developer conference is always a platform for big announcements, and this was no exception, with Google gearing up for what promises to be an especially busy year.

At last year’s I/O Pichai stressed the company’s ongoing shift from a “mobile first” to an “AI first” focus. He carried that theme into today’s keynote, where the term “AI” was mentioned more than ever.

Synced is on the scene to bring you all the news from the Shoreline Amphitheater in Mountain View, California.

A more powerful Google Assistant

Google Assistant is now available in 30 languages, 80 countries, and on more than 500 million devices. The powerful AI Assistant has become a flagship Google product, and the company is investing heavily, not only in Assistant but also in early-stage startups that are innovating with Assistant. Today Google introduced a number of updates that will make Assistant even more powerful, throwing down the gauntlet against rival Amazon Alexa.

Google is adding six new synthesized human voices to Google Assistant this year, including that of American singer-songwriter and actor John Legend. If any interface can make mundane shopping lists, weather reports and schedules interesting and exciting, it’s John Legend’s mellifluous tone.

To make Google Assistant more natural, Google is introducing Continued Conversation, which enables seamless back-and-forth conversations, as users don’t have to repeatedly awaken Assistant with “Hey Google”. The new feature will be available next week.

Google also introduced Multiple Actions, which enables users to ask Assistant more complex and contextual questions, for example “What time is it and what’s the weather forecast this afternoon?”
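Conceptually, Multiple Actions means decomposing one compound utterance into independent sub-queries that can each be answered in a single turn. The toy sketch below is purely illustrative — plain Python with an invented function name and a naive conjunction-splitting heuristic, not Google's actual semantic parser:

```python
def split_actions(utterance: str) -> list[str]:
    """Naively split a compound query on the coordinating 'and'.

    Real multi-action understanding requires full semantic parsing;
    this toy version just cuts on ' and ' and tidies each fragment.
    """
    parts = [p.strip() for p in utterance.rstrip("?").split(" and ")]
    return [p + "?" for p in parts if p]

# Each resulting fragment could then be dispatched to its own intent handler.
queries = split_actions(
    "What time is it and what's the weather forecast this afternoon?"
)
```

A single-intent question passes through unchanged, so the same dispatch path handles both simple and compound queries.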

In human conversation we often soften requests with "please" and acknowledge them with a "thank you." Google Assistant can now respond in kind, offering positive reinforcement and thanking users for their courtesy.

Earlier this year, Google introduced its Smart Display platform to integrate Google Assistant with screen display devices and provide a richer and more immersive experience. Google today announced YouTube & YouTube TV for Smart Display, added food pick-up and delivery services with partners such as Pizza Hut and Starbucks, and integrated Assistant with Google Maps.

Google Smart Display. Courtesy Google.

Google is also researching and developing longer-term integrations. This summer Google Assistant will begin testing new capabilities such as calling restaurants to make reservations, scheduling appointments (for example, at a hair salon), and booking flights and hotels with a travel agent. The technology behind these features is Google Duplex, an advanced system that can understand complex sentences and fast, extended speech, and respond naturally in phone conversations.

ML Kit to integrate AI into apps

Google has been simplifying the process for integrating AI capabilities into mobile apps. At last year’s I/O, Google introduced TensorFlow Lite, a lean version of its machine learning framework TensorFlow. Designed for mobile and embedded devices, TensorFlow Lite enables on-device machine learning inference with low latency and a small binary size, and also supports hardware acceleration with the Android Neural Networks API.

ML Kit. Courtesy Google.

Google took a big step forward today with the introduction of ML Kit, a software development kit (SDK) available on Firebase for both Android and iOS developers. ML Kit includes five machine learning APIs: recognizing text, detecting faces, detecting landmarks, scanning barcodes, and labeling images.

ML Kit also hosts and serves TensorFlow Lite models for developers who want to deploy custom models, which can significantly reduce app install size. It also enables developers to update their models without having to re-publish apps.
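The hosted-model workflow described above — ship a thin app, fetch the current model at runtime, and refresh it when a newer version is published — can be sketched in pure Python. All names here (the manifest layout, file names, and `fetch` callable) are hypothetical illustrations; in practice the Firebase SDK manages this download-and-cache cycle for you:

```python
import json
from pathlib import Path


def ensure_latest_model(cache_dir: Path, remote_manifest: dict, fetch) -> Path:
    """Download the hosted model only when the cached copy is stale.

    remote_manifest: e.g. {"name": "labeler", "version": 3} (hypothetical schema)
    fetch: callable returning the model bytes for a given manifest.
    """
    meta_path = cache_dir / "manifest.json"
    model_path = cache_dir / "model.tflite"
    cached = json.loads(meta_path.read_text()) if meta_path.exists() else {}
    if cached.get("version") != remote_manifest["version"]:
        # Model updates without re-publishing the app: only the
        # model file is replaced, the app binary never changes.
        model_path.write_bytes(fetch(remote_manifest))
        meta_path.write_text(json.dumps(remote_manifest))
    return model_path
```

Because the model ships separately from the binary, the install size savings come for free: the app downloads only the model it actually needs, and only when it changes.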

TPU 3.0 now 8X more powerful

Google has been developing its AI-dedicated Tensor Processing Unit (TPU) chips since 2016. TPUs power neural network computation for Google services such as Search, Street View, Google Photos and Google Translate, and ran inference for AlphaGo, the DeepMind system that beat human champions at the ancient Chinese board game Go.

Google today announced TPU 3.0, a next-generation AI chip that is eight times more powerful than its predecessor and can achieve up to 100 petaflops of performance. To handle the new hardware's substantial heat output, Google is bringing liquid cooling to its data centers for the first time.

TPU 3.0. Courtesy Google.

A smarter Google Lens

Last year, Google released Google Lens, a platform that can quickly identify and respond to information in a picture. Google today unveiled several new Google Lens features that enhance the platform, including text digitization and improved image search capabilities. The new features will roll out in the coming weeks.

Google Lens. Courtesy Google.

Google Lens was previously available only in Google Photos; it will now also work in Google Maps, and can even be built directly into device camera apps.

The new image search feature can match an item in a photo with the same or similar items from millions of online images. A real-time feature can automatically select key objects in a camera viewfinder and anchor their information even as they move in and out of frame.

DeepMind meets Android

Alphabet's British AI company DeepMind announced two new features for devices running Android P, the next-generation Android operating system coming later this year: Adaptive Battery, a smart battery management system that uses machine learning to anticipate which apps you'll need next; and Adaptive Brightness, an algorithm-based system that learns users' screen brightness preferences for different surroundings.

Waymo makes its I/O debut

Google parent Alphabet's self-driving affiliate Waymo made its debut at I/O with its ambitious blueprint for deploying fully autonomous driving technology. This March Waymo announced plans for a 24/7 self-driving ride-hailing service in Arizona with no human safety drivers involved. Its self-driving Peterbilt 579 truck, meanwhile, will transport cargo for Google data centers in Atlanta.

Waymo's self-driving Chrysler Pacifica.

Waymo CTO and VP of Engineering Dmitri Dolgov explained in detail how Google's AI research, particularly in deep learning, is helping Waymo reach fully autonomous driving faster. Waymo engineers worked closely with the Google Brain research team to apply deep neural networks to their pedestrian detection system, and will extend that work to prediction, planning, mapping and simulation. Waymo is also accelerating its model training with TensorFlow running in Google data centers.

Google’s AI ecosystem

Google is fully committed to AI. Beyond the announcements above, the company also used I/O to demonstrate how it has embedded AI into Google Photos, Google News, Google Maps, Gmail and Gboard. A number of novel AI experiments were presented as a warm-up to the keynote, including NSynth, a synthesizer that generates new sounds using neural networks, and World Draw, a Magenta-based interactive experience that creates drawings from users' doodles.

Although no new consumer products were announced today, Google I/O remains a tech bonanza for AI enthusiasts. Google is articulating a clear vision of how AI will be delivered across its platforms and products, while attracting more talented developers and researchers to its expanding ecosystem.

* * *

Journalist: Tony Peng | Editor: Michael Sarazen

* * *

Subscribe here to get insightful tech news, reviews and analysis!