Google I/O 2019 was held on May 7th-9th. Unfortunately, I could not participate in the event, but I know that some great new features were announced. I will introduce them here, and at the same time share my thoughts on each topic.

Current Google Assistant Statistics

Google Assistant was launched at Google I/O 2016. That is, about 36 months have already passed since its birth.

Also, the number of supported languages and countries is steadily increasing. At this I/O, Google announced that Assistant now supports 30 languages across 80 countries.

But those numbers still seem small to me, considering that Google Assistant is becoming the next internet, I guess…

Improvement of Google Assistant

Several improvements were announced for Google Assistant.

On-device Voice Recognition

To recognize speech, a huge amount of data used to be necessary. But Google succeeded in reducing the size from 100GB to 0.5GB. This means that each user's device can hold the data and recognize speech without an internet connection.

This size reduction brings users much faster recognition. In the keynote, Google showed a demo of operating a smartphone and invoking apps with rapid-fire speech.

I think this improvement is very important, because most users will come to operate their devices by voice, and this voice operation should spread to other purposes. As a result, the screen output and touch input of a user device would become less important; instead, a speaker output and a mic input would become more important, I expect.

That is, I guess that applications, services and hardware will change in the near future.

Personalized Help and Driving Mode

Google Assistant can now understand more of a user's context using the user's personal information. In addition, Google Assistant can support users while they drive a car.

For example, Google Assistant recommends recipes based on the user's history.

Drivers do several things at once: checking a route, listening to music, and receiving calls. With the driving mode, users can handle all of them by voice, hands-free.

When receiving a call from mom, you should say “No, thanks”… Of course, I'm kidding.

I think these improvements are the kind of natural evolution we could have imagined.

Stop the alarm without “Hey, Google”

The moment when participants were most excited during the keynote was “Stop the alarm”, I think.

Users can stop the alarm just by saying “Stop!”. It is unnecessary to say “Hey, Google” before the “Stop!”. Did you think this was the biggest improvement to Google Assistant at this Google I/O 2019? Yeah, I thought so too.

Nest Hub / Nest Hub Max

Google has already been providing its own Smart Display: Google Home Hub. At this I/O, Google re-branded the Smart Display line as “Nest”.

Nest Hub will go on sale in several countries including Japan! (I mention this because I'm Japanese.)

Nest Hub Max has a 10-inch display with a camera. Using the camera, Nest Hub Max brings users some rich experiences. For example, users can pause a YouTube video with a hand gesture. Also, Google Assistant can recognize the user with Face Match.

Actually, we heard someone say that a camera is unnecessary for a Smart Display… right? Well, OK, we should forget that.

Duplex on the web

At Google I/O 2018, Google announced a new AI product called “Duplex”. And at this Google I/O 2019, Google announced another product called “Duplex on the web”.

Currently, users need to fill in a lot of information and operate many user interface components across many web pages to get things done. This is a big cost.

Duplex on the web gets such things done on behalf of users. The new feature not only operates each form on a web page; it also suggests information to users from Gmail. That is, we can say that Duplex on the web is another assistant that reduces the cost of the many tasks users have had to do themselves so far.

To support the new feature, we would probably need to adjust our web pages to accept it. Search Console would likely provide related features for us.

There is Search Engine Optimization. Soon, we may also need to do Duplex on the Web Optimization. It could be abbreviated as “DWO”… Does this sound good to you?

How-to and FAQ content

For people who have content like How-to guides or FAQs, Google provides a new feature.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "HowTo",
  "name": "How to Install a Dog Door",
  "description": "...easy instructions on how to install a dog door...",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Remove Screen Door",
      "image": "https://diy.snding.com/content/...",
      "text": "Remove the screen door from the door frame...",
      "url": "https://www.diynetwork.com/.../install-a-dog-door#step1"
    }, ...
  ]
}
</script>

If you have How-to content and embed metadata like the above into each content page, the content will be shown with a rich user interface component, such as a carousel, as tutorial content in the search results. It will also be shown as a tutorial on a Smart Display. Of course, users can move to the next step by saying “Next!”.
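For FAQ content, the counterpart markup is the schema.org FAQPage type. The snippet below is a minimal sketch; the question and answer text is made up for illustration.

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Can I install a dog door myself?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. With basic tools, most dog doors can be installed in an afternoon."
    }
  }]
}
</script>
```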

If there is a video introducing how to build something, the content holder can also create content like the how-to content described above. After creating a Google Sheet, filling in each step on its own row, including a step title and the timestamp in the video, and posting it on the Actions on Google Console, it will be shown as tutorial content.

This feature pushes us to adapt our content for both search results and the Assistant. In particular, support for this feature on Smart Displays is very important, because many users would ask a Smart Display how to do something, I think.

Mini-apps

In my opinion, Mini-apps is the most important announcement at this Google I/O.

With Mini-apps, we will be able to provide a “mini application” directly in the search results. Mini-apps provides an environment to execute interactive content there, and the interactive content is built from elements using web technology. For example,

"name": "Google I/O Mini-app"

"description": "A Hello World Mini-app"

"company": "Google"

"url": "www.google.com"

...

"supportedLanguages": ["en"]

"supportedRegions": ["US"]

"webhook": "https://mywebhook.example.com/mini-app"

you can register the app with the code above. The key point is the “webhook” property. That is,

<button text="Update">

<on-click:event-handler>

<intent name="HelloWorldconfirmation">

<arg name="InputName" />

</intent>

</on-click:event-handler>

</button>

when the app is invoked, the webhook endpoint will be called. Then you need to return markup, like the above, to build the interactive user interface. And,

"perLocaleQueryExamples": [{

"language": "en",

"queryExamples": [

{"pattern": "hello world"},

{"pattern": "hello Google I/O"}

}]

if you register a trigger like the above, then when a user enters the phrase “hello Google I/O”, the content will not only be shown in the search results, but the user interface will also be shown on a Smart Display. That is, Google Assistant will provide interactive content suitable for the Smart Display. Of course, users can fill in the text field by voice.

I think Google can change the search feature into an application platform. That is, I guess Google will be able to create an ecosystem of Mini-apps on top of search. Of course, Mini-apps bring a big benefit to users as well, because users can get things done directly in the search results without moving anywhere else.

Probably, other search engines will release similar features in the near future, I guess…

Interactive Canvas

As another new feature for Smart Displays, Google announced Interactive Canvas.

We can already configure a theme design for each conversational action. But Interactive Canvas brings the capability to provide an interactive, full-screen user interface on the Smart Display.

We can build an Interactive Canvas experience with web technology.

Importantly, users can operate the app by voice.

For example, when the user says something on the canvas, Dialogflow handles it, and your intent handler can move to the next scene. Then, on the app side, the code can render the next scene depending on the state value.
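To make this concrete, here is a minimal sketch of the app-side rendering step. This is not the actual Interactive Canvas API; the scene names and the renderScene() helper are hypothetical, and a real canvas app would draw with DOM or WebGL rather than return strings.

```javascript
// Minimal sketch of state-driven scene rendering on the app (canvas) side.
// The scene names and renderScene() are hypothetical; in a real app, new
// state would arrive from the webhook driven by Dialogflow intents.

const scenes = {
  title:  (state) => `Title screen for ${state.game}`,
  play:   (state) => `Playing level ${state.level}`,
  result: (state) => `You scored ${state.score} points`,
};

// Called whenever the intent handler pushes a new state to the canvas.
function renderScene(state) {
  const render = scenes[state.scene];
  if (!render) {
    throw new Error(`Unknown scene: ${state.scene}`);
  }
  return render(state);
}

// Example: the intent handler moved the user from "title" to "play".
const output = renderScene({ scene: "play", level: 3 });
```

The point of the sketch is the separation of concerns: the conversation side decides *which* scene the user is in, while the canvas side only decides *how* to draw it.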

It seems that Google will start providing this feature for building games only. Actually, I can understand this policy. I recognize that the current goal is to increase the number of conversational actions that bring users more value, but if we could use Interactive Canvas freely, most developers might build actions that depend entirely on the canvas. After all, the most important policy is “multimodal”. So I think the use cases of Interactive Canvas should remain limited.

Local Home SDK

Google Home/Nest Hub devices have a mic and a speaker. That was all, so far.

But the Local Home SDK turns these devices into a code execution platform. That is, hardware developers can write code, executed on Google Home/Nest Hub, that operates devices (e.g. lights, air conditioners, and so on) over WiFi. Actually, the code communicates with hub devices, rather than directly with concrete devices like a light.

Currently, users need to install, configure and use each device with different steps. However, if many devices support the Local Home SDK, users may be able to do all of that with common steps. Also, we can expect that an internet connection would become unnecessary when operating devices via Google Home/Nest Hub, because the code on the Google Home/Nest Hub can communicate with the hub device directly.
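As a rough illustration of that local execution path, the sketch below shows a command being fulfilled entirely over the local network. This is not the real Local Home SDK API; the LanTransport stub and the executeLocally() helper are hypothetical stand-ins for the LAN protocol handling the SDK would provide.

```javascript
// Hypothetical sketch: an EXECUTE command arriving at the Google Home/Nest
// Hub is fulfilled over the local network instead of a cloud round-trip.
// LanTransport stands in for a real LAN protocol (UDP/TCP/mDNS).

class LanTransport {
  constructor() {
    this.sent = []; // records packets "sent" on the local network
  }
  send(deviceAddress, payload) {
    this.sent.push({ deviceAddress, payload });
    return { status: "SUCCESS" }; // a real transport would await a device reply
  }
}

// Maps a high-level command to a device-specific local packet.
function executeLocally(transport, device, command) {
  const payload = { command: command.name, params: command.params };
  const reply = transport.send(device.address, payload);
  return reply.status;
}

// Example: turn a light on without leaving the local network.
const transport = new LanTransport();
const light = { id: "light-1", address: "192.168.1.20" };
const status = executeLocally(transport, light, {
  name: "OnOff",
  params: { on: true },
});
```

The design point is that the hub, not the cloud, holds the device-specific logic, which is why operation can keep working even when the internet connection is down.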

I think and believe that the Local Home SDK will bring a common way to use smart home devices, like Plug and Play did for PCs…

Conclusion

For Google Assistant, this Google I/O 2019 was a big milestone, I guess. For instance: Google Assistant gained high performance by shrinking the voice recognition data, Duplex on the web was announced, a new device “Nest Hub Max” was announced, several features bridging search and the Assistant were announced, and the Local Home SDK brings a new architecture for smart home devices.

Actually, the Smart Display is important. But I believe that multimodal, and above all voice, is more important. At this I/O, I understood that Google is working hard on the Smart Display. However, at Google I/O 2020, I hope to learn that Google has been working hard on a smart “earphone” as well.