In the lead-up to Google IO 2018 and after peering into the Android P Alpha release, some people noticed a mysterious new “Slices” class within the P SDK docs. There was a lot of speculation as to what Slices actually did, and Sebastiano Poggi at Novoda did a great deep-dive into what was known at the time.

But now that IO has passed, what have we learned about Slices, how can we leverage them in our apps, and why would we even want to? To answer that question, let’s start with App Actions.

Actions Everywhere

App Actions, also known as Actions on Google, are part of a new framework that allows developers to integrate their app fairly deeply with Google Assistant, Search, the Play Store and wherever else Google wants in the future. Whilst App Actions are not yet even in developer preview, we know a little about them from the overview website and various talks at IO.

There is a lot that you can do with App Actions such as interacting directly via Google Assistant and registering custom Intents, but for now we’ll focus on how they act as a bridge to Slice functionality.

At a high level, Google has defined a growing set of Intents, each with its own data schema. Developers then expose content within their app to these Intents, offering functionality via deep links. For the purposes of this article, we’ll be looking at the actions.intent.GET_CRYPTOCURRENCY_PRICE Intent, which has got me rather excited for obvious reasons.

Intents such as GET_CRYPTOCURRENCY_PRICE can be triggered by what Google are confusingly calling Semantic Intents — which are generally spoken-word or typed queries to the Google Assistant or Search. With our example Intent, a Semantic Intent might be:

What’s the current price of bitcoin in dollars?

What’s ETH doing today?

How much is Dogecoin worth?

For an app to handle such queries itself would be extremely difficult. Instead, Google recognises the intention of the user and delivers the result in a structured format that we can then process:

{
  "cryptocurrency": {
    "name": "string"
  },
  "exchange": {
    "name": "string"
  },
  "target monetary spec": {
    "currency": "string",
    "valid_at": "date string (ISO 8601 format)"
  }
}

In the examples above, identifying the cryptocurrency is fairly easy, but the fiat currency is often implicit; I would assume Google uses the device locale or geographic location to infer the most likely local currency. Much of natural speech is implicit like this, and Google handles the difficult inference for you in many cases.

This is all well and good, but how do we register our app to take advantage of these Intents?

Enter actions.xml

actions.xml is a new file that you include in your app, defining a mapping between App Action Intents and the URIs you wish your app to handle. Within this file, you essentially pull values out of the App Action schema defined by the Intent and insert them into whatever URL format your app accepts.

Note that this implementation is pulled from the IO session on App Actions; there’s no documentation yet, and it may well change by the time App Actions properly launch. As such, this is somewhat speculative.
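To give a flavour of the idea, a mapping for our cryptocurrency Intent might look something like the following. This is a sketch reconstructed from the IO session slides; the element names, attribute names and URL template here are all assumptions and will likely differ once the format is finalised.

```xml
<!-- actions.xml (hypothetical): maps an App Action Intent to a deep link -->
<actions>
    <action intentName="actions.intent.GET_CRYPTOCURRENCY_PRICE">
        <!-- {symbol} and {currency} are filled in from the Intent's schema -->
        <fulfillment urlTemplate="https://example.com/price{?symbol,currency}">
            <parameter-mapping
                intentParameter="cryptocurrency.name"
                urlParameter="symbol" />
            <parameter-mapping
                intentParameter="target_monetary_spec.currency"
                urlParameter="currency" />
        </fulfillment>
    </action>
</actions>
```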

This file is then scanned by Google when you upload an APK (or app bundle) to the Play Store, and the supported App Actions are stored in a database. Users who trigger a Semantic Intent associated with your app can then potentially see a suggestion to install it from the Play Store (one example, with more surfaces to follow in the future). It goes without saying that this can give you an edge in terms of discoverability.

Enter Slices

From here, you can handle the URL/deep link as you would any other within your app: register an intent filter in your manifest pointing at your class that extends SliceProvider.
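The manifest entry might look something like this; the class name, authority and host are assumptions for illustration:

```xml
<!-- AndroidManifest.xml (hypothetical entry) -->
<provider
    android:name=".CryptoSliceProvider"
    android:authorities="com.example.crypto"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.app.slice.category.SLICE" />
        <data android:scheme="https" android:host="example.com" />
    </intent-filter>
</provider>
```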

Within this class, you then override onMapIntentToUri to return a properly formed, non-null URI from the supplied Intent. With that done, you can start actually building your first Slice.
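A minimal sketch of such a provider, assuming a hypothetical com.example.crypto authority and URL structure:

```kotlin
import android.content.Intent
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider

// Hypothetical provider; the authority and URL structure are assumptions.
class CryptoSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    // Convert the incoming deep link into the content:// URI that
    // identifies this Slice, e.g.
    // https://example.com/price/BTC/USD -> content://com.example.crypto/price/BTC/USD
    override fun onMapIntentToUri(intent: Intent): Uri {
        val data = intent.data ?: return Uri.EMPTY
        return data.buildUpon()
            .scheme("content")
            .authority("com.example.crypto")
            .build()
    }

    // Slice construction happens in onBindSlice
    override fun onBindSlice(sliceUri: Uri): Slice? = null
}
```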

onBindSlice is the next method that needs overriding, and it’s here that you can extract more information from the URI (in this case, the cryptocurrency and fiat currency codes) and actually set up your UI.

Slices are somewhat unique in that you can’t merely inflate your own Views. Google instead provides a host of preset templates into which you can pass data (strings, images, icons etc.) as well as a handful of interactive features such as progress bars, sliders and toggles. This may be an issue for some, but it means that all Slices will be visually consistent, and Google says it makes them easily portable to new UI formats, devices and, dare I say it, operating systems.

Creating this UI is done via the ListBuilder class, which allows you to add rows of information; each row has methods such as setTitle, addEndItem and setPrimaryAction. You can also create a grid layout of sorts here. You also set a period of validity for the data, which in this example is infinite but should probably be quite short.
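As an illustrative sketch using the slice-builders-ktx DSL (the title text and price value here are placeholder assumptions):

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.list
import androidx.slice.builders.row

// Inside your SliceProvider subclass
override fun onBindSlice(sliceUri: Uri): Slice? =
    // INFINITY = the data never expires; a short TTL is wiser for prices
    list(context!!, sliceUri, ListBuilder.INFINITY) {
        row {
            setTitle("Bitcoin price")
            setSubtitle("$7,000.00") // placeholder; real data comes from the network
        }
    }
```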

This simple ListBuilder statement creates a neat little Slice that looks like this in the Slices demo app, which for now is the only way to view them:

A Slice! Essentially an on-demand, highly portable widget and gateway to your app.

Loading Content

This is all well and good, but you don’t necessarily have price information to hand — you’ll probably want to fetch an up-to-date quote from the network. However in onBindSlice , we see some documentation:

onBindSlice should return as quickly as possible so that the UI tied to this slice can be responsive. No network or other IO will be allowed during onBindSlice. Any loading that needs to be done should happen in the background with a call to {@link ContentResolver#notifyChange(Uri, ContentObserver)} when the app is ready to provide the complete data in onBindSlice.

This, like the Slice templates, is another sensible API design choice from Google. Effectively, Google are running StrictMode around this particular function. The solution here is pretty simple — provide a “loading” placeholder UI Slice, load your data asynchronously, and then call notifyChange as required.

notifyChange causes onBindSlice to be called again, so in our naive example we store the updated cryptocurrency price in a nullable property called bitcoinPrice, and decide whether or not to display a placeholder Slice based on whether that property is null. There are definitely better ways to do this, but it works for our purposes.
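A naive version of this pattern might look like the following; bitcoinPrice, fetchPrice() and the two build… helpers are hypothetical names for illustration:

```kotlin
// Inside the SliceProvider subclass; helper functions are hypothetical
@Volatile
private var bitcoinPrice: String? = null

override fun onBindSlice(sliceUri: Uri): Slice? {
    val price = bitcoinPrice
    return if (price == null) {
        fetchPriceAsync(sliceUri)        // start loading in the background
        buildLoadingSlice(sliceUri)      // placeholder UI for now
    } else {
        buildPriceSlice(sliceUri, price) // real data is available
    }
}

private fun fetchPriceAsync(sliceUri: Uri) {
    // onBindSlice must not block on IO, so load on a background thread
    Thread {
        bitcoinPrice = fetchPrice()      // hypothetical network call
        // Trigger onBindSlice again now that the data is ready
        context?.contentResolver?.notifyChange(sliceUri, null)
    }.start()
}
```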

As far as next steps are concerned, you would most likely want to add some interactivity to the Slice, or at the very least link back into your app. Opening an Activity is achieved with setPrimaryAction, which takes a SliceAction object: essentially a wrapper around a PendingIntent, an icon and a description.

Fully interactive Slices, such as sliders, accept a PendingIntent via setInputAction; you then register a BroadcastReceiver for it and handle the input accordingly.
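For the simpler “open the app” case, the SliceAction might be built like this; MainActivity and the icon resource are assumptions:

```kotlin
import android.app.PendingIntent
import android.content.Intent
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

// Inside the SliceProvider subclass; pass the result to setPrimaryAction
private fun openAppAction(): SliceAction {
    val intent = Intent(context, MainActivity::class.java)
    val pendingIntent = PendingIntent.getActivity(context, 0, intent, 0)
    return SliceAction.create(
        pendingIntent,
        IconCompat.createWithResource(context, R.drawable.ic_bitcoin),
        ListBuilder.ICON_IMAGE,
        "Open price details" // title / content description
    )
}
```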

Sign me up!

If this all sounds great to you, Android Studio 3.2 Canary offers a handy quick-start for implementing Slices. New -> Other -> SliceProvider leads you to a wizard that adds the necessary manifest entries for you, and allows you to personalise some things such as the URLs that you want to handle.

Bear in mind that none of the App Actions stuff that actually links your Slices to Google’s indexing is available yet, nor will an app with a “valid” actions.xml compile for now.

Nevertheless, you can test out your Slices via the command line and the Slice Viewer APK until App Actions go live. I’d encourage you to have a bit of a play with it, and to watch the Slices IO session, which really convinced me that Slices are a powerful feature worth investigating.
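Assuming you have the Slice Viewer APK to hand, viewing a Slice from the command line looks roughly like this; the APK path and URI are hypothetical ones for our example:

```shell
# Install the Slice Viewer app (path is an assumption)
adb install -r slice-viewer.apk

# Ask the viewer to render the Slice behind our hypothetical URL;
# note the "slice-" prefix before the full URI
adb shell am start -a android.intent.action.VIEW \
    -d slice-https://example.com/price/BTC/USD
```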

Great, but why?

With the example I’ve run with here, users who have the app installed and search for “what is the bitcoin price”, or even potentially just “bitcoin”, will see a Slice in Search or the Google Assistant. This can quite easily drive more engagement into your app, and I’m sure you can think of lots of interesting examples for your particular app or company. Interactivity is a great use case too, offering rapid access to features for users who may not have realised that such features existed. It’s these little surprise-and-delight features that we all love in our apps, and that personally I enjoy implementing the most.

At the same time, adding App Actions on its own can give you an edge in Play Store listings, which can be extremely valuable. It’s unclear quite what Google plan to do with this stuff (again, App Actions aren’t even in developer preview yet), but the potential is there for driving more interaction with your app across multiple input types and devices.

For a more complete example, check out my test project repo where there’s a slightly rough but concrete implementation. For Google’s own documentation, check out their getting started guide here.

If this was helpful to you please leave some applause, and I’m more than happy to receive feedback. Finally, if you want to work on this sort of stuff, we’re hiring!