« Hey Siri, help me »

The first part of this concept is focused on Siri. The idea here is not to create new commands, but rather to present existing voice requests that already work well (like « Find me a good restaurant nearby » or « Get me pictures of Japan I took last year ») in a different way, so they become more useful to the user.

In iOS Mogi, Siri has been designed around a concept I call parallel help. The idea is to have a voice assistant that is non-intrusive (it won’t take over the whole screen as it does today), context-aware, and able to do things in the background while the user is doing something else.

As images are more explicit than words, here’s a very simple example:

Using Siri in Messages.

When using apps, Siri takes the shape of a notification so as to be as unobtrusive as possible (if summoned from the lock screen or the home screen, it will still be fullscreen).

Siri in iOS Mogi.

In the example above, I ask Siri to show me pictures of Japan because I want to send one to my friend Yannick. Once the request is fulfilled, the result is displayed in the Siri notification, so I can continue what I was doing without being interrupted. I can swipe down the notification to expand it and select the photos I want to send.

Selecting pictures before sending them.

Or, I can directly drag a picture from the expanded notification to the app below (finally putting the drag-and-drop API to good use on the iPhone):

Dragging a photo from Siri to Messages. To cancel the drag-and-drop, drag the object to the screen’s borders.
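For the curious, here’s a minimal sketch of how the expanded notification could vend its photos using UIKit’s drag-and-drop API, which is the real mechanism mentioned above. The `SiriPhotoCell` name and the notification plumbing are my own assumptions, not a real SiriKit interface:

```swift
import UIKit

// Hypothetical cell inside the expanded Siri notification. It vends
// its photo as a drag item so it can be dropped into the app below.
final class SiriPhotoCell: UICollectionViewCell, UIDragInteractionDelegate {

    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        imageView.frame = contentView.bounds
        imageView.isUserInteractionEnabled = true
        contentView.addSubview(imageView)

        // UIDragInteraction is the standard UIKit drag-and-drop entry point.
        // It is disabled by default on iPhone, so we opt in explicitly.
        let drag = UIDragInteraction(delegate: self)
        drag.isEnabled = true
        imageView.addInteraction(drag)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Provide the dragged photo as an NSItemProvider, the same mechanism
    // Photos and Messages already use to accept drops.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = imageView.image else { return [] }
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}
```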

If it has not been used for a while, the notification shrinks down to the top of the screen:

Shrinking down.

And it is possible to open it again by swiping down from the top of the screen (we’ll come back to this later):

Opening Siri again. The last request pops up again.

What’s really cool about the new Siri is that the results it displays can be used in another app very easily if needed:

When switching between apps, the result of the request remains accessible so it can be used again.

I believe designing Siri to be non-intrusive opens up many new use cases and could set it apart from its competitors, thanks to its deep integration into the OS. Here are a few examples of what could be achieved with this new Siri:

In response to “Find restaurants nearby”. Tap the bubble for more info, or drag to share it. Swipe to see other restaurants nearby.

Proactive Siri. If Siri detects an address or determines you are running late, it pops up and lets you take action. In this case, tap the notification to send a message to Craig, or swipe down the notification to see your options. An action sheet lets the user choose between their favorite messaging apps.
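To give a sense of how the detection side could work, here’s a small sketch using Foundation’s NSDataDetector, a real API that already recognizes addresses and dates in text. Everything around it, such as how a match becomes a notification, is assumed:

```swift
import Foundation

// A sketch of the detection side of proactive Siri. NSDataDetector is
// a real Foundation API; what happens with a match is assumed.
func scanMessage(_ text: String) {
    let types: NSTextCheckingResult.CheckingType = [.address, .date]
    guard let detector = try? NSDataDetector(types: types.rawValue) else { return }

    let range = NSRange(text.startIndex..., in: text)
    for match in detector.matches(in: text, options: [], range: range) {
        if match.resultType == .address {
            // e.g. offer directions, or compare travel time with the next
            // calendar event to decide whether the user is running late.
            print("Found address:", match.addressComponents ?? [:])
        } else if match.resultType == .date, let date = match.date {
            print("Found date:", date)
        }
    }
}

// scanMessage("Dinner at 1 Infinite Loop, Cupertino, CA tomorrow at 7pm")
```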

And there’s even more to it. In iOS Mogi, it is possible to ask Siri to show pages from one app while doing something else in another. For instance, I could be writing an email and want to add a picture that a friend sent me on iMessage. Here’s what it would look like in iOS Mogi:

Opening a conversation while writing an email thanks to Siri.

That’s the beginning of true multitasking on a mobile device, and I think it is better suited to mobile constraints than simply splitting the screen vertically in two.

« Hey Siri, I want to… »

You know how sometimes you feel frustrated when you try to send a message with Siri, and end up grabbing your phone and typing the text yourself? In iOS Mogi, instead of relying entirely on Siri to do things for you, you can ask it to help you get things done faster. No more wandering through the UI: simply begin your sentence with « I want to… » and Siri will let you do it without leaving what you were doing (in iOS Mogi, what you are doing is really precious).

Writing a message to Yannick while writing an email.
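As a toy illustration, here’s what the routing behind « I want to… » could look like. None of these types are real SiriKit APIs; this is only a sketch of the idea of overlaying the requested task on top of the current app:

```swift
// A toy sketch of « I want to… » routing. WantToRouter and Overlay
// are hypothetical; they only illustrate keeping the current app in
// place and overlaying the requested task on top of it.
struct WantToRouter {

    // Hypothetical overlay actions the system could present
    // without leaving the current app.
    enum Overlay {
        case composeMessage(recipient: String)
        case unknown(String)
    }

    func route(_ transcript: String) -> Overlay? {
        let prefix = "i want to "
        let lowered = transcript.lowercased()
        guard lowered.hasPrefix(prefix) else { return nil } // not an « I want to… » request
        let task = String(lowered.dropFirst(prefix.count))

        if task.hasPrefix("write a message to ") {
            let name = String(task.dropFirst("write a message to ".count))
            return .composeMessage(recipient: name)
        }
        return .unknown(task)
    }
}

// Usage: the message sheet opens over the email being written.
// WantToRouter().route("I want to write a message to Yannick")
// -> .composeMessage(recipient: "yannick")
```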

And it works right from the lock screen:

Writing to Yannick from the Lock Screen with Siri.

« Hey Siri, scroll down a bit »

With Siri now completely non-intrusive, new use cases emerge. One of them is Siri actions.

Siri actions give every touch gesture a voice-command counterpart. From tapping to scrolling, everything can now be performed with voice requests alone. So I can ask Siri to scroll down my list of albums, for instance, and open High as Hope by Florence + The Machine.

Navigating in Apple Music with Siri.
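To make the idea concrete, here’s a hypothetical sketch of how a transcribed request could be translated into an actual scroll, using nothing more than UIScrollView’s standard API. The `SiriActionHandler` type and the phrase matching are assumptions of mine:

```swift
import UIKit

// Hypothetical bridge between a transcribed request and on-screen
// gestures. Nothing here is a real SiriKit API; it is a sketch of
// how "Siri actions" could drive an existing scroll view.
final class SiriActionHandler {

    weak var scrollView: UIScrollView?

    func handle(_ transcript: String) {
        let request = transcript.lowercased()
        if request.contains("scroll down") {
            scroll(by: 0.8)   // scroll by roughly one screen
        } else if request.contains("scroll up") {
            scroll(by: -0.8)
        }
    }

    private func scroll(by pages: CGFloat) {
        guard let scrollView = scrollView else { return }
        let maxOffset = max(0, scrollView.contentSize.height - scrollView.bounds.height)
        var offset = scrollView.contentOffset
        offset.y = min(max(0, offset.y + pages * scrollView.bounds.height), maxOffset)
        scrollView.setContentOffset(offset, animated: true) // same effect as a flick gesture
    }
}
```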

Elementary use cases like this one were previously impossible, as Siri would take over the whole screen and was not context-aware. Now, when writing an email for instance, I can ask Siri to change the recipient, modify my signature, or even change the text style on the go:

Editing an email on the go with Siri.

I think it would be a huge step forward for disabled people in particular: Siri actions would make it easier than ever for them to navigate the OS. Even for non-disabled people, it would be really useful when your hands are busy, when cooking for example, or simply to make repetitive tasks easier (as seen above in Mail).

And things could go even further thanks to ARKit 2, which has proved precise enough to track the eyes (but let’s try to keep things simple for this concept).
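As a side note, here’s roughly what reading that gaze data looks like with ARKit 2’s real face-tracking API, ARFaceAnchor, which exposes per-eye transforms and an estimated look-at point on TrueDepth devices. The `GazeTracker` wrapper is my own:

```swift
import ARKit

// ARKit 2's ARFaceAnchor exposes per-eye transforms and an estimated
// gaze point on TrueDepth devices. The GazeTracker wrapper is mine.
final class GazeTracker: NSObject, ARSessionDelegate {

    private let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // lookAtPoint is the estimated gaze target in face-anchor space;
        // mapping it to screen coordinates is left out of this sketch.
        print("Gaze:", face.lookAtPoint, "left eye:", face.leftEyeTransform)
    }
}
```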

I believe that Siri actions are very much in line with the initial approach of Siri, which was to allow users to perform simple actions with their voice. Making Siri non-intrusive just takes it a step further.

Siri actions also bring a new ability: saving elements from anywhere to use them elsewhere. Open a photo, say « Save this photo », and Siri will keep it.

Ask Siri to save elements and it will keep them for you.

Open another app, and drag the photo from the Siri notification to the app:

Dragging a saved picture into Medium.

And it works for copy-pasting too:

When content is copied from somewhere, it appears in a Siri notification so it can be used elsewhere easily. Swipe down the notification to see all of your previously saved content, like text, images, or emails.

Simply tap to paste your text into the current app, right where the cursor is. You can also drag the content to wherever you want:

Pasting content from Notes to Medium in a second thanks to Siri.
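Under the hood, the copy-to-Siri behavior could start from something as simple as observing the pasteboard. UIPasteboard.changedNotification is a real UIKit notification (it is only delivered for in-app changes, so a system feature would hook deeper), while the `SavedElementsStore` around it is hypothetical:

```swift
import UIKit

// A sketch of the "copied content becomes a Siri notification" idea.
// UIPasteboard.changedNotification is real UIKit; the store is not.
final class SavedElementsStore {

    static let shared = SavedElementsStore()
    private(set) var elements: [String] = []

    private init() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(pasteboardChanged),
            name: UIPasteboard.changedNotification,
            object: nil
        )
    }

    @objc private func pasteboardChanged() {
        // Keep a history of copied text so it can be pasted elsewhere later.
        if let text = UIPasteboard.general.string {
            elements.append(text)
            // Here the system would surface the Siri notification banner.
        }
    }
}
```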

You can ask Siri to show you all your saved elements with the sentence « Show me all my saved elements »:

Just ask “Show me my saved elements” and Siri will display them in front of you. You can drag them onto the app below or tap them to edit before use.

Once saved elements are used elsewhere, they disappear from the list.

And that’s it for the new Siri. Now, what if we applied the same principles of non-intrusive multitasking to other areas of the OS? Surely, the new gestures and visual language could be applied elsewhere. What could we do with them? Before digging further into this idea, let’s see if we can improve the experience in a particular area where I personally spend a lot of time: Apple Maps.