There are essentially two ways to design applications for a new product category like Google Glass. On the one hand, you can simplify: for example, never showing more than one message at a time and limiting the possible interactions. This is apparently what the current, simple Glass API aims for (though Google will probably release a second, more powerful API for Glass soon). That is not necessarily a bad approach: it keeps the overall experience clean, good-looking, and fast, and it may be the best way to start on such a new system.

On the other hand, you could try to create fully functional experiences, on the level users would also expect from other devices.

In this concept I focused on two main aspects. On the one hand, I wanted to show such a fully functional experience, similar to the real Play Store on Android phones. On the other hand, I tried to find a way the store could be used without any voice commands at all, using only the touchpad.

To achieve the latter, scrolling and swiping play a very important role. Scrolling past the bottom or top of a page can set off full interactions, such as installing an application or navigating back (much like the 'swipe down to refresh' pattern found in many mobile applications).
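The overscroll idea can be sketched as a small piece of logic. This is a minimal, hypothetical example, not the real Glass API: the class name, the pixel values, and the action strings are all assumptions. It tracks a scroll offset and fires a full interaction once the user scrolls past the top or bottom of the page by more than a threshold.

```java
// Hypothetical sketch of the overscroll-to-action idea (names and values
// are assumptions, not from any real Glass API). Scrolling past the top
// triggers "navigate back"; past the bottom triggers "install".
public class OverscrollDetector {
    private final int contentHeight;   // total scrollable height, in px
    private final int viewportHeight;  // visible height, in px
    private final int threshold;       // overscroll distance that fires an action
    private int offset = 0;            // current scroll position

    public OverscrollDetector(int contentHeight, int viewportHeight, int threshold) {
        this.contentHeight = contentHeight;
        this.viewportHeight = viewportHeight;
        this.threshold = threshold;
    }

    /** Apply a scroll delta; returns the action it sets off, or null. */
    public String onScroll(int delta) {
        offset += delta;
        int maxOffset = contentHeight - viewportHeight;
        if (offset < -threshold) {
            offset = 0;                // snap back to the top
            return "NAVIGATE_BACK";    // overscrolled past the top
        }
        if (offset > maxOffset + threshold) {
            offset = maxOffset;        // snap back to the bottom
            return "INSTALL_APP";      // overscrolled past the bottom
        }
        return null;                   // ordinary scrolling, nothing fired
    }
}
```

With a 600 px page in a 300 px viewport and a 50 px threshold, scrolling 400 px down would fire the install action, while scrolling well past the top would navigate back.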

Furthermore, there is also a new way to point the user to voice commands incidentally: the commands are often part of the headlines, with the actual title formatted differently from the part you can say.
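One way such headlines could be handled is to mark the speakable command in the raw headline text and split it out before rendering, so the UI can style the two parts differently. The bracket convention and the parser below are purely an assumption for illustration; the concept itself does not specify how this would be implemented.

```java
// Hypothetical headline convention (an assumption, for illustration only):
// the speakable command is wrapped in square brackets, e.g. "[Install] Angry
// Birds", and a tiny parser separates it from the rest of the title so the
// UI can render the command in a distinct style.
public class HeadlineParser {
    /** Returns {command, remainingTitle}; command is empty if none is marked. */
    public static String[] parse(String raw) {
        int open = raw.indexOf('[');
        int close = raw.indexOf(']');
        if (open < 0 || close < open) {
            return new String[] {"", raw};  // no marked command in this headline
        }
        String command = raw.substring(open + 1, close);
        String rest = (raw.substring(0, open) + raw.substring(close + 1)).trim();
        return new String[] {command, rest};
    }
}
```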

All in all, I think this could work really well. The fact that you don't need voice commands is what I'd like most about it. You just cannot always speak freely: sometimes it's too noisy, and sometimes you simply don't want everybody to know what you're doing.

What's your opinion on this concept? Is there any need for app store access directly on Glass at all? Might this be too complex for Google Glass?