From grabbing robots in a dance club to stroking flower petals, we love designing new interactions that put your hands into new digital landscapes. Featuring just three scenes and a few core interactions, our V2 orientation app was designed to give users a quick introduction to the new software's tracking capabilities – as well as showcase our rigged hand asset for Unity.

In this post, we’ll look at Playground’s unique object interactions, including how “ghost hands” can help orient your users. If you want to take a closer look, be sure to download the Unity source for Playground on our GitHub page.

The first scene serves solely to orient the user with their digital hands in space. When the scene fades in, a "ghost hand" appears, visually encouraging you to wiggle your fingers in the air. By physically demonstrating an introductory action instead of using text, the ghost leads by example, without feeling didactic – encouraging exploration and play.

The next stage is Dancing Robots, which helps users learn how to pick up objects and interact with them in 3D space. The ghost hand makes a reappearance – gently instructing you to reach out and "pinch" one of the blocks between your thumb and index finger. Because the blocks are sized to match the distance at which the pinch gesture triggers, users quickly learn to anticipate when an object will stay in their grasp.
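The actual pinch logic lives in the Unity (C#) source on GitHub, but the underlying idea is simple enough to sketch in a few lines: a pinch fires when the thumb and index fingertips come within a small distance of each other. The function names and the trigger distance below are assumptions for illustration, not values from the project.

```python
import math

# Assumed trigger distance in meters -- not the value used in Playground.
PINCH_TRIGGER_DISTANCE = 0.03

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinching(thumb_tip, index_tip, threshold=PINCH_TRIGGER_DISTANCE):
    """A pinch is detected when thumb and index fingertips are close enough."""
    return distance(thumb_tip, index_tip) < threshold
```

Sizing the blocks close to this trigger distance is what lets users internalize the gesture: the moment the fingers visually close around a block is the same moment the grab actually engages.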

Colors are another important visual cue, changing to reflect the hand's relationship to each block. This allows the user to learn three action states: rest, hover, and grab. The resting block is the darkest. When the hand approaches the block, it glows slightly. When you successfully pick up the block with a pinch, it glows brightly. This type of visual feedback rewards repeated "good behavior," while at the same time encouraging exploration within the scene to see what other interaction opportunities might lurk beneath the surface.
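The three-state feedback loop can be sketched as a tiny state classifier plus a state-to-glow mapping. The hover radius and glow values here are made up for illustration; the real behavior is implemented in the scripts attached to the block objects in the Unity source.

```python
def block_state(hand_distance, is_grabbed, hover_radius=0.1):
    """Classify a block's interaction state from hand proximity and grab status.

    hover_radius (meters) is an assumed value, not taken from the project.
    """
    if is_grabbed:
        return "grab"
    if hand_distance < hover_radius:
        return "hover"
    return "rest"

# Resting blocks are darkest, hovered blocks glow slightly,
# grabbed blocks glow brightly.
GLOW_INTENSITY = {"rest": 0.1, "hover": 0.5, "grab": 1.0}
```

Keeping the mapping monotonic – brighter as the interaction deepens – is what makes the feedback legible without any text.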

Once the first robot body marches through the door, it bumps suggestively into one of the blocks, provoking the user to explore a potential connection between the two objects. Sure enough, a lightning bolt appears – illustrating the magnetic property between the two. Once this property breathes life into the robot, the user is eager to try it with the rest of the blocks to see what other personalities they can unlock. Giving users a goal makes them feel productive, and trains them to perform the action with greater and greater ease as each robot emerges.

It’s also important to note that the reflections and shadows built into the floor give the user a sense of Z-depth, so they can tell how near or far their digital hand is in the scene. From a design perspective, all of these dynamic visual cues reduce cognitive friction and make the experience feel progressively more intuitive.

The third and final scene in Playground takes us to a tranquil pond. A delicate flower emerges jauntily from it. Compared to the Dancing Robot club we just left, the colors and audio have an immediate calming effect. (But you might notice that the reflectiveness of the pool carries the same Z-depth perspective as the shiny dance floor had previously!)

The ghost hand returns, this time playfully pressing down on the flower, inviting the user to reach out and interact with it. Very few animations were programmed into this scene – it all stems from the user’s dynamic hand movements. This reinforces the user’s freedom to be dexterous within their digital environment in a radically divergent way from traditional 2D computing interfaces.

In this scene, the goal of plucking the petals off the flower and watching them transform into fish is perhaps slightly less straightforward than the robot scene – but that’s OK! We want people to feel like they have the freedom to explore, because it creates a sense of immediacy, power, and presence. Bringing in more challenging textures and objects, such as the paper-thin petals and the spring-like stem, lets you build upon the pinching paradigm you learned in the previous scene, gaining confidence in the space with each action.

The flower petals are designed to orient themselves naturally when you grab them. Diving into the project on GitHub, you can see their unique structure and behavior in the object properties and scripts. The Hand object contains all of the grabbing logic – when it makes a grabbing motion, it looks for an object to grab.
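A common shape for this kind of "look for an object to grab" logic is a nearest-candidate search within reach: when the grab gesture fires, pick the closest grabbable object inside a small radius, or nothing if everything is too far away. This is only a sketch of that pattern – the function name, the radius, and the object format are assumptions, not the actual Playground code.

```python
import math

def nearest_grabbable(hand_pos, objects, grab_radius=0.08):
    """On a grab gesture, return the closest grabbable object within reach.

    Returns None if no object is within grab_radius (meters, assumed value).
    Each object is assumed to be a dict with a "position" (x, y, z) tuple.
    """
    best, best_dist = None, grab_radius
    for obj in objects:
        d = math.sqrt(sum((h - o) ** 2
                          for h, o in zip(hand_pos, obj["position"])))
        if d < best_dist:
            best, best_dist = obj, d
    return best
```

Resolving the grab to a single nearest target is what keeps the interaction predictable when several petals or blocks sit close together.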

You’ll also find scripts attached to the object data designed for hand interactions, so that blocks turn different colors, or the petals respond to virtual touch. Behaviors like tweaking the flower petals to feel realistically delicate, or deciding when a robot head should leap into your hand or drop back down, were among the most challenging aspects of designing Playground.

What do you think about Playground’s interaction progression and UX design? Let us know in the comments. If you’re looking to build your own 3D interactive experience, be sure to check out our Unity getting started guide, including new assets for VR.

Epilogue: Matt on Twitch!

In March 2015, Playground designer Matt Tytel joined us on our Twitch channel to dive into building intelligent 3D interaction schemes, translating sound into physics simulations, and how to “use the whole buffalo” in user interface design.

For cutting-edge projects and demos, tune in every Tuesday at 5pm PT to twitch.tv/leapmotiondeveloper. To make sure you never miss an episode, enter your email address to subscribe to up-to-the-minute Twitch updates.



