Now that you’ve downloaded Interhaptics and followed all the instructions (if you haven’t, click here), you’re ready to extend your reality. In this blog post, we will analyze how Interhaptics manages the implementation of 3D interactions for VR/AR to optimize the user experience.

3D and hand interactions for VR and AR are usually inspired by reality. We see a button in VR, and we immediately know that we must push it.

Similarly, when we see a virtual object representing a real one, we respond with an interaction that we are familiar with. We see a ball, we reach out and grab it.

If we look at this interaction in a little more detail, we have:

Reach out and grab a ball

Displace it

Release it

From this segmentation, we can generalize. We can represent 3D interactions with three main components:

The starting condition (grab)

The spatial transformation (displace)

The ending condition (release)

This seems obvious! Why bother to define and segment a simple concept?

This framework becomes useful when we approach more complex interactions, with programmable starting or ending conditions designed to meet a user-experience objective.
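The three components can be sketched as a tiny per-frame state machine. This is only an illustrative Python sketch: the names (`Phase`, `InteractionPrimitive`) and the callback signatures are our assumptions, not the actual Interhaptics API.

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()    # waiting for the starting condition
    ACTIVE = auto()  # applying the spatial transformation
    DONE = auto()    # ending condition met

class InteractionPrimitive:
    """Illustrative grab/displace/release primitive (not the Interhaptics API)."""

    def __init__(self, start_condition, transform, end_condition):
        self.start_condition = start_condition  # state -> bool, e.g. "hand pinches the ball"
        self.transform = transform              # state -> None, e.g. "ball follows the hand"
        self.end_condition = end_condition      # state -> bool, e.g. "hand opens"
        self.phase = Phase.IDLE

    def update(self, state):
        """Call once per frame with the current hand/scene state."""
        if self.phase is Phase.IDLE and self.start_condition(state):
            self.phase = Phase.ACTIVE
        if self.phase is Phase.ACTIVE:
            self.transform(state)
            if self.end_condition(state):
                self.phase = Phase.DONE

# The ball example: grab (pinch), displace (ball follows the hand), release (open hand).
state = {"pinching": False, "hand_pos": 0.0, "ball_pos": 0.0}
grab_ball = InteractionPrimitive(
    start_condition=lambda s: s["pinching"],
    transform=lambda s: s.update(ball_pos=s["hand_pos"]),
    end_condition=lambda s: not s["pinching"],
)
grab_ball.update(state)                     # nothing happens: not pinching yet
state.update(pinching=True, hand_pos=1.5)
grab_ball.update(state)                     # grab starts, ball follows the hand
state["pinching"] = False
grab_ball.update(state)                     # release: the primitive completes
```

The point of the sketch is the separation: swapping any one of the three callbacks changes the interaction without touching the other two.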

On our website, we provide multiple ready-to-use demos to explain this concept. In a professional training scenario, we want a user to wield a wrench in order to turn a bolt exactly 750 degrees, with the hand releasing at the end of the interaction.

If we segment this interaction with the previous framework we have:

Starting condition: Grab

Transformation: 750-degree rotation around the bolt’s axis

Ending Condition: Reach the final position

This interaction is a little more complex than the previous one, but the segmentation into the three main blocks allows us to visualize it simply as a logical sequence. We call these items “interaction primitives”. An interaction primitive is defined by a starting condition, a transformation, and an ending condition.
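As a self-contained sketch of the bolt primitive (the function name and the clamping behavior are our assumptions, not the actual Interhaptics implementation), the transformation accumulates the wrench’s rotation each frame and the ending condition fires at exactly 750 degrees:

```python
TARGET_DEGREES = 750.0  # the bolt must turn exactly this far

def run_bolt_interaction(rotation_deltas):
    """rotation_deltas: per-frame wrench rotation in degrees, while grabbed.

    Returns (total_rotation, done). Illustrative sketch only.
    """
    total = 0.0
    for delta in rotation_deltas:
        # Transformation: rotate the bolt, clamped so it never overshoots.
        total = min(total + delta, TARGET_DEGREES)
        if total >= TARGET_DEGREES:
            return total, True   # ending condition met: the hand releases
    return total, False          # user let go before reaching the target

angle, done = run_bolt_interaction([200.0, 300.0, 300.0])
# → angle == 750.0, done is True (the last frame is clamped at the target)
```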

If you think about your smartphone, you are already using these interaction primitives every day. Drag and drop on a smartphone screen has a starting condition (tap), a transformation (drag), and an ending condition (reaching the target or lifting the finger from the screen). Apple was, and still is, one of the best companies at optimizing interaction primitives. With this method, the transformation doesn’t need to follow the movement of the finger exactly, which makes it simple to add inertial effects to the graphical outcome.
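One way to picture that decoupling (purely illustrative; real touch pipelines are far more sophisticated): instead of copying the finger position, the dragged icon chases it with exponential smoothing, which yields an inertial feel almost for free.

```python
def smooth_drag(finger_positions, smoothing=0.5, start=0.0):
    """Illustrative 1D drag: the icon moves a fraction of the remaining
    distance toward the finger each frame, instead of snapping to it."""
    pos = start
    trail = []
    for target in finger_positions:
        pos += smoothing * (target - pos)  # chase the finger, never teleport
        trail.append(pos)
    return trail

# Finger jumps to x=10 and holds; the icon eases toward it over several frames.
trail = smooth_drag([10.0, 10.0, 10.0])
# → [5.0, 7.5, 8.75]
```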

One of the fascinating outcomes of this method is that we can represent a single interaction as one block, and a set of interactions as a block diagram.

What makes interaction primitives interesting

When you are creating interactive scenarios with several interactions in sequence, you need to implement and execute a set of interactions quickly. You need consistency, and the ability to modify the parameters of your interactive content on the go. Each interaction primitive emits a completion signal once it finishes, which can trigger animations or actions, or enable further interactions to build user scenarios.
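A minimal sketch of that chaining, assuming nothing about the Interhaptics API beyond “a primitive signals when it completes” (all names below are hypothetical):

```python
def run_scenario(steps):
    """steps: list of (name, tick) pairs; tick() returns True once the
    primitive's ending condition is met. Each completion enables the next step."""
    log = []
    for name, tick in steps:
        while not tick():                # drive the primitive frame by frame
            pass
        log.append(f"{name} completed")  # the ending signal other systems can react to
    return log

def fake_primitive(frames_needed):
    """Stand-in primitive that completes after a fixed number of frames."""
    state = {"left": frames_needed}
    def tick():
        state["left"] -= 1
        return state["left"] <= 0
    return tick

scenario = [
    ("grab wrench", fake_primitive(1)),
    ("turn bolt 750 degrees", fake_primitive(3)),
    ("release", fake_primitive(1)),
]
log = run_scenario(scenario)
# → ['grab wrench completed', 'turn bolt 750 degrees completed', 'release completed']
```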

Interhaptics developed a large set of interaction primitives that you can apply with just a few clicks inside your 3D engine to make your scenario interactive. These interaction primitives can also be applied via an API and embedded in a shipped product to easily implement interactive content. You can see some of them in this video:

Interaction Demonstrator video, available on YouTube

A bonus of these interaction primitives is that during the transformation phase you can load haptic materials and enrich your interactions with tailored haptic feedback. This is just a few clicks away.

Interaction primitives are the best scriptwriters for modeling your interactive content in your XR environment. Check out our demo video now to see the full potential of interactions.