Over the last few months, a few determined developers have been hard at work redefining what the ideal cross-platform toolkit should look like. The goal is to create a unified experience across AR, XR, and VR, sharing and enabling synergies where those platforms overlap, and exemplifying their core strengths where they stand apart. We’ve taken community feedback and used it to create a cutting-edge tool that unlocks developers’ potential, no matter which platform your project targets.

Enter the next generation of the Mixed Reality Toolkit.

It should be as easy as one, two, three…

Setting up the Mixed Reality Toolkit should be as easy as downloading, importing, and pressing configure. There are no additional changes to make to your project settings, and no prefabs or objects to add to your scene. The goal is to make projects more portable and ease the burden of getting things up and running quickly.

The default configuration profile

Customize all the things!

Unity’s scriptable objects are powerful tools that unlock the potential for extreme customization of every feature area. We’ve set up multiple “profiles” for developers to use right out of the box, and we’ve also given them the ability to create new custom profiles with ease.

We’ve created a main configuration profile where you’ll see each feature area and its specific options. Each feature can be individually turned on or off depending on the needs of your project.

We’ve also provided the ability to replace any feature within the toolkit with your own class and implementation.

Defining Custom Implementations

We’ve given developers the keys to the castle with the ability to swap out feature areas with their own implementations. Don’t like the way one system handles a specific task? Already have a tried-and-true system for handling your input? Great, this is the path for you. Simply implement the feature’s interface in the class that handles your main business logic, satisfy the requirements for that feature, and then specify your type in the configuration profile.
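As a rough sketch of the pattern (the interface and class names below are illustrative only; the toolkit’s actual system interfaces cover a much larger surface):

```csharp
using UnityEngine;

// Hypothetical interface standing in for one of the toolkit's system
// interfaces (the real ones, such as the input system's, are larger).
public interface IMyFeatureSystem
{
    void Initialize();
    void Update();
}

// Your own implementation of the feature's business logic. Registering
// this type in the configuration profile tells the toolkit to construct
// it in place of the default system.
public class MyCustomFeatureSystem : IMyFeatureSystem
{
    public void Initialize()
    {
        Debug.Log("Custom feature system ready.");
    }

    public void Update()
    {
        // Your tried-and-true logic goes here.
    }
}
```

Once the type is specified in the configuration profile, the rest of the toolkit talks to your system only through the interface, so nothing else needs to change.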

Here are the systems currently implemented that you can configure or replace to your heart’s desire. There will be more as the toolkit evolves.

The camera profile

The Camera Profile

The camera profile was designed to centralize all of the camera’s properties in a single place. This profile intelligently switches the camera’s properties depending on the platform the project is built for. Opaque and transparent display settings are just the first options available, and the list will likely grow in the future.
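Conceptually, the switch works along these lines (a simplified sketch; the real profile drives more properties, and `displayIsTransparent` stands in for the toolkit’s own platform check):

```csharp
using UnityEngine;

public class CameraSetupSketch : MonoBehaviour
{
    // Assumption: stands in for the toolkit's runtime check of the display type.
    public bool displayIsTransparent;

    private void Start()
    {
        var cam = Camera.main;
        if (displayIsTransparent)
        {
            // Transparent (AR) displays render black as "see-through",
            // so clear to a solid transparent color.
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = Color.clear;
        }
        else
        {
            // Opaque (VR) displays get a normal skybox.
            cam.clearFlags = CameraClearFlags.Skybox;
        }
    }
}
```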

The Input System

The Mixed Reality Toolkit’s Input System is quite extensive, and it’s designed to handle just about every kind of input, since Unity’s old input system leaves a lot to be desired. We’ve streamlined the process of setting up devices and taken all of the hassle out of defining controllers and figuring out which button and axis maps to what. Much of the workflow for utilizing input hasn’t changed from the original HoloToolkit, with one exception:

The Input Action profile

Input Actions

Input Actions are the key to a whole new way of working with input from any source. This level of abstraction removes the need to listen for platform-specific code or key presses. Developers can specify any number of actions for their application to use. Actions can also be constrained to a specific axis, which filters the list when assigning actions to physical inputs in the controller profile.

Controller Profiles

Long gone are the days when developers had to be bound by specific platform code and listening for specific button presses. The toolkit’s controller templates accelerate the setup time by providing developers with a simple interface where they only have to assign Input Actions to physical controller inputs.

Windows Mixed Reality Motion Controller Action Assignment

Developers will also be able to specify custom prefabs or models to render in place of the default controller model.

Handling Input

Not much here has changed from the original HoloToolkit when it comes to handling input, save for the addition of the Input Action. Developers write components that consume input by implementing specific input handler interfaces and assigning the actions to listen for in the inspector.

Example of assigning an action in an input handler

We’ve included a BaseInputHandler class to inherit from for simple setup and ease of use. This enables developers to specify if focus (gaze) is required for the input, and handles registering the GameObject with the Input Manager.
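Putting that together, a minimal handler might look like the sketch below (namespaces and event-data member names shifted between early releases, so treat the exact signatures as assumptions):

```csharp
using UnityEngine;

// Sketch of an input handler: inherits BaseInputHandler for registration
// and focus handling, and implements the input handler interface.
public class GrabHandler : BaseInputHandler, IMixedRealityInputHandler
{
    // Assign the Input Action to listen for in the inspector.
    [SerializeField]
    private MixedRealityInputAction grabAction = MixedRealityInputAction.None;

    public void OnInputDown(InputEventData eventData)
    {
        if (eventData.MixedRealityInputAction == grabAction)
        {
            Debug.Log("Grab started.");
        }
    }

    public void OnInputUp(InputEventData eventData)
    {
        if (eventData.MixedRealityInputAction == grabAction)
        {
            Debug.Log("Grab ended.");
        }
    }
}
```

Because the component compares against an action rather than a button or key code, the same handler works unchanged across every controller mapped in the controller profiles.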

The pointer profile with a default and teleport pointer

Pointers!

Any motion controller can have any number of pointers registered to it when the controller is recognized and set up by the Input System. The pointer configuration profile enables developers to set up pointers based on a specific controller type (or all of them if none is selected), which hand the pointer should attach itself to, and its prefab reference.

Setting up custom pointers

The toolkit comes with two basic pointers to choose from, a simple line pointer and a parabolic teleport pointer, but let’s get creative and make our own! So what does it take?

Example of a simple line data provider

Lines

Line data providers are specialized components that help define what a line should look like and how it will behave under different circumstances. Coupled with Solvers, such as gravity, to apply distortion and physics, lines are powerful tools at the developer’s disposal.

Example of line renderer settings

Line renderers take the line data and apply graphics to it so it can be viewed in the scene and at run time. There are a few different options to choose from, ranging from Unity’s default line renderer to custom meshes and particles.

Example of line pointer settings

Last, developers will need to add a pointer component to their prefab. There are currently three to choose from: a Line Pointer, a Teleport Pointer, and a Parabolic Teleport Pointer.

Each has its own set of requirements, such as specific line renderers, color options, input actions, and controller synchronization settings.

Creating your own pointer components is also possible using the IMixedRealityPointer interface.
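In practice, rather than implementing every member of `IMixedRealityPointer` from scratch, it’s often easier to extend one of the shipped pointer classes; a bare sketch (class and member names assumed):

```csharp
// Sketch: start from the shipped line pointer and override only the
// behavior you want to change. Implementing IMixedRealityPointer from
// scratch requires satisfying its full member list (id, name, rays,
// focus result, and so on).
public class WobblyLinePointer : LinePointer
{
    // Override the pointer's update/query hooks here to bend the line,
    // change its target filtering, and so on.
}
```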

Speech Commands profile

Speech Commands

Any keyword a user speaks will raise a specific Input Action in the developer’s application. Keywords can also have a corresponding key press that triggers the same Input Action. A minimum confidence level can also be specified to help filter out uncertain results.
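On the receiving end, a component can respond to recognized keywords through the speech handler interface; a sketch (interface and event-data names are assumptions based on later toolkit releases):

```csharp
using UnityEngine;

// Sketch of a speech command consumer. The event data carries the
// recognized keyword, its confidence, and the mapped Input Action.
public class SpeechResponder : BaseInputHandler, IMixedRealitySpeechHandler
{
    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        Debug.Log("Speech command recognized.");
    }
}
```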

Boundaries

The last configuration profile I’d like to touch on is the Boundary Visualization profile. This profile gives developers greater creative freedom to customize and apply specific themes to their room-scale experiences without sacrificing safety.

The Boundary Visualization Profile

That’s only the beginning!

This is just the beginning of the new Mixed Reality Toolkit. There is still a way to go to reach full feature parity with the old HoloToolkit, and even more to do with all the new features we have planned. We’re currently adding shared experience capabilities, improving spatial awareness, and implementing even more cross-platform capabilities for all the Mixed Reality <AR / XR / VR> devices we can get our hands on. We welcome your feedback, critiques, and contributions as we move ever closer to the initial release.

If you’d like to get involved and help steer the direction of this amazing tool, please join us on GitHub — Mixed Reality Toolkit for Unity.