Today, we’re highlighting Currents, a narrative experiment from Venice, Calif.-based design studio Active Theory. Currents places users inside ocean currents so that the force of the sea becomes palpable. An awe-inspiring immersive experience, it also reminds us that humans are a small part of larger global forces.

We sat down with Active Theory’s interactive director and co-founder, Michael Anthony, to learn more about the studio’s collaborative, iterative design approach and its latest creation. While Currents showcases an engaging learning experience for consumers, the way Active Theory is developing for spatial computing signals interesting things for its client work in 2020.

What is Active Theory, and how did you all get into XR?

Active Theory is a digital experience studio that tells stories with stunning 3D graphics. Our tightly integrated team of designers and developers works with tools we’ve built in-house, which consistently deliver award-winning work through performance, efficiency, and attention to detail. We started by making experiences on the web, helping people fly around Hogwarts castle and adding interactive graphics to documentaries. In recent years, we’ve begun to branch out, taking the same workflow we use on the web to exciting new platforms such as Magic Leap. While 3D web experiences are incredible, being able to take our creations and place them in physical spaces opens up a new realm of creative possibilities for us and our clients.

What’s the origin story on the Currents project?

We often create new experiments in order to play with technology that we’ve not yet been able to deploy in client work. The purpose of Currents was to build a narrative-driven experience to explore a topic of cultural significance on the web, VR, and Magic Leap.

You’re using WebGL to drive all of your native environment projects. Can you expand on Aura and Hydra, your in-house products?

Hydra is a JavaScript framework which provides the rendering engine and browser-based GUI to create experiences such as Currents. Aura is a native environment which runs on desktop, native mobile AR, and Magic Leap, in order to bridge Hydra’s JavaScript code to the native platforms. This means that a WebGL site becomes a native OpenGL app instantly.

The particle simulation uses one of Hydra’s tools, Proton, which lets us design particle systems by layering effects, much like a Photoshop file. The complex current motion is calculated on the GPU with octaves of curl noise. Because the simulation runs entirely on the GPU, the only limit on particle count is the power of the GPU itself. On Magic Leap, we’re rendering hundreds of thousands of particles.
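To make the octaves-of-curl-noise idea concrete, here is a minimal CPU-side JavaScript sketch. This is not Active Theory’s implementation — their version runs in GLSL on the GPU, and the sine-based potential below is a stand-in for a real noise function — but it shows the core trick: taking the curl of a smooth potential field yields a divergence-free velocity field (so particles swirl without bunching up), and summing octaves at doubled frequency and halved amplitude layers fine detail over broad motion.

```javascript
// Hypothetical smooth 3D vector potential (placeholder for simplex/Perlin noise).
function potential(x, y, z) {
  return [
    Math.sin(y * 1.7 + z * 0.6),
    Math.sin(z * 1.3 + x * 0.8),
    Math.sin(x * 1.1 + y * 0.9),
  ];
}

// Curl of the potential via central differences. The resulting
// velocity field is divergence-free by construction.
function curl(x, y, z, eps = 1e-4) {
  // d(axis, i): partial derivative of potential component i along `axis`.
  const d = (axis, i) => {
    const hx = axis === 0 ? eps : 0;
    const hy = axis === 1 ? eps : 0;
    const hz = axis === 2 ? eps : 0;
    const a = potential(x + hx, y + hy, z + hz)[i];
    const b = potential(x - hx, y - hy, z - hz)[i];
    return (a - b) / (2 * eps);
  };
  return [
    d(1, 2) - d(2, 1), // dPz/dy - dPy/dz
    d(2, 0) - d(0, 2), // dPx/dz - dPz/dx
    d(0, 1) - d(1, 0), // dPy/dx - dPx/dy
  ];
}

// Sum several octaves: each octave doubles frequency and halves amplitude.
function curlNoise(x, y, z, octaves = 3) {
  let v = [0, 0, 0];
  let freq = 1;
  let amp = 1;
  for (let o = 0; o < octaves; o++) {
    const c = curl(x * freq, y * freq, z * freq);
    v = [v[0] + c[0] * amp, v[1] + c[1] * amp, v[2] + c[2] * amp];
    freq *= 2;
    amp *= 0.5;
  }
  return v;
}
```

In a GPU version, each particle’s position would be stored in a texture and advected by this velocity field in a fragment shader each frame, which is what makes hundreds of thousands of particles feasible.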

What challenges did you face targeting native Magic Leap development? How did you overcome them?

Magic Leap is the most exciting platform we target. In order to run an isolated JavaScript app on Magic Leap, we compile Node.js against the tools provided by the Lumin SDK. This is particularly challenging because Lumin is a new operating system, so we had to work through compilation step-by-step in order to get Node running successfully.

In future projects, what are the most interesting features of spatial computing you’re excited to explore using Aura and Hydra?

We’re particularly excited to explore cross-platform connected devices that are networked directly with RTC. Multiple people experiencing the same spatial content across different devices is just so compelling.

We’re working on a platform to enable shared, persistent experiences at live events and in public spaces. We see that as the next evolution of all the software we’ve been building. While Currents is a single-user experience, we’re working to build similar applications that run across devices, where multiple users see the same content at the same time on whatever devices they have available.
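As one illustration of the plumbing such a shared experience needs, here is a minimal sketch of a compact binary pose message that could be streamed between devices over a WebRTC data channel. The 32-byte layout, field names, and the `applyRemotePose` helper in the usage comment are all hypothetical — this is not Active Theory’s protocol, just an example of keeping per-frame updates small enough for an unreliable channel.

```javascript
// Hypothetical message layout: 32-bit user id, then position (x, y, z)
// and orientation quaternion (x, y, z, w) as 32-bit floats = 32 bytes.
const POSE_BYTES = 4 + 7 * 4;

function encodePose(userId, position, quaternion) {
  const buf = new ArrayBuffer(POSE_BYTES);
  const view = new DataView(buf);
  view.setUint32(0, userId);
  [...position, ...quaternion].forEach((v, i) => view.setFloat32(4 + i * 4, v));
  return buf;
}

function decodePose(buf) {
  const view = new DataView(buf);
  const f = (i) => view.getFloat32(4 + i * 4);
  return {
    userId: view.getUint32(0),
    position: [f(0), f(1), f(2)],
    quaternion: [f(3), f(4), f(5), f(6)],
  };
}

// Usage with the browser WebRTC API (unordered delivery suits per-frame poses):
//   const channel = peerConnection.createDataChannel("poses", { ordered: false });
//   channel.send(encodePose(7, headPosition, headQuaternion));
//   channel.onmessage = (e) => applyRemotePose(decodePose(e.data));
```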

How do you see Active Theory playing a part in the future of spatial computing?

We’re having fun innovating in this space from a few angles. Alongside our shared experience platform, we’re working toward computer vision in mobile browsers that enables native-quality spatial computing experiences with just the click of a link. And we’re always looking for opportunities, and for talented people who want to work in this space.