This post was originally published on the Emerging Experiences blog here.

In my role as a developer on the Razorfish Emerging Experiences team, I’m fortunate to get my hands on a ton of new tech, platforms, and devices, including the Microsoft HoloLens. And the nature of our practice requires new thinking that blends development with design. Since our wave-one delivery of the HoloLens, I’ve been brushing up on my Unity development skills while learning the new Windows Holographic SDK. Here are some initial thoughts.

Unity + Visual Studio

In the new release of Unity, the integration with Visual Studio has been revamped and is much smoother than in previous versions. As someone who spends far more time in a text editor than in the 3D designer, this is a welcome improvement. Builds are fast and generate a complete Visual Studio solution for you. There are a few known issues with Visual Studio, but for the most part the workarounds aren’t too bad. I’ve deployed both via USB and over WiFi; WiFi deployment is fast and really convenient. I’ve gone so far as writing code and deploying new builds while wearing the device, but doing so is hard on my eyes. Remote debugging is also possible, though it bogs the system down a bit. It’s definitely usable: you can hit breakpoints no problem (although it occasionally seems to crash the device).

Yo dawg, we heard you like computers so we stuck a computer on your head so you can computer while you computer

The Holographic SDK

I’ve taken a cherry-picking approach to learning the SDK, so I haven’t made it through all the tutorials or investigated all the APIs. I have, however, managed to learn how to move holograms with my head, place them on spatially-mapped surfaces, and create custom voice commands. Microsoft’s HoloToolkit streamlines much of this work, but I had to dig some of the scripts out of the Holographic Academy tutorials. I’m guessing the tutorials were made by multiple developers, since each one takes a slightly different approach. All in all, it was easy to get things working, and the docs aren’t bad; there’s just a lot to read through.
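To give a sense of what the gaze-based placement looks like in practice, here is a minimal sketch of a Unity script that moves a hologram with your head and snaps it onto spatially-mapped surfaces. It assumes the spatial mapping mesh lives on a layer named "SpatialMapping" (a placeholder; the HoloToolkit prefabs configure their own layer), and it is a simplified illustration rather than the exact HoloToolkit implementation.

```csharp
using UnityEngine;

// Sketch: follow the user's gaze and place this hologram on the nearest
// spatially-mapped surface. Layer name "SpatialMapping" is an assumption.
public class GazePlacement : MonoBehaviour
{
    [SerializeField] private float maxDistance = 5f;
    private int spatialMappingMask;

    void Start()
    {
        spatialMappingMask = 1 << LayerMask.NameToLayer("SpatialMapping");
    }

    void Update()
    {
        // On HoloLens, the main camera tracks the user's head.
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, maxDistance, spatialMappingMask))
        {
            // Snap the hologram to the surface the user is looking at,
            // oriented flush against the surface normal.
            transform.position = hit.point;
            transform.rotation = Quaternion.LookRotation(-hit.normal, Vector3.up);
        }
        else
        {
            // No surface in range: float the hologram a fixed distance ahead.
            transform.position = head.position + head.forward * maxDistance;
        }
    }
}
```

Attach this to any hologram and it will track your gaze each frame; a real implementation would add an air-tap or voice command to confirm placement and stop updating.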

My take on a holographic “Menu”

Throw design rules out the window

Something that has become very apparent in digging through the documentation is that Microsoft is providing “best practices” relatively sparingly. There are some recommendations about camera FOV, some user flows, and a few things I’ve seen about how to implement voice commands that make sense. But for the most part, UX and UI are a green field. This may be because Microsoft is racing to get docs written and just hasn’t posted them yet, but I believe they want it this way. There are no “best practices” because developing and designing for HoloLens requires a massive shift in thinking. The AR interaction and the head-mounted experience are fundamentally different from anything I’ve ever used, let alone written code for. I’m too young to remember, but I imagine this is sort of how it felt the first time there was a touchscreen on a cell phone. It really does feel like we are making a lot of the rules as we go, so cross-disciplinary, radically collaborative, and iterative approaches are going to win out. I’ve been experimenting with more traditional design paradigms and using the “fail fast” approach to create new modes of interaction: if something’s not working, try a different approach. Simple as that. The headset becomes your design tool because you don’t quite know what is going to work until you try it out.
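For the voice commands mentioned above, Unity ships a `KeywordRecognizer` in `UnityEngine.Windows.Speech` that works on HoloLens. Here is a small sketch of wiring custom phrases to actions; the keyword strings and the logged actions are placeholders for illustration, not commands from any Microsoft sample.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Sketch: map spoken phrases to actions with Unity's KeywordRecognizer.
// The phrases below are hypothetical examples.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, Action> commands = new Dictionary<string, Action>();

    void Start()
    {
        commands.Add("place hologram", () => Debug.Log("placing hologram"));
        commands.Add("reset scene", () => Debug.Log("resetting scene"));

        // The recognizer listens only for the exact phrases it is given.
        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args =>
        {
            Action action;
            if (commands.TryGetValue(args.text, out action))
            {
                action();
            }
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null && recognizer.IsRunning)
        {
            recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```

One reason I like this pattern: adding a new interaction is one dictionary entry, with no menu geometry to lay out in 3D space.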

For this use case, I prefer the voice commands to a traditional menu system

Big and exciting challenges

Unity is a beast of an environment, and much of my time has been spent on workflow and scripting. It’s a much different mentality than my previous work (my 3D work has mostly been in openFrameworks and Cinder, which provide a render loop but are not full game engines by any means), so expect a bit of a learning curve. But there is so much to learn and explore, and I’m excited to see what’s next.