At Archilogic last year, we embarked on a project to highlight some of the design automation tools we’ve been building and explore new ways to use our 3d.io platform.

Today we launched the first iteration of that, called Homestory. The idea is simple: an iOS app that uses Augmented Reality and machine learning to create suggestions for furniture layouts that complement a room’s available space, and helps people discover something new.

Under the hood we leverage Apple’s awesome ARKit tools, but what might surprise you is that much of the application was built with React Native and JavaScript, which helped us stay agile and ship quickly with a team of three.

Here’s how we went about that, and why we chose to write Homestory using web tools rather than Apple’s native languages.

Homestaging AI

An important part of what we set out to do is make useful suggestions for a person’s home, but to do that we need to know about the layout of their room.

The big challenge was finding a way to actually understand a space. For a long time, the main input to our algorithm was 2D pixel-based floor maps. Unfortunately, the average person doesn’t have a floor plan of their home or office lying around, so we decided that it should be possible for a user to map out a space with a phone.

By understanding a floor plan, we’re able to make meaningful suggestions about how furniture could be arranged in the space. In Homestory, we abstract the process of mapping the floor plan into something user friendly that requires only the phone’s camera, and create the floor plan from that data.

From there, we draw from tens of thousands of data points we’ve gathered from professional architects and designers, fed into an algorithm that’s able to understand the important factors that go into creating a beautiful space.

The most important part of this process was creating what we call ‘home staging automation’ that works even for small rooms. That meant massively diversifying the input data set and increasing the potential density of furniture the algorithm could handle, before we even began building the AR experience.

Once we had that in place, it was much easier to create recommendations because we could understand the full spectrum of how a potential space can be used, and what would fit into it in a way that adheres to those architectural principles.
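As a toy illustration (this is not our actual model, which is trained on professional layouts), even the simplest recommendation must respect basic constraints like furniture density relative to the available floor area:

```javascript
// Toy sketch only: checks whether a set of furniture pieces fits a room
// at a sensible density. The real system learns such constraints from
// tens of thousands of professionally designed layouts.
function fitsRoom(room, pieces, maxDensity = 0.4) {
  // total footprint of the suggested furniture, in square metres
  const used = pieces.reduce((sum, p) => sum + p.width * p.depth, 0);
  // reject layouts that cover more than maxDensity of the floor
  return used / (room.width * room.depth) <= maxDensity;
}

const room = { width: 4, depth: 3 }; // a small 12 m² room
const pieces = [
  { name: 'sofa', width: 2, depth: 0.9 },
  { name: 'table', width: 1.2, depth: 0.6 },
];
fitsRoom(room, pieces); // this sparse layout fits
```

The real algorithm layers many more architectural principles (clearance, walkways, relationships between pieces) on top of simple area checks like this one.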

Why React Native

The first iterations of Homestory involved a 2D/3D drawing tool built with SVG and the fantastic A-frame library, which allowed users to map out a space by drawing the shape and correcting it with real measurements. Huge drawbacks to this were poor accuracy and some heavy UX complications.

Enter ARKit. Three features of ARKit, which only hit the market in late 2017, made it apparent that the framework was a game changer for accurately understanding space: feature detection, stable tracking, and accurate measuring.

Just testing…

A big question was then whether or not we could use our familiar tooling from the web alongside ARKit. Essentially, we wanted to know if it would be possible to combine React Native and ARKit to build the perfect pipeline.

React Native provides an easy escape hatch to leverage native device APIs, and the community had already introduced several ways to combine React Native with ARKit: React-native-ARKit, Viro and Expo.

While there are strengths to all of these approaches, we ended up adopting and sponsoring the react-native-arkit project. First, because it gave us the most control over a new and changing API, allowing us to ensure it was up to date but stable enough to use.

This was crucial as we experimented and sought to understand the limits of ARKit, and it allowed us to implement support for missing features as we needed them.

Second, one of the most attractive components of iOS development is SceneKit, a declarative 3D API. SceneKit provides a DOM-like scene graph — a natural fit for React. Composing SceneKit nodes in React Native feels just like writing A-frame, with the added performance advantage of using Apple’s Metal rendering engine under the hood. The obvious disadvantage here is that this approach is iOS only.

Setting up a simple react-native-arkit scene feels like setting up any other React project:

A simple react-native-arkit scene loading a collada model and geometric primitives
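A minimal sketch of such a scene, assuming the react-native-arkit API as it was around early 2018 (component and prop names may have changed since, so treat this as illustrative):

```jsx
import React from 'react';
import { ARKit } from 'react-native-arkit';

export default function SimpleScene() {
  return (
    <ARKit
      style={{ flex: 1 }}
      planeDetection={ARKit.ARPlaneDetection.Horizontal}
      lightEstimationEnabled
    >
      {/* a geometric primitive, positioned half a metre in front of the camera */}
      <ARKit.Box
        position={{ x: 0, y: 0, z: -0.5 }}
        shape={{ width: 0.1, height: 0.1, length: 0.1, chamfer: 0.01 }}
      />
      {/* a collada model bundled with the app */}
      <ARKit.Model
        position={{ x: 0.2, y: 0, z: -0.5 }}
        model={{ file: 'chair.dae', scale: 0.01 }}
      />
    </ARKit>
  );
}
```

Each child component maps to a SceneKit node, so the JSX tree mirrors the scene graph directly.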

Challenge 1: Retrieving furniture from the network

For us, one of the biggest drawbacks of native iOS development to date has been retrieving 3D data that is not known at compile time.

While 3d.io provided us with optimized furniture models from its API, we needed them in a format accepted by iOS. Using another API from 3d.io, the convert API, we converted our library to collada, which should have first class support in SceneKit, but to our frustration that wasn’t enough.

SceneKit-compatible collada files need to be a special ‘optimized collada’, so we custom-built a pipeline for converting files to the optimized format needed by iOS. We hope in the near future we can move entirely to glTF thanks to community projects like this one, but for now we’ve found a way to make things work.

Here's a simplified gist of what the entire process looked like, using 3dio-js from node.js:
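A rough sketch of that conversion step follows; the 3dio-js call names and parameters here are assumptions for illustration, not the documented API, so consult the 3d.io docs for the real calls:

```javascript
// Illustrative only: exact 3dio-js method names and params may differ.
const io3d = require('3dio');

async function toOptimizedCollada(furnitureId) {
  // look up the furniture item in the 3d.io library
  const info = await io3d.furniture.getInfo(furnitureId);

  // ask the 3d.io convert API for a SceneKit-friendly collada file
  // ('convertFile' and its options are hypothetical names)
  const result = await io3d.utils.services.call('convertFile', {
    fileUrl: info.fileUrl,
    targetFormat: 'collada-optimized',
  });

  // result.url now points to a file iOS can load at runtime
  return result.url;
}
```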

Once you have these optimized files saved to the network somewhere you can use them directly in an ARKit app, without ever needing to open them in Xcode. Here's an example implementation using react-native-arkit:

Loading a 3d file into ARKit at runtime with react-native-arkit
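In sketch form, runtime loading amounts to pointing `ARKit.Model` at a remote URL (the URL and scale below are placeholders, and the prop shape follows the react-native-arkit API as of early 2018):

```jsx
import React from 'react';
import { ARKit } from 'react-native-arkit';

// hypothetical URL of an optimized collada file produced by our pipeline
const MODEL_URL = 'https://example.com/models/sofa-optimized.dae';

export default function RemoteFurniture() {
  return (
    <ARKit style={{ flex: 1 }}>
      {/* the model is fetched over the network, not bundled in the app */}
      <ARKit.Model
        position={{ x: 0, y: -0.5, z: -1 }}
        model={{ file: MODEL_URL, scale: 1 }}
      />
    </ARKit>
  );
}
```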

Challenge 2: Debugging

One of the more frustrating things about developing an AR app was debugging.

Initially we had to scan the floor and physically move around the room to inspect the scene. Co-workers in our office constantly felt like they were being filmed, and we had to get out of our seats to test even basic bug fixes.

A huge improvement to this process was building a simple fallback to orbit / touch controls so the scene could be viewed and inspected in the iOS simulator.
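Conceptually the fallback looked something like this; `OrbitScene` and the capability check are hypothetical stand-ins, since react-native-arkit doesn’t ship such a fallback out of the box:

```jsx
import React from 'react';
import { ARKit } from 'react-native-arkit';
// hypothetical: a SceneKit view with orbit / touch camera controls,
// used when real AR tracking isn't available (e.g. in the simulator)
import OrbitScene from './OrbitScene';

export default function Scene({ children, arAvailable }) {
  // arAvailable would come from a device capability check;
  // the simulator has no camera, so it always takes the fallback path
  return arAvailable ? (
    <ARKit style={{ flex: 1 }}>{children}</ARKit>
  ) : (
    <OrbitScene style={{ flex: 1 }}>{children}</OrbitScene>
  );
}
```

Because both branches render the same scene-graph children, the rest of the app doesn’t need to know which mode it is in.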

In the future this could be extended to let people whose devices don’t technically support AR still play with the app.

One of the magical things about combining React Native with ARKit was that we could avoid recompiling the app just to see small changes. Hot reloading in AR feels very futuristic and we use this on a regular basis to quickly test tweaks to the app without getting up.

The Future

While Unity has become the default path for building AR apps, an increasing number of apps need only a sprinkling of AR. React Native provides the perfect way to do this, and the performance you can achieve with this approach may surprise you: we were able to reduce overheads, become much more agile, and ship in a matter of months with a very small team.

To date, we are very happy with what we’ve been able to build with this approach. However, the lack of Android ARCore support is a massive concern and something we would love to address in the future to welcome more users into Homestory. Contributions to react-native-arkit are always welcome if you want to help us with this challenge.

For now, the best way to support both platforms is to use a platform like Unity, Viro or Expo, but it’s clear that this isn’t always the right approach. Google is, however, committed to pushing WebXR forward, so perhaps the best way to achieve this would be to create a bridge to Android support via pure web technologies.

Facebook has big AR/VR news to share next month at its annual developer conference. No doubt this will affect the space too, given that Facebook has invested heavily in React Native and plans lower-powered portable devices based on mobile chipsets.

We’re excited that web technology has reached the point where we can quickly and easily build a complex application using familiar languages but still see fantastic performance.

Over the coming weeks, we plan to share more about our process and how we build for AR with React Native so you can do the same!


Thanks to everyone at Archilogic who supported this project, as well as the incredible team at Panter who we collaborated with to make it happen.