A Matterport Space, or 3D Space, is a complete three-dimensional representation of a real-world location that lets you “walk” through it in VR as if you were there.

Anyone involved at the intersection of real estate and Virtual Reality has probably experienced an immersive walkthrough made with Matterport technology.

Matt Bell founded Matterport after the release of the Microsoft Kinect, having seen its potential for creating 3D environments that give someone an immersive feeling of being in a space. Matt and I discuss what it was like founding the company, building the prototype, and looking ahead to Google’s Project Tango.

Matt Bell

How did Matterport come to be?

MB: Back in 2011, my co-founder Dave and I were very interested in the idea of creating the 3D equivalent of the camera. In the same way that photography became an instant, automated process for capturing moments, we wanted to do the same thing for entire 3D spaces, instead of having several 3D artists spend weeks creating a 3D model.

We built the software to process the raw 3D data created by the sensor, so that we could have a real-time 3D reconstruction system, and actually get that to work reliably enough that people could use it to build 3D models of buildings.

So it was very much a 3D tech startup, and we had to bring in a lot of computer vision expertise, our own and others’, to make it happen.

tl;dr — Matt and his co-founder Dave founded the company to map existing 3D spaces quickly and efficiently.

When you guys got started, did you have VR in mind, was that part of the mission?

MB: Funny enough, we were founded in 2011 and so VR was not on our radar. We were mainly focused on the web and mobile. When the first Oculus DK1 (Developer Kit 1 — the first iteration of the Oculus headset) came out, I remember getting my demo of it, and I realized that we had basically created the perfect way of bringing real spaces into VR.

We very quickly exported some of the 3D models that our camera was already creating so that you could walk around them in VR. We had a handful of really interesting and fun demos that we could show off to people even when VR was very nascent.

tl;dr — Consumer VR wasn’t even on the radar when Matt founded Matterport.

It sounds like you guys had a pivot somewhere around 2011–2012 when you had that realization.

MB: Yeah, I wouldn’t describe it as a pivot as much as an added capability that we could deliver. The content we create — the 3D models created by our cameras — can be published on different mediums and different display types.

So VR is the ideal way of experiencing that content because of the maximum immersion you get. But you can also experience the models on the web and on mobile via WebGL in the browser. That’s actually really good because, although the VR headset penetration numbers are impressive and growing rapidly, there are over a billion smartphones out there, and it’s going to take a few years for VR to reach that level of penetration.

tl;dr — VR is an added capability that Matterport offers, not its core competency.

What was it like building your first prototype camera?

MB: Our very first prototype was literally a Microsoft Kinect plugged into my laptop, and we would walk around the house trailing a power cord. It was good for a proof of concept, but we quickly evolved from that.

We quickly ended up working with PrimeSense, the Israeli company that developed the guts of the Kinect, and incorporated three of their sensors into a camera. What’s nice about that is we can pair them with an embedded system and batteries that give you 10+ hours of scanning time. You then just carry the camera around the space and control it with an iPad, which provides the real-time display and alignment as you’re scanning.

tl;dr — Their first prototype was actually a Microsoft Kinect plugged into a laptop.

Kinect Technology has found many uses

Does your camera have internal navigation where it maps out the space as well?

MB: It’s taking in color and depth data, and as you move the camera from spot to spot, it’s stitching all of that data together into a coherent 3D model. So as you’re capturing a space, you’ll see it come together in real time.

And to give you an idea of how fast it is, you can scan a typical 3-bedroom house in 30–45 minutes. The alignment is all automated and then you just upload it to the cloud. Fairly quickly thereafter, you get the final results back which you can then view on the web, mobile or VR.

tl;dr — The camera has internal systems that map out its location within a structure. No editing necessary; it’s all uploaded to the cloud afterwards.
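The pipeline Matt describes — per-frame color and depth capture, then automatic alignment into one coherent model — can be sketched in miniature. This is only an illustrative back-projection-and-merge, assuming pinhole camera intrinsics and already-known camera poses (in practice, estimating those poses is the hard part Matterport’s alignment software solves); none of these function names come from Matterport.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D points in camera space,
    using pinhole intrinsics (fx, fy = focal lengths; cx, cy = principal point)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def stitch(frames, poses):
    """Merge per-frame point clouds into one world-space cloud.
    Each pose is a 4x4 camera-to-world transform (assumed known here)."""
    clouds = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        clouds.append((homo @ T.T)[:, :3])               # apply pose
    return np.vstack(clouds)
```

A real scanner would also carry per-point color from the RGB image and estimate each pose by registering overlapping frames; this sketch only shows the geometric merge step.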