I am deeply convinced that augmented and virtual reality will be the primary way we work, play, and connect for the next 50 years, just as personal computers and smartphones have changed the world for the last 45 years and counting. Over the years, I’ve described that vision many times, including here, here, and here. It’s a dual vision — of stylish AR glasses that let you conjure virtual objects and devices into existence at will; connect with others instantly; enhance your perception, memory, and cognition; and give you a truly personalized assistant; and of a VR headset that can teleport you anywhere to be with anyone. In those talks, I’ve described the technology that needs to be developed in order to get to that future, but I haven’t talked about how that technology is being developed, or about the people who are doing it.

Today that changes: Facebook Reality Labs is kicking off a yearlong series of blog posts that will take you inside our labs and show you how we’re building the future. Each post will highlight a different FRL team that’s trailblazing a new technology to get us to that future.

Autumn Trimble is scanned by a highly customized system of cameras and microphones in Facebook Reality Labs’ Pittsburgh office. Creating lifelike avatars currently requires capturing large quantities of high-quality audio and video of research participants in FRL’s lab. This data is used to train AI systems that may one day quickly and easily build your Codec Avatar from just a few snaps or videos.

I expect these blog posts to be markers on the journey to the AR/VR future for a couple of reasons. First, there are many people who are skeptical about AR and VR, and we hope to change that perception by sharing why we believe these platforms are in fact on the path to changing the world. Second, given the world-changing impact we expect AR and VR to have, we’d love to see a broad discussion of how we as a society want to incorporate the power of these technologies into our daily lives, and by being open about what we see coming before too long, we hope to help jump-start that conversation.

Over the coming months, you’ll see deep dives into optics and displays, computer vision, audio, graphics, haptic interaction, brain-computer interfaces, and eye/hand/face/body tracking.

You can find the first post here, describing the Codec Avatar research done by the FRL Pittsburgh team, which has produced some of the most compelling real-time avatars ever made. New installments will appear on a regular cadence, so please check back from time to time to see what’s new.

It takes a critical mass of exceptional people across a remarkably broad range of expertise to build the future of AR and VR, and it is a great pleasure, five years into our AR/VR research, to share those people and their work with you.

— Michael Abrash, Chief Scientist, Facebook Reality Labs