Every December, Adam Savage—star of the TV show MythBusters—releases a video reviewing his “favorite things” from the previous year. In 2018, one of his highlights was a set of Magic Leap augmented reality goggles. After duly noting the hype and backlash that have dogged the product, Savage describes an epiphany he had while trying on the headset at home, upstairs in his office. “I turned it on and I could hear a whale,” he says, “but I couldn’t see it. I’m looking around my office for it. And then it swims by my windows—on the outside of my building! So the glasses scanned my room and it knew that my windows were portals and it rendered the whale as if it were swimming down my street. I actually got choked up.” What Savage encountered on the other side of the glasses was a glimpse of the mirrorworld. The mirrorworld doesn’t yet fully exist, but it is coming. Someday soon, every place and thing in the real world—every street, lamppost, building, and room—will have its full-size digital twin in the mirrorworld. For now, only tiny patches of the mirrorworld are visible through AR headsets. Piece by piece, these virtual fragments are being stitched together to form a shared, persistent place that will parallel the real world. The author Jorge Luis Borges imagined a map exactly the same size as the territory it represented. “In time,” Borges wrote, “the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” We are now building such a 1:1 map of almost unimaginable scope, and this world will become the next great digital platform. Google Earth has long offered a hint of what this mirrorworld will look like. My friend Daniel Suarez is a best-selling science fiction author. In one sequence of his most recent book, Change Agent, a fugitive escapes along the coast of Malaysia. 
His descriptions of the roadside eateries and the landscape matched exactly what I had seen when I drove there recently, so I asked him when he’d made the trip. “Oh, I’ve never been to Malaysia,” he said, smiling sheepishly. “I have a computer with a set of three linked monitors, and I opened up Google Earth. Over several evenings I ‘drove’ along Malaysian highway AH18 in Street View.” Suarez—like Savage—was seeing a crude version of the mirrorworld. It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.” The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world. At first, the mirrorworld will appear to us as a high-resolution stratum of information overlaying the real world. We might see a virtual name tag hovering in front of people we’ve previously met. Perhaps a blue arrow showing us the right place to turn a corner. Or helpful annotations anchored to places of interest. (Unlike the dark, closed goggles of VR, AR glasses use see-through technology to insert virtual apparitions into the real world.) 
Eventually we’ll be able to search physical space as we might search a text—“find me all the places where a park bench faces sunrise along a river.” We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products. The mirrorworld will have its own quirks and surprises. Its curious dual nature, melding the real and the virtual, will enable now-unthinkable games and entertainment. Pokémon Go gives just a hint of this platform’s nearly unlimited capability for exploration. These examples are trivial and elementary, equivalent to our earliest, lame guesses of what the internet would be, just after it was born—fledgling CompuServe, early AOL. The real value of this work will emerge from the trillion unexpected combinations of all these primitive elements. The first big technology platform was the web, which digitized information, subjecting knowledge to the power of algorithms; it came to be dominated by Google. The second great platform was social media, running primarily on mobile phones. It digitized people and subjected human behavior and relationships to the power of algorithms, and it is ruled by Facebook and WeChat. We are now at the dawn of the third platform, which will digitize the rest of the world. On this platform, all things and places will be machine-readable, subject to the power of algorithms. Whoever dominates this grand third platform will be among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms are. Also, like its predecessors, this new platform will unleash the prosperity of thousands more companies in its ecosystem, and a million new ideas—and problems—that weren’t possible before machines could read the world.
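To make the idea of searching physical space like text a little more concrete, here is a toy sketch. Everything in it is invented for illustration, since no real mirrorworld query API exists yet: the `Bench` record, its fields, and the east-facing heuristic are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Bench:
    lat: float
    lon: float
    facing_deg: float   # compass bearing the bench faces (0 = north, 90 = east)
    near_river: bool

def faces_sunrise(bench: Bench, tolerance_deg: float = 45.0) -> bool:
    """True if the bench faces roughly east, toward the rising sun."""
    # Smallest angular distance between the bench's bearing and due east (90 degrees).
    diff = abs(((bench.facing_deg - 90.0) + 180.0) % 360.0 - 180.0)
    return diff <= tolerance_deg

# "Find me all the places where a park bench faces sunrise along a river."
benches = [
    Bench(40.80, -73.96, facing_deg=85.0, near_river=True),    # riverside, faces east
    Bench(40.78, -73.97, facing_deg=270.0, near_river=False),  # inland, faces west
]
matches = [b for b in benches if b.near_river and faces_sunrise(b)]
```

A real system would run such predicates over billions of scanned, machine-readable objects rather than a hand-built list, but the query itself could stay this declarative.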

Glimpses of the mirrorworld are all around us. Perhaps nothing has proved that the marriage of the virtual and the physical is irresistible better than Pokémon Go, a game that immerses obviously virtual characters in the toe-stubbing reality of the outdoors. When it launched in 2016, there was an almost audible “Aha, I get it!” as the entire world signed up to chase cartoon characters in their local parks. Pokémon Go’s alpha version of a mirrorworld has been embraced by hundreds of millions of players, in at least 153 countries. Niantic, the company that created Pokémon Go, was founded by John Hanke, who led the precursor to Google Earth. Today Niantic’s headquarters are housed on the second floor of the Ferry Building, along the piers in San Francisco. Wide floor-to-ceiling windows look out on the bay and to distant hills. The offices are overflowing with toys and puzzles, including an elaborate boat-themed escape room. Hanke says that despite the many other new possibilities being opened up by AR, Niantic will continue to focus on games and maps as the best way to harness this new technology. Gaming is where technology goes to incubate: “If you can solve a problem for a gamer, you can solve it for everyone else,” Hanke adds. But gaming isn’t the only context where shards of the mirrorworld are emerging. Microsoft, the other big contender in AR besides Magic Leap, has been producing its HoloLens AR devices since 2016. The HoloLens is a see-through visor mounted to a head strap. Once turned on and booted up, the HoloLens maps the room you’re in. You then use your hands to maneuver menus floating in front of you, choosing which apps or experiences to load. One choice is to hang virtual screens—as in laptop or TV screens—in front of you. Microsoft’s vision for the HoloLens is simple: It’s the office of the future. Wherever you are, you can insert as many of your screens as you want and work from there. 
According to the venture capital firm Emergence, “80 percent of the global workforce doesn’t have desks.” Some of these deskless workers are now wearing HoloLenses in warehouses and factories, building 3D models and receiving training. Recently Tesla filed for two patents for using AR in factory production. The logistics company Trimble makes a safety-certified hard hat with the HoloLens built in. In 2018 the US Army announced it was purchasing up to 100,000 upgraded models of the HoloLens headsets for a very nondesk job: to stay one step ahead of enemies on the battlefield and “increase lethality.” In fact, you are likely to put on AR glasses at work long before you put them on at home. (Even the much-maligned Google Glass headset is making quiet inroads in factories.) In the mirrorworld, everything will have a paired twin. NASA engineers pioneered this concept in the 1960s. By keeping a duplicate of any machine they sent into space, they could troubleshoot a malfunctioning component while its counterpart was thousands of miles away. These twins evolved into computer simulations—digital twins. General Electric, one of the world’s largest companies, manufactures hugely complex machines that can kill people if they fail: electric power generators, nuclear submarine reactors, refinery control systems, jet turbines. To design, build, and operate these vast contraptions, GE borrowed NASA’s trick: It started creating a digital twin of each machine. Jet turbine serial number E174, for example, could have a corresponding E174 doppelgänger. Each of its parts can be spatially represented in three dimensions and arranged in its corresponding virtual location. In the near future, such digital twins could essentially become dynamic digital simulations of the engine. But this full-size, 3D digital twin is more than a spreadsheet. 
Embodied with volume, size, and texture, it acts like an avatar. In 2016, GE recast itself as a “digital industrial company,” which it defines as “the merging of the physical and digital worlds.” Which is another way of saying it is building the mirrorworld. Digital twins have already improved the reliability of industrial processes that use GE’s machines, like refining oil or manufacturing appliances. Microsoft, for its part, has expanded the notion of digital twins from objects to whole systems. The company is using AI “to build an immersive virtual replica of what is happening across the entire factory floor.” What better way to troubleshoot a giant six-axis robotic mill than by overlaying the machine with its same-sized virtual twin, visible with AR gear? The repair technician sees the virtual ghost shimmer over the real. She studies the virtual overlay to see the likely faulty parts highlighted on the actual parts. An expert back at HQ can share the repair technician’s views in AR and guide her hands as she works on the real parts. Eventually, everything will have a digital twin. This is happening faster than you may think. The home goods retailer Wayfair displays many millions of products in its online home-furnishing catalog, but not all of the pictures are taken in a photo studio. Instead, Wayfair found it was cheaper to create a three-dimensional, photo-realistic computer model for each item. You have to look very closely at an image of a kitchen mixer on Wayfair’s site to discern its actual virtualness. When you flick through the company’s website today, you are getting a peek into the mirrorworld. Wayfair is now setting these digital objects loose in the wild. “We want you to shop for your home, from your home,” says Wayfair cofounder Steve Conine. It has released an AR app that uses a phone’s camera to create a digital version of an interior. The app can then place a 3D object in a room and keep it anchored even as you move. 
With one eye on your phone, you can walk around virtual furniture, creating the illusion of a three-dimensional setting. You can then place a virtual sofa in your den, try it out in different spots in the room, and swap fabric patterns. What you see is very close to what you get. When shoppers try such a service at home, they are “11 times more likely to buy,” according to Sally Huang, the lead of Houzz’s similar AR app. This is what Ori Inbar, a VC investor in AR, calls “moving the internet off screens into the real world.” For the mirrorworld to come fully online, we don’t just need everything to have a digital twin; we also need to build a 3D model of physical reality in which to place those twins. Consumers will largely do this themselves: When someone gazes at a scene through a device, particularly wearable glasses, tiny embedded cameras looking out will map what they see. The cameras only capture sheets of pixels, which don’t mean much. But artificial intelligence—embedded in the device, in the cloud, or both—will make sense of those pixels; it will pinpoint where you are in a place, at the very same time that it’s assessing what is in that place. The technical term for this is SLAM—simultaneous localization and mapping—and it’s happening now. For example, the startup 6D.ai built a platform for developing AR apps that can discern large objects in real time. If I use one of these apps to take a picture of a street, it recognizes each car as a separate car-object, each streetlight as a tall object different from the nearby tree-objects, and the storefronts as planar things behind the cars—dividing the world into a meaningful order. And that order will be continuous and connected. In the mirrorworld, objects will exist in relation to other things. Digital windows will exist in the context of a digital wall. 
Rather than connections generated by chips and bandwidth, the connections will be contextual, generated by AIs. The mirrorworld, then, also creates the long-heralded internet of things. Another app on my phone, Google Lens, can also see discrete objects. It is already smart enough to identify the breed of a dog, the design of a shirt, or the species of a plant. Soon these functions will integrate. When you look around your living room with magic glasses, the system will be taking it all in piece by piece, informing you that here is a framed etching on the wall and there is four-colored wallpaper, and that this is a vase of white roses and this is an antique Persian carpet, and over here is a nice empty spot where your new sofa could go. Then it will say, based on the colors and styles of the furniture you already have in the room, we recommend this color and style of sofa. You’ll like it. May we suggest this cool lamp as well? Augmented reality is the technology underpinning the mirrorworld; it is the awkward newborn that will grow into a giant. “Mirrorworlds immerse you without removing you from the space. You are still present, but on a different plane of reality. Think Frodo when he puts on the One Ring. Rather than cutting you off from the world, they form a new connection to it,” writes Keiichi Matsuda, former creative director for Leap Motion, a company that develops hand-gesture technology for AR. The full blossoming of the mirrorworld is waiting for cheap, always-on wearable glasses. Speculation has been rising that one of the largest tech companies may be developing just such a product. Apple has been on an AR hiring spree and recently acquired a startup called Akonia Holographics that specializes in thin, transparent “smart glass” lenses. “Augmented reality is going to change everything,” Apple CEO Tim Cook said during an earnings call in late 2017. 
“I think it’s profound, and I think Apple is in a really unique position to lead in this area.” But you don’t need to use AR glasses; you can engage using almost any kind of device. You can kind of do this today with Google’s Pixel phone, but without the convincing presence that you get with 3D visors. Even now, wearables like watches or smart clothes can detect the proto-mirrorworld and interact with it.
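The SLAM process described earlier, pinpointing where you are while assessing what is around you, is far too involved to reproduce here, but its localization half can be caricatured in a few lines. The sketch below assumes away the hard parts: real systems work from raw pixels and noisy geometry, and they build the landmark map at the same time as they use it, which is precisely what makes SLAM "simultaneous."

```python
def locate(landmarks: dict[str, tuple[float, float]],
           observations: dict[str, tuple[float, float]]) -> tuple[float, float]:
    """Estimate the camera's (x, y) position from a known map.

    Each observation says "landmark L appears at offset (dx, dy) from me",
    so one estimate of the camera position is landmark - offset; averaging
    the estimates damps measurement noise.
    """
    estimates = []
    for name, (dx, dy) in observations.items():
        lx, ly = landmarks[name]
        estimates.append((lx - dx, ly - dy))
    n = len(estimates)
    return (sum(x for x, _ in estimates) / n,
            sum(y for _, y in estimates) / n)

# The streetlight sits at (10, 0) and the tree at (0, 5) on the shared map;
# the camera sees them at offsets (8, -1) and (-2, 4), so it must be near (2, 1).
pose = locate({"streetlight": (10.0, 0.0), "tree": (0.0, 5.0)},
              {"streetlight": (8.0, -1.0), "tree": (-2.0, 4.0)})
```

The landmark names and coordinates here are invented; the point is only the shape of the computation, recovering your own position from things whose positions the mirrorworld already knows.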

Everything connected to the internet will be connected to the mirrorworld. And anything connected to the mirrorworld will see and be seen by everything else in this interconnected environment. Watches will detect chairs; chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them. The rise of a massive mirrorworld will rely in part on a fundamental shift underway right now, away from phone-centric life and toward a technology that is two centuries old: the camera. To recreate a map that is as big as the globe—in 3D, no less—you need to photograph all places and things from every possible angle, all the time, which means you need to have a planet full of cameras that are always on. We are making that distributed, all-seeing camera network by reducing cameras to pinpoint electric eyes that can be placed anywhere and everywhere. Like computer chips before them, cameras are becoming better, cheaper, and smaller every year. There may be two in your phone already and a couple more in your car. There is one in my doorbell. Most of these newer artificial eyes will be right in front of our own eyes, on glasses or in contacts, so that wherever we humans look, that scene will be captured. The heavy atoms in cameras will continue to be replaced with bits of weightless software, shrinking them down to microscopic dots scanning the environment 24 hours a day. The mirrorworld will be a world governed by light rays zipping around, coming into cameras, leaving displays, entering eyes, a never-ending stream of photons painting forms that we walk through and visible ghosts that we touch. The laws of light will govern what is possible. New technologies bestow new superpowers. We gained super speed with jet planes, super healing powers with antibiotics, super hearing with the radio. The mirrorworld promises super vision. 
We’ll have a type of x-ray vision able to see into objects via their virtual ghosts, exploding them into constituent parts, able to untangle their circuits visually. Just as past generations gained textual literacy in school, learning how to master the written word, from alphabets to indexes, the next generation will master visual literacy. A properly educated person will be able to create a 3D image inside of a 3D landscape nearly as fast as one can type today. They will know how to search all videos ever made for the visual idea they have in their head, without needing words. The complexities of color and the rules of perspective will be commonly understood, like the rules of grammar. It will be the Photonic Era. But here’s the most important thing: Robots will see this world. Indeed, this is already the perspective from which self-driving cars and robots see the world today, that of reality fused with a virtual shadow. When a robot is finally able to walk down a busy city street, the view it will have in its silicon eyes and mind will be the mirrorworld version of that street. The robot’s success in navigating will depend on the previously mapped contours of the road—existing 3D scans of the light posts and fire hydrants on the sidewalk, of the precise municipal position of traffic signs, of the exquisite details on doorways and shop windows rendered by landlord scans. 
Of course, like all interactions in the mirrorworld, this virtual realm will be layered over the view of the physical world, so the robot will also see the real-time movements of people as they walk by. The same will be true of the AIs driving cars; they too will be immersed in the mirrorworld. They will rely on the fully digitized version of roads and cars provided by the platform. Much of the real-time digitization of moving things will be done by other cars as they themselves drive around, because all that a robot sees will be instantly projected into the mirrorworld for the benefit of other machines. When a robot looks, it will be both seeing for itself and providing a scan for other robots. In the mirrorworld too, virtual bots will become embodied; they’ll get a virtual, 3D, photorealistic shell, whether machine, animal, human, or alien. Inside the mirrorworld, agents like Siri and Alexa will take on 3D forms that can see and be seen. Their eyes will be the embedded billion eyes of the matrix. They will be able not just to hear our voices but also, by watching our virtual avatars, to see our gestures and pick up on our microexpressions and moods. Their spatial forms—faces, limbs—will also increase the nuances of their interactions with us. The mirrorworld will be the badly needed interface where we meet AIs, which otherwise are abstract spirits in the cloud. There is another way to look at objects in the mirrorworld. They can be dual use, performing different roles in different planes. “We can pick up a pencil and use it as a magic wand. We can turn our tables into touchscreens,” Matsuda writes. We will be able to mess not only with the locations and roles of objects but with time as well. Say I’m walking along a path beside the Hudson River, the real Hudson River, and I notice a wren’s nest that my bird-watching friend would be keen to know about, so I leave a virtual note along the path for her. It remains there until she passes by. 
We saw the same phenomenon of persistence with Pokémon Go: virtual creatures remaining in a real physical location, waiting to be encountered. Time is a dimension in the mirrorworld that can be adjusted. Unlike the real world, but very much like the world of software apps, you will be able to scroll back. History will be a verb. With a swipe of your hand, you will be able to go back in time, at any location, and see what came before. You will be able to lay a reconstructed 19th-century view right over the present reality. To visit an earlier time at a location, you simply revert to a previous version kept in the log. The entire mirrorworld will be like a Word or Photoshop file that you can keep “undoing.” Or you’ll scroll in the other direction: forward. Artists might create future versions of a place, in place. The verisimilitude of such crafty world-building will be revolutionary. These scroll-forward scenarios will have the heft of reality because they will be derived from a full-scale present world. In this way, the mirrorworld may be best referred to as a 4D world.