The first time I saw 3D graphics was on my dad's computer when I was a boy. Dad was a programmer for the phone company, back before they called programmers "coders" and before being a coder was cool. He didn't make apps; he wrote routines for a brute of a mainframe that lived somewhere in the bowels of Pacific Bell. He stayed up all hours of the night telecommuting before it was a perk of every job. Sometimes, my younger brother and I would wander in to see what he was up to, and if we were lucky, he'd set us on his lap and boot up a game: Wolfenstein 3D, a first-person shooter in which you were a GI trying to escape a Nazi castle. I never thought of the graphics as looking real, but they were undeniably effective. The three of us would sit in the darkened office as soldiers ran down corridors onscreen, anxious and scared at the possibility that the next turn might lead to Gestapo lying in wait. In the tensest moments, Dad would physically lean his body, my brother and me with it, to try to peer around corners.

Did the rudimentary machine we were playing on intuit my father's movements and respond? Did we become part of the game, in anything more real than our imaginations? Dad was good, but not that good. Besides, it was the Dark Ages—the mid-'90s—what do you want?

The protagonist of this story is a machine. A wearable, holographic computer that leaves your virtual, surpasses your augmented, and just gives you the reality. You might call it a second-sight machine, giving you the cognitive powers of the machine you were born with, but freed from the tyranny of physics. It is a set of magic lenses through which humans see, with unprecedented clarity, their relationship to the world.

The applications for this machine are limitless, from planning tricky surgeries, to designing other intricate machines with a partner who is in Tierra del Fuego but right there with you all the same, to just having the time of your life. With this machine, the Nazis wouldn't have known what hit them.

What God created this particular universe?

In 2007, Alex Kipman—today a technical fellow at Microsoft, then a leader of the Windows team—had just overseen the release of Windows Vista. It debuted to something less than acclaim, so much less, in fact, that eight years later it made the cut for a Silicon Valley joke about the worst high-profile tech products of recent memory. Kipman was disappointed in himself: Six years into a career at Microsoft—the only place he'd worked since earning his bachelor's degree—he realized he had no point of view of his own. He'd simply been following the dictates of other people. Of the industry. So he took a sabbatical to his native Brazil to find a purpose.

He repaired to the Atlantic Forest, the vast tropical cover of Brazil's eastern coast, to an off-the-grid farm. He walked with a notebook in a place teeming with life and devoid of technology, and he did not stop until his mind had conceived a machine, one that would take an unknowable number of years to create. It was a machine with one animating impulse: to understand the world.

Subconsciously, at incalculable speed, the human brain is always reasoning: commanding the senses to take in information, gain context, and deduce what is happening in the world around it. When Alex Kipman left the Atlantic Forest, he couldn't yet build the machine he dreamed of. But he had an idea, a guiding principle. It was a way of organizing the world as a machine might, were it attempting to understand the world like a human brain: the things in the world on one axis, and the ways of interacting with them on the other. When Kipman returned from Brazil, he began building.

He started with the simplest machine he could imagine. Microsoft's naming conventions dictate that projects in development be named after cities, so he called it Project Natal, after a city in Brazil whose name means "birth." A depth-sensing camera that sat on a television, it was a machine that could see and respond to the movements of the human body. In 2010 it was released to the public as Kinect, an Xbox peripheral that allowed gamers to control games with the movements of their bodies, and it sold faster than any piece of consumer electronics in history.

So Kipman moved on to a machine that could not only see a person and his environment, but could also make him see things.

Kipman called his new project Baraboo, this time named after a peculiar town in Wisconsin, once the headquarters of the Ringling Brothers Circus, a town that Kipman says is home to the only clown cemetery in the United States. He'd been pitching a strange idea around Microsoft: mixed reality, a headset that showed its wearer three-dimensional holograms, accurately rendered in the space around them. Might as well name the damn thing after the place I'll end up when it fails, Kipman figured.

In March of this year, after years of development with partners like Volvo and NASA's Jet Propulsion Laboratory, Project Baraboo became available to select developers willing to pay $3,000 for a developer's kit. As of August, anyone can buy it. Kipman had dodged the clown cemetery. The machine was called HoloLens.

There are many ways to describe what HoloLens is: a mixed-reality device, a holographic computer, an expensive escapist technology. But what it is, most notably, is the gift of sight. It can see like no machine before it.

HoloLens floats over a person's head like a smoke-ring halo. A padded inner gray circlet rests on the crown of the skull and cinches tight in the back. The glasses—actually a clear glass shield with a second set of trapezoidal lenses underneath—float around the inner ring on a tilting axis. They are adjusted to hover in front of the eyes. And the bulk of the machine is above the lenses, in a crescent moon of plastic and silicon that rests against the forehead: a bundle of sensors.

These sensors—a variety of cameras and motion detectors—all send their data, terabytes per second, to a control center called the holographic processing unit. The result is a coordinate system that tells HoloLens what the room looks like, where the wearer is, and what is within his field of vision. HoloLens then learns, with the help of a calibration program, the particular quirks of the wearer's eyes. And because HoloLens understands the environment around the person and where he is looking, when its two tiny projectors shine holograms into his retinas, those holograms—aside from their odd, shimmery essence—are truly in the scene. No longer holograms, they are real. They can be half-hidden behind a couch, sit on a kitchen counter, or come crashing through a wall.
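What lets a hologram sit on a kitchen counter and stay there is, at bottom, a coordinate transform: the hologram is anchored in the room's coordinate system, and every frame the headset's tracked pose is inverted to find where that anchor falls in the wearer's view. Here is a minimal sketch of that math in Python; the function and the numbers are illustrative only, not Microsoft's code.

```python
import numpy as np

def world_to_view(hologram_pos, head_pos, head_rot):
    """Transform a hologram anchored in room ("world") coordinates
    into the wearer's head ("view") coordinates.

    hologram_pos: (3,) point in room coordinates, in meters
    head_pos:     (3,) tracked headset position in room coordinates
    head_rot:     (3, 3) rotation matrix giving head orientation
    """
    # Invert the head pose: shift to the head, then rotate the
    # world into the head's frame.
    return head_rot.T @ (hologram_pos - head_pos)

# A hologram on a counter two meters ahead stays put as the wearer
# moves, because it is expressed in room coordinates, not head ones.
hologram = np.array([0.0, 0.9, 2.0])  # counter height, 2 m away
head = np.array([0.0, 1.7, 0.0])      # wearer standing at the origin
facing = np.eye(3)                    # looking straight ahead
print(world_to_view(hologram, head, facing))  # ~[0.0, -0.8, 2.0]
```

Re-running this transform with fresh `head` and `facing` values each frame is what keeps the hologram pinned in place no matter where the wearer walks.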

The device is controlled with a "cursor" that follows the wearer's gaze and is activated with hand gestures (or voice commands). There are essentially only three gestures. The most ubiquitous, the air tap, is done by holding the index finger straight up in the air, then bringing it down and back up, as if it's being pricked by a needle. It is the equivalent of a mouse click or a tap on a smartphone. Scrolling is accomplished with an "air tap and hold": keep the index finger at the bottom of the tap, then scrub the hand up or down (also used for zooming and resizing). Finally, the virtual back button is the bloom: Hand out, palm up, fingers together, you raise your hand and open the fingers, like a flower to the sun.
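To make the difference between a tap and a hold concrete, here is a toy state machine in Python that classifies them exactly as described above: finger up, down, and quickly back up is a tap; finger kept down is a hold. It is a sketch of the idea only, not Microsoft's actual recognizer, which works on full hand-tracking data (the bloom is omitted here).

```python
TAP_WINDOW = 15  # max samples between finger-down and finger-up

def detect_air_taps(finger_raised):
    """Yield the sample index of each completed air tap.

    finger_raised: iterable of booleans, one per hand-tracker sample,
    True while the index finger is held straight up.
    """
    state, down_at = "idle", None
    for i, raised in enumerate(finger_raised):
        if state == "idle" and raised:
            state = "armed"             # finger up: ready to tap
        elif state == "armed" and not raised:
            state, down_at = "down", i  # finger came down
        elif state == "down":
            if raised and i - down_at <= TAP_WINDOW:
                yield i                 # down and quickly back up: a tap
                state = "armed"
            elif i - down_at > TAP_WINDOW:
                state = "hold"          # kept down: scroll/zoom gesture
        elif state == "hold" and raised:
            state = "armed"             # hold released

samples = [False] * 3 + [True] * 5 + [False] * 4 + [True] * 5
print(list(detect_air_taps(samples)))  # -> [12]
```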

To observe a person using HoloLens is to regard something that looks like a religious experience. A strange play of light comes over his eyes, greens rolling across the lenses like the aurora borealis. He gestures as if having a conversation in sign language, but with no one, and using only three words. He sees things that others, the unwearing, cannot see.

As Microsoft has slowly introduced HoloLens to the world, it has set up demo rooms around the country. These are tiny rooms, each decorated and well appointed, each different. One looks like an urban apartment's living room. Another like the sales floor of a Volvo dealership. A den with a chandelier and a busted dimmer switch; a design studio. Inside each room, somewhere—on a table, on a desk—is a HoloLens. In the living room there are holographic adornments hanging on the walls and scattered across the floor like a child's things. The sales floor has a holographic demo S90 that zooms across the room and pops its chassis the way the real thing pops its hood. These experiences are technically impressive, but ultimately facile, too designed and sleek to awe. The more compelling way to see what HoloLens can do is to see what people do while wearing it.

Aviad Almagor is the director of the mixed-reality program at Trimble, a company that—among other things—works with clients to create digital models of buildings. Almagor has worked extensively on developing ways to use HoloLens for collaboration, which often involves gathering multiple HoloLens wearers in different corners of the planet around a single 3D model. Each individual's HoloLens shows him the same model, as well as avatars of his collaborators. Everyone can move freely around the model as they discuss it.

"When you bring out a virtual model," Almagor says, "people tend to put it on a table. There is no reason for this—the model can float in midair. But for some reason we need a solid surface to place the model on. And people will always walk around it. They will not cross it. It's the same with avatars—people will not get too close to an avatar. They keep some kind of distance, like in real life."

This is an important lesson, because humans do not treat computers like reality. They treat computers like machines.

The Jet Propulsion Laboratory used HoloLens to develop an application called OnSight, which uses existing photos of Mars's Gale Crater to create a fully immersive Martian environment. Earthbound geologists explore as if they were in the field, walking around Mars and examining it with the same facility they have back in their own gravity, on their home planet.

But in designing OnSight, JPL hadn't fully accounted for reality. It had designed it as if it were any other graphically intensive application, using a philosophy common to game design: If there's an object like a hill that blocks a user's view, why waste computing power rendering what's behind it? Users can't see it, anyway. But when JPL gave it to testers, one of the first Martian environments the geologists saw put the rover, Curiosity, in front of a small hill. The first thing most of them did was dash to the top of it—and what they saw behind it was low-res and ugly.
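That game-design shortcut is usually called occlusion culling: spend rendering detail only on what the user's viewpoint can see. A toy one-dimensional sketch in Python (illustrative only, not OnSight's code) shows both the savings and the trap.

```python
from collections import namedtuple

# A point on the terrain: position along a line, and its height.
Spot = namedtuple("Spot", "x height")

def visible(tile, viewpoint, hills):
    """Crude line-of-sight test: a tile is hidden if some hill sits
    between it and the viewpoint and rises taller than both."""
    lo, hi = sorted((tile.x, viewpoint.x))
    return not any(
        lo < h.x < hi and h.height > max(tile.height, viewpoint.height)
        for h in hills
    )

def choose_detail(tiles, viewpoint, hills):
    """Render visible tiles in high resolution, the rest in low."""
    return {t: ("high" if visible(t, viewpoint, hills) else "low")
            for t in tiles}

hills = [Spot(x=5, height=3)]
tiles = [Spot(x=2, height=0), Spot(x=8, height=0)]
start = Spot(x=0, height=2)
print(choose_detail(tiles, start, hills))
# {Spot(x=2, height=0): 'high', Spot(x=8, height=0): 'low'}

# The trap: detail was chosen for the starting viewpoint. When a
# geologist dashes to the hilltop, the far tile swings into view
# still rendered "low" -- low-res and ugly.
```

The fix is either to re-run the culling as the viewpoint moves or, as described later in the article, simply to render the landscape behind the hills too.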

"We asked them why they did that," says Parker Abercrombie, a software engineer at JPL. "And they said, Well, if I was out in the field, the first thing I would do is I'd go to the top of the tallest point and get the lay of the land."

In 2014, shortly after he'd started at JPL and gotten to try out OnSight himself, Abercrombie went camping in the California high desert. Not too far outside Los Angeles, bound up in the larger province of the Mojave, the landscape of the high desert is a monochrome of low brush, dirt, and rock. Ringed by the tall, dry landscape—the San Bernardino Mountains, the San Jacintos, the Granite and Providence ranges off in the distance—Abercrombie took in the distant ridgelines with a pang of familiarity. This feels so much like Gale Crater, he thought. Never mind that he had never been there: Because of OnSight and a gray headset, out in the California desert Abercrombie wasn't recognizing Mars the way you recognize a place you've seen only in pictures. He was remembering it.

"Ever since we invented machines—even before computers—we've bent over backward to be able to speak, and communicate, and give them instructions," says Dav Rauch. "We've learned the languages of the machines that we have created. We've been wrapped around their finger."

Rauch is a creative resident and senior design lead at IDEO, the global design firm. He got his start in movies, designing interfaces for futuristic technology. If you saw scientists standing at their computers in Avatar or Tony Stark looking through his suit's headset display in Iron Man, you've seen his work. His point is that typing on a keyboard is about as far from natural human communication as you can get—but we had to develop it to use computers effectively. The complexity of our interactions with computers grew for most of their history—punch cards, then a keyboard and monitor, then a keyboard, monitor, and mouse—but now those interactions are dissolving into gestures, voice, and gaze. The latter three, of course, are how we communicate with each other.

"I think the point we're finally getting to—starting with gestures, and we're going to see it more completely with virtual and augmented-reality environments—is where user interfaces disappear," says Rauch.

Kipman, for his part, believes the future is easy to see. "If you create technology that removes the interface from technology," Kipman claims, "the species will evolve."

"If you create technology that removes the interface from technology, the species will evolve."

The early days of augmented reality suggest this is true. In 1992 a doctoral candidate from Stanford University named Louis Rosenberg noticed its evolutionary potential when he was working on a U.S. Air Force–funded project to help surgeons perform surgeries remotely, with robotic arms. Rosenberg came up with the idea for what he called virtual fixtures, digital aids that could help surgeons make more accurate incisions. If a needle needed to enter a patient at a precise location, Rosenberg's system could make a virtual cone out of visual and vibrational feedback that would funnel the needle tip to the right spot. Or suppose a surgeon had to make a cut that would be lifesaving at one centimeter deep, but nick an artery if even one millimeter deeper. It would be helpful to build a depth stop, as you would for a table saw. Except it's hard to build a depth stop for a cut inside a human body. But what if the depth stop were virtual?
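The logic of a virtual depth stop is simple enough to sketch in a few lines. Here it is in Python, using the article's numbers (lifesaving at one centimeter, dangerous a millimeter deeper); this is an illustration of the idea, not Rosenberg's actual system.

```python
SAFE_DEPTH_MM = 10.0  # the intended depth of the incision
MARGIN_MM = 1.0       # resistance ramps up over the final millimeter

def fixture_depth(requested_mm):
    """Clamp the blade: it never goes deeper than the virtual stop."""
    return min(requested_mm, SAFE_DEPTH_MM)

def resistance(depth_mm):
    """Haptic feedback from 0.0 (free) to 1.0 (a solid wall),
    ramping up through the final margin before the stop."""
    into_margin = depth_mm - (SAFE_DEPTH_MM - MARGIN_MM)
    return min(max(into_margin / MARGIN_MM, 0.0), 1.0)

for d in (5.0, 9.5, 10.0, 11.0):
    print(d, "->", fixture_depth(d), "resistance:", resistance(fixture_depth(d)))
# 5.0  -> 5.0  resistance: 0.0
# 9.5  -> 9.5  resistance: 0.5
# 10.0 -> 10.0 resistance: 1.0
# 11.0 -> 10.0 resistance: 1.0  (the virtual wall holds)
```

The surgeon feels the tool stiffen as it nears the limit, the way a table saw's stop arrests the blade, except this wall exists only in software.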

"You could get to a level of suspension of disbelief where you didn't know what was the real information and what was the virtual information," Rosenberg says. "And, in fact, when people were working in the virtual fixtures system, if there was a virtual cone that they could feel, they were going to rely on that cone as if it were real. They were basically, in their mind, merging their perception of those two spaces, a merger of real and virtual information, and the boundaries between those two really didn't matter to them."

When Rosenberg ran standard performance tests on people using virtual fixtures, he found their performance increased by 70 percent. He'd given them superpowers, simply by harnessing their tendency to treat the virtual as the real.

The trouble was that virtual fixtures required massive amounts of hardware. Rosenberg had built a room-size rig of robot controls, goggles, and monitors.

Alex Kipman's sabbatical to off-the-grid Brazil led to Xbox Kinect, a Breakthrough winner in 2009, and now HoloLens.

HoloLens fits atop a human head.

On Mars, the OnSight team came to realize they could do better than simply rendering the landscape behind the hills, so people could run up them with abandon. They're giving the geologists the ability to fly. What better way to get the lay of the land? And having seen geologists constantly struggle to orient themselves, they've already added another new feature. In OnSight's virtual re-creation of Mars, any time a geologist looks up into the Halloween-colored sky, he'll see the numbers of an azimuth ring, floating, pointing the way north.

"Man, no, I'm completely not happy with it," says the man who invented HoloLens.

HoloLens is the most beguiling machine thus far produced, on the cutting edge of sensory processing, image technology, and sheer audacity of vision. But it is large. It is heavy and wearies the neck after just a short time. The batteries die too fast, the field of view is limited, and the holograms themselves lack the acuity of the sharpest 3D graphics, not to mention of the real world.

"It's the only fully untethered holographic computer, and it's a jewel and achievement of computing, but it's incomplete," Kipman says.

What HoloLens is is a harbinger, an inflection point.

"It's the equivalent of throwing a blanket over the real world, which is our spatial map. It creates a mesh of the real world that doesn't know the difference between a human, a couch, the floor, the ceiling, anything like that," Kipman says. "Which is epic, because it does it in real time at frame rate, and nobody's anywhere near that kind of technology. But it's the beginning of a journey."

When Kipman thinks back to Brazil, he can see HoloLens is but a few steps down the road toward the ultimate goal.

"In Kinect, the machinery exists in the environment," Kipman says. "It's plugged in and tethered underneath your TV. You don't wear anything. And yet it understands humans. In HoloLens, the machine is on the human, and the human is ambulatory, walking around. We didn't talk at all about what happens when you put the machinery on objects. The same mixed-reality understanding, when applied to an object, gets you robots and self-driving cars. If you imagine over time the proliferation of this machinery—existing everywhere, becoming ubiquitous—then to some extent the only time you need the machinery on the human is in an environment that doesn't contain the machinery."

Kipman suggests holograms emanating from the home, the office, the bus. Who needs a holographic computer on his head when holographic computers are everywhere? In some not-distant future, humankind will be a race of supermen, bone machines enabled on all sides by digital machines that understand us.

And it will not stop there.

Jason Alan Snyder, a futurist whose patents shaped aspects of Google Glass—a forerunner, in a way, of HoloLens—is one of many developers working on experiences for HoloLens. He works for a marketing agency, focusing on what he calls digital sampling, the ability to test products and experiences virtually before buying. But he sees past that. He thinks technology is at a place where we can begin to transcend language. He described an experiment in which subjects from different cultures greeted each other by thinking of a greeting in their own language. They wore EEG helmets to read their brain waves, and when a computer detected a thought forming—say, a greeting—in one person's brain, it triggered a phosphene—a brain-produced optical artifact that looks like a bright light seen in peripheral vision—in another.

"By thinking that greeting, a person on the other side of the world would see that greeting," Snyder says. "If we could direct that into one of these AR [augmented reality] devices, like HoloLens, that would be tremendous."

Mind-to-mind communication sounds like science fiction. But then, so do holograms.

In the basement of Building 92 of Microsoft's Redmond, Washington, campus, in a room dressed as an office at NASA's Jet Propulsion Laboratory, I got to try OnSight myself. When I went into this room, I put on a HoloLens and, with the click of a button, a Microsoft guide made the landscape of Mars, the red planet, appear all around me in remarkable 3D. I paced forward. Mars. Looked down at my feet. Mars. I panned in a circle, scanning the entire landscape. As I turned to look back over my left shoulder, a giant shape loomed over me, and I jumped back, startled. It was the Curiosity rover.

The guide called me over to the desk in the office. The monitor showed the pictures—actually taken by Curiosity—that had been stitched together to create the landscape. I could click on a rock in a picture and a flag would appear staked in the holographic ground around me, where I could walk up to it and take a closer look, in three dimensions.

One of the flags I planted was on a rock that, in 2D, seemed to bulge over the landscape. It begged to be explored. "Good choice," my guide told me. "That's a pretty interesting rock. We've got a lot of pictures of it."

"Come look."

I walked over to the rock. It did indeed bulge over the landscape. I could almost feel it next to my leg. Its overhang cast a shadow on the rocks below it. "We even know what the bottom of it looks like," my guide said, subtly beckoning.

I thought of those late nights on Dad's lap, straining to peer around corners. And with that on my mind, I got down on my hands and knees in the red Martian dust, to have a closer look at the underside of a rock that was sixty million miles away.

This article originally appeared in the October 2016 issue of Popular Mechanics.
