First, it was punch cards; then it was the iconic mouse and keyboard. The tools and systems we use to engage with computers are what allow us to control and build the world around us in ways unimaginable to our ancestors. We've come a long way, to be sure, but when it comes to the field of user interface (UI, the means by which we interact with computer systems), we haven't seen anything yet.

Some might say it’s odd to start our Future of Computers series with a chapter about UI, but it’s how we use computers that will give meaning to the innovations we explore in the rest of this series.

Every time humanity invented a new form of communication—be it speech, the written word, the printing press, the phone, the Internet—our collective society blossomed with new ideas, new forms of community, and entirely new industries. The coming decade will see the next evolution, the next quantum leap in communication and interconnectivity, entirely intermediated by a range of future computer interfaces … and it may just reshape what it means to be human.

What is a 'good' user interface, anyway?

The era of poking, pinching, and swiping at computers to get them to do what we want began over a decade ago. For many, it started with the iPod. Where once we were accustomed to clicking, typing, and pressing down on sturdy buttons to communicate our will to machines, the iPod popularized the concept of sliding a thumb around a circular wheel to select the music you wanted to listen to.

Touchscreen smartphones entered the market shortly after that, introducing a range of other tactile command prompts like the poke (to simulate pressing a button), the pinch (to zoom in and out), and the press-hold-and-drag. These tactile commands gained traction quickly among the public for a number of reasons: They were new. All the cool (famous) kids were doing it. Touchscreen technology became cheap and mainstream. But most of all, the movements felt intuitive, natural.

That's what good computer UI is all about: Building more natural ways to engage with software and devices. And that's the core principle that will guide the future UI devices you're about to learn about.

Poking, pinching, and swiping at the air

As of 2018, smartphones have replaced standard mobile phones in much of the developed world. This means a large portion of the world is now familiar with the various tactile commands mentioned above. Through apps and games, smartphone users have learned a large variety of abstract skills to control the relative supercomputers sitting in their pockets.

It's these skills that will prepare consumers for the next wave of devices—devices that will allow us to more easily merge the digital world with our real-world environments. So let's take a look at some of the tools we'll use to navigate our future world.

Open-air gesture control. As of 2018, we’re still in the micro-age of touch control. We still poke, pinch, and swipe our way through our mobile lives. But that touch control is slowly giving way to a form of open-air gesture control. For the gamers out there, your first interaction with this may have been playing overactive Nintendo Wii games or the Xbox Kinect games—both consoles use advanced motion-capture technology to match player movements with game avatars.

Well, this tech isn't staying confined to video games and green-screen filmmaking; it will soon enter the broader consumer electronics market. One striking example of what this might look like is a Google venture named Project Soli (watch its amazing and short demo video here). Developers of this project use miniature radar to track the fine movements of your hand and fingers, simulating the poke, pinch, and swipe in open air instead of against a screen. This is the kind of tech that will help make wearables easier to use, and thus more attractive to a wider audience.

Three-dimensional interface. Taking this open-air gesture control further along its natural progression, by the mid-2020s, we may see the traditional desktop interface—the trusty keyboard and mouse—slowly replaced by the gesture interface, in the same style popularized by the movie Minority Report. In fact, John Underkoffler, UI researcher, science advisor, and inventor of the holographic gesture interface scenes from Minority Report, is currently working on the real-life version—a technology he refers to as a human-machine interface spatial operating environment. (He'll probably need to come up with a handy acronym for that.)

Using this technology, you will one day sit or stand in front of a large display and use various hand gestures to command your computer. It looks really cool (see link above), but as you might guess, hand gestures might be great for skipping TV channels, pointing and clicking on links, or designing three-dimensional models, but they won't work so well for writing long essays. That's why, as open-air gesture technology is gradually incorporated into more and more consumer electronics, it will likely be joined by complementary UI features like advanced voice command and iris-tracking technology.

Yes, the humble, physical keyboard may yet survive into the 2020s.

Haptic holograms. The holograms we’ve all seen in person or in the movies tend to be 2D or 3D projections of light that show objects or people hovering in the air. What these projections all have in common is that if you reached out to grab them, you would only get a handful of air. That won’t be the case by the mid-2020s.

New technologies (see examples: one and two) are being developed to create holograms you can touch (or at least mimic the sensation of touch, i.e. haptics). Depending on the technique used, be it ultrasonic waves or plasma projection, haptic holograms will open up an entirely new industry of digital products that we can use in the real world.

Think about it: instead of a physical keyboard, you could have a holographic one that gives you the physical sensation of typing, wherever you're standing in a room. This technology is what will mainstream the Minority Report open-air interface and possibly end the age of the traditional desktop.

Imagine this: Instead of carrying around a bulky laptop, you could one day carry a small square wafer (maybe the size of a thin external hard drive) that would project a touchable display screen and keyboard hologram. Taken one step further, imagine an office with only a desk and a chair, then with a simple voice command, an entire office projects itself around you—a holographic workstation, wall decorations, plants, etc. Shopping for furniture or decoration in the future may involve a visit to the app store along with a visit to Ikea.

Speaking to your virtual assistant

While we're slowly reimagining touch UI, a new and complementary form of UI is emerging that may feel even more intuitive to the average person: speech.

Amazon made a cultural splash with the release of its artificially intelligent (AI) personal assistant, Alexa, and the various voice-activated home assistant products it released alongside it. Google, the supposed leader in AI, rushed to follow suit with its own suite of home assistant products. Together, the multibillion-dollar competition between these two tech giants has led to rapid, widespread acceptance of voice-activated AI products and assistants among the general consumer market. And while it's still early days for this tech, this early growth spurt shouldn't be underestimated.

Whether you prefer Amazon's Alexa, Google's Assistant, Apple's Siri, or Microsoft's Cortana, these services are designed to let you interface with your phone or smart device and access the knowledge bank of the web with simple verbal commands, telling these 'virtual assistants' what you want.

It’s an amazing feat of engineering. And while it’s not quite perfect, the technology is improving quickly; for example, Google announced in May 2015 that its speech recognition technology had an error rate of just eight percent, and shrinking. When you combine this falling error rate with the massive innovations happening in microchips and cloud computing (outlined in upcoming chapters of this series), we can expect virtual assistants to become pleasantly accurate by 2020.
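For context, a speech recognition "error rate" like the eight percent figure above usually refers to word error rate (WER): the word-level edit distance between the system's transcript and a human reference transcript, divided by the number of reference words. The sketch below shows the standard calculation; the example sentences are invented for illustration.

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, but over words
    # instead of characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "turn on the kitchen lights please"
hypothesis = "turn on the kitten lights please"
print(word_error_rate(reference, hypothesis))  # one wrong word out of six
```

An "eight percent error rate" means roughly one word in twelve is inserted, dropped, or misheard—which is why small further improvements matter so much for conversational assistants.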

Even better, the virtual assistants currently being engineered will not only understand your speech perfectly, but they will also understand the context behind the questions you ask; they will recognize the indirect signals given off by your tone of voice; they will even engage in long-form conversations with you, Her-style.

Overall, voice-recognition-based virtual assistants will become the primary way we access the web for our day-to-day informational needs. Meanwhile, the physical forms of UI explored earlier will likely dominate our leisure- and work-focused digital activities. But this isn’t the end of our UI journey, far from it.

Wearables

We can't discuss UI without also mentioning wearables—devices you wear or even insert inside your body to help you interact digitally with the world around you. Like voice assistants, these devices will play a supporting role in how we engage with the digital space; we'll use them for specific purposes in specific contexts. However, since we wrote an entire chapter on wearables in our Future of the Internet series, we won’t go into further detail here.

Augmenting our reality

Moving forward, virtual reality and augmented reality will integrate all of the technologies mentioned above.

At a basic level, augmented reality (AR) is the use of technology to digitally modify or enhance your perception of the real world (think Snapchat filters). This is not to be confused with virtual reality (VR), where the real world is replaced by a simulated world. With AR, we'll see the world around us through different filters and layers rich with contextual info that will help us better navigate our world in real time and (arguably) enrich our reality. Let's briefly explore both extremes, starting with VR.

Virtual reality. At a basic level, virtual reality (VR) is the use of technology to digitally create an immersive and convincing audiovisual illusion of reality. And unlike AR, which currently (2018) faces a variety of technological and social hurdles to mass-market acceptance, VR has been around for decades in popular culture. We’ve seen it in a wide range of future-oriented movies and television shows. Many of us have even tried primitive versions of VR at old arcades and tech-oriented conferences and trade shows.

What’s different this time around is that today’s VR technology is more accessible than ever. Thanks to the miniaturization of various key technologies (originally used to make smartphones), the cost of VR headsets has cratered to a point where powerhouse companies like Facebook, Sony, and Google are now annually releasing affordable VR headsets to the masses.

This represents the start of an entirely new mass-market medium, one that will gradually attract thousands of software and hardware developers. In fact, by the late-2020s, VR apps and games will generate more downloads than traditional mobile apps.

Education, employment training, business meetings, virtual tourism, gaming, and entertainment—these are just a few of the many applications cheap, user-friendly, and realistic VR can and will enhance (if not entirely disrupt). However, unlike what we've seen in sci-fi novels and films, the future where people spend all day in VR worlds is decades away. That said, what we will spend all day using is AR.

Augmented reality. As noted earlier, the goal of AR is to act as a digital filter on top of your perception of the real world. When looking at your surroundings, AR can enhance or alter your perception of your environment or provide useful and contextually relevant information that can help you to better understand your environment. To give you a better sense of how this may look, check out the videos below:

The first video is from the emerging leader in AR, Magic Leap:

Next is a short film (6 min) from Keiichi Matsuda about how AR might look by the 2030s:

From the videos above, you can imagine the near-limitless number of applications AR tech will one day enable, and it’s for that reason that most of tech’s biggest players—Google, Apple, Facebook, Microsoft, Baidu, Intel, and more—are already investing heavily in AR research.

Building upon the holographic and open-air gesture interfaces described earlier, AR will eventually do away with most of the traditional computer interfaces consumers have grown up with thus far. For example, why own a desktop or laptop computer when you can slip on a pair of AR glasses and see a virtual desktop or laptop appear right in front of you? Likewise, your AR glasses (and later AR contact lenses) will do away with your physical smartphone. Oh, and let's not forget about your TVs. In other words, most of today's large electronics will become digitized into the form of an app.

The companies that invest early to control the future AR operating systems or digital environments will effectively disrupt and seize control of a large percentage of today's electronics sector. Alongside this, AR will also have a range of business applications in sectors like healthcare, design/architecture, logistics, manufacturing, the military, and more, applications we discuss further in our Future of the Internet series.

And yet, this still isn’t where the future of UI ends.

Enter the Matrix with Brain-Computer Interface

There’s yet another form of communication that’s even more intuitive and natural than movement, speech, and AR when it comes to controlling machines: thought itself.

This is a bioelectronics field called brain-computer interface (BCI). It involves using a brain-scanning device or an implant to monitor your brainwaves and associate them with commands, letting you control anything that's run by a computer.
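At its simplest, a BCI is a pattern-matching pipeline: reduce a window of raw brain signal to a handful of features, then classify those features into a command. The toy sketch below illustrates the shape of that pipeline only—the signal values, thresholds, and command names are all invented, and real systems use trained classifiers on multi-channel EEG rather than hand-set rules.

```python
# Toy sketch of a BCI command pipeline: turn a window of (simulated)
# brainwave readings into a device command. All numbers and command
# names here are illustrative, not from any real BCI system.

def extract_features(samples):
    """Reduce a raw signal window to two simple summary features:
    mean amplitude, and mean absolute change between samples."""
    mean_amp = sum(samples) / len(samples)
    mean_delta = sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)
    return mean_amp, mean_delta

def classify_command(samples):
    """Map the features to one of three hypothetical commands."""
    mean_amp, mean_delta = extract_features(samples)
    if mean_delta > 1.0:   # rapidly oscillating signal -> "select"
        return "select"
    if mean_amp > 0.5:     # sustained high amplitude -> "move"
        return "move"
    return "rest"          # default: no command issued

# Three simulated signal windows:
print(classify_command([0.1, 0.1, 0.2, 0.1]))  # low, steady   -> "rest"
print(classify_command([0.9, 0.8, 0.9, 0.8]))  # high, steady  -> "move"
print(classify_command([0.0, 2.0, 0.0, 2.0]))  # oscillating   -> "select"
```

The hard part of real BCI is not this mapping step but reliably extracting meaningful features from noisy neural signals in the first place.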

In fact, you might not have realized it, but the early days of BCI have already begun. Amputees are now testing robotic limbs controlled directly by the mind instead of through sensors attached to the wearer's stump. Likewise, people with severe disabilities (such as quadriplegia) are now using BCI to steer their motorized wheelchairs and manipulate robotic arms. But helping amputees and people with disabilities lead more independent lives isn’t the extent of what BCI will be capable of. Here’s a short list of the experiments now underway:

Controlling things. Researchers have successfully demonstrated how BCI can allow users to control household functions (lighting, curtains, temperature), as well as a range of other devices and vehicles. Watch the demonstration video.

Controlling animals. A lab successfully tested a BCI experiment where a human was able to make a lab rat move its tail using only his thoughts.

Brain-to-text. A paralyzed man used a brain implant to type eight words per minute. Meanwhile, teams in the US and Germany are developing a system that decodes brain waves (thoughts) into text. Initial experiments have proven successful, and the teams hope this technology could not only assist the average person but also give people with severe disabilities (like the renowned physicist Stephen Hawking) the ability to communicate with the world more easily.

Brain-to-brain. An international team of scientists was able to mimic telepathy by having one person in India think the word “hello”; through BCI, that word was converted from brainwaves to binary code and emailed to France, where the binary code was converted back into brainwaves to be perceived by the receiving person. Brain-to-brain communication, people!
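The middle of that experiment's pipeline—word to binary code and back—is ordinary text encoding; the genuinely hard stages are the brain-reading and brain-stimulation steps at either end. The sketch below shows just that middle encoding step, with the transmission reduced to a variable for illustration.

```python
# Toy sketch of the "hello" experiment's encoding step: the sender's
# word is serialized to binary for transmission, then decoded back on
# the receiving end. The BCI reading/stimulation stages are omitted.

def word_to_bits(word):
    """Encode each character as 8 bits of its ASCII code."""
    return "".join(format(ord(ch), "08b") for ch in word)

def bits_to_word(bits):
    """Decode an 8-bits-per-character binary string back to text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(b, 2)) for b in chars)

payload = word_to_bits("hello")   # produced from the sender's brainwaves...
print(len(payload))               # 40 bits for a five-letter word
print(bits_to_word(payload))      # ...decoded on the receiving end: "hello"
```

Forty bits for one word also hints at why early brain-to-brain links carried only single words: the bandwidth bottleneck is the brain interface, not the internet in between.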

Recording dreams and memories. Researchers at the University of California, Berkeley, have made unbelievable progress converting brainwaves into images. Test subjects were shown a series of images while connected to BCI sensors; those same images were then reconstructed on a computer screen. The reconstructed images were super grainy, but given about a decade of development time, this proof of concept will one day allow us to ditch our GoPro cameras or even record our dreams.

We’re going to become wizards, you say?

At first, we'll use external BCI devices that look like a helmet or hairband (2030s), which will eventually give way to brain implants (late 2040s). Ultimately, these BCI devices will connect our minds to the digital cloud and later act as a third hemisphere for our minds—so while our left and right hemispheres manage our creativity and logic faculties, this new, cloud-fed digital hemisphere will facilitate abilities where humans often fall short of their AI counterparts, namely speed, repetition, and accuracy.

BCI is key to the emerging field of neurotechnology, which aims to merge our minds with machines to gain the strengths of both worlds. That’s right, everyone: starting in the 2030s, and going mainstream by the late 2040s, humans will use BCI to upgrade our brains, communicate with each other and with animals, control computers and electronics, share memories and dreams, and navigate the web.

I know what you’re thinking: Yes, that did escalate quickly.

But as exciting as all of these UI advances are, they will never be possible without equally exciting advancements in computer software and hardware. These breakthroughs are what the rest of this Future of Computers series will explore.

Future of Computers series

* Future of software development: Future of computers P2