Betaworks CEO: There Will Be No Line Between Us and Our Devices

A critical look at artificial and augmented intelligence

Artificial intelligence is back. Whether in the dystopian portrayals of recent movies or the utopian singularities dreamed of in the tech world, the general agreement is that we are on the path to thinking machines. But as fun, twisted and thought-provoking as the dystopian show Black Mirror is, I don’t believe machines are going to think or achieve a human level of consciousness any time soon.

I want to focus on a different dimension of our relationship to machines—how we are integrating computing into ourselves. How we are augmenting ourselves with technology. I believe this augmentation and integration is transforming us and our world faster than any external singularity event.

In some cases we are augmenting our intelligence, but in others we are dumbing ourselves down to accommodate poorly designed software or hardware. Like the proverbial frog that sat in a slowly warming pot of water, we are now boiling ourselves in this new world. Aspects of this integration are functional and clearly visible—wearables, watches, beacons and nearables, haptic triggers and navigation, virtual reality and augmented reality, even a bionic mattress that emulates the womb to soothe premature babies. The list is long and getting longer.

Further reading on the coming AI revolution, including whether it will be a “nice God”: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

As we connect devices and services, complexity rises exponentially. My IFTTT triggers connect to a set of web services that I use a lot, yet occasionally I experience unexpected collisions that take a while to unwind. For example, in the middle of a winter storm, my Nest thermostat got a point software upgrade and decided it was summertime. It took a while to unravel the chain of interdependencies that resulted in my pipes freezing. We are at the beginning of this curve, right at the beginning. As a culture we are becoming dependent on the network and its increasingly complex interconnections.

Our personal phones are an obvious example. It’s fair to say my attention is either directly or ambiently connected to this device, and its network, throughout the day. So for a month I tracked my usage, to get a more concrete view of my dependence.

Tracking phone usage

The graph to the left shows my phone usage from December 3 to January 6. On average I used my phone for 162 minutes a day (the blue line). The number of times I pick up or manually activate my phone is more variable—it’s the orange line—and averages out at 35+ a day. The average time per use is 6 minutes.

This isn’t far off the norm. The radio show New Tech City, on WNYC, has a project in which 12,000 people are volunteering their phone usage data. Estimates from that study suggest we use our phones for an average of 95 minutes per day and actually pick them up 50 to 60 times daily. In the spring of 2014 Millward Brown, a market research company, reported the results of a survey, which showed that people in the U.S. spend 151 minutes per day on their smartphones, as compared to 147 in front of TVs. China’s consumers use their smartphones even more, with an average of 170 minutes a day. Our mental focus on our devices is indistinguishable from immersion.

Step by step we are integrating computing into our selves, blurring the line between natural biological intelligence and augmented intelligence. Over the span of a year, my 162 minutes a day adds up to 41 days. That’s a lot of attention, and that’s only the directed attention — it doesn’t include my ambient attention.
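As a back-of-the-envelope check, the 41-day figure follows directly from the daily average — the sketch below just redoes that arithmetic:

```python
# Check the attention arithmetic: 162 minutes of directed
# phone use per day, accumulated over a full year.
minutes_per_day = 162
minutes_per_year = minutes_per_day * 365      # 59,130 minutes
days_per_year = minutes_per_year / (60 * 24)  # convert minutes to 24-hour days

print(f"{minutes_per_year:,} minutes ≈ {days_per_year:.0f} days")
# → 59,130 minutes ≈ 41 days
```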

Here are three specific ways in which I see computers being integrated into the human experience: at times augmenting it, at times not.

Our Prosthetic Self

The 2020 Tokyo Paralympics will be the first in history where, across a majority of categories, athletes will outperform the abilities of contestants in the Olympics. It might happen in Rio in 2016, but my hunch is that it won’t be until 2020 that outperformance will apply to a majority of categories. We saw the start of this in 2012 with Oscar Pistorius’s entry into the Olympics. Studies of his legs concluded that he was using 25% less energy than his biological counterparts. Once you add automation, the increases in efficiency will become even more pronounced.

As a society we are moving beyond viewing disabilities as limiting and restrictive; prosthetics are instead becoming augmentations of our biological bodies. The chance to transcend disabilities has long been a dream of people in this field, and it’s finally becoming a reality. It’s hard to get a sense of the scale of this, but I believe this development will affect a material part of the population. Extend what you think of as prosthetics today to include pacemakers, embedded corneal lenses, contacts (with zooming lenses), artificial joints, braces, drug implant systems, artificial skin and glasses, and you get a sense of the scope of the change. Prosthetics, exoskeletons, soft exosuits and other physical enhancements can extend our natural limitations. With the assistance of machines and data we can more safely navigate our lives and surroundings.

I was at an event last year where I saw Missy Cummings speak about robotics, drones and what the Air Force refers to as the “Mode 1 approach,” in which the pilot relinquishes takeoffs and landings because computers can perform them so much more safely. Missy explained how the last thing she was required to do as a fighter pilot before taking off from an aircraft carrier was to hold up both her hands—in an “I surrender” gesture—saying to the people on the flight deck: “Look, no hands!” Nice metaphor. Autonomous planes and trains are already in use. By 2020 self-driving cars will be on our roads, and as Sebastian Thrun and others have discussed, they will make our roads much safer.

Every time I see a wearable I ask myself—could this be integrated into our selves? I’m not making an ethical statement about our biological selves versus our prosthetic selves. I am seeking to reset the framing of the computer as an object and us as the person manipulating that computer. Computers are no longer that “other” thing, that “other” object. The line between machines and humans is becoming indistinguishable.

This spring Apple will ship its watch. The product has spurred a lot of discussion: do people want a computer on their wrist? I don’t think that’s what this device is about, and it’s certainly not how Apple has designed it. On the Apple Watch promotional site, the company makes three product commitments: timekeeping, new ways to connect, and health and fitness. The first is obvious — it’s a watch! Thankfully, Apple has remembered that. As for the second commitment, Apple says, “You’ll express yourself in new, fun, and more personal ways. With Apple Watch, every exchange is less about reading words on a screen and more about making a genuine connection.” Apart from the navigational crown, the Apple Watch has a single button on it. Its purpose is to connect this device to its cousins worn by our friends and family in what Apple is branding as a genuine connection, as opposed to one that is mediated through an electronic interface. Apple is seeking to eliminate the interface of a screen and connect the device directly and intimately to our bodies. It’s the difference between seeing something and feeling something.