Rugged individualists aside, many people find themselves increasingly connected not just to one another but also to the devices that make those connections possible. It’s clear that dependence on smartphones, tablets and other gadgets will only strengthen as broadband access, wireless connectivity and content grow. Less obvious is the impact this human–machine bond will have on our lives.



Cultural anthropologist Genevieve Bell leads a group at Intel Labs—Interaction and Experience Research—that aims to understand what people want from their technology and what might happen if they get it. The group also studies how people use technology, what motivates this use and what frustrates them, all in an effort to design microprocessors that help meet those demands.



A second-generation anthropologist, Bell grew up at her mother’s field sites in central and northern Australia in the 1970s and ’80s. Scientific American recently spoke with Bell—who joined Intel in 1998 as one of the company’s first social scientists—about her role there, our evolving attachment to our gadgets and making “magic” from silicon and circuits.



[An edited transcript of the interview follows.]





What is a cultural anthropologist doing at the world’s largest chipmaker?

Anthropology is a classically well-designed discipline for making sense of what people want. To make those insights about what people care about legible and intelligible to an engineering-oriented organization, you have to do a bit of translation.



How do you translate ideas from the social sciences to the technology world?

You have to say, here are the things we have seen in the field, and here are some consequences of those insights. Then you present ways to turn those ideas into prototypes. Some of those ideas are smoke and mirrors, some are sketch-board prototypes and some are fully fledged working things that make you ask: What would it take to actually [manufacture] this?



What happens to the smoke-and-mirror ideas?

With those, we realize that if we want to create a product out of an idea, we’re going to have to invent new technology to make that possible or hack the hell out of something else to get us close. Sometimes our scientists start by going, there’s this piece of technology and everyone’s using it for this thing, but if we do this other thing with it, oh my God, it would be totally cool. You come at a technology from different angles.



Customization has come a long way, with services such as Amazon and Netflix trying to anticipate our needs and make recommendations based on our behavior on those sites. How will our interactions with technology change as it becomes more personalized?

At the moment those recommendation algorithms sit in a number of different places in our lives, and there’s a little bit of bleed in between them. But we are getting to a point where recommendations won’t just come from services [like Amazon and Netflix]. They’ll come from our devices as well. Google+ and [Apple’s] Siri have learning algorithms that respond to your voice. Now imagine a world where our devices know our bodies. Apple’s new iPhone fingerprint sensor is a lovely example of that. Devices start by recognizing your thumb or your voice; then they could learn to recognize your friends’ voices, recognize the way you walk. Imagine if those devices put that information together with information about your location and the appointments on your calendar. That device gets to know you as a human being.



Why is it important that your devices get to know the real you?

This is about moving from human–computer interactions to human–computer relationships. The moment this really crystallized for me, and it’s a silly thing really, was when I saw a YouTube video of a Furby talking to Siri. And it was [48] seconds of splendor where the little Furby waves its ears and [bats] its little eyelashes and [makes noises]. [Editor’s note: To Furby’s nonsensical sounds, Siri responds, “I don’t see ‘Killher’ in your address book, should I look for businesses by that name?” Later in the video, Siri announces it is searching for Shell.com in response to more Furby gibberish.]



I was utterly mesmerized by the video for a really long time, and I couldn’t work out why. Then I realized it was a genealogy of talking things, a classic kinship diagram—a granddaddy thing talking to a grandbaby thing. It was a thing that talked to a thing that listened. Siri promises to listen to you. There’s a notion of reciprocity with Siri. Once things listen, there is an implicit transformation: it is no longer you telling something what to do; there is relationship building.



What are the benefits of devices being able to add context to the information they collect about their owners?

A device might be able to give you directions home, including the best ways to avoid traffic. It starts to make recommendations. The thing that is fascinating is that we are moving closer to a world where the technology in our lives—partly because of the devices themselves and partly because of the services that sit on those devices—has the capacity to know us and to start to act in concert with us on our behalf without us needing to tell them all the time what to do.



Is the technology available to make these human–gadget relationships a reality?

Devices don’t often know that they’re [near] your other devices. If you’re sitting in front of a television and tweeting about Breaking Bad on your laptop, your television doesn’t know that’s what you’re up to. And your laptop doesn’t know you’re in front of a television that’s showing you that show. There might be interesting things you could do if it did. Your laptop might have known not to show you any spoilers, for example. That stuff isn’t hard to imagine. It’s just that at the moment the devices don’t [work together]. Pieces of the technology needed to do this do exist.



How do we get the rest of the technology needed to create these relationships?

Partly, it takes time for people to start using the technology that exists. I was just reading a Pew Internet & American Life Project study [about that]. [Editor’s note: The study reports that smartphones, broadband Internet and other technology aren’t as widely adopted as they might seem. For example, only 56 percent of all Americans even own a smartphone, much less have a relationship with it.] So there’s a cultural acceptance piece—it takes a while for new technologies to settle in. There’s also a regulatory piece, where different countries have different regulations that affect how you experience the technology. The European Union clearly has a different point of view about personal data than the U.S. does.



So, there is a lot more work to be done beyond developing smarter gadgets?

We talk about it as though it were one world, but we’ve truthfully been in a world of multiple Internets for at least a decade, even if you just talk about it in terms of physical infrastructure. [South] Korea has true two-way Internet—high-speed uploading and high-speed downloading, no throttle. On the other hand, in places like Australia the download-to-upload ratio is about seven to one, which means it’s much easier to consume content than it is to create and share it. That means the Internet feels different [to people in different places].



When the pieces—infrastructure, content, hardware, algorithms—come together, when people start to have “relationships” with their devices, what will that be like?

The scenarios about smart context-aware phones are always like, “Oh, Genevieve, you’re in New York, we know you like coffee, and we’re going to take you to a place that serves flat whites because we know you’re ridiculously attached to your Australian coffee.” [What if your device said,] “Listen, there is a piece of art at the MoMA that is transcendental, and it will make you weep. I know that’s not what you’re expecting, but that’s where I’m going to take you now.” That would be a completely different notion: an object deciding at some level that coffee can happen any time but this piece of art is a one-off, and deciding that this would be a time for delight and surprise and a little bit of magic. And that’s actually much harder to do.



That piece—the algorithm for delight, the algorithm for surprise—you crack that code, and I think it will be magic. I don’t quite know how to get there yet. I was joking with someone recently that I thought Arthur C. Clarke got it wrong about any sufficiently advanced technology being indistinguishable from magic, because I actually think we should be making advanced technology that is magic. The first photos, the first time electricity flickered on, the first time you touched a touch screen—not all technology has to be about problem-solving efficiency gains. Part of what some of the most popular technologies of the last 100 years delivered was magic. Television was about magic before it was about the Real Housewives of New Jersey.

