This is based on an ongoing conversation at the Media Lab and is a compilation of thoughts from conversations with the faculty, students and researchers at the MIT Media Lab. Mostly written by Joichi Ito with help from Kevin Slavin and the rest of the Media Lab.

Artificial Intelligence has yet again become one of the world’s biggest ideas and areas of investment, with new research labs, conferences, and raging debates everywhere from the mainstream media to academia.

We see debates about humans vs. machines, questions about when machines will become more intelligent than human beings, and speculation over whether they’ll keep us around as pets or simply conclude we were a bad idea and eliminate us.

There are, of course, alternatives to this vision, and they date back to the earliest ideas of how computers and humans interact.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine. Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers. For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world. (John Markoff)

But beyond distinguishing between creating an artificial intelligence (AI), or augmenting human intelligence (IA), perhaps the first and fundamental question is where does intelligence lie? Hasn’t it always resided beyond any single mind, extended by machines into a network of many minds and machines, all of them interacting as a kind of networked intelligence [4] that transcends and merges humans and machines?

If intelligence is networked to begin with, wouldn’t this thing we are calling “AI” just augment this networked intelligence, in a very natural way? While the notion of collective intelligence and the extended mind are not new ideas, is there a lens to look at modern AI in terms of its contribution to the collective intelligence?

We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon. As we develop increasingly powerful tools to process information and network that processing, aren't we just adding new pieces to the EI that every actor in the network is a part of?

Marvin Minsky conceived AI not just as a way to build better machines, but as a way to use machines to understand the mind itself. In this construction of Extended Intelligence, does the EI lens bring us closer to understanding what makes us human, by acknowledging that part of what makes us human is that our intelligence lies so far outside any one human skull?

At the individual level, in the future we may look less like terminators and more like cyborgs; less like isolated individuals, and more like a vast network of humans and machines creating an ever-more-powerful EI. Every element at every scale would be connected through an increasingly distributed variety of interfaces, each actor doing what it does best -- bits, atoms, cells and circuits -- each one fungible in many ways, but tightly integrated and part of a complex whole.

While we hope that this Extended Intelligence will be wise, ethical and effective, is it possible that this collective intelligence could go horribly wrong, and trigger a Borg Collective hypersocialist hive mind? [5]

Such a dystopia is averted neither by building better machine learning nor by declaring a moratorium on such research. Instead, the Media Lab works at the intersections of humans and machines, whether we’re talking about neuronal interfaces between our brains and our limbs, or society-in-the-loop machine learning.

While the majority of AI funding and research aims to accelerate statistical machine learning, trying to make machines and robots “smarter,” we are interested in the augmentation and machine assistance of the complex ecosystem that emerges from the network of minds and our society.

Advanced Chess is the practice of human/computer teams playing in real-time competitive tournaments. Such teams dominate the strongest human players as well as the best chess computers. This effect is amplified when the humans themselves play in small groups, together with networked computers.

The Media Lab has the opportunity to work on the interface and communication between humans and machines–the artificial and the natural–to help design a new fitness landscape [6] for EI and this co-evolution of humans and machines.

EI research currently includes:

Connecting electronics to human neurons to augment the brain and our nervous system (Synthetic Neurobiology and Biomechatronics)

Using machine learning to understand how our brains understand music, and to leverage that knowledge to enhance individual expression and establish new models of massive collaboration (Opera of the Future)

If the best human or computer chess players can be dominated by human-computer teams that include amateurs working with laptops, how can we begin to understand the interface and interaction design for those teams? How can we get machines to surface analysis for human evaluation, rather than supplanting it? (Playful Systems)

Machine learning is mostly conducted by an engineer tweaking data and learning algorithms, then testing the result in the real world. We are looking into human-in-the-loop machine learning [7][8], putting professional practitioners in the training loop. This augments human decision-making and makes the ML training more effective, by giving it greater context.
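The training cycle described above can be sketched as a minimal active-learning loop: the model is repeatedly refit, the example it is least certain about is handed to a human practitioner for a label, and that label feeds the next round. This is an illustrative toy (a 1-D threshold classifier, with the practitioner simulated by an `oracle` function); all names here are hypothetical and not from the text.

```python
# Minimal human-in-the-loop training sketch (illustrative only).
# The "model" is just a decision threshold on 1-D data; uncertainty is
# distance from the boundary; the human is simulated by `oracle`.

def train(labeled):
    """Fit a threshold classifier: predict 1 if x >= threshold."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    # Put the boundary midway between the two classes seen so far.
    return (min(pos) + max(neg)) / 2

def most_uncertain(threshold, pool):
    """Pick the unlabeled point closest to the decision boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

def human_in_the_loop(pool, oracle, seed, rounds=5):
    """Alternate model fitting with targeted human labeling."""
    labeled, pool = list(seed), list(pool)
    for _ in range(rounds):
        threshold = train(labeled)
        x = most_uncertain(threshold, pool)
        pool.remove(x)
        labeled.append((x, oracle(x)))  # the practitioner supplies the label
    return train(labeled)

# Simulated practitioner: in practice, a person answers each query.
oracle = lambda x: int(x >= 0.6)
seed = [(0.0, 0), (1.0, 1)]
pool = [0.1, 0.3, 0.5, 0.55, 0.65, 0.7, 0.9]
model = human_in_the_loop(pool, oracle, seed)
# The learned boundary converges toward the oracle's threshold of 0.6.
```

The design point is that the human labels only the handful of examples the model finds most ambiguous, which is where professional judgment and context matter most; the bulk of the data never needs human attention.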

Building networked intelligence, studying how networks think and how they are smarter than individuals. (Human Dynamics Group)

Developing human-machine interfaces through sociable robots and learning technologies for children. (Personal Robots Group)

Developing “society-in-the-loop” machine learning, pulling ethics and social norms from communities to train machines, then testing the machines with society in a kind of ethical Turing test. (Scalable Cooperation)

Developing wearable interfaces that can influence human behavior through consciously perceivable and subliminal I/O signals. (Fluid Interfaces)

Extending human perception and intent through pervasively networked sensors and actuators, using distributed intelligence to extend the concept of “presence.” (Responsive Environments)

Incorporating human-centered emotional intelligence into design tools, so that the “conversation” the designer has with the tool is more like a conversation with another designer than interactions around geometric primitives. (e.g., “Can we make this more comforting?”) (Object-Based Media)

Developing a personal autonomous vehicle (PEV) that can understand, predict, and respond to the actions of pedestrians; communicate its intentions to humans in a natural and non-threatening way; and augment the senses of the rider to help increase safety. (Changing Places)

Providing emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term, non-annoying interaction with the person in order to gather the months and years of data needed for successful prediction. [9] (Affective Computing)

Using artificial intelligence and crowdsourcing to understand and improve the health and well-being of individuals. (Camera Culture Group)

The Macro Connections Group is collaborating with the Camera Culture Group on artificial intelligence and crowdsourcing for understanding and improving our cities.

Macro Connections has also developed data visualization engines such as the OEC, Dataviva, Pantheon, and Immersion, which served nearly 5 million people last year. These tools augment networked intelligence by helping people access the data that large groups of individuals generate, the data needed for a panoptic view of large social and economic systems.

Collaborating with Canan Dagdeviren to explore novel materials, mechanics, device designs, and fabrication strategies that bridge the boundaries between brain and electronics; developing devices that can be twisted, folded, stretched/flexed, wrapped onto curvilinear brain tissue, and implanted without damage or significant alteration in the device’s performance; and working toward a vision of brain probes that can communicate with external and internal electronic components.

The wildly heterogeneous nature of these different projects is characteristic of the Media Lab. But more than that, it is the embodiment of the very premise of EI: that intelligence, ideas, analysis and action are not formed in any one individual collection of neurons or code. All of these projects are exploring this central idea with different lenses, experiences and capabilities, and in our research as well as in our values, we believe this is how intelligence comes to life.

Citations: