How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described “maverick” tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how best to describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which the two had set out to explore. Rahwan, for example, had been probing the question of ethical behavior for a self-driving car — should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist? — in his Moral Machine experiment.

“I was good friends with Iain Couzin, one of the world’s foremost animal behaviorists,” Rahwan said, “and I thought, ‘Why isn’t he studying online bots? Why is it only computer scientists who are studying AI algorithms?’

“All of a sudden,” he continued, “it clicked: We’re studying behavior in a new ecosystem.”

Two years later, Rahwan, who now directs the Center for Humans and Machines at the Max Planck Institute for Human Development, has gathered 22 colleagues — from disciplines as diverse as robotics, computer science, sociology, cognitive psychology, evolutionary biology, artificial intelligence, anthropology and economics — to publish a paper in Nature calling for the inauguration of a new field of science called “machine behavior.”

Directly inspired by the Nobel Prize-winning biologist Nikolaas Tinbergen’s four questions — which analyzed animal behavior in terms of its function, mechanisms, biological development and evolutionary history — machine behavior aims to empirically investigate how artificial agents interact “in the wild” with human beings, their environments and each other. A machine behaviorist might study an AI-powered children’s toy, a news-ranking algorithm on a social media site, or a fleet of autonomous vehicles. But unlike the engineers who design and build these systems to optimize their performance according to internal specifications, a machine behaviorist observes them from the outside in — just as a field biologist studies flocking behavior in birds, or a behavioral economist observes how people save money for retirement.

“The reason why I like the term ‘behavior’ is that it emphasizes that the most important thing is the observable, rather than the unobservable, characteristics of these agents,” Rahwan said.

He believes that studying machine behavior is imperative for two reasons. For one thing, autonomous systems are touching more aspects of people’s lives all the time, affecting everything from individual credit scores to the rise of extremist politics. But at the same time, the “behavioral” outcomes of these systems — like flash crashes caused by financial trading algorithms, or the rapid spread of disinformation on social media sites — are difficult for us to anticipate by examining machines’ code or construction alone.

“There’s this massively important aspect of machines that has nothing to do with how they’re built,” Rahwan said, “and has everything to do with what they do.”

Quanta spoke with Rahwan about the concept of machine behavior, why it deserves its own branch of science, and what it could teach us. The interview has been condensed and edited for clarity.

Why are you calling for a new scientific discipline? Why does it need its own name?

This is a common plight of interdisciplinary science. I don’t think we’ve invented a new field so much as we’ve just labeled it. I think it’s in the air for sure. People have recognized that machines impact our lives, and with AI, increasingly those machines have agency. There’s a greater urgency to study how we interact with intelligent machines.

Naming this emerging field also legitimizes it. If you’re an economist or a psychologist, you’re a serious scientist studying the complex behavior of people and their agglomerations. But people might consider it less important to also study the machines acting within those systems.

So when we brought together this group and coined this term “machine behavior,” we’re basically telling the world that machines are now important actors in the world. Maybe they don’t have free will or any legal rights that we ascribe to humans, but they are nonetheless actors that impact the world in ways that we need to understand. And when people of high stature in those fields sign up [as co-authors] to this paper, that sends a very strong signal.

You mentioned free will. Why even call this phenomenon “behavior,” which seems to unnecessarily invite that association? Why not use a term like “functionality” or “operation”?

Some people have a problem with giving machines agency. Joanna Bryson from the University of Bath, for instance, has always been outspoken against giving machines agency, because she thinks that then you’re removing agency and responsibility from human actors who may be misbehaving.

But for me, behavior doesn’t mean that it has agency [in the sense of free will]. We can study the behavior of single-celled organisms, or ants. “Behavior” doesn’t necessarily imply that a thing is super intelligent. It just means that our object of study isn’t static — it’s the dynamics of how this thing operates in the world, and the factors that determine these dynamics. So, does it have incentives? Does it get signals from the environment? Is the behavior something that is learned over time, or learned through some kind of copying mechanism?