The science that I do is on how our brains let us think about other minds. There are at least three ways that that kind of science could help us think about conflict. One is the idea that conflict is actually conflict about other people's minds. What conflict is, in part, is the suspicion of other people's motives, the inability to trust and forgive, and the way that our expectations of group boundaries make us less empathetic and more damning of other people's actions.

People who have studied conflict, especially intergroup conflict, have tended to focus on fear of the unknown. But I think it's these very strong expectations of malice, and the lack of any expectation that the other side has reasonable perspectives, that are the real drivers of conflict. There's empirical evidence, which I can talk about, that this is part of the cycle of what keeps a conflict going. The perception that other people are irrational, and understand only the language of violence, is part of what keeps the conflict going. So that's one way.

The second way is that there are all these people around the world trying to do conflict intervention programs. This is actually both the second and the third way. The problem with conflict intervention programs is that they are designed by intuition. People who want to make the world a better place have an intuition about how to make it better, and build a program based on their intuition of what is driving the conflict and how to fix it. The empirical evidence suggests that those only work some of the time, for some people, and we don't know in advance. It's the same problem that we have in clinical contexts with current medical treatments. The best current treatment works for some of the people some of the time, and we don't know how to predict for whom it's going to work, when, and why.

If we knew how to target our interventions at the people for whom they would work, and how to pick out the people for whom the best current treatment doesn't work and design new interventions for them, we would have so much more power to make the change in the world that we want to make. Some of my work right now is trying to figure out whether we can use a scientific approach even just for that: to figure out which interventions work for whom, and under what circumstances. Then the craziest idea is that we could use neuroscience to help, because conflict resolution efforts are a context in which introspection seems like the worst possible way to go. We know that we don't have access to all the thoughts we have about other people, and that we can't predict our own thoughts and behaviors towards out-groups. Even if we could, we wouldn't be willing to admit it. Even if we had introspective access to the hostility and suspicion we feel towards other groups, if a scientist came and asked us about it, we probably wouldn't admit to it.

The advantage of neuroscience is being able to look under the hood and see the mechanisms that actually create the thoughts and the behaviors that create and perpetuate conflict. That seems like it ought to be useful. That's the question I'm asking myself right now: can science in general, or neuroscience in particular, be used to understand what drives conflict, what prevents reconciliation, why some interventions work for some people some of the time, and how to make and evaluate better ones?

I'm a cognitive neuroscientist. One piece of the work that I've been doing is on how people think about other people's thoughts. The key piece of that so far has been the discovery that there is a specific brain region that people use that helps them think about other people's thoughts. Not just one brain region in isolation, but a group of them. One that's particularly interesting to me is called the right temporoparietal junction, which is above and behind your right ear. The question we've been asking recently is how much can we get out of the neuroscience of this system? By being able to take apart this incredibly central human cognitive function, the ability to think about other people's minds, by being able to take that function apart neuroscientifically, what can we learn about it? What can we learn about when we use that function, how we come to have it as children, how it's different in humans from what it is in other animals, how it could be affected by diseases? In this new line of work, we're asking whether the same systems that we've studied in thinking about neutral other people's minds show any signatures, anything useful at all, about how we think about other people in group conflict.

The research that we were doing for a long while, initially, was just on what we call typical human adults, many but not all of whom were MIT undergraduates. The question there was, all human adults are capable of taking into consideration other people's thoughts and beliefs (what other people know, what they don't know, what they desire, what their motivations are), but how do people do it?

This was the remarkable discovery of about ten years ago: the existence of a group of brain regions in the human brain that we didn't know about before we could do human functional neuro-imaging, whose function seems to be something specific to do with human social cognition, the way that one person thinks about another person. For almost 15 years now, scientists have been pulling apart these brain regions, trying to figure out what they are, where they are, and what they do, because in a broad perspective, there's a huge amount of the human brain that is devoted to one or another social function: to seeing another person's face, to recognizing their actions, to seeing their bodies, to responding to their emotions. There's a huge amount of the brain devoted to social function, and part of that is for thinking about other people's thoughts.

There's a group of brain regions that are involved in different parts of that problem. We don't understand all of them yet. This is still an open question: what are the different pieces there? There are five or six different brain regions that you get if you just ask which brain regions, in general, show more metabolic activity when you're thinking about other people's thoughts. Figuring out why there are so many, and what each one contributes to the problem, is a project I've been working on, and other people have been working on, for a while.

Did I discover it? When I first started working on this as a brand-new graduate student, this problem was preexisting. People had hypothesized, based on data from autism, that such a brain system would exist, because people with autism can be disproportionately impaired in their ability to think about other people's thoughts, compared to the rest of their cognitive capacities. That might show up as a disproportionate effect of the disease on some particular brain system, which could have been a region, it could have been a connection, it could have been a chemical. The hypothesis was out there from the early '80s: that there must be something, some specific brain system, that typical people use for thinking about other people's thoughts, and that could be the etiology of autism.

When I started working on this, there were two previous neuro-imaging papers. This is what's totally amazing about this field. I'm not that old, and when I started working in this field, there were two papers about this topic, and there are now hundreds of them, just ten or 12 years later. The basic answer is so robust that everybody agrees on what it is, which is astonishing in science. Everybody almost immediately agreed on what the answer to a crazy question like that is: which brain regions are the basic brain regions you need for thinking about other people's thoughts, and also the fact that such brain regions exist at all, that there are brain regions that have specifically social functions.

Since then, the devil has been in the details of figuring out exactly what does it do, when does it do it, how does it develop, how does it function? Is it, in fact, the etiology of autism? That's one of the big open questions. In spite of the fact that that's where the hypothesis came from, we still don't know if that's true. I'm working on that and how it matters for real social problems, for society, for self-knowledge, for the big picture questions.

I work on other minds, and philosophers work on the problem of other minds. And they're not to be confused. The problem of other minds is the question of whether other minds exist. Frankly, with regard to my research, it doesn't matter whether that's true, because I study the part of our brain that believes that other minds exist. If it's wrong, it doesn't matter, because as a function of our brains, there it is, right? Every one of us, in the end, thinks about other minds as if they exist. That's one of the most striking creations of the brains that we're given, and I study that. In fact, I sometimes say I study the invention of other minds by every human brain, because other minds are not directly perceivable. The ways in which we understand other minds to exist could be completely false. There may not be such a thing as a belief, in fact. Surely the theories we have of other minds are biased and oversimplified, just as our naïve theory of physics is biased and oversimplified. The thing that I study is the cognitive and neural mechanism inside each human brain that invents the kinds of minds that we see in all the people around us, whether they have them or not.

The other thing is: how do we know whether the people we study are lying to us? It's exactly when people can't introspect about their own minds that cognitive science and neuroscience are necessary. If it were possible to figure out how human minds work by introspection, then you wouldn't need a science of it. You wouldn't need external observation and replicable controlled conditions, and you wouldn't, in particular, need neuroscience. The reason neuroscience might be useful, for figuring out the structure of the mind, for diagnosing what's wrong with it, and for understanding how to fix it, is that introspection isn't reliable; it doesn't have access to most of the important things about our minds. If we want to know what predicts human behavior, or when people will break into a fight or develop a disease, we can't just ask them to think about it and tell us. Even if they wanted to be honest, they couldn't be.

We need a science of it: just as we needed a science of physics to get beyond naïve physics, we need a science of the mind to get beyond naïve psychology.

Our lab studies many questions and uses many tools, and we do many experiments. There's no single typical day at the lab. But the most basic, run-of-the-mill experiments that we do, we do with fMRI. fMRI uses the same machine as an MRI. Anybody who's ever needed an MRI (most people by now have had a clinical MRI for one thing or another) remembers lying on their back in a dark tube with loud banging noises in the background. Our experiments start very similarly. We put you inside an MRI machine.

We then take two kinds of images. The first is like a photograph that I would take of your face. It shows me what your brain looks like, just as a photograph shows me what your face looks like. Just like a face, all brains have basically the same parts. Everybody has two eyes, a nose, and a mouth; everybody has the same lobes and the same basic landmarks. But just as every face is unique, every brain is unique. The first picture we take just tells us, okay, where are the parts of your brain, what does it look like exactly. Then we take what is effectively a movie: a picture of change over time in your brain, over the course of the next two hours, while we're doing different things. This movie shows us blood oxygen changing. It's based on the same principle that if you run really hard, you'll use up the oxygen in your leg muscles and your body will send oxygen there to replenish it; the same thing is going on in your brain. As you're thinking with one bit of your brain, it's using its local supply of oxygen, and your body is compensating, sending oxygen there. With an MRI machine, we're watching that happen. We're watching the blood oxygen levels change in one place or another, as we ask you to do one thing or another. That gives us this movie of the change in oxygen over time.

In some of our experiments, you're reading stories. Sometimes you're in face-to-face dialogue with a person. Sometimes you're listening to somebody read you a story, or watching a movie, or looking at a photograph. But the key thing is, we've always designed it so that we know what you're doing when. Then we can compress these movies of your brain activity, asking: at all the times when you were looking at a picture of a face or reading a story about a thought, compared to all the times when you were doing whatever the control condition is (looking at a picture that's not of a face, or reading a story about somebody's physical appearance rather than their internal states), how was the blood oxygen changing at one kind of time as opposed to the other? That's the central experiment.
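The contrast logic described here can be sketched in a few lines. This is a hypothetical, highly simplified illustration (one simulated voxel, randomly assigned condition labels, made-up effect size), not an actual fMRI analysis pipeline, which would involve a hemodynamic response model and many voxels:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Assumed design: which condition was on screen at each time point
# (1 = story about a thought, 0 = control story about physical appearance)
condition = rng.integers(0, 2, size=n_timepoints)

# Simulated BOLD signal for one voxel: noise plus a small
# extra response during the "thought" time points
bold = rng.normal(0.0, 1.0, size=n_timepoints) + 0.8 * condition

# The core contrast: mean signal during one condition minus the other
contrast = bold[condition == 1].mean() - bold[condition == 0].mean()
print(f"thought-minus-control contrast: {contrast:.2f}")
```

A region "used for thinking about thoughts" is one where this contrast is reliably positive across runs and subjects; the same subtraction logic underlies the face-versus-non-face comparison mentioned above.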

If you were in an experiment that we're doing right now, you might also get assigned to one of two groups in a competition. In order to study group dynamics, we're studying people who are in real conflict groups, people who come from ethnic or political groups that are in conflict. Sometimes we're scanning them and having them think about each other: getting Democrats and Republicans, or scanning Israelis and Palestinians, while they're thinking about each other and about members of their own group. Sometimes we take people into the lab who aren't yet a member of a group in any kind of conflict, and we create a conflict. We say: you're a member of this team called the Eagles, and your team is in a battle with the Rattlers, and there's only one prize, and whichever team wins gets the prize. And so now you're on the team, and now we have you thinking about how you feel about Eagles and how you feel about Rattlers, to try to create a kind of microcosm of conflict that we can study in the lab.

The question is how valuable fMRI actually is. It has so many different kinds of promise as a tool. But actually, most of those remain promissory notes. To be fair, we've only had 15 years of fMRI. It's a brand-new tool. But the basic ways that people thought and hoped that fMRI would be practically useful are close, but still just on the horizon. People thought fMRI would be useful clinically for figuring out what the cause of a disease was. It might be, but it's not clear how useful that is. Knowing that something's wrong in dyslexia, in a particular pattern of connectivity, doesn't actually tell you what to do about it. Then people thought, well, maybe fMRI will be useful for figuring out which interventions have been successful. But so far, nobody's shown any case where fMRI has been useful on an individual basis; as far as I know, fMRI hasn't yet been used to diagnose a disease better than a clinical impression.

One of the most exciting current thoughts about how fMRI could be useful, which is being promoted by my colleague, John Gabrieli, is that fMRI could be used to identify which kids will benefit from which interventions. He has the first example of this, from dyslexia, showing that a scan of kids when they're younger can tell you which kids will benefit from the current intervention for dyslexia, the current kinds of training programs. The idea is that if that's true, then we could figure out in advance, first of all, who the kids are for whom the current program works, so that we know to make sure to send those kids to the current program. The kids for whom the current program is not likely to work, those are the targets for whom new interventions should be invented. They're the ones we don't yet know how to help. That's the current promise. But so far there's just one paper like that, which recently came out from John Gabrieli's lab. Other than that, as far as I know, there are very few cases where you can show that fMRI is already useful clinically. Again, I think there is lots of promise.

fMRI is functional MRI: it asks which bit of your brain you're using during a given cognitive task. MRI is the anatomical picture. There's no question MRI has been useful. The question, and I don't mean this to sound pessimistic, on the contrary, I think it's optimistic, is that we don't know yet exactly when and how fMRI, knowing where the brain activity is during a particular kind of task, will be practically useful. It's already been extremely revealing for helping to pick apart the components of the mind. That, in and of itself, is deeply satisfying as an intellectual endeavor. But if you want to know how it will make people healthier, wealthier, or better people? I think that's what we're getting to: wanting the promise of fMRI to come to fruition, sooner rather than later.

The big picture image that we have, of how fMRI could be used in this context, although it has not happened this way yet, is that some interventions seem to be successful for some people (some dialogue programs, sometimes watching a movie, reading a novel). Sometimes there are experiences that you have, deliberately or through a government program, that make a difference to how you think about people on the other side. Sometimes that works and sometimes it doesn't work. If we knew what were the drivers of conflict in the first place, what made you want to perpetuate conflict with another group, and how the interventions that do work have their effects, maybe we could design better ones. Or at least maybe we could figure out who are the people most likely to be helped by the kinds of interventions that exist so far.

The idea with neuro-imaging is: first of all, can we diagnose the problem? Can we see, when you're thinking about somebody on the other side, under what circumstances your brain is doing something that might be diagnostic of what the psychological driver of conflict is? There are lots of possible things you could imagine here: what's the source of a loss of empathy, or of the suspicion of the other side's motives, or of the perception that the out-group is always irrational? Those are coming from our brains. It's the structure of the human brain that makes us feel that way about the out-group.

Maybe we could figure out where and how that's coming from. This is my big-picture, far-future thought. It hasn't happened yet. But if it could, then maybe we could take the programs that work and ask whether they work because they have their influence on one or another of these brain systems. And if so, could we do the same thing that John Gabrieli's trying to do? Could we take a group of people and tell you: these people, even though they feel really hostile to the other side right now, if you put them through a dialogue intervention program, it will help, it will make them less hostile; but those people need something else.

The work that we're doing right now isn't in any way using technology to intervene on people's brains. It's trying to use the technology as a window onto how the mind works; potentially as the most detailed, most accurate, most truthful window we could have, although, again, that's a promise, not a reality. The theory behind this work is that a direct window into the thing that makes our mind what it is is, in the end, the only way to truly understand it. But it's not, at the moment, an intervention. It's still deeply exploratory. It's something we understand almost not at all.

In terms of mirror neurons, people mean different things by mirror neurons. The original discovery of mirror neurons was a class of neurons in the brains of monkeys with a beautiful, abstract property: they would respond during a given action, like a grasping action, regardless of whether the action was executed by the monkey or by somebody the monkey was looking at. It's a common code for a certain kind of action, whether you execute it or somebody else executes it. It's also incredibly specific. In monkeys, the mirror neurons that respond to a grasp when it's done by me or by you don't respond to the same grasp if it's done with a different effector; a grasp with the whole hand versus a pincer grip engages different mirror neurons. They also don't respond if you merely mime the grasp; there needs to actually be something you're grasping. It's a very specific response to a small, particular set of body movements that you could make on objects in the world.

It's an incredibly interesting class of neurons. But people have taken them as a panacea, as an explanation of everything interesting and abstract in the human brain, and that cannot possibly be true. Mirror neurons are a little bit more abstract than most of the kinds of neurons studied in monkey physiology, and incredibly interesting for that reason. But they don't explain the vast array of either human social cognition or any of the other cognitive capacities that have been attributed to them, like human language and morality and theory of mind. Mirror neurons are just this one particular class of neurons that, in most ways, have nothing to do with human morality.

It's controversial whether human fMRI reveals mirror systems. Probably you can see mirror systems in humans with fMRI. I think that because the data are consistent with it, and because it's very unlikely that a brain system you find in macaques doesn't exist in humans. It's always possible, but it's very unlikely, and so the combination of the prior probability of continuity across species and the data as they exist so far suggests that we really do have something like a human mirror system. It seems to serve different, maybe complementary, functions to the theory-of-mind brain systems I was telling you about before. Those brain systems do not overlap. That says that when we're watching other people's actions, we've got at least two, maybe quite independent, things going on: one of which is knowledge of how their bodies are moving through the world, and the other of which is an invention of the stuff inside their minds.

For the research I've been telling you about, which is only one of the things we're doing in the lab, the end vision is a vision of being useful. In particular, what we imagine is being useful to the people who are already working to intervene on conflict and to reduce it. We're thinking right now about conflict between groups, but actually it could be conflict at any level between people. If conflict is sometimes driven by biases and expectations that our minds make, because of the nature of the human mind, that we don't necessarily intuit or endorse, then it's possible that self-knowledge on its own, and the knowledge-guided programs that we could construct, will be better ways for us to choose to intervene on our own minds. If we wanted to stop feeling suspicious and hostile towards members of other groups, the interventions we designed, knowing where those biases come from, would be better, more effective tools. That's the grand plan.

An aspect of conflict that has often been left out of the science of conflict is the dynamics of power. In any conflict, one of the key factors is who has been in power and who has been out of power, in terms of how the conflict has gone up until now and what it takes to make the conflict better in the future. Having a dialogue with a person from the other side is one of the standard things that people do when they're trying to make a conflict better: they bring groups from each side together and have them talk to each other. In the little bit of work that we've done so far on conflict and how it plays out, the one discovery we have up until this moment, which we just published, is that dialogue works differently, for different reasons, depending on which side of the power dynamic you're on.

If you're in a conflict situation and you're the one who's been in power, making you do a good job of listening to what the other side is saying, making you take their perspective and really hear what they're saying, makes you a little more open to them. It makes you perceive them with more empathy and as less irrational than you did a minute ago, or half an hour ago.

If you're on the disempowered side of a conflict, if you're coming from the less-empowered position, being told to take the perspective of the more-empowered side helps not at all; it might even hurt. Instead, what helps is feeling like the other side is hearing you. Getting the chance to talk and be heard makes you a little bit more open, a little more empathetic, and a little less likely to see them as irrational. This is obviously not a solution to all possible problems, but this is a little bit of empirical data from a randomized controlled trial showing that one of the reasons dialogue works differently for the different sides is that, when you come from two sides of a power dynamic, you have different needs.

Of all the empirical things I've ever published, I think this one might be the most practical: the idea that if you're on opposite sides of a power dynamic, that the person more empowered needs to take a moment to listen, and the person who's less empowered needs the experience of being heard. Just as the first move to open both sides up a little. That could be really useful to know.

Right now, the approach that I'm taking is trying to figure out how far empirical data can take us. How far can we go? We got a new tool: just as I became a scientist, fMRI became available. The question at the time was, what could we learn with this new tool? The big fundamental question about how the mind relates to the brain, and the nature of the human self, was open in a new way with the existence of this new tool. Many people thought, and I certainly thought, well, let's just see how far we can ride this. Let's see what kind of new insight is made possible. Every five or so years, I, and probably everybody else, get worried that we've exhausted it. That we've done the things that you can do with this tool, and we're going to need another new technological opening.

But that hasn't been true yet. At each turn in the field, when it feels like it might be the end of the line, something new opens up, whether it's a new analysis and a new way of looking at the data, or a new kind of question that seems approachable that wasn't approachable before, or a new population that we thought we couldn't study that now we can. At least for the first 15 years, every time I thought I was going to give up, it turned out there was so much more still to learn with this one tool. The kinds of problems that will be the limit of our capacity to understand are the modern mind/body problems: how the mind links to the brain. What is it about the pattern of firing across a set of neurons that makes a thought or a concept? That is the fundamental big-picture challenge for all neuroscientists.

I don't know if it's going to be possible to get there. Pushing towards that limit is one of the big challenges for all of cognitive neuroscience. That's the question, how far can fMRI take us? At first it seems like maybe all fMRI could tell us was where something happened. But that hasn't been true, actually. It turns out one can get beyond where, but maybe in limited ways, and maybe in unlimited ways. I think we're still finding out.

We understand ourselves through metaphors. Many of the metaphors have been metaphors of artifacts of things that humans have constructed. We understand the function of the heart through the metaphor of the function of tools that we can create. We have understood the function of the brain through the metaphor of the information processing systems that we can build. In some sense, that is right and inevitably right. What it will mean to understand the brain will always be to have a better and better metaphor, by way of a better and better comparison system, a thing we can build that's closer and closer in its capacities to the human mind.

Current cognitive neuroscience is struggling to understand something that is beyond any of our current metaphors, and certainly beyond our current understanding. It feels like we are before the right concept. It feels like the physicists, before they distinguished heat and temperature. It feels like the things that we're saying to ourselves right now don't work because they confuse central distinctions. I don't think I am currently inventing a better way, although I would love to be. I'm certainly trying to push the empirical approaches, the data that we can collect to make it force us to something more correct.

The feeling of working with the wrong concept, which I think many of us have, knowing that our best theories are not merely superficially wrong but deeply confused, can be disheartening. But it is also the promise of huge future discoveries. It's only if you're before a huge future discovery that you have a chance of living through one.