Who watches the watchmen?

Harry Collins. That's who.

A professor of social sciences at Cardiff University in Great Britain, Collins has spent his career studying other scientists.

In particular, Collins has spent more than 35 years following scientists who work in the field of gravitational wave physics. That's how I found out about him, during a dinner in February with several gravitational wave physicists who work at the University of Wisconsin-Milwaukee. They kept talking about "our sociologist", who attended their meetings, took notes during their debates, and generally seemed to observe and record their behavior the way Jane Goodall did with chimpanzees.

I was immediately intrigued, but Collins's work turned out to be a lot more fascinating than I'd even guessed. What he does isn't simple ethnography, or even real-time recording of science history. Instead, Collins uses his observations of gravitational wave physicists and their internal culture to better understand how science, as a human endeavor, works—how researchers go about learning new information, how we use science as a tool to arrive at truth, and what happens when scientists disagree with one another.

In the process, he's become one of the world's leading experts on decision-making, how science and politics work together, and even the nature of expertise itself.

Maggie Koerth-Baker: You're a social scientist who studies scientists. How common is that job?

Harry Collins: There's quite a big field called the sociology of science. There's a professional society with 700 to 1,000 members. What there aren't many of are sociologists studying the physical sciences. Lots study the biological sciences, but these days only two or three of us do research on physical sciences—I might even be the last one apart from one of my Ph.D. students.

Really, though, I'm a sociologist of knowledge, and just happen to study science to see knowledge being formed.

Nowadays I also do something a bit different. About 10 years ago, I started to worry that it was difficult to use what we were doing to help policy makers make policy. I wanted to see how we could use science and technology to make policy before consensus formed in the scientific community. I decided to switch from studying the making of truth to studying the nature of expertise.

MKB: The nature of expertise is a really interesting subject to me. In the wake of the Fukushima nuclear disaster, I saw a lot of problems and misinformation arising from journalists speaking to scientists as if they were experts in everything, rather than experts in one specific field. For instance, somebody interviewing a nuclear engineer and asking them questions that were better suited to a health physicist. From your perspective, do you think the public has a tendency to think of scientific expertise in overly broad terms? And, if so, is that a problem?

HC: People do tend to think of scientists as general "experts". I think you're spot on. One of the things we point out is that scientists are only experts in very, very narrow domains—like crevasses. And as soon as they're out of their expertise crevasse, a scientist is no better off than anyone else.

You can't just wheel on any scientist. That's no good. But at the same time, if you take any particular domain, there are sometimes people who are experts that have no qualifications, but do have experience in the practical domain—they count as experts too. That widens the area to certain small groups of unqualified experts—experience-based experts without qualifications.

Thirdly, there are what we call 'interactional experts'. These are people who have learned about the domain by long and hard immersion in the discourse—the spoken language, but without practicing. So our theory narrows the domain of expertise to just those who know about a specific kind of problem—that rules out most scientists, just leaving a few—but it also widens the domain to include experience-based experts who may have no qualifications and a few interactional experts who have no practical experience but long immersion in the discourse.

In our book from 2007 called Rethinking Expertise you'll find what we call the Periodic Table of Expertises, where we try to classify every kind of expert. It has about 10 or 12 categories. The key, as far as we are concerned, is being connected and immersed in what counts as the body of experts for that domain. We're looking at the extent to which different groups are immersed in the knowledge in one way or another.

MKB: By learning about concepts like expertise, is this how you distinguish what you do from the work of science historians? It seems like there's some overlap, now that you've been following the development of this one particular field of physics for almost 40 years.

HC: I wrote this great big book called Gravity's Shadow: The Search for Gravitational Waves in 2004. But one thing I said in the preface is that this isn't history. By that, I meant that historians have certain professional standards, such as long footnotes that give an exact source reference for every statement, and that's not what I do. Most of my evidence is from talking to people and learning about the culture by immersion—I try to gain interactional expertise.

There isn't an archive I can refer people back to full of snatches of conversation that they can listen to; my kind of evidence is gone almost as quickly as it comes—the talk happens and then it's happened. Professional historians would feel a bit edgy about what I do because they feel they have to reference everything, but I'm not writing professional historian's history; I am reporting events as they unfold and the way a new culture is born and changes.

MKB: Tell me a little about why you think the sociology of science is important. One aspect you've written about that I found interesting was the way that your work helps clarify the process of science as something that isn't divine. An exceptional human endeavor, sure. But a human endeavor. How does sociology provide that clarity?

HC: It's a matter of professional roles again. It's the job of scientists to do their best with experiments and theory and so on, but a sociologist must distance themselves from time to time and take a different perspective. I just finished a book manuscript about a controversy in science, and one thing I point out is that it was not settled by calculation-based decisions.

When it came time to pick sides, the scientists had to make something like 25 different choices to figure out which side they were on, and the answers couldn't be calculated from pure data. Instead, it was a matter of choosing among philosophical options or traditions, or just the sociology of the thing.

MKB: This field you've chosen to study—gravitational wave physics—is particularly interesting because it contains a lot of legitimate debate internally, and was, especially 30 or 40 years ago, the subject of a lot of outside skepticism. The physicists have this thing that they are quite certain must exist—gravitational radiation emitted by exploding or colliding stars many light years away from us—but we've not yet been able to build detectors that are sensitive enough to find direct evidence of these gravitational waves.

As you've studied this field, what have you learned about the way a weird theory becomes an accepted reality? I'm particularly interested in this because a lot of laypeople have picked up the idea that the scientific establishment doesn't have room for truly paradigm-changing discoveries, and actively tries to suppress them. How does what you've seen with gravitational waves contradict or support that idea?

HC: I think I have more evidence about resistance to radical change from the way my own recent work has been received than from what I've seen happening among the physicists I study.

I've certainly seen strong bias against the new kind of work on expertise that I started on 10 years ago. There were four or five years where I couldn't get papers published because we took a different line to what had gone before, looking at the nature of expertise and who is an expert rather than just being critical of the way scientists make truth. There was an element within sociology that really wanted to democratize science—make everyone as good as everyone else when it came to making technological decisions—and they thought of what we did as elitist. Our program is hugely successful now, but it is very hard to get something going if it is radically new.

There are also structural reasons why it's difficult to do something truly new. There's tremendous demand on grants and on publication outlets. Demand really outstrips supply. Hugely. Much more so than when I first started my career. Today, if you want to get anywhere with a grant or getting a paper published you need three good referee reports. The trouble is that if you're a bit unusual, you'll always get one referee who says, "It's no good." And you only need one bad report to scuttle everything. There's a huge conservatism built into the process that wasn't there when I first started my career. That's the case with sociology, anyway.

I think it's much easier to get published in physics than in the social sciences and humanities. Rejection rates are much lower in the physical sciences. Physicists don't mind incorrect papers because they think that, over time, any incorrect results will be shown to be incorrect. Social scientists, in contrast, are a lot more political.

That is not to say that physical scientists are saints; I have seen cases of too fierce negative refereeing in physics too.

MKB: What is scientific consensus? In a lot of ways, that seems like the question you're really studying.

HC: The surprising thing for somebody who comes from the history book or schoolbook version of science is that scientific consensus turns out to be a lot like other kinds of consensus. Of course, you have theories and evidence. But at the heart of it there's usually a point where decisions are made in a much more commonsensical or philosophical way.

MKB: What about climate change? How does your idea of consensus-building play out here?

HC: I think a problem like climate change is where our kind of analysis of expertise plays its part. If there's a consensus among experts, and you think you can trust these people, and they're working with integrity and trying to argue that the opposition is wrong using the normal ways of arguing in science—rather than political suppression—then you should base policies on the consensus even if you can't be sure that consensus is the truth.

And no-one can be sure about the truth except in the very long term. Uniform consensus can take half a century to form in science, and policy needs to work faster than that. You have to make policy with something less than perfection. Sometimes experts will be wrong, but what else are you going to do? Will you just ask your mum, or flip a coin?

I have a name for this approach and it's "elective modernism"—choosing the methods of science. Really it's about making a choice about the kind of society you want to live in. Do we really want to make decisions on technological issues by popular vote? For instance, if a woman is pregnant and HIV positive, should she get access to anti-retroviral drugs even though there are blogs that tell us these drugs are dangerous? The South African case shows that you have to go with the scientific consensus even though there will often be small numbers of active and energetic critics of that consensus. You can make that decision based on emotion, or on current scientific consensus. Which would you rather live with?

MKB: You wrote a book called Gravity's Ghost: Scientific Discovery in the Twenty-first Century about a 2007 event in the gravitational physics field, where two separate gravitational wave detectors in different places turned up signals that could have been evidence of a gravitational wave. The signals turned out to be a test of the system, but I'm curious about what you, and the physicists, learned. Why is that event so important?

HC: It's important to me. It was probably much more important to me than to the scientists. In 2007, I was able to watch the scientists struggling over what to make of the first bit of data coming in on the new generation of gravitational wave detectors. From this process, they learned how to do data analysis better … and they learned how hard it's going to be to convince themselves that they've seen evidence of a gravitational wave when they've really seen it. But as soon as they learned that it was not a real signal, it lost most of its importance. To me, however, the lessons about how scientists argue and make knowledge are lasting, not ephemeral.

MKB: The researchers I spoke with at the University of Wisconsin-Milwaukee told me a little about a live debate you were present for, where physicists were arguing over the way they analyze data from the LIGO detectors. If I recall correctly, there had been a measurement that looked like it could be a gravitational wave. Researchers had figured out it was an airplane, instead, but they couldn't just say in the paper, "This was later found to be a passing airplane." And it turned into a really heated argument. Do you recall the incident I'm talking about? I'm curious about how a sociologist sees an argument like that.

HC: The physicists—they're very good, these guys, and they have a lot of integrity. They make very careful rules for themselves so they can't massage their statistics post hoc—so you can't dredge out the result you want from the data you have. They impose a very strict rule: You decide how you're going to do the analysis before you "open the box", by which they mean before you see the data. You make all your decisions before you open the box and then you can never change anything.

In the case of the "Airplane Event" they followed procedure, but afterwards they found out that it was an airplane. According to the rule, though, they had to leave the airplane in the data. Of course, according to common sense you'd just take it out. The official rules did not include rules for their own application.

The Airplane Event led to some really violent arguments, with people determined that one side or the other was right. When they decided to take the airplane data out, somebody actually resigned from the whole gravity wave business in protest. It's a wonderful illustration of how science is made by common sense, and not just by calculation. And it can't be made just by calculation. There are things you can't anticipate that will force you to break your own rules.

MKB: You write that your position is to remain neutral on the science. You're there to study the scientists. That makes sense. But how, after 40 years of following this one field, can you possibly not have picked sides on certain issues?

HC: You're quite right. It is more complicated. In the new book, I really did struggle to maintain my neutrality. I've gone a bit native. If you're a good sociologist, you're aware of that, though. And you've got to learn how to step back from it. It's a complete fallacy that you have to stay neutral all the time. You can have opinions, just so long as you retain the ability to step back from those opinions when the time comes to analyse rather than understand the science. If opinions were killers, then you could never do sociology of your own society. The trick is to know when to allow your opinion free rein and when to distance yourself from it.

Read More: Harry Collins has lots of great resources on his Cardiff University website. You can find out more about the nature of expertise, and about his specific studies within the field of gravitational wave physics.

Image: Some rights reserved by Orin Zebest