Every couple of years, a journalist or a grumpy political scientist writes a piece bemoaning how irrelevant, technical, and specialized the field of Political Science has become, and pining for the glory days when political scientists (supposedly) ruled. Nicholas Kristof’s recent op-ed is the latest high-profile version. It doesn’t bring much new to the table for this genre, apart from being unusually clear about what these folks expect of political scientists versus economists. Kristof lauds the economists for their “empiricism and rigor,” which he must know is associated with a much higher degree of technical and specialized work than you’ll find in PoliSci. (Try reading a typical American Economic Review piece and tell me that it is easier to get through than a typical American Political Science Review paper.) Kristof and company clearly think that this kind of empirical and theoretical rigor is fine for economics but not for political science.

Kristof has gotten a lot of pushback, including from Erik Voeten here at the Monkey Cage. One of the main reactions has been that he has it exactly backwards — that in fact political scientists are more engaged and more relevant than in the past, and that this is in some part because they are doing more empirically rigorous research than they used to, on average. This is definitely my own impression based on about 25 years in the field. But I’ll admit I am probably biased and could be completely wrong here. Maybe because of my advancing age I know of many more cases of political scientists and their research informing foreign policy discussions in government, whereas I didn’t know about such cases when I was in my 20s and new to the field. Maybe I’m too taken by the growth of a political science blogosphere and the way it has (among other things) conveyed research results that have led at least some journalists to rethink the way they write about U.S. elections. Maybe there really were glory days that Kristof knows about because he was a working reporter way back when, and he found political scientists and their research much more useful than he does today.

So I got to wondering if there was some actual data we could look at, some measure of the “relevance” of political scientists over time. The measure shown above — total annual mentions of “professor of political science” or “political scientist” in The New York Times from 1980 through 2013 — is definitely far from perfect, but it’s a start and it’s kind of interesting. The black line, which shows the data for just “professor of political science,” trends slightly upward over the whole period. (I’m starting in 1980 because this is when Lexis-Nexis indexing seems to start being comprehensive for the NYT.) If you search on “professor of political science” or “political scientist,” however, you get a really strong upward trend over the last 30-plus years — more than a doubling of annual mentions over the whole period.
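Purely for illustration (the post doesn’t share its code or the raw counts), the basic trend check described here can be sketched in a few lines of Python. The counts below are made-up numbers, not the actual NYT data:

```python
# Sketch: fit a least-squares linear trend to annual mention counts to see
# whether references are rising over time. Counts are hypothetical.

def trend_slope(years, counts):
    """Least-squares slope of counts on years (change in mentions per year)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_c = sum(counts) / n
    num = sum((y - mean_y) * (c - mean_c) for y, c in zip(years, counts))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

years = list(range(1980, 1986))
counts = [150, 160, 155, 170, 180, 190]  # hypothetical annual mentions
slope = trend_slope(years, counts)
print(round(slope, 2))  # prints 7.86 -> a positive slope, i.e. an upward trend
```

A positive slope here just summarizes the same thing the eyeball test on the graph does; it is not a substitute for the more careful checks discussed below.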

So New York Times reporters and editors, at any rate, appear to be finding political scientists more relevant than they used to, at least in terms of quoting their remarks, publishing their opinion pieces and letters, reviewing their books, and referring to their research findings.

Possible problems with this measure: First, it picks up stuff where the “political science” part is largely incidental, like the obituaries of political science professors. But I doubt this would affect the overall trends.

Second, it might be more interesting and informative to code whether the articles mentioning political scientists are referring to research findings, or are just using political scientists for comments on specific election races (I’m guessing that is the modal reference). That would probably be better, but I’m not going to do it because I have too much overly specialized and irrelevant research to get back to.

Third, what if the NYT is just publishing more pages over time, or what if there is some stylistic trend among reporters to rely more on alleged experts of all types in their reporting? So I also collected the same data for “professor of economics” and “professor of economics” or “economist.” These are shown below, using the same scale as the PoliSci graph for comparability. Now things are getting interesting. The Times refers to “professor of economics” about as often as “professor of political science,” and the overall trend is almost identical (slightly upward). However, if you include “economist” as a search term as well, you find that the Times refers to them a lot. The line on the graph is the number of references divided by 10 to get it on the same scale, so references to “economist”/“prof of economics” are routinely around 2,000 per year, versus around 300 on average for “political scientist”/“prof of political science.”



Why is that? My impression is that this is because the business pages very frequently quote economists who are employed as analysts and forecasters by banks, financial companies, and in specific industries. They are constantly predicting what’s going to happen to growth etc. in this or that sector or the economy as a whole. So that’s certainly a dramatic example of the greater relevance of economists to something many newspaper readers care a lot about.

But, interestingly, there isn’t a pronounced long-run upward trend for references to {“professor of economics” or “economist”}, while there is for {“professor of political science” or “political scientist”}. There is a big increase in the last couple of years (2012 and 2013 are missing on the graph because Lexis-Nexis apparently only gives you the first 3,000 references), but that’s true for the political science series as well.

I also checked two “control words” to see if there was evidence of some secular upward trend that might be due to more pages overall. I looked at the annual average for “legislature” and “diplomat” for 1980-1982 versus 2011-2013. These actually saw declines of 20 percent and 28 percent respectively. Though not conclusive, this argues against the possibility that the strong upward trend seen in the first graph above is some purely mechanical effect.
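The control-word comparison is just a percent change between two three-year window averages. A minimal sketch, again with invented counts (the real figures aren’t reported in the post):

```python
# Sketch of the control-word check: average annual mentions in an early
# window (e.g. 1980-1982) vs. a late window (e.g. 2011-2013), reported as a
# percent change. Counts are hypothetical, chosen to show a ~20% decline.

def pct_change(early_counts, late_counts):
    early = sum(early_counts) / len(early_counts)
    late = sum(late_counts) / len(late_counts)
    return 100.0 * (late - early) / early

legislature_early = [500, 510, 490]  # hypothetical 1980-1982 counts
legislature_late = [400, 405, 395]   # hypothetical 2011-2013 counts
print(round(pct_change(legislature_early, legislature_late), 1))  # prints -20.0
```

A decline for generic political words like “legislature” while “political scientist” rises is what makes the mechanical-growth explanation less plausible.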

Again, I’m not saying that this is a perfect measure by a long shot, and I’m also not trying to say that political scientists shouldn’t try harder to work on important public policy questions and to write more for broader audiences. Certainly we should. But for what it’s worth, these data suggest that over the last 30 years, Kristof’s NYT colleagues have found political scientists increasingly useful to go to for comments and insights, and that what is interpreted as overly specialized and technical research in the field has so far not led to disciplinary “suicide,” as Kristof fears.

Some technical notes: (1) I started out trying to use the NYT’s own search function on its archive. This seems to produce basically similar patterns, except for the years 2004-2008, during which time references to political scientists explode by factors of 6 to 10. You see this for the other terms as well, to lesser degrees. There is something strange going on with the search matching, possibly having to do with the indexing of multiple versions of the same article? I don’t know. Anyway, this is why I went with Lexis-Nexis, which seems to be more consistent in how it indexes over time. (2) After accounting for trend, political scientist references are about 24 percent greater on average during presidential election years, and about 6 percent greater during midterm election years. The former is very strongly statistically significant, the latter not at all. “Economist” references are not systematically different in election years.
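The election-year check in note (2) amounts to regressing (log) annual mentions on a time trend plus presidential- and midterm-year dummies. This is my assumed reading of the method, not the author’s actual code, and the series below is simulated to mimic the pattern described:

```python
# Sketch: OLS of log annual mentions on a linear trend plus election-year
# dummies. The data are simulated (an assumption), built to have roughly a
# 24% presidential-year boost and a 6% midterm boost, as in the post.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2014)
pres = (years % 4 == 0).astype(float)  # presidential election years
mid = (years % 4 == 2).astype(float)   # midterm election years
t = (years - years[0]).astype(float)

# simulated log counts: upward trend + election-year effects + noise
log_counts = 5.0 + 0.02 * t + 0.24 * pres + 0.06 * mid \
    + rng.normal(0.0, 0.03, len(years))

# design matrix: intercept, trend, presidential dummy, midterm dummy
X = np.column_stack([np.ones_like(t), t, pres, mid])
beta, *_ = np.linalg.lstsq(X, log_counts, rcond=None)
print([round(b, 3) for b in beta])  # intercept, trend, pres, midterm estimates
```

With log counts, a dummy coefficient of about 0.24 corresponds to roughly a 24 percent bump; whether the smaller midterm coefficient is distinguishable from zero would come down to its standard error, which is presumably why it is insignificant in the author’s results.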