If you follow social-science news, maybe you saw the headlines recently: “Conservative political beliefs not linked to psychotic traits, as study claimed,” noted Retraction Watch, for example. As the site explained, four political science and psychology papers published since 2010 have now been corrected for wrongly implying a positive correlation between psychoticism (a trait that isn’t what it sounds like — we’ll get to that) and conservatism. What the researchers should have reported, it turned out, was an inverse correlation: The higher someone rates for psychoticism, the more likely they are to be liberal. There was an error in the way the researchers coded and interpreted their data.

Given the common shortcomings of media coverage of social science, and given that we’re turning a corner into peak presidential-election season, it’s no surprise that conservatives had a field day with this news, ignoring the fact that psychoticism, in this case, doesn’t mean psychotic in the everyday sense of the word. “Epic Correction of the Decade,” trumpeted Power Line, where Steven Hayward snidely suggested that “maybe the authors were hoping for a job with Dan Rather or Katie Couric if tenure didn’t come through?” “Science says liberals, not conservatives, are psychotic,” went a New York Post headline. “Well, pundits and Senator John McCain have called Trump supporters crazies,” said a Fox & Friends host, “but science says liberals, not conservatives, are psychotics.” On a live version of Slate’s Political Gabfest at the Aspen Ideas Festival, Mitch Daniels, the president of Purdue University and the former governor of Indiana, referenced the corrections as evidence that liberals are more authoritarian than conservatives.

At first glance, this seems like a fairly straightforward story: Researchers get something wrong, discover their error, and fix it. And the media, as is so frequently the case, respond with overblown, too-cute-by-half coverage.

If you dig deeper and talk to the people who were actually involved, though, what emerges is a frustrating, occasionally bizarre story of scientific dysfunction. And while this is by no means one of the biggest errors to have made its way into the social-science literature, it’s a useful cautionary tale about what happens when the norms social scientists are frantically trying to establish — replicability, openness, and data-sharing — are ignored.

The two common authors on the corrected papers are the political scientists Pete Hatemi of Pennsylvania State University and Brad Verhulst of Virginia Commonwealth University. In one of the four corrections they have published between the two of them (one of the corrected papers had Verhulst, but not Hatemi, as an author), they write that “The potential for an error in our article initially was pointed out by Steven G. Ludeke and Stig H. R. Rasmussen in their manuscript, ‘(Mis)understanding the relationship between personality and sociopolitical attitudes’” — a reference to a paper published recently in Personality and Individual Differences (PAID).

This is inaccurate. Rather, Hatemi, whose primary research interest is in how political ideology intersects with biology and genetics, was informed three years earlier, in July of 2012, of exactly what he and his colleagues had gotten wrong. That was when Colin DeYoung, a University of Minnesota personality psychologist, wrote him an email on behalf of DeYoung’s then-grad student, Ludeke, who had discovered the errors (Ludeke now works at the University of Southern Denmark). Emails provided to Science of Us, and conversations with all four researchers, show that in addition to suggesting to Hatemi and later Verhulst exactly what they had gotten wrong, DeYoung and Ludeke repeatedly asked for the raw data that would have allowed them to definitively demonstrate the error — and were repeatedly rebuffed in their attempts to obtain it (Hatemi and Verhulst tell a different version of the data-sharing side of the story, which we’ll get to). After being informed of their error, Verhulst and Hatemi proceeded to publish still another paper containing that error, in PLOS One in 2015 — despite the fact that Ludeke, in the course of anonymously reviewing the manuscript that would become that paper, pointed it out once again.

It was only in 2015, when Hatemi and Verhulst found out that Ludeke and Rasmussen were planning on publishing an article detailing the errors, that Hatemi and Verhulst moved to acknowledge those errors and publish corrections on their faulty papers — undercutting, intentionally or not, Ludeke and Rasmussen’s ability to publish their critiques in the academic press.

In interviews with Science of Us, meanwhile, Hatemi has sought to paint Ludeke and DeYoung as unhinged inquisitors unfairly attacking his research, claiming that both men were so rude to the people who owned one of the data sets in question that the owners refused to work with them, and that Ludeke aggressively confronted him at a conference, threatening to “get him” — claims Ludeke and DeYoung flatly deny, and which are contradicted, at least partially, by the email records they have provided (Hatemi has declined to provide any details corroborating either claim). Much of the drama centers on sniping between Hatemi, a big-name researcher with an aggressive, hard-charging reputation, and Ludeke, an early-career academic who says he normally has no qualms about challenging people about their research, but who now wishes he had never stumbled into this controversy at all.

The whole thing, in short, is a bit of a mess.

***

And it all started with a broken hyperlink.

It was July of 2012, and Ludeke, a Ph.D. student in psychology at the University of Minnesota, was working on his so-called “specials” — the research paper that would fulfill the final requirements for his degree, other than his dissertation.

Ludeke’s area of interest was the relationship between political attitudes and personality traits. To take a couple of simple examples, most psychologists and political scientists who study personality believe that, all else being equal, people who are more open to new and unusual experiences tend to be more liberal, and those who are more conscientious (that is, organized and duty-bound) are more conservative. Ludeke’s adviser, DeYoung, had cautioned him not to miss any of the papers that had been published in this area, and now he had come across two that baffled him.

One was published in 2010 in the journal Personality and Individual Differences by Brad Verhulst, Pete Hatemi, and Nicholas Martin, and it had to do with the question of how personality and political beliefs are connected. The authors of the PAID paper disagreed with the prevailing wisdom that personality traits like openness “cause” political beliefs. “[W]e present strong evidence that the assumed causal relationship between personality and left–right ideology is too simplistic,” they wrote.

In the course of laying out their argument, Verhulst, Hatemi, and Martin noted that their data showed a positive relationship between a trait called psychoticism and conservatism. Ludeke found that Verhulst and Hatemi, working with the behavioral geneticist Lindon J. Eaves, had made a similar claim in a paper they published in the American Journal of Political Science, one of the top journals in that field, in 2012, based on a different set of data.

If you just perked up in your seat a little — Conservatives are psychotic? — that’s part of the problem. “Psychoticism,” an idea introduced by the German-born British psychologist Hans Eysenck, sounds a lot more intense than it is. Basically, Eysenck had a model of personality which included three traits: extraversion, neuroticism, and psychoticism. Psychoticism, the only one relevant for this discussion, is a cluster of concepts related to people’s level of individuality and their penchant for falling in line — it’s measured using questions like “Do you prefer to go your own way rather than act by the rules?” Being high in psychoticism means you have less respect for rules and for order in general — it doesn’t mean you are psychotic or otherwise mentally ill. Researchers sometimes call psychoticism P, with italics, to prevent this misunderstanding.

Eysenck’s personality model is no longer in vogue — these days, the favored framework is the so-called Big Five, which in addition to conscientiousness and openness includes extraversion, neuroticism, and agreeableness. But since Verhulst and Hatemi had access to large survey data sets in which respondents answered Eysenckian questions, that was what they used.

Their claim that P was positively correlated with conservative beliefs leapt off the page at Ludeke. To fully understand why, it would be helpful to look at the exact P questions Verhulst, Hatemi, and their colleagues examined in their original two papers. Here they are — the ones in bold were used in both papers, while the ones in plain formatting were only used in the later AJPS paper:

Would you take drugs which may have strange or dangerous effects?

Do you prefer to go your own way rather than act by the rules?

Would you like other people to be afraid of you?

Do good manners and cleanliness matter much to you? *

Do you stop to think things over before doing anything? *

Is it better to follow society’s rules than go your own way? *

Do you enjoy co-operating with others? *

Do you try not to be rude to people? *

Do you think people spend too much time safeguarding their future with savings and insurances?

On items without stars, agreement with the item raises your score, while starred items are reverse-coded, meaning disagreement raises your score. The key thing to keep in mind is that higher P scores on individual items are always correlated with responses indicating less interest in cooperation, less concern with order, and more impulsivity.
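The sign flip at the heart of this story is easy to reproduce. Here is a minimal sketch — in Python, using made-up toy responses rather than any of the actual survey data — showing that reversing the coding direction of a yes/no item inverts the resulting correlation exactly, which is the kind of error the researchers made:

```python
# Toy illustration (invented data, not the real surveys) of how a
# coding-direction mistake flips a correlation's sign.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# "Do you prefer to go your own way rather than act by the rules?"
# Correct coding: yes -> 1 (higher P). Eight hypothetical respondents:
raw_answers = [1, 1, 0, 0, 1, 0, 1, 0]
# A made-up conservatism score for the same eight respondents:
conservatism = [0, 1, 0, 1, 0, 1, 0, 1]

p_correct = raw_answers                   # item coded in the right direction
p_flipped = [1 - a for a in raw_answers]  # item mis-coded: direction reversed

r_correct = pearson_r(p_correct, conservatism)
r_flipped = pearson_r(p_flipped, conservatism)

# Reversing the coding negates the correlation exactly: same magnitude,
# opposite sign — a negative P/conservatism link reports as positive.
assert abs(r_correct + r_flipped) < 1e-9
```

Because reverse-coding is just the affine transform x → 1 − x, the correlation's magnitude is untouched; only the sign changes — which is why a mis-specified scoring key can produce results that look strikingly strong while being exactly backward.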

Taking all this together, Verhulst, Hatemi, and their colleagues were saying that people who are more drug-friendly, care less about cleanliness, and prefer to go their own way rather than follow society’s strictures were more conservative. Not only that, but they found strikingly powerful correlations in some cases — in the PAID paper, for example, they found that the correlation between psychoticism and conservative religious attitudes in their sample was about .5, and between psychoticism and conservative sexual attitudes was about .6. These values are high, according to Marc J. Hetherington, a political scientist at Vanderbilt and the author of Authoritarianism and Polarization in American Politics. “A correlation of .5 would be roughly what you would get if you asked people their partisanship and their ideology and calculated the correlation between the two,” he wrote in an email.

In both papers, Verhulst, Hatemi, and their colleagues also found that high scores on Eysenck’s Lie scale, which “measures both the tendency to try to present oneself in a saint-like manner as well as the tendency to be a bit saint-like” with regard to behavior like lying, as Ludeke summed it up to me, were correlated with liberal beliefs. This, too, ran contrary to just about all the literature with which Ludeke was familiar — that question had been answered over and over and over in the past, and here a research team was claiming to have found exactly the opposite.

To Ludeke, the most likely explanation for what was going on was that the authors had erred, that they had simply inverted how the items in the surveys were scored so that the actual correlation they should have observed — people higher in P are more liberal — flipped itself into a very unlikely finding.

“To obtain a copy of the data for replication,” instructed the AJPS paper, “please go to http://polisci.la.psu.edu/facultybios/hatemi.html,” which suggested to Ludeke the data would be downloadable from Hatemi’s website and he’d easily be able to check his theory. But the link was broken, meaning Ludeke would have to reach out directly to the authors. That made things a bit fraught. If Ludeke was right and this was an error, it was a really silly one, and he was worried he’d be seen as impugning the methodological chops of the papers’ authors, several of whom were highly respected academics. “I actually got scared right away,” Ludeke told me. “Upon reading and understanding the error I was pretty uncomfortable, because the names involved were really large but the errors were a big deal. These were big correlations, these were highly cited papers. Frankly, I was certain they were errors immediately, so I was actually immediately pretty nervous and a bit sick to my stomach.”

Ludeke, who was born and raised in St. Paul, isn’t the type to get worried about telling someone they might be wrong. “I don’t show a large amount of humility,” he said early in our first interview. He is rarely shy about expressing his beliefs. But the situation with these papers was different. “Whatever that tendency is to voice my opinion, it felt differently when it was criticizing stuff that’s in print,” he said — especially for someone at the bottom of what he described as the research “totem pole,” which was how he saw himself at the time.

Ludeke sent an email to his adviser, DeYoung, with the subject line “career havoc?,” in which he explained what was going on. “[U]nless we’re grossly mistaken, they got confused about which end of their scale was liberal and which was conservative,” he said in the email, which he shared with me, in which he noted the papers’ “impressive co-authors.” Luckily, DeYoung had just met one of those co-authors, Hatemi, at a conference, and he offered to reach out on his advisee’s behalf. He emailed Hatemi on July 27, 2012, laying out to Hatemi what he and Ludeke thought the issue was:

It looks to us like there may be a mistake in your interpretation of your results, based on the coding direction of the attitude scales. … I’m familiar with the literature on personality and politics, and it suggests the opposite direction for all of your correlations (our own datasets bear this out too). Liberalism is typically positively related to Psychoticism and negatively related to the Lie scale, and sexual liberalism is positively related to Extraversion. We are wondering if it’s possible that your attitude scales are coded backward in both articles.

Ludeke was right: This is exactly what Hatemi and Verhulst got wrong — highlighted by DeYoung, writing on his grad student’s behalf, in his very first email to Hatemi.

The first of many emails, it would turn out. Hatemi responded in a friendly enough manner the following morning, but sounded surprised by what DeYoung and Ludeke were claiming. “[Y]ou have a [data] set where P tracks with being more liberal? Weird. The scale is pro authoritarian and militarism - that doesn’t make a lot of sense to me.” (This is a clear misreading of the P scale.) A few emails later, after Hatemi noted he was on vacation but assured DeYoung that “the directions of the relationships… [were] right” when he looked at the raw data, DeYoung responded, “Thanks Pete. Didn’t mean to bug you on your vacation. Maybe we can talk about this further when you’re back at work. We’d love to take a look at your data to see if we can understand why your results are opposite to ours.”

That marked the first time, according to the correspondence DeYoung and Ludeke shared with me, that they asked Hatemi for his team’s data. And it kicked off what they described as a frustrating, months-long exchange in which they were never able to get their hands on any of it. In the emails, DeYoung repeatedly checked in with Hatemi after not hearing back over long spans. “It’s not going to be a rush,” said Hatemi in one response from October 29, more than three months after the initial request, “so if you have a short timeline on this, unfortunately you’ll have to be adjust and be patient.”

Other than a final attempt DeYoung made to get a slice of the data in April 2013, the last email in the chain, according to DeYoung and Ludeke, took place on December 16, five months after they had originally reached out, with the two sides in negotiations about the process of obtaining the data for one of the papers, which was housed at VCU’s Virginia Institute for Psychiatric and Behavioral Genetics. DeYoung and Ludeke never heard back after that. (You can read what Ludeke and DeYoung say are all the emails, with the exception of DeYoung’s final two followups that he said he got no response to, in PDF form here, here, and here — they’re uploaded exactly as the researchers forwarded them to Science of Us.)

Hatemi and Verhulst tell a wildly different story about what happened.

***

Hatemi is convinced that Ludeke is out to get him. In our phone conversation, he repeatedly impressed on me just how minor the error is, how few times the papers in question had been cited, and how much of an overreaction it was for anyone to care all that much. “This error is freaking tangential and minor and there’s nothing novel in the error, whether [the sign on the correlation] was plus or minus,” he told me. “There’s no story. And I wish there was — if there’s any story, it’s, Should people be allowed to honestly correct their errors, or should you lampoon them and badmouth them for everything they didn’t do because they had a real error they admit to?”

He and Verhulst argue that since the point of their papers was to argue that the correlation between personality and ideology is not a matter of the former causing the latter, whether the correlations are positive or negative doesn’t matter. The problem with this is that if you publish original, eye-popping correlations that run contrary to a lot of prior research in a given subfield, obviously researchers in that subfield are going to take notice of it — whether or not those correlations matter to your particular argument. “Published findings are important to people,” said Hetherington. “Moreover the reason the correlation doesn’t matter to Pete is that he doesn’t believe that personality causes policy preferences, that it is all about genetics. That means to him the size of the correlations between policy [preferences] and personality don’t matter. Any relationship is spurious. But to believe that story requires a lot of faith in their statistical method that purportedly establishes causation. I suspect that most readers will either be unfamiliar with the method or skeptical of it. For those people, which is most people, the correlation itself matters.” In other words, you can’t say the surprising correlation doesn’t matter simply because according to your statistical method, which not everyone agrees with, it doesn’t matter. Causality questions aside, you’re still arguing that according to your data impulsive people who dislike rules are more conservative.

Hatemi and Verhulst also claim that determining the source of the error was so difficult — involving pulling the original, paper surveys and poring over them and the scoring keys — that even if they had handed over the data, DeYoung wouldn’t have been able to figure out what was going on. Ludeke said he disagrees. “The error is detectable in the data without any codebook,” he told me. Which is hard to argue with because, well, Ludeke detected the error in the data without any codebook, simply by looking at the 2010 and 2012 papers.

Hatemi told me that he had set the wheels in motion to get Ludeke and DeYoung access anyway, but that Ludeke and DeYoung had been so overly aggressive they had blown their opportunity. “We actually tried to help them get access, and they were so offensive to people that the people who controlled the data were like, No, we’re not going to engage with them,” he said. DeYoung flatly denied this in an email. “No, this never happened,” he wrote. “I never corresponded with anyone other than him and Verhulst about the data, and I sent [Science of Us] all of the emails between us.” (I emailed the researchers who, according to Ludeke, controlled the data, asking them if they remembered a time from 2012 when someone was so rude to them they refused to share data, but none of them responded.)

Verhulst, who like Ludeke is on the earlier side of his career, told a different story about Ludeke and DeYoung’s incivility. In his version, around the time Hatemi allegedly handed them over to the data gatekeepers, Hatemi and Verhulst started hearing rumors that Ludeke and DeYoung were saying mean things about them to other academics — that the errors may have been intentional or otherwise malicious — and this sapped their desire to help Ludeke and DeYoung obtain the data. Hatemi and Verhulst declined to send me any evidence of incivility on the part of DeYoung and Ludeke — there isn’t really any in the emails DeYoung and Ludeke shared.

According to Hatemi, Ludeke was aggressive in person as well. He said that Ludeke “threatened” him and “came up to us at a conference and accused us of destroying his dissertation and [being responsible for him] not getting a job.” “I understand it sounds kind of outlandish,” Hatemi said, “but then again being threatened in public was a pretty weird thing too… We were at this conference, and this guy comes up — and I didn’t even recognize him at the time — and says, You tried to destroy my dissertation and I’m going to get you.”

Just as DeYoung denied having ever corresponded with the VCU researchers who controlled the data, Ludeke denied ever having threatened Hatemi. “Never interacted with Hatemi at a conference,” he said in an email. “[…]Met him in person only the once, in August or September of 2014, at [Southern Denmark University, where the two overlapped for a bit], for perhaps 10 minutes, with him talking for nearly all of it about his promotion to full prof. We didn’t discuss the papers.” Ludeke also forwarded me some correspondence from last year in which it appears that, to that point, at least, Ludeke hadn’t made much of an impression on Hatemi, and it’s hard to square their exchange with the idea they’d had a heated confrontation. In it, Ludeke asks Hatemi for the code he used to analyze his data for the PLOS One paper.

Hatemi’s response, dated March 30, 2015:

If I understand your email you are asking for some help. Based on your email, I’m assuming you are undergraduate or early career graduate student. My advice is to run these types emails by your advisor or instructor first, so they can help guide you in how we communicate in the academe and help you get what you’re looking for. It is quite common for scholars to ask for help from one another, but usually we start by introducing ourselves, and the project we are working on, what we are trying accomplish. If we are running analyses, and asking for help on those, then we usually provide the script we are using, the data, and our results, and then identify where we are stuck. From your email, I have no idea who you are, what paper you are referring to, what script you are referring to, etc.

I asked Hatemi to elaborate on his claim that Ludeke had confronted him — where did it take place? In his response, Hatemi said that he actually didn’t want to make this personal. “[T]he point I was making to you was not so you would write anything negative about anyone, or verify crazy behavior, but more as backstory so you see how absurd this situation seems to me,” he wrote. “I prefer to focus on the science and hope you do too. Typically injecting the personal into science is not a good thing. Yes, there was/is a bunch of juvenile behavior surrounding a rather small issue in science, but the real story is being lost.”

***

Throughout the process of trying to obtain the data, Ludeke explained, he was hoping to simply provide Hatemi and Verhulst with the means to correct their error — both for political reasons (the perils of a junior researcher criticizing senior ones) and because of the resources it would entail, he didn’t really want to investigate the issue further on his own. Plus, he’d made his points multiple times: In 2013, he was asked to be an anonymous reviewer on a prospective paper for the AJPS that he immediately realized came from Hatemi and Verhulst (reviewers are blinded to authors, and vice versa, but it’s oftentimes not difficult, especially in small academic communities, to figure out who’s who), and in that review, he once again raised red flags: “I find their results showing conservatives as disinhibited and liberals as high in moralistic bias … to be so surprising as to be unbelievable,” he wrote, “and this pattern of results appears to be more or less unique to these samples.” Again Verhulst and Hatemi ignored the warning: their paper was rejected from AJPS, but published in PLOS One in 2015, repeating the same errors.

By 2014, thanks to a one-year visiting professorship at Colgate, Ludeke actually had some money to play with, so in June of that year he collected some Amazon Mechanical Turk data which included, among other things, respondents’ levels of psychoticism and social desirability and their sociopolitical beliefs. Collaborating with Stig Hebbelstrup Rye Rasmussen of the University of Southern Denmark, where Ludeke worked after his stint at Colgate, Ludeke found solidly negative correlations between Eysenck’s psychoticism and conservative political beliefs, and between psychoticism and authoritarianism — exactly what the theory and literature predicted. “I knew what would pop out ahead of time,” he said. “As I say, this is settled science.”

Then, on March 4, 2015, he finally got access to some of Hatemi and Verhulst’s data, after the PLOS One paper was published, since that journal has an open-data policy. While the authors didn’t include information about how the variables should be coded, Ludeke simply took individual items on the psychoticism and other scales, determined intuitively how they should be coded (that is, setting “yes” answers to 1 and “no” answers to 0, as appropriate), and started crunching the numbers based on his interpretation. Voilà: His results were exactly opposite what Hatemi and Verhulst reported — meaning they were in line with the literature.

Ludeke and Rasmussen initially planned to write up a two-study paper responding to the Verhulst and Hatemi errors. The first study would run down the data analysis they had done on the Mechanical Turk surveys, laying out their results and how they fit into the established theory. The second would explain their re-analysis of the PLOS One data, show how they fixed the error, and offer something of a postmortem in an attempt to explain why the four faulty results had survived peer review.

Ludeke said that when he and Rasmussen submitted this paper to the Journal of Personality and Social Psychology for publication, they asked for Hatemi and Verhulst to not be chosen as reviewers since the paper criticized their work. This is a fairly standard sort of request, Ludeke said, but he added that editors do have the right to ignore it (Hetherington confirmed this, on both counts). And an editor at JPSP did just that, sending the paper to Hatemi as a reviewer. Suddenly, in Ludeke and DeYoung’s telling, Verhulst and Hatemi raced to get corrections to the four journals where they had published work mentioning a positive correlation between psychoticism and conservatism. To Ludeke and Rasmussen, this had the effect of taking the wind out of their sails, somewhat obviating their critique. Sure enough, some of the reviewers who rejected the paper at JPSP and other publications noted that the corrections had already been issued.

While Ludeke is clear that he doesn’t think this is the reason the paper was initially rejected, he does think the quick corrections had an effect. DeYoung agrees. “All I can say is that it may have taken longer for Steven to get his paper published,” he said, “and in the end I think Steven’s paper focuses less on the demonstration of the errors and the proving the errors in Hatemi’s papers than it would have, precisely because there were already these errata published.” He added that the paper “might have been published in a higher-impact journal had it not happened the way that it did.”

Eventually, Ludeke and Rasmussen got a “substantially revised and less critical” version of the paper, as Ludeke put it, published in the August 2016 issue of Personality and Individual Differences. Ludeke sent Science of Us the original manuscript he submitted to JPSP, and it is indeed a much more thorough accounting of what happened — not just the error itself, but what Ludeke and Rasmussen see as a very sloppy, incomplete recounting of past research on the part of Hatemi and Verhulst that may explain why they didn’t notice the strangeness of the results they were reporting.

The rejected paper is a rich buffet for methodological nerds, but to take two quick, telling examples: Hatemi and Verhulst described their 2012 AJPS paper as “only the second study we are aware of to explore the relationship between any ideological dimension and social desirability,” referencing their own 2010 study as the other. Ludeke and Rasmussen were able to find not one, not two, but 18 published papers which studied exactly that subject, “each of [which] presented results precisely opposite those presented by Verhulst and Hatemi.” Verhulst and Hatemi similarly disregarded a solid body of past work on the Lie scale, according to Ludeke and Rasmussen. In short, Ludeke and Rasmussen argue – rather convincingly – that a competent reading of the literature would have immediately suggested to Hatemi, Verhulst, and their co-authors that something was seriously wrong with their results.

While the edges of the published Ludeke and Rasmussen paper are sanded down significantly as compared to the rejected JPSP manuscript from which it evolved, Hatemi and Verhulst are still furious about it, if their response paper, which is in press at PAID, is any indication. (When Hatemi sent me a draft of it, he mentioned that it had been “accepted by the same journal that slandered us.”)

Their paper, entitled “Correcting Honest Errors Versus Incorrectly Portraying Them: Responding to Ludeke and Rasmussen,” argues that Ludeke and Rasmussen unfairly misrepresented the errors in question, but it contains some important errors of its own. It’s misleading about the timeline, for one thing: “Briefly, the potential for error became known through a blind-review request in June of 2015,” write the authors in a footnote. “This manuscript provided some hint that a coding error might be present in our work, and based upon this we immediately re-examined the issue.” This is an odd way of recounting a story that begins in July 2012 with a fellow researcher alerting them to exactly what they had done wrong — whatever else is true, they certainly had “some hint” of a coding error as soon as they saw DeYoung’s email — and it constricts the timeline in a manner that portrays Hatemi and Verhulst as positively peppy in their response to being notified there might be an error, when in fact nothing happened for three years.

But the strangest, sloppiest part of the new paper is a sentence in which Verhulst and Hatemi argue that a “longstanding published literature … already identified a positive correlation between conservatism and Psychoticism (Eysenck & Wilson, 1978; Francis, 1992; Nias, 1973; Pearson & Greatorex, 1981; Powell & Stewart, 1978; Wilson & Brazendale, 1973). [emphasis theirs]” Four of these citations are easy to check, with the other two being books. And three of those four, the papers by Pearson & Greatorex, Francis, and Powell & Stewart, note a negative relationship between psychoticism and conservatism (or religiosity, in the case of the Francis paper — but religiosity is itself correlated with conservatism), directly contradicting how they are portrayed in this sentence.

Even here, in a paper ostensibly fighting back against the claim that they didn’t accurately or comprehensively evaluate past literature on conservatism, authoritarianism, and psychoticism, Hatemi and Verhulst inaccurately describe what that research says.

***

However you evaluate the sniping between these two research groups, some things about this episode are pretty clear: It’s clear that on four different occasions, including once in a top journal, research findings were published that don’t make sense in light of decades of research about personality and political preferences, and that these errors got past many reviewers. It’s also clear that when the authors of these papers were informed about what they had likely done wrong, they either ignored the warnings or lacked the expertise to understand them, and that their data weren’t shared.

What should we make of all of this? Partly, of course, this is a story of conflicting personalities, of competitiveness between researchers, of academics acting — let’s be frank — like dicks.

One obvious question, then, is how institutional structures and incentives can be tweaked to account for the universal human potential for dickishness. The simple answer is: openness and transparency. “You should get a gold star when you make your data and the code to analyze that data available,” said Ludeke. That isn’t always possible, he allowed, but “if you do everything right, you should get into a better journal than if you’re not willing to [share] data.”

There’s progress on this front: Today, for example, the American Journal of Political Science has beefed-up transparency guidelines requiring its authors to provide ready access to replication data. That will make it easier for the anonymous grad students of the future to discover potential problems — and harder for authors to ignore those problems. “At the start of this incident, it begins with me having somebody else write for the data, because I didn’t even want to pick that fight,” said Ludeke. “So if you have all the stuff easily available, even peons like I was at the time can invest the time in trying to catch those mistakes and won’t shy away from it, and that will be good for science.” These days, at least when it comes to AJPS and the increasing percentage of journals with open-data policies, the peons have more of a chance.

So chalk that up on the encouraging side of the ledger. But viewed more broadly, this story still suggests the social-science landscape isn’t yet as embracing as it could be — and should be — of the replicators, challengers, and other would-be nudges like Ludeke who tend to make science better and more rigorous, who make it harder for people to coast by on big names and sloppy research. “Frankly, my advice to all the other fellow peons out there is until that happens [and data openness is more common], my experience is not supportive of engaging in this,” said Ludeke. “The amount of time that this project took for me, the amount of time and the amount of stress, makes this clearly a bad choice to have pursued, unequivocally. If you have a chance to say something to that effect in the article — ‘I do not find this to be a recommendable experience’ — I would like that.”

(Update: After this article was published, Ludeke emailed me to say that the version of his anonymous American Journal of Political Science review that he shared with me included some extra notes at the bottom of the document that he did not submit with the review, and that Verhulst and Hatemi therefore wouldn’t have seen. The link has been updated, but the original link still lives here.)