Last year was, in many respects, a bad year for academic psychology. A replication effort of 100 published studies in psychology found that the majority were hard to replicate (Open Science Collaboration, 2015). The American Psychological Association (APA) became embroiled in scandal, with a report it commissioned concluding that APA staffers and some eminent psychologists had colluded with the US government to change the APA ethics code to allow psychologists’ participation in the torture of detainees (see Grohol, 2015). The APA further tarnished its image as a scientific organisation by releasing a policy statement on video games that was soon criticised for stacking the task force with members who had conflicts of interest, for a lack of transparency, and for excluding inconvenient data from its review (see Wofford, 2015). The British Psychological Society (BPS) was not immune from controversy, with the election of Peter Kinderman as President eliciting some critique of his public comments on mental illness (see Coyne, 2015a). And the refusal by scholars in the PACE trial of chronic fatigue treatment to release their data revealed continued problems with transparency in published science (see Coyne, 2015b).

These are, in fact, only some of the most dramatic pieces of ‘bad news’ for academic psychology during 2015. It is worth all of us taking a step back and asking what has gotten our field into the credibility hole in which it currently finds itself.

Last year, I wrote an essay for American Psychologist (Ferguson, 2015) detailing several areas of dysfunction within academic psychology that are harming our reputation among the general public, policy makers and scholars in other fields. These included the replication crisis, as well as the questionable research practices that contribute to this unreliability and the hostile response by some scholars toward efforts to shore up methods and improve transparency. I also identified psychology’s tendency to grab for newspaper headlines with catchy counterintuitive findings (many of which prove unreliable) with one hand, and to wag a moralising finger over issues ranging from parenting practices to media consumption with the other.

Thus, at the moment, academic psychology appears to me to be poised at a crossroads between an actual science and something closer to pseudoscience. The online Merriam-Webster dictionary defines pseudoscience as ‘a system of theories, assumptions, and methods erroneously regarded as scientific’. Do psychology’s scandals of 2015 suggest this is the path we are heading down? And, if so, what can we do to change course?

Science as a flawed endeavour

Perhaps the best defence we can raise for academic psychology is that the problems we’re seeing for psychological science are not unique. Medical science, for example, is known for its problems with publication bias, conflict-of-interest funding, and poor replicability. Some reproducibility efforts have suggested, for instance, that a majority of pre-clinical cancer research is difficult to replicate (Begley & Ellis, 2012). But pointing to problems with other fields hardly absolves our own. We need to look at our own science and investigate how we can improve what we are doing.

In part, our problems may stem from the folk tale we often tell ourselves that academic psychology is a ‘real science’, and that the ‘facts’ handed down through published studies into textbooks are ‘objective’. We think of ourselves as disinterested, our findings immutable because they have passed peer review, and our fields of research open to correction, even as we personally resist any correction to our own published research. I don’t mean to suggest a post-modern alternative in which all ways of knowing are equal and the world is devoid of facts. But I do suggest that, too often, academic psychology has created a veneer of science rather than the reality. Sometimes this is due to defensiveness, egos, politics or outright fraud. But I maintain that the majority of issues stem from good-faith efforts: individuals who value science but are all too human, and who apply scientific values unreliably. I think this is a fault all scientists share, and I do not exclude myself. But, by failing to acknowledge the human limitations of science, we fail to remain appropriately humble about our endeavours.

Psychology’s mythmaking

Given psychology’s problem with replication, and its sometimes stubborn resistance to correction, it has become apparent that we are increasingly as responsible for creating myths about the way humans work as we are for correcting them. At times it seems academic psychology is so fascinated with ‘myth-busting’ that it makes statements of absolute certitude based on flawed or limited science, simply for the satisfaction of being able to say to the public, ‘Hah, you thought people worked this way, but see… they don’t. We know better!’ I understand how this must be satisfying to academics who have invested their lives in studying human behaviour. And of course it can earn newspaper headlines, not to mention the potential for future grant funding. But it is an inherently dangerous strategy for a field that will look twice as bad for having stuck its neck so far out, should those counterintuitive findings prove erroneous.

Consider the common belief that venting anger is ‘cathartic’. Granted, the relationship between anger and catharsis is likely a complex one, as most human behaviour is. The ‘folk wisdom’ that punching a pillow is good for you is undoubtedly too simplistic. But so, too, has been the response of academic psychology, which presents the idea as a ‘myth’ (see Grant, 2015). When one publicly declares the phenomenology experienced by so much of the general public to be a ‘myth’, the data had better be solid. Unfortunately, that is where academic psychology often has the greatest problem. The myth/countermyth of catharsis is illustrative. Rather than counter a simplistic popular view with an informed and nuanced discussion, academic psychology has countered with an opposing view, equally extreme, simplistic and ideologically rigid, that venting always has the inverse effect: a position that ignores contrary evidence and relies on weak data. The problem is that studies of catharsis (e.g. Bushman, 2002) typically randomise people to specific tasks, like punching a bag, that they would likely never choose in real life when angry. Using catharsis to reduce anger is very much an individual choice among numerous behavioural options (whether it works or not). Few people punch a bag or pillow. By focusing on this, psychologists are studying a cliché, not real life.

With a little thought, the flaws in taking such a simplistic, extreme view are obvious. It should be no surprise that giving people a specific task to perform, under contrived circumstances, that they may feel is ridiculous might increase rather than decrease frustration (and this ignores the potential for demand characteristics). We also forget that other studies suggest catharsis may work under varying circumstances for different individuals, having both benefits and pitfalls (Bresin & Gordon, 2013). We forget, too, that psychological findings very often report what the scientist wants to see, and that most recent studies of catharsis have been by scholars advancing social cognitive theory, in many ways catharsis theory’s competitor. Just because Ford says Ford cars work better than Peugeots doesn’t mean we should stop thinking critically about the issue and challenging assumptions. But that is exactly what academic psychology has done with the catharsis myth/countermyth and so many other issues.

My point is not that catharsis works or does not work. I merely highlight an example where academic psychology has labelled a popular perception a ‘myth’, but has done so using shaky studies, ignoring the slanted ideologies of the field itself and the data suggesting the ‘truth’ is nuanced. If academic psychology had its act together in terms of reliability, transparency, ideological and moral neutrality, absence of politics, methodological rigour, absence of publication bias and so on, such behaviour might be defensible. But, in light of the growing problems facing academic psychology, I argue that humility, nuance, qualifications and asterisks are our safer path.

To me, it seems too often that psychological science is eager to rub in the faces of the general public that their thoughts about how humans work are wrong and that psychologists know better. Whether on the perception of free will or the phenomenology of the g-spot, psychologists tread a loose board when telling people their personal experiences aren’t real. Again, if the science is solid, this may be worth doing. I am not advocating that we simply allow people their myths. But we do need to be careful not to create our own myths in our zeal to show the public how smart we are. This issue is fundamentally a cultural one. That is to say, we need to look for ways to change our academic culture, to focus it on the long slog of objective fact rather than the short fix of newspaper headlines, politically right thinking, moral superiority and grant grabbing.

A recent entry on the British Psychological Society’s own Research Digest blog highlights this issue, with a list of the supposedly most counter-intuitive psychology findings ever published (Jarrett, 2015). The post repeats the controversial venting-anger claim, and also draws conclusions that could clearly lead to problems should they prove wrong: for example, that teaching to learning styles may be without value, that depression in pregnant mothers can be good for their infants, or that including narcissists in teams can be good for creative productivity. Given the remarkable replication failure rate for psychological studies, and the particular susceptibility of ‘counter-intuitive’ findings to exaggeration, the risk of highlighting them in ways that may change behaviour or policy is not trivial.

Society’s nervous nanny?

Most of us can remember people in our lives, perhaps our childhoods, who mainly seemed to function to tell us everything we were doing was wrong. Keep scrunching up your face like that, it’ll freeze that way. Crack your knuckles, you’ll get arthritis. Stop touching that thing, it will fall off. My observation is that psychological science spends too much time being this nervous nanny for society, dispensing moralising yet dubious bits of folk wisdom about why whatever the public is doing is wrong. Eventually, most of us learned to tune out these people in our lives. Psychological science risks the same.

‘Stop what you’ve been doing for years!’ statements from academic psychology are typically phrased as absolutes with clear moral consequences. You are bad people if you do not follow our advice. The APA’s flawed 2015 statement on video games and aggression provides a template for how this happens. First, it must be observed that psychologists, as a group, are a self-selecting sample, typically liberally biased (Redding, 2001). But when constructing policy statements, academic guilds like the APA routinely construct task forces from among scholars with clear a priori hardline positions on an issue who can be counted on to render a set conclusion.

This was the problem with the video game statement. When the task force was announced, it worried enough scholars that 238 of them wrote to the APA asking it simply to retire all of its policy statements on video games (Consortium of Scholars, 2013). Nonetheless, the APA allowed the task force to continue to its strange conclusion: a meta-analysis that included only 18 studies (at least one with no relevant contrast) out of a field of over 100, with the task force particularly neglecting available null studies (in its report, the task force at one point acknowledges voting on which studies to include or exclude). None of the task force’s data or notes on study inclusion/exclusion has been released publicly.

Another illustration of academic psychology’s tendency to finger-wag, often on the basis of biased and limited data, is the debate on spanking. Spanking (open-handed, non-injurious swats to the behind as punishment) is largely unpopular with liberally minded psychologists (myself included). But public moral pronouncements delivered with a veneer of ‘science’ need to be careful. In typical form, however, task forces on the issue (such as the recent interdivisional APA Division 7 and 37 task force: Task Force on Physical Punishment of Children, 2015) often include only scholars on one side of the debate, excluding sceptics (e.g. Larzelere & Cox, 2013).

Spanking research is also a good example of what I sometimes refer to as ‘the scientific pile-on effect’. Once something is identified as ‘naughty’ (video games, spanking, soda, etc.), a crescendo of studies predictably follows, linking the naughty thing to everything bad imaginable… bad behaviour, low intelligence, adult health problems, cancer, global warming… This is really the inverse of snake-oil salesmanship. Just as hucksters sold junk medicines with cure-all promises, academic psychology spends too much time selling moral agendas with claims that the naughty thing, whatever it is, causes all problems, just as snake oils cured all ills. This pile-on effect should be a warning that something has gone amiss in the scientific process.

With spanking, my observation, once again, is of a kind of dishonesty in representing weak and inconsistent results as more conclusive than they actually are. One example is a study that received wide press attention for claiming to link spanking to adult health problems (Afifi et al., 2013; in the authors’ defence, they cannot control press coverage). The study did not, in fact, isolate spanking from potentially abusive forms of physical punishment. Moreover, an examination of the results reveals that, of seven health outcomes considered, models controlling for other influences were significant for only two, arthritis and obesity, and both only at the fragile level of significance near p = .05. Public discussions of the study ignored the mishmash of significant and non-significant results, the high potential for type I error in marginal findings, and the overall weak effect sizes. It is this failure of psychological science to put results into proper context that so often causes us harm.
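The type I error point is worth making concrete with a back-of-the-envelope sketch. If seven outcomes are each tested at the p < .05 level, the chance of at least one false positive is substantial even when no real effects exist at all (assuming, for simplicity, that the tests are independent):

```python
# Probability of at least one type I error across k independent
# tests, each run at significance level alpha.
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

# Seven health outcomes tested at alpha = .05, as in the spanking example:
p_any_false_positive = familywise_error_rate(7)
print(f"{p_any_false_positive:.2f}")  # roughly 0.30
```

Two marginal hits out of seven outcomes, in other words, are uncomfortably close to what chance alone could produce.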

Instead, when presented with scepticism or doubt, we often see psychological science react defensively with ludicrous claims. Far too often I see psychological scientists defend their work by comparing it to climate science, medical effects or evolution. Or scholars conflate within-individual effect sizes with population-level impact. The logic goes something like, ‘Well, if the correlation between eating blueberries and suicide is r = .01, that means one out of ten thousand people could be saved from suicide if we convince everyone to stop eating blueberries.’ Or defenders might cite the importance of blueberries/suicide by saying the effect size is similar to that of the Salk vaccine trial, with its infamously miscalculated effect size of r = .011 (the actual effect size of the Salk trial is closer to r = .74; the Physicians’ Aspirin/Heart Attack Trial is another infamously miscalculated and misused effect size, often reported as near r = .03 when it is closer to r = .52; see Ferguson, 2009). These spurious comparisons between psychological science and other, well-established fields, made even after the statistics behind them have been debunked, are part of the evidence establishing so much of academic psychology as pseudoscientific in its enterprise.
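Why can a correlation coefficient so badly understate a trial’s real-world impact? The figures below are purely hypothetical (not the actual Salk data), chosen only to show how a phi coefficient (Pearson’s r computed on two binary variables) stays tiny when the outcome is rare, even though the treatment cuts risk dramatically:

```python
import math

def phi_and_risk_ratio(a, b, c, d):
    """2x2 outcome table: a = treated & ill, b = treated & well,
    c = control & ill, d = control & well."""
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    risk_ratio = (c / (c + d)) / (a / (a + b))
    return phi, risk_ratio

# Hypothetical rare-disease trial: 100,000 people per arm,
# 30 cases with treatment versus 100 cases without.
phi, rr = phi_and_risk_ratio(30, 99_970, 100, 99_900)
print(f"phi = {phi:.3f}, risk ratio = {rr:.1f}")
```

Here |phi| is about .014, yet untreated participants fall ill at more than three times the rate of treated ones. Judged by r alone, an intervention like this looks ‘trivial’, which is exactly why comparing a psychology correlation to a vaccine trial’s phi coefficient misleads.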

Where do we go from here?

The good news for academic psychology is that many scholars really are invested in objective science. Unlike adherents of, say, Flat Earth beliefs, astrology or phrenology, many in academic psychology understand there are problems and are dedicated to fixing them. At the same time, there is certainly resistance to change, transparency, improved rigour and conservatism in public pronouncements. Much of this resistance, unfortunately, appears to have originated within professional guilds, with their unhelpful policy statements on multiple issues. This is why I say that academic psychology is at a crossroads between science and pseudoscience.

Many of the suggestions for improving matters have been stated publicly so often that they need only brief repetition here. We need to focus on replication rather than novel findings. We need more transparency, and pre-registration of research protocols. We need to become less rigidly ideological about theory. We need to be careful about letting good-faith advocacy beliefs corrupt scientific integrity. Here, though, are a few thoughts on issues I feel are important but often missing from these discussions.



We need to be more realistic about effect size. Much of the discussion of replication has focused on whether results do or do not exist across replication efforts. There’s been less discussion about results that may replicate but are so small as to be trivial. Unfortunately, psychology has no real conception of the trivial, and that has invited all manner of pseudoscientific efforts to extend tiny effects into important findings. Psychology needs to develop a healthy sense of the trivial, which, frankly, probably encompasses a majority of findings, and stop highlighting these as crucial for people to know about.
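A back-of-the-envelope illustration of what ‘trivial’ means in practice: converting r to shared variance (r squared) makes the scale concrete. Even r = .2, respectable by headline standards, accounts for only four per cent of the variance:

```python
# Shared variance (r squared) for a range of typical effect sizes.
effect_sizes = (0.01, 0.1, 0.2, 0.3)
for r in effect_sizes:
    print(f"r = {r:.2f} -> {100 * r ** 2:.2f}% of variance explained")
```

At r = .01, the blueberries/suicide correlation of the earlier example, the shared variance is one hundredth of one per cent.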



Death by press release. The urge to see one’s research get recognition from the masses in print is entirely human and understandable. But a certain recklessness often creeps into press releases, which are not bound by the peer-review of the original article. It’s curious that a science that seems so concerned about myths would be willing to blithely misinform the public based on novel findings, the replicability of which may be unknown.

I am not suggesting the end of press releases. But given their lack of peer-review oversight, I am suggesting that press releases are ultimately the responsibility of the study authors. Acknowledge tiny effect sizes, inconsistent findings from other studies, methodological weaknesses. Adopt a cautious, qualifying tone. I get it that this is the sort of thing that results in less news coverage. But as things currently stand, press releases from psychological studies are probably creating more myths than they are challenging.



Stop picking on the kids. Nothing seems to get attention more than the latest study suggesting how youth today are worse than ever before. More narcissistic, less empathic, more addicted to video games, less interested in homework. Most of this is rubbish and makes us look bad. For some reason, youth appear to be the last demographic that psychological science feels free to disparage with complete disregard. Unfortunately, those youth eventually grow up and remember…



We need better leadership from our guilds. First, we need to remember that our professional organisations are not neutral arbiters of fact but guilds driven to promote our professions (often come what may). Our guilds have often been the tip of the spear, pushing researchers to make bolder and more irresponsible statements. New myths are created that appear to benefit the profession. What better than a set of counterintuitive findings that upend how most people view the world?

I believe our guilds hold a primary responsibility for the damage done to the reputation of our fields. But they can be a source of guidance for responsible conduct too. Most ‘policy statements’, at least those that appear to speak to scientific ‘fact’ or make declarations on moral issues, should be eliminated or retired immediately. Our guilds need to become more proactive in encouraging careful, cautious, balanced communication of research findings. This would take a considerable change of culture within these organisations and among the staffers that run them, but it’s a change members should insist upon.



My impression is that academic psychology has been here before. None of the issues being raised under the umbrella of ‘replication crisis’ are inherently new. And, arguably, the history of how psychology has responded to these crossroads is not encouraging. In the past, academic psychology seems to have settled on pseudoscience more often than it has pushed itself to be better. But perhaps this time will be different. Our problems are attracting considerable attention and, as the saying goes, there’s no better disinfectant than sunlight. And there seems to be real momentum behind change among many scholars.

I do think things will improve. But it will, fundamentally, take a change in culture. This will mean a difference in the way we train students, the way we publish, the importance put on grants, and the centrality of professional guilds to our profession. An effort to make psychology a true science will be long, painful and require determination. But, I believe, it is a goal worth striving for, and one we can achieve.

Meet the author

‘Even as a graduate student I realised there were often extreme gulfs between the public statements of psychologists and the data available to support them. I have become increasingly curious about academic culture itself, and how the field of academic psychology, often acting in good faith, promotes certain myths and misbeliefs about human behaviour. Although these issues relate to statistical problems, the weaknesses of null-hypothesis testing and our aversion to replication, at root, cultural issues within the field appear to be critical to understand if our field is to move forward. Too often psychologists think what they’re doing is an objective science; instead, it may be important increasingly to open up psychological science to its own sociological analyses, to understand how knowledge is constructed, communicated and sometimes miscommunicated.’

Chris Ferguson is Professor of Psychology at Stetson University, Florida

[email protected]

References

Afifi, T.O., Mota, N., MacMillan, H.L. & Sareen, J. (2013). Harsh physical punishment in childhood and adult physical health. Pediatrics, 132(2), e333–e340.

American Psychological Association (2015). APA review confirms link between playing violent video games and aggression. Retrieved from tinyurl.com/puvjw2u

Begley, C. & Ellis, L. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483, 531–533.

Bresin, K. & Gordon, K.H. (2013). Aggression as affect regulation. Journal of Social and Clinical Psychology, 32(4), 400–423.

Bushman, B.J. (2002). Does venting anger feed or extinguish the flame? Personality and Social Psychology Bulletin, 28(6), 724–731.

Consortium of Scholars (2013). Scholars’ open statement to the APA Task Force on Violent Media. Retrieved from tinyurl.com/pqbb32r

Coyne, J. (2015a). The Holocaust intrudes into conversations about psychiatric diagnosis: Godwin’s rule confirmed. PLoS Blogs. Retrieved from tinyurl.com/hnse3ae

Coyne, J. (2015b). Why the scientific community needs the PACE trial data to be released. PLoS Blogs. Retrieved from tinyurl.com/htczqhm

Ferguson, C.J. (2009). Is psychological research really as good as medical research? Effect size comparisons between psychology and medicine. Review of General Psychology, 13(2), 130–136.

Ferguson, C.J. (2015). ‘Everybody knows psychology is not a real science’: Public perceptions of psychology and how we can improve our relationship with policymakers, the scientific community, and the general public. American Psychologist, 70, 527–542.

Grant, A. (2015). Why behavioral economics is cool, and I’m not. Retrieved from tinyurl.com/jaxyr7d

Grohol, J. (2015). The Hoffman report: After years of lies, who holds the APA accountable? PsycCentral. Retrieved from tinyurl.com/qak98tr

Jarrett, C. (2015). 10 of the most counter-intuitive psychology findings ever published. BPS Research Digest. Retrieved from tinyurl.com/ofrc3en

Larzelere, R.E. & Cox, R.J. (2013). Making valid causal inferences about corrective actions by parents from longitudinal data. Journal of Family Theory & Review, 5(4), 282–299.

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Redding, R. (2001). Sociopolitical diversity in psychology: The case for pluralism. American Psychologist, 56, 205–215.

Task Force on Physical Punishment of Children (2015). Statement regarding hitting children. Retrieved from tinyurl.com/gvssfl2

Wofford, T. (2015, August). APA says video games make you violent, but critics cry bias. Retrieved from tinyurl.com/nfjcc2m