The Credibility Revolution in psychology is full of people diagnosing common malpractice in psychology research and offering us ways not to do our jobs badly. It’s less common to see these kinds of comments paired with a discussion of why it’s so hard to get psychology right.

That’s why a 1991 article by psychologist David Lykken stands out to me: Lykken, who was a close friend of the “prescient” psychology critic Paul Meehl, doesn’t just lay out common issues reformers are addressing now (e.g., non-replication, inappropriate reliance on “p-values” without common sense when evaluating research). He also talks about why being a psychology researcher is a hard job.


The first point he makes is about abstraction. Physicists (for example) are working with tiny particles that the average person has no real intuitions about. If you tell me that leptons come in two varieties, charged and neutral, who am I to argue? I am definitely not checking that against my last interaction with a lepton. I just don’t have intuitions about them. This allows physicists and other scientists to conduct their research programs with relatively “rough cut” abstractions. Newton did a lot of his original calculus with the simplifying assumption that planets could be treated basically as perfect, ideal points. The moon didn’t have mountains, and Jupiter didn’t have weather. They were just sky things that moved. We were capturing the big picture.

Psychology is different. Everyone has intuitions about psychology, because we’re using our understanding of other people to navigate the world every day. If I tell you that all emotional experience can be boiled down to a score on two fundamental dimensions—how positive to negative the emotion is (termed valence) and how calm to intense the emotion is (termed arousal)—then you will immediately check that theory against your intuitions. What about the difference between anger and fear, you might ask? Both are negative and intense, but they definitely feel different. You already know the valence/arousal model is wrong in an important way, so you’re not checking whether the simplification is useful. This familiarity makes it difficult to get started with formalizing things, because the caveats and limitations of a “rough cut” model seem so glaring.

But, Lykken argues, you need to start somewhere. Just trying to create a model that captures big picture stuff, like where planets are, can be tough. So getting that stuff right and then building out can be a good strategy. Interestingly, other social sciences (and even other areas of psychology—I see you, cognitive modelers!) have tried this approach. Economics built a lot of formal theory based on simplifying assumptions about how people behave in the economy. Some of the fundamental assumptions haven’t turned out so great: It turns out people can’t be abstractly represented as always acting in a way that rationally maximizes their payoffs. People are “irrational” from a mathematical point of view, and those biases throw off some models.


In evolutionary anthropology, though, the approach has been pretty fruitful. One of the most exciting theories in social science is Dual Inheritance Theory, which deals with the ways that cultural information can be passed from person to person. Dual Inheritance means that we inherit the traits that allow us to survive and thrive from two sources—our genetic makeup and our cultural traditions—and these two parallel systems can even interact.

The foundational work in this topic is a book of abstract mathematical models that deal with super-simplified situations, like a case where people have to choose between two technologies for solving a problem. In these models, people are as simple as Newton’s planets: they all have the same fundamental properties, and all we care about is whether they switch from using one technology to another. We don’t even pretend to care about their personalities, emotions, or social networks. We just let them be simple to get a sense of how broad-strokes processes might play out. Decades of research on this paradigm have further refined it and compared it to painstakingly collected data from cultures around the world—and even though it started simple, it’s really starting to take off as an explanatory framework. Social psychologists who do this work are often treated as sort of off-beat figures doing the scientific version of experimental theater. In fact, they may turn out to be foundational scientists once we seriously embrace the necessity of simplification in theory.
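To get a feel for how deliberately simple these models are, here’s a toy sketch of my own (not from the book, and every parameter value is invented): a population of identical agents choosing between two technologies, where all an agent ever does is observe a random peer and sometimes switch to whatever that peer is using.

```python
import random

def simulate(n_agents=200, n_rounds=100, advantage=0.6, seed=1):
    """Toy transmission model: agents use technology 'A' or 'B'.

    Each round, every agent observes one random peer. If the peer uses the
    (assumed) better technology 'A', the observer adopts it with probability
    `advantage`; if the peer uses 'B', with probability 1 - advantage.
    Returns the final share of the population using 'A'.
    """
    rng = random.Random(seed)
    pop = ['A' if rng.random() < 0.5 else 'B' for _ in range(n_agents)]
    for _ in range(n_rounds):
        new_pop = []
        for my_tech in pop:
            model = pop[rng.randrange(n_agents)]  # observe a random peer
            p_adopt = advantage if model == 'A' else 1 - advantage
            new_pop.append(model if rng.random() < p_adopt else my_tech)
        pop = new_pop
    return pop.count('A') / n_agents

share_a = simulate()  # even a modest copying bias pushes 'A' toward fixation
```

No personalities, no emotions, no social networks—just one biased copying rule. Yet even this bare-bones setup is enough to ask real questions, like how fast a slightly better technology spreads.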

Using the by-now-worn-out metaphor of the brain as computer hardware and mental processes as software programs, the next point Lykken argues is that we simply don’t take individual differences in people’s lived histories seriously enough. We may all start with similar brains, but our strategies for tackling problems, our emotional reactions to others, and even the things we notice in our day-to-day life are all shaped by the experiences we’ve had.

Even if most events have only a tiny effect on our behavioral tendencies, if you have enough of them—like, let’s say a year’s worth of interactions with antagonistic people—that can add up to a significant change (defensiveness, say). It’s like the drop of water that wears away the boulder: our lives are shaped through the repetition of almost imperceptible feedback we get from the world around us about our behavior.
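The arithmetic of that “drop of water” point is worth seeing. Here’s a back-of-the-envelope calculation where the numbers are invented purely for illustration:

```python
# Purely illustrative numbers: suppose each antagonistic interaction nudges
# some behavioral tendency (defensiveness, say) up by a barely measurable 0.1%.
nudge = 1.001            # multiplicative effect of a single interaction
interactions = 3 * 365   # roughly a year of a few such interactions per day

growth = nudge ** interactions  # compounded effect after a year: roughly 3x
```

An effect far too small to detect in any single interaction ends up roughly tripling the tendency over a year of repetition.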

Given this accretion of learned responses across a lifespan, it may be wrong-headed to try to make general statements about what people do in a given situation. There isn’t going to be a lot of consistency or predictability if we try to get an *average behavioral response* of a generic person. There isn’t a generic person, and taking an average across different kinds of people might not end up telling us about what to expect from any of them.
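A toy numerical example (all numbers invented) shows how an average across subtypes can describe no one:

```python
# Hypothetical ratings of how strongly people respond to provocation,
# on a -3 (withdraw) to +3 (escalate) scale.
arguers  = [2.1, 2.4, 1.8, 2.2]      # subtype 1: escalate when provoked
avoiders = [-2.0, -1.9, -2.3, -1.8]  # subtype 2: withdraw when provoked

everyone = arguers + avoiders
grand_mean = sum(everyone) / len(everyone)      # ~0.06: "no response, on average"

mean_arguers = sum(arguers) / len(arguers)      # ~2.1: strong escalation
mean_avoiders = sum(avoiders) / len(avoiders)   # ~-2.0: strong withdrawal
```

The grand mean near zero predicts a calm non-response that not a single person in either subtype actually shows.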

Instead, we might want to consider different subtypes of people based on their responses to situations, and think about responses within these classes of people. Lykken dates himself by suggesting that some people are doing text processing in WordPerfect and others in WordStar. Each software program has different options, and even accessing the same options might require a different set of commands. For example, some people might get into lots of arguments while others don’t even seem to know how to argue, and even among people who argue a lot, different things might trigger these arguments (e.g., I’m more likely to go off about statistical methods than football scores).


This revisits a long-standing debate in psychology over Person-Centered (or idiographic) versus Variable-Centered (or nomothetic) methods. I touched on this in one of my first blog posts, but really these approaches give us different views on people. The Variable-Centered approach is currently dominant, but it takes a sort of “public policy” view of human behavior. It’s great for telling you what would happen on average if you implemented a program (let’s say one delivered through an app) that was the same for everyone. That would allow you to plan at the population level, predicting how people are likely to respond on average.

It doesn’t address the individual processes that lead people to do what they do—whether a person is running WordStar and how to get WordStar to properly display subscripts. That’s more like an individualized approach. It suggests that we need to intensively study one person’s reactions to many different situations to understand their behavior—and that we probably need to do intensive, repeated observations for each person. We need to look at a lot of people individually before we start trying to generalize or even form a set of common types. Most psychologists just assume a variable-centered model, but I worry that we as a field bet on the wrong horse here. In my own work, person-centered modeling seems like it provides substantially more accuracy and consistency in predicting behavior than variable-centered modeling does.

Lykken makes a few other interesting points about the customs and habits in psychology, which I’ll just mention briefly: first, he suggests that without a central unifying theory (or paradigm), it’s hard for psychologists (or any scientists) to make progress. In particular, he argues for a sort of “great person” view in which path-breaking thinkers are needed to establish promising programs, which then allows mid-level scientists to do valuable work filling in the details and implications. Without these paradigms, we non-geniuses are likely to spend a lot of time just spinning our wheels doing work that isn’t adding much (or maybe we’ll get really into the philosophy of science and start blogging!).

His other point is that original research is over-valued. Every psychologist feels like they have to be churning out ground-breaking new studies every year, but just explaining and promoting reliable findings (or arguing against unreliable ones) is really valuable. Lykken himself sounds proudest of his work testifying about how unscientific and inaccurate lie detectors are. In terms of the practical effects on people’s lives—not being wrongly accused or incorrectly exonerated—increasing public knowledge of this might have been the best use of his scientific training.

So why is psychology so hard? On the one hand, the subject matter is hard: we can’t capture the big picture without being dinged for missing the details, and we can’t capture people’s life histories without intensively studying specific individuals. On the other hand, our culture could use some change: individual psychologists often seem to lack a common framework that allows them to build on each other’s work, and we make too big a deal out of the need to do new work. Sometimes what you really need is just to take a step back and take stock of what you really do—and don’t—understand.
