I.

In 2006, Bryan Caplan wrote a critique of psychiatry. In 2015, I responded. Now it’s 2020, and Bryan has a counterargument. I’m going to break the cycle of delay and respond now, and maybe we’ll finish this argument before we’re both too old and demented to operate computers.

Bryan writes:

1. With a few exceptions, Scott fairly and accurately explains my original (and current) position.

2. Scott correctly identifies several gray areas in my position, but by my count I explicitly acknowledged all of them in my original article.

3. Scott then uses those gray areas to reject my whole position in favor of the conventional view.

4. The range of the gray areas isn’t actually that big, so he should have accepted most of my heterodoxies.

5. If the gray areas were as big as Scott says, he should reject the conventional view too and just be agnostic.

I think the gray areas are overwhelming and provide proof that Bryan’s strict dichotomies don’t match the real world.

I also think, as a general philosophical point, that we ought to be suspicious of arguments of the form “the gray areas are small”. Even if this is true, and your model only fails in a few places, controversial questions are likely to be controversial questions precisely because they’re located where your model fails. Nobody challenges a model on an exactly typical case where everything makes sense. So if a point is under debate, let’s say in a fifteen year back-and-forth argument between two bloggers that’s attracted hundreds of total comments, the a priori size of the gray areas doesn’t matter. Even if your model is good at most things, you have strong evidence this isn’t one of them.

In this case, the model we’re debating is Bryan’s idea of constraints vs. preferences. My previous summary of this (which Bryan endorses) goes like this:

Consumer theory distinguishes between two different reasons why someone might not buy a Ferrari – budget constraints (they can’t afford one) and preferences (they don’t want one, or they want other things more). Physical diseases seem much like budget constraints – the reason a paralyzed person can’t run a marathon is because it’s beyond her abilities, simply impossible. Psychiatric diseases seem more like preferences. There’s nothing obvious stopping an alcoholic from quitting booze and there’s nothing obvious preventing someone with ADHD from sitting still and paying attention. Therefore they are best modeled as people with unusual preferences – the one with a preference for booze over normal activities like holding down a job, the other with a high dispreference for sitting still and attending classes. But lots of people have weird preferences. Therefore, psychiatric diseases should be thought of as within the broad spectrum of normal variation, rather than as analogous to physical diseases.

I countered by pointing out that this was in fact very analogous to physical diseases:

Alice has always had problems concentrating in school. Now she’s older and she hops between a couple of different part-time jobs. She frequently calls in sick because she feels like she doesn’t have enough energy to go into work that day, and when she does work her mind isn’t really on her projects. When she gets home, she mostly just lies in bed and sleeps. She goes to a psychiatrist who diagnoses her with ADHD and depression. Bob is a high-powered corporate executive who rose to become Vice-President of his big Fortune 500 company. When he gets home after working 14 hour days, he trains toward his dream of running the Boston Marathon. Alas, this week Bob has the flu. He finds that he’s really tired all the time, and he usually feels exhausted at work and goes home after lunch; when he stays, he finds that his mind just can’t concentrate on what he’s doing. Yesterday he stayed home from work entirely because he didn’t feel like he had the energy. And when he gets home, instead of doing his customary 16 mile run he just lies in bed all day. His doctor tells him that he has the flu and is expected to recover soon. At least for this week Alice and Bob are pretty similar. They’d both like to be able to work long hours, concentrate hard, and stay active after work. Instead they’re both working short hours, calling in sick, failing to concentrate, and lying in bed all day. But for some reason, Bryan calls Alice’s problem “different preferences” and Bob’s problem “budgetary constraints”, even though they’re presenting exactly the same way! It doesn’t look like he’s “diagnosing” which side of the consumer theory dichotomy they’re on by their symptoms, but rather by his assumptions about the causes.

But Bryan doesn’t budge:

I’m unimpressed, because I not only anticipated such objections in my original paper, but even proposed a test to help clarify the fuzziness…can we change a person’s behavior purely by changing his incentives? If we can, it follows that the person was able to act differently all along, but preferred not to; his condition is a matter of preference, not constraint. I will refer to this as the ‘Gun-to-the-Head Test’. If suddenly pointing a gun at alcoholics induces them to stop drinking, then evidently sober behavior was in their choice set all along. Conversely, if a gun-to-the-head fails to change a person’s behavior, it is highly likely (though not necessarily true) that you are literally asking the impossible. I then presented multiple forms of evidence that a wide range of alleged mental illnesses are responsive to incentives. Scott barely mentions said evidence. Still, does this mean that the flu isn’t “really” an illness either? No. Rather it means that physical illness often constrains behavior and changes preferences. When sick, the maximum amount of weight I can bench press falls. (Yes, I’ve actually tried this.) Yet in addition, I don’t feel like lifting weights at all when I’m sick. Anyone who has worked while ill should be able to appreciate these dual effects. If you literally get sick, your ability and desire to work both go down. When you metaphorically get “sick of your job,” in contrast, only your desire goes down.

I reject the heck out of this answer. I agree the “gun to the head” test is a good summary of Bryan’s position, but we already agreed what Bryan’s position is. The only thing he’s adding here is a claim that the flu still qualifies as a real disease because it sometimes constrains behavior (the amount of weight Bryan can lift). But nobody cares how much weight they can lift during a flu! When we talk about having the flu being bad, we’re talking 0% about how much weight we can lift, and 100% about the sorts of problems Bob has – feeling too ill to go to work, not wanting to do things, etc. If Bryan searches hard enough, he can find a way the flu results in slightly weaker muscle strength. But if I search hard enough, I can find a way depression results in slightly weaker muscle strength. Neither of these things are what the average person thinks about when they think of “flu symptoms” or “depression symptoms”, and I consider them both equally irrelevant.

But if a change in weight-lifting ability really disqualifies the flu for Bryan, we can talk about other diseases.

What about shingles? It’s a viral infection that causes a very itchy rash. But sometimes (zoster sine herpete) the rash isn’t visible, and you just get really itchy for a few days. Like, really itchy. I had this condition once and it was just embarrassing how much I was scratching myself. But if you had put a gun to my head and said “Don’t scratch yourself, or I’ll kill you”, I would have sat on my hands and suffered quietly. For Bryan, an itch is just a newfound preference for scratching yourself. Shingles, like depression or ADHD, is just a preference shift, and so doesn’t qualify as a real disease.

Or what about respiratory tract infections that cause coughing? My impression is that, with a gun to my head, I could keep myself from coughing, even when I really really felt like it. Coughing is a preference, not a constraint, and Bryan, to be consistent, would have to think of respiratory infections as just a preference for coughing.

Or what about migraines? Sure, people with migraines say they feel pain, but that’s no better grounded than someone with depression saying they feel sad. If Bryan is allowed to bring in concepts like “pain”, I’m allowed to bring in concepts like “sadness”, “anxiety”, etc. And since an anxious person feels anxiety and cannot stop feeling it even if threatened with a gunshot, the anxiety counts as a constraint, and so mental disorders are constraining. For Bryan’s constraints-vs-preferences dichotomy to work at all, he has to endorse a sort of behaviorism, where we need not believe anything that doesn’t express itself as behavior. And the only behavior we see in a migraine is somebody lying in bed, turning off all the lights, and occasionally clutching their head and saying “auggggh”. But put a gun to their head and demand they be in a bright room with lots of loud music, and they’ll go to the bright room with lots of loud music. Threaten to shoot them unless they stop clutching their head and moaning, and they’ll stop clutching their head and moaning. In Bryan’s model, migraines are just a newfound preference for saying “auggggh” a lot. Why medicalize this? Some people like saying “auggggh” and that’s valid!

Bryan’s preference vs. constraint model doesn’t just invalidate mental illness. It invalidates many (maybe most) physical illnesses! Even the ones it doesn’t invalidate may only get saved by some triviality we don’t care about – like how maybe you can lift less weight when you have the flu – and not by the symptoms that actually bother us.

II.

We need a model that lets us describe shingles as something more than “this person has a preference for scratching themselves frantically, and that preference is valid, nothing to worry about here”. I don’t have a beautiful elegant version of a model like this yet, but I think Bryan himself has gone most of the way to an at-least-adequate one.

In his post The Depression Preference, Bryan admits that most depressed people don’t want to be depressed. But he terms this a meta-preference – a preference over preferences. They have depressive preferences – for example, a preference for sitting around crying rather than doing work. They would meta-prefer not to have those preferences. But they do have them.

I agree this is a fruitful way to look at things, but I think we have to be really careful here, and that using the same term for endorsed meta-preferences and unendorsed object-level preferences is preventing this level of care. Let’s call endorsed preferences which people meta-prefer to have “goals”, and unendorsed preferences which people would meta-prefer not to have “urges”. I think this closely matches our intuitive understanding of these terms.

Suppose I created a sinister machine that beamed mind control rays into Bryan’s head and gave him an urge to constantly slap himself in the face. This urge could theoretically be resisted, but it’s so strong that in practice he never managed to resist it. It didn’t make him enjoy slapping himself in the face, or think this was a reasonable thing to do. It just made him compulsively want to keep doing it. He loses his job, his friends, and his dignity, because nobody wants to be around someone who’s slapping himself in the face all the time. I hope we can common-sensically agree on the following:

1. This is bad

2. Bryan would want to find and destroy the sinister machine

3. That would be a pretty reasonable goal for Bryan to have, and society should support him in this

This seems a lot like the shingles case. A sinister outside imposition (the viral infection) gives its victim an urge to constantly scratch themselves. It doesn’t make them enjoy scratching themselves, or think this is a reasonable thing to do. These people want to cure their shingles infection, and everyone agrees this desire is reasonable.

But this also seems a lot like some cases of OCD. Did you know that a subset of childhood OCD is caused by a streptococcal infection? So again, you get a sinister outside imposition (an infection) that gives its victim an urge to, let’s say, wash their hands fifty times a day. It doesn’t make them enjoy washing their hands, or think this is a reasonable thing to do (some OCD patients do believe their rituals are necessary, others don’t). These people want to cure their OCD, and I at least agree this desire is reasonable.

If you would support the sinister machine victim and the shingles victim, it’s hard for me to see a case for putting the OCD victim in a different category. I agree I’m using as clear a case as possible (most mental disorders aren’t obviously due to infections), but both Bryan and I are trying to avoid bringing specific facts about biology into this mostly-philosophical debate. The distinction between goals and urges turns what looked like an acceptable situation (these people are following their preferences, which is good) into an unacceptable situation (these people’s goals are being thwarted by unwelcome urges which they can’t resist).

I expect most of Bryan’s skepticism to focus on those last two words – “can’t resist”. He will no doubt bring up his gun-to-the-head test again. If we put a gun to the head of a shingles patient, they could stop scratching. So although we can be sympathetic to the trouble their unwanted new preference causes them, how can we recommend anything other than “just suck it up and resist the preference”?

The best model of decision-making I know of comes from research on lampreys. Various areas of the lamprey brain come up with various plans – hunt for food, hide under a rock, wriggle around – and calculate the “strength” of the “case” for each one, which they convert into an amount of dopamine. They send this dopamine to a part of the brain called the pallium, and then the pallium executes whichever plan has the most dopamine associated with it.

Suppose I have shingles. I’m giving a speech to a group of distinguished people whom I desperately want to impress. Then I get a very strong itch. Part of my brain calculates the expected value of continuing to speak in a dignified way, and converts that into dopamine. Another part calculates the importance of scratching myself vigorously, and converts that into dopamine. The pallium compares these two amounts of dopamine, one is larger than the other, and the decision gets made. If the itch is bad enough, and if whatever lizard-brain nucleus makes me want to scratch itches has enough dopamine to spare, then I never had a chance.

“But,” Bryan objects, “if I put a gun to your head, and threatened to shoot you if you scratched the itch, you wouldn’t do it, would you?”

In that case, a part of my brain calculates the expected value of continuing to speak in a dignified way plus not getting shot. This is a very high expected value! It sends lots and lots of dopamine to my pallium. The part of my brain calculating the expected value of scratching the itch and getting shot calculates this as a very low-expected-value course, and sends a very low (maybe negative?) signal. The pallium decisively selects the plan to keep speaking and not get shot.

To summarize: the brain compares the strength of various preferences and executes the strongest. Anything that strengthens your urges at the expense of your goals makes you more likely to do things you don’t endorse, and makes you worse off. In a counterfactual world where a threatened gunshot is also weighing down the scale, maybe the calculus would come out different. But in the non-counterfactual world where there is no gunshot, the calculus comes out the way it does.

(also, if Bryan uses his gunshot analogy one more time, I am going to tell him about all of the mentally ill people I know about who did, in fact, non-metaphorically, non-hypothetically, choose a gunshot to the head over continuing to do the things their illness made it hard for them to do. Are you sure this is the easily-falsified hill you want to die on?)

This model doesn’t use the word or the concept of “choice” anywhere. There are various algorithms mechanically evaluating the expected reward of different actions, and a more central algorithm comparing all of those evaluations. Those algorithms could have resolved differently in different situations, and you can be uncertain how they will resolve in the same situation, but there’s no point at which they actually could resolve differently in the same situation. If this makes you want to start debating free will – in either direction – I cannot recommend this Less Wrong post highly enough.

A few examples to hammer this in:

1. Most weekends, Alice stays in and reads a book (preference strength 20). But today is her firstborn child’s wedding, which she has been looking forward to for years (preference strength 100). Just before she leaves for the chapel, she gets a terrible migraine, and she feels like it would be unbearable to go out of her room (preference strength 200). Since 200 is greater than 100, Alice misses the wedding and feels miserable, since she would have meta-preferred to go to the wedding. If you had threatened to shoot her unless she went to the wedding, she would have gone to the wedding and been miserable the whole time, because she is terrified of death (preference strength 9999) and 9999 is greater than 200.

2. Bryan is a responsible member of society and wants to work hard and take care of his family (preference strength 100). He drinks some alcohol, but because he has no genetic or environmental risk factors for alcoholism, it doesn’t make him feel any urge to drink himself to death (preference strength 0), so he doesn’t. If we CRISPRed him to give him every single alcoholism risk gene plus crippling anxiety, then drinking the alcohol would make him feel a very strong urge to drink himself to death (preference strength 200), and he would drink himself to death instead of caring for his family.

3. CRISPRed alcoholic Bryan goes to an addiction doctor. The doctor advises him to take the anti-alcoholism drug naltrexone (-20 preference strength for alcoholism). Then the doctor advises him to go to Alcoholics Anonymous and get a whole new friend group in which his status depends entirely on his ability to remain sober (+20 for staying sober). Now his preferences are “stay sober and take care of my family” (strength 120) vs. “drink myself to death” (strength 180), but the preference to drink is still stronger, so he does.

4. Bryan goes to a therapist who asks him to visualize the things he loves about his family and why he thinks it’s important to take care of them, which makes this more vivid in his mind (preference +10 for sobriety). Bryan’s boss threatens to fire him if he misses one more day of work because of drunkenness (preference +20 for sobriety). Now he’s at 150 for sobriety vs. 180 for drinking. He gives $20,000 to Beeminder, which they will only give him back if he stays sober for the next year (+20 for sobriety), and he reads George Ainslie’s Picoeconomics which describes ways to reconceptualize choices across time to better account for all of their implications (+20 for sobriety). Now he’s at 190 for sobriety vs. 180 for drinking, so he stays sober.

5. A few months later, Bryan’s friend dies in an accident. He feels angry, depressed, and anxious. This makes alcohol seem more attractive, since it would temporarily help him forget these feelings (+20 for drinking). At the same time, he stops going to AA because it’s annoying and far away (-20 for staying sober). Now he’s at 170 for sobriety vs. 200 for drinking, so he falls off the wagon.
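If it helps to see the bookkeeping made explicit, here’s a toy sketch in Python of the “strongest preference wins” rule, using the made-up strengths from the scenarios above (none of these numbers are real measurements of anything; the whole point of the model is the argmax, not the values):

```python
def decide(plans):
    """The pallium-analogue: execute whichever plan has the most 'dopamine'."""
    return max(plans, key=plans.get)

# Scenario 3: naltrexone (-20 for drinking) and AA (+20 for sobriety)
plans = {"stay sober": 100 + 20, "drink": 200 - 20}
assert decide(plans) == "drink"            # 120 < 180

# Scenario 4: therapy (+10), boss's ultimatum (+20), Beeminder (+20),
# Picoeconomics (+20) all weigh in on the sobriety side
plans["stay sober"] += 10 + 20 + 20 + 20   # now 190
assert decide(plans) == "stay sober"       # 190 > 180

# Scenario 5: grief (+20 for drinking), quitting AA (-20 for sobriety)
plans["drink"] += 20                       # now 200
plans["stay sober"] -= 20                  # now 170
assert decide(plans) == "drink"

# The gun-to-the-head test doesn't reveal a hidden "choice"; it just
# drops an enormous weight on one side of the same scale.
plans["stay sober and not get shot"] = plans["stay sober"] + 9999
assert decide(plans) == "stay sober and not get shot"
```

Nothing in this sketch models “willpower” as a separate faculty: interventions (drugs, incentives, therapy, threats) only ever add or subtract weight from one of the competing plans, and whichever plan is heaviest gets executed.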

I’m not claiming this lamprey model is exactly literally true for humans. And I’m not claiming there’s a perfect binary distinction between endorsed goals and unendorsed urges. This model is full of complications and gray areas. I’m just saying it’s a better model, with fewer gray areas, than trying to separate everything into just “preference” or “constraint”, and shooting yourself in the foot again and again like some kind of tipped-over Gatling gun.

And it goes a lot of the way to modeling mental illness: the mentally ill have conditions that give them strong unendorsed urges. For any given strength of goal, having strong urges will make people less able to pursue that goal, in favor of pursuing the urges instead, and that will make them worse off, for a definition of “well off” that involves being happy and achieving goals. These people very reasonably want to stop having these weird urges so they can pursue their goals in peace.

Bryan will correctly point out that there are awkward implications in identifying “unexpected generator of strong unendorsed urges” with “disease”. For example, gay people in a traditional religious community will have strong urges to have homosexual relationships, and they won’t endorse those urges – they would probably rather be straight instead.

Or: obese people feel an urge to eat which they don’t endorse. Should we call obesity a disease, and describe them as having a disease which produces urges contrary to their preferences? Some people say yes (and keep in mind that both genetics and viral infections can induce obesity). But suppose some normal-weight person would rather be supermodel-thin, and their perfectly normal urge to eat a normal amount prevents them from looking like a broomstick. Is their normal level of hunger a disease? A naive equation of “biological generator of unendorsed urges” and “disease” would say yes!

We want some criteria that let us call shingles a disease, but don’t let us call “being thin but wanting to be even thinner” a disease. Unfortunately, there is no perfect solution to this problem. People have wanted perfect solutions to definitional questions ever since Plato defined man as “a featherless biped”, and it’s never worked. Luckily, there are kludgy, good-enough solutions, which I describe in Dissolving Questions About Disease, the fourth most popular Less Wrong post of all time. If you still think this is confusing, please read it. If it’s still confusing even after that, try The Categories Were Made For Man, Not Man For The Categories.

I think Bryan should be happy with this solution. It’s very libertarian. It says that it’s up to every individual to decide how to satisfy their own preferences (including meta-preferences). If your problem is constraints (you want to go to Hawaii, but you don’t have enough money), you can work to resolve those constraints (eg go to work and earn more money). If your problem is urges (you want to go to Hawaii, but you’re too anxious to leave your room), you can work to resolve those urges (eg go to a psychiatrist and get medication). The job of a good liberal society is to support people in achieving their own goals as they understand them, and this includes supporting their decision to get the job they want and their decision to get the psychiatric treatment they want.

As I write this essay, I’m a little bit caffeinated. I looked at my preference set – which included an urge to get back in bed instead of writing blog posts – decided it didn’t achieve my goals, and took a psychotropic drug to shift my preference set to one I liked better. And if we’re willing to accept this in relatively trivial cases, the argument for accepting it is even stronger for people whose preference sets have been deranged by obvious bizarre causes – infections, hormone imbalances, brain injuries, addictive substances, genetic defects – and for people whose irresistible urges are ruining their lives in preventable ways.