[Previously in sequence: Fundamental Value Differences Are Not That Fundamental, The Whole City Is Center. This post might not make a lot of sense if you haven’t read those first.]

I.

Thanks to everyone who commented on last week’s posts. Some of the best comments seemed to converge on an idea like this:



[Figure: a ladder diagram of the four levels discussed below – explicit models, emotional experiences, reified essences, and endorsed values. Caption: Confusing in that people who rely on lower-level features are placed higher, but the other way would have been confusing too.]

We need to navigate complicated philosophical questions in order to decide how to act, what to do, what behaviors to incentivize, what behaviors to punish, what signals to send, and even how to have a society at all.

Sometimes we can use theories from science and mathematics to explicitly model how a system works and what we want from it. But even the scholars who understand these insights rarely know exactly how to objectively apply them in the real world. Yet anyone who lives with others needs to be able to do these things; not just scholars but ordinary people, children, and even chimpanzees.

So sometimes we use heuristics and approximations. Evolution has given us some of them as instincts. Children learn others as practically-innate hyperpriors before they’re old enough to think about what they’re doing. And cultural evolution creates others alongside the institutions that encourage and enforce them.

In the simplest case, we just feel some kind of emotional attraction or aversion to something.

In other cases, the emotions are so compelling that we crystallize them into a sort of metaphysical essence that explains them.

And in the most complicated cases, we endorse the values implied by those metaphysical essences above and beyond whatever values we were trying to model in the first place.

Some examples:

People and animals need a diet with the right number of calories, the right macronutrient ratios, and the right vitamins and minerals. A few nutritional scientists know enough to figure out what’s going on explicitly. Everyone else has evolved instincts that guide them through this process. Hunger and satiety are such instincts; when they’re working well, they make sure someone eats as much as they need and no more. So are occasional cravings for some food with exactly the right nutrient – most common in high-nutrient-use states like pregnancy. But along with these innate heuristics, we have culturally determined ones. Everyone has a vague sense that potato chips are “unhealthy” and spinach is “healthy”, though most people can’t explain why. Instead of asking ordinary people and children to calculate their macronutrient and micronutrient profile, we ask them to eat “healthy” foods and avoid “unhealthy” foods. There’s something sort of metaphysical about this – as if “health” were a magic essence that adheres to apples. And in fact, sometimes this goes wrong and people will do things like blend a thousand apples into some hyper-pure apple-elixir to get extra health-essence – but overall it mostly works.

EXPLICIT MODEL: Trying to count how many calories and milligrams of each nutrient you get

EMOTIONAL EXPERIENCE: Feeling hungry or full

REIFIED ESSENCE: Some foods are inherently healthy or unhealthy

ENDORSED VALUE: Insisting on only eating organic foods even when those foods have no quantifiable benefit over nonorganic

Every society has some kind of punishment for people who don’t follow its norms, whether it’s ostracism or community service or beheading. There’s a good consequentialist grounding for why this is necessary, with some of the most academic work being done in the field of prisoners’ dilemmas and tit-for-tat strategies. But again, we don’t expect ordinary people, children, and chimpanzees to absorb this work. The solution is the (innate? culturally learned? some combination of both?) idea of punishment. Punishment relies on a weird metaphysical essence of moral desert; people who do bad things deserve to suffer. The balance of the Universe is somehow off when a crime goes unavenged. Take this too far and you get the Erinyes and the idea that justice is the most important thing. From ancient China to Hamlet, there are references to the idea that if you have something important to avenge, you need to do it now or you’re a bad person. None of this follows from the game theory, but it’s a really good way to enforce the game-theoretically correct action.

EXPLICIT MODEL: Trying to figure out how to best deter antisocial behavior and optimize society

EMOTIONAL EXPERIENCE: Feeling angry when someone wrongs you

REIFIED ESSENCE: Justice: the world is out of balance when crimes go unavenged

ENDORSED VALUE: Wrongdoers must suffer whether or not that prevents future crimes
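The tit-for-tat point above can be made concrete with a toy simulation. This is an illustrative sketch of my own, not anything from the post or its sources (the function names and the use of the standard prisoner’s-dilemma payoffs are my assumptions): a tit-for-tat player “punishes” each defection by defecting back once, and that retaliation is exactly what makes sustained cooperation the better long-run strategy against it.

```python
# Toy iterated prisoner's dilemma. Tit-for-tat "punishes" a defector
# by defecting back on the next round, so against it, consistent
# cooperation outscores consistent defection over repeated play.
# Payoffs are the standard values: T=5, R=3, P=1, S=0.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strat_a, strat_b, rounds=10):
    """Return each strategy's total payoff over repeated rounds."""
    seen_by_a, seen_by_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Against a punisher, cooperating pays better than defecting:
coop_score, _ = play(always_cooperate, tit_for_tat)    # mutual cooperation every round
defect_score, _ = play(always_defect, tit_for_tat)     # one exploitation, then mutual punishment
```

The defector wins the first round (5 vs. 0) but then gets the mutual-defection payoff forever after, which is the game-theoretic sense in which punishment enforces cooperation without any metaphysics of desert.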

If you reward people who create value, sometimes those people will be inspired to keep creating value. This is hard for people to keep in mind, and there’s a constant temptation to confiscate other people’s things for our own enrichment. Some kind angel gave us the metaphysical idea of “deserving”, the opposite of punishment. We get rights claims like “People deserve to keep what they’ve earned”. Five thousand years of taxation have made only a partial dent in this intuition, to the point where many people still feel like something is going wrong when a producer and the value they produced are separated. Some would argue that this has gotten completely out of hand, to the point where we insist on people keeping money far past the point where it could possibly be any further incentive to them.

EXPLICIT MODEL: Letting people keep what they produce incentivizes further production

EMOTIONAL EXPERIENCE: Anger when someone takes something rightfully yours

REIFIED ESSENCE: Natural rights; governments cannot take away property rights because they are ordained by God or natural law

ENDORSED VALUE: You can’t take people’s property, whether or not this will affect further production

In past societies, STDs were a common cause of death and disfigurement. Nobody had the medical knowledge to really understand what an STD was or how to avoid getting one. But every society had some kind of complicated code of sexual purity. Usually this was designed from a male perspective, and said that women who had sex with too many other men were “impure”, virgins were especially “pure”, and a woman who had only had sex with you was “pure” relative to you. These rules protect people who follow them against STDs, and plausibly culturally evolved for that purpose (among others). But because no one knew about STDs, the rules rely on a kind of metaphysical notion of “purity” that doesn’t correspond to any real-world characteristic. For example, someone who’s had sex with a hundred people but who nevertheless never contracted an STD would seem metaphysically “impure” by the rules, but in reality safe to have sex with; this would be irrelevant to medievals who had no way to identify such people, but is very relevant now. Or: if you have good sexual protection and STDs are easily treatable, the whole “purity” system seems a lot less important, but if you think of it as a metaphysical construct important in its own right you might not realize this.

(before you tell me that STDs aren’t important enough to inspire something as universal and compelling as sexual purity laws, remember that in the pre-antibiotic era about 10% of city-dwellers had syphilis (see studies from early Mesoamerica, 1700s Chester, and early 1900s London). During this period syphilis had a mortality rate of up to 20%, with survivors often permanently unhealthy and disfigured. And this is just one of many dangerous STDs!)

EXPLICIT MODEL: Figuring out the likelihood that your partners have STDs helps you avoid high-risk pairings

EMOTIONAL EXPERIENCE: Feeling grossed out at the idea of having sex with somebody who “sleeps around”

REIFIED ESSENCE: Idea of sexual purity

ENDORSED VALUE: It’s wrong to be slutty or have sex with a slutty person even if there are effective strategies for preventing STDs

Most people are happier when they’re in at least some Nature, whether this means a grand national park or just a leafy suburb with lots of chirping birds. The average person would consider a concrete lot full of Brutalist apartments a little soul-crushing. This probably comes from an evolutionary heuristic in favor of fertile areas and against barren ones; the closest chimpanzee-parseable equivalent to a concrete lot would be a desert or lava flow, where food and shelter are scarce. But nowadays we can order takeout, and the Brutalist apartment buildings provide all the shelter we need. This is probably another obsolete evolutionary relic, but it’s a very persistent one.

EXPLICIT MODEL: More plants and less gray rock means a more hospitable area with more food sources

EMOTIONAL EXPERIENCE: Contentment when surrounded by plants; depression when surrounded by concrete

REIFIED ESSENCE: Idea of “Nature”

ENDORSED VALUE: Environmentalism; the preservation of Nature for its own sake whether it benefits humans or not

Value differences, then, arise between people who operate at different levels of the ladder.

For example, “sexually liberated” people might use condoms, or ask their partners to check if they have STDs. But having done this, they’ll ignore the metaphysical idea of “purity”; they don’t care how many other people their partner has had sex with. And if they’re not planning to have sex with someone, they’ll ignore their “purity” full stop – having STDs doesn’t make you a bad person. These people explicitly model the complicated dynamic of STD contagion, and cast off the metaphysics as a primitive approximation they no longer have any use for.

Traditional or religious communities are more likely to endorse values based on the “purity” metaphysical essence. They understand the biology of STDs just as well as the sexually liberated people. They just don’t care. On an intellectual level, they believe that sexual purity is more than just a predictive model of STD risk, or that it has gained some additional function in the meantime, or that the heuristic still works better than calculating everything out explicitly. Or they might not think in these terms at all, and value purity as a terminal good. Or they might be following their instincts in a way that isn’t reducible to anything else at all.

Other people are somewhere in between. I know in theory that I cannot get AIDS through touching infected blood left on a sheet or chair. Absent some sort of very unlikely chain of events involving weird mouth ulcers and very fast turnaround times, I can’t even get it by rubbing the blood on food and eating it. But I would still feel more than zero trepidation about doing this. It would seem that even though I identify with the sexually-liberated explicit-complicated-dynamic-modellers, I have some weak vestige of the metaphysical purity heuristic left. This makes me more sympathetic to people with the full version. They don’t seem like weird mutants too stupid to figure out what an STD is, they feel like people with my instincts magnified a million times until they’ve become irresistible.

I have tried very hard to cultivate a vital rationalist skill called “admitting I am being an idiot while feeling no obligation to change”. That is, I feel comfortable saying I’m being very silly by objecting to touching the HIV-infected blood. If someone were to lecture me that admitting this obligates me to touch the blood or else I will have proven myself a hypocrite, I would tell them to go jump in a lake. This is important because I’m pretty sure my purity-instinct urge not to eat HIV-infected blood is stronger than my urge to be right about factual issues. If I were forced either to eat the blood, or to make up some plausible-but-false reason why the experts were wrong and blood was unsafe, I would make up the reason. And then not only would I not eat the blood – a venial sin if ever there was one – but I would be obfuscating the debate, screwing up lots of people’s ideas about epidemiology, and taking a step on the slippery slope toward becoming a dishonest and dishonorable person. I would rather just admit I’m silly.

But I think this is a hard skill, one that I often get wrong even despite frequent practice, and one I don’t expect anyone to succeed at all the time. I think some people with strong metaphysical heuristics – around HIV, around sex, around whatever – are going to get to work justifying them. If many people share a certain strong metaphysical heuristic, then there will be entire communities dedicated to researching justifications for it, coming up with philosophy around it, and reinforcing one another for being wise and good enough to believe it.

I think this is a big part of where value differences come from, and why I’ve insisted that despite the differences being real, they’re not incomprehensible. Most people have at least some level of metaphysical-STD-purity-intuition. And most people have at least some level of explicit-dynamic-modeling-of-STD-risk. Our differences come not from some people being enlightened and other people being mutants with the bizarre idea of “sluttiness” as a terminal bad, but from different people settling on different parts of the ladder from totally-endorsed-value-based-on-essence to total-explicit-modeling.

II.

A natural interpretation of Part I: people with explicit modeling are smart and good, people who still use metaphysical heuristics are either too hidebound to switch or too stupid to do the modeling.

I think this is partly right, but since our goal is to make value differences seem less clear-cut and fundamental, I want to make the devil’s advocate case for respecting metaphysical heuristics.

First, the heuristics are, if nothing else, proven to be compatible with continuing to live; the explicit models often suck.

Soylent uses an explicit model of nutrition to try to replace our vague heuristics about “eating healthy”. I am mostly satisfied with the quality of its research; it generally avoids stupid mistakes. It does not completely avoid them; the product has no cholesterol, because “cholesterol is bad”, but the badness of cholesterol is controversial, and even if we grant the basic truth of the statement, it applies only at the margin in the standard American diet. If you eat only one food item, you had better get that food item really right, and it turns out that having literally zero cholesterol in your diet is long-term dangerous. This was an own-goal, and a smarter explicit modeler could have avoided it. But explicit models that only work when you get everything exactly right will fail 95% of the time for geniuses and 100% of the time for the rest of us.

And even if Soylent had avoided own-goals, they still risk running up against the limit of our understanding. Decades ago, doctors invented a Soylent-like fluid to pump into the veins of patients whose digestive systems were so damaged they could not eat normally. These patients tended to get a weird form of diabetes and die. After a lot of work, the doctors discovered that chromium – of all things – was actually a really important dietary nutrient, and nobody had ever noticed before because it’s more or less impossible to run out of chromium with any diet except having synthetic fluids pumped into your veins. After years of progress on nutritional fluids, the patients who need them no longer die; we can be pretty sure we’ve found everything that’s fatal in deficiency. But these patients do tend to feel much worse, and be much less healthy, than people eating normal diets. How many mildly-important trace micronutrients are left to discover? And how many of these are or aren’t in Soylent?

We know that for some reason eating multivitamins does not work as well even for vitamin-having purposes as eating food with the relevant vitamins in it. This seems to have something to do with absorption and bioavailability, but we’re not sure what. Does Soylent have the good bioavailability of food, or the bad bioavailability of multivitamins? Nobody knows, because we still don’t quite understand how bioavailability works. All we know is that evolution seems to have found one viable solution, given that people who eat food do not immediately die. If we replace food with the intelligent application of our best available explicit models, we might do okay – or we might feel vaguely ill all the time because there’s something important we’re missing.

On the society-wide level, the sort of explicit modeling that created Soylent becomes high modernism, the philosophy critiqued in James Scott’s Seeing Like A State. You subject everything to the command of a central planner, who is supposed to be able to explicitly model social dynamics, and you try to prevent people from using fuzzy evolved heuristics like tradition or “the way things are”. The extreme version of this is the world of those Young Adult dystopias: can’t justify exactly why there should be families? Then families are just obsolete detritus of our evolutionary past, and we should form a Department Of Child-Rearing that takes all the kids and subjects them to carefully-doled-out industrial-scale parenting techniques.

Second, all of our values are unjustifiable crystallizations of heuristics at some level, and we have to have some values.

One of the examples above supposes that our love of nature comes from heuristics about where to find food and water. Suppose we proved this conjecture was right. Given that we can now order pizza and bottled water to concrete lots, the heuristic is obsolete. Does this mean we should stop caring about nature, and cut down all our forests and national parks and replace them with concrete lots? Suppose this would be very profitable, and that on cost-benefit analysis this outweighs the practical economic benefits of wild spaces (carbon sinks, drug discovery from exotic species). Is there any remaining reason we still want the national parks?

Compare this to punishment-for-the-sake-of-punishment. Maybe now we can replace this with an explicit model of consequentialist punishment where we should only punish people up to the point where it’s necessary to have a safe and stable society. Returning to the dialogue:

Simplicio: I admit – I believe in punishment in a sense stronger than as a heuristic for consequentialism. I think it’s morally important, in a terminal sense, that evildoers be made to suffer for their deeds. Not suffer infinitely. But suffer some amount proportional to how much they hurt others. I want this regardless of whether it deters them or not.

Sophisticus: But that’s just reifying a weird misfiring of an obsolete heuristic about how to maintain a safe community.

Simplicio: Yup! And me wanting Yellowstone to continue to exist is just reifying a weird misfiring of an obsolete heuristic about how to get delicious elk meat. And surely you don’t want to pave over Yellowstone.

Sophisticus: I take joy in Yellowstone. That’s an emotional experience in my brain. I’m happier and more comfortable in Nature. Even if the heuristics that produced this are wrong, that feature of my brain isn’t going away any time soon. So on a consequentialist level, I can argue that Yellowstone should be maintained for my sake and the sake of everyone else who enjoys it, even though I’m not sure my enjoyment comes from a reasonable source.

Simplicio: I take joy in watching murderers and rapists get what they deserve. This is a base-level pleasure for me, just like seeing trees and mountains are for you. I am under no more imperative to justify what I want than you are.

Sophisticus: You are, though. Because you directly desire for people to suffer, which violates some of our other shared values. We have to reach reflective equilibrium among our values, and for me at least the value of wishing happiness rather than suffering on other people overwhelms the desire for punishment.

Simplicio: First, I think we should be careful to frame it the way you just did: “Making people suffer for their crimes is good, but this is outweighed by other goods”. If we say it that way, it sounds no more exotic than the trolley problem.

Sophisticus: It’s at a –

Simplicio: – but second, if paving over Yellowstone would have economic benefits, then those benefits would cash out in jobs, lower housing costs, cheaper consumer goods, and the like. All of those produce utility for people. Both of our weird preferences – mine for punishment, yours for nature – satisfy some crystallized heuristic at the expense of general utility. I still fail to see how we’re different.

Sophisticus: I agree that preserving Yellowstone may incidentally fail to maximize utility. But it seems like you’re directly aiming at reducing people’s utility. That’s a pretty big difference.

Simplicio: Exactly which principle are you invoking here? The act-omission distinction? Or the principle that the morality of an act depends upon what feelings are going through your head when you do it?

Sophisticus: Um…

Simplicio: Because I think both of those are sometimes useful – as heuristics. But if you’re going to crystallize those heuristics, let me have my crystallized heuristic about punishment.

Simplicio is actually being nice here. If he wanted to be especially brutal, he might ask Sophisticus something like – wait, why are we privileging utilitarianism (here being called “consequentialism”, though both of them seem to be working from an implicitly utilitarian framework) anyway? Utilitarianism says that what’s really important is reducing suffering, but we can invent an evolutionary story for that too. We want to help other people and make them happy because that’s a useful heuristic for creating a flourishing community, being well-liked, and being likely to have other people help us in our own time of need. But some utilitarian applications of this principle go beyond that; caring about effective charity for the Third World, or wild animal suffering, or anything in those realms takes us just as far from the proper domain of our help-and-don’t-harm-others heuristic as pizza delivery to a concrete suburb takes us from our nature-as-fertile-lands heuristic. Why should we privilege the harm foundation over the justice foundation? Why not just say “My urge to relieve suffering conflicts with my urge to inflict punishment on evildoers. Both urges have their place, and either can be extended out to infinity with weird results. Today I choose my urge to inflict punishment; tomorrow I might choose the other. So it goes.”

To be absolutely brutal about it:

EXPLICIT MODEL: Helping others will key me in to networks of reciprocal altruism and raise my status in the community

EMOTIONAL EXPERIENCE: Desire to help others, empathy, horror at the suffering of others

REIFIED ESSENCE: “Utility”

ENDORSED VALUE: Utilitarianism, the belief that maximizing utility is the highest good regardless of what other goods it produces

III.

Leave it there, and the fundamental-value-differences narrative starts to sound more appealing again. I reify and endorse utility, you reify and endorse punishment, now we have to fight. So I want to talk about how in principle people end up choosing what level to crystallize heuristics at.

First, let’s be blunt: dumber (here meaning either less educated or lower-IQ) people probably crystallize heuristics lower on the ladder. Chimpanzees, cavemen, and children can’t understand game theory and shouldn’t try. They usually run off instinct and taboo, and if you take that away from them they will just get confused.

There are widely replicated findings that higher-IQ and more-educated people tend to be less socially conservative. Social conservatism means a lot of things, but I think in this case it’s probably a stand-in for where you crystallize your heuristics; sexual purity intuitions are an obvious example. This makes sense; smarter people are probably more successful at explicit models, or at least have a higher estimate of their likelihood of success at such models. Smarter people also do better on the Cognitive Reflection Test, a measure of whether people go with snap intuitive answers or try to explicitly model situations.

But there’s also reason to think that the more exposure someone has to a heuristic-relevant situation, the more compelling the heuristic will be. I described how my great-grandmother, usually a very kind and forgiving person, became more vengeful once someone close to her was murdered; I was able to partly replicate her experience just by vividly imagining terrible crimes happening to people close to me. This matches the cliche that “a conservative is a liberal who has been mugged; a liberal is a conservative who has been arrested”.

One of the weirdest examples of this is the germ theory of democracy, which finds the presence of tolerant multicultural individualist societies to be correlated with pathogen stress even after accounting for other relevant confounders. In this view, people at high risk of disease feel an urge to stick to people they know well – their family members, neighbors, and co-ethnics – to avoid the sort of mixing that spreads exotic pathogen strains. People at low risk of disease are more cosmopolitan, happy to receive anyone who comes around.

Related: people crystallize heuristics on a lower level when the system the heuristic is meant to model is one they care about getting right. Consider Haidt’s Moral Foundation of Authority, which he says conservatives have and liberals lack. This fits nicely into the explicit-model-to-essence-to-endorsed-value model. The explicit reasoning is that social groups need to coordinate, and once whatever mechanism you have for producing rules has produced its rules, people need to respect and listen to them or else they’ll be in a Hobbesian state of nature. Liberals may say they’re “against authority”, but when the Vice-President of the NAACP asks an NAACP staffer to prepare a report by next week, she will probably prepare the report by next week, not just because she’s afraid of being fired but because the NAACP will fail if it can’t handle basic tasks like “get reports prepared”. When a labor union leader tells the workers to strike, they will probably strike, even if they don’t feel like it, because they know that unless they act as a coordinated group they’ll never be able to exert any power. So:

EXPLICIT MODEL: Top-down organization is an effective way to coordinate large organizations

EMOTIONAL EXPERIENCE: Respect, deference

REIFIED ESSENCE: “Authority”, “legitimacy” (in the sense of “this guy is the rightful king, but that guy is a pretender”)

ENDORSED VALUE: Respect for authority

Frimer et al (study, popular article) have done some work on this. They find that when you ask people to imagine “Authority”, they imagine a police officer, a military commander, or some other stereotypically conservative figure who conservatives respect and liberals do not. Since liberals have little interest in making the police more effective, there’s no reason for them to “respect authority” in this case. When researchers give subjects the example of some environmental organization trying to coordinate its environmental activism, liberals are much more likely to say people should respect the authority of the organization leaders.

A more recent study (study, popular article) found similar results, with similar caveats. It investigated a construct called cognitive rigidity, asking questions like “True or false: a group which tolerates too much dissent among its members cannot exist for long.” Conservatives tend to agree with the base question more, but when you specify “an environmental group”, liberals agree more. I still think this is kind of stupid, and more about liberals’ willingness to agree with anything that sounds vaguely pro-environmental. At one point the researchers take the statement “A dead hero is better than a live coward”, change it to “When it comes to preventing global warming, a dead hero is better than a live coward”, and liberals just go ahead and agree with it instead of asking what the f@#k. I consider these studies very questionable and preliminary. But here are some true-or-false questions I offer to the next person who does a study like this:

A: It is dangerous to show too much mercy to people who commit crimes

B: It is dangerous to show too much mercy to people who commit gun violence

A: Barack Obama was the president, and his opponents should have treated him with respect even when they disagreed with his policies

B: Donald Trump is the president, and his opponents should treat him with respect even when they disagree with his policies

A: If I were an employee in a company, I would try to carry out the CEO’s orders even if I disagreed with them, because otherwise we would be disorderly and totally ineffective

B: If I were a member of a labor union, I would try to carry out the union leader’s policies even if I disagreed with them, because otherwise we would be disorderly and totally ineffective

I’m not asserting that liberals and conservatives would answer their respective questions exactly the same. My guess is that even in a value-neutral way, conservatives have these foundations a little more crystallized than liberals, just as they have most other heuristics a little more crystallized than liberals. But I am saying that nobody has done this experiment correctly, and I am suspicious that the groups would be closer than people think.

People can choose metaphysical heuristics or explicit models based on their own innate tendencies, their education, their intelligence, their experiences, and what kind of question we’re thinking about. Rather than talking too much about fundamental value differences, we should be asking where a given person has chosen to place themselves on the metaphysical-heuristic-to-explicit-model ladder at any particular moment.

IV.

This way of looking at things will be valuable if it helps people who crystallize heuristics at different levels understand each other. Here are a couple of common mistakes I think I see:

People who endorse values based on crystallized essences might think that people who use explicit models are weirdly and inexplicably evil, because the essentialists assume the modelers believe in the essences but don’t care about them, or even prefer their opposites. If you believe in Essential Purity, then someone who doesn’t might seem like someone who supports Essential Impurity, rather than somebody working off a totally different system.

On the opposite side, if you’re a pure consequentialist, you might see someone who endorses crystallized-essence values as doing something inexplicable and evil. If you think of it as no different from what you do when you like nature, it might be easier to understand.

I have to bring these up because they’re obvious, but really I don’t see either of them that often. The main mistake I see is people on both sides having at least a moderately good understanding of how to do explicit causal models, but accusing the other side of being Neanderthals who only care about a crystallized metaphysical essence and have totally abandoned reason.

That is, I see communists assuming every single libertarian in the world is a fundamentalist about property rights who thinks they’re so sacrosanct that they must be maintained even in the face of horrible suffering, whereas they (the communists) quite reasonably want what makes a flourishing society full of happy people. The libertarians, meanwhile, say they just want universal wealth and prosperity, whereas the communists are so bloody-mindedly attached to the metaphysical principle of Equality that they don’t care whether attempts to create it will lead to gulags and total economic collapse.

I see cosmopolitans believing that they want what’s best for society, but that nativists are working off an essentialist racism, where foreigners are inherently inferior in some vague metaphysical way. And the nativists, for their part, are arguing that they’re really concerned about the effects of too much immigration, but the cosmopolitans’ blind adherence to Multiculturalism as good in itself makes them unwilling to debate the real-world consequences of their actions.

Since most metaphysical heuristics are a stand-in for something real, we should expect blocs of allied people to contain some people who want the real thing, and other people who are running metaphysical heuristics that point at the thing. That is, the Tough On Crime bloc will have some members who just want to deter crime more, and other members who believe criminals deserve to suffer because of metaphysical Justice. The Soft On Crime bloc will have some members who question whether people need a ten year prison term for stealing a CD-ROM, and others who believe that prison is torture (metaphysical essence!) and so unconscionable regardless of its deterrent effect. If both sides try to position themselves as the hard-headed practical people, but weak-man the other side as having some incomprehensible metaphysics that makes them impervious to reason, that’s going to effectively shut down the possibility of debate.

Except my actual position is that the same sorts of experiences that give you the metaphysical Justice intuition – having personally been a victim of crime, really caring a lot about making it as rare as possible, not being very well-educated – are also likely to make you overestimate the consequentialist value of deterring crime (and vice versa for the other side). My guess is that a lot of people fluidly move back and forth between these levels, just as I would expect people who are very interested in only eating organic food to also be more likely to care about what percent RDA of vitamins are in their food. This isn’t sinister, or a reason to think that people are only claiming consequentialist justifications for their heuristics. It’s just a natural consequence of the way our values get produced and the fuzziness in everybody’s value system.