In this post I want to consider whether something inconsistent is happening when someone simultaneously makes strong substantive ethical assertions – assertions about what they and other people ought to do in a specific scenario, what “the good” might be in general, whatever – and makes strong negative metaethical assertions. By a negative metaethical assertion I mean specifically one of four assertion types: non-cognitivist assertions that moral statements don’t express propositions; error-theoretical assertions that all moral statements are false; anti-realist assertions that there are no moral facts; and moral-skepticist assertions that no one knows anything about what’s right and wrong, and therefore that no one can justify any moral statements.

Before I get into the meat of this I want to make a few quick notes. The first, something people cleverer than me have failed on multiple occasions to understand, is that positive metaethical stances are existential and negative metaethical stances are universal. Thus in denying non-cognitivism, a cognitivist does not say that all moral statements express propositions, only that at least one does; in denying anti-realism, a realist does not say that all moral statements are about mind-independent entities, only that at least one is; and so on. This means that you can no more “mix” a positive and a negative metaethical theory than you can mix physicalism and dualism; once you try to mix them, you have the positive theory. And this is what we should expect from a well-constructed debate: that competing theories contradict one another.

The second note should limit my scope. I want to quickly deal with error theory and moral skepticism; then the meat of my argument will be about non-cognitivism and anti-realism. First, no error theorist can convincingly make ethical exhortations. Once we know an exhorter is an error theorist, we can ask: “Don’t you think that’s false?” And when they reply “yes”, we can ask, “So why do you keep saying it?” And there’s no feasible reply. Only slightly less clear is that no moral skeptic can convincingly make ethical exhortations. Once we know an exhorter is a moral skeptic, we can ask: “Do you know that?” And when they reply “no”, we can ask, “So why do you keep saying it?” And they cannot reply with “Because x” – it is precisely because they believe there is no “because”, because they believe there isn’t any satisfactory justificatory mechanism for moral statements, that they are moral skeptics.

I had two very different communities in mind when I first wrote this a few months ago. One is LessWrong rationalists who are usually very committed utilitarians and very committed metaethical negativists of one sort or another; if I had to psychologize, I’d say that this is because both positions independently seem like the most “rational” option, for various reasons, and that nobody has made a compelling case that their conjunction is sufficiently irrational. The other is critical theory-inspired social justice-y leftists. This group seems quite confident that all morality is socially constructed, that ethics are determined by people in power, that everything is contingent and nothing is true outside of society and perception, and so on; and yet they make very confident claims using ethical terms, like “justice” itself. It would be a mistake to suggest that these groups share a lot here; after all, the motivation for the rationalists’ metaethical negativism is probably a commitment to physicalism (or at least naturalism, or at least Occam’s Razor), while the leftists’ motivation is close to its opposite, social constructivism. But maybe by focusing on potential theoretical stances in the abstract I can get two birds with one stone.

To begin, here’s an intuitive case that metaethical negativism is consistent with ethical exhortation: The subject matter of ethics is not the subject matter of metaethics; they cannot contradict one another, so it is silly to say that you need to hold a certain sort of metaethical position to hold a certain sort of ethical position. This, however, looks far too strong given the second note above. The error theorist, for instance, really can’t hold any substantive ethical position, at least not if they want us to remain confident in their rationality: error theory and any moral proposition contradict one another. However, it might look as though error theory and moral skepticism are special cases; that was probably why it was so easy to deal with them above. Unlike those two views, non-cognitivism and anti-realism don’t directly deny that we can know any moral truths (whether because none are true or because none are justifiable), and therefore they will generate neither any direct contradictions nor any necessary irrationality (or rudeness) on the part of the speaker. So that’s a decent intuitive case for yes; I think the intuitive case for “no” is stronger, but I’ll tackle it separately for the non-cognitivist and the anti-realist.

Ethical exhorters are, in the first place, people who argue with others about what’s right and wrong, often heatedly and at great length, and try to convince them. For a non-cognitivist, this seems like a rather irrational process to engage in: if someone makes a moral statement, they haven’t expressed a proposition, and there is therefore nothing to contradict. For instance, one type of non-cognitivism is expressivism, the theory that saying “x is right” and “y is wrong” amounts to saying “yay x!” and “boo y!” This is a fine theory as far as it goes, but it is unclear why a rational expressivist would feel capable of convincing someone else, or of being convinced, by these sorts of noises. It is not even clear that they should be of the opinion that they disagree: do fans on different sides of a stadium “disagree” with one another, and is it sensible for them to mount arguments? Under what circumstances is it rational to stop saying “yay” and start saying “boo”, and vice versa?

In a fairly interesting essay in Peter Singer Under Fire, “Singer’s Unstable Meta-Ethics”, Michael Huemer made some noteworthy points in this regard. Something that sticks out to me, that I would not have emphasized or even really noticed had I not read it, is that the moral statements of my targets are often both demanding and revisionary. (Huemer is writing only about Singer, of course.) They say: to be doing the right thing you must do a lot and you must do something different than you might have thought. Now we might imagine that in most cases, coming to this conclusion would require some sort of rational reflection on moral principles and so on. In fact, such rational reflection is characteristic of Singer’s own work. This is intuitive regarding the demanding nature of these moral systems because most of us will not have a spontaneous reaction of “yay doing a lot of stuff for other people, boo doing a lot of stuff for myself”. It is intuitive regarding the revisionary nature of these moral systems because it is tough to think of a mechanism other than rational reflection that would cause us to revise our moral views. And, in fact, the “boo”/“yay” metaethical picture is especially difficult to stomach when it comes to Singer. We can kind of handle babykilling or whatever if advocating it means saying something true about an accounting of utils, but if what Singer wants us to say is “yay babykilling”, that seems a step too far.

There is something deeper Huemer points out that comes out of the discussion of preference utilitarianism specifically. This is, again, not something I would have come up with on my own; I’m not even sure Huemer gets it quite right (and in fact he may have inspired me to write something rather different), but it’s kind of fascinating. If the metaethical view is that moral statements express our preferences, something funny is going to happen when some people’s preferences turn out to be in line with preference utilitarianism. I don’t know if the funniness is quite “circularity”, but it is definitely odd. Preference utilitarianism seems to rely on a set of roughly self-interested preferences which I weigh against the similarly self-interested preferences of others. But if what preference utilitarians are doing when they advocate preference utilitarianism is expressing a preference, it doesn’t seem we have any prospect of identifying the right sorts of individual preferences for preference utilitarianism to pick out. Imagine a world with three people: two preference utilitarians and one person who isn’t. Even if the non-preference utilitarian’s self-interested preferences are very weak, they will still become a utility monster on this conjunction of metaethical and ethical views, because there are simply no other relevant preferences to maximize. This line of reasoning may prove too much; it may cut harder against the ethical than against the metaethical part of Singer’s position. But I figured it was worth flagging.
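The three-person setup can be made mechanical with a toy aggregation. This is only a sketch of the thought experiment; the outcome labels, agent names, and numbers below are all hypothetical choices of mine, not anything from Huemer or Singer:

```python
# Three agents. Two are preference utilitarians whose moral talk, on the
# expressivist reading, merely expresses the second-order preference that
# aggregate preference satisfaction be maximized; they contribute no
# first-order (self-regarding) preferences of their own. The third agent,
# C, has weak self-interested preferences -- but they are the only
# first-order preferences in the world.

outcomes = ["feast_for_C", "equal_split", "ascetic_life"]

first_order = {
    "util_1": {},                     # preference utilitarian: no entry
    "util_2": {},                     # preference utilitarian: no entry
    "C": {                            # weak preferences, but the only ones
        "feast_for_C": 0.1,
        "equal_split": 0.05,
        "ascetic_life": 0.0,
    },
}

def aggregate(outcome):
    """Sum of everyone's first-order preference satisfaction."""
    return sum(prefs.get(outcome, 0.0) for prefs in first_order.values())

best = max(outcomes, key=aggregate)
print(best)  # -> "feast_for_C": C's favorite outcome wins
```

The sketch makes the oddity visible: once the utilitarians’ only “preference” is the second-order one that the aggregate be maximized, aggregation has nothing to rank outcomes by except C’s preferences, however faint, and C becomes the utility monster.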

Now Singer’s response is both interesting and confusing. He draws a lot of distinctions not found in Huemer’s essay. In some places it is difficult to figure out where he stands. He begins by saying he is actually ambivalent about non-cognitivism but ends up fleshing out a less “crude” non-cognitivism than what Huemer outlined, which derives from Hare. Under Hare’s view, moral statements are prescriptions, not preferences. And they are prescriptions of a very specific type: they have to be coherent, consistent, and universalizable in the sense that “to be able to make a moral judgment one has to put oneself in the position of all those affected by the prescription”. (This is, tangentially, not good news for the view that metaethics and normative ethics are unrelated; in fact, this metaethics, Singer thinks, very directly affects, maybe even entails, his substantive ethical views.) Singer actually allows, however, that “so strong a notion of universalizability cannot be defended by an appeal to moral language, and therefore goes beyond non-cognitivism. It must, instead, be grounded in some claim about the requirements of reason”.

It is unclear to me, however, that universalizability is the only culprit. In what sense can a statement that does not express a proposition be inconsistent with another statement? Well, maybe we can imagine this with prescriptions and imperatives. “Don’t go to the store” seems to be inconsistent with “Go to the store”. But this inconsistency looks a lot like propositional inconsistency: the inconsistency arises precisely because there aren’t any possible worlds at which the person hearing the command both did and did not go to the store. Inconsistencies don’t have to be so obvious, either; they can be of exactly the sort that only rational reflection will uncover. For example, one might make the following argument to a (specific, potentially inconsistent) utilitarian: “You say to maximize utility. But you also say not to engage in or allow mob justice, ever. This is inconsistent: you neglect the fact that mob justice and scapegoating may maximize utility even when the punished party is not guilty.” And if our (specific) utilitarian decides this is a good argument, they may stop making one of those two statements. As a reviewer writes, such revision and updating

rests on the implicit belief that there are moral facts, that reason can help us determine what they are, and that it can motivate us to act in accordance with them. It also rests on the implicit conviction that our motives, as well as our factual beliefs, are subject to rational requirements; in particular, it rests on the conviction that reason can motivate us to refuse to act on certain desires and to change some of the most basic ones. Often this takes the form of an appeal to consistency …

Under this view, Singer is required not only to be a cognitivist but a moral realist as well. This may, of course, explain why Singer himself notes that there is in addition to “Singer the non-cognitivist” a separate “Singer the objectivist” who came through in, for example, The Expanding Circle.

A cognitivist anti-realist will probably say something like the following: moral statements don’t merely express preferences; they express propositions about preferences. This, I will note, seems to solve a few pressing technical problems for non-cognitivism, like the embedding problem. As we saw just above, however, it is not clear it can avoid the sorts of worries raised about ethical exhortation any better than non-cognitivism can. I should note that, setting aside error theory (which I’ve already dealt with above), it is very difficult to conceive of a non-cognitivist moral realism. This means that, for our purposes, realism is strictly a stronger condition than cognitivism, and (equivalently) non-cognitivism is strictly a stronger condition than anti-realism. In other words, if I convince you in this section, you should be convinced about the above section too, even if you didn’t find it convincing. (Huh? Well, I mean: about the thesis of the above section.) However, if you don’t find this section convincing, it shouldn’t affect your view of the above section. I’m not sure which of these sections is “the important one”. Non-cognitivism and anti-realism are actually run together quite a bit – you can see that in the Stanford Encyclopedia articles on both, as well as in the Huemer and Singer essays – so to an extent, what works for one section will kind of work for the other.

The “critical theory” view, in more detail, probably says something like: yes, moral statements express propositions, and those propositions are about the contingently embedded values of groups and societies. Here’s the intuitive case against an ethical exhorter having that sort of metaethics: any attempt to justify adopting one group or society’s values over another (i.e., any ethical exhortation) would need an ethical footing beyond mere reference to the existence of a group or society’s values. But the social constructivist metaethics entails that we can’t get such a footing. So cultural relativism doesn’t give much hope for ethical exhortation. This is a critique that dates back decades if not centuries, Wikipedia suggests.

The “rationalist” view gives us a more central example. Yudkowsky has said things to the effect that ethical exhortation should rightly be conceived not as trying to convince someone of the truth of certain moral facts but as trying to convince them of what their own attitudes (which are, after all, what moral statements are about) are or entail. This need not be limited to (allegedly) rational requirements relating to moral language, as above; cognitivist anti-realists can also draw on information about, say, psychology to make guesses or arguments about their interlocutors’ attitudes. Now the general intuitive case for “no” here is this: ethical exhortation generally takes the form of trying to convince someone to change their attitudes, to adopt different ones, whether in word or in deed. But if moral statements are just about attitudes, if they just describe them, then they are doing exactly the opposite of what ethical exhortation is supposed to do.

Perhaps people can be mistaken about their own attitudes. If this is right, they can be mistaken in a few ways. First, the tools of the non-cognitivist: people can mistakenly think their attitudes are consistent or coherent and modify them when they find they’re not. This is merely an effect of rational reflection. Again, it is not clear that attitudes can conflict with or contradict each other in the way propositions can, but let’s move past that. Second, the cognitivist anti-realist has the tools of psychology; in other words, people might be mistaken point blank about their own attitudes, and when confronted with the attitudes of others, they might figure this out. This view incorporates nicely a lot of rationalist ideas. For instance, people’s mistakes about their own attitudes can be attributed to the fact that humans are bad at metacognition, or to various cognitive biases, etc.

However, if these are all the tools we’ve got, moral discourse is rendered impotent, if not impossible. Here’s an illustration of that; let’s call it the “what if you’re wrong” problem. What happens if we’re wrong that people share the same attitudes? (And given how many people there are, we’re probably wrong about that.) Imagine there exists some person who genuinely has the attitude that, say, all thirty-year-olds should be punched in the face. This person is saying something true when they say, “All thirty-year-olds should be punched in the face.” Of course, we are also speaking truly when we say, “No, they shouldn’t.” But we are not contradicting them. We are merely reporting our attitudes in response to them reporting theirs. It is difficult to justify the urgency of ethical exhortation when we aren’t even disagreeing.
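The non-disagreement point can itself be made mechanical. On the attitude-report semantics, the two sentences have different truth-makers, so both come out true at once. Here is a toy sketch; the dictionary-of-attitudes model and the speaker names are my own hypothetical rendering of the semantics, not a claim about how any actual theorist would formalize it:

```python
# If "all thirty-year-olds should be punched" merely reports the
# speaker's own attitude, then each assertion is evaluated against a
# different attitude: the puncher's sentence is made true by the
# puncher's attitude, ours by ours. Both are true; neither contradicts
# the other.

attitudes = {
    "puncher": {"punch_all_thirty_year_olds": True},
    "us":      {"punch_all_thirty_year_olds": False},
}

def asserts_truly(speaker, claimed):
    # On this semantics, a moral assertion is true iff the speaker
    # really has the attitude their sentence reports.
    return attitudes[speaker]["punch_all_thirty_year_olds"] == claimed

both_true = asserts_truly("puncher", True) and asserts_truly("us", False)
print(both_true)  # -> True: both speak truly, so no disagreement occurs
```

The design choice doing the work is that truth conditions are indexed to the speaker: there is no shared proposition for the two parties to dispute, which is exactly the impotence the paragraph above describes.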

But let’s say we managed to get past the problem about non-disagreement. Imagine we said “No, they shouldn’t,” and the puncher took this to mean, “You are wrong about your attitude.” We have already stipulated that the puncher is a genuine puncher and I think it is difficult to deny that someone, somewhere has this attitude or preference, or a similar one. (And even if we did, I don’t see how the very possibility of moral discourse can rest on an empirical claim about the existence of punchers or similar kinds of people.) Once we manage to disagree, we’re wrong. This means, among other things, that if moral disagreement is possible for this sort of cognitivist anti-realist, we necessarily end up looking inconsistent. Amongst ourselves, one of us says, “Not all thirty-year-olds should be punched in the face,” and the others say, “That’s right.” Then the puncher comes in and says, “All thirty-year-olds should be punched in the face,” and we all say again, “That’s right.”

Here is an analogous situation. A friend (Bob) says to me: “I just sent you something which proves theorem T.” I say: “Excellent. I trust your judgment that this proves_Bob theorem T.” He gets irritated. “No; I think it actually proves theorem T.” I’m unmoved: “Great. Based on my semantics, I can discern that you believe this actually proves_Bob theorem T.” Now you might say: Okay, but what if he asks, “Does this prove_tangigo theorem T? That’s what I want to know.” But what if I say “No”? He asks “Why not?” And I say “The concept of proof_tangigo requires the involvement of zucchini in some way. I don’t really care how. But they’ve gotta be involved.” And now he has to say one of two things: “That’s not the real concept of proof/proof_tangigo,” or “That’s not a good concept of proof/proof_tangigo.” But it’s turtles all the way down: “Great, I see that it’s not a real_Bob concept, nor a good_Bob one…”

Here is my original idea, in brief. Moral statements are statements about actions and one way of cashing out cognitivist anti-realism is to say that the presence of moral terms in a statement about an action entails that performing that action aids the development or requires the use of a moral capacity. Which capacities are moral? I don’t know; if this were our metaethics, that would be a question of normative ethics. This metaethical view clearly lends itself to virtue ethics as a normative stance, although I’m not quite sure it does so necessarily. It also helps explain some odd results of experimental philosophy. For example, it’s been found (I think?) that in certain cases, individuals are praised far more for action than inaction, but blamed equally for the two. (I should look into this more!)

It is not immediately clear to me how this project would fare with regard to the possibility of moral disagreement and ethical exhortation. Some disagreement would look quite odd. For instance, if action A (let’s say, since we’re among effective altruists, donating to a nonprofit activist group) flexes my moral capacities but a mutually exclusive action B (let’s say donating to a malaria net nonprofit) flexes them more, it might turn out to be true when someone says “You should do A” and also true when I say “I should refrain from doing A and do B.” This is intuitively kind of a problem, but if it can be saved, it may be a pragmatically cool result. It codifies our sense that we shouldn’t let the perfect be the enemy of the good. On the other hand, it might be hard to measure one use or development of moral capacities against another. Maybe we can have a notion of “joutils” – combining joules and utils – or util/hours.

Thinking of morality as a capacity we can develop also helps militate against the problem Susan Wolf noted in “Moral Saints”, which is, roughly, that under some theories, moral saints end up boring. But people who develop and use interesting capacities are not boring. I think some of the hardest cases for this idea will probably be situations where one’s self-interest coincides with what’s right, and thus where doing what’s right requires no real use or development of moral capacities, though doing something else would. However, this, too, might match up with our sense that doing the right thing for the wrong reasons is not particularly praiseworthy, and that doing the wrong thing for the right reasons is not particularly blameworthy.