Do you make the world a worse place by purchasing factory-farmed chicken, or by paying for a seat on a transatlantic flight? Do you have moral reason to, and should you, refrain from doing these things? It is very unlikely that any individual act of either of these two sorts would in fact bring about a worse outcome, even if many such acts together would. In the case of factory-farming, the chance that your small purchase would be the one to signal that demand for chicken has increased, in turn leading farmers to increase the number of chickens raised for the next round, is very small. Nonetheless, there is some chance that your purchase would trigger this negative effect, and since the negative effect is very large, the expected disutility of your act is significant, arguably sufficient to condemn it. This is true of any such purchasing act, as long as the purchaser is ignorant (as is almost always the case) of where she stands in relation to the ‘triggering’ purchase.
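To see the structure of this expected utility argument, here is a minimal back-of-the-envelope sketch; the batch size and the harm variable are illustrative assumptions of mine, not figures from the post or from the empirical literature:

```latex
% Illustrative sketch (all quantities assumed for the example).
% Suppose the supply chain responds to demand in batches of T purchases,
% so that roughly one purchase in T is the 'trigger' that leads to
% T additional chickens being raised. Let h be the harm involved in
% raising one chicken. Then the expected disutility of one purchase is:
\[
  \frac{1}{T} \times (T \cdot h) \;=\; h .
\]
% In expectation, each purchase does the harm of raising one whole
% chicken, even though it almost certainly makes no difference at all.
```

On these assumptions, the small probability and the large harm cancel exactly, so the expected disutility of each purchase is roughly the average harm per chicken; that is why the calculation can plausibly condemn each individual act.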

Arguably, there are many cases that cannot be dealt with in such a straightforward way. These are cases where a large number of acts, taken together, make the world a worse place, but none of these acts makes any negative difference on its own. In these cases, there is no ‘triggering’ act as in the factory-farming case, and so, arguably, a straightforward expected utility calculation would be insufficient to condemn any of the individual acts. Taking transatlantic flights, or engaging in other carbon-emitting activities that collectively damage the environment, is arguably like this, as there may be vagueness about when environmental damage occurs.

Plausibly, no plucking of any single hair on my scalp would make me into a bald man. And yet, together, several thousand such pluckings would do the trick. Perhaps there is a similar phenomenon in the case of environmental damage: no single walk across the grassy quad ruins it, no single small carbon emission destroys the atmosphere, and so on, but many such acts are collectively destructive. Consider a version of a more stylized case from Parfit: there are 1000 settings on an electric torture device, which has been hooked up to a victim. The victim can’t tell the difference between adjacent settings, but would certainly be in no pain at all if the device were at its lowest setting and would be in excruciating pain if it were cranked all the way up to ‘1000’. Next, each of 1000 people (who, we can suppose, don’t coordinate with each other) turns the device up just one setting, leaving the victim in agony. Each of the 1000 people can, it seems, claim that their act made no negative difference at all, since the victim can’t tell the difference between adjacent settings (we can suppose there is no phenomenological difference whatever to the victim between adjacent settings). It seems there is vagueness about when the victim’s pain level increases.
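The structure of the case can be put schematically; the notation here is mine, not Parfit’s:

```latex
% Schematic rendering of the torture case (notation assumed).
% Let p(n) be the victim's pain level at setting n, for 0 <= n <= 1000.
\[
  p(0) = \text{no pain}, \qquad p(1000) = \text{excruciating pain},
\]
\[
  \text{yet for each } i, \quad p(i+1) - p(i) \ \text{is phenomenologically nil}.
\]
% If each of the 1000 increments were literally zero, their sum would be
% zero too, contradicting the agony at setting 1000. So either some
% increment is real but unknowable, or it is indeterminate where the
% increases occur.
```

This makes the puzzle vivid: the increments cannot all be literally nothing, yet no particular one of them can be singled out as making a difference. The two ways out flagged in the comment above correspond to the epistemicist and indeterminacy options discussed below.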

What should we say about these cases, which seem to involve vagueness? Here are some options. First, we could say that since there is no chance that any individual act would make the world a worse place, and since we can condemn such an act only if it would actually or likely make the world a worse place, no such individual act can be condemned at all. This option is unsatisfying because, intuitively, there is something morally to be said against each individual’s turning up of the torture device in Parfit’s case. Second, we could condemn the individual acts without directly appealing to their effects. For example, with Kantians or rule consequentialists, we could say the relevant moral test of an act is ‘what if everyone acted that way?’ But there are independent problems with these views, and even if there weren’t, it is intuitive that we can condemn the individual acts in these cases at least by some sort of direct appeal to their effects (i.e. it is intuitive that we can condemn these acts at least on act consequentialist grounds even if we aren’t act consequentialists). Third, we could, with epistemicists, claim that vagueness really is just a kind of ignorance: in fact there is a single hair-plucking that would turn me into a bald man, we just don’t know which one it is; in fact there is an individual turning-up of the torture device that would increase the victim’s pain level, we just don’t know which it is; and so on. If this were right, it appears we could treat cases like Parfit’s and that of transatlantic flying as we would the factory-farming case – that is, as cases in which there are ‘triggering’ acts, such that we can, as before, use an expected utility calculation to condemn each individual act. I confess I am somewhat sympathetic to this third option, but epistemicism is controversial. So I will end the post with the following fourth option, to which I am also somewhat sympathetic:

In cases where it is genuinely indeterminate whether your act makes the world a worse place, you have a moral reason not to perform this act. The fact that it’s indeterminate whether it would make the world worse itself counts against the performance of the act (to what extent it counts against it, how to weigh this against competing considerations, and so on, are further questions). This simple thought seems attractive to my mind, but here’s a rival thought that doesn’t strike me as obviously incorrect: if it’s indeterminate whether your act makes the world a worse place, then it’s correspondingly indeterminate whether you have a moral reason not to perform it. A defender of this rival thought might argue that, in being attracted to the ‘simple thought’, I am conceiving of indeterminacy as akin to uncertainty or ignorance (like an epistemicist), and reasoning that if there’s a chance of your act making the world worse, then you have reason not to do it. But this isn’t what I’m thinking; I’m simply thinking that it’s worth avoiding acting in a way such that it is indeterminate whether so acting makes the world a worse place. A similar thought seems attractive in the case of egoistic concern: suppose I can either undergo a process that leaves me as well off as I would have been had nothing happened, or undergo a second process whereby it is indeterminate whether things will go as in the first process or I will instead suffer horribly for decades and then die. It strikes me as plausible that I have a reason to avoid the second process that I don’t have to avoid the first.

Even if I’m wrong about this, and the ‘rival thought’ is correct, we would still face the question of what to do in cases where it is indeterminate whether you have reason not to perform an act. I have intuitions about what to do in some such cases, at least when other things are equal: if the choice were between doing an act that you have determinately no positive reason to do and determinately no reason not to do, on the one hand, and an act that you have determinately no positive reason to do and indeterminately a reason not to do, on the other, it is determinate that you should do the former act. What to do in cases where other things are not equal seems a further, more difficult, question. For some fuller discussion of what to do in cases of vagueness or indeterminacy, see this paper, as well as this one.

At any rate, the fourth option sketched above at least offers a defensible way to condemn individual acts in ‘collective harm’ cases like Parfit’s while avoiding the problems that other options face.

(Many thanks to Roger Crisp, Teru Thomas, and Caleb Ontiveros, for their very helpful comments.)