If you hold a weight of 10 grams in your left hand and 20 grams in your right hand, you will feel a difference, but if you hold a weight of 1.01 kilograms in your left hand and 1.02 kilograms in your right, you probably won’t feel a difference, even though the real difference in weight is the same 10 grams. This is an example of the Weber-Fechner law: a change in stimulus (such as the pressure of a weight on your hand) produces a smaller change in perception (such as the felt heaviness) when the stimulus is larger. The stimulus-perception relation is often a logarithmic function. Other examples are the loudness of sounds (measured on a logarithmic decibel scale), the brightness of stars (measured on a logarithmic stellar magnitude scale) and the number of objects: you can immediately see a clear difference between 10 objects and 20 objects, but not between 1010 objects and 1020 objects. Another example is the price of a product: if in the supermarket you can choose between a product of 10 euro and an equivalent product of 11 euro, you are likely to choose the cheaper product, which saves you 1 euro. But now suppose you are buying a car and you can choose between a car of 4711 euro and one of 4721 euro. Now you hardly notice the difference, even though it is ten times larger: you could save 10 euro by buying the cheaper car.

All our senses and our subjective judgments have this law of diminishing marginal effect. A marginal effect is the change in effect (e.g. a subjective valuation, estimation or perception) that is the result of a unit change in an objective, measurable variable (e.g. weight, sound amplitude, amount of light, number of objects). This marginal effect is a function of the objective variable, but it is not a linear function. The law of diminishing marginal effect says that the function is concave, such as the logarithmic or square root function.
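As a small sketch of this law, the snippet below models perception with a logarithmic function (an illustrative choice of constants, not empirical values) and shows that the same 10-gram difference produces a much smaller perceived change at a higher baseline weight:

```python
import math

def perceived_intensity(stimulus, k=1.0, s0=1.0):
    """Weber-Fechner sketch: perception grows with the log of the stimulus.

    k and s0 are illustrative constants, not empirical values.
    """
    return k * math.log(stimulus / s0)

# Perceived change from 10 g to 20 g:
small_weights = perceived_intensity(20) - perceived_intensity(10)
# Perceived change from 1010 g to 1020 g (the same 10 g difference):
large_weights = perceived_intensity(1020) - perceived_intensity(1010)

print(round(small_weights, 4))  # log(2) ≈ 0.6931: a clearly felt difference
print(round(large_weights, 4))  # log(1020/1010) ≈ 0.0099: barely perceptible
```

The marginal effect of one extra gram shrinks as the total weight grows, which is exactly the concavity the law describes.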

This law of diminishing marginal effect has important implications for effective altruism, where the goal is to do good and help others as effectively as possible. Here are three implications.

1. Poverty reduction and diminishing marginal utility of money

If you are very poor, an extra amount of money will strongly increase your happiness. But if you already earn a lot of money, you probably won’t even notice a higher income. Consider someone like Bill Gates, who earns about 114 dollars per second. If he found a 100-dollar bill on the street, would he bother bending over to pick it up? His wealth is about 100,000 times that of an average person in a developed country, which means that for him, buying a house feels like buying a loaf of bread does to us.

In contrast, someone in extreme poverty is about 100 times poorer than we are. For that person, finding 1 dollar on the street feels like finding a 100-dollar bill does for us, in terms of increased happiness. That is why an organization like GiveDirectly can be highly effective at improving well-being and promoting happiness by giving the poorest people unconditional cash transfers.

These are examples of the law of diminishing marginal utility of money, also known as Gossen’s law. Utility measures how valuable or preferable something is. The marginal utility of money measures how much we value or prefer an extra unit of money (an extra dollar). If we already have or earn a lot of money, our preference (measured by an increase in happiness or satisfaction) for an extra dollar diminishes. As a result of the law of diminishing marginal utility of money, increasing the income levels of the poorest people should get priority: if rich people give some money to the poorest, the happiness of the rich decreases only slightly while the happiness of the poorest increases a lot.
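A minimal sketch of this law, assuming a logarithmic utility function (one common illustrative choice of a concave function, not an empirical model), compares the utility gained from one extra dollar at two very different wealth levels:

```python
import math

def utility(wealth):
    """Illustrative concave (logarithmic) utility of wealth; units are arbitrary."""
    return math.log(wealth)

def marginal_utility(wealth, extra=1.0):
    """Utility gained from one extra dollar at a given wealth level."""
    return utility(wealth + extra) - utility(wealth)

mu_poor = marginal_utility(100)         # an extra dollar for someone with $100
mu_rich = marginal_utility(10_000_000)  # an extra dollar for someone with $10 million

# The same dollar is worth tens of thousands of times more to the poor person:
print(mu_poor / mu_rich)
```

Under log utility the ratio of marginal utilities is roughly the inverse ratio of wealth levels, which is why transferring money from the rich to the poorest can create such a large net gain in well-being.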

2. Suffering reduction and diminishing marginal suffering

The law of diminishing marginal utility (e.g. of money) results in a prioritarian ethic, where we must give a strong priority to improving the position of the worst-off: the poorest humans or, more generally, the sentient beings that suffer the most. Avoiding extreme suffering gets priority. However, there is also a law of diminishing marginal suffering. For example, we can easily detect the difference between 1 and 2 needles in our arm, but not between 101 and 102 needles. Adding more needles does not linearly increase the pain and suffering. Also, adding one prison day to a jail sentence of ten years is less painful for the prisoner than adding one prison day to a jail sentence of one week. As a result, we can sometimes overestimate the badness of extreme suffering. Or, equally possible, we can sometimes underestimate the badness of less extreme suffering. This means that avoiding extreme suffering should not always get absolute priority. Sometimes avoiding the less extreme suffering of many people can be more important than avoiding the extreme suffering of one person.

3. Saving lives and scope neglect

The most important and far-reaching implication of the law of diminishing marginal effect for an effective altruist is the problem of scope neglect. Looking at donations given to charities, we see that most people have a diminishing marginal willingness to pay to prevent harm or save lives. The difference in willingness to pay to save 2 lives instead of 1 is higher than the difference to save 102 instead of 101 lives. Some studies indicate a logarithmic relationship between willingness to pay and the size of the prevented harm. However, for an effective altruist this should be a linear function: saving an extra life when 101 lives are already saved is no less valuable than saving an extra life when only one person was already saved. The moral value of saving an extra life does not depend on the other lives. An effective altruist should try to avoid scope neglect. This has two important implications.

Linearity of the marginal utility of resources

When it comes to our own preferences and our own consumption, we have a diminishing marginal utility of resources such as money and time. The more money we have, the less valuable an extra euro becomes. The more food we can consume, the less valuable an extra loaf of bread becomes. The more leisure time we have, the less valuable an extra hour becomes. But with resources such as money and time, we can help others. The amount of good done is a linear function of the amount of help (e.g. the number of lives saved, or the amount of harm avoided). Therefore, when it comes to helping others, we should have a linear instead of a diminishing marginal utility of resources.

This requires a new way of thinking. If we buy the cheaper product in the supermarket, we can donate the money saved to an effective charity. But the same goes if we buy the cheaper car. We can reorganize our work so that we become more efficient and save one day on a small project that takes a week; then we have an extra day to do good. But the same goes if we can save one day on a big project that takes a year. Hence, an effective altruist should try to make his or her marginal utility function more linear. This requires some effort, because we are not familiar with this kind of thinking. We are used to thinking in relative instead of absolute numbers, and in ratios instead of differences. We spontaneously think that a 1% saving of time on a long project is as good as a 1% saving on a short project, that a 1% saving of costs on an expensive product is as good as a 1% saving on a cheap product, and that a 1% reduction of mortality in a big catastrophe is as good as a 1% reduction in a small disaster. But when it comes to doing good, differences instead of ratios are what matter. Easily saving 1 euro when we buy a car that costs a few thousand euros is as good as saving 1 euro on a loaf of bread, if this 1 euro goes to a charity.

Risk neutrality

Besides linearity of marginal utility, a second implication of avoiding scope neglect is risk neutrality. When it comes to our own preferences and consumption, we are risk averse. Imagine you can play a game: you toss a fair coin; if it is heads, you get 100 euro, and if it is tails, you must pay me 100 euro. On average the expected profit of playing the game is 0 euro, the same as not playing. If you are risk averse, you will avoid playing this game, because you want to avoid the risk of losing 100 euro. This risk aversion is (partially) a consequence of the law of diminishing marginal utility of money. Suppose you already have 100 euro. The difference in utility between 100 euro (when you don’t play) and 0 euro (when you play and lose 100 euro) is bigger than the difference in utility between 200 euro (when you play and win 100 euro) and 100 euro (when you don’t play). That means that utility is a concave function of money: the more money you have, the less valuable an extra euro becomes.
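The coin-toss argument can be sketched in a few lines, assuming a square-root utility function (an illustrative concave function, not a claim about real preferences): with 100 euro in hand, the expected utility of the fair gamble is lower than the utility of keeping the money, even though the expected amount of money is the same.

```python
import math

def utility(money):
    """Illustrative concave (square-root) utility of money."""
    return math.sqrt(money)

wealth = 100  # starting wealth in euros

# Not playing: keep 100 euro with certainty.
u_no_play = utility(wealth)

# Playing: 50% chance of ending with 200 euro, 50% chance of ending with 0 euro.
u_play = 0.5 * utility(wealth + 100) + 0.5 * utility(wealth - 100)

print(u_no_play)          # 10.0
print(round(u_play, 4))   # ≈ 7.0711: lower, so a risk-averse agent declines
```

Any concave utility function gives the same qualitative result: the utility lost by dropping from 100 to 0 euro outweighs the utility gained by rising from 100 to 200 euro.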

Risk aversion also plays a role when it comes to saving lives, as Kahneman and Tversky demonstrated with their Asian disease problem. Suppose there is a new Asian disease that will kill 600 people. There are two vaccines. With vaccine A, 200 people will be saved; with vaccine B, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved. With both vaccines, the expected number of people saved is 200. If people must choose between vaccine A and vaccine B, a majority prefers vaccine A, because it offers the certainty that 200 people are saved. This demonstrates risk aversion.

However, as Kahneman and Tversky also demonstrated, there is a framing effect. The above description of the vaccines was in terms of positive effects, i.e. saving lives. Another framing (choice of words) is possible, in terms of losses or deaths. With vaccine A 400 people will die, with vaccine B, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. When people face the Asian disease problem with these words, a majority prefers vaccine B. In other words: people become risk seeking (a preference to gamble), because they want to avoid the certainty of 400 people dying.

The Asian disease problem is like the game of tossing a coin. Suppose heads means someone will save 100 lives and tails means someone will kill 100 people. When framed this way, most people prefer not playing the game. Now we can reframe it: in a population of 200 people, heads means no one will die, tails means everyone will die, and not playing the game means 100 people will die. In this framing, not playing the game becomes less attractive, because it results in the certain death of 100 people.

If we want to avoid this irrational framing effect, and if doing good implies linear marginal utility when it comes to saving lives, the most rational decision-making attitude is risk neutrality. This also requires a new way of thinking. When starting a risky new project to help others, switching careers to do more good, or investing money to donate the profits to charities, we should avoid our spontaneous tendency toward risk aversion. Effective altruists should take more risks when the expected value is higher. We should do riskier, more dynamic investing instead of safe, defensive investing. We should try riskier scientific research, i.e. research with more uncertain results, if there is a probability of obtaining highly useful results such that the expected benefits are very high.

Consider a project A that will definitely save 10 lives, and another project B that has a 90% probability of saving no one and a 10% probability of saving 110 lives. The expected value (the expected number of lives saved) is 11 for project B, which is 10% higher than the value of project A. If all effective altruists make such riskier choices, i.e. when everyone chooses the riskier project, 10% more lives are saved. Of those effective altruists, 9 out of 10 will help no one, but 1 out of 10 will save 110 lives. For an effective altruist it doesn’t matter who saves lives, as long as the most lives are saved.
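The arithmetic behind this comparison, plus a small simulation of many altruists each choosing the riskier project (illustrative only, with an arbitrary random seed), can be sketched as:

```python
import random

# Expected number of lives saved for each project.
ev_a = 1.0 * 10              # project A: saves 10 lives with certainty
ev_b = 0.9 * 0 + 0.1 * 110   # project B: 10% chance of saving 110 lives

print(ev_a)  # 10.0
print(ev_b)  # 11.0, which is 10% higher than project A

# If many effective altruists each run project B, the average outcome
# per project approaches the expected value of 11 lives saved:
random.seed(0)
outcomes = [110 if random.random() < 0.1 else 0 for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))  # close to 11
```

The simulation makes the collective argument concrete: most individual altruists save no one, but the group as a whole saves about 10% more lives than if everyone had chosen the safe project.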