Moral illusions are spontaneous, intuitive moral judgments that are very persistent but violate our deepest moral values. They distract us from a rational, authentic ethic. Probably the most problematic moral illusion is arbitrary group selection. This illusion lies at the heart of discrimination, and it makes us less effective at doing good. In this article I first explain the two worst examples of discrimination: how two kinds of arbitrary group selection cause us to harm others. Next, I present two further examples of how arbitrary group selection makes us less effective in helping others.

What is arbitrariness?

Arbitrariness means selecting an element or subset of a set without following a rule. In general, there are two kinds of arbitrariness: vertical and horizontal. To explain this, let’s start with a set containing two elements: the numbers 0 and 1. Next, we can construct the power set of that set, i.e. the set of all its subsets. This power set contains the empty subset that has no elements (written as {}), two subsets with one element (i.e. {0} and {1}) and one subset that has two elements (namely {0,1}).

Now I can select a subset with either zero, one or two elements. This is a choice between three cardinalities: ‘zero’, ‘one’ or ‘two’. The cardinality of a subset is the number of elements in that subset. Let’s suppose I arbitrarily pick cardinality ‘one’, i.e. the subset should have one element. This choice of a subset with one element instead of a subset with zero or two elements is arbitrary, because I did not follow a rule: I cannot explain why the subset should have one element rather than zero or two. This arbitrary selection of a cardinality is vertical arbitrariness.

After arbitrarily selecting the cardinality, I can select a specific subset. If the cardinality is ‘one’, I can select either {0} or {1}. Suppose I pick {1}, without following a rule. This arbitrary selection of a specific subset within a cardinality is horizontal arbitrariness. The reason why it is horizontal becomes clear when we write the subsets in a diamond shape: on top we have the subset {0,1}, the second level contains the two subsets {0} and {1}, and at the bottom we have the empty subset {}. Vertical arbitrariness means we arbitrarily select the level (e.g. the second level); horizontal arbitrariness means we arbitrarily select a subset at that level.

Now suppose that in selecting the cardinality I do follow a rule, such as “select the highest cardinality”. This rule is special, because it picks a cardinality that avoids horizontal arbitrariness and is not trivial. There are two cardinalities that logically avoid horizontal arbitrariness: the highest (i.e. ‘two’ in the above example) and the lowest (i.e. ‘zero’). The lowest cardinality contains only the empty subset, so this choice is in a sense trivial. The highest cardinality is the only non-trivial cardinality that avoids horizontal arbitrariness: when we select the cardinality ‘two’ in the above example, we have no choice but to select the subset {0,1}.
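This selection procedure can be made concrete with a short sketch (the function name and layout are mine, for illustration only):

```python
from itertools import combinations

def subsets_by_cardinality(elements):
    """Group all subsets of a set by cardinality: the 'levels' of the diamond."""
    return {k: [set(c) for c in combinations(elements, k)]
            for k in range(len(elements) + 1)}

levels = subsets_by_cardinality([0, 1])
# levels == {0: [set()], 1: [{0}, {1}], 2: [{0, 1}]}

# Picking a level is vertical; picking a subset within a level is horizontal.
# Only two levels leave no horizontal choice, and only the top one is non-trivial:
unambiguous = [k for k, s in levels.items() if len(s) == 1]
print(unambiguous)  # [0, 2]
```

Following the rule “select the highest cardinality” therefore forces the unique choice {0,1}, with no horizontal arbitrariness left.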

Now that we have a clear idea of the notion of arbitrariness and the unique rule that avoids horizontal arbitrariness, we can move to concrete examples of unwanted arbitrariness in ethics and how to avoid them.

Harming others because of discrimination

If we look at the two worst examples of harm done to others, they are the result of two kinds of discrimination: speciesism and nationalism. Discrimination is a difference of treatment between individuals or groups A and B, whereby three conditions are met:

1. A is treated better than B.
2. You would not tolerate swapping positions (treating A like B and vice versa).
3. The difference of treatment is based on arbitrary criteria, such as arbitrary group membership.

The latter condition means that discrimination relates to unwanted arbitrariness.

Arbitrary biological group selection: speciesism

Speciesism is the spontaneous moral judgment that all members of a particular biological species are more important (e.g. deserve more or stronger rights) than members of other species. Respecting human rights while rejecting or violating the rights of non-human animals, or considering it permissible to eat chickens but impermissible to eat dogs, are two examples of speciesism.

Speciesism involves both horizontal and vertical arbitrariness. First consider vertical arbitrariness. The biological classification can be considered as a cabinet with several drawers. Each drawer corresponds with a way to divide individuals into biological groups. I can open the bottom drawer of ethnic groups (races) and say that I belong to the ethnic group of white people. Or I can open the second drawer from below, containing all subspecies, and point at the subspecies Homo sapiens sapiens as my favored group. But we also belong to the species of humans (Homo sapiens) in the third drawer. Or moving higher in the cabinet: the family of great apes, the infraorder of simians, the order of primates, the infraclass of placentals, the class of mammals, the phylum of vertebrates, the kingdom of animals. The highest drawer contains only one group: the group of all entities in the universe.

We are simian, as much as we are human and mammal. So why would we open the third drawer from below, point at the species of humans, and declare that only those individuals get basic rights? Why not point at other species, or at other categories such as the class of mammals or the infraorder of simians? None of the many definitions of a biological species (e.g. referring to the possibility of interbreeding and producing fertile offspring) and none of the many descriptions of biological categories (e.g. referring to genealogy and common ancestors) contain any information about who should get the right to live or the right not to be abused. Why should basic rights depend on fertility or ancestry?

One could argue that having a rational, moral self-consciousness is the morally relevant property for granting someone rights, and that only humans have such a high level of consciousness. Yet some humans, such as babies or mentally disabled humans, have mental capacities no higher than those of some non-human animals such as pigs. One could then object that most members of the human species do have that high level of consciousness. But the same goes for the infraorder of simians: most simians alive today have a rational, moral self-consciousness. So why not pick this infraorder as the criterion for membership of the moral community? One could reply that the human species is the smallest biological group whose majority of members have a high level of consciousness, but then we can ask why we should pick the smallest rather than the largest such group. A rule to pick the smallest biological group whose majority of members have a high level of consciousness becomes very far-fetched and remains arbitrary. Why pick a biological group at all, and not simply the group of individuals who have a rational, moral self-consciousness, excluding mentally disabled humans? In the end it remains arbitrary, because what is the relation between a biological classification and the notion of rights?

Next to vertical arbitrariness, speciesism involves horizontal arbitrariness. After selecting the level of species in the biological hierarchy (the third drawer from below), you have to select a specific species, such as the species of humans. This kind of speciesism, where humans are considered central, is called anthropocentrism. The selection of the human species is arbitrary, because there are many other species and there is no special property that all and only humans have. That means there is no rule that selects the human species as the relevant species.

As explained above, there is one drawer that is unique in the sense that we can follow a rule to select it: the drawer that contains only one non-empty group. This is the top drawer, which contains the group of all entities. So we can avoid arbitrariness by selecting this top drawer (i.e. the highest cardinality), and that means that all entities in the universe equally deserve basic rights. Now the question becomes: what are those basic rights that can be granted to all entities without arbitrary exclusion? One such basic right is the right to bodily autonomy: your body should not be used against your will as a means for someone else’s ends. Of course, if an entity has no consciousness, it has no sense of its body. Consider a computer: does its body grow when we plug in some extra hardware? Where does its body end? The same can be said of plants: what is the body of a plant? Consider a clonal colony of aboveground trees connected by underground roots, such as an aspen colony. If two aboveground trees are connected by one root, we can consider them one living being, but if we cut the root, are there now two living beings with two bodies? Also, if a plant does not have an organ such as a brain that creates a will, it does not have a will and hence cannot be used against its will. This means that for insentient objects such as computers and plants, the basic right is always automatically respected. The basic right is only non-trivial for sentient beings, because they have a sense of their bodies and they have a will. Similarly, the basic right to have your subjective preferences or well-being fully taken into account in moral considerations is only non-trivial for sentient beings, who have subjective preferences and a well-being.

If we avoid arbitrariness, we end up with some basic rights that should be granted to all entities. These basic rights are only non-trivial for sentient beings. Hence, we have derived, rather than merely assumed, why sentience is important. And now we see that in our world those basic rights are violated. The two biggest violations occur in food production (livestock farming and fishing) and in nature (wild animal suffering). Every year about 70 billion vertebrate land animals and a trillion fish are used against their will as means (food for humans). Similarly, the well-being of wild animals in nature is not fully taken into account in our moral considerations. This results in a lot of harm done to non-human animals.

Some organizations that fight against speciesism are Animal Ethics and Sentience Institute. Wild Animal Initiative works to improve the well-being of wild animals. The Animal Welfare Fund supports organizations that work on improving the well-being and avoiding the suffering of non-human animals, especially farmed animals.

Arbitrary geographical area selection: nationalism

When we consider the harm done to humans, probably the biggest harm is caused by nationalism. Nationalism results in a policy of migration restrictions and closed borders, which is harmful in many ways. First, every year more than 1000 refugees and migrants die as a result of the strict immigration policy of the EU (‘Fortress Europe’). Second, migration restriction produces the biggest wage gap among workers: for equal work, workers in low- and middle-income countries earn three to ten times less than equally capable workers in high-income countries. Given the number of people involved and the size of this global income gap, this is probably the biggest kind of economic injustice worldwide. Third, the global labor market is not in an efficient market equilibrium. This results in a huge loss of productivity, worth trillions of dollars: global GDP (world income) could almost double by opening borders. That makes open borders probably the most effective means of poverty eradication and human development. Natives in the host countries, migrants, and those who remain in the countries of origin can all benefit from migration (the latter through the remittances sent home by migrants). Closing borders to immigrants is a kind of harm comparable to stopping job applicants and workers at the gates of companies, or stopping customers at the doors of shops. This restriction of freedom harms not only the job applicant, the worker or the customer, but also the employer and the shopkeeper.

The policy of closed national borders involves unwanted arbitrariness. There is a hierarchy of administrative or geographical areas: the whole planet or the United Nations at the top, continents at the next level, followed by unions of countries (e.g. the EU), countries, states or provinces, and finally municipalities, counties or towns at the bottom. Between those areas at the same level there are borders, but at most levels, these borders between areas are open. For example in the US, there are open borders between states and municipalities. In the EU, there are open borders between countries. So why should borders be closed at some levels but not at others? Selecting a level in this hierarchy of areas and stating that borders between areas at this level should be closed, is arbitrary.

Next to this vertical arbitrariness, there is horizontal arbitrariness, because the location of the borders is arbitrary. Why is the border between countries A and B here and not there? Why is the border between the US and Mexico not 100 meters more to the north? The historical reasons for these border locations are arbitrary.

There is in fact a third kind of arbitrariness which I call internal arbitrariness. The US border is not fully closed: it is very open for goods, capital and tourists, but very closed for labor migrants and refugees. This distinction is arbitrary: if borders are closed out of fear of terrorists among immigrants, then they should be closed for tourists as well, because there can be terrorists among tourists. If they are closed because some US workers are economically harmed by immigration of workers, borders should be closed for goods as well, because imports of goods can also harm US workers.

Organizations and platforms that support open borders and fight against nationalism, are: Open Borders, Free Migration Project and UNITED for Intercultural Action.

Not helping others because of ineffectiveness

Next to harming others, arbitrariness also distorts our choices in helping others: we choose less effective means, which means that we do not help some individuals as much as we could with our scarce resources.

Arbitrary problem selection

Cause prioritization is an important research area in effective altruism. The problem is that we often choose ineffective means to help others, based on how we think about problems or cause areas and how we divide those problems into subproblems and sub-subproblems.

Suppose a friend of yours died of skin cancer, so you want to help patients who have skin cancer by donating money to the Skin Cancer Foundation. Skin cancer is your cause area: the problem that you want to solve. Of course, when your friend died of skin cancer, he or she also died of cancer, so why not donate to the National Cancer Institute? Or to the Chronic Disease Fund, because skin cancer is a chronic disease? You could argue that the National Cancer Institute focuses more on lung cancer, the Chronic Disease Fund focuses more on cardiovascular diseases, and you want to focus on skin cancer. However, suppose you find out that your friend died of a specific type of skin cancer, namely melanoma, and that the Skin Cancer Foundation focuses more on other types of skin cancer. Would you now shift your donations towards the Melanoma Foundation? What if there are several types of melanoma? So here we have vertical arbitrariness: from melanoma at the bottom, through skin cancer, cancer, chronic diseases and diseases, to all suffering at the top. This is a whole hierarchy of problems. And if you focus on skin cancer, there is horizontal arbitrariness, because there are other types of cancer as well.

An effective altruist asks the question: what is the real reason to donate to a charity such as the Skin Cancer Foundation? Is it because a friend died of skin cancer? In that case, the badness is in the dying, so you want to prevent the premature deaths of other people. Your friend cannot be saved by donating to the Skin Cancer Foundation, and if your friend had died of lung cancer, you would be equally concerned about that deadly disease. So if you want to prevent premature deaths or save lives, and you can save more lives by preventing malaria than by preventing skin cancer, focusing on malaria is more effective and should be chosen.

Another example: suppose you saw the documentary Blackfish about animal cruelty in the dolphinarium SeaWorld, so you decide to support an animal rights campaign against dolphinaria. However, animal suffering in dolphinaria is part of a bigger problem: animal cruelty for entertainment, which also includes cruelty in animal circuses. And this is part of an even bigger problem: animal cruelty for pleasure, which also includes cruelty in factory farms, where animals are bred for our taste pleasure. In turn, this is part of an even bigger problem: animal suffering in general. Why is the campaign against dolphinaria pitched at the right level? Why not focus on a bigger problem? You could also go a level lower, by focusing only on SeaWorld, because that is what Blackfish was about.

Again, an effective altruist asks what the real reason is to fight against dolphinaria. Is it to reduce the suffering of animals kept in captivity? In that case, a campaign that decreases meat consumption by only 0.1% results in a stronger reduction of suffering than closing down SeaWorld.

This problem of arbitrary problem selection relates to many cognitive biases. First, there is the zero-risk bias, where you prefer to completely eliminate one specific risk or problem even though reducing another, bigger risk by a small fraction would result in a greater reduction of overall risk. Suppose deadly disease A affects 1% of people, and vaccine A reduces disease A by 100% (a complete elimination, from 1% to 0%). Deadly disease B, on the other hand, affects 20% of people, and vaccine B reduces disease B by 10% (from 20% to 18%). You have to choose between either vaccine A or vaccine B. Most people prefer vaccine A, because then we no longer have to worry about disease A: problem A is completely solved. Vaccine B appears more futile, because you will hardly notice a reduction from 20% to 18%. However, the total reduction of deadly disease with vaccine B is 2 percentage points (from 21% to 19%), twice as much as with vaccine A. The choice of vaccine A is irrational: suppose I hadn’t mentioned the difference between diseases A and B, and you believed they were both the same disease Z, which affects 21% of the population. Then you would prefer vaccine B. Or suppose we find out that disease B has two types, B1 and B2, and that vaccine B completely eliminates disease B1, which affected 2% of the population. Again you would now prefer vaccine B.
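The arithmetic behind the vaccine example can be checked in a few lines (a minimal sketch; the variable names are mine):

```python
# Zero-risk bias: the vaccine numbers from the text, in percentage points.
prevalence_a, prevalence_b = 1.0, 20.0  # share of people affected by each disease
reduction_a = prevalence_a * 1.00       # vaccine A: eliminates 100% of a 1% disease
reduction_b = prevalence_b * 0.10       # vaccine B: cuts a 20% disease by 10%

print(reduction_a)  # 1.0 percentage point eliminated
print(reduction_b)  # 2.0 percentage points eliminated
print(prevalence_a + prevalence_b - reduction_b)  # 19.0: overall burden after B
```

Vaccine B's reduction is twice as large, even though it leaves disease B visibly present.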

A related cognitive bias is futility thinking (explained by Peter Unger in Living High and Letting Die, and for which there is also some experimental evidence). Suppose intervention A helps 1000 of 3000 people in need, which means 33% of the affected population are saved. Intervention B helps 2000 of another 100,000 people, so 2% of this other affected population are saved. In absolute numbers, intervention B is twice as effective, but a 2% reduction of problem B seems more futile than a 33% reduction of problem A. Here again we have a hierarchy of affected populations. We can consider the total population of affected people, i.e. all 103,000 people together. Or we can consider a subpopulation affected by problem B, namely the 2000 people that are saved. Now intervention B helps 100% of this affected population, and compared to it, intervention A seems more futile.
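The contrast between relative and absolute impact in this example is easy to verify (sketch; names are mine):

```python
# Futility thinking: relative shares mislead, absolute numbers decide.
saved_a, affected_a = 1000, 3000
saved_b, affected_b = 2000, 100_000

print(round(saved_a / affected_a, 2))  # 0.33: A solves a third of "its" problem
print(saved_b / affected_b)            # 0.02: B looks futile in relative terms
print(saved_b > saved_a)               # True: yet B saves twice as many people
```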

Next we have the certainty effect, a version of the Allais paradox. Suppose there are two policies: with policy A everyone receives 1000€, a certain benefit. Policy B gives 3000€ to an arbitrary 50% of the population, while the other half receives nothing: everyone has a 50% probability of receiving 3000€. Although the expected benefit is higher for policy B (3000 times 50% is more than 1000 times 100%), policy B seems less fair and more risky than policy A, so a lot of people prefer policy A. However, suppose the population is a subpopulation of a country: there are in fact ten regions in that country, and only one of those regions is arbitrarily chosen for the policy. So now only 10% of people receive 1000€ with policy A, whereas policy B distributes 3000€ to 5% of the population. Now, for many people, the preference for policy A becomes less clear.
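The expected benefits in both framings can be computed directly (a minimal sketch; names are mine):

```python
# Certainty effect: expected benefit per person, in euros.
policy_a = 1000 * 1.00   # policy A: everyone receives 1000€
policy_b = 3000 * 0.50   # policy B: 50% chance of 3000€

print(policy_a, policy_b)  # 1000.0 1500.0: B is better in expectation

# Restricted to one of ten regions, the ratio between A and B is unchanged:
print(1000 * 0.10, 3000 * 0.05)  # 100.0 150.0
```

The expected values favour B in both framings; only the feeling of certainty about A changes.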

A strategy within effective altruism to avoid these cognitive biases and arbitrary problem selection that makes us less effective, is to start with considering the whole problem first. The whole problem can be suffering or loss of well-being. Next we can focus on human suffering or animal suffering. Within human suffering, we can look for the most effective ways to alleviate extreme poverty or prevent serious diseases.

Arbitrary problem selection also relates to another group of cognitive biases, which involve time. Time inconsistency is a cognitive bias whereby preferences change over time in inconsistent ways. Do you prefer to save one person today or two people next year? If saving a person is like receiving money, most people discount the future and prefer to receive one dollar today, or save a person today, instead of receiving two dollars or saving two people next year. The inconsistency arises because, for most people, this is not the same dilemma as the choice between saving one person ten years from now and saving two people eleven years from now. In this second choice, people prefer to save the two people.
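One common model of this pattern is hyperbolic discounting, where value falls off as 1/(1 + k·t). The sketch below illustrates the reversal; the discount rate k = 1.5 per year is my own illustrative assumption, not a figure from the text:

```python
def value(amount, delay_years, k=1.5):
    """Hyperbolically discounted value: amount / (1 + k * delay)."""
    return amount / (1 + k * delay_years)

# Today vs next year: the single person saved now wins...
print(value(1, 0) > value(2, 1))    # True
# ...but shift both options ten years into the future and the preference flips:
print(value(1, 10) > value(2, 11))  # False
```

An exponential discounter (value proportional to d**t for a fixed d) would never show this reversal; the inconsistency is specific to hyperbolic-style discounting.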

Similarly, presentism is a moral theory which says it is better to help people who are alive today than people in the far future. We can see the arbitrariness by looking at time intervals. We can divide time into intervals spanning e.g. one day, or 100 years, or a million years. If you are a presentist, you have to ask: do you help people who are alive today, or alive this year, or alive this century? Choosing a specific time interval always involves arbitrariness. Next to this vertical arbitrariness, there is horizontal arbitrariness: suppose you prefer to help people who are alive this century. Why this century and not the next, or the 28th century? There are so many centuries to choose from.

The only way to avoid this time inconsistency and time arbitrariness when it comes to helping others, is to take the long-term perspective, i.e. consider the whole future. The whole future contains only one time interval, so there is no horizontal arbitrariness. Because of this time impartiality, within the effective altruism community there is a big focus on improving long-term outcomes.

Arbitrary project selection

After selecting a problem that we want to solve, we have to find effective ways to solve it. The problem is that some kind of arbitrariness can sneak into our choices of projects or interventions. A project consists of subprojects and sub-subprojects. This relates to the cognitive bias of narrow bracketing, explored by Rabin and Weizsäcker, whereby people evaluate decisions separately. This results in inconsistent preferences and a choice of less effective means.

Consider two dilemmas. Dilemma 1 gives you a choice between option A, saving 4 lives and option B, a 50% probability of saving 10 lives and a 50% probability of saving no-one. When it comes to saving lives, many people are risk averse, which means they prefer the first option: a certainty to save 4 lives instead of a risky bet to save 10 lives.

Next, we have dilemma 2 that gives you a choice between option C, losing 4 lives and option D, a 50% probability of losing 10 lives and a 50% probability of losing no-one. According to prospect theory, this framing in terms of lives lost or people died, results in a risk seeking attitude: people prefer the risky bet that gives a possibility to lose no-one.

When we consider the two dilemmas separately, there is no conflict between risk aversion in the first dilemma and risk seeking in the second. But suppose those two dilemmas are in fact two parts of one quadrilemma: a choice between four options. Let’s look at the combination of the two dilemmas. Option AC means saving 0 lives for sure. Option AD means losing 6 lives with 50% probability and saving 4 lives with 50% probability. Option BC gives a 50% probability of losing 4 lives and a 50% probability of saving 6 lives. Option BD gives a 50% probability of saving 0 lives, a 25% probability of losing 10 lives and a 25% probability of saving 10 lives. Most people prefer A over B and D over C, so they should prefer AD over BC. However, option BC is clearly better than option AD: in each branch, BC saves exactly two more lives than AD.
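The quadrilemma can be checked by enumerating the branches (a minimal sketch; the names are mine):

```python
# Each option is a list of (probability, net lives saved) branches.
AD = [(0.5, 4 - 10), (0.5, 4 - 0)]   # save 4 for sure, then 50% lose 10 / 50% lose 0
BC = [(0.5, 0 - 4), (0.5, 10 - 4)]   # lose 4 for sure, then 50% save 10 / 50% save 0

def expected(option):
    return sum(p * lives for p, lives in option)

print(expected(AD))  # -1.0
print(expected(BC))  # 1.0: BC beats AD by 2 lives in every branch
```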

Every project involves some risky outcomes. And to solve a problem such as people dying, several projects can be combined into a big project, or be split into several smaller projects. This creates a vertical hierarchy of projects and subprojects, or decisions and subdecisions. To avoid arbitrariness, we should look at the top level: the total project or the sum of all our decisions.

For an effective altruist, his or her total project is what he or she does over the course of his or her life, including all decisions. That means an effective altruist should not set time-specific targets such as helping at least one person every year (or donating at least 1000€ to a charity every year), because it might be better to help no-one in the first year and three people in the second year, if that is easier. A yearly target is arbitrary, because one could equally set a target of helping ten people every decade. The bigger the time interval, the more flexibly you can choose the best opportunities to help the most people. It might even be better to spend a few years doing nothing but looking for the most important problems and the most effective means to solve them. This seems like a waste of time, because you do not help anyone during those years. However, after those years, thanks to this research, you can be much more effective in helping others. That is why effective altruists spend a lot of time on research and cause prioritization.

Similarly, for the effective altruism community, the total project consists of all the decisions made by all effective altruists over the whole future. Suppose each of ten effective altruists has to pick a project or intervention. They can follow two strategies. First, they can all choose the same project that has a certain but small altruistic return on investment: with this project, each of the ten effective altruists saves one life for sure. A second strategy is to become more risk neutral: they can choose projects that have a 10% probability of success and save 100 people if they succeed. The most likely outcome is that nine of those ten projects save no-one, but one effective altruist wins the jackpot: that project saves 100 lives. Looking at the community of those ten effective altruists together: with the first strategy they save 10 people; with the second, an expected 100 people.
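The community-level arithmetic is easy to verify (sketch; the variable names are mine):

```python
n = 10                            # effective altruists in the community
certain = n * 1                   # strategy 1: each saves 1 life for sure
expected_risky = n * 0.10 * 100   # strategy 2: each has a 10% chance to save 100

print(certain)         # 10
print(expected_risky)  # 100.0
```

In expectation the risky strategy saves ten times as many lives, even though most individual projects fail.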

For an effective altruist it doesn’t matter who is the lucky winner who chose the effective high-impact project; all that matters is how many lives are saved by the community. This means that an effective altruist should become risk neutral instead of risk averse. With a risk-neutral attitude, an effective altruist is willing to take more high-risk, high-impact decisions.