First written: 10 Apr. 2013; Last update: 17 Feb. 2015

Summary

Some people hold more hopeful beliefs about the world and the future than are justified. These include the feeling that life for wild animals isn't so bad and the expectation that humanity's future will reduce more suffering than it creates. By feeding these dreams, optimistic visions of suffering reduction, while noble, may in fact cause net harm. We should explore ways of increasing empathy that also expose the true extent of suffering in the world, e.g., information about factory farming, brutality in nature, and unfathomable amounts of suffering that may result from space colonization.

Epigraphs

The man who is a pessimist before forty-eight knows too much; if he is an optimist after it he knows too little. --Mark Twain

I also think the future is where people project a lot of hopes. They're just less willing to be neutral about it. People are more willing to say, 'Yes, sad and terrible things happened in the past, but we get it. We once believed that our founding fathers were great people, and now we can see they were shits.' I guess that's so, but for the future their hopes are a little more hard to knock off. --Robin Hanson

Introduction

From Wikipedia:

Wishful thinking is the formation of beliefs and making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality, or reality. Studies have consistently shown that holding all else equal, subjects will predict positive outcomes to be more likely than negative outcomes (see valence effect).

I'll mostly leave it to psychologists to debate the origins of wishful thinking. One possibility is that optimism makes one appear more competent than one actually is. Another is that it actually makes one more competent and successful through self-fulfilling beliefs. It could also be primarily an accident on the part of evolution: We feel bad when we imagine bad outcomes, so we cheat by not imagining them as much as is epistemically warranted.

I would guess that some people believe that life in the wild isn't so bad and that the future of humanity will be mostly peaches and cupcakes primarily because, well, it would be pretty depressing otherwise. Just look at how much people want to believe in heaven to see an example of justifying one's desired outlook without epistemic grounding. The transhumanist community has its fair share of starry-eyed disciples awaiting rapture into computational bliss.

There are related cognitive biases about excess positivity, such as optimism bias and rosy retrospection. Conversely, the hypothesis of depressive realism has some empirical support, though it remains controversial.

There's an additional source of positive bias in decisions, namely that those with the most power also tend to be people who haven't lived in abject conditions. Society's leaders might sometimes have low hedonic setpoints, and they might have gone through trials and stresses, but they usually haven't experienced torture, starvation, serious violence, or paralyzing mental illnesses. When we extend our scope to include animals, the contrast is even more stark. Humans have low infant mortality, long lives, and are at the top of the food chain. Many of us have regular meals, shelter, medical care, air conditioning, pain killers, and so on. Most animals in the wild are born, live a few days or weeks, and then die painfully of starvation, predation, disease, etc.

Beliefs in a rosy future

Many well-meaning altruists are working to reduce extinction risks in the hopes of a bright future for humanity. They hope our descendants will solve the problems of the world and build computational castles in the sky full of awesomeness. In the process, some optimists neglect the fact that military, economic, and geopolitical forces, not quixotic altruists, will probably control the future of AI. They may forget the possibility that Darwinian forces beyond our control will supersede human values. Some even assume that super-intelligence will lead to super-compassion, when there's actually no necessary relation, and in fact, most super-intelligences probably have no compassion at all.

Sometimes even negative utilitarians get swept along in enthusiasm for the future. David Pearce's Hedonistic Imperative predicts the end of suffering and the advent of gradients of bliss in a post-human paradise. David anticipates cosmic rescue missions to help suffering sentients on other planets. Unfortunately, in reality, the spread of computational power is likely to multiply suffering rather than to end it, because there will be astronomically more resources for running suffering computations.

But what's the harm of optimism? Why not let negative utilitarians allay some of their worries by wishing for a better tomorrow? In fact, maybe the hope of abolishing suffering will make people more motivated to work toward it? This may be true, but the cost is too high. Hope for the future means people will favor technological development and space colonization, with the aim of ensuring human survival and dispatching interstellar probes. David himself is an advocate for technological progress. Yet this may be precisely the wrong thing to support if we have the goal of reducing suffering.

Note that being realistic is not the same as being depressed or apathetic. We have enormous opportunity to reduce suffering on behalf of sentient creatures. However, we need to be careful about language. There is a lot we can do, but even if we try our hardest, the future will still look very bleak. Our efforts alone are unlikely to flip the sign of the future's net value.

Exposing suffering

If unwarranted optimism about the future may be net harmful, then pessimism may be net beneficial. In particular, we might hypothesize that it would be useful to expose people to the reality of suffering that the world contains and that the future may multiply. It would be good to test this hypothesis at some point.

It seems that many people who care a lot about suffering have experienced firsthand how overwhelming it can be. It's an interesting question to ponder whether some amount of suffering helps to inspire compassion or whether physical and mental pain instead make people more selfish, spiteful, and apathetic.

Either way, it seems plausible that some activities elicit more empathy and horror at suffering than they prevent. For example, veg outreach that exposes life on factory farms seems like an excellent way to begin to demonstrate just how much suffering there is in the world. It's easy enough to forget this if we live in a bubble of affluent, mostly cheerful humans. We can extend this appreciation of the extent of suffering by pointing to wild animals, noting that almost all offspring of many species die, perhaps painfully, just a few days after being born.

Thus, it seems that discussing animal suffering in the right way can serve as a reality check against excess optimism. This stands in stark contrast to promoting a Hedonistic Imperative vision, which confuses the marginal impact of our efforts to make the world better with the overall probability that the world actually will become a delightful place for all. Highlighting the severity of suffering in nature can also suggest one of many risks in colonizing space.

One case where the cause of wild-animal suffering could cause problems is if people assume that we need humans to stick around because they're the only hope for quintillions of potentially suffering insects on the planet for the next billion years. While humans are indeed the only candidates for such intervention, (a) it's not clear that humans actually would decrease wild-animal suffering in the future, and (b) even if they did, the benefit of doing so is small in comparison with the potential damage that would result from spreading into space. While the expected value of promoting concern for wild animals is highly positive, this doesn't mean the overall probability of convincing the world to come around to our position on this matter is close to 1.

What are some other interventions that would help to expose the extent of suffering in our world and the even greater expected magnitudes in our future?