Introduction

Following is a quick summary of my beliefs on various propositions and my moral values. The topics include philosophy, physics, artificial intelligence, and animal welfare. A few of the questions are drawn from The PhilPapers Surveys. Sometimes I link to essays that justify the beliefs further. Even if I haven't taken the time to defend a belief, I think sharing my subjective probability for it is an efficient way to communicate information. What a person believes about a proposition may be more informative than any single object-level argument, because a probability assessment aggregates many facts, intuitions, and heuristics together.

While a few of the probabilities in this piece are the results of careful thought, most of my numbers are just quick intuitive guesses about somewhat vague propositions. I use numbers only because they're somewhat more specific than words like "probable" or "unlikely". My numbers shouldn't be taken to imply any degree of precision or any underlying methodology more complex than "Hm, this probability seems about right to express my current intuitions...".

Pablo Stafforini has written his own version of this piece.

Note: By "causes net suffering" in this piece, I mean "causes more suffering than is prevented", and the opposite for "prevents net suffering". For example, an action that causes 1 unit of suffering and prevents 4 other units of suffering prevents 3 units of net suffering. I don't mean the net balance of happiness minus suffering. Net suffering is the relevant quantity for a negative-utilitarian evaluation of an action; for negative utilitarians, an action is good if it prevents net suffering.

Beliefs

Values

While I'm a moral anti-realist, I find the Parliamentary Model of moral uncertainty helpful for thinking about different and incompatible values that I hold. One might also think in terms of the fraction of one's resources (time, money, social capital) that each of one's values controls. A significant portion of my moral parliament, as revealed by my actual choices, is selfish, even if in theory I would prefer to be perfectly altruistic. Within the altruistic portion of my parliament, what I value roughly breaks down as follows:

Value system | Fraction of moral parliament
Negative utilitarianism focused on extreme suffering | 90%
Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 10%

However, as is true for most people, my morality can at times be squishy, and I may have random whims in a particular direction on a particular issue. I also may have a few deontological side-constraints on top of consequentialism.

While I think high-level moral goals should be based on utilitarianism, my intuition is that once you've made a solemn promise or entered into a trusting friendship/relationship with another person, you should roughly act deontologically ("ends don't justify the means") in that context. On an emotional level, this deontological intuition feels like a "pure" moral value, although it's also supported by sophisticated consequentialist considerations. Nobody is perfect, but if you regularly and intentionally violate people's trust, you might acquire a reputation as untrustworthy and lose out on the benefits of trusting relationships in the long term.

What kind of suffering?

The kind of suffering that matters most is... | Fraction of moral parliament
hedonic experience | 70%
preference frustration | 30%

This section discusses how much I care about suffering at different levels of abstraction.

My negative-utilitarian intuitions lean toward a "threshold" view according to which small, everyday pains don't really matter, but extreme pains (e.g., burning in a brazen bull or being disemboweled by a predator while conscious) are awful and can't be outweighed by any amount of pleasure, although they can be compared among themselves. I don't know how I would answer the "torture vs. dust specks" dilemma, but this issue doesn't matter as much for practical situations.

I assess the degree of consciousness of an agent roughly in terms of analytic functionalism, i.e., with a focus on what the system does rather than on factors unrelated to its idealized computation, such as what it's made of or how quickly it runs. That said, I reserve the right to care about non-functional properties of a system to some degree. For instance, I might give greater moral weight to a huge computer implementing a given subroutine than to a tiny computer implementing the exact same subroutine.

Weighting animals by neuron counts

I feel that the moral badness of suffering by an animal with N neurons is roughly proportional to N^(2/5), based on a crude interpolation of how much I care about different types of animals. By this measure, and based on Wikipedia's neuron counts, a human's suffering with some organism-relative intensity would be about 11 times as bad as a rat's suffering with comparable organism-relative intensity and about 240 times as bad as a fruit fly's suffering. Note that this doesn't lead to anthropocentrism, though. It's probably much easier to prevent 11 rats or 240 fruit flies from suffering terribly than to prevent the same for one human. For instance, consider that in some buildings, over the course of a summer, dozens of rats may be killed, while hundreds of fruit flies may be crushed, drowned, or poisoned.
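Here's a rough sketch of that calculation, assuming approximate neuron counts of the kind listed on Wikipedia; the exact counts, and hence the ratios, are only ballpark figures:

```python
# Relative moral weights under the N^(2/5) weighting described above.
# The neuron counts below are rough, assumed figures.
EXPONENT = 2 / 5

approx_neuron_counts = {
    "human": 86_000_000_000,
    "rat": 200_000_000,
    "fruit fly": 100_000,
}

def weight(n_neurons):
    """Assumed moral weight of suffering for an animal with n_neurons neurons."""
    return n_neurons ** EXPONENT

for animal, n in approx_neuron_counts.items():
    ratio = weight(approx_neuron_counts["human"]) / weight(n)
    print(f"human suffering ~ {ratio:.0f}x as bad as {animal} suffering")
# With these assumed counts, the ratios come out near the ~11x (rat) and
# ~240x (fruit fly) figures quoted above.
```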

My intuitions about the exact exponent on N change a lot over time. Sometimes I use N^(1/2), N^(2/3), or maybe even just N for weighting different animals. Exponents closer to 1 can be motivated by not wanting tiny invertebrates to completely swamp all other animals into oblivion in moral calculations (Shulman 2015). The same effect could also be achieved with a piecewise function for moral weight as a function of N: for example, one with a small exponent for N within the set of mammals, another small exponent for N within the set of insects, and a big gap between mammals and insects.
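As an illustration of that piecewise alternative, here is a sketch in code. The within-taxon exponent, the between-taxon discount, and the example neuron counts are all made-up values chosen only to show the shape of the idea:

```python
# Hypothetical piecewise moral-weight function: shallow scaling with neuron
# count within each taxon, plus a large fixed gap between mammals and insects.
# All constants here are illustrative assumptions, not values from the text.
def piecewise_moral_weight(n_neurons, taxon):
    within_taxon_exponent = 0.2          # small exponent within a taxon
    if taxon == "mammal":
        return n_neurons ** within_taxon_exponent
    elif taxon == "insect":
        insect_discount = 0.001          # creates the big gap between taxa
        return insect_discount * n_neurons ** within_taxon_exponent
    else:
        raise ValueError(f"no weighting defined for taxon {taxon!r} in this sketch")

# Example: a human vs. a fruit fly under this toy function.
print(piecewise_moral_weight(86_000_000_000, "mammal"))  # ~154
print(piecewise_moral_weight(100_000, "insect"))         # 0.01
```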