I think I’m probably making some incorrect ethical choices.

I know a lot of thoughtful, compassionate people who have come to different conclusions about ethics than I have. Many of our disagreements come down to hard judgment calls about complicated, thorny issues: things like whether animals have moral weight, or whether we can have a meaningful impact on the far future. It seems pretty unlikely to me that I've come to the right conclusion on every one of those issues. When I look at historical figures' opinions on issues we now consider settled, such as slavery, patriarchy, or homosexuality, I find that essentially no one was correct about everything.

One thing I've tried to do, when I notice a persistent disagreement with people I respect, is moral hedging: looking for actions that are low-cost if I'm correct about morality and have a big benefit if I'm wrong.

One issue I've morally hedged on is abortion. I do not think that killing a fetus is as wrong as killing an adult human. However, I use the implant as my form of contraception. The implant is highly reliable: it results in less than one pregnancy per thousand person-years of use. By choosing such a reliable form of contraception, I have made it very unlikely that I will ever need an abortion, and thus, if I'm wrong about abortion's moral status, very unlikely that I will ever commit a murder.

Another issue I've hedged on is AI risk. I don't think AI superintelligence is a near-term existential risk. However, a while ago, I asked myself what the most valuable thing I could do would be, assuming that AI superintelligence is ten years away. The answer is that I should continue to be critical of AI. Most critics of AI risk are uninformed and have a hard time getting through an entire essay without typing the phrase "rapture of the nerds." If thinkers are not challenged by intelligent criticism, they tend to get sloppy and make avoidable mistakes. For various reasons (most notably that I don't particularly enjoy being simultaneously dogpiled by r/slatestarcodex and r/sneerclub), I've tended to discuss my opinions about AI risk privately, but moral hedging is one of the reasons I'm considering discussing my beliefs more publicly.

There are two primary benefits to hedging. Most obviously, I reduce the harm I cause if I'm wrong. Even if killing a fetus is as bad as killing a person, I can rest assured that I have not personally committed any murders. But I also find that hedging tends to promote cooperation among compassionate, thoughtful people. When I tell someone that I hedge based on their belief system, they tend to believe that I'm taking their concerns seriously. That opens up space for discussion and makes it easier to work together on the issues we agree about.

It can be difficult to figure out how best to hedge. It's no accident that I have both pro-life and AI-safety-supporting friends; I can talk to them about the best strategies for hedging. It's much harder to figure out how to hedge when I don't have any friends who hold a particular belief system. I think it would make sense to compile a list of ways to hedge for various sets of beliefs. (Effective animal advocates have taken some small steps in this direction, such as encouraging people to replace chicken with beef whenever possible.)

Do you practice moral hedging? Do you have advice for how people you disagree with can morally hedge based on your own beliefs?