The trolley problem is an over-hyped thought experiment. Would you risk one life to spare ten others? It’s a seemingly vexing problem, one that has become even more vexing lately because self-driving cars now have to solve it. But this is yet another example of AI panic.

If we apply the Banality of Futurism principle and assume everything is boring, we should recognize that we solve trolley problems all the time without much fanfare. When an oncoming car swerves into your lane and you have to risk hitting a nearby bicyclist to get out of the way, you’re rapidly weighing the costs and benefits of risking one life to save another. Such decisions are an everyday occurrence.

We also pay relatively low wages to people for whom such decisions are routine. Police officers and soldiers make these tradeoffs frequently, during a car chase, for instance. Many of them don’t have deep philosophical training to make such decisions perfectly, especially under pressure, and yet we trust them all the same. If we can trust them, why can’t we trust robots? If anything, robots will make the decisions we already make routinely, only with better inputs, greater consistency, and the benefit of a level head.