The “trolley problem” is an old, familiar thought experiment in ethics. Lately it has been enjoying a rather outlandish level of exposure, with much of the credit going to journalists applying the problem to autonomous cars. Those discussions end up feeling remote and theoretical. Yet with just a slight revision, the problem can fuel a realistic and urgent debate on self-driving cars… which is precisely what I’m fixin’ to do in this article.

Google Trends data for “trolley problem”. Apparently the holiday season makes people really, really philosophical.

First, let’s make sure everyone is on the same page. What follows is one common formulation of the original trolley problem:

Version 1: There is a runaway trolley barreling towards a group of five people standing on the track, none of whom are cognizant of the trolley’s approach. They will surely be killed unless the trolley is diverted to its alternate track, on which only one person is standing. You are situated at the track switch, and are the only one able to divert the trolley so that it kills one person rather than five. Would you throw the switch?

Version 2, “The Fat Man”: Because most of us would claim that throwing the switch is the obvious answer, an alternate version of the problem was devised to instill a more visceral sense of responsibility and challenge the otherwise utilitarian response. In this version, there is no second track, nor any switch you can throw to save the five people. The only thing you can do to stop the trolley is to shove an extremely fat man standing beside you onto the track, which will bring the trolley safely to a halt, but will obviously kill the fat man in turn. Now the question becomes less black and white: is it best to intentionally cause harm to one person for the greater good? Or is it best not to let your own judgment of a situation decide anyone’s fate?

Ok, so wait: I must push beloved Family Feud host Louie Anderson in front of a trolley in order to save five presumed non-Family Feud hosts? I can’t get behind that.

I suggest a new version of this debate, one that I think is utterly necessary as we prepare to hand responsibility over to the robots.

Consider “The Infinite Trolley.” You are now the conductor of the trolley, steaming down that single track towards a solitary victim-to-be stuck in your path. You can simply hit the brakes, bring the trolley to a stop, and save this person.

There is, of course, a caveat.

Your trolley is infinitely long. It’s filled with as many passengers as it takes to make you reconsider stopping the trolley. Thousands? Millions? Billions? All those people, each of them with their own needs, expectations and responsibilities, all of which will be thrown off to varying degrees should you decide to stop their trip. Now then, what’s your price? How long does your trolley need to be for the convenience of the many to outweigh the life of one?

So far, I’ve posed this dilemma to four very intelligent people. While their reactions and conclusions varied, not one of them was willing to consider that this could be a real-world problem.

It is.

Americans take roughly 250 billion trips in their cars annually. In the process, we kill over 30,000 people in traffic accidents, which means that one car-related death is deemed an acceptable price to pay for the convenience of taking roughly 8 million trips (250 billion ÷ 30,000 ≈ 8.3 million). Or, for the sake of this article, for 8 million of us to take one trip in a very large vehicle.
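If you want to verify that back-of-the-envelope figure yourself, here is a minimal Python sketch. The trip and fatality numbers are the rough ones quoted above, not precise statistics:

```python
# Back-of-the-envelope check of the Infinite Trolley threshold.
# These are the rough figures quoted above, not precise statistics.

annual_trips = 250_000_000_000  # ~250 billion US car trips per year
annual_deaths = 30_000          # ~30,000 US traffic deaths per year

# Trips "bought" per traffic death, under this article's framing.
trips_per_death = annual_trips / annual_deaths

print(f"Trips per death: {trips_per_death:,.0f}")  # -> Trips per death: 8,333,333
# So a trolley carrying more than ~8.3 million passengers crosses the line.
```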

So, with just a dash of fuzzy math, the Infinite Trolley problem is solved: you would choose to run down the victim if your trolley had more than 8,000,000 passengers on board. And when I say “you would,” I mean “you do.” Now, don’t tell me you’re refusing to take responsibility on the grounds that running someone over involves intent, and is an entirely different act from merely knowing that someone will be run over for your benefit. To that, I can only respond with the wisdom of South Park: it sure is nice to have your cake and eat it too.