One of the most popular discussions in the field of technology today is that of self-driving vehicles. It’s a topic that brings up both optimistic joy and pessimistic fear, from the elimination of car-related fatalities to the elimination of millions of jobs. I usually stand on the optimistic side of the argument, but I also understand the fear.

After all:

According to the Bureau of Labor Statistics (BLS), there were nearly 1.8 million heavy-truck and tractor-trailer drivers in 2014, with a 5% increase per year, meaning there are likely over 2 million of these drivers today.

There were around 1.33 million delivery truck drivers in 2014, with a 4% increase per year, meaning there are around 1.5 million today.

There were around 233,700 taxi drivers and chauffeurs in 2014, with a 13% increase per year. Meaning, there are over 300,000 of these drivers today.
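The arithmetic behind these estimates is plain compound growth. Here's a quick sketch of it; the 2014 baselines and per-year rates are the figures quoted above, while the three-year horizon (2014 to "today") is my assumption:

```python
# Project the 2014 BLS baselines forward using the quoted per-year
# growth rates, compounded over roughly three years.
def project(base, annual_rate, years=3):
    """Compound `base` by `annual_rate` (e.g. 0.05 for 5%) over `years`."""
    return base * (1 + annual_rate) ** years

drivers_2014 = {
    "heavy-truck / tractor-trailer": (1_800_000, 0.05),
    "delivery truck":                (1_330_000, 0.04),
    "taxi / chauffeur":              (233_700, 0.13),
}

total = 0
for job, (base, rate) in drivers_2014.items():
    today = project(base, rate)
    total += today
    print(f"{job}: ~{today:,.0f}")

print(f"total: ~{total:,.0f}")  # comes out just under 4 million
```

Summed, the three projections land just under 4 million, which is where the "around (+/-) 4 million jobs" figure below comes from.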

In other words, with the full mobilization of self-driving vehicles, we’re looking at around (+/-) 4 million jobs being automated in the next few years, no longer requiring human labor. This particular risk, however, isn’t what I’m currently focused on. The main focus of this article is what is known as the “trolley problem” — a thought experiment in ethics that has since been rehashed to serve as “criticism” of self-driving vehicles.

Let me first explain what the “trolley problem” entails, and then I’ll explain how it’s being used today. The “trolley problem” goes like this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing, and the trolley kills the five people on the main track.

2. Pull the lever, diverting the trolley onto the side track, where it will kill one person.

Which is the most ethical choice?

You’ve likely heard this question in various incarnations, e.g. the so-called “psychopath problem,” whereby you must choose between pushing a fat man off a bridge, using his weight to stop a car barreling towards a group of children, or letting the man live, with the children dying as a result of your inaction.

Today, however, the latest incarnation of the “trolley problem” is targeting self-driving vehicles. The hypothetical scenario goes something like this:

You’re riding inside a self-driving vehicle on a busy road or highway. Ahead, someone is trying to walk across the road. The self-driving vehicle becomes aware of the person, but it is left with only two possible decisions, and its choice will ultimately determine who lives and who dies:

1. Do nothing, in which case the vehicle kills the person crossing the road but keeps you (the passenger) safe; or

2. Take measures to avoid the person crossing the road, killing you (the passenger) instead.

Should a self-driving vehicle be given the power to make such an ethical choice?

This hypothetical scenario is usually presented as a baseline argument for why we shouldn’t trust self-driving vehicles, no matter what benefits arise from their proliferation. My problem with this scenario, and thus with using a philosophical argument as simplistic as the “trolley problem,” is that it completely ignores the complexity of what a self-driving transportation system actually entails in terms of both safety and efficiency. If anything, self-driving vehicles are the perfect solution to the “trolley problem.”

The only way this hypothetical scenario would make sense is if only a limited number of vehicles operating on our roads were equipped with full self-driving capabilities. That is exactly the problem with using this argument to justify limiting our use of these vehicles. In truth, the self-driving industry isn’t aiming for limited access; quite the contrary! The end goal is to establish full self-driving capabilities in every single vehicle operating on the road.

Why is this important? Because it would maximize safety and dramatically decrease the likelihood of any vehicle ending up in any sort of accident. A good analogy is a group of magnets placed on a table with like poles facing one another. No matter how hard or how quickly you try to push the magnets together, every single magnet reassembles itself to accommodate each change in direction of the magnet you’re moving.

Of course, a person might then argue, ‘well, if the table is a symbolic road, then what happens when one of the reassembling magnets falls off the table (that is, off a cliff or into a ditch)?’ The problem with this question is that it ignores the fact that magnets aren’t intelligent. They’re not programmed to ensure maximum safety; self-driving vehicles would be.

Let’s try a better example instead: rather than magnets, think of electrons. According to quantum physics — specifically the Pauli exclusion principle — no two identical electrons can occupy the same quantum state. In other words, whenever an electron in one object comes close to an electron in a separate object (or even two electrons within a single object), those two electrons never touch one another; they’re essentially repelled, pushed away in different directions. Every single electron in every material object, including our own bodies, adheres to this principle. As a result, although complex, order (as opposed to chaos) is successfully established.

Which brings us back to self-driving vehicles. In this hypothetical scenario, every single vehicle operating on the road is fully autonomous — equipped with full self-driving capabilities (radar, mapping, and so on) and connected via the Internet of Things. Let’s say a group of people attempts to walk across a busy highway. In response, a select few vehicles will not only detect the group crossing the highway but also every other object within their vicinity. As they move to a safe distance, or even slow down to allow the group to walk across safely, every other vehicle will respond accordingly, compensating for each vehicle’s new course of action. Just like electrons, they’ll begin reassembling themselves in an intelligent manner to maximize both safety and efficiency.
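To make the "reassembling" idea concrete, here is a toy simulation of that cooperative behavior. Everything in it — the discretized lane grid, the class and method names, the cascading re-slot rule — is a hypothetical illustration I'm inventing for this sketch, not any real vehicle system or API:

```python
# Toy sketch of cooperative avoidance: each vehicle occupies a (lane, position)
# cell on a discretized road. When a vehicle detects an obstacle ahead in its
# lane, it changes lanes; any vehicle it would displace drops back one cell,
# cascading as needed, so no two vehicles ever share a cell (the electron rule).

class Road:
    def __init__(self, num_lanes):
        self.num_lanes = num_lanes
        self.vehicles = {}  # (lane, pos) -> vehicle id; dict keys enforce one vehicle per cell

    def place(self, vid, lane, pos):
        assert (lane, pos) not in self.vehicles, "cell already occupied"
        self.vehicles[(lane, pos)] = vid

    def _yield_cell(self, lane, pos):
        # Ask the occupant of (lane, pos), if any, to drop back one cell,
        # cascading the request backwards -- the "reassembling" step.
        if (lane, pos) in self.vehicles:
            vid = self.vehicles.pop((lane, pos))
            self._yield_cell(lane, pos - 1)   # free the cell behind first
            self.vehicles[(lane, pos - 1)] = vid

    def avoid_obstacle(self, lane, pos):
        # A pedestrian occupies (lane, pos): every vehicle approaching that
        # cell moves one lane over, and displaced vehicles make room.
        for (l, p), vid in sorted(self.vehicles.items()):
            if self.vehicles.get((l, p)) != vid:
                continue                       # already moved by a cascade
            if l == lane and p <= pos:         # heading toward the obstacle
                new_lane = (l + 1) % self.num_lanes
                del self.vehicles[(l, p)]
                self._yield_cell(new_lane, p)  # neighbors clear the target cell
                self.vehicles[(new_lane, p)] = vid

# One vehicle swerves around a pedestrian; its neighbor drops back to make room.
road = Road(num_lanes=2)
road.place("A", lane=0, pos=3)   # A is behind the pedestrian in lane 0
road.place("B", lane=1, pos=3)   # B occupies the cell A needs in lane 1
road.avoid_obstacle(lane=0, pos=5)
# A ends up at (1, 3); B re-slots to (1, 2); no cell is ever shared.
```

The design choice worth noting is that safety falls out of the data structure: because the grid is a dictionary keyed by cell, two vehicles occupying the same space is unrepresentable, much as two identical electrons sharing a quantum state is.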

As a result, no one gets harmed and no vehicle ends up in an accident. Which is why we shouldn’t limit the number of self-driving vehicles operating on the road. To ensure both maximum safety and efficiency, federal regulators should help build a pathway toward equipping every vehicle on our roads with full self-driving capabilities. In doing so, the “trolley problem” would no longer be an actual problem.

***

This article was originally published on Medium.