The trolley problem originated as a thought experiment in ethics. Many variants of it have been developed, but at its core, it presents the dilemma of deliberately running over either one person or five, and asks which is the more ethical choice. In recent years, the problem has gained renewed popularity thanks to increased interest in self-driving cars.

While the problem might be “fun” to think about, some have argued that it’s not only useless but actually harmful. In the previously linked post, the authors did a great job of pointing out the trolley problem’s many flaws and its uselessness. They did concede, however, that people whose job it is “to program collision avoidance algorithms for driverless cars” are among the rare few who face the trolley problem in real life.

Having worked in the industry for the last couple of years, I personally find the trolley problem irrelevant even for self-driving car engineers, at least for now. It’s not that self-driving cars could never get themselves into such situations, but that there are currently far more practical and interesting dilemmas for self-driving car engineers to think about. I like to think about those problems, and I like to call some of them the Sheldon Cooper problems.

In the sitcom The Big Bang Theory, Sheldon Cooper is portrayed as a highly intelligent physicist with a fundamental lack of social skills. He also tends to stick rigidly to rules and routines; for example, he always knocks on a door three times. In this post, to make my points, I’m assuming an even more caricatured version of Sheldon, one who is even more rigid and socially inept; the Sheldon in the show, or in your own impression, is probably a lot better.

In the early seasons of the show, we were told that Sheldon didn’t know how to drive. The show and its spinoff later offered different theories as to why he never had a license, but given Sheldon’s personality and preferences, it should be no surprise that he would make a terrible driver. After all, driving is a highly social activity that requires constant negotiation and coordination, things Sheldon isn’t very good at. Broadly speaking, Sheldon Cooper problems are those that require a great deal of social understanding to be even partially solved. I emphasize the “partially” because, like the trolley problem, these problems very often have no perfect solution. Unlike the trolley problem, however, Sheldon Cooper problems come up all the time when we, or self-driving cars, drive.

How fast is sufficiently fast?

For example, even the simple act of deciding how fast to drive can be a highly social activity. Posted speed limits give some guidance, but how fast we actually drive is largely governed by social norms. After all, posted limits might be wrong or outdated. Sometimes there are no speed limits at all. Sometimes they even conflict with each other. And even when speed limits exist, are correct, and don’t conflict, it’s estimated that 40% to 60% of drivers exceed them. This poses a dilemma for self-driving car developers: when other drivers are speeding, perhaps for good reason (the posted limits could be severely outdated or inaccurate, for example), should a self-driving car break the limit like everyone else, or always drive legally regardless? Keep in mind that driving much slower than everyone else doesn’t just annoy other drivers; it can be highly unsafe as well: in Britain in 2017 alone, crashes caused by slow drivers injured 175 people and killed two.

Collisions at lower speeds are obviously less likely to be fatal, so a lot of self-driving car companies enforce a fixed (and usually low) maximum speed. It’s unlikely that they fail to appreciate the social aspects of driving, or how dangerous “anti-social” behaviors such as enforcing an artificial speed cap can be. It’s far more likely that a low fixed cap is one of those “less bad” solutions, adopted while a more complete one (perhaps driving at a safe “social”, if occasionally illegal, speed?) remains unsolved. Not to mention that the legal team would probably freak out if the software folks designed an algorithm that deliberately breaks the law from time to time.
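To make the dilemma concrete, here is a minimal sketch of what such a speed policy might look like. Everything in it (the function name, the median-of-traffic heuristic, the 45 mph cap) is a hypothetical illustration of the trade-off, not any company’s actual planning logic.

```python
import statistics

# Hypothetical sketch of the speed dilemma; names and numbers are
# illustrative assumptions, not a real planner.

def target_speed(posted_limit_mph: float,
                 traffic_speeds_mph: list[float],
                 company_cap_mph: float = 45.0,
                 obey_limit: bool = True) -> float:
    """Pick a cruise speed given the posted limit and surrounding traffic."""
    if not traffic_speeds_mph:
        # Nobody to coordinate with: the posted limit is the only signal.
        return min(posted_limit_mph, company_cap_mph)

    # Median speed of nearby traffic stands in for the "social" norm.
    flow = statistics.median(traffic_speeds_mph)

    if obey_limit:
        # Legal choice: never exceed the limit, even when everyone else
        # does, at the risk of becoming the hazardous slow car.
        desired = min(flow, posted_limit_mph)
    else:
        # Social choice: match the flow of traffic, even when it speeds.
        desired = flow

    # A fixed, conservative cap wins regardless: the "less bad" solution
    # many companies ship while the harder problem remains unsolved.
    return min(desired, company_cap_mph)


# On a 65 mph highway where traffic flows at about 70 mph, both policies
# collapse to the 45 mph cap, and the car crawls well below the flow.
print(target_speed(65.0, [68.0, 70.0, 71.0, 72.0]))  # 45.0
```

Note how, on a fast road, the fixed cap dominates whatever the rest of the policy decides; the genuinely hard, unsolved part is knowing when it would be safer to let the social norm win.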

If merely deciding how fast to drive is already this social and complex, imagine how hard the rest of self-driving car development is. It also highlights how important it is for self-driving car engineers to have a good understanding of psychology, sociology, and game theory on top of their technical knowledge and skills. There have already been criticisms that Silicon Valley, for all its seemingly sophisticated algorithms and ethos, is still too binary for the complex real world. Simpleton thinking in the design of a vast social network might have led us to fake news, online bullying, a whole generation of easily distracted narcissists, and a wrong president. Simpleton thinking in the design of a self-driving 4,000-pound robot will definitely cost lives.

I hope we are way better than that.