Self-driving cars could choose whom to crash into based on net worth.


Science fiction is often fueled by computers and robots with self-aware artificial general intelligence — a sentient AI that has the capacity to understand the world in much the same way that humans do.

But some experts fear a greater threat from narrower AIs — the kind we are getting good at creating today: systems optimized to solve a single kind of problem (like composing music or driving a car) with no awareness beyond that task.

These AIs often have access to vast amounts of data, yet their designers have only a murky understanding of how they make decisions, since machine learning systems are generally self-taught and too complex for human programmers to reverse engineer.

Shriram Ramanathan, a senior analyst at Lux Research, said he worries all of these factors are converging in ways we can't predict.

"Researchers are unleashing AI technologies into the marketplace with little understanding of the long-term impact," he told Business Insider. "Most of these AIs are trained for a narrow set of use cases and definitely not capable of handling black swan events" — rare, unpredictable events with severe consequences.

The result, Ramanathan said, may be chilling. Consider the trolley problem, a classic philosophical thought experiment: a runaway trolley is barreling toward one of two groups of people, and an onlooker at the track switch must choose which group will die. For self-driving car engineers, this thought experiment is real — they must equip a car to make split-second decisions with life-and-death consequences.

"What if a semi-autonomous car makes a decision on which person to crash into by analyzing LinkedIn and Facebook data to determine an individual's net worth?" Ramanathan told Business Insider.