In early 2017, artificial-intelligence researcher Sebastian Thrun and colleagues at Stanford University demonstrated that a “deep learning” algorithm could diagnose potentially cancerous skin lesions as accurately as a board-certified dermatologist.

The cancer finding, reported in Nature, was part of a stream of reports offering an early glimpse into what could be a new era of “diagnosis by software,” in which artificial intelligence aids doctors—or even competes with them.

Experts say medical images, such as photographs, X-rays, and MRIs, are a nearly perfect match for the strengths of deep-learning software, which in the past few years has driven breakthroughs in recognizing faces and objects in pictures.

Companies are already in pursuit. Verily, Alphabet's life sciences arm, joined forces with Nikon in December 2016 to develop algorithms to detect causes of blindness in diabetics. The field of radiology, meanwhile, has been dubbed the “Silicon Valley of medicine” because of the sheer volume of detailed images it generates.

Black-box medicine

Although the predictions by Thrun’s team were highly accurate, no one was sure exactly which features of a mole the deep-learning program used to classify it as cancerous or benign. The result is the medical version of what’s been termed deep learning’s “black box” problem.

Unlike more-traditional vision software, in which a programmer defines the rules explicitly (a stop sign has eight sides, for example), a deep-learning algorithm finds the rules itself, often without leaving an audit trail to explain its decisions.
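That contrast can be made concrete with a toy sketch (an illustration of the general idea, not code from the Stanford study). A hand-written rule is self-explanatory, while even the simplest learned model, here a one-neuron perceptron on made-up binary features, ends up encoding its “rule” as bare numbers:

```python
def rule_based_stop_sign(num_sides: int, is_red: bool) -> bool:
    # Traditional approach: the programmer states the rule,
    # so the decision is trivially auditable.
    return num_sides == 8 and is_red

def predict(w, x):
    # Learned approach: the decision is a weighted sum.
    # w = [w1, w2, bias] is just a list of numbers.
    score = sum(wi * xi for wi, xi in zip(w[:-1], x)) + w[-1]
    return 1 if score > 0 else 0

def train_perceptron(samples, labels, epochs=10):
    # Classic perceptron rule: nudge the weights after each mistake.
    # No human ever writes "eight sides" anywhere in here.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(w, x)
            if err:
                w = [wi + err * xi for wi, xi in zip(w[:-1], x)] + [w[-1] + err]
    return w

# Invented toy data: features are (is_octagon, is_red);
# only a red octagon counts as a stop sign.
samples = [(1, 1), (0, 1), (1, 0), (0, 0)]
labels = [1, 0, 0, 0]

w = train_perceptron(samples, labels)
print(rule_based_stop_sign(8, True))  # the rule explains itself
print(w)                              # the learned "rule" is just three numbers
```

The perceptron classifies the toy examples correctly, but its knowledge lives entirely in the weight values. With millions of weights instead of three, as in a real deep network, reading the rule back out becomes the “black box” problem described below.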

“In the case of black-box medicine, doctors can’t know what is going on because nobody does; it’s inherently opaque,” says Nicholson Price, a legal scholar from the University of Michigan who focuses on health law.