Google researchers got an eye-scanning algorithm to figure out on its own how to detect a common form of blindness, showing the potential for artificial intelligence to transform medicine remarkably soon.

The algorithm can look at retinal images and detect diabetic retinopathy—which affects almost a third of diabetes patients—as well as a highly trained ophthalmologist can. It makes use of the same machine-learning technique that Google uses to label millions of Web images.

Diabetic retinopathy is caused by damage to blood vessels in the eye and results in a gradual deterioration of vision. If caught early it can be treated, but a sufferer may experience no symptoms early on, making screening vital. It is diagnosed, in part, by having an expert examine images of a patient’s retina, captured with a specialized device, for signs of bleeding and fluid leakage.

Some form of automated detection could make the diagnosis more efficient and reliable, and could be especially useful in regions where the required expertise is scarce. “One of the most intriguing things about this machine-learning approach is that it has potential to improve the objectivity and ultimately the accuracy and quality of medical care,” says Michael Chiang, a professor of ophthalmology and a clinician at Oregon Health & Science University’s Casey Eye Institute.

AI has had mixed success in medicine in the past. Systems that use a database of knowledge to offer advice have been shown to outperform doctors in some settings, but uptake has been limited. Still, the power of machine learning, especially a technique known as deep learning, may make AI more common in medicine (see “10 Breakthrough Technologies 2013: Deep Learning”). A team at Google DeepMind, a subsidiary of Alphabet focused entirely on AI, is doing similar work: in collaboration with researchers at Moorfields Eye Hospital in London, it is training computers to process optical coherence tomography scans for signs of macular degeneration and other eye diseases (see “DeepMind’s First Medical Research Gig Will Use AI to Diagnose Eye Disease”).

This retinal-image research, published Tuesday, marked the first time a paper about deep learning has appeared in the Journal of the American Medical Association, according to the journal’s editor-in-chief, Howard Bauchner.

The paper’s authors, computer scientists at Google and medical researchers from the U.S. and India, developed an algorithm to analyze retinal images. Unlike existing ophthalmology software, it was not explicitly programmed to recognize features in images that might indicate the disease. Instead, it looked at thousands of healthy and diseased eyes and figured out for itself how to spot the condition.

The researchers created a training set of 128,000 retinal images classified by at least three ophthalmologists. After the algorithm had been trained, the researchers tested its performance on 12,000 images and found that it matched or exceeded the performance of experts in identifying the condition and grading its severity.
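The core idea, learning a diagnostic rule from labeled examples instead of hand-coding the features of the disease, can be illustrated in miniature. The sketch below is not Google’s model (which is a deep convolutional network trained on retinal photographs); it is a toy two-layer network, trained with plain gradient descent on synthetic feature vectors, that discovers a hidden labeling rule purely from examples.

```python
# Illustrative sketch only: a tiny two-layer neural network that learns a
# hidden labeling rule from examples, rather than being programmed with it.
# The data here is synthetic; real systems train deep convolutional
# networks on expert-graded retinal images.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for feature vectors. The "diagnosis" depends on a
# rule the network is never told: label = 1 when x0 - x1 > 0.
X = rng.normal(size=(200, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)

# Network: 10 inputs -> 8 tanh hidden units -> 1 sigmoid output.
W1 = rng.normal(scale=0.5, size=(10, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted probability
    return h, p

lr = 0.5
for step in range(500):
    h, p = forward(X)
    # Gradient of mean binary cross-entropy w.r.t. the pre-sigmoid output.
    d_out = (p - y) / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same principle scales up: the study’s network was never told what a hemorrhage or a microaneurysm looks like; it inferred the relevant visual patterns from the ophthalmologists’ grades alone.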

The Google researchers collaborated with scientists at the Aravind Medical Research Foundation in India, where a clinical trial involving real patients is ongoing. In the trial, patients receive a normal consultation, but their retinal images are also fed into the deep-learning system for comparison. Lily Peng, a researcher at Google and a medical doctor who was involved with the project, says results from this trial are not yet ready for publication.

Deep learning could be applied in many different areas of medicine that rely on image analysis, such as radiology and cardiology. But one of the biggest challenges will be to provide convincing evidence that the systems are reliable. Brendan Frey, a professor at the University of Toronto and the CEO and cofounder of a company called Deep Genomics, warns that researchers will need to develop machine-learning systems that are capable of explaining how they reached a particular conclusion (see “AI’s Language Problem”).

Peng, of Google, says this is something her team is already working on. “We understand that explaining will be very important,” she says.