After reshaping internet services, consumer devices and driverless cars in the early part of the decade, deep learning is moving rapidly into myriad areas of health care. Many organizations, including Google, are developing and testing systems that analyze electronic health records in an effort to flag medical conditions such as osteoporosis, diabetes, hypertension and heart failure.

Similar technologies are being built to automatically detect signs of illness and disease in X-rays, M.R.I.s and eye scans.

The new system relies on a neural network, a breed of artificial intelligence that is accelerating the development of everything from driverless cars to military applications. A neural network can learn tasks largely on its own by analyzing vast amounts of data.

Using the technology, Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, has built systems that can analyze eye scans for hemorrhages, lesions and other signs of diabetic blindness. Ideally, such systems would serve as a first line of defense, screening patients and pinpointing those who need further attention.

Now Dr. Zhang and his colleagues have created a system that can diagnose an even wider range of conditions by recognizing patterns in text, not just in medical images. This may augment what doctors can do on their own, he said.

“In some situations, physicians cannot consider all the possibilities,” he said. “This system can spot-check and make sure the physician didn’t miss anything.”

The experimental system analyzed the electronic medical records of nearly 600,000 patients at the Guangzhou Women and Children’s Medical Center in southern China, learning to associate common medical conditions with specific patient information gathered by doctors, nurses and technicians.
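The idea of learning associations between patient records and diagnoses can be illustrated with a toy example. The sketch below is not the researchers' method — their system was a deep neural network trained on hundreds of thousands of real records — but a deliberately simple word-association classifier, with made-up notes and diagnoses, that shows the basic principle of linking words in clinical text to conditions:

```python
from collections import Counter, defaultdict

# Hypothetical training records: (clinical note, diagnosis) pairs.
# These are illustrative stand-ins, not data from the study.
records = [
    ("fever cough wheezing shortness of breath", "asthma"),
    ("fever cough runny nose sore throat", "respiratory infection"),
    ("abdominal pain vomiting diarrhea", "gastroenteritis"),
]

# "Training": count how often each word appears alongside each diagnosis.
word_counts = defaultdict(Counter)
for note, diagnosis in records:
    for word in note.split():
        word_counts[word][diagnosis] += 1

def predict(note):
    """Score each diagnosis by how often the note's words co-occurred with it."""
    scores = Counter()
    for word in note.split():
        scores.update(word_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(predict("cough and wheezing"))  # "wheezing" was seen only with asthma
```

A real system replaces these raw word counts with learned numerical representations of entire records, which is what lets a neural network pick up subtler patterns than any single keyword.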