By Jessica Kent

January 10, 2019 - An artificial intelligence (AI) tool was able to analyze digital images of a woman’s cervix and identify precancerous changes with more accuracy than human experts.

Developed by researchers from the National Cancer Institute (NCI) and Global Good, the algorithm has the potential to improve cervical cancer screenings, particularly in low-resource settings.

To train the deep learning tool, the team used more than 60,000 cervical images collected during a cervical cancer screening study in Costa Rica in the 1990s.

More than 9,400 women participated in that study, with follow-up lasting 18 years. As a result, researchers gained nearly complete information on which cervical changes developed into precancers and which did not.

Researchers digitized the images and used them to develop the deep learning model so that it could distinguish which cervical conditions required treatment.

When applied to cases diagnosed in the Costa Rica study, the AI tool outperformed all standard screening tests, achieving an area under the curve (AUC) of 0.91. In comparison, human expert review achieved an AUC of 0.69, and conventional cytology yielded an AUC of 0.71.
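For readers unfamiliar with the metric, AUC measures how well a test ranks true precancers above normal tissue: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (1.0 is perfect, 0.5 is chance). The sketch below uses entirely hypothetical scores, not the study's data, to show how such a figure is computed:

```python
# Illustrative sketch (hypothetical scores, NOT the study's data):
# AUC = probability that a randomly chosen positive case is scored
# higher than a randomly chosen negative case.

def auc(labels, scores):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]           # 1 = precancer present
algo   = [0.1, 0.2, 0.15, 0.3, 0.9, 0.8, 0.7, 0.4, 0.85, 0.25]
expert = [0.2, 0.4, 0.3, 0.6, 0.7, 0.5, 0.4, 0.5, 0.6, 0.3]

print(auc(labels, algo))    # 1.0    (every precancer scored above every normal)
print(auc(labels, expert))  # 0.8125 (some pairs ranked incorrectly)
```

A higher AUC, as reported for the deep learning tool, means the test more reliably separates cases needing treatment from those that do not, regardless of where the decision threshold is set.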

“Our findings show that a deep learning algorithm can use images collected during routine cervical cancer screening to identify precancerous changes that, if left untreated, may develop into cancer,” said Mark Schiffman, MD, MPH, of NCI’s Division of Cancer Epidemiology and Genetics, and senior author of the study.

“In fact, the computer analysis of the images was better at identifying precancer than a human expert reviewer of Pap tests under the microscope (cytology).”

The method could improve cervical cancer screenings across the healthcare industry, including in organizations with limited resources or communities where women’s health experts are scarce.

In these settings, providers typically use a screening method called visual inspection with acetic acid (VIA). This approach requires a clinician to apply dilute acetic acid to the cervix and inspect the cervix with the naked eye, looking for “aceto whitening,” which could indicate disease.

While VIA is convenient and inexpensive, it is known to be inaccurate. A deep learning approach applied to images of the cervix would be similarly easy to use, and clinicians could perform it with minimal training.

This deep learning model could be a valuable tool in countries with limited healthcare resources, where cervical cancer is a leading cause of cancer death among women.

“When this algorithm is combined with advances in HPV vaccination, emerging HPV detection technologies, and improvements in treatment, it is conceivable that cervical cancer could be brought under control, even in low-resource settings,” said Maurizio Vecchione, executive vice president of Global Good.

This study adds to the growing body of evidence that demonstrates the potential for deep learning and artificial intelligence to enhance and accelerate the performance of human clinicians.

Researchers from Google recently developed a tool that was able to identify metastatic breast cancer with 99 percent accuracy, while previous studies have shown that human pathologists can achieve accuracy as low as 38 percent when detecting small metastases on individual slides.

Additionally, when using the deep learning model to review lymph nodes for metastatic cancer, clinicians found that it cut the average review time in half, requiring one minute instead of two minutes per slide.

Moreover, a recent study published in JAMIA described a deep learning algorithm that could replicate diagnostic scores for sleep staging, sleep apnea, and limb movements with a level of accuracy comparable to that of human clinicians. Researchers said the tool has the potential to automate the sleep scoring process, improve sleep disorder diagnosis, and increase care access.

Going forward, the NCI research team plans to further train the tool on representative images of cervical precancers and normal cervical tissue from women around the world, using a variety of cameras and other imaging options. Ultimately, the group will work to develop the best possible model for open, common use.