Sometime in the early 2000s, while sitting in my dentist’s chair, I began to wonder about the real-world pain that someone could potentially inflict on another human being simply by hacking the new digital x-ray system the dentist had installed. Would it be possible, for example, for a hacker to modify the digital images from the x-rays so that the dentist could not find and repair painful cavities, or so that the dentist would perform an unnecessary root canal, filling, or other procedure? How certain could I be that the images of my own teeth had not been tampered with?

Several years later, when I had a digital MRI after an auto accident, I wondered even further – could hackers modify images so as to cause a person to have his head cut open to remove a tumor when, in fact, he had no tumors? Or cause a scan to appear normal when the victim actually had a life-threatening condition requiring immediate attention?

This past week, a report issued by a combined team of researchers from the Department of Information Systems Engineering at Ben-Gurion University in Beersheba, Israel, and the nearby Soroka University Medical Center answered those questions in a clear – and quite frightening – way:

Not only can hackers using Artificial Intelligence (AI) technology successfully and consistently trick radiologists in ways that could potentially lead to human deaths, but evildoers can even trick artificial intelligence systems designed to diagnose medical conditions based on scans.

In the recent study, the Israeli researchers used a Generative Adversarial Network (GAN), a machine learning architecture in which two neural networks – a generator that fabricates images and a discriminator that tries to tell fabrications from real examples – train against each other until the generated images look at least superficially authentic to human observers, even though they are nothing more than sophisticated, high-resolution computer drawings of non-existent people or landscapes.

The researchers trained one GAN to add cancer into scans that showed no cancer, and another to remove cancer from scans that showed it. They trained the systems specifically on lung cancer, letting them learn from freely available online medical images.
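The adversarial push-and-pull behind a GAN can be sketched in miniature. The toy below is purely an illustration, not the researchers’ actual system: it pits a one-parameter “generator” against a logistic-regression “discriminator” on one-dimensional data standing in for image features. The same dynamic – generator learning to fool a detector that is simultaneously learning to catch it – is what lets a full-scale GAN learn to paint convincing tumors into a scan.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 0.5), standing in for authentic image features.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * noise + g_b
d_w, d_b = 0.1, 0.0   # discriminator: P(real) = sigmoid(d_w * x + d_b)
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    x_real = real_batch(n)
    x_fake = g_w * rng.normal(size=n) + g_b
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                  # cross-entropy gradient w.r.t. the logit
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=n)
    x_fake = g_w * z + g_b
    p = sigmoid(d_w * x_fake + d_b)
    grad_logit = (p - 1.0) * d_w          # chain rule through the discriminator
    g_w -= lr * np.mean(grad_logit * z)
    g_b -= lr * np.mean(grad_logit)

fake = g_w * rng.normal(size=10000) + g_b
print(float(fake.mean()))   # generator output has drifted toward the "real" mean of 4
```

The generator never sees the real data directly; it improves only through the discriminator’s feedback, which is why the resulting fakes end up statistically indistinguishable from authentic samples.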

They then hired three radiologists to read 100 CT scans: 30 authentic, and 70 that had been modified by the AIs.

The results are downright scary:

The radiologists found cancer in 99 percent of the normal scans to which the first GAN had added malignant tumors, and found no cancer in 94 percent of the scans from which the second GAN had removed real cancer.

Even after the researchers told the radiologists about the GANs and warned them that many of the images had been tampered with, the doctors still could not diagnose correctly: they found cancer in 60 percent of the normal scans to which tumors had been artificially added, and missed it in 87 percent of the scans from which the AI had removed tumors.

Artificial Intelligence systems designed to diagnose diseases from scans did not fare any better.

How hard would it be for an evildoer to manipulate images in order to perpetrate insurance fraud, or to inflict physical harm on another human being?

For insiders with access to the imaging systems, such crimes would likely be simple to carry out. But even for outsiders, the barriers are quite weak. While many MRI and CT scan systems are not connected to the Internet, gaining physical access to the terminals used to manage images from these systems is not difficult: by donning a lab coat and impersonating a doctor, posing as an IT support person fixing a computer, or using any of a host of other social-engineering ploys, an attacker can get access to relevant hospital terminals long enough to insert a device into a USB port. And as time progresses, a growing number of systems are, in fact, connected to the Internet, creating the potential for remote attacks; hospital WiFi networks may serve as a potential entry point as well.

Clearly, the entire medical imaging ecosystem needs better security.

Requiring security software on every device that is in any way involved in the imaging process, mandating that all imaging systems encrypt images and protect them with digital checksums (and/or watermarks), and securing the infrastructure used by imaging-related processes might go a long way toward preventing what could otherwise emerge as a way to commit all sorts of crimes – even murder.
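One concrete way to realize the checksum idea is a keyed hash (HMAC): the scanner tags each image with a secret key, and the viewing workstation verifies the tag before display, so any modification of the image bytes along the way invalidates it. The sketch below uses Python’s standard hmac module; the key, function names, and workflow are illustrative assumptions, and a real deployment would also need secure key distribution and storage.

```python
import hmac
import hashlib

# Illustrative only: a real system needs proper key management, not a hard-coded key.
SECRET_KEY = b"shared-device-key"

def sign_image(image_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag computed over the raw image bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

scan = b"\x00\x01 raw CT slice bytes"   # placeholder payload
tag = sign_image(scan)

print(verify_image(scan, tag))                # True: the image is untouched
print(verify_image(scan + b"tamper", tag))    # False: any modification breaks the tag
```

Unlike a plain checksum, an HMAC cannot be recomputed by an attacker who alters the image, because doing so requires the secret key.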











