Implanted medical devices are becoming increasingly sophisticated, moving from simple pacemakers to computerized devices that can actively respond to changes in a patient's condition. Perhaps the most sophisticated devices in common use are implanted defibrillators. These devices monitor the heart's electrical activity and, when an arrhythmic event is detected, can deliver a shock that resets the heart's rhythm. They also contain small radio transmitters that let doctors read the data they record and even reprogram the devices to customize them to the patient. Unfortunately, researchers have found that it's remarkably easy to reverse-engineer the communication protocol (PDF) of these radio transmissions and use that information to hack the implant.

The authors give a long list of wireless, reprogrammable medical devices that are currently on the market, including "pacemakers, implantable cardioverter defibrillators (ICDs), neurostimulators, and implantable drug pumps." They focused on a single model of defibrillator, but there is no reason to believe that other devices of this nature are any more secure. The equipment needed for the hack consisted of the device itself (obviously, they used one that was not currently implanted), the equipment normally used to reprogram it, an oscilloscope, and a small radio transmitter that was run using the GNU Radio software libraries.

It turns out that the transmitter portion of the reprogramming equipment is linked to the controller through a standard serial cable; the researchers were able to track what went through the serial cable and then read the transmissions that resulted. With that half of the communications channel in hand, they then requested information from the device and used GNU Radio to record the responses, feeding them into Matlab for analysis. The combination was enough to quickly decode the information coming out of the defibrillator, including details such as the patient's name, date of birth, medical ID number, and basic patient history. Further snooping pulled out the doctor's name and phone number, and the authors were even able to read actual heart function data that the device was recording.
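To get a feel for what that decoding step involves, here is a purely hypothetical sketch of parsing a captured telemetry frame. The authors deliberately did not publish the real framing, so the sync word, field layout, and checksum below are all invented for illustration:

```python
import struct

# Entirely hypothetical frame layout -- the real protocol was deliberately
# withheld by the authors. This only illustrates how captured bytes might
# be parsed once the framing has been worked out from recorded traffic.
def parse_telemetry_frame(raw: bytes) -> dict:
    # assumed layout: 2-byte sync word, 1-byte message type,
    # 16-byte patient name, 4-byte medical ID, 1-byte checksum
    sync, msg_type, name, med_id, checksum = struct.unpack(">HB16sIB", raw)
    if sync != 0xA55A:
        raise ValueError("bad sync word")
    if checksum != sum(raw[:-1]) % 256:
        raise ValueError("bad checksum")
    return {
        "type": msg_type,
        "patient_name": name.rstrip(b"\x00").decode("ascii"),
        "medical_id": med_id,
    }

# build a sample frame the way a device might, then parse it back
body = struct.pack(">HB16sI", 0xA55A, 0x01, b"JANE DOE", 123456)
frame = body + bytes([sum(body) % 256])
print(parse_telemetry_frame(frame))
```

The point is that once one side of the channel (the serial traffic) reveals the structure, the radio side can be matched against it field by field.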

In theory, these devices have a magnetic proximity switch that's supposed to keep them from transmitting unless an appropriate receiver is nearby; in practice, the authors found that even this simple security measure was ignored for most of this information. The authors did need to place their transmitters within centimeters of the defibrillator, but they note that newer versions can transmit over longer distances.

As if the invasion of privacy weren't bad enough, the researchers were also able to reprogram the device. They didn't even have to fully reverse-engineer the communications protocol—simply replaying captured commands several times was sufficient to get them accepted, even in the absence of a full communications stream. Using their knowledge, the authors could easily rewrite the owner's name and personal information, and reset the defibrillator's clock. More disturbingly, they could also shut off the device's ability to respond to cardiac events. The pinnacle of their hacking was to send the device into test mode, in which a carefully timed current triggers an arrhythmic event—something normally done under controlled conditions to determine whether the device responds successfully. In effect, they hacked the device in a way that could stop a heart.
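Why does replaying raw bytes work? A toy model makes the weakness concrete: if the device validates only static framing, nothing ties a command to a particular exchange, so any previously recorded packet is accepted verbatim. The opcode and trailer here are invented for illustration:

```python
# Toy model of a replay attack. The "device" checks only a fixed trailer,
# not a nonce or session key, so a previously captured command is
# indistinguishable from a fresh one. All byte values are invented.
class ToyImplant:
    def __init__(self):
        self.therapy_enabled = True

    def handle(self, packet: bytes) -> bool:
        # "authentication" is just a constant trailer -- nothing binds the
        # packet to this particular session
        if not packet.endswith(b"\xDE\xAD"):
            return False
        if packet[0] == 0x10:  # hypothetical "disable therapy" opcode
            self.therapy_enabled = False
        return True

# attacker records a legitimate command once...
captured = bytes([0x10]) + b"\xDE\xAD"

# ...and simply replays it later; the device cannot tell the difference
implant = ToyImplant()
implant.handle(captured)
print(implant.therapy_enabled)  # the replayed "disable" command took effect
```

A freshness mechanism—a per-session nonce or counter—is exactly what this scheme lacks, which is why recorded commands remain valid indefinitely.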

If you've currently got one of these devices, there's no reason for immediate panic. "We specifically and purposefully omitted methodological details from our paper," the authors write, "thereby preventing our findings from being used for anything other than improving patient security and privacy." A substantial portion of the paper is also devoted to zero-power ways of improving the security of these devices.

The authors come up with three solutions, all with power envelopes small enough that they can run by harvesting energy from the radio transmissions involved in the normal communications process. All were tested for effectiveness in a simulated body environment made of bacon and ground beef. The first is a simple device that emits audible chirps when appropriate radio frequencies are detected. The second changes the device's emissions from radio waves to sound that can only be registered by a sensor in extremely close proximity, making interception of communications very unlikely. The final option involves challenge-response handshaking and RC5 encryption; the authors suggest that this could validate an initial communication before switching the device into high-power mode, where it could support more robust encryption.
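The challenge-response idea can be sketched briefly. The paper proposes RC5; this sketch substitutes HMAC-SHA256 from Python's standard library purely to keep the example short—the handshake structure, not the particular cipher, is the point, and the key name is hypothetical:

```python
import hashlib
import hmac
import os

# Sketch of challenge-response authentication. The paper uses RC5; we use
# HMAC-SHA256 here only for brevity. The shared key is a placeholder.
SHARED_KEY = b"key-provisioned-at-implant-time"  # hypothetical

def device_challenge() -> bytes:
    # the implant sends a fresh random nonce for every session
    return os.urandom(16)

def programmer_response(key: bytes, challenge: bytes) -> bytes:
    # the programmer proves knowledge of the key by keying the nonce
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = device_challenge()
resp = programmer_response(SHARED_KEY, challenge)
print(device_verify(SHARED_KEY, challenge, resp))          # legitimate programmer
print(device_verify(SHARED_KEY, challenge, b"\x00" * 32))  # forged response fails
```

Because the challenge is fresh for each session, a response recorded from an earlier exchange no longer verifies—directly closing the replay weakness the researchers exploited.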

The ability of these devices to respond to external information is vital to their function, as it allows them to provide critical information to health professionals and to be adjusted to changing patient circumstances without the need for invasive surgery to replace them. Still, there is no excuse for the minimal security efforts used so far, and this paper provides a clear demonstration of the dangers that this lack of security presents.

Further reading: