A pair of security researchers in the U.K. have released a paper [PDF] documenting what they describe as the “first real world detection of a backdoor” in a microchip—an opening that could allow a malicious actor to monitor or change the information on the chip. The researchers, Sergei Skorobogatov of the University of Cambridge and Christopher Woods of Quo Vadis Labs, concluded that the vulnerability made it possible to reprogram the contents of supposedly secure memory and obtain information regarding the internal logic of the chip. I discussed the possibility of this type of hardware vulnerability in the August 2010 Scientific American article "The Hacker in Your Hardware."

The security breach is a particular concern because of the type of chip involved. The affected chip, a ProASIC3 A3P250, is a field-programmable gate array (FPGA). These chips are used in an enormous variety of applications, including communications and networking systems, the financial markets, industrial control systems, and a long list of military systems. Each customer configures an FPGA to implement a unique—and often highly proprietary—set of logical operations. For example, a customer in the financial markets might configure an FPGA to make high-speed trading decisions. A customer in aviation might use an FPGA to help perform flight control. Any mechanism that could allow unauthorized access to the internal configuration of an FPGA creates the risk of intellectual property theft. In addition, the computations and data in the chip could be maliciously altered.
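To see how one chip can take on customer-specific logic, it helps to know that most FPGAs are built from small lookup tables (LUTs): each LUT stores one output bit for every possible input combination, and "configuring" the chip amounts to loading those bits. The following is a minimal, illustrative Python sketch of the idea; it is not vendor-specific and the function names are my own.

```python
def make_lut(truth_table):
    """Return a 2-input logic function defined by a 4-entry truth table.

    truth_table[i] is the output bit for inputs (a, b), where the
    index is i = (a << 1) | b -- exactly how a hardware LUT is addressed.
    """
    def lut(a, b):
        return truth_table[(a << 1) | b]
    return lut

# The same "hardware" behaves as an AND gate or an XOR gate
# depending only on which configuration bits are loaded:
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])

print(and_gate(1, 1))  # 1
print(xor_gate(1, 1))  # 0
```

The point is that the configuration bitstream *is* the design: anyone who can read it back through a backdoor has effectively stolen the circuit, and anyone who can rewrite it can change what the chip does.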

Assuming that the researchers’ claims stand up to scrutiny, at least two important questions immediately arise. First, how did this vulnerability end up there in the first place? Second, what does it mean?

Regarding the source of the backdoor, some people are hinting that Chinese sources may be to blame. But, as Robert Graham of Errata Security explains in a blog post titled “Bogus story: no Chinese backdoor in military chip,” it’s premature to point fingers:

. . . it's important to note that while the researchers did indeed discover a backdoor, they offer only speculation, but no evidence, as to the source of the backdoor.

And, as Graham also observed, the term “military chip” can be deceptive as well, as these chips are used in a wide variety of applications, many of them unrelated to the military.

It’s possible that this vulnerability was inserted at the behest of a nation state. But it’s also possible that the backdoor is due to carelessness, not malice. Someone in the design process could have inserted the backdoor to enable testing, without realizing that it would later be discovered, publicized, and potentially exploited.

Regardless of the source of the vulnerability, its presence should serve as a wake-up call to the importance of hardware security. Cybersecurity, of course, is a well-recognized concern. Yet the overwhelming majority of cybersecurity vulnerabilities identified to date have involved software, the set of instructions that describes how a task inside a chip or system is performed. Software can be replaced, updated, altered, and downloaded from the Internet. By contrast, a hardware vulnerability is built into the actual circuitry of a chip. As a result, it can be very difficult to address without replacing the chip itself.

This certainly won’t be the last time a hardware security vulnerability is identified. As chips continue to get more complex, hardware security flaws—whether malicious or accidental—will increasingly become a part of the cybersecurity landscape. We should put in place pre-emptive measures to minimize the risks they might pose.

Photo by tjmartens on Flickr