Scientists have developed a technique to sabotage the cryptographic capabilities included in Intel's Ivy Bridge line of microprocessors. The technique works without being detected by built-in tests or physical inspection of the chip.

The proof of concept comes eight years after the US Department of Defense voiced concern that integrated circuits used in crucial military systems might be altered in ways that covertly undermined their security or reliability. That warning became the starting point for research into detecting so-called hardware trojans. But until now, there has been little study of just how feasible it would be to alter the design or manufacturing process of widely used chips to equip them with secret backdoors.

In a recently published research paper, scientists devised two such backdoors that they said adversaries could feasibly build into processors to surreptitiously bypass cryptographic protections provided by computers running the chips. The paper is attracting interest following recent revelations that the National Security Agency is exploiting weaknesses deliberately built into widely used cryptographic technologies so analysts can decode vast swaths of Internet traffic that would otherwise be unreadable.

The attack against the Ivy Bridge processors sabotages the random number generator (RNG) instructions Intel engineers added to the processor. The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable that an adversary could guess them with little time or effort. The severely weakened RNG isn't detected by any of the "Built-In Self-Tests" required for the NIST SP 800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology.
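To see why 32 bits of residual entropy is fatal, consider a toy model (with hypothetical function names, and a 16-bit seed so the demo runs in milliseconds; the real attack would search a 2^32 space): if the RNG's 128-bit output is actually a deterministic function of a small hidden seed, an attacker can simply enumerate every possible seed.

```python
import hashlib

def weakened_rng(seed: int) -> bytes:
    """Toy model of a sabotaged RNG: the 128-bit output actually
    depends on only a small hidden seed (16 bits here for demo speed;
    the paper's trojan leaves 32 bits of entropy)."""
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()[:16]

def recover_seed(output: bytes, seed_bits: int = 16) -> int:
    """Exhaust the collapsed seed space -- 2**16 here, 2**32 on a
    tampered chip, versus an infeasible 2**128 on an honest one."""
    for guess in range(1 << seed_bits):
        if weakened_rng(guess) == output:
            return guess
    raise ValueError("seed not found")

# A "random" 128-bit key produced on the tampered chip...
key = weakened_rng(0x1234)
# ...is recovered by brute force over the tiny seed space.
assert recover_seed(key) == 0x1234
```

A 2^32 search takes hours on a single commodity machine; a 2^128 search is beyond any conceivable hardware. That gap is the entire security margin the trojan removes.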

The tampering is also undetectable by the kind of physical inspection required to certify a chip as "golden," the term applied to integrated circuits known to contain no malicious modifications. Christof Paar, one of the researchers, told Ars the proof-of-concept hardware trojan relies on a technique that requires low-level changes to only a "few hundred transistors," a minuscule fraction of the chip's more than 1 billion transistors. The tweaks alter the "doping polarity" of those transistors and gates, a change involving only a small number of dopant atoms in the silicon. Because the changes are so subtle, they don't show up in the physical inspections used to certify golden chips.

"We want to stress that one of the major advantages of the proposed dopant trojan is that [it] cannot be detected using optical reverse-engineering since we only modify the dopant masks," the researchers reported in their paper. "The introduced trojans are similar to the commercially deployed code-obfuscation methods which also use different dopant polarity to prevent optical reverse-engineering. This suggests that our dopant trojans are extremely stealthy as well as practically feasible."

Besides being stealthy, the alterations can be introduced at a minimum of two points in the supply chain: (1) during manufacturing, where someone makes changes to the dopant masks, or (2) before manufacturing, where a malicious designer alters the layout files of an integrated circuit before it goes to the fab.

In addition to the Ivy Bridge processor, the researchers applied the dopant technique to lodge a trojan in a chip prototype designed to withstand so-called side-channel attacks. The result: cryptographic keys could be correctly extracted from the tampered device with a correlation close to 1. (In fairness, they found a small vulnerability in the trojan-free chip they used for comparison, but it was unaffected by the trojan they introduced.) The paper was authored by Georg T. Becker of the University of Massachusetts Amherst; Francesco Regazzoni of TU Delft in the Netherlands and ALaRI, University of Lugano, Switzerland; Christof Paar of UMass and the Horst Görtz Institute for IT-Security at Ruhr-Universität Bochum, Germany; and Wayne P. Burleson of UMass.

In an e-mail, Paar stressed that no hardware trojans have ever been found circulating in the real world and that the techniques devised in the paper are mere proofs of concept. Still, the demonstration suggests such covert backdoors are technically feasible, and it wouldn't be surprising to see chip makers and certification groups respond with new ways to detect these subtle changes.

Story updated to change "dozen" to "hundred," after researchers clarified the "a few dozen" modified transistors applied only to the side-channel trojan. The attack on the Ivy Bridge processors requires modification of 256 or 512 transistors, depending on the entropy the attacker wants.