Recently there have been discussions regarding Yubico’s OpenPGP implementation on the YubiKey 4. While open source and security remain central to our mission, we think some clarifications and context around current OpenPGP support would be beneficial to explain what we are doing, why, and how it reflects our commitment to improved security and open source.

To start off, let me say that Yubico is a strong supporter of free and open source software (FOSS). We use it daily in the development of new products, and a large portion of our software projects are released as open source software — we have close to 100 projects available on GitHub. This includes libraries for interfacing or integrating with our devices, tools used for programming and customization, server software which supports our products, specifications for custom protocols, and many more. We believe strongly that this benefits the community, as well as Yubico.

Some basic facts:

The YubiKey hardware with its integral firmware has never been open sourced, whereas almost all of the supporting applications are open source.

The YubiKey NEO is a two-chip design. There is one “non-secure” USB interface controller and one secure crypto processor, which runs Java Card (JCOP 2.4.2 R1). There is a clear security boundary between these two chips. This platform is limited to RSA with key lengths up to 2048 bits and ECC up to 320 bits.

The YubiKey 4 is a single-chip design without a Java Card/Global Platform environment, featuring RSA with key lengths up to 4096 bits and ECC up to 521 bits. Yubico has developed the firmware from the ground up. These devices are loaded by Yubico and cannot be updated.

The OpenPGP applet for the YubiKey NEO was (and still is) published as open source.

When the YubiKey NEO was released back in 2012, we shipped with open (i.e., publicly known) card manager (CM) keys, allowing for applet management.

Since late 2013, we have shipped all NEOs with randomized card manager keys, which prevents applet management. So although the OpenPGP applet source is available, users can’t load it onto a NEO.

We do have a NEO developer program, where we allow custom applet development and key distribution.

There are quite a few reasons we’ve done it this way, but none of them represent a change in our commitment to a free, open internet. Here’s our thinking:

First, and most important in our decision-making, has been to move away from what we call “non-secure hardware” and into secure elements that are specifically designed for security applications and have passed at least Common Criteria EAL5+ certification.

The reason is simple — we have to provide security hardware that not only implements a cryptographic protocol correctly, but also physically protects key material and protects the cryptographic operations from leakage or modification. Over the past couple of years, many publications have provided evidence of various forms of intrusive and non-intrusive attacks against hardware devices (including the YubiKey 2). Much can be said (and has indeed been said) about this subject, but there is no question that this is a serious matter. Attacks varying from “chip-cloning” and “decapsulation and probing” to fault injection and passive side-channel analysis have shown that a large number of devices are vulnerable.

It’s important to understand what we mean by “secure hardware.” Secure hardware features a secure chip, which has built-in countermeasures to mitigate a long list of attacks. Standard microcontrollers lack these features. Built-in countermeasures make intrusive and non-intrusive attacks an order of magnitude more complicated to perform. Secure hardware relies on secure firmware, where additional firmware countermeasures are implemented to further strengthen the device against attacks.

Given these developments, we, as a product company, have taken a clear stand against implementations based on off-the-shelf components and further believe that something like a commercial-grade AVR or ARM controller is unfit to be used in a security product. In most cases, these controllers are easy to attack, from breaking in via a debug/JTAG/TAP port to probing memory contents. Various forms of fault injection and side-channel analysis are possible, sometimes allowing for a complete key recovery in a shockingly short period of time. In this specific context (fault injection and side-channel analysis), an open source strategy would provide little or no remedy to a serious and growing industry problem. One could say it actually works the other way: the attacker’s job becomes much easier, as the code to attack is fully known and the attacker has unrestricted access to the hardware. Without any built-in security countermeasures, the attacker can fully profile the device’s behavior in a way that is impossible with a secure chip.
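To make the side-channel point more concrete, here is a deliberately simplified, software-level sketch (illustrative code only, unrelated to any YubiKey firmware): a naive byte-by-byte comparison exits early on the first mismatch, so its running time leaks how much of a guess is correct, while a constant-time comparison examines every byte regardless.

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Illustrative only: returns as soon as a mismatch is found, so the
    # running time depends on how many leading bytes of the guess match
    # the secret -- a classic timing side channel.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # early exit leaks position of first mismatch
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # The standard library's hmac.compare_digest touches every byte
    # regardless of where a mismatch occurs, removing the timing signal.
    return hmac.compare_digest(secret, guess)
```

Hardware side channels (power consumption, electromagnetic emissions, induced faults) are far subtler than this timing example, which is exactly why dedicated countermeasures in silicon and firmware matter.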

So — why not combine the best of two worlds then, i.e. using secure hardware in an open source design? There are a few problems with that:

There is an inverse relationship between making a chip open and achieving security certifications, such as Common Criteria. In order to achieve these higher levels of certification, certain requirements are placed on the final product, including how it is used and which modes it exposes.

There are, in practice, only two major players providing secure silicon and none of their products/platforms are available on the open market for developers except in very large volumes.

Even for large volume orders, there is a highly bureaucratic process to even get started with these suppliers: procedures, non-disclosure agreements, secure access to datasheets, export control, licensing terms, IP, etc.

Since there is no debug port, embedded development becomes a matter of having an expensive emulator and special developer licenses, again available only under NDA.

Although this does not prevent the source code from being published, without the datasheets, security guidelines, and a platform for performing tests, the outcome is questionable, with little practical value.

Secure elements are still a small market compared with generic bread-and-butter microcontrollers. Given the high costs to achieve and maintain certification and the procedural hassle, it is quite easy to understand the current state of affairs.

Let’s for a moment return to the question of the YubiKey NEO and why we decided to remove the ability to manage the applets. As we began to produce the NEO in larger volumes, we had to make some tough choices:

With open card manager keys, the devices are open to potential denial-of-service attacks as well as someone replacing a known applet with a bogus one. What if a bad guy took your new NEO and overwrote the OpenPGP applet with an evil one, thereby providing a key back door? If you’re hardcore about security, you’d immediately set your own CM keys, locking out that possibility, but then how would we control who is capable of this and who we actually expose to a potential threat?

Devices with known keys become vulnerable to modifications when in transit.

We tried a scheme of randomizing keys and making them available to developers under certain conditions. The practical problems of authenticating users and securely distributing keys, plus the required paperwork, made it unworkable.

Given that the NXP toolchain and extended libraries for JCOP are not freely available, applet development becomes more a theoretical possibility than a practical one.

Although we had initially hoped to take a different approach to applet management, I believe we made the right decisions given our choices. We do provide a developer program, giving access to the full toolchain as well as open CM keys. We don’t charge for it, but given the paperwork required, we need to have a compelling business case in order to justify the effort.

I’d like to bring up another aspect when it comes to providing integrated products. With the YubiKey, we see the firmware being integral with the hardware and we take responsibility for the aggregated functionality. We have made a conscious decision not to provide any means for upgrading the firmware out in the field, in order to eliminate the chance a device could be modified by an attacker.

That means that any device with a security issue is a lost device: the unit must be returned, users must be supported in moving their keys, the old keys must be destroyed, and so on. In a “software-only” open source project, handling a serious issue like that could be as simple as issuing a security bulletin and pushing a fix.

Enterprise customers deploying at million-unit scale have engaged independent third parties to review our firmware source code and algorithm implementations, and we would consider this with others of a similar or larger scale (given the extensive load on our engineering team to support such analysis). Such analysis is restricted to the contracting parties.

The chain of trust for any security product is pivotal to understanding how to implement a secure scheme for the entire lifecycle from production to deployment. Again, using commercial, off-the-shelf components with open designs creates some very hard nuts to crack. What prevents your hardware or chip from being compromised in the first place? What if the bootloader has been compromised, maybe in transit? Moving towards a fully-integrated design, like the YubiKey 4, actually solves a very practical problem. The security boundary includes the initial loader, which is protected by keys.
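The idea of a key-protected initial loader can be sketched in a few lines. This is a hypothetical illustration, not Yubico’s actual scheme (real secure elements typically use asymmetric signatures and hardware-protected keys, and `LOADER_KEY` here is a symbolic stand-in): the loader accepts a firmware image only if its authentication tag verifies under a key held by the manufacturer, so a tampered image is rejected before it is ever flashed.

```python
import hashlib
import hmac

# Symbolic stand-in for a manufacturer-held provisioning key.
LOADER_KEY = b"manufacturer-provisioning-key"

def sign_image(image: bytes, key: bytes = LOADER_KEY) -> bytes:
    # The manufacturer computes an authentication tag over the image.
    return hmac.new(key, image, hashlib.sha256).digest()

def loader_accepts(image: bytes, tag: bytes, key: bytes = LOADER_KEY) -> bool:
    # The loader recomputes the tag and compares in constant time;
    # only an image authenticated with the manufacturer's key is accepted.
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A modified image, or an image signed with the wrong key, fails verification, which is the property that closes off the “compromised in transit” scenario described above.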

Consider the following questions and statements:

What is the attack scenario you’re most worried about — a backdoor or bug, accessible via the standard interface over the network, someone owning your computer while extracting sensitive information from your security token, or that someone in possession of your key could retrieve such information?

If you have to pick only one, is it more important to have the source code available for review or to have a product that includes serious countermeasures for attacks against the integrity of your keys?

Although you may feel good about having reviewed the source and loaded the firmware yourself, do you trust and feel comfortable that the very same interface you used for that loading procedure is not a backdoor for extracting the key? Is the bootloader trustworthy? The memory fuse? The JTAG lock-out feature? Are these properly documented and scrutinized?

One has to recognize the hard problem of trust. Considering a utopian scenario with an open-and-fully-transparent-and-proven-secure-ip-less chip, given the complexity and astronomical costs of chip development, who would make it? And if it was available, how would they then provide the proof, making it more trustworthy than anything else already available?

Is it more rational to put a large amount of trust in a large monolith like a Java Card OS, while at the same time being highly suspicious of a considerably smaller piece of custom code? This assumes that both have been subject to third-party review in a similar fashion.

In conclusion, we want our customers and community to know that we have made conscious choices to some quite complex questions and that, in the end, we have landed with some sensible compromises. We are no less committed to security. We are no less committed to open source and to the open source community. We are always open to suggestions and could very well make changes if more sensible solutions arise. After all, the trust of our users is the most important asset we have.

If you have comments please visit our YubiKey 4 forum. If you don’t have access to the forum, send us a comment at comments@yubico.com.

– Jakob Ehrensvard is CTO at Yubico