A vastly larger percentage of the world's Web traffic will be encrypted under a near-final recommendation to revise the Hypertext Transfer Protocol (HTTP) that serves as the foundation for all communications between websites and end users.

The proposal, announced in a letter published Wednesday by an official with the Internet Engineering Task Force (IETF), comes after documents leaked by former National Security Agency contractor Edward Snowden heightened concerns about government surveillance of Internet communications. Despite those concerns, websites operated by Yahoo, the federal government, the site running this article, and others continue to publish the majority of their pages in a "plaintext" format that can be read by government spies or anyone else who has access to the network the traffic passes over. Last week, cryptographer and security expert Bruce Schneier urged people to "make surveillance expensive again" by encrypting as much Internet data as possible.

The HTTPbis Working Group, the IETF body charged with designing the next-generation HTTP 2.0 specification, is proposing that encryption be the default way data is transferred over the "open Internet." A growing number of groups participating in the standards-making process—particularly those who develop Web browsers—support the move, although as is typical in technical deliberations, there's debate about how best to implement the changes.

"There seems to be strong consensus to increase the use of encryption on the Web, but there is less agreement about how to go about this," Mark Nottingham, chair of the HTTPbis working group, wrote in Wednesday's letter. (HTTPbis roughly translates to "HTTP again.")

He went on to lay out three implementation proposals and describe their pros and cons:

A. Opportunistic encryption for http:// URIs without server authentication—aka "TLS Relaxed" as per draft-nottingham-http2-encryption.

B. Opportunistic encryption for http:// URIs with server authentication—the same mechanism, but not "relaxed," along with some form of downgrade protection.

C. HTTP/2 to only be used with https:// URIs on the "open" Internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).

In subsequent discussion, there seems to be agreement that (C) is preferable to (B), since it is more straightforward; no new mechanism needs to be specified, and HSTS can be used for downgrade protection. (C) also has this advantage over (A) and furthermore provides stronger protection against active attacks.

The strongest objections against (A) seemed to be about creating confusion about security and discouraging use of "full" TLS, whereas those against (C) were about limiting deployment of better security.

Keen observers have noted that we can deploy (C) and judge adoption of the new protocol, later adding (A) if necessary. The reverse is not necessarily true. Furthermore, in discussions with browser vendors (who have been among those most strongly advocating more use of encryption), there seems to be good support for (C), whereas there's still a fair amount of doubt/disagreement regarding (A).

Pros, cons, and carrots

As Nottingham acknowledged, each option carries major advantages and disadvantages. Proposal A would be easier for websites to implement because it wouldn't require them to authenticate their servers with a digital certificate recognized by all the major browsers. This relaxation of current HTTPS requirements would remove a hurdle that stops many websites from encrypting traffic now, but it comes at a cost. Without authentication, it could be trivial for an attacker at an Internet cafe, or a spy monitoring Internet backbones, to present a fraudulent digital certificate that impersonates websites using this form of relaxed Transport Layer Security (TLS). That risk calls into question whether the weakened measure is worth the hassle of implementing.
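The difference between relaxed and fully authenticated TLS can be illustrated with Python's standard ssl module. This is a hypothetical sketch of the client-side distinction, not code from any of the proposals:

```python
import ssl

# A "relaxed" client context in the spirit of proposal A: the channel is
# encrypted, but the server's certificate is never verified, so an active
# attacker could impersonate the site.
relaxed = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
relaxed.check_hostname = False       # skip hostname matching
relaxed.verify_mode = ssl.CERT_NONE  # accept any certificate

# A conventional HTTPS client context (proposals B and C): the certificate
# chain must validate against trusted roots and match the hostname.
strict = ssl.create_default_context()
```

Both contexts negotiate an encrypted session; only the second one gives the client any assurance about who is on the other end.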

Proposal B, by contrast, would make attacks much harder, since HTTP 2.0 traffic by default would be both encrypted and authenticated. But the increased cost and effort required of millions of websites may stymie adoption of the new specification, which in addition to encryption offers improvements such as header compression and asynchronous connection multiplexing.
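The compression benefit exists because the same header fields (cookies, user-agent strings, and so on) repeat on nearly every request. The indexing idea can be sketched in a toy form, greatly simplified from the actual HTTP 2.0 design:

```python
# Toy illustration of indexed header compression (NOT the real HTTP 2.0
# algorithm). The first time a header is sent, it crosses the wire in
# full and is added to a shared table; afterward, only a small table
# index needs to be transmitted.
class HeaderTable:
    def __init__(self):
        self.entries = []

    def encode(self, name, value):
        pair = (name, value)
        if pair in self.entries:
            return self.entries.index(pair)  # small integer instead of full text
        self.entries.append(pair)
        return pair                          # literal on first use

table = HeaderTable()
first = table.encode("user-agent", "ExampleBrowser/1.0")   # sent in full
repeat = table.encode("user-agent", "ExampleBrowser/1.0")  # sent as an index
```

In a real implementation both endpoints maintain synchronized tables, so the index alone is enough for the receiver to reconstruct the header.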

Proposal C seems to resolve the tension between the other two options by moving in a different direction altogether—that is, by implementing HTTP 2.0 only in full-blown HTTPS traffic. This approach attempts to use the many improvements of the new standard as a carrot that gives websites an incentive to protect their traffic with traditional HTTPS encryption.
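HSTS, the downgrade protection mentioned in Nottingham's letter, works by having a site served over HTTPS send a Strict-Transport-Security header; a compliant browser then refuses plaintext connections to that host until the policy expires. A minimal sketch of parsing such a header (the helper function is hypothetical, not taken from any specification):

```python
# Minimal sketch of parsing an HSTS policy header. A real browser would
# also persist the policy and enforce it on later navigations by
# rewriting http:// requests to https://.
def parse_hsts(header_value):
    policy = {"max_age": None, "include_subdomains": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# A typical one-year policy covering subdomains.
policy = parse_hsts("max-age=31536000; includeSubDomains")
```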

The options that the working group is considering do a fair job of mapping the current debate over Web-based encryption. A common argument is that more sites can and should encrypt all or at least most of their traffic. Even better is when sites provide this encryption while at the same time providing strong cryptographic assurances that the server hosting the website is the one operated by the domain-name holder listed in the address bar—rather than by an attacker who is tampering with the connection.

Unfortunately, the proposals pass over an important position in the debate over Web encryption: the viability of the current TLS and Secure Sockets Layer (SSL) protocols that underpin all HTTPS traffic. With more than 500 certificate authorities around the world recognized by major browsers, the compromise of just one can break the entire system (although certificate pinning helps contain the damage in some cases). Nothing in Nottingham's letter indicates that this single point of failure will be addressed. Nor does the letter address the serious privacy implications the current HTTPS system has for end users: certificate authorities can log huge numbers of requests for SSL-protected websites, for example when browsers contact them to check whether a certificate has been revoked, and map those requests to individual IP addresses.
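Certificate pinning mitigates a rogue authority by checking the certificate a server presents against a fingerprint the client already knows, rather than trusting any CA-signed chain. A hypothetical sketch of the comparison step:

```python
import hashlib

# Hypothetical sketch of a pin check: compare the SHA-256 digest of the
# server's DER-encoded certificate against a fingerprint shipped with
# the client. A rogue CA can mint a certificate that validates under
# normal chain checking, but it cannot make that certificate hash to
# the pinned value.
def cert_matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex

# Stand-in byte strings for illustration only; real code would use the
# DER bytes of actual certificates.
real_cert = b"legitimate certificate bytes"
fake_cert = b"attacker-issued certificate bytes"
pin = hashlib.sha256(real_cert).hexdigest()
```

The pinned certificate passes the check while the attacker-issued one fails, regardless of which authorities signed either.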

It's unfortunate that the letter didn't propose alternatives to the largely broken TLS system, such as the one dubbed Trust Assertions for Certificate Keys, which was conceived by researchers Moxie Marlinspike and Trevor Perrin. Then again, as things are now, the engineers in the HTTPbis Working Group are likely managing as much controversy as they can. Adding an entirely new way to encrypt Web traffic to an already sprawling list of considerations would probably prove to be too much.