The Internet Engineering Task Force (IETF) has descended upon the City of Light this week to discuss ongoing work so the pipeline spewing out new RFCs and Internet standards doesn't stall. Probably close to a hundred working groups, covering topics from routing to various aspects of IPv6 (and even IPv4!) to Web security, will keep some 1,300 participants busy throughout the week. With eight groups meeting in parallel at any given moment, different people work on different topics, but usually one stands out. This time around, Web security seems to be in the air. The topic was discussed in the websec working group, but also in a panel during one of the few plenary sessions and in a lunchtime briefing by the Internet Society (ISOC).

Web security is a many-headed hydra. Security issues crop up because the security mechanisms are so complex, because there is so much money to be made on both sides, and because many users are lax (to name a few issues). On Monday, IETF's websec working group had a big discussion about how HSTS, the HTTP Strict Transport Security protocol, should fail. With HSTS, before it sends over the requested page, a Web server indicates that the page in question may only be requested over a secure connection. The idea is that with HSTS in place, an attacker wouldn't be able to get her hands on the cleartext data flowing between the browser and the server if the user simply types "paypal.com" rather than "https://paypal.com". (Of course, a man-in-the-middle attacker could remove the HSTS header from the HTTP protocol exchange, so it would be better to put HSTS in the [secure] DNS.)
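The browser's side of this can be sketched in a few lines of Python. The Strict-Transport-Security header name and its max-age directive come from the protocol itself; the in-memory store and helper names below are illustrative, not any real browser's API:

```python
import time

# Hypothetical in-memory HSTS store: hostname -> policy expiry timestamp.
hsts_store = {}

def record_hsts(hostname, header_value):
    """Parse a Strict-Transport-Security header, e.g. 'max-age=31536000'."""
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            hsts_store[hostname] = time.time() + int(value)

def upgrade_url(url):
    """Rewrite http:// to https:// for hosts with an unexpired HSTS policy."""
    if url.startswith("http://"):
        rest = url[len("http://"):]
        host = rest.split("/")[0]
        if hsts_store.get(host, 0) > time.time():
            return "https://" + rest
    return url

# After one HTTPS visit that returned the header, plain-http requests
# to the same host are upgraded before they ever hit the network.
record_hsts("paypal.com", "max-age=31536000")
print(upgrade_url("http://paypal.com/login"))  # https://paypal.com/login
```

The sketch also shows the bootstrapping problem: the very first visit to a host may still go out over plain HTTP, which is exactly where an attacker could strip the header.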

But now the websec working group is faced with a difficult decision: what if the secure connection can't be established? Common reasons for this include incorrect or expired SSL certificates. These are so common that users have been inadvertently trained to simply click "continue" if the browser throws up a security warning. Obviously, HSTS wouldn't be very successful if the only thing that stands in the way of an attacker is a warning that many users will dismiss without a second glance. On the other hand, a "hard fail" means that if there is any issue with the secure connection, even a common one such as an expired certificate, the site in question is completely unreachable. There doesn't seem to be any way to handle this that both cuts attackers off at the knees and is reasonably user friendly at the same time.

But server admins with poor calendaring skills and users with trigger-happy mouse fingers aren't the only issue. During the Monday plenary session, the first speaker talked about the need to make the entire Web run over secure HTTPS connections. But getting there isn't easy. Big sites load many external JavaScript files, mostly to serve ads and to provide usage analytics. If a site loads over HTTPS, then all the external elements it loads should also load over HTTPS; otherwise, the page's security could be compromised.
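The mixed-content rule itself is simple to state in code. A minimal sketch, with made-up URLs, that flags the subresources an HTTPS page would fetch insecurely:

```python
def insecure_subresources(page_url, resource_urls):
    """On an HTTPS page, return the subresources that would load over
    plain HTTP and thereby undermine the page's security."""
    if not page_url.startswith("https://"):
        return []  # the page itself is already insecure
    return [u for u in resource_urls if u.startswith("http://")]

print(insecure_subresources(
    "https://news.example/story",
    ["https://cdn.example/app.js", "http://ads.example/track.js"],
))  # ['http://ads.example/track.js']
```

The hard part, as the text notes, isn't detecting such resources; it's getting dozens of third-party providers to serve them over HTTPS.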

But good luck getting dozens of providers of external items to switch to HTTPS within a reasonable time, if at all. HTTPS has a rather heavy setup phase. This is not so bad for lengthy sessions, but if an ad network just wants to send a redirect to some other server, completing an entire SSL negotiation first imposes quite a hit on both the servers involved and the time it takes to load the element.

Of course, all of this assumes the certificates used for SSL (which powers HTTPS) work as advertised. Consider that the major browsers and operating systems come with trust anchors that allow some 1,500 certificate authorities (CAs) worldwide (by one count) to give out these certificates. And each CA can give out certificates for any DNS name. So when DigiNotar in the Netherlands was compromised, it was possible to create fake certificates for Google, a US company. These fake certificates were trusted by users throughout the world. The solution: remove the trust anchors for the DigiNotar CA.

However, the Dutch government asked browser vendors to delay this for several weeks while it scrambled to get new certificates in place, relying on a known-compromised CA in the meantime. The alternative was a complete shutdown of secure communication with and between government agencies.

CAs make mistakes all the time by signing names they have no business signing, as well as names that don't exist or don't even make sense (such as the .mars top-level domain). They have also been known to engage in rather dubious practices, like selling, for a lot of money, a wildcard certificate that matches everything. With such a certificate, it's possible to operate a proxy server that decrypts and inspects all HTTPS traffic. Normal HTTPS proxies don't decrypt; doing so without such a wildcard certificate makes the browser show dire security warnings. And then there are the CAs that keep using the RC4 and MD5 crypto algorithms, which have been known to be vulnerable to attack for years.

Currently, the browser vendors have one very big stick, which they are loath to use because it breaks tens or hundreds of thousands of sites and sends users fleeing to other browsers. Suppose the websec working group opts for the HSTS hard-fail option, and removing a popular CA makes 20 of the 100 largest Web destinations go dark overnight. That's not a stick you'd want to wave around indiscriminately. The browser makers would much prefer to have a larger number of smaller sticks.

And that's only when everything works more or less as expected. Web browsers have become extremely complex over the years, so implementation mistakes are easily made. Another issue is that Web browsers keep changing, breaking Web developers' assumptions in the process. When AJAX rolled around and introduced XMLHttpRequest, Web browsers became capable of issuing HTTP(S) requests under their own authority. This is restricted through a same-origin policy, so requests can only be sent to the site the JavaScript issuing the request was loaded from. But then cross-origin resource sharing (CORS) arrived, because sometimes it's legitimately useful to share resources across origins. This caught Facebook flat-footed: they didn't expect the new capability and initially allowed code loaded from random sites to get at logged-in users' data.
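The two checks involved can be sketched roughly like this: the same-origin comparison on the (scheme, host, port) tuple, and the CORS escape hatch where the server opts in via an Access-Control-Allow-Origin response header. The function names are illustrative, and this glosses over preflights, credentials, and the other details of what browsers actually do:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Reduce a URL to its (scheme, host, port) origin tuple."""
    p = urlsplit(url)
    return (p.scheme, p.hostname, p.port or DEFAULT_PORTS.get(p.scheme))

def request_allowed(script_url, target_url, allow_origin=None):
    """Same-origin requests always pass; cross-origin ones need the
    server's Access-Control-Allow-Origin header to name the caller."""
    if origin(script_url) == origin(target_url):
        return True
    scheme, host, port = origin(script_url)
    default = port == DEFAULT_PORTS.get(scheme)
    serialized = f"{scheme}://{host}" + ("" if default else f":{port}")
    return allow_origin in ("*", serialized)

print(request_allowed("https://a.example/app.js", "https://a.example/api"))  # True
print(request_allowed("https://a.example/app.js", "https://b.example/api"))  # False
```

The Facebook incident fits the model: a server that answers with an overly permissive Access-Control-Allow-Origin effectively hands its users' data to scripts from anywhere.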

Tuesday during lunch, the Internet Society provided a briefing on OpenID and OAuth, two technologies related to identity and user login on the Web. OpenID has been around for a few years and lets users log in to one site with their credentials from another. Apparently, it's even possible to log in to your Gmail account using your Hotmail login thanks to OpenID (although this feature isn't widely advertised). You may remember OAuth from the Twitter case study. OAuth is a way to authorize third parties to access your information or act on your behalf, as an alternative to handing over your password and hoping for the best. All third-party Twitter tools are required to use OAuth to gain your permission to access your Twitter account.

Unfortunately, these tools don't (yet) understand the concept of data minimization, where an application is only granted the access it really needs. For instance, favstar.fm promises to let you "see your own most popular tweets" and more. But when you click "sign in via Twitter," it also asks for the ability to post updates, which it doesn't seem to need for its most prominent features, if at all.
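Data minimization in practice would mean an authorization request that asks only for the scopes the app's headline feature needs. Here's an OAuth 2.0-style sketch with a hypothetical endpoint, client, and scope names (the Twitter flow of the time used OAuth 1.0a with app-level permissions rather than per-request scopes):

```python
from urllib.parse import urlencode

def authorization_url(endpoint, client_id, redirect_uri, scopes):
    """Build an OAuth 2.0-style authorization URL that requests only
    the listed scopes, rather than everything the API offers."""
    return endpoint + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    })

# A favstar-like app only needs to read tweets, not post them,
# so it should ask for a read scope and nothing more.
url = authorization_url(
    "https://provider.example/oauth/authorize",
    "favstats-demo",
    "https://app.example/callback",
    ["tweets.read"],
)
```

The user (and the provider's consent screen) can then see at a glance that the app is asking only to read, not to post.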

Of course, if I just use my Gmail address and password to sign up all over the Web, I get the benefit of only needing to remember a single password, without any complex security mechanisms. But big e-mail providers such as Gmail, Hotmail, and Yahoo really hate that, because the practice is extremely insecure: once your account is compromised in one place, all your accounts everywhere are wide open. So Yahoo would rather have you log in to Flickr using your Gmail account through OpenID than have you use a throwaway account with 1234 as the password. But what they'd like even more is for you to have an account with your real name, which can be correlated with other information about you, which helps sell more targeted (and thus more expensive) ads. That is exactly why people may be tempted to use a throwaway account.

So big money, complex technology, and privacy concerns all come together in Web security and Web identity. It all means silver bullets are hard to come by, but once in a while, progress is made.