The recent Wi-Fi “KRACK” vulnerability, which allowed an attacker in range to decrypt traffic on a supposedly secure network (and which was quickly patched by reputable vendors), had been sitting in plain sight behind a corporate-level paywall for 13 years. This raises a number of relevant, interesting, and uncomfortable questions.

When this week’s KRACK Wi-Fi vulnerability hit, I saw a series of tweets from Emin Gün Sirer, who mostly tweets on bitcoin topics but seemed to know something many didn’t about this particular Wi-Fi vulnerability: it had been in plain sight, but behind paywalls with corporate-level fees, for thirteen years. That’s how long it took open scrutiny to catch up with the destructiveness of a paywall.

In this case, close scrutiny of the protocol would have (and in fact, did) uncovered the nonce reuse issues, but didn't happen for 13 years. — Emin Gün Sirer (@el33th4xor) October 16, 2017
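The “nonce reuse” the tweet refers to is a classic failure mode of stream-style encryption: if the same key and nonce are ever used twice, the two keystreams cancel out, and XORing the two ciphertexts reveals the XOR of the two plaintexts. Here is a minimal sketch of that effect, using a deliberately toy hash-based keystream rather than WPA2’s actual AES-CCMP construction (all names and the cipher itself are illustrative, not the real protocol):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256(key || nonce || counter).
    # NOT a real cipher -- just enough to demonstrate the failure mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return xor(plaintext, keystream(key, nonce, len(plaintext)))

key = b"secret-key"
nonce = b"\x00" * 8  # a reinstalled key resets the nonce, so it repeats

p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"
c1 = encrypt(key, nonce, p1)  # same key, same nonce...
c2 = encrypt(key, nonce, p2)  # ...twice

# The identical keystreams cancel: an eavesdropper who sees only the
# two ciphertexts learns the XOR of the two plaintexts.
assert xor(c1, c2) == xor(p1, p2)
```

This is the mathematical reason KRACK’s key-reinstallation trick (which forces a nonce to repeat) is so damaging: the attacker never needs the key itself to start recovering plaintext structure.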

Apparently, WPA2 was based on IEEE standards (802.11i, in this case), which are locked up behind subscription fees so steep that open-source activists and coders are simply locked out from reading them. This, in turn, meant that this vulnerability was in plain sight for anybody who could afford to look at it, for almost a decade and a half. There are so many issues and followup questions here that the topic deserves at least two more articles, if only so the headlines can cover one important point at a time (yes, that’s necessary today).

This also means that one of two things was true: one, those who could afford to look at it didn’t bother to, or two, those who would have bothered to look at it and understand it couldn’t afford to do so. Both are problematic. (There’s also a third option, even more problematic, below – when an actor who can both afford and understand it keeps the research to themselves as a zero-day sploit.)

The first obvious point is that security doesn’t work if it’s not out in the open. If this wasn’t the final nail in the coffin for security through obscurity – where paywalls definitely count as obscurity – then I don’t know what would be.

The second point is that this isn’t the only security-critical standard whose evidence of security is locked up. As has been shown, each component of the security stack may have passed its unit tests, but the integration tests were clearly insufficient. In other words, it doesn’t matter if every individual proof of security comes out right if you haven’t proven the whole system to be secure (as opposed to just individual pieces of it). We can expect several more severe vulnerabilities to be sitting in plain sight behind corporate paywalls.

The third point, which is going to be expanded in the first followup article, is that while ordinary activists and coders were locked out of reviewing these documents, the NSA and the like had no shortage of budget to pay for subscriptions to these specifications. Thus, the IEEE’s paywall was tilting the security field toward mass surveillance, away from security.

The fourth point, which also merits expansion, is that if something as severe as this was unread for thirteen years because it was behind a paywall — what does that say about legacy media’s current infatuation with paywalls to protect their “genuine journalism”?

Privacy remains your own responsibility.