I don’t know how Evernote stored my password, you know, the one they think might have been accessed by masked assassins (or the digital equivalent thereof). I mean I know that their measures are robust but then again, so were Tesco’s and according to their definition, “robust” means storing them in plain text behind a website riddled with XSS and SQL injection (among other security faux pas).

Last year we saw LinkedIn breached and millions of unsalted SHA1 password hashes exposed. Last week we saw Australia’s own ABC do the same thing; it took me 45 seconds to crack 53% of those and others have since gone on to crack more than 90% of them. These storage mechanisms are not robust, they’re stupid. The problem, of course, is that consumers don’t know a website’s password approach is stupid until after they’ve entrusted their passwords to the site. That is a fundamental problem.
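To see why unsalted hashes fall so fast, here’s a minimal sketch (the wordlist and “leaked” hashes below are made up for illustration, not from any real breach): with no salt, each candidate word is hashed exactly once and tested against the entire dump in one go.

```python
import hashlib

# Hypothetical leaked dump of unsalted SHA1 hashes (illustrative only).
leaked_hashes = {
    hashlib.sha1(b"password1").hexdigest(),
    hashlib.sha1(b"letmein").hexdigest(),
    hashlib.sha1(b"correct horse").hexdigest(),
}

# A tiny stand-in for a real multi-million-word cracking dictionary.
wordlist = ["123456", "password1", "letmein", "qwerty"]

# One hash per candidate covers every account at once; with per-user
# salts, this work would have to be repeated for every single user.
cracked = {w for w in wordlist
           if hashlib.sha1(w.encode()).hexdigest() in leaked_hashes}
print(cracked)  # {'password1', 'letmein'}
```

A real attack is the same loop run on GPUs against billions of candidates, which is why double-digit percentages of a dump fall in seconds.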

I propose that websites should be required to disclose their password storage mechanism. The disclosure would sit right next to the point where the password is provided for persistent storage, namely on the registration and password change pages. It would be as simple as this:

Let me explain the thinking.

Consumers don’t care (but companies do)

Well actually, that’s not entirely true; consumers don’t care about password storage when they’re signing up for a service, they just want to get online, buy their book / t-shirt / porn then move along to something else. However, consumers do care if their credentials are leaked and other adverse things happen as a result. Of course they shouldn’t have reused their damn passwords in the first place but unfortunately, this is the world we live in.

No, public disclosure of password storage mechanisms won’t change consumer behaviour, but it will change corporate behaviour. Let’s take a case in point – Tesco. You would never see this on their registration page:

That’s not because they don’t store them in plain text, they do (allegedly), rather it’s because there’s no way an organisation like Tesco will ever willingly acknowledge it. Let’s face it, plain text storage of credentials is probably the single most widely criticised security practice in the industry right now, and encrypted passwords (remember folks, this is different to “hashed”) are not far behind it. No, an organisation would far sooner fix their sloppy password storage than publicly acknowledge such a glaringly bad practice.
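The hashed-versus-encrypted distinction matters, so here’s a quick sketch of the property that separates them (the XOR “cipher” below is a deliberate toy standing in for real encryption, purely to show reversibility):

```python
import hashlib

password = b"hunter2"

# Hashing is one-way: there is no function that turns the digest back
# into the password. An attacker can only guess-and-check candidates.
digest = hashlib.sha256(password).hexdigest()

# "Encryption" here is a toy XOR cipher, NOT a real scheme; the point
# is that anyone holding the key reverses it perfectly, so a stolen
# key turns the whole password table back into plain text.
key = b"\x13" * len(password)
ciphertext = bytes(p ^ k for p, k in zip(password, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == password   # reversible with the key
print(digest[:16], "...")      # the hash cannot be reversed, only guessed
```

That reversibility is exactly why encrypted storage is only as good as the key management around it.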

“But it will make it easier to crack passwords”

But it won’t, and there are several reasons why this argument doesn’t stick. First of all, let’s pull out a bit of Kerckhoffs’s principle just for good measure:

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.

Now of course that last statement is precisely what happens in cases like the ABC; it was as simple as Googling a cipher to immediately understand how their cryptography had been implemented. In the vast majority of cases, the implementation is one of the default models offered by the web framework being used. We know this because we have extensive evidence from numerous breaches.

The other simple reason this statement isn’t valid is that the password crypto knowledge is only of any use after the site has already been breached and passwords disclosed. Stating the password storage mechanism provides absolutely zero value to an attacker attempting to exploit, say, a SQL injection risk – it’s just not even in the same ballpark.

Would a statement about poor password storage make a site a bigger target? Ignoring for a moment that other, totally independent flaws must exist in order to gain access to the (hopefully) protected passwords in the first place, if an organisation lacks the confidence in their password storage mechanism to publicly disclose it, should some pressure not be placed on them?

Another reason this doesn’t make much sense is that many websites leak information about their password storage mechanism anyway. Ever used a “forgot password” feature and been emailed your password? That’s disclosure that they’re not hashing it, so the password is either immediately accessible once the database is disclosed, or accessible once the key is obtained; and once a box is popped, obtaining the key is very, very frequently a trivial task. There’s an entire site dedicated to naming these purveyors of poor password management over at plaintextoffenders.com, so there is already voluminous public data on them.

How much information should be disclosed?

Password storage isn’t always just as simple as “we use this hashing algorithm with this salt” and indeed the protections offered by, say, symmetric encryption may be as good as null and void if the key management strategy is bad. So how much information should be disclosed? Where do you draw the line between a simple statement as seen in the badges above and a more comprehensive – and perhaps revealing – statement of a website’s security position?

As I said earlier, this all comes back to the simple fact that whilst it’s possible to create more elaborate password protection schemes, it’s rarely done. Nine times out of ten (ninety-nine times out of a hundred?) the crypto algorithm, key strength and salt would be sufficient. If there are defences beyond this then great, but if you’re just whacking a single round of SHA1 with a salt on the password then regardless of what else you’ve done there’s the potential for very fast pwnage.
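The “single round” part is the killer, and the standard fix is a deliberately slow key derivation function. A minimal sketch of the cost difference (the 100,000 iteration count is an illustrative figure, not a recommendation):

```python
import hashlib
import os
import timeit

password = b"Tr0ub4dor&3"   # illustrative password, not a real credential
salt = os.urandom(16)

# A single salted SHA1: the salt defeats precomputed rainbow tables,
# but one hash per guess still lets GPU rigs test billions of
# candidates per second.
fast = timeit.timeit(lambda: hashlib.sha1(salt + password).digest(),
                     number=1000)

# PBKDF2 makes every guess pay for thousands of hash invocations,
# slowing the attacker down by the same factor as the defender.
slow = timeit.timeit(
    lambda: hashlib.pbkdf2_hmac("sha1", password, salt, 100_000),
    number=10)

print(f"per-guess cost ratio: ~{(slow / 10) / (fast / 1000):,.0f}x")
```

The ratio is roughly the iteration count: a guess that took a nanosecond-scale hash now takes tens of milliseconds, which turns a 45-second crack into years.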

This information could be contained in a short statement at the point of password creation, namely at registration and password change facilities. The vast majority of cases will fall into one of a handful of simple patterns relating to algorithm and strength, so it would be easy to template. Smarter crypto people than me would find the right balance of information, but IMHO the first image in this post (which, incidentally, is from the ASP.NET universal membership provider) tells you everything you need to know.
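Because the common patterns are so few, templating the disclosure is trivial. A hypothetical sketch (the field names and wording here are my own assumptions, not a proposed standard):

```python
# Hypothetical disclosure template; a handful of these would cover
# almost every site's password storage configuration.
DISCLOSURE = "Passwords are stored as {algorithm} with {salt} and {work_factor}."

# Illustrative values for one site; each org fills in its own.
site_config = {
    "algorithm": "PBKDF2-HMAC-SHA1",
    "salt": "a unique 16-byte random salt per user",
    "work_factor": "100,000 iterations",
}

statement = DISCLOSURE.format(**site_config)
print(statement)
```

A “plain text” or “unsalted SHA1” config would render just as easily, which is exactly the point: the shameful ones are the ones nobody would dare publish.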

Legislative implications

Of course the problem with any sort of requirement like this is legislation, and the problem with legislation is that it’s going to differ depending on where the website is located. There are no easy answers to this, but there are precedents.

Here’s a good case in point: the EU cookie law. Without a doubt, this is one of the stupidest online laws you will find. If you’ve had the distinct pleasure of not seeing this in action before, here are some examples:

“Hey, you’re going to get these little bytes of data stored securely within your browser but they make the website better, honest, and if you don’t like it then you can GTFO because there’s never a ‘decline’ button anyway (but you can then weed through your browser settings, disable them and screw up your ability to logon just about anywhere)”.

If the EU can pass such an inane, illogical, impractical, zero-value law that gives nothing back to the consumer and merely impedes usability, surely they can cope with such a fundamental requirement as to confirm that such an essential security practice has been properly put in place?

And speaking of security, mandatory data breach laws are a good example of where legislation has a number of precedents. The US has had laws in place to address this for a decade, the EU has had its Directive on Privacy and Electronic Communications for a few years now, and even down here in Australia there’s a renewed push to implement a mandatory data breach notification law (it’s surely just a matter of time).

If such broad support can be garnered for notifying consumers after a company has screwed up their security, surely it must be feasible to do it before they’ve lost everyone’s passwords?

How would you roll this out?

You provide a grace period after which companies need to comply. There are many precedents for this: the EU cookie law is one, Australia’s Spam Act 2003 is another. There is no shortage of similar laws where periods of grace have been provided for companies to get compliant. Naturally you allow sufficient time because let’s face it, a whole bunch of orgs are actually going to fix their crappy implementations rather than whack the big “plain text” badge up on the site.

What about the overhead on organisations who then need to abide by this new regulation? C’mon, we’re talking about a single statement that exists in maybe two places on the website and you provide a long lead time to implement it. The only time this is going to amount to anything more than a trivial amount of work is if an org feels compelled to change their approach to storage rather than disclose the current state and honestly, is this not a good thing? If an organisation is either too embarrassed to disclose their password storage mechanism or if disclosure leads to increased risk, they’ve failed at it badly and it’s better for that information to be public now rather than after a breach when it’s too late.

Then of course there’s enforcement – what happens when an org doesn’t comply? Or worse, lies about the password implementation? Inevitably this depends on the jurisdiction of the offending site, but there are many precedents of controls and penalties already in place to handle similar laws for such grievous offences as not declaring those extra few bytes in the response header (cookies) or not disclosing a breach (ok, these guys should really get the book thrown at them). If we can fine companies for not putting an unsubscribe link in an email, we can fine them for jeopardising passwords.

The constructs exist already.

In summary…

We have a security problem on the web, of that there is no doubt. What compounds this is that we also have a bullshit problem. You can see this problem in action every time an organisation talks about being “robust” or “never being hacked” or any other number of subjective, unquantifiable statements that tell you nothing about the measures that are actually in place and amount to little more than marketing speak. This is just not good enough. Trust alone is not good enough. We need accountability before passwords are compromised, not after.

What password storage disclosure does is solve the bullshit problem, which in turn goes a long way towards solving the security problem. You cannot both make claims of “robustness” and admit to SHA1 with no salt; they’re simply not compatible statements. Of course it won’t address the vector by which the passwords were obtained in the first place when a breach occurs, but it will change the approach to password storage by many organisations as they simply will not stand up and publicly admit to storage in plain text. There will also be a significant number that will be reluctant to admit to insecure cryptographic storage, and those that do will open themselves to public derision, and rightly so.

Password storage disclosure laws leave the incompetent open to ridicule and reward those who’ve taken customer data seriously. Surely only good things can come of that for consumers.