Today it was revealed that servers at Apache.org and Atlassian were successfully attacked, leading to thousands of stolen passwords. The attack on apache.org's servers was via JIRA, and since the attack on Atlassian came from the same source, it probably was also through JIRA.

I'm sure that JIRA's programmers feel embarrassed enough about all of this--I don't want to berate them or insult them. Everybody makes mistakes; almost all software has some pretty bad security vulnerabilities at times. Overall, the JIRA guys seem to do good work, and seem to be generally nice people. And on top of all of that, I understand how they feel! Whenever there's a reported security issue in Bugzilla, I freak out. Thankfully there hasn't been an attack on Bugzilla like this JIRA one in recent memory. But if there were an attack like this, I'd be absolutely mortified, and the last thing I'd need would be somebody trying to insult or attack me for simply having made a mistake.

Instead, I want to use this opportunity to remind all web application developers why web application security is so important, and to talk about some of the things we do in Bugzilla that would have prevented or mitigated an attack like this one--things that web applications should probably all do as standard practice:

Lock down the on-disk permissions of files and directories. When you install Bugzilla, the installation script itself makes sure that the permissions on Bugzilla's files and directories are as secure as possible. That way, even if there is a security compromise in Bugzilla, the attackers can't upload programs and run them, modify existing scripts, or generally do anything nasty to the machine. It's particularly important that web applications never allow anything to be uploaded into a location where the web server could execute it. This is something I rarely see web applications stress in any of their documentation. Some web applications recommend that system administrators fix permissions themselves, but chances are that the vast majority of people installing your software will skip the optional security recommendations and just go for whatever's easiest. The only way to guarantee that security happens right on every installation is to have the installer itself set the permissions. The attackers configured the Apache JIRA to allow uploads into a location where the web server would execute files, which is what let them compromise Apache's servers and steal the passwords of every JIRA user who logged in to the system. If, as in Bugzilla, it were impossible to configure JIRA that way, that part of the attack would have been impossible.
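As a rough illustration of what an installer can do (Bugzilla's actual installer is Perl; this is a simplified Python sketch with made-up paths and a helper name I've invented), the idea is: code is read-only, and the one writable directory is tightened down and kept out of the web server's executable space:

```python
import os
import stat


def lock_down(webroot, data_dir):
    """Tighten permissions the way an installer might: application files
    become read-only, and the writable data directory is restricted.
    Serving data_dir as executable content must also be blocked in the
    web-server configuration; this function only handles filesystem bits."""
    for dirpath, _dirnames, filenames in os.walk(webroot):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Code and templates: owner read/write, everyone else read-only.
            os.chmod(path, stat.S_IRUSR | stat.S_IWUSR
                     | stat.S_IRGRP | stat.S_IROTH)
    # Writable data directory: full access for the owner, traversal only
    # for the web server's group, nothing for anyone else (mode 0750).
    os.chmod(data_dir, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
```

The point is that the user never has to remember to do this; every installation comes out locked down by default.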

Httponly: Never allow Javascript to read the login cookie. This is one of the simplest and most effective protections you can make in a web application. Seriously--for Bugzilla, it was just a few lines of code, and eliminated a whole set of possible attacks. All you have to do is to set an extra attribute on cookies when you send them, and you gain a lot of security. If Javascript must read some of your cookie data, that's fine, just don't let it read the login cookie. If Httponly had been set on the Apache JIRA session cookie, then the cross-site scripting attack that the attackers used could not have stolen administrators' login privileges.
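Those "few lines of code" look roughly like this in any language--here's a sketch using Python's standard-library cookie support (the cookie name "session" is just an example):

```python
from http.cookies import SimpleCookie


def login_cookie_header(session_id):
    """Build a Set-Cookie header value for the session cookie with the
    HttpOnly attribute set, so page JavaScript (and therefore any XSS
    payload) cannot read it. Secure restricts the cookie to HTTPS."""
    cookie = SimpleCookie()
    cookie["session"] = session_id
    cookie["session"]["httponly"] = True
    cookie["session"]["secure"] = True
    return cookie["session"].OutputString()
```

A stolen-cookie XSS attack relies on something like `document.cookie` being readable; with HttpOnly set, the browser simply never exposes the login cookie to scripts.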

Open-source your software. Okay, look, I know that that's not practical or possible for everybody. But I will tell you, a lot of the security bugs that are found in Bugzilla are found by people we've never met who just happened to be reading the code. These users find all our security issues before they're ever exploited, and so we can release fixes before systems are harmed. In particular, I don't think there has ever been a successful cross-site scripting attack performed on a Bugzilla--at least not any publicly discussed in the six years I've been working on Bugzilla. The "many eyes make all bugs shallow" maxim may not always be true, but for security issues, in my experience, it has absolutely held up. If the cross-site scripting vulnerability in JIRA had been found by an outside user before it was exploited, the Apache JIRA administrators would have been safe from it.

Have automated tests scan your code for potential security issues. There are lots of ways to do this. In Bugzilla, we have an automated test that makes sure we properly "filter" any data that we get from the user or the database before displaying it on a web page, so that people can't inject malicious HTML or JavaScript into our system. The automated tests don't always catch our security issues, but the number of times I've fixed a security issue in my code thanks to the tests is uncountable--probably in the thousands, at this point. And those are fixes that happen before the code even gets checked in, so each one is a security vulnerability that gets fixed before it ever becomes part of the product. There are lots of other ways to do automated security testing of code these days. Static code analysis, fuzz testing, and automated security scanners seem to be the most popular, from what I've seen. If the cross-site scripting vulnerability in JIRA had been found by automated tests before it was exploited, the Apache JIRA administrators would have been safe from it.
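Bugzilla's actual filtering test inspects Perl templates; as a language-neutral sketch of the same idea, here's a tiny render function plus the kind of check a test suite can run on every commit (the function names are made up for illustration):

```python
import html


def render_comment(comment):
    """Render user-supplied text into a page, escaping HTML
    metacharacters so injected markup comes out as inert text."""
    return "<p>%s</p>" % html.escape(comment)


def test_no_script_injection():
    """Automated check: a classic XSS payload must be neutralized
    before it reaches the page."""
    payload = '<script>steal(document.cookie)</script>'
    out = render_comment(payload)
    assert "<script>" not in out
    assert "&lt;script&gt;" in out
```

A test like this is cheap to run on every commit, so an unescaped output path gets caught long before it ships.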

Lock out users who fail to guess their password too many times. There are lots of approaches to account security, but this is one of the simplest and most failsafe. If an attacker can only guess five passwords every 30 minutes before being locked out, the statistical probability that they will ever guess anybody's password is pretty slim. Starting with Bugzilla 3.6, we implement exactly that policy, and we even notify the Bugzilla administrators whenever somebody gets locked out, so that if there's a large brute-force password attack, the admins will know immediately. Some people say that the answer to password security is to have people change their passwords every three months. This may be sensible on some systems, but on a web application, it's mostly pretty ridiculous. If you only change your password every three months, that gives an attacker three months to guess your password. I can promise you that almost any normal user password could be guessed in that time, particularly if the system doesn't prevent brute-force attacks. And once the attacker has your password, they can usually do all the damage they want within a few minutes. So almost any forced-rotation period is pretty silly in a pure web application. (In other systems it can make sense--it all depends on the context.)
Other people suggest that passwords need to be a certain length or complexity, and up to a point, that's true. If your password is one of the 100 most common passwords, then even with a sensible lockout policy, an attacker will eventually guess it if they keep at it for a few days. (Of course, in Bugzilla, the system administrators would see all of these lockout notices and probably stop the attack pretty quickly. Still, it's better to be safe than sorry.) So your application should probably enforce a level of password complexity that, combined with your lockout policy, makes guessing passwords impractical. If the Apache JIRA had had brute-force password-guessing protection like Bugzilla's lockout method, the attackers would not have been able to discover administrators' passwords that way. (My understanding is that newer versions of JIRA do have this protection.)
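The five-failures-per-30-minutes policy can be sketched in a few lines (this is not Bugzilla's code--just an in-memory Python illustration, with `notify_admins` as a hypothetical hook for the admin notification described above):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 30 * 60  # five failures within 30 minutes -> lockout

# username -> timestamps of recent failed login attempts
_failures = defaultdict(list)


def is_locked_out(username, now=None):
    """True if the account has hit the failure limit within the window."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent  # expire old failures as we go
    return len(recent) >= MAX_ATTEMPTS


def record_failure(username, now=None):
    """Call this on every failed login; alert admins when a lockout starts."""
    now = time.time() if now is None else now
    _failures[username].append(now)
    if is_locked_out(username, now):
        notify_admins(username)


def notify_admins(username):
    # Hypothetical hook: e-mail the administrators so a large
    # brute-force attack is visible immediately.
    pass
```

A real implementation would keep the failure counts in the database rather than memory, but the arithmetic is the whole trick: at five guesses per half hour, an attacker gets roughly 240 guesses a day, which is hopeless against even a mediocre password.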

Store passwords securely. If you're going to store a password in the database or anywhere else, store it using some standard, secure method. Don't just hash passwords--you have to at least salt them. Preferably, don't invent your own password-storage scheme at all--just use a library that already exists. Never store passwords as plain text. You might be saying to yourself, "Oh, nobody will ever break into the system and steal them." That sounds pretty good until somebody does break in and steal them, and then you'll really wish you had stored them properly. If the Apache JIRA had been storing passwords properly, then Apache.org's users would be at far less risk of the attackers now knowing all the passwords in JIRA.
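For instance, Python's standard library already provides PBKDF2, a standard salted, iterated scheme--here's the shape of doing it right (a sketch, not a prescription; a dedicated library would also manage iteration counts and encoding for you):

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256.
    A fresh random salt per password means identical passwords
    produce different stored values, defeating precomputed tables."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return salt, digest


def check_password(password, salt, stored_digest, iterations=100_000):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)
```

With storage like this, a stolen database gives the attacker only salts and slow-to-crack digests instead of every user's actual password.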

And finally, on top of all those points, if you're a system administrator, upgrade your software regularly. Some of the security issues that let the Apache JIRA be compromised were supposedly fixed in newer versions of JIRA, before the attack ever happened. Almost every time I hear of an attack like this, it uses old, known problems to compromise the system.

Nobody likes getting attacked. Everybody feels bad about it, when it happens--system administrators, programmers, and most especially users. So let's just design secure applications to start with, and never have any of our system administrators or users have to bear the burden of compromised systems and stolen data.

-Max