Last week we were raked over the coals — and rightfully so — for failing to communicate clearly about a security exploit that was discovered in early August.

It turned out that the issue was related to Ruby on Rails, and the exploit affected multiple products (ours and others). After an initial investigation and escalation to the Rails security team, the root cause was patched within a few days of the initial report. We then updated our apps, tested them, and deployed the fixes.

Fixed promptly, communicated poorly

Problem was, we did a terrible job communicating from start to finish. We didn’t communicate well with the person who initially reported the vulnerability, we didn’t communicate well internally, and we didn’t communicate well publicly.

Perfect security is a moving target. New exploits and security discoveries pop up over time. They occur in OSes, web browsers, frameworks, embedded systems, and commercial software. Anyone who’s in the software business has to deal with these issues from time to time. What matters is that issues are taken seriously, delegated properly, handled appropriately based on severity and priority, and communicated clearly with all parties involved. When someone reports a security issue, they’re reporting it because they want to help. It’s important for us to keep that in mind.

Getting better

After reviewing what went wrong, we began reworking our internal process for handling security reports. That's a longer-term project we've only just begun. What we could do in the short term was review how we communicate publicly about security on our main security page on 37signals.com.

After a fresh slap in the face (which is definitely a healthy thing from time to time), it became clear that the words we were using were the wrong words. We weren’t setting the right expectations. Some of the lines were cringeworthy. It just wasn’t us. I don’t think any of us ever liked the way this page was written, but we never got around to changing it. Now was the time.