1) Learn from cautionary tales (without becoming one)

The initial reception is good. Our client is pleased with the product and the users don’t really care about how secure our application is or is not. The technical choices we made to get to this point don’t affect the user directly, as the product has the same features it would have had otherwise.

Our good friend Dr Emmett Lathrop Brown, Ph.D. explains the concept of causality

A few months pass. Maybe we roll into the summer, and the kids are off school. It’s vacation season, and the office is emptying out over July and August.

We’re also into conference season, so DEF CON and Black Hat are coming up.

An innocuous-sounding demo is tentatively pencilled into the schedule, and our team is unaware because they: A) are on vacation, and B) have never heard of either of the events I just mentioned.

The demo presents a flaw on the level of Heartbleed or EternalBlue. In other words, a really severe zero-day exploit.

It’s foolish to take for granted that the folks who find these flaws have noble intentions or conform to the principles of responsible disclosure, but let’s assume optimistically that they do.

So, well before the announcement, these merciful folks have discreetly disclosed the issue to the major vendors and the maintainers of the major frameworks.

Patches have been issued along with advisories explaining the severity, but not the specifics, of the attack vector. The third-party security framework we shunned four months ago has already been updated, and an emergency patch has been pushed out to affected projects. They’re safe. We are not.

We’re on vacation, enjoying the sunshine with friends and family, and our phone buzzes in our pocket.

We remember that we’re on call this week. It’s Joe, the junior developer from work who is holding the fort.

Did he forget his domain password again?

The website keeps crashing when people try to sign in. Joe is stumped.

That’s weird. We don’t do releases without people on hand. Did someone push a breaking change to live?

At first we assume the database is offline, so we check the status monitor. Everything is green, so it’s not a connectivity problem.

We bounce a few calls around to get permissions approved so we can inspect the live data for problems. We get our access, and try to connect — but the schema won’t load in our client.

Good. Now we know it’s a database issue. Let’s take a look.

We inspect the database, but the tables are empty. We figure our database explorer is just playing up, so we try to connect on the command line, but we know in the pit of our stomach that’s not the problem.

S**T.

It’s… gone.

At this point, it’s not clear what’s happened. Everyone is now involved, all the way up the organisation (is it a breach? data corruption?). The first priority is to get the site back online, so the latest backup is restored and users are told that the downtime is being investigated. “Possible” data loss is mentioned briefly in the advisory notice.

As the site gets back online, an e-mail arrives with a sample of the missing data… and a ransom demand threatening to dump the lot onto the dark web.