This timeline should kick off every subsequent meeting. It should focus specifically on what the bad guys did. Every meeting should sync on additions and removals as new information is gathered, and plot the bad guys' movements. This timeline will dictate everything from technical mitigations to your PR and legal strategy. It will also help you make narrow, efficient queries if large data sets become involved.

The situation will change drastically as the timeline is updated. It is the most important piece of incident response.
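One way to make the timeline useful for narrow queries is to keep it as structured records rather than free text. Here is a minimal sketch; the field names, dates, and events are hypothetical illustrations, not details from a real incident:

```python
from datetime import datetime

# Timeline entries as structured records (hypothetical fields and dates).
timeline = [
    {"when": datetime(2017, 6, 15, 9, 2),   "what": "Avery opens paystub.pdf",     "source": "email logs"},
    {"when": datetime(2017, 6, 15, 9, 5),   "what": "Malware beacons out",         "source": "proxy logs"},
    {"when": datetime(2017, 6, 16, 22, 40), "what": "avery-admin logs into prod",  "source": "auth logs"},
]

def events_between(events, start, end):
    """Narrow a large data set to just the window the timeline says matters."""
    return [e for e in events if start <= e["when"] <= end]

# Query only the first day of known attacker activity.
suspect = events_between(timeline, datetime(2017, 6, 15), datetime(2017, 6, 16))
for e in suspect:
    print(e["when"], "-", e["what"], "(" + e["source"] + ")")
```

Keeping the timeline queryable like this means that when a terabyte of logs shows up, you already know exactly which time windows and hosts to pull.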

2. New Indicators of Compromise

So uh, what exactly am I searching for, then?

An “indicator of compromise,” or IOC, is a small data artifact with high signal in pointing out an intrusion: for instance, an IP address involved in the exfiltration of data, or the MD5 hash of some malware. Here are the IOCs for our example incident:

- ‘badhacker@gmail.com’ (this email sent malware, so everything it has sent is suspect)
- The MD5 hash of paystub.pdf (any other file matching this hash is evil, even if named differently)
- Usage of ‘avery-admin’ from June 15th onward (it was only used for evil)
- The IP address data was exfiltrated to (anything else talking to that IP is evil)

This agenda item for the meeting involves asking everyone for updates to the IOC list. This list guides the investigation for everyone involved. Every new IOC is automatically a new task for every participant who is hunting for bad guys on your systems.
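Hash-based IOCs in particular turn into concrete hunting tasks quickly. A minimal sketch of scanning a directory tree for files matching a known-bad MD5 (the hash below is a placeholder, not the real hash of paystub.pdf):

```python
import hashlib
from pathlib import Path

# Known-bad MD5s from the IOC list (placeholder value, not a real malware hash).
BAD_MD5S = {"d41d8cd98f00b204e9800998ecf8427e"}

def md5_of(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_ioc_matches(root):
    """Yield every file under `root` whose MD5 matches a known-bad hash,
    regardless of what the file is named."""
    for p in Path(root).rglob("*"):
        if p.is_file() and md5_of(p) in BAD_MD5S:
            yield p
```

The point of matching on the hash rather than the filename is exactly the one in the IOC list above: a renamed copy of the malware is still the same malware.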

3. Investigative Q&A

We’re still finding more stuff the bad guys did, and still don’t know everything.

There will always be huge gaps in the timeline. To close them, build a list of questions you need answered to be confident that your timeline truly represents what the bad guys did. Maintain these questions, update them with answers, and keep them visible to the team for the duration of the incident. Incoming answers should feed back into the timeline. Also, make sure some questions focus on discovering bad things that may have happened between each sync-up.

- Who else clicked on paystub.pdf? (No one, according to IT, interviews, and logs)
- What logs were wiped, and were they stored anywhere else? (Logs were found in backups)
- What hosts did the malware dropped by paystub.pdf talk to? (Forensic support, retained by outside counsel, will answer tomorrow)
- What other hosts did avery-admin speak to? (Three other hosts; adding these to the timeline, along with new questions)
- Are we ready to go back to work? (Yes; critical systems like email, directory services, and other critical pivot points are cleared of compromise, so it's time to come back online)
- Has anything bad happened since the last sync-up?

In subsequent meetings, you should have answers to some of these questions, which may in turn raise new questions. So, keep a running list of Q&A and sync everyone on the progress. This is also how you will get back online: once you've gained confidence in the systems you use to communicate with the team, and are reasonably certain those aren't breached as well.
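The running Q&A list can be as simple as a mapping from question to answer, with unanswered questions left empty. A minimal sketch, reusing the example questions above (the structure itself is hypothetical):

```python
# Running Q&A list: None marks a question still open going into the next sync-up.
qa = {
    "Who else clicked on paystub.pdf?": "No one, per IT, interviews, and logs",
    "What logs were wiped, were they anywhere else?": "Found in backups",
    "What hosts did the dropped malware talk to?": None,  # forensics answers tomorrow
    "Has anything bad happened since the last sync-up?": None,
}

open_questions = [q for q, a in qa.items() if a is None]
print(f"{len(open_questions)} open question(s) going into the next sync-up:")
for q in open_questions:
    print(" -", q)
```

Whatever tool you use, the property that matters is the same: open questions stay visible until someone closes them.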

You might not be able to answer every question. Some questions are simply impossible to answer, and the ability to answer them is a huge part of a security program and incident readiness. Better-prepared teams can answer harder questions faster than you probably can.

4. Emergency Mitigations

What do we need done, like, RIGHT NOW?

Similar to the Q&A, keep a list of accounts that need password resets, laptops that need to be wiped, keys and secrets that need rotation, IPs that need to be banned, etc. There will be tactical and strategic questions about ensuring the bad guys are expelled all at once, but those will depend on your incident. Focus this section on total removal, all at once, so the bad guys don't persist. This is one of the hard parts, requiring good technical consensus if there aren't security folks to help advise you.

Examples

- Revoke avery-admin passwords and re-issue
- Ban IP addresses associated with the malware and any remote access
- Add signatures for paystub.pdf and all dropped malware to AV
- Rotate Avery's personal passwords
- Rotate credit card processing passwords
- Patch the exploit used to escalate privilege
- Delete paystub.pdf from all employee email
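After a coordinated expulsion, you also want to verify the mitigations actually took, e.g. that nothing is still talking to banned IPs. A minimal sketch of sweeping log lines for IOC IPs (the addresses are placeholders from reserved documentation ranges, not real incident IPs):

```python
import re

# IOC IPs that were banned (placeholder addresses, not from a real incident).
BANNED_IPS = {"203.0.113.7", "198.51.100.12"}

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def lines_touching_banned_ips(log_lines):
    """Flag any log line mentioning a banned IP, so you can confirm
    the bans took effect after the all-at-once expulsion."""
    return [line for line in log_lines
            if BANNED_IPS & set(IP_RE.findall(line))]

# Hypothetical connection-log sample.
sample = [
    "10.0.0.5 -> 203.0.113.7 ESTABLISHED",   # still talking to a banned IP: bad
    "10.0.0.5 -> 192.0.2.44 ESTABLISHED",
]
for line in lines_touching_banned_ips(sample):
    print("ALERT:", line)
```

A hit from a sweep like this means the expulsion wasn't total, which feeds straight back into the timeline and the Q&A list.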

5. Long Term Mitigations

We have so much work to do.

The ideas you'll have during firefighting will be golden. You cannot let a good crisis go to waste. Keep a list of the lessons learned so they can be implemented after the fire is out.

Examples

- Certificate + two-factor for all system administration (and everything else)
- Secure and centralize logging so it's more accessible for future forensic response
- Harden endpoints against exploits (OS updates, application whitelisting, EMET, click-to-play, use Chrome, etc.)
- Improve network segmentation in production

6. Everything Else

Can someone help me write the blog post?

The communications team, lawyers, sales team, and everyone else who isn't directly contributing to the response efforts can now ask their questions so they can do their jobs. But let any folks who have been up for 48 hours leave the meeting and get their shit done. Be careful not to let outside priorities drive the response too much, as you should be focused on a comprehensive understanding of the incident and on removing your adversary.

Final Thoughts

This didn’t really cover involvement with Law Enforcement, breach notification, and a whole bunch of other painful stuff that goes along with an incident. Don’t consider this a comprehensive guide.

Stay positive because your team will be terrified. Good luck!