Here’s a look at some of the biggest mistakes you can make if (when) you get hit by an attack that results in a breach, along with some recommendations on how to better plan for the event.
Not Having a Plan (Or Having an Incomplete One)
Some organizations start with an incident response plan (IRP) and never write team procedures; others do the opposite. Either way, having a solid, documented plan ahead of time can do wonders for reducing stress, expediting recovery, and reducing downtime and its resultant costs.
It is critical that organizations have a plan, make sure everyone knows the plan (through training, tabletop exercises, or both), and ensure the plan can be executed no matter what happens (again, lean on your tabletop exercise program, or even your business continuity/disaster recovery and crisis management plans).
Plans should reflect the organization’s needs and policies on items like:
- Out-of-band communications
- Evidence archiving
- Securing evidence (chain of custody)
- Incident documentation procedures
- Regulatory, business partner, public relations, cyber insurance, or governmental/law enforcement reporting requirements and needs
- Policies on paying or negotiating ransoms
- Internal and external communications strategies (such as to employees and the public)
- Defining who can make which decisions, such as informing inside/outside legal counsel, regulators, or law enforcement, or disabling all internet traffic or critical applications/servers
Once you have a plan, make sure everyone on your CSIRT has an offline copy, accessible from home and available in the case of exigent circumstances. Organizational email and phone services may not always be available to all required parties in the case of a breach, depending on circumstances.
Don’t Panic Over Not Having a Plan
As incident responders, we are used to emotions running high. Incidents can be some of the most stressful events an IT professional will ever experience (although I’m sure some outages come close).
If you haven’t been through one before, it is important not to panic. Get hold of someone who can guide your next steps as soon as possible. Every minute spent panicking or scrambling to find assistance is time an attacker is active in your environment and evidence is being destroyed. Take a breath and engage that IRP (hopefully you have one from the step before!)
Self-care is just as important—if you or your team aren’t sleeping (a common issue in the early days of a breach), eating, or showering—you will not be as effective, and it will make things worse. Spread investigation workload around so the whole team can get rest, working in rotating shifts or engaging extra support, if needed.
It is important to know the limits of your team—even the most brilliant IT team may not understand the intricacies of a full-scale breach response.
If you do not have someone in-house (or documented procedures) for things like memory forensics, reverse engineering payloads, domain takedowns (in the case of look-alikes), or domain rebuilds (in the case of all of AD getting compromised), know who you can call (and how) to get this support at a moment’s notice. And, make sure you have access to that information offline!
That attacker you’re trying to eradicate? They just turned off all your endpoint security tools because they got elevated privileges—even easier when that user who so judiciously installed the malware had local admin rights.
A skilled attacker is going to try to 1) evade every protection you have carefully procured and 2) destroy any evidence they can. Understanding both is essential to modern incident response.
Having redundant logging, tooling, or data retention strategies can be critical when an attacker tries to hide their tracks. Although this may be a costly endeavor for some organizations, relying on any one (or a few) security tools for complete, persistent visibility into every action an attacker takes is a gamble.
Additionally, having a plan (such as pre-staged firewall rules) to isolate individual systems or sites if an endpoint isolate feature is not feasible can save large parts of the enterprise from spreading attacks should the attackers attempt to disable defenses.
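As a minimal sketch of what "pre-staged" can mean in practice, the snippet below generates iptables isolation rules for a suspect host ahead of time, so they can be reviewed and applied in seconds rather than written under pressure. The management subnet and IP addresses are hypothetical placeholders.

```python
# Sketch: pre-stage firewall isolation rules for a host, preserving
# responder access from an out-of-band management network.
# MGMT_SUBNET and all IPs are hypothetical examples.

MGMT_SUBNET = "10.0.250.0/24"  # assumed out-of-band management network

def isolation_rules(host_ip: str) -> list[str]:
    """Return iptables commands that cut a host off from the network
    while still allowing incident responders to reach it."""
    return [
        # Allow responders in from the management network first
        f"iptables -I FORWARD 1 -s {MGMT_SUBNET} -d {host_ip} -j ACCEPT",
        f"iptables -I FORWARD 2 -s {host_ip} -d {MGMT_SUBNET} -j ACCEPT",
        # Then drop everything else to and from the suspect host
        f"iptables -A FORWARD -s {host_ip} -j DROP",
        f"iptables -A FORWARD -d {host_ip} -j DROP",
    ]

for rule in isolation_rules("192.168.5.77"):
    print(rule)
```

Keeping the allow rules ahead of the drops matters: rule order determines whether your responders lock themselves out along with the attacker.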
Quarantined Malicious Files
We expect our endpoint protection software to stop all malware before it becomes a problem. However, just because it stops one payload does not mean the system is clean.
Malware infections and network intrusions almost always comprise multiple stages: loaders and droppers, for example, may have executed and infected the machine before the next stage was even launched. The user who enabled the malicious macro may just go back to that same Word file and try to “read” the document again. It’s also possible a prior, ignored attack resulted in system or user credential compromise that can give the attacker future access.
Acknowledging and understanding the different types of threats your organization faces can be critical to effectively stopping an attack at its early stages. Obligatory Sun Tzu quote: “know the enemy and know yourself.”
Many organizations assume they are impervious to fallout from a breach as long as they’re running server backups. Being able to wipe and restore a virtual server from a snapshot, for example, can be a relatively easy process, costing only a couple hours of downtime for some.
But what happens when the attacker gets admin on the management console/hypervisor? Or if the attacker was on your network longer than you have clean backups for? Or attacks a machine that you are not running backups for?
Ensure your recovery process includes cold (“air-gapped”) backups with at least 4 months of retention for critical servers and applications—the ones you really, really don’t want to rebuild from scratch if it comes down to it or cannot have offline for weeks on end.
If you do not have an EDR (endpoint detection and response) solution or are not logging system command-line along with process execution (and lineage), there is a good chance you will have no idea what an attacker or malware (especially fileless) did. How can you plan for the fallout from a data breach if you don’t even know what happened or what may have been stolen?
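To illustrate why process lineage matters, here is a minimal sketch that walks execution events back to the process that started the chain. The event records and field names are invented for the example; real sources include EDR telemetry, Sysmon Event ID 1 on Windows, or Linux auditd execve records.

```python
# Sketch: reconstruct a process chain from logged execution events.
# All events below are hypothetical examples of what such telemetry
# might contain (pid, parent pid, and command line).

events = [
    {"pid": 100, "ppid": 1,   "cmd": "explorer.exe"},
    {"pid": 204, "ppid": 100, "cmd": "WINWORD.EXE invoice.docm"},
    {"pid": 310, "ppid": 204, "cmd": "powershell.exe -enc JAB..."},
    {"pid": 415, "ppid": 310, "cmd": "rundll32.exe payload.dll,Run"},
]

def lineage(pid: int, events: list[dict]) -> list[str]:
    """Walk parent links back to the root, showing how a process was spawned."""
    by_pid = {e["pid"]: e for e in events}
    chain = []
    while pid in by_pid:
        chain.append(by_pid[pid]["cmd"])
        pid = by_pid[pid]["ppid"]
    return list(reversed(chain))

# Trace the suspicious rundll32 back to its origin:
print(" -> ".join(lineage(415, events)))
# -> explorer.exe -> WINWORD.EXE invoice.docm -> powershell.exe -enc JAB... -> rundll32.exe payload.dll,Run
```

Without the parent-pid and command-line fields being logged in the first place, this walk is impossible, and root cause stops at whatever single process tripped the alert.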
If you do have an EDR, but you manage it externally or are not forwarding anything other than its alerts to your SIEM, know how much data you have access to, for how long, and how to get access to more, if necessary.
Modern attackers often dwell in a network for several months (or there is a long gap between the initial attacker and the one who deploys the next attack component)—do you have enough data to trace root cause back 3-6 months?
Your security tools caught something—excellent! That is what you spent the budget for. But upon investigating the alert, you have no idea where the device is that corresponds to the internal IP in the alert.
If you have an active attack, that is not when you want to discover that you have devices that are unmanaged (such as operating without your antivirus or EDR software) or even worse—there is a system on the network you didn’t even know about. Now it’s beaconing out or spamming the environment with SMB packets and you have no way to locate it.
Spending the time to do asset management properly, keep rogue connections and infected personal devices off the network, and even maintain ethernet/wireless access point location data can make all the difference during an incident response.
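A minimal sketch of what that investment pays off as during triage, assuming an inventory keyed by IP (all records and hostnames here are hypothetical):

```python
# Sketch: resolve an internal IP from an alert to an owner and a
# physical location using a maintained asset inventory.
# Every record below is a hypothetical example.

ASSET_INVENTORY = {
    "10.20.4.31": {"hostname": "FIN-WS-112", "owner": "Finance",
                   "managed": True, "location": "HQ floor 3, port A-17"},
    "10.20.9.250": {"hostname": "unknown", "owner": None,
                    "managed": False, "location": None},
}

def triage_ip(ip: str) -> str:
    asset = ASSET_INVENTORY.get(ip)
    if asset is None:
        return f"{ip}: NOT IN INVENTORY - locate via switch CAM tables/DHCP logs"
    if not asset["managed"]:
        return f"{ip}: unmanaged device ({asset['hostname']}) - isolate and investigate"
    return f"{ip}: {asset['hostname']} ({asset['owner']}) at {asset['location']}"

print(triage_ip("10.20.4.31"))   # known, managed: owner and port in seconds
print(triage_ip("172.16.0.9"))   # the nightmare case: not in inventory at all
```

The second lookup is the scenario described above: an alert on an IP you cannot place, at the worst possible time.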
During an investigation, you may need to go all the way back to your oldest logs—if all you can find is malware beacon behavior, you do not have enough data to identify root cause.
You now have no idea how the attack was deployed, how long it was present, or what an attacker may have done (or exfiltrated) while on the system(s) in question. No security team wants to report to a regulator or board that they don’t know how something happened or what was stolen—logs should be kept for at least 6 months!
And while on the subject, don’t forget:
- Your web server has access/error logs that are not just for troubleshooting. If an exploit is run on these devices, these logs may be the only way to know when it happened or what exploit was run.
- That application server has an operating system underneath it—why are you not logging that layer?!
- Don’t forget about those Linux boxes!
- If you are using internet DNS servers, are not logging internal DNS requests, have no (or an incomplete deployment of) proxies in place, and are not forcing all user traffic through the firewall, how are you going to know when there’s a connection to a bad site? Furthermore, how are you going to block it effectively?
- Cloud environments are just other people’s computers and are by no means impervious to attack. How are you going to catch when someone gets onto that tenant with all your well-preserved backups or the server running a critical application and wipes everything?
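To illustrate the first point above, here is a minimal sketch that flags exploit-looking requests in web server access logs. The patterns are a tiny illustrative sample, not a production signature set, and the log lines are invented examples.

```python
import re

# Sketch: flag suspicious requests in web access logs. Patterns are a
# small illustrative sample of well-known attack strings, not exhaustive.
SUSPICIOUS = [
    re.compile(r"\.\./"),               # path traversal attempts
    re.compile(r"\$\{jndi:"),           # Log4Shell-style lookup strings
    re.compile(r"(?i)union\s+select"),  # SQL injection probes
]

def flag_suspicious(log_lines: list[str]) -> list[str]:
    """Return only the log lines matching a known-bad pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS)]

# Hypothetical access-log entries:
sample = [
    '203.0.113.7 - - [10/Oct/2023:13:55:36] "GET /index.html HTTP/1.1" 200',
    '198.51.100.2 - - [10/Oct/2023:13:56:01] "GET /../../etc/passwd HTTP/1.1" 404',
    '198.51.100.2 - - [10/Oct/2023:13:56:05] "GET /?q=${jndi:ldap://x} HTTP/1.1" 200',
]
for hit in flag_suspicious(sample):
    print(hit)
```

Even a crude pass like this only works if the access logs were retained; if they rotated away after a week, the timeline of the exploit goes with them.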
Ensure HR has a process to communicate terminated employees quickly to any teams responsible for account and asset access.
You may have a process for decommissioning devices and user accounts, but what about scripts or applications written by the terminated employee? Are there services launched under their user context on devices other than their workstation? Or, do they have access to web applications that may be external to your organization?
Having an updated network map, vendor list, and record of where these intersect is critical to protecting your enterprise from threats originating from third parties.
Always keep updated contact lists for all security partners and all parties who may need to be engaged during an incident response.
Know who “owns” each asset, especially applications, and who supports them (internally, third-parties, or hybrid models).
Like a pilot during an emergency, the best outcomes result from preparedness, which involves simulating all the ways failures can occur and attacks can unfold. Thorough and repeated training, well-developed and comprehensive procedures, and open-minded expectations that involve a great level of respect for your adversary will give you the best chance to prevail in a time of crisis.
Elise Manna-Browne is Director of Advisory Services at Novacoast, specializing in threat response, analysis, hunting, penetration testing, and intelligence. She actively participates in the infosec community, including as a speaker and a volunteer.