Do not get hung up on the phrase: “Reasons you fail a pen test.”
There’s only one way to fail a pen test: to not do it.
Many less mature companies have used that excuse: “oh, we know our environment is messed up, therefore there’s no point in running a pen test.”
For companies like that I tend to ask: how long have you been postponing the pen test? The longer the delay, the worse. Pen testing can help organizations realistically prioritize steps to protect themselves against different threats. It can also help security teams secure budget for new security initiatives.
This article is a postmortem (“after death”) review of typical issues we see on pen tests that lead to Domain Admin compromise. For context, the focus is on external and internal scope penetration tests.
Cause of Death 1: Lack of Monitoring Visibility
As pen testers, more often than not, we are flying under the radar. Generally this is because:
- The customer doesn’t have any monitoring whatsoever.
- Monitoring is enabled, but no one looks at the logs.
- Monitoring is enabled, but logs are reviewed only once every week or two.
- Monitoring is enabled and logs are being reviewed, but not across the entire environment. Compromise then arrives via a route the team didn’t expect, and they realize far too late that something happened. Real-time monitoring is critical, especially against ransomware attacks.
- Some combination of the above.
Instead of using monitoring merely as an auditing tool for retrospective incident response, organizations should have alarms, policies, and processes in place to quickly isolate machines or otherwise react.
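As a toy illustration of what "alarms instead of audit logs" means, here is a minimal sketch of a detection rule: flag any host with a burst of failed logins inside a short window. The event format, threshold, and window are assumptions for the example, not output of any particular SIEM.

```python
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 5     # alert after 5 failures...
WINDOW = timedelta(minutes=2)  # ...within a 2-minute window

def failed_login_alerts(events):
    """events: iterable of (timestamp, host, outcome) tuples.
    Returns the set of hosts whose failed-login count exceeds the
    threshold inside any sliding window -- a stand-in for a real
    SIEM correlation rule."""
    failures = defaultdict(list)
    for ts, host, outcome in events:
        if outcome == "failure":
            failures[host].append(ts)
    alerts = set()
    for host, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            # count failures falling inside [start, start + WINDOW]
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) >= FAILED_LOGIN_THRESHOLD:
                alerts.add(host)
                break
    return alerts
```

A real deployment would stream events rather than batch them and would trigger an isolation playbook instead of returning a set, but the point stands: the rule fires in minutes, not at the next log review.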
Also, this issue tends to be why all the other items discussed below work so well!
Cause of Death 2: Subpar Or Irregular Endpoint Protection Coverage
Somewhat similar to the above point, many customers have improper Endpoint Protection, allowing attackers to gain footholds on machines in their environments. Most commonly:
- The customer has a solution deployed, but it is misconfigured, outdated, or relies on default AV protection that can be trivially disabled (e.g., if the attacker RDPs into the system) or that simply doesn’t catch typical payloads.
- The customer simply does not have any Endpoint Protection. Surprising, but it still happens.
- The customer has a patchwork of Endpoint Protection. This is very common as organizations migrate or try out different technologies, then abandon them.
Having strong, properly configured Endpoint Protection makes obtaining footholds harder and helps alert when irregular or malicious behavior occurs. This is considered an active defense versus a reactive one.
Cause of Death 3: Misconfiguration vs. Vulnerabilities
Customers have gotten better at patching and reducing vulnerabilities in their systems. However, it is not uncommon for them to disregard the “medium” and “low” risk items reported by their vulnerability scanner as low priority. In reality, many of these “lower risk” vulnerabilities can be effectively daisy-chained to achieve local or even full domain compromise.
For example, Windows environments left at default settings generally have several hosts that do not require SMB signing and that prefer IPv6 DNS over IPv4. Both of these issues are rated just “medium” risk by several vulnerability management solutions, but combined they form an extremely common attack vector that can lead to domain admin compromise.
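The triage logic is simple enough to sketch: instead of ranking findings individually, flag hosts where two “medium” findings combine into a known relay path. The finding identifiers and the host-to-findings input format below are assumptions for illustration, not the output of any specific scanner.

```python
# Two individually "medium" findings that, together, enable the classic
# IPv6 DNS takeover + NTLM relay path to hosts without SMB signing.
RELAY_CHAIN = {"smb-signing-not-required", "ipv6-dns-preferred"}

def relay_candidates(findings):
    """findings: dict mapping host -> set of finding IDs.
    Returns the hosts that expose the full relay chain and therefore
    deserve 'high' treatment despite two 'medium' scanner scores."""
    return {host for host, ids in findings.items()
            if RELAY_CHAIN <= ids}  # chain is a subset of host findings
```

The same pattern generalizes: maintain a short list of known daisy-chains and re-score any host that matches one, rather than trusting per-finding severities in isolation.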
It’s a common misconception that only high or critical vulnerabilities can result in compromise. A customer may say: “I see no critical items in our vulnerability scanner,” or “everything is patched and up-to-date.” That’s awesome, but it’s only half the story. Misconfigurations are trickier to spot, and sometimes more time-consuming to fix, which is why they are missed or ignored.
There are solutions specifically designed to find configuration vulnerabilities, such as Gyptol Validator.
Cause of Death 4: Issues with Identity & Access Management
This category of issues is quite broad so we’re breaking it out into several pieces.
Password Reuse
It’s common for organizations to have “default” or “go-to” passwords. In our pen test experience, most organizations have at a minimum 10% of their users reusing the same password. This is a very easy way for a tester to escalate privileges and gain access to further hosts/accounts.
As an added bonus, we’ve also found instances of users who had their organization email affected by third-party credential breaches (e.g. LinkedIn, Adobe, Dropbox) which allowed us to perform successful credential stuffing.
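One quick way to quantify the reuse problem is to measure it directly from a credential dump (e.g., after a DCSync during an internal test): count how many users share a hash with someone else. The input format below is an assumption for the sketch.

```python
from collections import Counter

def reuse_ratio(hash_dump):
    """hash_dump: dict mapping username -> password hash (format-agnostic,
    e.g. NT hashes from a domain dump). Returns the fraction of users who
    share their hash with at least one other user -- identical hashes
    mean identical passwords."""
    counts = Counter(hash_dump.values())
    shared = sum(1 for h in hash_dump.values() if counts[h] > 1)
    return shared / len(hash_dump) if hash_dump else 0.0
```

Defenders can run the same measurement against their own directory to track whether the reuse rate is trending down after awareness campaigns or policy changes.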
Default Passwords
Use of default passwords commonly affects appliances such as network switches, cameras, remote access controllers, printers, etc. The risk varies depending on the appliance, but we have seen several critical appliances using default credentials. Attackers can leverage these footholds later to gain access to sensitive data.
One example is iDRAC (integrated Dell Remote Access Controller) access, which grants low-level access to modify the underlying host. Retaining the default password on a remote access service is extremely risky.
Easily Guessable Passwords
This one is pretty self-explanatory. Many organizations reset default passwords to something like “Welcome1” or “Spring2022”. It should go without saying that this allows trivial access to hosts and sensitive data. It is unfortunate to see an otherwise solid environment fall to something this trivial.
NIST/OWASP generally recommend screening passwords against a regularly updated list of weak/leaked passwords to ensure strong ones are being used.
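That screening step can be sketched in a few lines. The denylist here is a tiny in-memory stand-in; a production system would check candidates against a regularly updated breach corpus (such as a Have I Been Pwned download), and the minimum length is an assumption in line with common NIST SP 800-63B readings.

```python
# Tiny illustrative denylist; real deployments screen against a large,
# regularly refreshed list of breached and commonly used passwords.
DENYLIST = {"welcome1", "spring2022", "password", "123456"}

def is_acceptable(password, denylist=DENYLIST, min_length=8):
    """Reject passwords that are too short or appear (case-insensitively)
    on the weak/leaked-password list."""
    if len(password) < min_length:
        return False
    return password.lower() not in denylist
```

Wiring this check into the reset flow directly kills the “Welcome1” class of finding, because the help desk physically cannot hand out a denylisted password.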
Strict Password Rotation + No Password Manager = Sticky Notes
This is an interesting one since there’s some irony involved. Studies have shown that strict password rotations for users generally lead to the use of weak passwords, context-specific passwords, or passwords similar to their previous password.
Environments that have strict password rotation policies (e.g., 90 days) and don’t use password managers typically fall back on the insecure practice of sticky notes, Excel sheets, or plaintext files full of passwords saved on the desktop. The more services and passwords users need to keep track of, the more likely they are to resort to sloppy practices. And even when the sticky-note passwords we find aren’t current, it’s usually just a matter of slightly permuting them to get the right one (e.g., ManSpider2021 -> ManSpider2022).
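The permutation trick at the end is mechanical enough to sketch: given a stale password containing a year, generate the guesses a tester would try first. This is an illustrative toy, not a tool we use.

```python
import re

def year_permutations(password, years=2):
    """Sketch of the trivial mutation applied to stale rotated passwords:
    bump any embedded 4-digit year forward by 1..years
    (ManSpider2021 -> ManSpider2022, ManSpider2023, ...)."""
    match = re.search(r"(19|20)\d{2}", password)
    if not match:
        return []  # no year to bump; other mutations would apply
    year = int(match.group(0))
    return [password.replace(match.group(0), str(year + i))
            for i in range(1, years + 1)]
```

This is exactly why rotation without a password manager adds little security: the “new” password is usually a predictable function of the old one.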
Cause of Death 5: Social Engineering the Help Desk
Social engineering is generally considered the “cheap” way of breaking in because it is so commonplace. Most companies that run social engineering tests (e.g., phishing) target the overall employee population. However, that may give only a limited picture of how well employees are doing.
We have found that adding vishing (voice calls) and smishing (SMS texts) greatly increases the chances of a successful compromise. Even more interesting, we’ve found that targeting service desks tends to be a sure thing for access, sometimes yielding not only password resets but also MFA token re-assignments within a single interaction. People are just happy to help.
Cause of Death 6: Lack of Proper Segmentation
Large organizations tend to have a variety of environments, some more critical or vulnerable than others.
It is not uncommon for an organization to have flat or improperly segmented networks that allow attackers to gain footholds in the easier segments, steal accounts, and then traverse to the juicier bits.
A typical example is a dev environment: unmanaged, riddled with outdated or vulnerable software, yet not segmented away from domain controllers or other critical segments.
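One way to make segmentation testable is to compare observed flows against an explicit policy of allowed segment-to-segment paths. The segment names, policy, and flow format below are all illustrative assumptions.

```python
# Intended policy: which (source segment, destination segment) pairs
# are allowed. Anything else crossing a boundary is a violation.
ALLOWED = {
    ("dev", "dev"),
    ("corp", "corp"),
    ("corp", "dc"),  # only corp workstations should reach domain controllers
}

def segmentation_violations(flows, zone_of):
    """flows: iterable of (src_host, dst_host) pairs observed on the wire.
    zone_of: dict mapping host -> segment name.
    Returns the flows that cross a boundary the policy does not allow."""
    return [(src, dst) for src, dst in flows
            if (zone_of[src], zone_of[dst]) not in ALLOWED]
```

Run against real flow logs (NetFlow, firewall logs), a check like this surfaces the dev-to-DC paths long before a tester finds them.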
Cause of Death 7: Lack of Holistic Testing
In the last few years, organizations have picked up the good habit of testing their environments (at the very least external and internal tests) on a regular basis (at least yearly).
That’s fantastic, but upon talking to customers we sometimes find out there are other environments/elements in their business such as custom web apps, mobile apps, cloud environments, etc. that are overlooked.
In a perfect world everyone would have the time and budget to test everything all the time. The next best thing is to review what types of environments exist in the organization, understand which are potential blind spots, and then work toward assessing them to establish at least a security baseline for these items.
About The Author
Pedro M. Sosa leads the Novacoast Attack Team (NCAT), a group of white hat hackers who uncover new vulnerabilities and security flaws before the bad guys do. His group performs penetration tests, red teaming engagements, and other security assessments spanning external & internal infrastructure, cloud, IoT, web apps, DDoS stress tests, social engineering, physical security, and secure source code reviews, among others.
He also focuses on researching and developing new tools and applications to assist security researchers worldwide. He holds a Master’s degree in Post-Quantum Cryptography from the University of California, Santa Barbara.