Preface

To be clear, this article’s intention is to focus on the ‘why’ rather than the ‘how’. There are countless videos and tutorials out there explaining how to use these tools, and far more information than can fit in one blog post. Additionally, I acknowledge that other testers may have alternate opinions on these tools and which are the most useful. This list is not exhaustive. If you have a different opinion than what is described in the article, I would love to hear it and potentially post about it in the future! Feel free to comment below, or shoot me an email, tweet, whatever. I am pretty receptive to feedback.

With that being said, let’s get into the list.

1. Responder

This tool, in my opinion, makes the absolute top of the list. When an auditor comes in and talks about “least functionality”, this is what comes immediately to mind. If you are a pentester, Responder is likely the first tool you will start running as soon as you get your Linux distro-of-choice connected to the network and kick off the internal penetration test. Why is it so valuable, you might ask? The tool functions by listening for requests from the following protocols and poisoning the responses:

LLMNR (Link-Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
WPAD (Web Proxy Auto-Discovery)

There is more to Responder, but I will only focus on these three protocols for this article.

As a sysadmin, you may find these protocols vaguely familiar, though you can’t quite recall from where. You may have seen them referenced in a Microsoft training book that has long since become irrelevant, or, depending on how long you have been in the game, remember actually making use of one of them.

NBT-NS is a remnant of the past; a protocol which has been left enabled by Microsoft for legacy/compatibility reasons to allow applications which relied on NetBIOS to operate over TCP/IP networks. LLMNR is a protocol designed similarly to DNS, and relies on multicast and peer-to-peer communications for name resolution. It came from the Vista era, and we all know nothing good came from that time-frame. You probably don’t even use either of these, but under the hood they are spraying packets all across your network. Attackers (real or simulated) know this, and use it to their advantage.
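To make the weakness concrete, here is a minimal Python sketch of the LLMNR wire format (per RFC 4795, it reuses the DNS message layout). The functions and hostname below are my own illustration, not Responder’s code; the point is that the query goes out to anyone listening on the multicast group, and nothing in the packet authenticates whoever answers:

```python
import struct

LLMNR_GROUP = ("224.0.0.252", 5355)  # IPv4 multicast group and UDP port LLMNR uses

def build_llmnr_query(hostname: str, txid: int = 0x1337) -> bytes:
    """Build a minimal LLMNR query (same wire format as a DNS query)."""
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)  # id, flags, 1 question
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def parse_queried_name(packet: bytes) -> str:
    """Extract the hostname being asked for -- exactly what a poisoner reads
    before answering with its own IP."""
    labels, i = [], 12  # skip the 12-byte header
    while packet[i] != 0:
        length = packet[i]
        labels.append(packet[i + 1 : i + 1 + length].decode("ascii"))
        i += 1 + length
    return ".".join(labels)
```

Responder’s poisoning step is essentially just answering such a query with the attacker’s IP before (or instead of) a legitimate host.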

WPAD, on the other hand, serves a very real and noticeable purpose on the network. Most enterprise networks use a proxy auto-config (PAC) file to control how hosts get out to the Internet, and WPAD makes distributing it relatively easy. Machines that cannot locate the WPAD file through DNS fall back to broadcasting out into the network, and accept the PAC from whoever answers. This is where the poisoning happens.

People in cybersecurity are aware that most protocols which rely on any form of broadcasting and multicasting are ripe for exploitation. One of the best cases here from an attacker’s perspective is to snag credentials off the network by cracking hashes taken from the handshakes these protocols initiate (or replaying them).

Sysadmins are more inclined to focus on whether or not the system works before getting dragged off to another task on a never-ending list of things to do. Fortunately for the overwhelmed, the mitigations are straightforward, and revolve around disabling the protocols. Use safer methods to propagate your web proxy’s PAC file location, like through Group Policy. I know it is tempting to “Automatically detect settings”, but try to avoid it. Test thoroughly in the event you are still supporting that NT Server from the 90s hosting that mission critical application though… Just kidding, no one is hosting an NT server nowadays…right?
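For reference, the usual knobs are the “Turn off multicast name resolution” Group Policy setting (for LLMNR) and the per-adapter NetBIOS setting (for NBT-NS). A sketch of the underlying registry values, worth verifying against current Microsoft documentation before you push anything via GPO:

```
; LLMNR -- the "Turn off multicast name resolution" policy
HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient
    EnableMulticast = 0 (REG_DWORD)

; NBT-NS -- disable NetBIOS over TCP/IP, set per network interface
HKLM\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces\Tcpip_{interface GUID}
    NetbiosOptions = 2 (REG_DWORD)
```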

2. PowerShell Empire

Before Empire hit the scene, pentesters typically relied on Command and Control (C2) infrastructure where the agent first had to reside on-disk, which naturally would get uploaded to VirusTotal upon public release and be included in the next morning’s antivirus definitions. Of course, ways to get around antivirus existed, but it was always an extra step, and any solution that leveraged behavior-based detection would give you extra headaches. The time spent evading detection was a seemingly never-ending cat-and-mouse game. It was at that moment that brilliant white knight(s) crossed the horizon and said, “let there be file-less agents!”. And just like that, the game was changed.

It was as if the collective unconscious of pentesters everywhere came to the realization that the most powerful tool at their disposal was already present on most modern workstations around the world. A framework had to be built, and the Empire team made it so.

All theatrics aside, the focus of pentesting frameworks and attack tools has undoubtedly shifted towards PowerShell for exploitation and post-exploitation. This applies not only to penetration testers, but also to real attackers, who are drawn to the file-less approach by its high success rate.

Sysadmins, what does this mean for you? Well, it means that some of the security controls you have put in place may be easily bypassed. File-less agents (including malware) can be deployed by PowerShell and exist purely in memory, without ever touching your hard disk or requiring a connected USB device (although that mode of entry is still available, as I will explain later). More and more malware exists solely in memory rather than being launched from an executable sitting on your hard disk. A write-up mentioned at the end of this post elaborates much further on this topic. Existing in memory makes antivirus products whose core function is scanning the disk significantly less effective. The focus instead shifts to attempting to catch the initial infection vector (very commonly Word/Excel files with macros), which can be ever-changing.

Fortunately, the best mitigation here is something you may already have access to — Microsoft’s AppLocker (or the application whitelisting tool of your choice). Granted, whitelisting can take some time to stand up properly and likely requires executive sign-off, but it is the direction endpoint security is heading. This is a good opportunity to get ahead of the curve. I would ask that you also think about how you use PowerShell in your environment. Can normal users launch PowerShell? If so, why?

When it comes to mitigation, let me save you the effort; the execution policy restrictions in PowerShell are trivial to bypass (see the “-ExecutionPolicy Bypass” flag).

3. Hashcat (with Wordlists)

This combo right here is an absolute staple. Cracking hashes and recovering passwords is a pretty straightforward topic at a high level, so I won’t spend a whole lot of time on it.

Hashcat is a GPU-focused powerhouse of a hash cracker which supports a huge variety of formats, and it is typically used in conjunction with hashes captured by Responder. In addition to Hashcat, a USB hard drive with several gigs of wordlists is a must. On every pentest I have been on, time had to be allocated appropriately to maximize results and provide the most value to the client, which made wordlists vital compared to pure brute-force approaches. It is true that, given enough time and resources, any hash can be cracked, but keep things realistic.

Pentesters, if you are going in with just one laptop, you are doing it wrong. Petition for a second one; something beefy with a nice GPU.

Sysadmins, think about your baseline policies and configurations. Do you align with one? Typically it is best practice to align as closely as possible with an industry standard, such as the infamous DISA STIG. Baselines such as the DISA STIGs cover numerous operating systems and software packages, and contain some key configurations to help you protect against offline password cracking and replay attacks. This includes enforcing NIST-recommended password policies, non-default authentication enhancements, and much more. DISA even does you the courtesy of providing pre-built Group Policy templates that can be imported and custom-tailored to your organization’s needs, which cuts out much of the work of applying the settings (with the exception of testing). Do you still use NTLMv1 on your network? Do you have a password requirement of 8 characters or fewer? Know that you are especially vulnerable.
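Back-of-the-envelope math shows why the length requirement matters so much. The crack rate below is my own assumed figure for a modest GPU rig, purely illustrative, not a benchmark:

```python
def keyspace(length: int, charset_size: int = 95) -> int:
    """Candidate count for a full brute-force over printable ASCII."""
    return charset_size ** length

def days_to_exhaust(length: int, hashes_per_second: float) -> float:
    """Worst-case days to try every candidate at a given crack rate."""
    return keyspace(length) / hashes_per_second / 86400

# Assuming ~1 billion guesses/second (hardware-dependent, illustrative only):
# an 8-character keyspace falls within a few months, while 12 characters
# stretches into millions of years.
```

That gap, not any cleverness in the hash itself, is what makes an 8-character policy an open invitation to offline cracking.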

4. Web Penetration Testing Tools

To the pentesters out there, I am likely preaching to the choir. To everyone else, it is important to note that a web penetration testing tool is not the same as a vulnerability scanner.

Web-focused tools absolutely have scanning capabilities, but they focus on the application layer of a website rather than the service or protocol level. Granted, vulnerability scanners (Nessus, Nexpose, Retina, etc.) do have web application scanning capabilities, though I have found it is best to keep the two separate. Use a web-focused tool for testing your intranet or extranet page, and let the vulnerability scanners keep doing their blanket assessments of your ports, protocols, and services.

That being said, let’s examine what we are trying to identify. Many organizations nowadays build in-house web apps, intranet sites, and reporting systems in the form of web applications. Typically the assumption is that since the site is internal it does not need to be run through the security code review process, and gets published out for all personnel to see and use. With a focus on publishing the code versus making sure it is safe and secure, vulnerabilities present themselves that may be ripe for exploitation.
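The classic example of what a code review catches is query construction. Here is a minimal Python sketch using an in-memory SQLite table (the table and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the query.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A payload like "x' OR '1'='1" makes the unsafe version return every row;
# the parameterized version returns nothing.
```

Injection like this is exactly the kind of flaw that thrives in internal apps that skipped the review process.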

Personally, my process during a pentest would begin with attempting to find low-hanging fruit such as misconfigured services or unpatched hosts on the network. If that failed to produce a significant finding, it was time to switch it up and move to the web applications. The surface area of most websites leaves a lot of room to find something especially compromising; my favorites for demonstrating major issues are the classics from the OWASP Top 10, such as injection and cross-site scripting.

If you administer an organization that builds or maintains any internal web applications, think about whether that code is being reviewed regularly. Code reuse becomes an issue when source code is imported from unknown origins, and any security flaws or potentially malicious functions come along with it. Furthermore, the “Always Be Shipping” methodology which has overtaken software development of late puts all of the emphasis on shipping functional code, even when flaws may exist.

Acquaint yourself with OWASP, whose entire focus is on secure application development. Get familiar with the development team’s Software Development Lifecycle (SDLC) and see if security testing is a part of it. OWASP has some tips to help you make recommendations.

Understand the two methodologies for testing applications:

Static Application Security Testing (SAST) — Analyzes the application’s source code, which must be available for review.

Dynamic Application Security Testing (DAST) — Analyzes the application while it is running, in an operational state.

Additionally, you will want to take the time to treat your web applications as separate from typical vulnerability scans. Tools (open and closed source) exist, including Burp Suite Pro, OWASP Zed Attack Proxy (ZAP), Acunetix, and Trustwave, with scanning functionality that will crawl and simulate attacks against your web applications. Scan your web apps at least quarterly.

5. Arpspoof and Wireshark

This combination of tools exemplifies what I mean by getting back to basics.

Arpspoof is a tool that allows you to insert yourself between a target and its gateway, and Wireshark allows you to capture packets from an interface for analysis. You redirect the traffic from an arbitrary target, such as an employee’s workstation during a pentest, and snoop on it. Sometimes just creeping on communications and seeing what a host reaches out to is enough to capture some cleartext data that blows the whole test wide open.

Likely the first theoretical attack presented to those in cybersecurity, the infamous Man-in-the-Middle (MitM) attack is still effective on modern networks. Considering most of the world still leans on IPv4 for internal networking (and likely will for a good long while), and the way that the Address Resolution Protocol (ARP) has been designed, a traditional MitM attack is still quite relevant.
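The reason this still works is visible right in the packet format. Below is a minimal Python sketch of a spoofed ARP reply frame (the addresses are invented, and the code only builds bytes; it does not send anything):

```python
import struct

def arp_reply(sender_mac: bytes, sender_ip: bytes,
              target_mac: bytes, target_ip: bytes) -> bytes:
    """Build an Ethernet frame carrying an ARP reply that claims
    sender_ip lives at sender_mac. Note what is missing: there is no
    signature or authentication field anywhere in the packet."""
    eth = target_mac + sender_mac + b"\x08\x06"      # dst MAC, src MAC, EtherType ARP
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)  # Ethernet, IPv4, opcode 2 = reply
    arp += sender_mac + sender_ip + target_mac + target_ip
    return eth + arp

# Invented example addresses: the attacker claims the gateway's IP.
attacker_mac = bytes.fromhex("deadbeef0001")
gateway_ip = bytes([192, 168, 1, 1])
victim_mac = bytes.fromhex("aabbccddeeff")
victim_ip = bytes([192, 168, 1, 50])
frame = arp_reply(attacker_mac, gateway_ip, victim_mac, victim_ip)
```

Arpspoof automates exactly this, re-sending such replies on an interval (e.g. `arpspoof -i eth0 -t <victim> <gateway>`) so the victim’s ARP cache keeps pointing at the attacker.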

Many falsely assume that because communications occur inside their own networks, they are safe from being snooped on by an adversary and therefore do not have to take the performance hit of encrypting all communications in their own subnets. Granted, your network is an enclave of sorts from the wild west of the Internet, and an attacker would first have to get into your network to stand between communications.

In that same vein, let’s assume that a workstation is compromised by an attacker in another country using a RAT equipped with tools that allow a MitM attack to take place. Alternatively (and possibly more realistically), consider the insider threat. What if that semi-technical employee with a poor attitude started watching script-kiddie tutorials on YouTube, and wanted to creep on the receptionist they suddenly became unusually fond of? The insider threat scenario is especially relevant if your organization’s executives are well-known and very wealthy. Never forget that envy makes people do stupid and reckless things.

Now, let’s talk about defense. Encrypt your communications. Yes, internally as well. Never assume communications inside your network are safe just because there is a gateway device separating you from the Internet. All client/server software should be encrypting their communication, period.

Keep your VLAN segments carefully tailored, and protect your network from unauthenticated devices. Adding a Network Access Control (NAC) system to your security roadmap, or implementing 802.1X on your network, may be a good idea in the near future. Shut down those unused ports, and think about sticky MACs if you are on a budget.

Did the idea of testing out an IDS ever pique your interest? It just so happens that there are open source options available, including Security Onion, with which to test and demonstrate effectiveness. IDS rules can identify anomalous network activity, such as the telltale signs of an attempted ARP poisoning attack. Give it a whirl if you have a spare box laying around (and approval, of course). Honeypot systems may also be a great idea for a trial run, and there are a number of open source options available; take a look at Honeyd.
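The detection logic itself is simple enough to sketch. This is my own illustrative Python, not a Security Onion rule, but it captures the heuristic such tools key on: the same IP suddenly advertising a new MAC.

```python
def watch_arp(observations):
    """Track IP-to-MAC bindings seen on the wire and flag any IP whose
    MAC changes -- the telltale signature of ARP cache poisoning."""
    table = {}   # ip -> last MAC seen
    alerts = []  # (ip, old_mac, new_mac)
    for ip, mac in observations:
        if ip in table and table[ip] != mac:
            alerts.append((ip, table[ip], mac))
        table[ip] = mac
    return alerts
```

A real deployment would feed this from live captures and account for legitimate changes (DHCP churn, NIC replacements) before alerting.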