Many hackers set up automated software to probe websites looking for vulnerabilities. Their attacks are random and non-specific, designed to inflict damage on anyone and everyone. More sophisticated hackers set up what security experts call “strategic Web compromises” (SWC). They target particular organizations for attacks, distributing drive-by exploits through trusted websites.

Many attackers gain entry into trusted organizations using spear phishing attacks. They target an employee who has a particular level of access, steal that employee’s credentials and use them to vandalize the website. Others use a technique called Web vulnerability scanning to gain access to organization Web servers. In addition to relying on the protection provided by virtualization security solutions, one can use log analysis to detect Web vulnerability scanning in progress. Combing the logs for these six indicators can keep one from being blindsided by a drive-by attack.

1. Many Requests, Small Time Frame

Reconnaissance, according to a paper by Lockheed Martin analysts Eric Hutchins, Michael Cloppert and Rohan Amin, Ph.D., is the first step in any adversarial campaign. Within network logs, a high volume of requests from a single IP address or small group of IP addresses within a narrow time frame may indicate an attacker is probing a website for vulnerabilities. Scans often begin with links from the target organization’s home page and then work through requests for common directories and file paths. When discussing an organization victimized by SWC, The Shadowserver Foundation said that log review revealed three different reconnaissance attempts averaging 8,000 requests per attempt. One bombardment spanned 20 hours while another required only one hour.
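As a rough sketch, bursts like these can be surfaced with a sliding-window count of requests per client IP. The function name, threshold and window size below are illustrative choices, not values taken from any particular tool:

```python
from collections import deque, defaultdict
from datetime import datetime, timedelta

def flag_burst_ips(entries, threshold=100, window=timedelta(minutes=10)):
    """Flag IPs that make >= threshold requests inside any sliding window.

    entries: iterable of (ip, datetime) tuples, assumed sorted by timestamp.
    """
    recent = defaultdict(deque)  # ip -> timestamps still inside the window
    flagged = set()
    for ip, ts in entries:
        q = recent[ip]
        q.append(ts)
        # Drop timestamps that have fallen out of the window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(ip)
    return flagged
```

Running this over parsed access-log entries yields the small set of IPs worth investigating first, rather than a raw request list.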

2. Large Numbers of 404s Generated

When a high volume of requests from just a few IP addresses within a small time frame also generates a large number of 404s, one should take notice. The 404 logs can reveal the different ways attackers try to reach database, admin and login pages by brute force. In many cases, these 404s are generated by blind attacks on applications within their default directories.
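One way to spot this pattern is to tally 404 responses per client IP from combined-format access logs. The regex below is a minimal sketch that covers only the fields needed here, not a full log parser:

```python
import re
from collections import Counter

# Matches: client IP, two ident fields, [timestamp], "METHOD PATH PROTO", status.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

def count_404s(lines):
    """Tally 404 responses per client IP from combined-format log lines."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(4) == "404":
            hits[m.group(1)] += 1
    return hits
```

Sorting the resulting counter (`hits.most_common()`) puts the noisiest brute-forcers at the top.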

3. Files Accessed With Commands Issued Via URI Parameters

Web applications typically expose URI endpoints so they can be accessed and interacted with using a browser. Unfortunately, hackers often probe those URIs to find applications that have code flaws or are vulnerable to abuse. Attackers can also use URI parameters for command injection, which could redirect website visitors to malicious URLs and cause applications to execute malicious code.
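Screening URI parameters for shell metacharacters and command names can catch many of these probes. The pattern below is a small, hypothetical sample for illustration; production rulesets (such as the OWASP ModSecurity Core Rule Set) are far more extensive:

```python
import re
from urllib.parse import urlsplit, parse_qsl

# Illustrative indicators only: command separators, subshells, common tools.
SUSPICIOUS = re.compile(r'(;|\|\||&&|`|\$\(|\b(wget|curl|nc|/bin/sh)\b)')

def suspicious_params(uri):
    """Return (name, value) pairs whose value looks like command injection."""
    query = urlsplit(uri).query
    return [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if SUSPICIOUS.search(v)]
```

Note that `parse_qsl` URL-decodes values first, so encoded payloads such as `%3Bwget` are caught as well.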

4. Unusual POST Requests

When you notice POST requests in your logs either to files that don’t accept POST data or to files that you don’t recognize, assume an attacker is trying to vandalize the website. Also, as in the case of the Pushdo Trojan, attackers could flood your servers with POST requests and then conceal malicious code within the tsunami. Another indicator of Pushdo is a series of POST requests against the root domain or home directory.
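Both symptoms can be checked mechanically: compare POST targets against an allow-list of paths that legitimately accept POST data, and count POSTs per path to surface floods against the root. `allowed_post_paths` is an assumed, site-specific list, not something the logs themselves provide:

```python
from collections import Counter

def unusual_posts(entries, allowed_post_paths):
    """Flag POSTs to paths not on the allow-list and count POSTs per path.

    entries: iterable of (method, path) tuples parsed from an access log.
    Returns (flagged_paths, post_counts); a large count against "/" can
    indicate a Pushdo-style flood of the root domain.
    """
    flagged = []
    post_counts = Counter()
    for method, path in entries:
        if method != "POST":
            continue
        post_counts[path] += 1
        if path not in allowed_post_paths:
            flagged.append(path)
    return flagged, post_counts
```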

5. Requests for Known Web Shells

Web shells are lines of executable code that run on servers, functioning similarly to remote access Trojans (RAT) or backdoor Trojans. Most Web shells make their way onto a server through SQL injection (SQLi) or remote file inclusion through vulnerable Web applications. One of the most common Web shells, according to Akamai, is the c99madshell, or c99.php, which is 1,550 lines long. Other simple one-line shells can do something as simple — and as dangerous — as process commands sent through the “cmd” variable, giving the hacker remote system access.
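Requests for known shell filenames, or requests carrying a "cmd" parameter, can be screened roughly as below. The filename list is illustrative only; real shells go by many names, so this is a starting point rather than a complete signature set:

```python
from urllib.parse import urlsplit, parse_qsl

# Illustrative sample of well-known web shell filenames.
SHELL_NAMES = {"c99.php", "r57.php", "shell.php", "b374k.php"}

def webshell_hits(paths):
    """Flag requested URIs matching known shell names or a 'cmd' parameter."""
    hits = []
    for p in paths:
        parts = urlsplit(p)
        name = parts.path.rsplit("/", 1)[-1].lower()
        params = dict(parse_qsl(parts.query))
        if name in SHELL_NAMES or "cmd" in params:
            hits.append(p)
    return hits
```

Any hit here deserves immediate attention, since it suggests a shell is already planted rather than merely being probed for.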

6. Requests Attempting to Exploit XSS and SQLi Vulnerabilities

Cross-site scripting (XSS) attacks embed malicious JavaScript within a URL that points to a vulnerable part of the website, allowing attackers to execute malicious scripts. SQLi allows an attacker to inject SQL commands into Web page content like login fields or contact forms to bypass the form and execute commands. It’s often tough to catch these attacks in action while minimizing false positives. However, the presence of large numbers of XSS and SQLi-like requests in logs indicates that the website is being probed, potentially for a SWC.
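A crude heuristic classifier for XSS- and SQLi-like request strings might look like the following. The regexes are simplified examples and would need substantial tuning to keep false positives manageable, which is exactly the difficulty the paragraph above describes:

```python
import re
from urllib.parse import unquote_plus

# Simplified indicator patterns for illustration only.
XSS_RE = re.compile(r'<script|onerror\s*=|javascript:', re.IGNORECASE)
SQLI_RE = re.compile(
    r"(\bunion\b.+\bselect\b|\bor\b\s+1\s*=\s*1|--\s*$|'\s*or\s*')",
    re.IGNORECASE)

def classify_request(uri):
    """Label a request URI as 'xss', 'sqli', or None after URL-decoding."""
    decoded = unquote_plus(uri)
    if XSS_RE.search(decoded):
        return "xss"
    if SQLI_RE.search(decoded):
        return "sqli"
    return None
```

Decoding first matters: scanners routinely percent-encode payloads, and matching the raw URI would miss them.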

Efficient Log Analysis

Web log analysis can suck a great deal of time out of the workday, and many IT workers would prefer to avoid it. However, it’s one of the best ways to catch vulnerability scanning in progress and to protect websites from SWCs.
