To work on the Incapsula team at Imperva is to be exposed to distributed denial of service (DDoS) attacks all of the time. From watching 100 Gbps assaults making waves on computer screens around the office, to having our inboxes bombarded with reports of mitigated assaults, DDoS is just another part of our awesome daily routine.

Yet, every once in a while an attack stands out that makes us really take notice. These are the ones we email each other screenshots of, discuss with the media and write about in our blog.

Often, these assaults are canaries in a coal mine for emerging attack trends. It’s one of these canaries that I want to talk about here—an attack that challenges the way we think about application layer DDoS protection.

A bit about application layer DDoS attacks

Broadly speaking, layer 7 (application layer) DDoS attacks are attempts to exhaust server resources (e.g., RAM and CPU) by initiating a large number of processing tasks with a slew of HTTP requests.

In the context of this post it should be mentioned that, while deadly to servers, application layer attacks are not especially large in volume. Nor do they have to be: many application owners provision for only about 100 requests per second (RPS), meaning even small attacks can cripple unprotected servers.

Moreover, even at extremely high RPS rates—and we have seen attacks as high as 268,000 RPS—the bandwidth footprint of application layer attacks is usually low, as the packet size for each request tends to be no larger than a few hundred bytes.

Consequently, even the largest application layer attacks fall well below 500 Mbps. This is why some security vendors and architects claim it is safe to counter them with filtering solutions that don't offer additional scalability.
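To put numbers on this, the bandwidth footprint of a request flood is just rate times request size. A quick back-of-the-envelope calculation (the 200-byte request size is an illustrative assumption, not a measured figure):

```python
def flood_bandwidth_gbps(rps: int, request_bytes: int) -> float:
    """Approximate bandwidth of an HTTP flood in gigabits per second."""
    return rps * request_bytes * 8 / 1e9

# A typical application layer flood: a huge request rate, but tiny requests.
typical = flood_bandwidth_gbps(rps=268_000, request_bytes=200)
print(f"268,000 RPS at ~200 bytes/request: {typical:.2f} Gbps")  # ~0.43 Gbps
```

Even at a record request rate, a few hundred bytes per request keeps the flood under half a gigabit, which is exactly the assumption the attack described next broke.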

A ginormous HTTP POST flood

The attack that challenged this theory occurred a few weeks ago, when one of our clients, a China-based lottery website, was the target of an HTTP POST flood attack that peaked at 163,000 RPS.

Attack traffic in RPS (requests per second)

As significant as this request count was, the real surprise came when we realized that the assault was also consuming bandwidth at 8.7 gigabits per second (!)—a record for an application layer attack and definitely the largest we had ever seen or even heard about up until that point.

Attack traffic in Gbps (gigabits per second)

Looking to understand how an application layer attack could reach such heights, we inspected the malicious POST requests. What we found was a script that randomly generated large files and attempted to upload (POST) them to the server.

By doing so, the perpetrators were able to create a ginormous HTTP flood consisting of requests with extremely large Content-Length headers. These appeared legitimate up until the TCP connections were established and the requests could be inspected by Website Protection, our application layer DDoS mitigation solution.

POST / HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/xaml+xml, application/x-ms-xbap, application/x-ms-application, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/vnd.ms-xpsdocument, */*
Accept-Language: zh-cn
User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
Connection: Keep-Alive
Host: [target]
Content-Length: [very large number]
Content-Type: multipart/form-data; boundary=SOME_BOUNDARY

Body:
[truncated]…
filename=[random value].[GZ/gz/TAR.7z/tar/ZIP/zip]
…[truncated]

Sample attack POST request
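One reason such requests can be screened cheaply is that the oversized Content-Length is visible in the headers, before any of the multi-megabyte body has been read. A minimal sketch of that idea (the threshold and helper are hypothetical illustrations, not Incapsula's actual filtering logic):

```python
MAX_POST_BYTES = 1 * 1024 * 1024  # hypothetical 1 MB ceiling for this endpoint

def should_reject(headers: dict, max_bytes: int = MAX_POST_BYTES) -> bool:
    """Reject a POST whose declared body size exceeds the endpoint's ceiling."""
    try:
        declared = int(headers.get("Content-Length", "0"))
    except ValueError:
        return True  # malformed Content-Length: drop it
    return declared > max_bytes

attack_headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Baiduspider/2.0; ...)",
    "Content-Type": "multipart/form-data; boundary=SOME_BOUNDARY",
    "Content-Length": "52428800",  # ~50 MB, like the flood's randomly generated files
}
print(should_reject(attack_headers))  # True
```

The decision is made from a few hundred bytes of headers; the catch, as discussed below, is that those headers only arrive after the TCP connection is established, so the bytes still traverse whatever pipe sits in front of the filter.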

The attack campaign was launched from a botnet infected with a Nitol malware variant, which accessed the website under the guise of a Baidu spider, as seen above.
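Spoofed crawler User-Agents like this one are detectable: Baidu publishes that its genuine spiders reverse-resolve to hostnames under baidu.com or baidu.jp, so a reverse DNS check exposes impostors. A simplified sketch of that check (function names are our own, for illustration):

```python
import socket

def hostname_is_baidu(hostname: str) -> bool:
    """Check whether a reverse-resolved hostname belongs to Baidu's crawl domains."""
    return hostname.endswith(".baidu.com") or hostname.endswith(".baidu.jp")

def is_genuine_baiduspider(ip: str) -> bool:
    """Reverse-resolve the client IP and verify it maps into Baidu's domains."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no reverse DNS record: not a real Baiduspider
    return hostname_is_baidu(hostname)
```

A production check would also forward-resolve the returned hostname and confirm it maps back to the same IP, so an attacker cannot simply plant a fake PTR record.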

Overall, the attack traffic originated from 2,700 IP addresses. The bulk were located in China, as evidenced by the map below.

Geo-location of compromised devices used in the attack

Why an 8.7 Gbps attack spells trouble for hybrid DDoS protection

When taken out of context, an 8.7 Gbps attack may not seem like cause for concern—especially these days, when security service providers, including ourselves, regularly share reports of 200, 300 and 400 Gbps assaults.

However, those are all network layer attacks; they're expected to be large. A multi-gigabit application layer assault, on the other hand, is an unforeseen threat. As such, it can succeed where a much larger network layer attack would fail.

This is because application layer traffic can only be filtered after the TCP connection has been established. Unless you are using an off-premises mitigation solution, this means that malicious requests are going to be allowed through your network pipe, which is a huge issue for multi-gig attacks.

A case in point is hybrid DDoS protection, in which an off-premises service is deployed to counter network tier threats while customer-premises equipment (CPE) is used to mitigate application tier attacks.

The bottleneck in hybrid DDoS protection topology

While conceptually effective, the Achilles' heel of this topology is network pipe size. For example, to successfully mitigate a ~9 Gbps layer 7 attack like the one described, a CPE would require a 10 Gbps uplink.

Otherwise, the network connection would simply get clogged with DDoS requests, which cannot be identified as such until they establish a connection with the appliance.

An insufficient uplink in this situation would result in a denial of service, even if the appliance filters the requests after they go through the pipe.

Granted, some of the larger organizations today do have a 10 Gbps burst uplink. Still, perpetrators could easily ratchet up the attack size, either by initiating more requests or by enlisting additional botnet resources. The next attack could just as easily reach 12 or 15 Gbps, or more. Very few non-ISP organizations have the infrastructure required to mitigate attacks of that size on premises.

Furthermore, application layer attacks are easy to sustain. We recently witnessed one that lasted over 101 days straight, and even ten days of bursting creates a nightmare in overage fees. From a financial point of view, this is one of the main reasons DDoS mitigation services exist: to offer cost-effective scalability as an alternative to paying for high commits and overages.
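The overage exposure is easy to estimate, because an 8.7 Gbps flood moves roughly a terabyte of traffic every fifteen minutes. A rough sizing calculation (durations here are illustrative; actual commit and overage pricing varies widely by provider):

```python
def flood_volume_tb(gbps: float, days: float) -> float:
    """Total data moved by a sustained flood, in terabytes (decimal TB)."""
    bytes_per_second = gbps * 1e9 / 8
    return bytes_per_second * days * 86_400 / 1e12

ten_days = flood_volume_tb(8.7, days=10)
print(f"{ten_days:.0f} TB over ten days")  # ~940 TB
```

Nearly a petabyte in ten days of bursting is the kind of figure that turns a technical mitigation question into a budget question.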

The canary in the coal mine

Experience has shown that effective DDoS methods are rarely an exception to the rule. As we speak, the aforementioned attacking botnet remains active and the technique used in the attack is still being employed. Furthermore, it is likely to become more pervasive as additional botnet operators discover its damage potential.

The existence of these threats makes another good case for off-premises mitigation solutions that terminate HTTP/S outside the network perimeter. They are unrestricted by your network's pipe size and can scale on demand to filter any amount of application layer traffic.

This is exactly what happened with the 8.7 Gbps layer 7 assault described above, where our Website Protection service handled this specific HTTP/S flood vector automatically and out of the box.

Having said that, we do realize that some organizations are under regulatory obligation to terminate all TCP connections on premises, and have no choice but to use mitigation appliances. If this is the case, our best advice is to consider upgrading your uplink so that it can at least counter attacks below 10 Gbps.

One way or another, this assault is a reminder to consider scalability when strategizing defense plans against application layer attacks.