The DDoS world has been hitting new records lately: the attacks on KrebsOnSecurity.com, and later on OVH and Dyn, reached a bandwidth of more than 1 Tbps. While the bandwidth numbers are impressive, they were expected. DDoS security experts had predicted that the previous record (about 450 Gbps) would soon be broken, and this 1 Tbps record will probably be broken again by the end of this year or early in the next. The surprising part of the latest attack was that it was not the reflective attack the DDoS world has gotten used to, in which large internet servers amplify the attacker's requests. This time, the attack consisted of many semi-legitimate HTTP GET requests. Such layer 7 attacks, aimed at the internet pipe as well as the application server behind it, are much harder to block than layer 3 and layer 4 attacks. They are also much harder to conduct.

A lot has already been published about the IoT nature of these attacks and the large botnet used to conduct them. In this post I will look at the other side: why is it so hard to protect against these layer 7 attacks?

A good mitigation technology has to distinguish between legitimate traffic and a malicious attack: it needs to let the legitimate traffic pass through while dropping the attack traffic before it reaches the application server. There are several techniques for blocking attacks automatically: responding to the request to verify correct network behavior from the source, identifying repeating patterns in the traffic headers, or rate limiting based on traffic parameters. To demonstrate application layer mitigation, and to show the difference in mitigation complexity between layer 3 and layer 4 network attacks and layer 7 HTTP application attacks, let's look at these techniques one by one.


The request-response technique, also known as a challenge, is often used to block attacks. On the network layer, when a SYN packet is received from a source IP, the mitigator can respond with a quick SYN-ACK and verify the ACK that comes back; only once a valid ACK is received is the session allowed through. In an application-based attack, however, the attacking device will answer such a SYN-ACK challenge with a valid ACK packet. If the mitigator wants to continue with this technique it needs more CPU power, this time to create a valid HTTP response that keeps the session going (usually a redirect-to-self request). Such a layer 7 challenge can indeed block attacks, but as described above, it requires far more complex code and more CPU cycles. That said, these techniques can usually block application attacks, and a proper DDoS mitigator needs them in its arsenal, while not relying on application level challenges alone.
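To make the redirect-to-self idea concrete, here is a minimal sketch in Python. All names here are illustrative, not a real mitigator's API; the point is that even this toy version must build a syntactically valid HTTP response and remember which sources completed the challenge, work that simply does not exist in a SYN-ACK check:

```python
# Sketch of a layer 7 redirect-to-self challenge (all names illustrative).
# A real browser follows the 302 back to the same URL; most simple attack
# scripts never return, so sources that don't come back can be dropped.

seen_sources = set()  # sources that already received the challenge

def challenge_response(request_path, host):
    """Build a valid HTTP 302 response redirecting back to the same URL."""
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: http://{host}{request_path}\r\n"
        "Content-Length: 0\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

def handle(src_ip, request_path, host):
    """First request from a source gets the challenge; a repeat visit means
    the client followed the redirect, so it is allowed through."""
    if src_ip in seen_sources:
        return "PASS"  # source completed the challenge earlier
    seen_sources.add(src_ip)
    return challenge_response(request_path, host)
```

Compare this with a SYN cookie, which can be computed statelessly from the packet headers alone: the layer 7 version has to parse the request line, construct a full response, and keep per-source state.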

Pattern identification is often used in the DDoS world to block attacks. After all, the attack traffic is generated by some piece of software sending it from multiple locations, and that one piece of software usually produces something in common that distinguishes it from legitimate, scattered users. In a network-based attack, the mitigator creates a signature from the repeating patterns; good mitigators can even create such signatures automatically, without manual intervention, which also saves time for fast mitigation. On the network layer, the pattern matches the layer 3 or layer 4 headers. These headers are easy to parse in software, and the search space is limited: the headers are well formed and each field has a limited value range. Going into the HTTP layer is much more complicated. HTTP headers are more loosely defined, and their values (in most cases text) have variable ranges and lengths. Finding a pattern in HTTP is possible but, as before, much more complex: the mitigator first needs to parse the packet to reach the layer 7 payload, then parse the various HTTP headers and data, and only then find the repeated pattern. Even once the pattern is found, it is harder to block, since the mitigation action must parse each packet's layer 3, layer 4 and layer 7 data to get to the place where the pattern is hiding. Again, more code and more CPU cycles are needed to block the attack.
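A simplified sketch of the layer 7 side of this idea: sample parsed HTTP requests and look for a header value that repeats across a suspiciously large share of them, such as an identical User-Agent emitted by every bot. The header names, threshold, and data shape are assumptions for illustration; a real mitigator works on raw traffic and searches a far larger space:

```python
from collections import Counter

def find_signature(requests, threshold=0.8):
    """Search sampled HTTP requests (dicts of header -> value, an assumed
    shape) for a single value that dominates one of a few text headers.
    Returns (header, value) if one value covers >= threshold of the
    sample, else None."""
    for header in ("User-Agent", "Accept", "Referer"):
        counts = Counter(r.get(header, "") for r in requests)
        value, n = counts.most_common(1)[0]
        if value and n / len(requests) >= threshold:
            return header, value  # candidate blocking signature
    return None
```

Note what this glosses over: by the time `find_signature` runs, something has already reassembled the TCP stream and parsed free-form text headers, which is exactly the extra parsing cost the paragraph above describes; a layer 3/4 signature only compares fixed-width header fields.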

Rate limiting, while a valid mitigation technique, should be considered a last resort in the DDoS world. Its main problem is the inability to distinguish between legitimate and attack traffic: when the rate limit kicks in, it does not know which packets come from a human being and which from a bot, because it blocks blindly, based on rate alone. In some cases, however, rate limiting is the only way to block the attack and keep the service infrastructure from going down. Even rate limiting is harder at the application layer. Limiting packets is not enough, since you do want to let legitimate users pass through; you should rate limit the HTTP requests themselves. Here again, the mitigator needs to parse the traffic and determine whether it is indeed an HTTP request, and that data is visible only after the TCP three-way handshake completes, by which point the session is in many cases already open to the server.
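A minimal sliding-window sketch of request-level rate limiting, again with illustrative names and thresholds. The key point from the paragraph above is that the unit being counted is a parsed HTTP request per source, not a raw packet, which presupposes the completed handshake and layer 7 parsing:

```python
import time
from collections import defaultdict, deque

class RequestRateLimiter:
    """Count parsed HTTP requests (not raw packets) per source IP over a
    sliding time window and reject sources that exceed the limit.
    Limit and window values here are arbitrary examples."""

    def __init__(self, limit=100, window=1.0):
        self.limit = limit          # max requests per window
        self.window = window        # window length in seconds
        self.hits = defaultdict(deque)  # src_ip -> timestamps of requests

    def allow(self, src_ip, now=None):
        """Return True if this request is within the source's budget."""
        now = time.monotonic() if now is None else now
        q = self.hits[src_ip]
        while q and now - q[0] > self.window:
            q.popleft()             # drop timestamps outside the window
        if len(q) >= self.limit:
            return False            # over the limit: drop this request
        q.append(now)
        return True
```

Note that blind blocking by rate is visible even in the sketch: `allow` sees only a source IP and a timestamp, so a fast legitimate client and a bot look identical to it, which is why the technique is a last resort.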

To summarize, every known mitigation technology has to work much harder to block an application-based attack. Since application-based attacks are already part of our lives, and will continue to grow and get more sophisticated, it is important to find a good mitigator that can actually block them and keep up with the attackers' constantly developing methods.