Previous posts (Part 1 and Part 2) offer background on DNS amplification attacks being observed around the world. These attacks continue to evolve. Early attacks focused on authoritative servers, using "ANY" queries for domains well known to offer good amplification. Response Rate Limiting (RRL) was developed to counter these early attacks. RRL, as the name suggests, is deployed on authoritative servers to rate limit responses for targeted names. It groups requesters' IP addresses (/24 for IPv4 and /56 for IPv6) together with the queried name and sends a truncated response to requests that exceed a configured limit.
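The grouping RRL performs can be sketched roughly as follows. This is a simplified model, not any real implementation (BIND's rate-limit feature, for example, tracks considerably more state); the prefix lengths match those described above, while the per-second limit and function names are illustrative.

```python
import ipaddress
from collections import defaultdict

PREFIX_V4 = 24      # group IPv4 requesters by /24
PREFIX_V6 = 56      # group IPv6 requesters by /56
LIMIT_PER_SEC = 5   # hypothetical responses-per-second limit

# (prefix, qname) -> [window_start, count]
counters = defaultdict(lambda: [0.0, 0])

def bucket(src_ip: str, qname: str):
    """Collapse a source address to its prefix and pair it with the name."""
    addr = ipaddress.ip_address(src_ip)
    prefix = PREFIX_V4 if addr.version == 4 else PREFIX_V6
    net = ipaddress.ip_network(f"{src_ip}/{prefix}", strict=False)
    return (str(net), qname)

def should_truncate(src_ip: str, qname: str, now: float) -> bool:
    """Return True when the response should be truncated (TC bit set)."""
    key = bucket(src_ip, qname)
    window_start, count = counters[key]
    if now - window_start >= 1.0:   # start a fresh one-second window
        counters[key] = [now, 1]
        return False
    counters[key][1] = count + 1
    return counters[key][1] > LIMIT_PER_SEC
```

Because the key is a (prefix, name) pair rather than a single address, an attacker spoofing many addresses inside one victim network still lands in the same bucket.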

A truncated response tells the querier to resend its query over TCP. Since DNS amplification attacks rely on spoofed source addresses, this response is received by the attack's target and ignored. Truncated responses are not "amplified": they are about the same size as the original incoming query, and thus do not create excessive load on the target. But since the attacks discussed in this post use ISP resolvers, RRL does not come into play, as it is currently implemented only on authoritative servers.
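The "same size as the query" property falls out of how a truncated response is built: the server essentially echoes the query back with two header flag bits set (QR = 0x8000 marks a response, TC = 0x0200 marks truncation, per RFC 1035). A minimal sketch on raw wire-format bytes:

```python
import struct

QR = 0x8000  # header flag: this message is a response
TC = 0x0200  # header flag: truncated, retry over TCP

def truncated_response(query: bytes) -> bytes:
    """Build a minimal truncated response from a wire-format query."""
    # The DNS header is 12 bytes: six 16-bit fields (RFC 1035).
    txid, flags, qdcount, *_ = struct.unpack("!6H", query[:12])
    new_flags = flags | QR | TC
    # Echo the question section; zero the answer/authority/additional counts.
    header = struct.pack("!6H", txid, new_flags, qdcount, 0, 0, 0)
    return header + query[12:]
```

Since the body is just the original question echoed back, the response offers an attacker an amplification factor of roughly one.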

Attackers' discovery that home gateways were a useful resource for expanding their exploits was a major innovation. As discussed in previous posts, it obscured their activity, since open DNS proxies mask the source IP address the resolver sees, and it provided scaling leverage, since ISP resolvers are highly managed, highly available resources.

More innovation is on the way. The domain names and Query Types used for amplification continue to change. DNS operators have familiarized themselves with obvious red flags such as well-known names offering substantial amplification and "ANY" queries. Although DNSSEC has been implicated as a contributor to attacks, and Query Types such as DNSKEY or RRSIG were expected to be used since they increase record sizes, that doesn't appear to be the case. Instead, careful analysis of DNS data is now revealing purpose-built domains with extremely large Resource Record sets: dozens of "A" records, for instance.
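One way such purpose-built domains can surface in collected data is by their amplification factor: response bytes divided by query bytes. A sketch of that screening pass, assuming query/response sizes have already been captured; the thresholds and field layout are illustrative, not taken from any particular tool:

```python
AMP_THRESHOLD = 10.0    # response/query byte ratio considered suspicious
MIN_RESP_BYTES = 2048   # ignore responses too small to be worth abusing

def suspicious_names(records):
    """Flag (qname, qtype) pairs whose responses offer unusual amplification.

    `records` is an iterable of (qname, qtype, query_len, resp_len) tuples
    drawn from collected DNS traffic.
    """
    flagged = set()
    for qname, qtype, query_len, resp_len in records:
        amp = resp_len / query_len
        if amp >= AMP_THRESHOLD and resp_len >= MIN_RESP_BYTES:
            flagged.add((qname, qtype))
    return flagged
```

A domain stuffed with dozens of "A" records stands out immediately under this ratio, even though each individual record type looks unremarkable.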

This opens up entirely new avenues for attackers and will require even more intensive evaluation of DNS data. Since these domains are likely to be far out in the "long tail" sophisticated algorithmic techniques and substantial processing infrastructure may be needed to uncover them, not unlike what's required for discovering malware domains. Some operators may have the facilities and expertise; others may have to depend on vendors.

Attackers will get even stealthier going forward, so it will become necessary to look more closely at queries, including DNS answers. For network and DNS operators these changing tactics further validate the need, discussed in previous posts, to collect DNS data. Data is necessary to establish baseline behavior and can reveal variances from the norm. Breadth and depth of data gathering, meaning selective storage of information from all of the fields in DNS questions and answers, should become a new Best Practice. Data will also play a critical role in uncovering new domains used solely for amplification, as discussed above. Operationally, the implications of data collection for server performance need to be considered, as well as how to aggregate the data: impacts vary widely across DNS software releases, and aggregation tools vary as well.
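The baseline-and-variance idea can be sketched very simply: record per-name query counts over past intervals, then flag a current interval that sits far above the historical mean. This is an illustrative statistical sketch, not a description of any specific product; the threshold `k` and the variance floor are assumptions.

```python
import statistics

def deviates(history, current, k=3.0):
    """Flag `current` when it exceeds the baseline by k standard deviations.

    `history` holds query counts for one name over earlier intervals.
    The max(..., 1.0) floor keeps a flat history from making every tiny
    fluctuation look anomalous.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return current > mean + k * max(stdev, 1.0)
```

In practice the same test can be applied per Query Type or per client population, which is one reason storing more of the question and answer fields pays off.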

Resolvers also need fine-grained query filtering (based on FQDN, Query Type, perhaps client IP, in any combination) to accommodate changing tactics and ensure there is no collateral damage. This will tip the balance in favor of network providers and, more importantly, the legitimate Internet users who depend on them.
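A rule engine combining those three dimensions might look like the sketch below. The rule format is invented for illustration and does not reflect any particular resolver's configuration syntax; any field set to None matches everything, so a rule can be as broad or as narrow as needed.

```python
from fnmatch import fnmatch
from ipaddress import ip_address, ip_network

# Each rule is (fqdn pattern, qtype, client network); a query is dropped
# when every non-None field matches. Names and addresses are examples.
RULES = [
    ("*.amp-domain.example.", None, None),   # block a known amplification name
    ("*", "ANY", "192.0.2.0/24"),            # block ANY queries from one subnet
]

def drop_query(qname: str, qtype: str, client_ip: str) -> bool:
    """Return True when some rule says this query should be refused."""
    for pattern, rule_qtype, net in RULES:
        if not fnmatch(qname, pattern):
            continue
        if rule_qtype is not None and qtype != rule_qtype:
            continue
        if net is not None and ip_address(client_ip) not in ip_network(net):
            continue
        return True
    return False
```

Combining the fields is what prevents collateral damage: blocking "ANY" only from a misbehaving subnet, for instance, leaves legitimate "ANY" queries elsewhere untouched.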

Join Nominum for a 30 minute webinar on this topic on September 24. Sign up here.