New data on the state of distributed denial-of-service attacks indicates not only that supersized DDoS attacks are increasingly common, but also that sophisticated application-layer assaults now account for nearly a quarter of all DDoS attacks.
For its ninth annual Worldwide Infrastructure Security Report, Burlington, Mass.-based distributed denial-of-service (DDoS) mitigation vendor Arbor Networks Inc. surveyed 220 organizations from around the world on network security threats, specifically DDoS attacks, encountered during a 12-month period from November 2012 through October 2013. In addition to data from survey respondents, half of which were Internet service providers, the company also included data it monitored through its security appliances and internal researchers.
DDoS attacks unsurprisingly continue to rank among the top "operational threats" experienced by Arbor's respondents, with around two-thirds of them having witnessed a DDoS attack last year.
Though that figure is a slight drop from the just over three-quarters of organizations that reported attacks in the previous survey, "DDoS remains a key issue; therefore, it should be no surprise that nearly two-thirds of service provider respondents are seeing increased demand for DDoS detection/mitigation services from their customers," Arbor wrote in the report.
Volumetric attacks lead the way
For the previous two years, the largest attacks reported to Arbor were in the 60 Gbps range. In the latest survey, the size of volumetric attacks, those focused on consuming the bandwidth of a target network or service, increased dramatically; the largest reported DDoS attack peaked at 309 Gbps.
Several respondents experienced DDoS attacks that surpassed the 100 Gbps mark, and according to Gary Sockrider, solutions architect for Arbor and a co-author of the report, some of those organizations experienced multiple high-bandwidth DDoS events.
Arbor noted in the report that the customers of the respondents were the main targets of volumetric DDoS attacks, but Sockrider said the infrastructure of network and service providers is coming under greater duress, with 17% of these reported attacks hitting those targets, compared to 11% in the previous year.
"Because of those large-scale attacks, we saw a shift back to infrastructure taking a hit. And that really makes sense," Sockrider said. "The very nature of a large, volumetric attack [means] there are very few entities on the planet that have that much bandwidth, so by nature, if you're going to have an attack that big, it's going to affect the infrastructure providers and service providers because that's the only place you can even perpetrate that level of attack."
Sockrider said Arbor has been closely monitoring these large-scale attacks, with the largest verified attack it has seen clocking in at 245 Gbps. He said "copycats" have sought to mimic the high-profile attack on Spamhaus, a non-profit organization that combats Internet spam through blacklists.
Launched by rogue hosting firm Cyberbunker in March of 2013, the DDoS attack that targeted Spamhaus purportedly generated traffic in the 300 Gbps range and was believed to be responsible for a large-scale slowdown in Internet speeds across swaths of Europe.
To generate that bandwidth, Cyberbunker utilized a DNS amplification DDoS attack, which takes advantage of misconfigured domain name system (DNS) resolvers that allow queries without filtering or rate-throttling. Attackers spoof the victim's IP address and craft small DNS request messages that are sent to those misconfigured servers around the Internet. Because the responses are many times larger than the requests and are returned to the spoofed address, the end result is massive waves of traffic capable of knocking the victim's Internet infrastructure offline.
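The asymmetry described above can be sketched in a few lines. The following is a minimal, illustrative construction of a DNS query header and question section using only the Python standard library; the domain, the assumed response size and the resulting amplification factor are illustrative assumptions for the sake of the arithmetic, not figures from Arbor's report.

```python
import struct

def build_dns_query(domain, qtype=255):
    """Build a minimal DNS query packet (qtype 255 = ANY).
    Illustrative sketch only, not a complete DNS implementation."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (a spoofing sender sets this freely)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question; no answer/authority/additional
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"                    # name as length-prefixed labels, null-terminated
    question += struct.pack(">HH", qtype, 1)  # QTYPE=ANY, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 29 -- the request is only a few dozen bytes

# Assumed figure: an ANY response for a record-heavy zone can run
# ~3,000 bytes, so the amplification factor would be roughly 100x:
assumed_response_size = 3000
print(assumed_response_size / len(query))
```

Because the misconfigured resolver, not the attacker, sends the large response, an attacker with a few gigabits of spoofed-query capacity can, under these assumed sizes, direct hundreds of gigabits at the victim.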
Nearly 85% of the organizations highlighted in Arbor's report operate DNS servers on their networks, and 36% of respondents experienced an impactful DNS-based DDoS attack. Despite the publicity surrounding the Spamhaus attacks, one-fifth of those respondents operating DNS servers, approximately the same as last year, still do not implement the best practice of restricting recursive lookups.
"I think that the increase in lack of internal organizations with specific responsibility for DNS infrastructure, [just over a quarter of respondents], is partly to blame," said Andrew Cockburn, a consulting engineer with Carrier Group North America and contributor to the report, in a blog post. "Without a targeted and holistic approach to security, such organizations have no way to connect the dots between their decisions to leave a resolver open, and the associated security risks."
Attackers focus on application layer
In contrast to volumetric DDoS attacks, application-layer DDoS attacks do not rely on huge amounts of bandwidth. Instead, DDoS attacks targeting layer 7 tend to rely on low traffic rates, but a high number of concurrent connections. By targeting the underlying application logic of Web servers, credential systems and others that can only handle a certain number of requests, attackers can bring down infrastructures in a stealthier manner.
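Because each layer-7 request is small and well-formed, the tell is the number of concurrent or rapid requests per client, not the byte count. The toy limiter below illustrates that idea with a sliding-window request counter per client IP; the class name, thresholds and addresses are arbitrary illustrative choices, not a recommended defense.

```python
import time
from collections import defaultdict, deque

class Layer7Guard:
    """Toy per-client request limiter. Illustrates why application-layer
    floods evade bandwidth-based detection: the signal is request *count*
    within a window, not traffic volume. Thresholds are illustrative."""

    def __init__(self, max_requests=20, window_seconds=1.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client IP -> request timestamps

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client_ip]
        while q and now - q[0] > self.window:
            q.popleft()  # drop timestamps that fell out of the window
        if len(q) >= self.max_requests:
            return False  # low-bandwidth but high-rate: refuse
        q.append(now)
        return True

guard = Layer7Guard(max_requests=3, window_seconds=1.0)
results = [guard.allow("203.0.113.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # [True, True, True, False] -- fourth request in the window refused
```

A production defense would sit closer to the application (reverse proxy or WAF) and weigh request cost, not just count, which is part of why, as Sockrider notes, firewalls and load balancers alone miss these attacks.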
Arbor found that 24% of organizations experienced application-layer attacks during the reporting period, and witnessed them across a number of different services. In particular, 82% of respondents said that HTTP was targeted, representing the most common application-layer attack vector. Additionally, 54% of those surveyed also incurred an application-layer attack against HTTPS-encrypted Web services, compared to 37% in last year's report, and only 24% the year before.
Sockrider said application-layer DDoS attacks are on the rise, in part, because organizations are not defending against them properly. Traditional security appliances such as firewalls, intrusion prevention systems and load balancers, he said, are largely useless in such scenarios because application-layer attacks are "explicitly crafted to beat them."
Simultaneously, too many organizations "depend on their service providers" to mitigate DDoS attacks, according to Sockrider, because the only place to deal with the more visible volumetric attacks is upstream.
"While it is possible to defend against application-layer DDoS attacks in the network, it's going to be a whole lot harder to identify them at the network level," Sockrider said. "At the very least, you need some visibility on the internal network and closer to the resources so that you can then send that information about the nature of the attack upstream and have it dealt with there."