Cartika Blog

Why Are Distributed Denial Of Service Attacks So Hard To Defend Against?

DDoS attacks have been hitting the headlines with increasing frequency over the last few months. They’re a favored strategy of “hacktivists”, extortionists, and online criminals hoping to create a distraction. In principle, DDoS attacks are quite simple: at the most basic level, a collective of compromised Internet-connected machines directs a flood of data at the target with the aim of degrading its performance, either by saturating its connection to the Internet or by exhausting its resources. The result is a site or service that is no longer usable by visitors.

If you’re a Feedly user, you’ll have experienced the results of a DDoS attack recently. Attackers flooded the RSS feed reader’s servers with data, knocking it out of service for several days with the intention of extracting a payment from the company: a sort of modern protection racket.

In theory, it’s not difficult to block incoming packets of data (firewalls do it all the time), so why is it so hard to adequately defend a site against a DDoS attack? There are a few reasons, most of which come down to one fact: it is very hard to block an attacker’s traffic without also blocking requests from legitimate users, and blocking those users achieves the same result as mounting no defense at all. Either way, the site disappears for them.
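To get a feel for the scale involved, here is a rough back-of-envelope calculation. All of the figures are illustrative assumptions, not measurements, but they show how quickly a botnet’s combined traffic can outgrow a typical server uplink:

```python
# Illustrative arithmetic: how a botnet saturates a target's connection.
# Every number below is an assumption chosen for illustration.
bots = 50_000            # assumed number of compromised machines
per_bot_mbps = 2         # assumed upstream bandwidth per machine, in Mbps

attack_gbps = bots * per_bot_mbps / 1000   # combined flood, in Gbps
uplink_gbps = 10                           # assumed capacity of the target's uplink

oversubscription = attack_gbps / uplink_gbps

print(attack_gbps)        # combined attack traffic in Gbps
print(oversubscription)   # how many times over capacity the link is driven
```

Even with each infected machine contributing only a trickle, the aggregate dwarfs the target’s connection, and legitimate requests are squeezed out along with the attack traffic.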

Sites Don’t Know Where The Attacks Are Coming From

It’s not as simple as blocking an IP address. Botnets are often made up of many thousands of infected machines spread out all over the world. Blocking any single address is trivial, but blocking every zombie machine without accidentally blocking genuine requests is a hard problem: the set of attacking addresses changes constantly, and some of them are shared gateways that also carry traffic from legitimate users.
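A minimal sketch makes the asymmetry concrete. The blocklist mechanism itself is easy; the hard part, which no simple data structure solves, is deciding which of thousands of shifting source addresses are actually attackers. The addresses below are documentation-range examples, not real attackers:

```python
# Hypothetical sketch: per-IP blocking is mechanically trivial.
# The hard problem -- identifying which addresses to block without
# catching legitimate users behind shared gateways -- is not shown,
# because it cannot be solved by the blocklist itself.

blocklist = set()

def block(ip):
    """Add a source address to the blocklist."""
    blocklist.add(ip)

def is_allowed(ip):
    """Return True if traffic from this address should be accepted."""
    return ip not in blocklist

# Blocking one known attacker is easy:
block("203.0.113.7")
assert not is_allowed("203.0.113.7")

# But a botnet presents thousands of addresses, and blocking a shared
# gateway address blocks every legitimate user behind it as well.
```

The data structure scales fine; the decision of what to put in it is where defenses fail.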

Firewalls Aren’t Designed To Handle DDoS Attacks

For a firewall to work against a DDoS attack, especially one using protocols like HTTP or DNS that make up the bulk of genuine traffic, it has to track source IPs and a history of their requests. During a DDoS attack, that can mean thousands of constantly changing IPs and millions of packets of data to keep track of in state tables. The memory and processing resources required to do that quickly for every packet are enormous, and most firewalls simply can’t handle the load.
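The state-table problem can be sketched in a few lines. This is a simplified model, not how any particular firewall is implemented, and the window and limit values are assumptions. The point is the last step: every new source address costs a table entry, so an attacker who rotates addresses inflates the table without bound:

```python
import time
from collections import defaultdict, deque

# Simplified model of the per-source state a firewall keeps to
# rate-limit request floods: a timestamped request history per IP.
WINDOW = 10.0   # seconds of history to retain (assumed value)
LIMIT = 100     # max requests per window per source (assumed value)

state = defaultdict(deque)   # ip -> deque of request timestamps

def allow(ip, now=None):
    """Record a request and return True if it is within the rate limit."""
    now = time.monotonic() if now is None else now
    history = state[ip]
    # Expire entries that have fallen outside the window.
    while history and now - history[0] > WINDOW:
        history.popleft()
    history.append(now)
    return len(history) <= LIMIT
```

A single noisy source is caught cheaply, but `state` grows by one entry for every distinct address seen. Against thousands of rotating or spoofed sources, that is exactly the memory and bookkeeping pressure that overwhelms a conventional firewall.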

The Defense Can’t Be Mounted On The Hosting Provider’s Infrastructure

By the time the data gets close to the point of attack, there’s such a flood that it’s practically impossible to do anything other than go offline. That is typically the response of smaller web hosting companies facing a DDoS attack: they shut down the targeted site and IP so that service isn’t degraded for their other clients. Routers, switches, firewalls, and load balancers become overloaded, and very few web hosting providers have the resources and bandwidth to absorb that sort of attack. The defense has to be mounted within ISPs’ networks and at edge nodes, closer to the sources of the traffic, which is one of the ways that DDoS mitigation services like CloudFlare help.

In a nutshell, DDoS attacks are so hard to defend against because the attackers know where the victim is, but the victim doesn’t know where the attackers are. Plus, it’s extremely difficult to tell which packets come from the bad guys and which come from legitimate users.

Image: Flickr/michaelroper