DDoS Attacks
Background:
Over the last two years, the term “DDoS attack” has made its way into the public media stream.
Today even non-technical people are aware of the existence and potential impact of such attacks. In
years past, DDoS attacks have been dominated by “volumetric” attacks usually generated by
compromised PCs that are grouped together in large-scale botnets. Some well-publicized examples
include the DDoS attacks against UK-based online betting sites where the hackers extorted the
gambling firms, and the politically motivated DDoS attacks against the Georgian government.
Not only are attacks increasing in size, but they are also increasing in complexity as new types of
DDoS attacks continue to emerge and threaten the availability of Internet-facing businesses and
services. Conduct a quick search on the Internet and it’s not difficult to find media coverage
regarding online banking, e-commerce and even social media sites that have been victims of
application-layer DDoS attacks. The motivation? Most of the time it’s for financial gain, but other
incentives include political “hacktivism” or just plain old ego. And thanks to a growing trend of do-it-
yourself attack tools and “botnets for hire,” even a computer novice can execute a successful DDoS
attack.
For example, possibly one of the most publicized series of DDoS attacks happened in 2010 when a
group of WikiLeaks supporters and hacktivists known as “Anonymous” used social media sites to
recruit and instruct supporters on how to download, configure and execute an application-layer DoS
attack against several targets (the group called these attacks “Operation Payback”). For those
supporters who were not computer-savvy enough to conduct the DDoS attacks themselves, there
was an option to “Volunteer your PC for the Cause,” in which case a member of Anonymous would
take over the supporter’s PC and make it part of the botnet!
Denial of service is, at its core, an attack on a computer system or network that causes a loss of
service to users, typically the loss of network connectivity and services through the consumption of
the victim network’s bandwidth or the overloading of the victim system’s computational resources.
The motivation for DoS attacks is not to break into a system. Instead, it is to deny the legitimate use
of the system or network to others who need its services, typically by exhausting some finite
resource.
The DoS concept is easily applied to the networked world. Routers and servers can handle a finite
amount of traffic at any given time based on factors such as hardware performance, memory and
bandwidth. If this limit is surpassed, new requests are rejected. As a result, legitimate traffic is
ignored and the device’s users are denied access. So an attacker who wishes to disrupt a specific
service or device can do so by simply overwhelming the target with packets designed to consume all
available resources.
A DoS is not a traditional "crack", in which the goal of the attacker is to gain unauthorized privileged
access, but it can be just as malicious. The point of DoS is disruption and inconvenience. Success is
measured by how long the chaos lasts. When turned against crucial targets, such as root DNS
servers, the attacks can be very serious in nature. DoS threats are often among the first topics that
come up when discussing the concept of information warfare. They are simple to set up, difficult to
stop, and very effective.
The process is relatively simple. A cracker breaks into a large number of Internet-connected
computers and installs the DDoS software package (of which there are several variations). The DDoS
software allows the attacker to remotely control the compromised computer, thereby making it a
“slave”. From a “master” device, the cracker can inform the slaves of a target and direct the attack.
Thousands of machines can be controlled from a single point of contact. Start time, stop time, target
address and attack type can all be communicated to slave computers from the master machine via
the Internet. Devoted to this one purpose, a single machine can generate several megabytes of
traffic, and several hundred machines can generate gigabytes. With this in mind, it’s easy to see
how devastating this sudden flood of activity can be for virtually any target.
The network exploit techniques vary, but with enough machines participating, almost any type of
attack will be effective: ICMP echo requests directed at a broadcast address (Smurf attacks), bogus
HTTP requests, fragmented packets, or simply random traffic. The target eventually becomes so
overwhelmed that it crashes, or its quality of service degrades until it is worthless. The attack can be
directed at any networked device: routers (effectively targeting an entire network), servers (Web,
mail, DNS) or specific machines (firewalls, IDS).
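As an aside, the broadcast amplification that Smurf attacks rely on is commonly switched off at the
router. A minimal Cisco-style sketch (the interface name is illustrative):

    ! Refuse to translate directed broadcasts into physical broadcasts,
    ! so this network cannot serve as a Smurf amplifier (see RFC 2644;
    ! this is the default in modern IOS releases)
    interface FastEthernet0/0
     no ip directed-broadcast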
But what makes a DDoS difficult to deal with? Obviously, the sudden, rapid flood of traffic will catch
the eye of any competent administrator. Unfortunately, much of this traffic will likely be spoofed, an
attack technique in which the true source address is hidden. An inspection of these packets will yield
little information other than the router that sent them (your upstream router). This means there
isn’t an obvious rule that will allow the firewall to protect against the attack, as the traffic often
appears legitimate and can come from anywhere.
SAMPLE ANATOMY OF A DDoS ATTACK
ICMP Reflection
The reflective ICMP attack uses public sites that respond to ICMP echo request packets to flood the
victim’s site. Most well-known public sites block ICMP to their networks as a result. However,
routers respond very efficiently to ICMP and, if not properly rate limited, can be an excellent
reflective medium. This attack by itself does not amplify the packets sent to the victim’s site, but if
used in conjunction with a remote-controlled network of computers, it can be very difficult to
trace.
Fast-Flux
The evolution of the technology that attackers are taking advantage of continues today with the
recent trend in fast-flux networks. Here, botnets manipulate DNS records to hide malicious Web
sites behind a rapidly changing network of compromised hosts acting as proxies. The fast-flux trend
reflects the need for attackers to try to mask the source of their attacks so that they are able to
sustain the botnet for as long as possible.
While the DNS servers utilized in these types of attacks were not compromised, they did have a flaw
that allowed them to be used as reflectors: they were open, recursive DNS servers. That is, they did
all of the recursion queries necessary to service a DNS client without requiring that client to be from
the same network as they were. This poses a problem similar to that of open mail relays, and
network operators should view open, recursive DNS servers as being just as important to secure as
open mail relays.
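On a BIND name server, for instance, recursion can be restricted to the operator’s own clients. A
minimal named.conf sketch, with an illustrative address range:

    options {
        // Answer recursive queries only for our own networks, so the
        // server cannot be abused as a reflector by outside clients
        recursion yes;
        allow-recursion { 192.0.2.0/24; localhost; };
    };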
Denial of service attack programs, root kits, and network sniffers have been around in the computer
underground for a very long time. They have not gained nearly the same level of attention by the
general public as did the Morris Internet Worm of 1988, but have slowly progressed in their
development. As more and more systems have come to be required for business, research,
education, the basic functioning of government, and now entertainment and commerce from
people’s homes, the increasingly large number of vulnerable systems has converged with the
development of these tools to create a situation that resulted in distributed denial of service attacks
that took down the largest e-commerce and media sites on the Internet. Meanwhile, researchers
said the recent uptick in DDoS attacks being used for political ‘hacktivism,’ extortion and other
criminal purposes can be attributed, in part, to the proliferation of DDoS tools.
The LOIC tool has been in the news for quite some time now. LOIC (Low Orbit Ion Cannon) is an
open-source network stress-testing application written in C#. LOIC performs a denial-of-service
(DoS) attack (or, when used by multiple individuals, a DDoS attack) on a target site by flooding the
server with TCP or UDP packets with the intention of disrupting the service of a particular host. The
code issues HTTP requests to the target site and contains some logic intended to keep the tool from
adversely affecting the participant's own browser. Target changes are communicated to participants
via an IRC channel. From the looks of it, the code could easily be modified to "autofire" rather than
require a user to choose to participate. Detailed information on LOIC can be found at the link below:
http://www.simpleweb.org/reports/loic-report.pdf
Dirt Jumper is a bot that performs DDoS attacks on URLs provided by its command-and-control
(C&C) server. Each infected system makes an outbound connection to the C&C and receives
instructions on which sites to attack. Analysis revealed that this particular piece of malware was
launching DDoS attacks, with direct evidence of attacks on two Russian websites. One of these was
a gaming website; the other was involved in selling a popular smartphone.
Further research determined that this malware was also used in attacks on yet another Russian
gaming site, test attacks on various other sites, attacks on a large corporation's load balancer, and a
damaging attack on a Russian electronic trading platform. Just like many other DDoS bot families,
Dirt Jumper (aka Russkill) continues to undergo active development to help feed a market that's
hungry for DDoS services.
The developers behind the open-source Apache Foundation issued a warning for all users of the
Apache HTTPD Web server, as an attack tool has been made available on the Internet and has
already been spotted in active use. The bug in question is a denial-of-service vulnerability that
allows an attacker, working remotely and sending only a modest number of requests at the Web
server, to consume a great amount of its memory and CPU. And even though the vulnerability was
spotted more than four years ago by Google security engineer Michal Zalewski, it was never
patched. The attack can be deployed against all versions in the 1.3 and 2.0 lines, but as the
Foundation no longer supports the 1.3 line, a patch will be issued only for Apache 2.0 and 2.2.
THC SSL DoS/DDoS
THC-SSL-DOS is a tool to verify the performance of SSL. Establishing a secure SSL connection requires
roughly 15 times more processing power on the server than on the client. THC-SSL-DOS exploits this
asymmetric property by overloading the server and knocking it off the Internet. The problem affects
all SSL implementations today. Vendors have been aware of it since 2003, and the topic has been
widely discussed. The attack further exploits the SSL secure renegotiation feature to trigger
thousands of renegotiations over a single TCP connection.
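To gauge exposure, an administrator can test by hand whether a server accepts client-initiated
renegotiation at all, for example with the stock OpenSSL client (the hostname is illustrative). In an
interactive s_client session, typing "R" on a line by itself requests a renegotiation:

    $ openssl s_client -connect www.example.com:443
    R
    RENEGOTIATING

If the server permits it, the handshake simply repeats; a server that rejects client-initiated
renegotiation will return an error or drop the connection instead.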
A traditional flood DDoS attack cannot be mounted from a single DSL connection, because the
bandwidth of a server is far superior to that of a DSL line: a DSL connection is simply not an equal
opponent in a contest of bandwidth.
This is turned upside down for THC-SSL-DOS: the processing cost of SSL handshakes falls far more
heavily on the server, so a laptop on a DSL connection can challenge a server on a 30 Gbit link.
Traditional flooding DDoS attacks are suboptimal by comparison, because servers are prepared to
handle large amounts of traffic; clients are constantly sending requests to the server even when no
attack is under way.
The SSL handshake is done only at the beginning of a secure session, and only if security is required.
Servers are not prepared to handle large numbers of SSL handshakes. The worst attack scenario is
an SSL-exhaustion attack mounted from thousands of clients (SSL-DDoS).
The reactive mechanisms (also referred to as Early Warning Systems) try to detect the attack and
respond to it immediately. Hence, they restrict the impact of the attack on the victim. Again, there is
the danger of characterizing a legitimate connection as an attack. For that reason it is necessary for
researchers to be very careful.
The main detection strategies are signature detection, anomaly detection, and hybrid systems.
Signature-based methods search observed network traffic for patterns (signatures) that match
known attack signatures from a database. The advantage of these methods is that they can easily
and reliably detect known attacks; the drawback is that they miss novel attacks, and the signature
database must always be kept up-to-date in order to retain the reliability of the system. Anomaly-
based methods instead model normal traffic behavior and flag significant deviations, which lets
them catch previously unseen attacks at the cost of more false alarms.
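As an illustration of the signature approach, an IDS rule can combine a packet pattern with a rate
threshold. A hypothetical Snort-style rule (the numbers are invented for illustration):

    # Alert when any host on the home network receives more than
    # 500 ICMP echo requests within 3 seconds
    alert icmp any any -> $HOME_NET any (msg:"Possible ICMP flood"; \
        itype:8; detection_filter: track by_dst, count 500, seconds 3; \
        sid:1000001; rev:1;)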
Finally, hybrid systems combine both of these methods, updating their signature database with
attacks detected by anomaly detection. Here, too, the danger is great, because an attacker can fool
the system into characterizing normal traffic as an attack. In that case an Intrusion Detection
System (IDS) becomes an attack tool. IDS designers must therefore be very careful, because their
work can boomerang.
After detecting the attack, the reactive mechanisms respond to it; relieving the impact of the attack
is the primary concern. Some mechanisms react by limiting the accepted traffic rate, which means
that legitimate traffic is also blocked. In this case the solution comes from traceback techniques
that try to identify the attacker. If attackers are identified, despite their efforts to spoof their
addresses, then it is easy to filter their traffic. Filtering is efficient only if attacker detection is
correct; in any other case, filtering can become an attacker's tool.
Reacting to DoS/DDoS
Unfortunately, the options are somewhat limited, because most DDoS attacks use spoofed source
addresses that are likely generated at random. So what can be done?
Create a whitelist of the IP addresses and protocols you must allow if prioritizing traffic during an
attack.
The "ip verify unicast reverse-path" (or non-Cisco equivalent) command should be enabled on the
input interface of the upstream connection. This feature drops spoofed packets, a major difficulty in
defeating DDoS attacks, before they can be routed. Additionally, make sure incoming traffic with
source addresses from reserved ranges (i.e., 192.168.0.0) is blocked. This filter will drop packets
whose sources are obviously incorrect.
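A minimal Cisco-style sketch of both measures on the upstream interface (the interface and ACL
numbers are illustrative; the reverse-path check assumes CEF is enabled):

    ! Drop packets whose source address fails the reverse-path check
    ip cef
    interface Serial0/0
     ip verify unicast reverse-path
     ip access-group 101 in
    !
    ! Drop traffic claiming to come from reserved (RFC 1918) ranges
    access-list 101 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 101 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 101 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 101 permit ip any any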
Ingress and egress filtering techniques are also crucial to the prevention of DDoS attacks. These
simple ACLs, if properly and consistently implemented by ISPs and large networks, could keep
spoofed packets from ever reaching the Internet, greatly reducing the time involved in tracking
down attackers. The filters, when placed on border routers, ensure that incoming traffic does not
have a source address originating from the private network and, more importantly, that outbound
traffic does have an address originating from the internal network. RFC 2267 is a great foundation
for such filtering techniques.
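A sketch of the outbound half on a border router, assuming (for illustration only) an internal
network of 203.0.113.0/24:

    ! Permit outbound packets only if sourced from our own prefix,
    ! so local hosts cannot emit spoofed traffic (per RFC 2267)
    access-list 102 permit ip 203.0.113.0 0.0.0.255 any
    access-list 102 deny   ip any any log
    !
    interface Serial0/0
     ip access-group 102 out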
Rate Limiting
A better option for immediate relief, one available to most ISPs, is to "rate limit" the offending
traffic type. Rate limiting restricts the amount of bandwidth a specific type of traffic can consume at
any given moment, dropping packets of the limited type once the threshold is exceeded. It is useful
when a specific kind of packet is used in the attack. Cisco documents an example of this for limiting
the ICMP packets used in a flood.
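A representative committed access rate (CAR) sketch of that kind of configuration (the interface,
ACL number, and rates are illustrative, not Cisco's literal figures):

    ! Limit ICMP echo-reply traffic leaving this interface to 256 kbps;
    ! conforming packets are forwarded, excess packets are dropped
    interface Serial0/0
     rate-limit output access-group 2020 256000 8000 8000 conform-action transmit exceed-action drop
    !
    access-list 2020 permit icmp any any echo-reply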
This example brings up an interesting problem, which was noted earlier. What if the offending traffic
appears to be completely legitimate? For instance, rate limiting a SYN flood directed at a Web server
will reject both good and bad traffic, since all legitimate connections require the initial 3-way
handshake of TCP. It's a difficult problem, without an easy answer. Such concerns make DDoS
attacks extremely tricky to handle without making some compromises.
Route Filter Techniques
Blackhole routing and sinkhole routing can be used when the network is under attack. These
techniques try to temporarily mitigate the impact of the attack. The first directs traffic to a null
interface, where it is silently dropped. At first glance it would seem perfect to "blackhole" malicious
traffic, but is it always possible to isolate malicious from legitimate traffic? If the victims know the
exact IP addresses being attacked, they can drop all traffic destined for those addresses at the
network edge. This way the attack's impact is restricted, because the victims do not consume CPU
time or memory as a consequence of the attack; only network bandwidth is consumed. However, if
malicious traffic cannot be distinguished and all traffic is blackholed, then legitimate traffic is
dropped as well. In that case, this filtering technique fails.
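A destination-based blackhole can be as simple as a static route (the victim address is illustrative):

    ! Discard all traffic destined for the attacked address
    ip route 203.0.113.10 255.255.255.255 Null0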
Sinkhole routing involves routing suspicious traffic to a valid IP address where it can be analyzed.
There, traffic found to be malicious is rejected (routed to a null interface); otherwise it is routed to
the next hop. A sniffer on the sinkhole router can capture the traffic and analyze it. This technique is
less severe than blackholing, and the effectiveness of each mechanism depends on the strength of
the attack. Specifically, sinkholing cannot react to a severe attack as effectively as blackholing, but it
is a more sophisticated technique because it is more selective in rejecting traffic.
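In its simplest form the diversion is again a static route, this time pointing at the sinkhole router
instead of the null interface (the addresses are illustrative):

    ! Divert traffic for the attacked address to the sinkhole router,
    ! where it can be captured, analyzed, and selectively dropped
    ip route 203.0.113.10 255.255.255.255 192.0.2.1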
Filtering malicious traffic seems to be an effective countermeasure against DDoS. The closer to the
attacker the filtering is applied, the more effective it is. This is natural, because when traffic is
filtered by victims, they "survive," but the ISP's network is already flooded. Consequently, the best
solution would be to filter traffic on the source; in other words, filter zombies' traffic.
So far, three filtering criteria have been reported. The first is filtering on the source address. This
would be the best filtering method if we knew, each time, who the attacker was. However, this is
not always possible, because attackers usually use spoofed IP addresses. Moreover, DDoS attacks
usually derive from thousands of zombies, which makes it too difficult to discover all the IP
addresses that carry out the attack. And even if all these addresses were discovered, a filter that
rejects thousands of IP addresses is practically impossible to deploy.
The second filtering possibility is filtering on the service. This tactic presupposes that we know the
attack mechanism. In this case, we can filter traffic toward a specific UDP port or a TCP connection
or ICMP messages. But what if the attack is directed toward a very common port or service? Then
we must either reject every packet (even if it is legitimate) or suffer the attack.
Finally, there is the possibility of filtering on the destination address. DDoS attacks are usually
addressed to a restricted number of victims, so it seems to be easy to reject all traffic toward them.
But this means that legitimate traffic is also rejected. In the case of a large-scale attack this should
not be a problem, because the victims would soon break down anyway and the ISP would not be
able to serve anyone. So filtering prevents the victims from breaking down, at the cost of keeping
them isolated.
IPv4 anycast implementations have been in use on the Internet for a long time now. Particularly
suited to single-response UDP queries, DNS anycast architectures are in use in most tier 1 Internet
providers' backbones. Anycast can be used for both authoritative and recursive DNS
implementations, and several root name servers implement anycast architectures to mitigate DDoS
attacks.
Blackhole filtering is a specialized form of anycast: sinkholes can use anycast to distribute the load
of an attack across many locations.
Many DNS anycast implementations are done using eBGP announcements. Anycast networks can be
contained in a single AS or span multiple ASes across the globe. Anycast provides two distinct
advantages with regard to DoS/DDoS attacks. In a DoS attack, anycast localizes the effect of the
attack. In a DDoS attack, the attack is spread over a much larger number of servers, distributing the
load and allowing the service to better withstand it.
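A sketch of how a single anycast site might originate the shared service prefix over eBGP (the AS
numbers and prefixes are illustrative); every site makes the same announcement, and BGP steers
each client to the topologically nearest instance:

    ! Announce the shared anycast prefix from this site
    ! (assumes the prefix is present in the local routing table)
    router bgp 64500
     network 192.0.2.0 mask 255.255.255.0
     neighbor 198.51.100.1 remote-as 64496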
The main disadvantage of an anycast implementation is brownout conditions, in which a server is
still functioning but running at full capacity, so some legitimate queries go unanswered due to
resource exhaustion. This may be caused by a DoS/DDoS attack or by the failure of a neighbouring
anycast server without adequate reserve capacity. If the exhausted server is taken fully offline and
its queries are redirected through anycast to the next server, a cascading effect can result, taking
down the entire service. To prevent this from occurring, a true secondary anycast system is needed,
separate from the primary anycast, so that one area can fail over to an independent anycast system.
Due diligence is needed when setting up and maintaining an eBGP anycast system. All BGP routing
parameters must be set the same for each anycast site. If a configuration error is made on a site
that lowers its routing preference relative to the others, it will act as a magnet for the traffic, and
the entire service can go down for as long as that route is advertised.
Any attempt at filtering the incoming flow means that legitimate traffic will also be rejected, and if
legitimate traffic is rejected, how will the applications waiting for that information react? On the
other hand, if zombies number in the thousands or millions, their traffic will flood the network and
consume all the bandwidth; in that case filtering is useless, because nothing can travel over the
network at all.
Attack packets usually carry spoofed IP addresses, which makes it more difficult to trace them back
to their source, and intermediate routers and ISPs may not cooperate in the attempt. Sometimes
attackers use source address spoofing to create counterfeit armies: packets might appear to derive
from thousands of IP addresses while the zombies number only a few tens.
Defense mechanisms must be deployed on systems that differ in software and architecture and that
are managed by users with varying levels of knowledge. Developers must design platforms that are
independent of all these parameters.
Conclusion
Although DDoS attacks have largely escaped the front page of major news organizations over the
past few years, replaced by elaborate identity theft, spam, and phishing schemes, the threat still
remains. In fact, attack architectures and technology have evolved so rapidly that enterprises large
and small should be concerned. Unfortunately, it appears the attacks will only increase in complexity
and magnitude as computer network technology permits.
Whether attackers are driven by financial, political, religious, or technical motives, the tools that
they have at their disposal have changed the dynamics of network security. Whereas firewall
management used to be a sufficient strategy to manage attacks, botnets and reflectors have since
reduced the effectiveness of blocking attacks at the network edge.
Attack techniques continue to advance and the number of software vulnerabilities continues to
increase. Internet worms that previously took days or weeks to spread now take minutes. Service
providers and vendors are quickly adapting to the new landscape. Defense in depth must be
practiced by service providers as zero-day exploits are released.
DDoS attacks are a difficult challenge for the Internet community. The reality of the situation is that
only the biggest attacks are fully investigated; the smaller ones that happen every day slip through
the cracks. And while a bevy of products exists, most are not practical for smaller networks and
providers. Ultimately, you are in charge of dealing with and defending against a DDoS. This means
knowing how to respond when under attack: identifying the traffic, designing and implementing
filters, and conducting the follow-up investigation. Preparation and planning are, by far, the best
methods for mitigating DDoS attacks and risk.
The bottom line: Never before has it been easier to execute a DDoS attack.
References:
[2] P. Ferguson and D. Senie, "Network Ingress Filtering: Defeating Denial of Service Attacks which
employ IP Source Address Spoofing," RFC 2267.
[3] http://ddos.arbornetworks.com/
[4] http://www.symantec.com/business/security_response/weblog/
[5] http://blog.spiderlabs.com/