In our last article on the GovCyberHub, we featured the first part of our two-part Q&A series with Roland Dobbins, a Principal Engineer on NETSCOUT’s ASERT Team, NETSCOUT’s foremost DDoS subject-matter expert, and one of the top DDoS mitigation specialists in the world. Roland joined us to talk about the recently discovered Plex DDoS vector, which leverages vulnerabilities in the Plex media server application to launch reflection/amplification DDoS attacks.
Roland explained how the popular media server application was able to be weaponized and used in sophisticated DDoS-for-hire services. He also explained the mechanics behind these reflection/amplification DDoS attacks and how they’re perpetrated.
In part two of our two-part Q&A interview, we asked Roland about the trends that he’s seeing in DDoS attacks, who is at fault for attacks like the Plex attack, and what can be done to protect against them.
Here is what he had to say:

GovCyberHub (GCH): Is the DDoS attack leveraging Plex indicative of a larger trend in DDoS attacks? Are other applications and connected devices being leveraged in this way? How has that affected the size and impact of DDoS attacks that NETSCOUT has witnessed over the past few years?
Roland Dobbins: Reflection/amplification attacks of various types have actually been around for 22 or 23 years. But as attackers have matured, they’ve become much more of an organized criminal enterprise. However, not all of these DDoS attacks are for profit – some of the attackers also have ideological motivations. We’ve also seen a steep increase in online gaming as a motivation behind DDoS attacks.
As the DDoS attacker space has deepened and broadened, and as DDoS-for-hire services have emerged, we’ve seen the attacker base move downstream. You don’t have to be technical at all to launch a DDoS attack anymore, and that’s really transformed the DDoS space.
Almost all of the attacks that you have heard about – the ones that have really made an impact and been covered by the mainstream media – are reflection/amplification attacks. And, as we’ve discussed, those allow hackers to launch large attacks using relatively small network resources. And the DDoS-for-hire services allow miscreants to spend a very small amount of money to generate an outsized negative impact on their targets.
Those converging trends – outsized attacks that require few resources and the emergence of inexpensive DDoS-for-hire services – have made the economics of DDoS attacks highly asymmetrical in favor of the attackers. As a result, they’re increasing in frequency and intensity, and the malicious actors that perpetrate and enable them are constantly scouring for new vectors.
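To put rough numbers on that asymmetry, the back-of-the-envelope Python sketch below multiplies a single modest attacker uplink by some commonly cited bandwidth amplification factors. The figures are illustrative examples drawn from public reporting such as US-CERT alert TA14-017A, not NETSCOUT measurements.

```python
# Back-of-the-envelope DDoS amplification economics. The factors below
# are commonly cited public figures, used here only to illustrate the
# asymmetry; real-world values vary widely by vector and deployment.
VECTORS = {
    # vector: bandwidth amplification factor (response bytes / request bytes)
    "DNS (open resolver)": 54,   # upper end of the commonly cited range
    "NTP (monlist)": 556,        # per US-CERT alert TA14-017A
    "memcached": 51_000,         # worst case publicly reported in 2018
}

ATTACKER_UPLINK_BPS = 100_000_000  # one 100 Mb/s connection

for vector, factor in VECTORS.items():
    attack_bps = ATTACKER_UPLINK_BPS * factor
    print(f"{vector}: {factor}x -> {attack_bps / 1e9:,.1f} Gb/s at the victim")
```

Even the smallest of those factors turns one consumer-grade connection into multiple gigabits per second of attack traffic, which is exactly the economics that make DDoS-for-hire services so cheap to operate.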
Right now, I think we’re tracking something like thirty-three or thirty-four different User Datagram Protocol (UDP) reflection/amplification vectors. And we continue to discover more.

GCH: Why are all of these abusable or exploitable applications and devices out there? Are they the largest factor enabling these DDoS attacks?
Roland Dobbins: In some instances, software and devices have protocols that, due to inherent flaws in their design, can be abused in this manner. In other instances, there is a protocol or service that can be configured in a secure way so that it can’t be abused, but unfortunately that isn’t the default configuration. Most users keep things set to their defaults – especially individual users, though the same thing happens within IT departments as well.
Those exploitable vulnerabilities – whether mistakes in design or in configuration – are a huge part of the problem. But the biggest single contributor is the fact that we do not have universal Source Address Validation (SAV), or anti-spoofing, applied to all Internet-connected networks.
Networks that don’t enforce SAV allow the spoofing of source IP addresses. The ability to spoof source IPs is what makes this entire category of reflection/amplification attacks viable. And remember, these are the largest, highest-impact DDoS attacks in the world.
There are a number of reasons why we don’t have universal Source Address Validation. First off, it is very difficult – if not impossible – to configure networking gear like routers and switches to enforce it by default, because Internet-facing networks have unique IP address ranges. This means that network devices need to be configured to enforce anti-spoofing in a situationally appropriate manner.
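Conceptually, the check that each network has to configure for its own address ranges looks something like the minimal Python sketch below. The prefixes and the is_spoofed() helper are hypothetical stand-ins; real routers enforce this in hardware, typically via ACLs or unicast Reverse Path Forwarding (uRPF).

```python
# A minimal sketch of the Source Address Validation (SAV / BCP 38)
# check an edge network applies to outbound traffic. Illustrative
# only -- real deployments use router ACLs or uRPF, not Python.
from ipaddress import ip_address, ip_network

# The address ranges actually allocated to this network -- the
# "situationally appropriate" part: every network's list differs.
ALLOCATED_PREFIXES = [
    ip_network("198.51.100.0/24"),  # documentation prefixes (RFC 5737)
    ip_network("203.0.113.0/24"),
]

def is_spoofed(source_ip: str) -> bool:
    """True if an outbound packet claims a source address that does
    not belong to any prefix allocated to this network."""
    src = ip_address(source_ip)
    return not any(src in prefix for prefix in ALLOCATED_PREFIXES)

print(is_spoofed("198.51.100.7"))  # False -> legitimate, forwarded
print(is_spoofed("192.0.2.99"))    # True  -> spoofed, dropped at the edge
```

A packet forged to carry a victim’s source address fails this check at the originating network’s edge, which is why reflection/amplification attacks cannot be launched from SAV-enforcing networks.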
“…there are network operators who simply don’t understand – or are not interested in – implementing what we consider best current practices, including Source Address Validation. In some cases, they’re just not aware of it or why it’s necessary. In other instances, it’s because they deliberately cater to the criminals…” – Roland Dobbins
Then you have large wholesale ISPs that sell downstream Internet transit, which is then resold further down the line – sometimes several times. This can make it challenging to understand how outbound traffic from endpoint networks is being routed, and that understanding is required to enforce anti-spoofing without over-blocking legitimate traffic.
That being said, most of the largest ISP networks on the Internet have deployed Source Address Validation at their network edges. However, there are network operators who simply don’t understand – or are not interested in – implementing what we consider best current practices, including Source Address Validation. In some cases, they’re just not aware of it or why it’s necessary. In other instances, it’s because they deliberately cater to the criminals who operate DDoS attack initiation infrastructure.
GCH: What should application and device makers be doing to ensure that their devices aren’t being leveraged in DDoS attacks?
Roland Dobbins: That’s a very good question. And the key here is to design protocols that cannot be abused in this way. I’ll give you a couple of examples of protocols that have been designed well so that they cannot be abused. One of them is QUIC, which you may have heard of – it’s effectively the transport that underpins the newest version of HTTP.
One of the things that the QUIC designers implemented was a cookie that is issued to a client when it connects to a QUIC-enabled Web server. That client then needs to perform a computation and give the answer back to the Web server to prove that it was really that client that sent the request. That’s an excellent example of good protocol design – it means that QUIC isn’t really attractive to attackers looking to launch reflection/amplification DDoS attacks.
There’s another protocol called D/TLS that also has some of these features. D/TLS was designed in 2005, and includes a similar anti-spoofing cookie mechanism.
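In spirit, the cookie exchange that both QUIC and D/TLS describe can be sketched in a few lines of Python. This is a minimal illustration assuming a simple HMAC construction; the function names are hypothetical, and real implementations bind the cookie to more handshake state and rotate their keys.

```python
# A minimal sketch of a stateless anti-spoofing cookie, in the spirit
# of QUIC's address-validation token and D/TLS's cookie exchange.
# Illustrative only; not any protocol's actual wire format.
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # secret known only to the server

def make_cookie(client_ip: str) -> bytes:
    # Derived from the claimed source address, so only a host that can
    # actually receive packets at that address ever learns the cookie.
    return hmac.new(SERVER_KEY, client_ip.encode(), hashlib.sha256).digest()

def verify_cookie(client_ip: str, cookie: bytes) -> bool:
    return hmac.compare_digest(cookie, make_cookie(client_ip))

# First contact: the server replies with only this small cookie rather
# than a large response, so a spoofed request yields no amplification.
cookie = make_cookie("203.0.113.10")

# A genuine client at that address echoes the cookie back and proceeds.
print(verify_cookie("203.0.113.10", cookie))          # True

# An attacker spoofing that address never saw the cookie, so any guess
# it returns fails verification and the handshake goes no further.
print(verify_cookie("203.0.113.10", os.urandom(32)))  # False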
“Those converging trends – outsized attacks that require few resources and the emergence of inexpensive DDoS-for-hire services – have made the economics of DDoS attacks highly asymmetrical in favor of the attackers. As a result, they’re increasing in frequency and intensity…” – Roland Dobbins
However, these anti-spoofing protocols are only effective if they’re actually utilized. There was a recent example – a reflection/amplification DDoS attack vector leveraging, among others, abusable Citrix NetScalers – where a suboptimal default configuration resulted in D/TLS-enabled nodes not sending the anti-spoofing cookie. The bad guys figured this out and abused more than 4,300 D/TLS reflectors/amplifiers to launch DDoS attacks.
This is an example of where protocol design was done right, but there was still a failure in the standards process. The D/TLS protocol specification made the cookie exchange a “should do” rather than a “must do” – and, as a result, another reflection/amplification DDoS vector was spawned.
Ultimately, it’s essential that newer protocols be designed so that they’re not abusable. If services are built on protocols that are inherently subject to abuse, there should be mechanisms that allow them to mitigate the possibility of abuse. And the implementors and operators of those services must leverage those mechanisms.