DNS is short for Domain Name System, the online service that converts human-friendly server names into the numeric network addresses that computers actually connect to. Without it, you wouldn’t be able to refer to a server called example.com – you’d have to remember a number such as 192.168.1.140 instead. Actually, it’s even worse than that, because busy websites like www.facebook.com don’t have just one server IP address.
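You can see the "more than one address" point for yourself with a few lines of Python, using nothing but the standard library. The exact addresses returned will vary by network, location and time, which is rather the point:

```python
# Illustrative only: one DNS name can map to many addresses, and the
# answers depend on where and when you ask.
import socket

# getaddrinfo() performs a DNS lookup and returns every address it is offered.
results = socket.getaddrinfo("www.facebook.com", 443, proto=socket.IPPROTO_TCP)

for family, socktype, proto, canonname, sockaddr in results:
    print(sockaddr[0])    # the IPv4 or IPv6 address in each answer
```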
Big web properties may have racks and racks of customer-facing servers in operations centres all over the world, giving them a wide variety of network number ranges on a wide variety of different networks.
Busy sites typically use DNS to direct you to a specific server based on load levels, maintenance schedules, your current location, and so on, in order to improve speed, spread load and avoid bottlenecks.
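None of the big players publish exactly how their DNS-based traffic steering works, but the underlying idea is easy to sketch. Everything in the snippet below (the regions, the pools, the address ranges) is made up purely for illustration:

```python
# A purely hypothetical sketch: an authoritative DNS server choosing which
# address to hand out, based on where the query came from and on load.
import ipaddress

SERVER_POOLS = {
    "europe":   ["203.0.113.10", "203.0.113.11"],       # made-up documentation addresses
    "americas": ["198.51.100.20", "198.51.100.21"],
}
EUROPEAN_CLIENTS = [ipaddress.ip_network("192.0.2.0/24")]  # pretend client range

def pick_answer(client_ip, request_count):
    """Very crudely pick one server address for this client."""
    ip = ipaddress.ip_address(client_ip)
    region = "europe" if any(ip in net for net in EUROPEAN_CLIENTS) else "americas"
    pool = SERVER_POOLS[region]
    return pool[request_count % len(pool)]    # spread successive queries around the pool

print(pick_answer("192.0.2.55", request_count=7))    # -> 203.0.113.11
```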
In other words, DNS is extremely important, to the point that the internet would be unusable without it.
For that reason, DNS is implemented as a hierarchical, distributed global database, which is a fancy way of saying that no one DNS server holds the entire database, and no one server is critical to the operation of all the others.
Root server operators have already highlighted one problem that makes DDoS attacks against the DNS possible in the first place: the failure of large numbers of ISPs to implement network ingress filtering, which would limit attackers' ability to spoof internet traffic and so mount such attacks. That said, one analysis suggests that about 82 per cent of the internet's traffic is now not spoofable, thanks to broad implementation of the ingress filtering standard known as BCP 38.
Another solution put forward by the former operator of the F-root server, Paul Vixie, is to develop a liability model that would penalize network operators that allow attack traffic to flow across their networks.
"In the world of credit cards, ATM cards, and wire transfers, state and federal law explicitly points the finger of liability for fraudulent transactions toward specific actors," Vixie wrote in a post last month.
"And in that world, those actors make whatever investments they have to make in order to protect themselves from that liability, even if they might feel that the real responsibility for preventing fraud ought to lay elsewhere."
"We have nothing like that for DDoS. The makers of devices that become part of botnets, the operators of open servers used to reflect and amplify DDoS attacks, and the owners and operators of networks who permit source address forgery, bear none of the costs of inevitable storms of DDoS traffic that result from their malfeasance."
For example, to figure out where sham.in lives, your own company’s (or ISP’s) DNS server takes a top-down approach, roughly as sketched in the code after this list:
- Ask the so-called root servers, “Who looks after the .IN domain name data?”
- Ask the .IN part of the hierarchy, “Who is officially responsible for DNS for SHAM.IN?”
- Ask the SHAM.IN name servers, “What is the actual network number for SHAM.IN?”
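Here's a minimal sketch of that walk down the hierarchy, using the third-party dnspython package. The domain is the illustrative sham.in from above, the starting address is a.root-servers.net, and only the happy path is handled; a real resolver must also cope with missing glue records, timeouts, truncation and plenty more:

```python
# A minimal sketch of the top-down lookup described above (dnspython).
# Only the happy path is handled; real resolvers do far more work.
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"    # a.root-servers.net, one of the 13 root server letters

def ask(server_ip, name):
    """Send a single UDP query for an A record to one specific server."""
    query = dns.message.make_query(name, dns.rdatatype.A)
    return dns.query.udp(query, server_ip, timeout=5)

def first_glue_address(response):
    """Pull the first IPv4 address out of the additional (glue) section."""
    for rrset in response.additional:
        if rrset.rdtype == dns.rdatatype.A:
            return rrset[0].address
    return None

# Step 1: the root doesn't know sham.in, but its referral names the .IN
# servers (and the glue records give their addresses).
root_reply = ask(A_ROOT, "sham.in.")
in_server = first_glue_address(root_reply)

# Step 2: an .IN server refers us onwards to the name servers for SHAM.IN.
in_reply = ask(in_server, "sham.in.")
sham_server = first_glue_address(in_reply)

# Step 3: a SHAM.IN name server finally returns an authoritative answer.
final_reply = ask(sham_server, "sham.in.")
print(final_reply.answer)
```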
Answers are cached at each step along the way, typically for anything from minutes to days, depending on how often the data is expected to change. That greatly reduces the number of times a full, top-down hierarchical query is needed, while ensuring that the system can recover automatically from incorrect or outdated answers once the cached data expires.
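For a feel of what caching looks like from a client's point of view, here's a tiny sketch using dnspython's stub resolver (which simply forwards queries to whatever DNS server your machine is configured to use), assuming the 2.x resolve() API:

```python
# A small sketch of answer caching. With a cache attached, repeat questions
# are answered locally until the record's TTL (time to live) runs out.
import dns.resolver

resolver = dns.resolver.Resolver()          # uses your system's configured DNS server
resolver.cache = dns.resolver.LRUCache()

first = resolver.resolve("www.facebook.com", "A")     # goes out over the network
second = resolver.resolve("www.facebook.com", "A")    # served from the local cache

print(first.rrset.ttl)    # how many seconds the answer may be reused for
```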
As you can imagine, the root servers are the key to the entire DNS service, because every answer that isn't already known or cached has to be fetched by starting at the top.
So there are 13 root servers, prosaically named A to M, operated by 12 different organisations, on 6 different continents.
In fact, each “server” is itself a server farm of many physical servers in multiple locations, for reliability.
Server L, for example, is mirrored in 128 locations in 127 towns and cities (San Jose, California, hosts two instances) in 68 countries, from Argentina to Yemen.
Because you’d need to consult a root server in the first place to look up where the root servers are by name, DNS servers themselves keep a static numeric list of all the root servers, known as the root hints.
Generally speaking, only one root server IP number ever changes at a time, and such changes are rare, so even an old root server list will work, at least to start with.
A DNS server with an outdated list can simply try each of the 13 roots in turn until one answers, and then ask it for the latest list.
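That "ask whichever root still answers for the current list" step is known as a priming query. Here's a rough sketch of the idea, again using the third-party dnspython package; the three hint addresses are the published ones for the A, B and C roots, and a real resolver would also fetch the servers' addresses and fall back to TCP if the reply didn't fit in a single UDP packet:

```python
# A rough sketch of a priming query: ask any root server you still have on
# file for the current list of all the root servers.
import dns.message
import dns.query
import dns.rdatatype

# Published addresses for the A, B and C roots (a subset of the full hints list).
OLD_HINTS = ["198.41.0.4", "199.9.14.201", "192.33.4.12"]

for hint in OLD_HINTS:
    try:
        query = dns.message.make_query(".", dns.rdatatype.NS)
        reply = dns.query.udp(query, hint, timeout=3)
    except Exception:
        continue                 # stale or unreachable entry: try the next one
    for rrset in reply.answer:
        print(rrset)             # the current names of the 13 root servers
    break                        # one working hint is enough
```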
In short: DNS is surprisingly resilient, by design, and DDoSing it is correspondingly hard.
Unsurprisingly, however, the root servers do get DDoSed from time to time, sometimes on an astonishing scale.
Indeed, the root server operators recently reported a DDoS, spread across the last day of November 2015 and the first day of December, that reached about 5,000,000 bogus requests per second per root server letter.
The total attack time was just under four hours, so across all 13 letters the DNS root servers would have faced close to 1 trillion (10^12) bogus requests during the two attack windows.
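As a quick sanity check on those figures (all numbers approximate, with the duration rounded up from "just under four hours"):

```python
# Back-of-the-envelope check of the totals quoted above.
queries_per_second_per_letter = 5_000_000
root_server_letters = 13                 # roots A through M
attack_duration_seconds = 4 * 3600       # "just under four hours", rounded up

total_bogus_requests = (queries_per_second_per_letter
                        * root_server_letters
                        * attack_duration_seconds)
print(f"{total_bogus_requests:.1e}")     # about 9.4e+11, i.e. close to a trillion
```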
Simply put, the DNS root servers took an unprecedented hammering, but nevertheless stood firm, keeping the global DNS fully functional throughout.
As for defence: it's long past time for network operators to implement source address validation, so that forged traffic of this sort can't be aimed at the DNS in the first place.