There are many legitimate, though controversial, reasons to prevent users from accessing content. There are also legitimate reasons to enact laws that call for the removal of content, and these are controversial as well. You may be familiar with the technical measures used to block access or remove content because you’ve already implemented them to protect or manage content for your organization. But you may also be interested to learn about the unintended consequences when these measures are included in legal orders to remove or control content hosted on publicly accessible servers. Here are some examples.
Blocking a link
Links associated with malicious code or phishing campaigns are identified, verified as harmful, and distributed in "lists" maintained by MalwarePatrol, PhishTank, and others. Security products (antivirus, antispam, or firewall) make use of commercial-grade or in-house block lists (also called reputation data) to prevent users from visiting dangerous content. It’s important to note that users, businesses, or ISPs voluntarily use block lists to protect against infection, fraud, or impersonation. Block lists are commonly used to filter email messages containing dangerous URLs before they are delivered to users and to warn or prevent users from visiting dangerous Websites.
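The mechanics of this kind of filtering are straightforward: a product normalizes each URL, then tests the full URL and its host name against the list before allowing the request. Here is a minimal sketch in Python, using hypothetical in-memory entries rather than a live feed from a provider such as PhishTank or MalwarePatrol:

```python
from urllib.parse import urlsplit

# Hypothetical entries for illustration; real products pull feeds from
# block-list providers and refresh them frequently.
BLOCKED_URLS = {"http://bad.example/login.php"}
BLOCKED_HOSTS = {"phish.example"}

def is_blocked(url: str) -> bool:
    """Return True if the exact URL or its host appears on a block list."""
    url = url.lower()
    if url in BLOCKED_URLS:
        return True
    return urlsplit(url).hostname in BLOCKED_HOSTS

print(is_blocked("http://bad.example/login.php"))   # exact-URL match
print(is_blocked("https://phish.example/any/path")) # host-level match
print(is_blocked("https://safe.example/"))          # not listed
```

Note that this is the surgical end of the spectrum: only the listed URL or host is affected, and nothing is removed from the hosting server.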
Removing a link
Blocking actions prevent users from visiting dangerous links, but other actions must be taken to remove the malicious content. Security professionals will often attempt to contact a site operator to notify the operator of the presence of dangerous content and to ask that it be removed. If the security folks cannot identify the site operator (perhaps as a result of subdomain registries) or if the operator refuses to assist, they may work with law enforcement agencies to obtain a legal order to compel cooperation. Blocking and removing links are surgical approaches. They affect only the identified content, which may have been hosted at an otherwise legitimate Website that was compromised and used by the attacker to host or direct users to malicious content.
Blocking a domain name
When security professionals determine that a domain name is being used to support spam, phishing, or criminal activity, they can seek to block the domain name. "Harmful" domains appear in the SURBL and Spamhaus domain block lists (DBLs), among other places. This is a coarser approach than link blocking, because users cannot resolve any host name in a domain and cannot access any service associated with the domain name (Web, mail, file sharing). Blocking actions of this kind are again voluntary.
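Domain block lists are themselves queried over DNS: a mail filter prepends the suspect domain to the list's zone and issues an ordinary lookup, and an answer in the 127.0.1.0/24 range indicates the domain is listed, while NXDOMAIN means it is not. A sketch of the query-name construction and response check, with the network lookup itself left out (a real deployment would issue an actual DNS A query against the provider):

```python
from typing import Optional

def dbl_query_name(domain: str, zone: str = "dbl.spamhaus.org") -> str:
    """Build the DNS name a mail filter would query for a domain block list."""
    return f"{domain.strip('.').lower()}.{zone}"

def is_listed(answer: Optional[str]) -> bool:
    """DBL answers fall in 127.0.1.0/24; no answer (NXDOMAIN) means not listed."""
    return answer is not None and answer.startswith("127.0.1.")

print(dbl_query_name("harmfuldomain.tld"))  # harmfuldomain.tld.dbl.spamhaus.org
print(is_listed("127.0.1.2"))  # a listed-domain return code
print(is_listed(None))         # NXDOMAIN: domain is not on the list
```

The specific return-code meanings (spam domain, phish domain, and so on) are defined by each list provider and change over time, so consult the provider's documentation rather than hard-coding them.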
Link and domain block list providers are careful to minimize false positives. But it’s worth designating someone in your organization to be responsible for list removal and whitelisting services and for maintaining complete and accurate Whois information for all of your organization’s domains.
Suspending a domain name
Security professionals typically query the Whois service to identify the party that registered a harmful domain name. If they cannot reach this "registrant," they contact the registrar (also available from Whois) and request that the domain name be suspended. If the registrar (or registry) determines that the registrant has violated its terms of service, it may act directly. In other circumstances, the registrar (or registry) may insist on a legal order before acting. Domain name suspension requests don’t always make clear what actions registrars should take. Loosely, suspend means, "Stop the DNS from resolving the IP address of the server that hosts harmful content." When this is the intention, a legal order or request typically instructs the registrar or top level domain (TLD) registry to delete the domain name and associated name server records from the TLD zone file, and the TLD name server for harmfuldomain.tld will produce "nonexistent domain" responses.
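The breadth of this action can be illustrated with a toy resolver: once the domain's delegation is deleted from the TLD zone, every name under the domain returns NXDOMAIN, not just the host serving harmful content. All names below are hypothetical.

```python
# Toy TLD zone: maps each delegated domain to its name server.
tld_zone = {"harmfuldomain.tld": "ns1.hoster.example"}

def resolve(name: str) -> str:
    """Toy resolution: look for a delegated parent domain in the TLD zone."""
    labels = name.split(".")
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in tld_zone:
            return f"delegated to {tld_zone[candidate]}"
    return "NXDOMAIN"

print(resolve("www.harmfuldomain.tld"))   # delegated to ns1.hoster.example
print(resolve("mail.harmfuldomain.tld"))  # delegated to ns1.hoster.example

# A suspension order deletes the domain and its NS records from the TLD zone...
del tld_zone["harmfuldomain.tld"]

# ...and now EVERY host under the domain fails to resolve, including mail.
print(resolve("www.harmfuldomain.tld"))   # NXDOMAIN
print(resolve("mail.harmfuldomain.tld"))  # NXDOMAIN
```

This is why suspension, unlike link blocking, takes down every service in the domain at once.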
This may be an appropriate action for indisputably harmful domains, but it can have unintended consequences, because no host name listed in the harmfuldomain.tld zone file resolves. An incident involving Jotform offers a good example of the unintended consequences of taking this action instead of blocking specific content or an individual account in a multiuser Website. The entire site was taken offline, all the user accounts were unreachable, and email service was interrupted for the operator and users for several days. Consider whether your customer organization is prepared for a similar false positive scenario.
Other options permit continued monitoring or notification. For example, a legal order may instruct the registrar or registry to change the name server records associated with harmfuldomain.tld to a designated DNS operator, which assumes responsibility for name resolution. The designated operator may be instructed to configure name resolution so that www.harmfuldomain.tld resolves to a Text of Notice page (as SOPA would do). Alternatively, the designated DNS operator may be instructed to have the name harmfuldomain.tld resolve to the IP address of a supervised botnet command and control host for traffic collection and analysis. Consider false positives in both scenarios and what these might mean for your organization.
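A toy model of this repointing, using hypothetical names and IP addresses drawn from the documentation ranges (203.0.113.0/24, 198.51.100.0/24): the delegation moves to the designated operator, which then answers queries from its own zone data.

```python
# Toy TLD zone entry before the order: delegation points at the original host.
tld_zone = {"harmfuldomain.tld": "ns1.hoster.example"}

# A legal order repoints the delegation to a designated DNS operator...
tld_zone["harmfuldomain.tld"] = "ns1.designated-operator.example"

# ...which serves its own zone data for the domain:
designated_zone = {
    "www.harmfuldomain.tld": "203.0.113.10",  # Text of Notice web server
    "harmfuldomain.tld": "198.51.100.25",     # supervised C&C host for traffic analysis
}

def lookup(name: str) -> str:
    """Toy answer from the designated operator's zone."""
    return designated_zone.get(name, "NXDOMAIN")

print(lookup("www.harmfuldomain.tld"))  # browsers land on the notice page
print(lookup("harmfuldomain.tld"))      # bot traffic goes to the sinkhole
print(lookup("mail.harmfuldomain.tld")) # everything else still fails
```

Note that, as in the suspension case, any name the designated operator does not explicitly serve (mail, file sharing) stops working, which is exactly the false positive exposure the text describes.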
These types of actions are performed every day, typically with very low false positive rates, minimal or no collateral damage, and due process observed. These actions are generally accepted as "appropriate and preventative," but it’s worth asking whether your customer is prepared to respond in a false positive scenario.
In my next and final post in this series, I’ll look at more controversial actions involving content filtering, DNS filtering, and the seizure of equipment, assets, or content.
Originally posted at The Champion Community 9 May 2012