All posts by Richard Clayton

DCMS illustrates the key issue about blocking

This morning the Department for Culture, Media and Sport (DCMS) have published a series of documents relating to the implementation of the Digital Economy Act 2010.

One of those documents, from Ofcom, describes how “Site Blocking” might be used to prevent access to websites that are involved in copyright infringement (i.e. torrent sites, Newzbin, “cyberlockers” etc.).

The report appears, at a quick glance, to cover the ground pretty well, describing the various options available to ISPs to block access to websites (and sometimes to block access altogether — since much infringement is not “web” based).

The report also explains how each of the systems can be circumvented (and how easily) and makes it clear (in big bold type) “All techniques can be circumvented to some degree by users and site owners who are willing to make the additional effort.”

I entirely agree — and seem to recall a story from my childhood about the Emperor’s New Blocking System — and note that continuing to pursue this chimera will just mean that time and money will be pointlessly wasted.

However Ofcom duly trot out the standard line one hears so often from the rights holders: “Site blocking is likely to deter casual and unintentional infringers and by requiring some degree of active circumvention raise the threshold even for determined infringers.”

The problem for the believers in blocking is that this just isn’t true — pretty much all access to copyright infringing material involves the use of tools (to access the torrents, to process NZB files, or just to browse [one tends not to look at web pages in Notepad any more]). Although these tools need to be created by competent people, they are intended for mass use (point and click) and so copyright infringement by the masses will always be easy. They will not even know that the hurdles were there, because the tools will jump over them.

Fortuitously, the DCMS have provided an illustration of this in their publishing of the Ofcom report…

The start of the report says “The Department for Culture, Media and Sport has redacted some parts of this document where it refers to techniques that could be used to circumvent website blocks. There is a low risk of this information being useful to people wanting to bypass or undermine the Internet Watch Foundation’s blocks on child sexual abuse images. The text in these sections has been blocked out.”

What the DCMS have done (following in the footsteps of many other incompetents) is to black out the text they consider to be sensitive. Removing this blacking out is simple but tedious … you can get out a copy of Acrobat and change the text colour to white — or you can just cut and paste the black bits into Notepad and see the text.
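To see quite how thin this protection is: the “blocked out” text is still present in the PDF file, with the black rectangles merely drawn on top of it, so any text extractor will return the hidden words. A minimal sketch using the pypdf Python library (with report.pdf as a stand-in filename):

```python
# The black rectangles are drawn over the text, but the text objects
# themselves remain in the PDF, so a plain text extraction returns
# them. "report.pdf" is a stand-in name for the redacted document.
from pypdf import PdfReader

reader = PdfReader("report.pdf")
for page in reader.pages:
    print(page.extract_text())   # includes the "blacked out" passages
```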

So I confidently expect that within a few hours, non-redacted (non-blocked!) versions of the PDF will be circulating (they may even become more popular than the original — everyone loves to see things that someone thought they should not). The people who look at these non-blocked versions will not be technically competent; they won’t know how to use Acrobat, but they will see the material.

So the DCMS have kindly made the point in the simplest of ways… the argument that small hurdles make any difference is just wishful thinking; sadly for Internet consumers in many countries (who will end up paying for complex blocking systems that make no practical difference) these wishes will cost them money.

PS: the DCMS do actually understand that blocking doesn’t work, or at least not at the moment. Their main document says “Following advice from Ofcom – which we are publishing today – we will not bring forward site blocking regulations under the DEA at this time.” Sadly, however, this recognition of reality is too late for the High Court.

Will Newzbin be blocked?

This morning the UK High Court granted an injunction to a group of movie companies which is intended to force BT to block access to “newzbin 2” by their Internet customers. The “newzbin 2” site provides an easy way to search for and download metadata files that can be used to automate the downloading of feature films (TV shows, albums etc) from Usenet servers. That is, it’s all about trying to prevent people from obtaining content without paying for a legitimate copy (so-called “piracy”).

The judgment is long and spends a lot of time (naturally) on legal matters, but there is some technical discussion — which is correct so far as it goes (though describing redirection of traffic based on port number inspection as “DPI” seems to me to stretch the jargon).

But what does the injunction require of BT? According to the judgment BT must apply “IP address blocking in respect of each and every IP address [of newzbin.com]” and “DPI based blocking utilising at least summary analysis in respect of each and every URL available at the said website and its domains and sub domains”. BT is then told that the injunction is “complied with if the Respondent uses the system known as Cleanfeed”.

There is almost nothing about the design of Cleanfeed in the judgment, but I wrote a detailed account of how it works in a 2005 paper (a slightly extended version of which appears as Chapter 7 of my 2005 PhD thesis). Essentially it is a two-stage system: the routing system redirects port 80 (HTTP) traffic for relevant IP addresses to a proxy machine, and that proxy prevents access to particular URLs.
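To make the two-stage design concrete, here is a minimal sketch of the decision logic in the second (proxy) stage. The routing stage has already diverted port 80 traffic for suspect IP addresses to this proxy, which blocks only the listed URLs and passes everything else through untouched; the blocklist entries and names below are illustrative, not taken from BT’s actual configuration:

```python
# Sketch of the proxy (second) stage of a two-stage blocking system.
# The routing stage has already diverted port 80 traffic for suspect
# IP addresses here, so this stage need only match individual URLs;
# requests to other pages on the same IP address pass through
# unmodified. The blocklist entries are illustrative only.
from urllib.parse import urlsplit

BLOCKLIST = {
    ("newzbin.com", "/"),
    ("www.newzbin.com", "/"),
}

def should_block(url: str) -> bool:
    """Return True if the URL's (host, path) pair is on the blocklist."""
    parts = urlsplit(url)
    return (parts.hostname, parts.path or "/") in BLOCKLIST

# Only the exact blocked URLs are affected; everything else on the
# same server (or the same IP address) is fetched as normal.
assert should_block("http://www.newzbin.com/")
assert not should_block("http://www.newzbin.com/search?q=film")
```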

So if BT just use Cleanfeed (as the injunction indicates) they will resolve newzbin.com (and www.newzbin.com), which are currently both on 85.112.165.75, and they will then filter access to http://www.newzbin.com/, http://newzbin.com and http://85.112.165.75. It will be interesting to experiment to determine how good their pattern matching is on the proxy (Cleanfeed is currently only used for child sexual abuse image websites, so experiments pose a significant risk of lawbreaking).

It will also be interesting to see whether BT actually use Cleanfeed or whether they just ‘blackhole’ all access to 85.112.165.75. The quickest way to determine this (once the block is rolled out) will be to see whether https://newzbin.com works. If it does, then BT will have obeyed the injunction, but the block will be trivial to evade (just add an “s” to the URL). If it does not work, then BT will not be using Cleanfeed to do the blocking!
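One way to run that test, once the block is live, is to probe the two ports directly: Cleanfeed diverts only port 80, so if port 443 still accepts connections then traffic to the address is being redirected rather than dropped. A minimal sketch (the IP address is the one quoted above, and may of course change):

```python
# A quick probe to distinguish proxy-based blocking from blackholing.
# Cleanfeed's routing stage diverts only port 80 (HTTP), so if port
# 443 (HTTPS) still connects, the address is being redirected to a
# proxy rather than dropped. 85.112.165.75 is the address quoted in
# the post and may of course change.
import socket

def port_open(ip: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

ip = "85.112.165.75"
print("port 80 :", port_open(ip, 80))   # answered by the proxy under Cleanfeed; dead if blackholed
print("port 443:", port_open(ip, 443))  # True => redirection (Cleanfeed); False (with 80 also dead) => blackhole
```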

BT users will still of course be able to access Newzbin (though perhaps not by using https), but depending on the exact mechanisms which BT roll out it may be a little less convenient. The simplest method (but not the cheapest) will be to purchase a VPN service — which will tunnel traffic via a remote site (and access from there won’t be blocked). Doubtless some enterprising vendors will be looking to bundle a VPN with a Newzbin subscription and an account on a Usenet server.

The use of VPNs seems to have been discussed in court, along with other evasion techniques (such as using web and SOCKS proxies), but the judgment says “It is common ground that, if the order were to be implemented by BT, it would be possible for BT subscribers to circumvent the blocking required by the order. Indeed, the evidence shows the operators of Newzbin2 have already made plans to assist users to circumvent such blocking. There are at least two, and possibly more, technical measures which users could adopt to achieve this. It is common ground that it is neither necessary nor appropriate for me to describe those measures in this judgment, and accordingly I shall not do so.”

There’s also a whole heap of things that Newzbin could do to disrupt the filtering or just to make their site too mobile to be effectively blocked. I describe some of the possibilities in my 2005 academic work, and there are doubtless many more. Too many people consider the Internet to be a static system which looks the same from everywhere to everyone — that’s just not the case, so blocking systems that take this as a given (“web sites have a single IP address that everyone uses”) will be ineffective.
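Even before the operators lift a finger, the single-address assumption is shaky. A trivial sketch shows that an ordinary lookup can return several addresses, and that the answer can differ between queries and between resolvers (google.com is used here purely as a stand-in for any multi-homed site):

```python
# A hostname need not map to one stable IP address: many sites return
# several A/AAAA records, and the answer can vary between lookups and
# vantage points. google.com is a stand-in for any multi-homed site.
import socket

addrs = {info[4][0] for info in
         socket.getaddrinfo("google.com", 80, proto=socket.IPPROTO_TCP)}
print(sorted(addrs))   # typically more than one address
```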

But this is all moot so far as the High Court is concerned. The bottom line within the judgment is that they don’t actually care if the blocking works or not! At paragraph #198 the judge writes “I agree with counsel for the Studios that the order would be justified even if it only prevented access to Newzbin2 by a minority of users”. Since this case was about preventing economic damage to the movie studios, I doubt that they will be so sanguine if it is widely understood how to evade the block — but the exact details of that will have to wait until BT have complied with their new obligations.

TalkTalk's new blocking system

Back in January I visited TalkTalk along with Jim Killock of the Open Rights Group (ORG) to have their new Internet blocking system explained to us. The system was announced yesterday, and I’m now publishing my technical description of how it works (note that it was called “BrightFeed” when we saw it, but is now named “HomeSafe”).

Buried in all the detail of how the system works are two key points — the first is the notion that it is possible for a centralised checking system (especially one that tells a remote site its identity) to determine whether sites are malicious or not. This is problematic, and I doubt that malware distributors will see it as much of a challenge — although on the other hand, perhaps by setting your browser’s User Agent string to pretend to be the checking system you might become rather safer!
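To illustrate why a checker that announces itself is easy to fool, here is a sketch of the sort of cloaking a malicious site could perform: serve an innocuous page to the scanner and the real payload to everyone else. The User-Agent string below is entirely hypothetical; I make no claim about what TalkTalk’s system actually sends:

```python
# Sketch of server-side cloaking against a self-identifying scanner:
# serve clean content to the scanner's User-Agent and the exploit
# page to everyone else. "HomeSafe-Checker/1.0" is purely
# hypothetical, not the real scanner identity.
from http.server import BaseHTTPRequestHandler, HTTPServer

SCANNER_UA = "HomeSafe-Checker/1.0"  # hypothetical scanner identity

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if SCANNER_UA in ua:
            body = b"<html><body>Nothing to see here.</body></html>"
        else:
            body = b"<html><body><script>/* exploit */</script></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CloakingHandler).serve_forever()
```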

The second is that although the system is described as “opt in”, that only applies to whether or not websites you visit might be blocked. What is not “opt in” is whether or not TalkTalk learns the details of the URLs that all of their customers visit, whether they have opted in or not. All of these sites will be visited by TalkTalk’s automated system — which may take some explaining if a remote site told you a URL in confidence and is checking its logs to see who visits.

On their site, ORG have expressed an opinion as to whether the system can be operated lawfully, along with TalkTalk’s own legal analysis. TalkTalk argue that the system’s purpose is to protect their network, which gives them a statutory exemption from wire-tapping legislation; whereas all the public relations material seems to think it’s been developed to protect the users….

… in the end though, the system will be judged by its effectiveness, and in a world where less than 20% of new threats are detected, that effectiveness may not be all that high.

Resilience of the Internet Interconnection Ecosystem

The Internet is, by its very definition, an interconnected network of networks. The resilience of the way in which the interconnection system works is fundamental to the resilience of the Internet. Thus far the Internet has coped well with disasters such as 9/11 and Hurricane Katrina, which had very significant local impact but scarcely affected the global Internet. Assorted technical problems in the interconnection system have caused a few hours of disruption but no long term effects.

But have we just been lucky? A major new report, just published by ENISA (the European Network and Information Security Agency), tries to answer this question.

The report was written by Chris Hall, with the assistance of Ross Anderson and Richard Clayton at Cambridge and Panagiotis Trimintzios and Evangelos Ouzounis at ENISA. The full report runs to 238 pages, but for the time-challenged there’s a shorter 31 page executive summary and there will be a more ‘academic’ version of the latter at this year’s Workshop on the Economics of Information Security (WEIS 2011).

Securing and Trusting Internet Names (SATIN 2011)

The inaugural SATIN workshop was held at the National Physical Laboratory (NPL) on Monday/Tuesday this week. The workshop format was presentations of 15 minutes followed by 15 minutes of discussion — so all 49 registered attendees were able to contribute to the success of the event.

Many of the papers were about DNSSEC, but there were also papers on machine learning, traffic classification, use of names by malware and ideas for new types of naming system. There were also two invited talks: Roy Arends from Nominet (who kindly sponsored the event) gave an update on how the co.uk zone will be signed, and Rod Rasmussen from Internet Identity showed how passive DNS is helping in the fight against eCrime. All the papers, and the presenters’ slides, can be found on the workshop website.

The workshop will be run again (as SATIN 2012), probably on March 22/23 (the week before IETF goes to Paris). The CFP, giving the exact submission schedule, will appear in late August.

Everyone’s spam is unique

How much spam you get depends on three main things: how many spammers know (or guess) your email address, how good your spam filtering is and, of course, how active the spammers are.

A couple of years back I investigated how spam volumes varied depending on the first letter of your email address (comparing aardvark@example.com with zebra@example.com), with the variations almost certainly coming down to “guessability” (an email address of john@ is easier to guess than yvette@).

As to the impact of filtering, I investigated spam levels in the aftermath of the disabling of McColo, asking whether it was the easy-to-block spam that disappeared. The impact of that closure will have been different for different people, depending on the type (and relative effectiveness) of their spam filtering solution.

Just at the moment, as reported upon in some detail by Brian Krebs, we’re seeing a major reduction in activity. In particular, the closure of an affiliate system for pharmacy spam in September reduced global spam levels considerably, and since Christmas a number of major systems have practically disappeared.

I’ve had a look at spam data going back to January 2010 from my own email server, which handles email for a handful of domains, and that shows a different story!

It shows that spam was up in October … so the reduction didn’t affect how many of the spam emails came to me, just how many “me’s” there were worldwide. Levels have been below the yearly average for much of December, but I am seeing most (but not all) of the dropoff since Christmas Day.

Click on the graph for a bigger version… and yes, the vertical axis is correct, I really do get up to 60,000 spam emails a day, and of course none at all on the days when the server breaks altogether.

Protecting Europe against large-scale cyber-attacks

As on two previous occasions, I’ve been acting as specialist adviser to a House of Lords Committee. This time it was the European Union Committee, who held an inquiry into “Protecting Europe against large-scale cyber-attacks”.

The report is published today and is available in PDF and in HTML. It’s been covered by The Telegraph, the BBC, the Washington Post, and on Parliament’s own TV channel. Interestingly, there’s not all that much consensus on what the main story is, or quite what the recommendations were!


Ineffective self-blocking by the National Enquirer

It used to be simple to explain how browsing works. You type a link into the browser, the browser asks a DNS server at your ISP to translate the human-friendly hostname into the IP address of the web server, and then the browser contacts the server with an HTTP request requesting the page that you want to view.
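In code, that classic model is just two steps: a DNS lookup, then an HTTP GET to the resulting address. A minimal sketch (with example.com standing in for the site you want):

```python
# The classic browsing model: (1) ask DNS to turn the hostname into
# an IP address, then (2) send an HTTP GET to that address.
# example.com is a stand-in host.
import socket

host = "example.com"
ip = socket.gethostbyname(host)               # step 1: DNS lookup
print(f"{host} resolves to {ip}")

with socket.create_connection((ip, 80)) as s:  # step 2: HTTP request
    request = (f"GET / HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n\r\n")
    s.sendall(request.encode("ascii"))
    response = b""
    while chunk := s.recv(4096):
        response += chunk
print(response.split(b"\r\n")[0].decode())     # the HTTP status line
```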

It’s not quite that simple any more — which is rather bad news for the National Enquirer, the US tabloid which decided, three years or so ago, following a brush with the UK libel laws, that it would not publish a UK edition, or allow visits to its website from the UK. Unfortunately, the Enquirer’s blocking is no longer working as effectively as it used to.


Panorama looks at unlawful filesharing

Last night’s Panorama looked at the issue of unlawful filesharing and the proposals within the Digital Economy Bill that the UK Government thinks will deal with it.

The Open Rights Group has criticised the programme for spending too long examining the differences of opinion among music makers, and too little time talking about rights — perhaps that’s an inevitable side effect of fronting the programme with Jo Whiley, a Radio One DJ. This probably increased the audience amongst the under-30s who do a great deal of the file sharing, and for whom this may be the first time that they’ve had the bill’s proposals explained to them. So lose some, win some!

The programme had a number of stunts: they slowed down the broadband of a student household (not only was their MP3 going to take 13 weeks to download, they found they couldn’t effectively look at their email). They got a digital forensics expert to look at a family’s computers, finding copies of LimeWire (tricky stuff, forensics!) and portraying this as a smoking gun for unlawfulness. The same expert camped outside the student house and piggybacked on their WiFi (apparently by employing a default password on their broadband router to authorise themselves to have access).

You can also see yours truly:
Richard Clayton on Panorama
demonstrating an anonymity network (it was in fact Tor, but I’d done a little tweaking to ensure that its standard discouragement of file sharing activity didn’t have any impact): and showing that a BitTorrent tracker stopped recording me as being in Cambridge, but placed me at the Tor exit node in Germany instead.

I argued that as soon as large numbers of people were getting in trouble for file sharing because they were traceable — then they wouldn’t stop file sharing, but they would stop being traceable.

All in all, within the limitations of a 30-minute prime-time main-channel show, I think the Panorama team provided a good introduction to a complex topic. You can judge for yourself (from within the UK) for the next 7 days on the BBC iPlayer, or in three parts on YouTube (I’m two minutes into part 3, at least until a web blocking injunction bars your access to what might well be an infringement of copyright).

What's worrying the spooks?

As I mentioned a few days ago, the security services have some concerns about the Digital Economy Bill:

If evading blocking systems becomes a mainstream activity (and there are said to be 6–7 million illegal file sharers in the UK) then it will be used, almost automatically, by subversive groups — preventing the spooks from examining the traffic patterns and comprehending the threat.

There seems to be some confusion about quite what is worrying the security services. Last October, The Times reported that “both the security services and police are concerned about the plans, believing that threatening to cut off pirates will increase the likelihood that they will escape detection by turning to encryption”, and this meme that the concern is encryption has been repeated ever since.

However, I think that Patrick Foster, the Times media correspondent, got hold of the wrong end of the stick. The issue isn’t encryption but traffic analysis.
