Category Archives: Academic papers

Oral evidence to the malware inquiry

The House of Commons Science and Technology Select Committee is currently holding an inquiry into malware.

I submitted written evidence in September and today I was one of three experts giving oral evidence to the MPs. The session was televised and so conceivably it may turn up on the TV in some strange timeslot — but if you’re interested then there’s a web version for viewing at your convenience. Shortly there will be a written transcript as well.

The Committee’s original set of questions included one about whether malware infection might usefully be treated as a public health issue — of particular interest to me because I have a published paper which considers the role that Governments might play in countering malware for the public good!

In the event, this wasn’t asked about at all. The questions were much more basic, covering the security of hardware and software and the role of the police (and at one point, bizarrely, considering the merits of the Amstrad PCW, a product I was jointly involved in designing and building some 25 years ago).

In fact it was all rather more about dealing with crime than dealing with malware — which is fine (and obviously closely connected) but it wasn’t the topic on which everyone submitted evidence. This may mean that the Committee has a shortage of material if their report aims to address the questions that they raised today.

Will LBT be blocked?

Back in July I wrote a blog article “Will Newzbin be blocked?” which discussed the granting of an injunction to a group of movie companies to force BT to block access to “Newzbin2”.

The parties were back in court this last week to hammer out the exact details of the injunction.

The final wording of the injunction requires BT to block customer access to Newzbin2 by #1(1) rerouting traffic to relevant IPs and #1(2) applying “DPI based” URL blocking. The movie companies have to tell BT which IPs and which URLs are relevant.

#2 of the injunction says that BT can use its existing “Cleanfeed” system (which I wrote about here and at greater length in my PhD thesis here) to meet the requirements of #1, even though Cleanfeed isn’t believed to use DPI at all!

#3 and #4 of the injunction allow the parties to agree to suspend blocking and to come back to court in the future, and #5 relates to the costs of the court action.

One of the (few) upsides of this injunction will be to permit lawful experimentation as to the effectiveness of the Cleanfeed system, assuming that it is used — if the studios ask for all URLs on a website to be blocked, I expect that null routing the website entirely will be simpler for BT than redirecting traffic to the Cleanfeed proxy.

Up until now, discovering a flaw in the technical implementation of Cleanfeed would result in successful access to a child sexual abuse image website. Anyone monitoring the remote end of the connection might then draw the conclusion that images had been viewed and a criminal offence committed. Although careful experimental design could avoid law-breaking, it might be some time into the investigation process before this was properly understood by the criminal justice system, and the intervening period would be somewhat stressful for the investigator.

There is no law that prevents viewing of the contents of Newzbin2, and so the block circumvention techniques proposed over the past few years (starting of course with just using “https”) can now be evaluated for their actual effectiveness.

However, there is more to #1 of the injunction, in that it applies to:

[…] www.newzbin.com, its domains and sub-domains and including payments.newzbin.com and any other IP address or URL whose sole or predominant purpose is to enable or facilitate access to the Newzbin2 website.

I don’t expect that publishing circumvention experience here on LBT could be seen as the predominant purpose of this blog… so I don’t really expect these pages to suddenly become invisible to BT customers. But, since the whole process has an Alice in Wonderland feel to it (someone who believes that blocking websites is possible clearly had little else to do before breakfast), it cannot be entirely ruled out.

Fashion crimes: trending-term exploitation on the web

News travels fast. Blogs and other websites pick up a news story only about 2.5 hours on average after it has been reported by traditional media. This leads to an almost continuous supply of new “trending” topics, which are then amplified across the Internet, before fading away relatively quickly. Many web companies track these terms, on search engines and in social media.

However brief, these first moments after a story breaks present a window of opportunity for miscreants to infiltrate web and social-network search results. The motivation for doing so is primarily financial. Websites that rank high in response to a search for a trending term are likely to receive considerable amounts of traffic, regardless of their quality.

In particular, the sole goal of many sites designed in response to trending terms is to produce revenue through the advertisements that they display in their pages, without providing any original content or services. Such sites are often referred to as “Made for AdSense” (MFA) after the name of the Google advertising platform they are often targeting. Whether such activity is deemed to be criminal or merely a nuisance remains an open question, and largely depends on the tactics used to prop the sites up in the search-engine rankings. Some other sites devised to respond to trending terms have more overtly sinister motives. For instance, a number of malicious sites serve malware in hopes of infecting visitors’ machines, or peddle fake anti-virus software.

Together with Nektarios Leontiadis and Nicolas Christin, I have carried out a large-scale measurement and analysis of trending-term exploitation on the web, and the results are being presented at the ACM Conference on Computer and Communications Security (CCS) in Chicago this week. Based on a collection of over 60 million search results and tweets gathered over nine months, we characterize how trending terms are used to perform web search-engine manipulation and social-network spam. The full details can be found in the paper and presentation.

Pico: no more passwords (at Usenix Security)

The usability community has long complained about the problems of passwords (remember the Adams and Sasse classic). These days, even our beloved XKCD has something to say about the difficulties of coming up with a password that is easy to memorize and hard to brute-force. The sensible strategy suggested in the comic, of using a passphrase made of several common words, is also the main principle behind Jakobsson and Akavipat’s fastwords. It’s a great suggestion. However, in the long term, no solution that requires users to remember secrets is going to scale to hundreds of different accounts, if all those remembered secrets have to be different (and changed every couple of months).
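As a back-of-the-envelope illustration of the comic’s arithmetic, here are a few lines of Python (the word-list sizes are my own illustrative assumptions, chosen to match the comic’s figures, not numbers from the fastwords paper):

```python
import math

def passphrase_entropy_bits(words, wordlist_size):
    """Entropy of a passphrase of `words` words drawn uniformly and
    independently from a list of `wordlist_size` common words."""
    return words * math.log2(wordlist_size)

print(passphrase_entropy_bits(4, 2048))   # 44.0 bits: four common words
print(passphrase_entropy_bits(1, 2**28))  # 28.0 bits: one mangled "clever" word
```

Note that doubling the word list buys only one extra bit per word, whereas adding a fifth word buys eleven; but even at 44 bits, the scheme is only as strong as the user’s willingness to memorize a different passphrase for every account.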

This is why, as I previously blogged, I am exploring the space of solutions that do not require the memorization of any secrets—whether passwords, passphrases, PINs, faces, graphical squiggles or anything else. My SPW paper, Pico: No more passwords, was finalized in June (including improvements suggested in the comments to the previous blog post) and I am about to give an invited talk on Pico at Usenix Security 2011 in San Francisco.

Usenix talks are recorded and the video is posted next to the abstracts: if you are so inclined, you will be able to watch my presentation shortly after I give it.

To encourage adoption, I chose not to patent any aspect of Pico. If you wish to collaborate, or fund this effort, talk to me. If you wish to build or sell it on your own, be my guest. No royalties due—just cite the paper.

Measuring Search-Redirection Attacks in the Illicit Online Prescription Drug Trade

Unauthorized online pharmacies that sell prescription drugs without requiring a prescription have been a fixture of the web for many years. Given the questionable legality of the shops’ business models, it is not surprising that most pharmacies resort to illegal methods for promoting their wares. Most prominently, email spam has relentlessly advertised illicit pharmacies. Researchers have measured the conversion rate of such spam, finding it to be surprisingly low. Upon reflection, this makes sense, given the spam’s unsolicited and untargeted nature. A more successful approach for the pharmacies would be to target users who have expressed an interest in purchasing drugs, such as those searching the web for online pharmacies. The trouble is that dodgy pharmacy websites don’t always garner the highest PageRanks on their own merits, and so some form of black-hat search-engine optimization may be required in order to appear near the top of web search results.

Indeed, by gathering daily the top web search results for 218 drug-related queries over nine months in 2010-2011, Nektarios Leontiadis, Nicolas Christin and I have found evidence of substantial manipulation of web search results to promote unauthorized pharmacies. In particular, we find that around one-third of the collected search results pointed to one of some 7,000 infected hosts triggered to redirect to a few hundred pharmacy websites. In these pervasive search-redirection attacks, miscreants compromise high-ranking websites and dynamically redirect traffic to different pharmacies based on the particular search terms issued by the consumer. The full details of the study can be found in a paper appearing this week at the 20th USENIX Security Symposium in San Francisco.
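To give a flavour of how such redirections can be spotted, here is a minimal sketch (my illustration, not the measurement code from the paper): fetch a page twice, once with a search-engine Referer header and once without, and compare where you end up. The query string and URL are hypothetical, and this only catches HTTP-level redirects, not JavaScript ones.

```python
import urllib.request
from urllib.parse import urlparse

SEARCH_REFERER = "https://www.google.com/search?q=cheap+prescription+drugs"

def landing_host(url, referer=None):
    """Follow HTTP redirects and return the host we finally land on."""
    headers = {"Referer": referer} if referer else {}
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return urlparse(resp.geturl()).netloc

def looks_search_redirected(url):
    """Compare where direct visitors and 'search' visitors end up."""
    return landing_host(url) != landing_host(url, referer=SEARCH_REFERER)

# looks_search_redirected("http://compromised.example.com/page")  # hypothetical URL
```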

Will Newzbin be blocked?

This morning the UK High Court granted an injunction to a group of movie companies which is intended to force BT to block access to “Newzbin2” by their Internet customers. The “Newzbin2” site provides an easy way to search for and download metadata files that can be used to automate the downloading of feature films (TV shows, albums etc) from Usenet servers. That is, it’s all about trying to prevent people from obtaining content without paying for a legitimate copy (so-called “piracy”).

The judgment is long and spends a lot of time (naturally) on legal matters, but there is some technical discussion — which is correct so far as it goes (though describing redirection of traffic based on port number inspection as “DPI” seems to me to stretch the jargon).

But what does the injunction require of BT? According to the judgment BT must apply “IP address blocking in respect of each and every IP address [of newzbin.com]” and “DPI based blocking utilising at least summary analysis in respect of each and every URL available at the said website and its domains and sub domains”. BT is then told that the injunction is “complied with if the Respondent uses the system known as Cleanfeed”.

There is almost nothing about the design of Cleanfeed in the judgment, but I wrote a detailed account of how it works in a 2005 paper (a slightly extended version of which appears as Chapter 7 of my 2005 PhD thesis). Essentially it is a two-stage system: the routing system redirects port 80 (HTTP) traffic for relevant IP addresses to a proxy machine, and that proxy prevents access to particular URLs.
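A minimal sketch of that two-stage decision logic, reconstructed from the paper’s description (not BT’s actual code; the IP address and URL are just the examples discussed below):

```python
# Sketch of Cleanfeed's two-stage decision logic, reconstructed from the
# 2005 paper's description -- not BT's actual implementation.

SUSPECT_IPS = {"85.112.165.75"}             # stage 1: divert traffic for these IPs
BLOCKED_URLS = {"http://www.newzbin.com/"}  # stage 2: URLs the proxy refuses

def stage1_route(dst_ip, dst_port):
    """Routing layer: divert only port-80 traffic for suspect IPs to the proxy."""
    return "proxy" if dst_port == 80 and dst_ip in SUSPECT_IPS else "direct"

def stage2_proxy(url):
    """Web proxy: serve the request unless the exact URL is blocklisted."""
    return "blocked" if url in BLOCKED_URLS else "fetched"

def fetch(dst_ip, dst_port, url):
    if stage1_route(dst_ip, dst_port) == "direct":
        return "fetched"  # never reaches the proxy (e.g. HTTPS on port 443)
    return stage2_proxy(url)

print(fetch("85.112.165.75", 80, "http://www.newzbin.com/"))    # blocked
print(fetch("85.112.165.75", 443, "https://www.newzbin.com/"))  # fetched
```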

So if BT just use Cleanfeed (as the injunction indicates) they will resolve newzbin.com (and www.newzbin.com) which are currently both on 85.112.165.75, and they will then filter access to http://www.newzbin.com/, http://newzbin.com and http://85.112.165.75. It will be interesting to experiment to determine how good their pattern matching is on the proxy (currently Cleanfeed is only used for child sexual abuse image websites, so experiments currently pose a significant risk of lawbreaking).

It will also be interesting to see whether BT actually use Cleanfeed or whether they just ‘blackhole’ all access to 85.112.165.75. The quickest way to determine this (once the block is rolled out) will be to see whether or not https://newzbin.com works. If it does work then BT will have obeyed the injunction but the block will be trivial to evade (add an “s” to the URL). If it does not work then BT will not be using Cleanfeed to do the blocking!
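That test is easy to script. A minimal sketch of how one might probe (an assumption about method, not a tried-and-tested tool):

```python
import socket

def tcp_connects(host, port, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Cleanfeed intercepts only port 80, so HTTPS (port 443) should keep working;
# a null route for the whole IP address would kill both ports.
if tcp_connects("newzbin.com", 443):
    print("443 reachable: consistent with Cleanfeed-style port-80 proxying")
else:
    print("443 unreachable: consistent with null-routing the whole IP")
```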

BT users will still of course be able to access Newzbin (though perhaps not by using https), but depending on the exact mechanisms which BT roll out it may be a little less convenient. The simplest method (but not the cheapest) will be to purchase a VPN service — which will tunnel traffic via a remote site (and access from there won’t be blocked). Doubtless some enterprising vendors will be looking to bundle a VPN with a Newzbin subscription and an account on a Usenet server.

The use of VPNs seems to have been discussed in court, along with other evasion techniques (such as using web and SOCKS proxies), but the judgment says “It is common ground that, if the order were to be implemented by BT, it would be possible for BT subscribers to circumvent the blocking required by the order. Indeed, the evidence shows the operators of Newzbin2 have already made plans to assist users to circumvent such blocking. There are at least two, and possibly more, technical measures which users could adopt to achieve this. It is common ground that it is neither necessary nor appropriate for me to describe those measures in this judgment, and accordingly I shall not do so.”

There’s also a whole heap of things that Newzbin could do to disrupt the filtering or just to make their site too mobile to be effectively blocked. I describe some of the possibilities in my 2005 academic work, and there are doubtless many more. Too many people consider the Internet to be a static system which looks the same from everywhere to everyone — that’s just not the case, so blocking systems that take this as a given (“web sites have a single IP address that everyone uses”) will be ineffective.
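For instance, even a couple of lines of Python show the local resolver handing back a whole set of addresses for a popular hostname (the hostname here is just an example), and a different resolver, or the same one a little later, may return a different set entirely:

```python
import socket

def ipv4_addresses(host):
    """The set of IPv4 addresses the local resolver currently returns for host."""
    return {info[4][0] for info in socket.getaddrinfo(host, 80, socket.AF_INET)}

# Large sites routinely resolve to several addresses at once, and the set
# can change from query to query as the site's DNS rotates or relocates.
print(ipv4_addresses("www.google.com"))
```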

But this is all moot so far as the High Court is concerned. The bottom line within the judgment is that they don’t actually care if the blocking works or not! At paragraph #198 the judge writes “I agree with counsel for the Studios that the order would be justified even if it only prevented access to Newzbin2 by a minority of users”. Since this case was about preventing economic damage to the movie studios, I doubt that they will be so sanguine if it is widely understood how to evade the block — but the exact details of that will have to wait until BT have complied with their new obligations.

Make noise and whisper: a solution to relay attacks

About a month ago I presented at the Security Protocols Workshop a new idea to detect relay attacks, co-developed with Frank Stajano.

The idea relies on having a trusted box (which we call the T-Box, as in the image below) between the physical interfaces of two communicating parties. The T-Box accepts two inputs (one from each party) and provides one output (seen by both parties). It ensures that neither party can determine the complete input of the other.

[Figure: the T-Box, sitting between the two parties’ physical interfaces]

Therefore, by connecting two instances of a T-Box together (as in the case of a relay attack), the message from one end to the other (Alice and Bob in the image above) gets distorted twice as much as it would over a direct connection. That’s the basic idea.

One important question is how the T-Box should operate on the inputs such that we can detect a relay attack. In the paper we describe two example implementations based on a bi-directional channel (as used, for example, between a smart card and a terminal). To help the reader understand these examples better and determine the usefulness of our idea, Mike Bond and I have created a Python simulation. This simulation allows you to choose the type of T-Box implementation, a direct or relay connection, as well as other parameters including the length of the anti-relay data stream and the detection threshold.
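To make the principle concrete, here is a toy model of the detection idea (my simplification, not the simulation itself; the distortion probability, stream length and threshold are all assumed parameters): if each T-Box flips a transmitted bit with some probability, two T-Boxes in series roughly double the observed distortion, and a threshold between the two rates flags the relay.

```python
import random

P_DISTORT = 0.1   # chance the T-Box distorts any given bit (assumed)
N_BITS = 10_000   # length of the anti-relay data stream (assumed)
THRESHOLD = 0.15  # between one T-Box (~0.10) and two in series (~0.18)

def t_box(bits, p=P_DISTORT):
    """Toy T-Box channel: flips each transmitted bit with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def mismatch_rate(sent, received):
    """Fraction of bits that arrived distorted."""
    return sum(s != r for s, r in zip(sent, received)) / len(sent)

sent = [random.getrandbits(1) for _ in range(N_BITS)]
direct = t_box(sent)           # genuine connection: one T-Box on the path
relayed = t_box(t_box(sent))   # relay attack: two T-Boxes in series

for label, received in (("direct", direct), ("relayed", relayed)):
    rate = mismatch_rate(sent, received)
    verdict = "relay suspected" if rate > THRESHOLD else "looks direct"
    print(f"{label}: distortion {rate:.3f} -> {verdict}")
```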

In these two implementations we have restricted ourselves to make the T-Box part of the communication channel. The advantage is that we don’t rely on any party providing the T-Box since it is created automatically by communicating over the physical channel. The disadvantage is that a more powerful attacker can sample the line at twice the speed and overcome our T-Box solution.

The relay attack can be used against many applications, including all smart-card-based payments. There are already several ideas, including distance bounding, for detecting relay attacks. However, our idea brings a new approach alongside the existing methods, and we hope that in the future we can find a practical implementation of our solutions, or a good scenario for a physical T-Box that would not be defeated by a powerful attacker.

Resilience of the Internet Interconnection Ecosystem

The Internet is, by its very definition, an interconnected network of networks. The resilience of the way in which the interconnection system works is fundamental to the resilience of the Internet. Thus far the Internet has coped well with disasters such as 9/11 and Hurricane Katrina, which had very significant local impact but scarcely affected the global Internet. Assorted technical problems in the interconnection system have caused a few hours of disruption but no long-term effects.

But have we just been lucky? A major new report, just published by ENISA (the European Network and Information Security Agency), tries to answer this question.

The report was written by Chris Hall, with the assistance of Ross Anderson and Richard Clayton at Cambridge and Panagiotis Trimintzios and Evangelos Ouzounis at ENISA. The full report runs to 238 pages, but for the time-challenged there’s a shorter 31-page executive summary, and there will be a more ‘academic’ version of the latter at this year’s Workshop on the Economics of Information Security (WEIS 2011).