Category Archives: Academic papers

Award Winners #2

Two years ago, almost exactly, I wrote:

Congratulations to Steven J. Murdoch and George Danezis who were recently awarded the Computer Laboratory Lab Ring (the local alumni association) award for the “most notable publication” (that’s notable as in jolly good) for the past year, written by anyone in the whole lab.

Well this year, it’s the turn of Tyler Moore and myself to win, for our APWG paper: Examining the Impact of Website Take-down on Phishing.

The obligatory posed photo, showing that we both own ties (!), is courtesy of the Science Editor of the Economist.

Tyler Moore and Richard Clayton, most notable publication 2008

Securing Network Location Awareness with Authenticated DHCP

During April–June 2006, I was an intern at Microsoft Research, Cambridge. My project, supervised by Tuomas Aura and Michael Roe, was to improve the privacy and security of mobile computer users. A paper summarizing our work was published at SecureComm 2007, but I’ve only just released the paper online: “Securing Network Location Awareness with Authenticated DHCP”.

How a computer should behave depends on its network location. Existing security solutions, such as firewalls, fail to protect mobile users adequately because they assume a static policy. As a result, laptop computers are configured with fairly open policies to support applications appropriate to a trustworthy office LAN (e.g. file and printer sharing, collaboration applications, and custom servers). When the computer is taken home or roaming, this policy leaves an excessively large attack surface.

This static approach also harms user privacy. Modern applications broadcast a large number of identifiers which may leak privacy-sensitive information (name, employer, office location, job role); even randomly generated identifiers allow a user to be tracked. When roaming, a laptop should not broadcast identifiers unless necessary, and on moving location it should either re-use pseudonymous identifiers or generate anonymous ones.

Both of these goals require a computer to be able to identify which network it is on, even when an attacker is attempting to spoof this information. Our solution was to extend DHCP to include a network location identifier, authenticated by a public-key signature. I built a proof-of-concept implementation for the Microsoft Windows Server 2003 DHCP server and the Vista DHCP client.
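To make the idea concrete, here is a minimal sketch of how a signed location identifier might be issued and checked. It is not the protocol from the paper: the option layout, the nonce handling, and the key pinning below are invented for illustration, and it uses Ed25519 signatures via the Python cryptography package.

```python
# Minimal sketch of an authenticated network location identifier. This is
# not the protocol from the paper: the option layout, nonce handling and
# key pinning are invented for illustration. Requires the 'cryptography'
# package (pip install cryptography).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- DHCP server side: the network operator holds the signing key ---
server_key = Ed25519PrivateKey.generate()
NETWORK_ID = b"example-corp-office-lan"        # hypothetical logical network name

def make_location_option(client_nonce: bytes) -> bytes:
    """Return a signed location option: network id, separator, signature."""
    signature = server_key.sign(NETWORK_ID + client_nonce)
    return NETWORK_ID + b"|" + signature

# --- DHCP client side: the laptop pins the public key it first saw for this
# network name, so a spoofed DHCP server cannot later impersonate the network.
known_networks = {NETWORK_ID: server_key.public_key()}

def verify_location_option(option: bytes, client_nonce: bytes) -> bool:
    network_id, _, signature = option.partition(b"|")
    pubkey = known_networks.get(network_id)
    if pubkey is None:
        return False        # unknown network: fall back to the restrictive policy
    try:
        pubkey.verify(signature, network_id + client_nonce)
        return True         # recognised network: the trusted policy may apply
    except InvalidSignature:
        return False

nonce = os.urandom(16)                          # client nonce provides freshness
option = make_location_option(nonce)
print(verify_location_option(option, nonce))    # True
```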

A scheme like this should ideally work on small, PKI-less home LANs while still permitting larger networks to aggregate multiple access points into one logical network. Achieving this requires some subtle naming and key management tricks. These techniques, and how to implement the protocols in a privacy-preserving manner, are described in our paper.

Chip & PIN terminals vulnerable to simple attacks

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering the tamper-proofing mechanisms, and in clear violation of their supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and PED.

Ingenico attack; Dione attack

In addition to the PIN, as part of the transaction, the PED reads an exact replica of the magnetic strip (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic strip card and withdraw money from an ATM that does not read the chip.

We also found that the certification process for these PEDs is flawed. APACS has effectively been approving PEDs for the UK market as Common Criteria (CC) Evaluated, which is not the same as Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike for CC Certified products, the reports are kept secret and governmental Certification Bodies perform no quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or removed from use; the same goes for the labs that certified them, whose identities APACS would not even reveal to us.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.

Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.

Update 2 (2008-03-21): If the link above doesn’t work, try YouTube: part 1 and part 2.

Relay attacks on card payment: vulnerabilities and defences

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.

[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]
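To illustrate the principle behind the distance-bounding defence, here is a toy sketch in Python. It is not the protocol from the paper, which runs as a rapid, hardware-level bit-by-bit challenge-response exchange; the shared key, the number of rounds and the (deliberately loose) timing bound below are invented purely for illustration.

```python
# Toy illustration of the timing check behind distance bounding. The real
# defence is a rapid bit-by-bit challenge-response exchange enforced in
# hardware with a nanosecond budget; the 1 ms bound below is deliberately
# loose so the sketch runs meaningfully in Python.
import os
import time
import hmac
import hashlib

SHARED_KEY = os.urandom(32)     # hypothetical key shared by card and terminal
MAX_RTT_SECONDS = 1e-3          # illustrative bound; real bounds are nanoseconds

def card_respond(challenge: bytes) -> bytes:
    """The genuine card answers a challenge using the shared key."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def relayed_respond(challenge: bytes) -> bytes:
    """A relay forwards the challenge to the distant real card and back."""
    time.sleep(0.01)            # simulated 10 ms of relay latency
    return card_respond(challenge)

def terminal_check(respond, rounds: int = 8) -> bool:
    """The terminal times each challenge-response round; relays add latency."""
    for _ in range(rounds):
        challenge = os.urandom(16)
        start = time.perf_counter()
        response = respond(challenge)
        rtt = time.perf_counter() - start
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(response, expected):
            return False        # wrong answer: not the genuine card
        if rtt > MAX_RTT_SECONDS:
            return False        # answer arrived too late: possible relay
    return True

print(terminal_check(card_respond))     # True: the local card answers in time
print(terminal_check(relayed_respond))  # False: relay latency breaks the bound
```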

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

How effective is the wisdom of crowds as a security mechanism?

Over the past year, Richard Clayton and I have been tracking phishing websites. For this work, we are indebted to PhishTank, a website where dedicated volunteers submit URLs from suspected phishing websites and vote on whether the submissions are valid. The idea behind PhishTank is to bring together the expertise and enthusiasm of people across the Internet to fight phishing attacks. The more people participate, the larger the crowd becomes, and the more robust it should be against errors and perhaps even against manipulation by attackers.

Not so fast. We studied the submission and voting records of PhishTank’s users, and our results are published in a paper appearing at Financial Crypto next month. It turns out that participation is very skewed. While PhishTank has several thousand registered users, a small core of around 25 moderators perform the bulk of the work, casting 74% of the votes we observed. Both the distributions of votes and submissions follow a power law.
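For a flavour of the participation-skew measurement, here is a small sketch run on synthetic data rather than the PhishTank records used in the paper; the user count, vote count and power-law exponent are invented for illustration.

```python
# Sketch of the participation-skew measurement described above, run on
# synthetic data rather than the PhishTank voting records used in the
# paper; the user count, vote count and power-law exponent are invented.
import random
from collections import Counter

random.seed(1)
NUM_USERS = 3000
NUM_VOTES = 100_000

# Draw each vote's author from a heavy-tailed (power-law-like) distribution,
# mimicking the highly skewed participation observed in the real data.
weights = [1.0 / (rank ** 1.2) for rank in range(1, NUM_USERS + 1)]
voters = random.choices(range(NUM_USERS), weights=weights, k=NUM_VOTES)

counts = Counter(voters)
top_share = sum(votes for _, votes in counts.most_common(25)) / NUM_VOTES
print(f"The 25 most active users cast {top_share:.0%} of all votes")
```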

This leaves PhishTank more vulnerable to manipulation than would be the case if every member of the crowd participated to the same extent. Why? If a few of the most active users stopped voting, a backlog of unverified phishing sites might collect. It also means an attacker could join the system and vote maliciously on a massive scale. Since 97% of submissions to PhishTank are verified as phishing URLs, it would be easy for an attacker to build up reputation by voting randomly many times, and then sprinkle in malicious votes protecting the attacker’s own phishing sites, for example. Since over half of the phishing sites in PhishTank are duplicate rock-phish domains, a savvy attacker could build reputation by voting for these sites without contributing to PhishTank otherwise.

So crowd-sourcing your security decisions can leave you exposed to manipulation. But how does PhishTank compare to the feeds maintained by specialist website take-down companies hired by the banks? Well, we compared PhishTank’s feed to a feed from one such company, and found the company’s feed to be slightly more complete and significantly faster in confirming phishing websites. This is because companies can afford to pay employees to verify submissions.

We also found that users who vote less often are more likely to vote incorrectly, and that users who commit many errors tend to have voted on the same URLs.

Despite these problems, we do not argue against leveraging user participation in the design of security mechanisms, nor do we believe that PhishTank should throw in the towel. Some improvements can be made by automating obvious categorization so that only the hard decisions are left to PhishTank’s users. In any case, we urge caution before turning over a security decision to a crowd.

Infosecurity Magazine has written a news article describing this work.

Covert channel vulnerabilities in anonymity systems

My PhD thesis — “Covert channel vulnerabilities in anonymity systems” — has now been published:

The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users’ privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way.

One application for anonymity systems is to prevent collusion in competitions. I show how covert channels may be exploited to violate these protections and construct defences against such attacks, drawing from previous covert channel research and collusion-resistant voting systems.

In the military context, for which multilevel secure systems were designed, covert channels are increasingly eliminated by physical separation of interconnected single-role computers. Prior work on the remaining network covert channels has been solely based on protocol specifications. I examine some protocol implementations and show how the use of several covert channels can be detected and how channels can be modified to resist detection.

I show how side channels (unintended information leakage) in anonymity networks may reveal the behaviour of users. While drawing on previous research on traffic analysis and covert channels, I avoid the traditional assumption of an omnipotent adversary. Rather, these attacks are feasible for an attacker with limited access to the network. The effectiveness of these techniques is demonstrated by experiments on a deployed anonymity network, Tor.

Finally, I introduce novel covert and side channels which exploit thermal effects. Changes in temperature can be remotely induced through CPU load and measured by their effects on crystal clock skew. Experiments show this to be an effective attack against Tor. This side channel may also be usable for geolocation and, as a covert channel, can cross supposedly infallible air-gap security boundaries.

This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems.

Steven J. Murdoch, Covert channel vulnerabilities in anonymity systems, Technical report UCAM-CL-TR-706, University of Cambridge, Computer Laboratory, December 2007.
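As a taste of the clock-skew estimation that underlies the thermal channel, here is a toy sketch on synthetic data: a remote clock's offset is regressed against local time, and the slope of that line is the skew, which temperature changes (induced by CPU load) shift. The skew value and noise model below are invented for illustration.

```python
# Toy sketch of the clock-skew estimation behind the thermal channel:
# a remote clock's offset is regressed against local time, and the slope
# is the skew. Heating the target's CPU changes its crystal frequency and
# hence this slope. All data below is synthetic and the numbers invented.
import random

random.seed(0)
TRUE_SKEW_PPM = 37.0      # hypothetical: the remote clock runs 37 ppm fast

def remote_timestamp(local_t: float) -> float:
    """Simulate a remote host's clock: skewed, plus measurement noise."""
    return local_t * (1 + TRUE_SKEW_PPM * 1e-6) + random.gauss(0, 1e-4)

# One hour of samples: (local time, observed offset of the remote clock).
samples = [(t, remote_timestamp(t) - t) for t in range(0, 3600, 10)]

# Least-squares fit of offset against local time; the slope is the skew.
n = len(samples)
mean_t = sum(t for t, _ in samples) / n
mean_o = sum(o for _, o in samples) / n
slope = (sum((t - mean_t) * (o - mean_o) for t, o in samples)
         / sum((t - mean_t) ** 2 for t, _ in samples))
print(f"Estimated skew: {slope * 1e6:.1f} ppm")   # close to 37 ppm
```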

Phishing take-down paper wins 'Best Paper Award' at APWG eCrime Researcher's Summit

Richard Clayton and I have been tracking phishing sites for some time. Back in May, we reported on how quickly phishing websites are removed. Subsequently, we have also compared the performance of banks in removing websites and found evidence that ISPs and registrars are initially slow to remove malicious websites.

We have published our updated results at eCrime 2007, sponsored by the Anti-Phishing Working Group. The paper, ‘Examining the Impact of Website Take-down on Phishing’ (slides here), was selected for the ‘Best Paper Award’.

A high-level abridged description of this work also appeared in the September issue of Infosecurity Magazine.

Latest on security economics

Tyler and I have a paper appearing tomorrow as a keynote talk at Crypto: Information Security Economics – and Beyond. This is a much extended version of our survey that appeared in Science in October 2006 and then at Softint in January 2007.

The new paper adds recent research in security economics and sets out a number of ideas about security psychology, into which the field is steadily expanding as economics and psychology become more intertwined. For example, many existing security mechanisms were designed by geeks for geeks; but if women find them harder to use, and as a result are more exposed to fraud, then could system vendors or operators be sued for unlawful sex discrimination?

There is also the small matter of the extent to which human intelligence evolved because people who were good at deceit, and at detecting deception in others, were likely to have more surviving offspring. Security and psychology might be more closely entwined than anyone ever thought.

Chip-and-PIN relay attack paper wins "Best Student Paper" at USENIX Security 2007

In May 2007, Saar Drimer and Steven Murdoch posted about “Distance bounding against smartcard relay attacks”. Today their paper won the “Best Student Paper” award at USENIX Security 2007 and their slides are now online. You can read more about this work on the Security Group’s banking security web page.

Steven and Saar at USENIX Security 2007

USENIX WOOT07, Exploiting Concurrency Vulnerabilities in System Call Wrappers, and the Evil Genius

I’ve spent the day at the First USENIX Workshop on Offensive Technologies (WOOT07) — an interesting new workshop on attack strategies and technologies. The workshop highlights the tension between the “white” and “black” hats in security research — you can’t design systems to avoid security problems if you don’t understand what they are. USENIX’s take on such a forum sits less far down the ethically questionable end of the spectrum than some other venues, but it certainly featured, in concrete detail, both new exploits for new vulnerabilities and techniques for evading current protections.

I presented “Exploiting Concurrency Vulnerabilities in System Call Wrappers”, a paper on compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessor systems, it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. I demonstrated sample exploit code against the Sysjail policy on Systrace, and IDwrappers on GSWTK, but the paper includes a more extensive discussion, including vulnerabilities in sudo’s Systrace monitor mode. You can read the paper and see the presentation slides here. All affected vendors received at least six months, and in some cases many years, of advance notice regarding these vulnerabilities.
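To illustrate the check-versus-use window in user space, here is a small Python analogue (not the exploit code from the paper): a “wrapper” checks a syscall argument held in shared memory, a concurrent attacker thread then rewrites it, and the “kernel” fetches the argument again before acting on it. The two events merely force the interleaving that, on a real system, the attacker wins by racing the wrapper on a multiprocessor.

```python
# User-space analogue of the check-versus-use race described above; this is
# not the exploit code from the paper. The "wrapper" checks a syscall
# argument held in shared memory, the "kernel" then fetches that argument
# again, and an attacker thread rewrites it in between. The two events force
# the losing interleaving that, on real systems, is won by racing the
# wrapper on a multiprocessor.
import threading

class SyscallArgs:
    """Stands in for syscall-argument memory shared with the kernel."""
    def __init__(self, path):
        self.path = path

checked = threading.Event()   # the wrapper has finished its check
swapped = threading.Event()   # the attacker has rewritten the argument

def wrapper_then_kernel(args):
    # Wrapper: inspect the argument, as an interposition wrapper would.
    if args.path.startswith("/forbidden"):
        return "denied by wrapper"
    checked.set()
    swapped.wait()            # the window between wrapper check and kernel copyin
    # Kernel: fetch the argument from user memory again and act on it.
    return f"kernel opened {args.path}"

def attacker(args):
    checked.wait()            # wait until the wrapper has approved the call
    args.path = "/forbidden/secret"
    swapped.set()

args = SyscallArgs("/tmp/innocuous")
t = threading.Thread(target=attacker, args=(args,))
t.start()
print(wrapper_then_kernel(args))   # "kernel opened /forbidden/secret"
t.join()
```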

The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless, of course, you’re willing to rewrite the OS to be message-passing. Systems like the TrustedBSD MAC Framework on FreeBSD and Mac OS X Leopard, Linux Security Modules (LSM), Apple’s (and now also NetBSD’s) kauth(9), and other tightly integrated kernel security frameworks offer specific solutions to these concurrency problems. There’s plenty more to be done in that area.

Concurrency issues have been discussed before in computer security, especially relating to races between applications accessing /tmp, unexpected signal interruption of socket operations, and races in distributed systems, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, correct synchronization and reasoning about high concurrency are critical to thinking about security correctly. As someone with strong interests in both OS parallelism and security, I find the parallels (no pun intended) obvious: in both cases, the details really matter, and it requires thinking about a proverbial Cartesian Evil Genius. Anyone who has done serious work with concurrent systems knows that they can seem actively malicious, which makes them a good match for the infamous malicious attacker in security research!

Other presentations included talks about Google’s Valgrind-based fuzzing tool Flayer, attacks on deployed SIP systems including AT&T’s product, Bluetooth sniffing with BlueSniff, and quantitative analyses of OS fingerprinting techniques. USENIX members will presumably be able to read the full set of papers online immediately; for others, check back in a year or visit the speakers’ personal web sites after you look at the WOOT07 Programme.