
Latest on security economics

Tyler and I have a paper appearing tomorrow as a keynote talk at Crypto: Information Security Economics – and Beyond. This is a much extended version of our survey that appeared in Science in October 2006 and then at Softint in January 2007.

The new paper adds recent research in security economics and sets out a number of ideas about security psychology, into which the field is steadily expanding as economics and psychology become more intertwined. For example, many existing security mechanisms were designed by geeks for geeks; but if women find them harder to use, and as a result are more exposed to fraud, then could system vendors or operators be sued for unlawful sex discrimination?

There is also the small matter of the extent to which human intelligence evolved because people who were good at deceit, and at detecting deception in others, were likely to have more surviving offspring. Security and psychology might be more closely entwined than anyone ever thought.

Chip-and-PIN relay attack paper wins "Best Student Paper" at USENIX Security 2007

In May 2007, Saar Drimer and Steven Murdoch posted about “Distance bounding against smartcard relay attacks”. Today their paper won the “Best Student Paper” award at USENIX Security 2007 and their slides are now online. You can read more about this work on the Security Group’s banking security web page.

Steven and Saar at USENIX Security 2007

USENIX WOOT07, Exploiting Concurrency Vulnerabilities in System Call Wrappers, and the Evil Genius

I’ve spent the day at the First USENIX Workshop on Offensive Technologies (WOOT07), an interesting new workshop on attack strategies and technologies. The workshop highlights the tension between the “white” and “black” hats in security research: you can’t design systems to avoid security problems if you don’t understand what they are. USENIX’s take on such a forum sits less far down the questionable end of the ethical spectrum than some other venues, but the workshop certainly presented, in concrete detail, both new exploits for new vulnerabilities and techniques for evading current protections.

I presented “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight is that the historic assumption of “atomicity” of system calls is fallacious: on both uniprocessor and multiprocessor systems, it is trivial to construct a race between a system call wrapper and a malicious user process that bypasses the protections. I demonstrated sample exploit code against the Sysjail policy on Systrace and IDwrappers on GSWTK, but the paper includes a more extensive discussion, including vulnerabilities in sudo’s Systrace monitor mode. You can read the paper and see the presentation slides here. All affected vendors received at least six months’, and in some cases many years’, advance notice of these vulnerabilities.
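For readers who would rather see the shape of the race than read the paper, here is a toy userspace simulation of the double-fetch problem, in C. It is not the exploit code from the paper, and all the names are invented: one thread plays the malicious process, rewriting the “system call argument” in shared memory, while a stand-in wrapper checks the value and the “kernel” later uses whatever the buffer holds by then.

    /* Toy simulation of the wrapper TOCTTOU race: the wrapper checks a
     * copy of the argument, but the "kernel" re-fetches the live buffer,
     * and another thread can rewrite it in between.  Illustrative only. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static char path[64] = "/tmp/harmless";      /* shared "user memory" */

    static void *attacker(void *arg)
    {
        (void)arg;
        for (;;) {                    /* keep flipping the argument */
            strcpy(path, "/etc/forbidden");
            strcpy(path, "/tmp/harmless");
        }
        return NULL;
    }

    /* Stand-in for a wrapper such as Systrace: copy the argument out of
     * "user memory", check it, then let the "kernel" use the live buffer. */
    static void wrapped_open(void)
    {
        char checked[64];
        strcpy(checked, path);        /* first fetch: the policy check */
        if (strcmp(checked, "/tmp/harmless") != 0)
            return;                   /* policy says no */
        usleep(1);                    /* window between wrapper and kernel */
        /* second fetch: the kernel acts on whatever the buffer holds NOW */
        if (strcmp(path, checked) != 0)
            printf("raced: checked %s, kernel saw %s\n", checked, path);
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, attacker, NULL);
        for (int i = 0; i < 100000; i++)
            wrapped_open();
        return 0;                     /* attacker thread dies with us */
    }

Compile with -pthread; on a multicore machine the “raced” message appears almost immediately, which is exactly the point about multiprocessors above.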

The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless, of course, you’re willing to rewrite the OS to be message-passing. Systems like the TrustedBSD MAC Framework on FreeBSD and Mac OS X Leopard, Linux Security Modules (LSM), Apple’s (and now also NetBSD’s) kauth(9), and other tightly integrated kernel security frameworks offer specific solutions to these concurrency problems. There’s plenty more to be done in that area.

Concurrency issues have been discussed before in computer security, especially races between applications accessing /tmp, unexpected signal interruption of socket operations, and races in distributed systems, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, correct synchronization and careful reasoning about concurrency are critical to thinking about security correctly. To someone with strong interests in both OS parallelism and security, the parallels (no pun intended) seem obvious: in both cases the details really matter, and both require thinking about a proverbial Cartesian Evil Genius. Anyone who has done serious work with concurrent systems knows that they behave as if actively malicious, which makes them a good stand-in for the infamous malicious attacker of security research!
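For contrast with the kernel case, the /tmp race mentioned above can be sketched in a few lines; the file name is illustrative. The vulnerable pattern checks first and uses later, leaving a window for an attacker to swap in a symlink; the safer pattern makes a single open() call carry the whole policy atomically.

    /* Classic check-then-use race on /tmp, and the atomic alternative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* VULNERABLE: the path can change between the two calls. */
        if (access("/tmp/report.tmp", W_OK) == 0) {
            int fd = open("/tmp/report.tmp", O_WRONLY); /* may now be a symlink */
            if (fd >= 0)
                close(fd);
        }

        /* SAFER: refuse symlinks and pre-existing files in one atomic call. */
        int fd = open("/tmp/report.tmp",
                      O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
        if (fd < 0)
            perror("open");
        else
            close(fd);
        return 0;
    }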

Other presentations included talks on Flayer, Google’s Valgrind-based software fuzzing tool; attacks on deployed SIP systems, including AT&T’s product; Bluetooth sniffing with BlueSniff; and quantitative analyses of OS fingerprinting techniques. USENIX members will presumably be able to read the full set of papers online immediately; for others, check back in a year or visit the speakers’ personal web sites after you look at the WOOT07 Programme.

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor traffic both entering and leaving the network. This weakness is well known to the designers, and there is currently no known practical way to resist such attacks while maintaining the low latency demanded by applications such as web browsing. For this reason, it seems intuitively clear that when selecting a path through the Tor network, it would be beneficial to pick nodes in different countries. The hope is that government-level adversaries will find it hard to track cross-border connections, since mutual legal assistance is slow, if it works at all, and that the influence of non-government adversaries will likewise drop off at national boundaries.

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help, and could perhaps even harm, security. The PET Award-nominated paper “Location Diversity in Anonymity Networks”, by Nick Feamster and Roger Dingledine, showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, a large proportion of connections through an anonymity network could be traced.

The results of Feamster and Dingledine’s paper suggest that it may be better to bounce anonymity traffic around within a single country, because it is less likely that one ISP will be able to monitor incoming and outgoing traffic for several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. In reality, inter-ISP traffic (especially in Europe) often travels through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.
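To give a flavour of why the IX vantage point matters, here is a toy coincidence count in C. The timestamps, latency window, and scoring rule are all invented for illustration; the paper’s actual analysis is statistical and handles sampling properly. The idea is just that an observer seeing a sample of packets on two links can test whether they carry the same stream by counting how many inbound samples are followed, within a plausible forwarding delay, by an outbound one.

    /* Toy flow matching by timing coincidences; all data are invented. */
    #include <stdio.h>

    #define N_IN  5
    #define N_OUT 6

    int main(void)
    {
        /* Sampled packet timestamps (seconds) on the inbound link...   */
        double in_ts[N_IN]   = {0.10, 0.42, 0.97, 1.35, 1.80};
        /* ...and on one candidate outbound flow, shifted by ~50 ms.    */
        double out_ts[N_OUT] = {0.15, 0.47, 0.70, 1.02, 1.40, 1.85};

        double dmin = 0.02, dmax = 0.10;   /* assumed forwarding latency */
        int hits = 0;

        for (int i = 0; i < N_IN; i++)
            for (int j = 0; j < N_OUT; j++) {
                double d = out_ts[j] - in_ts[i];
                if (d >= dmin && d <= dmax) {
                    hits++;
                    break;
                }
            }

        /* A high coincidence rate suggests the two flows are one stream. */
        printf("coincidences: %d of %d sampled packets\n", hits, N_IN);
        return 0;
    }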


Distance bounding against smartcard relay attacks

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. Basically, the tamper resistance protects the banks but not the customers, who are left to trust every device they hand their card and PIN to (and the hundreds of different types of terminal do not help here). The problem some customers face is that when fraud happens, they are the ones blamed for negligence, instead of the banks owning up to a faulty system. Exacerbating the problem, customers cannot prove they have not been negligent with their secrets without data that the banks hold but refuse to hand over.
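As a rough illustration of the defence named in the title, here is a minimal sketch of the distance-bounding timing check, in C. The “card” is just a local function, the response function and the bound are invented, and a software clock like this is far too coarse for real distance bounding, which is why practical implementations work at the hardware level; the point is only that a relay shows up as round trips exceeding the bound.

    /* Sketch of a distance-bounding verifier: time rapid challenge-
     * response bit exchanges and abort if any round trip is too slow. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ROUNDS     64
    #define MAX_RTT_NS 4000000LL       /* arbitrary bound for the demo */

    static int secret_bits[ROUNDS];    /* shared card/terminal secret  */

    static int card_respond(int round, int challenge)
    {
        /* Toy response: mix the challenge bit with the secret bit. */
        return challenge ^ secret_bits[round];
    }

    static long long now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        for (int i = 0; i < ROUNDS; i++)
            secret_bits[i] = rand() & 1;

        for (int i = 0; i < ROUNDS; i++) {
            int c = rand() & 1;
            long long t0 = now_ns();
            int r = card_respond(i, c);   /* a relay would add delay here */
            long long rtt = now_ns() - t0;

            if (r != (c ^ secret_bits[i]) || rtt > MAX_RTT_NS) {
                printf("round %d failed: rtt=%lld ns\n", i, rtt);
                return 1;                 /* abort: possible relay */
            }
        }
        printf("all %d rounds within bound: card accepted\n", ROUNDS);
        return 0;
    }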


How (not) to write an abstract

Having just finished another pile of conference-paper reviews, it strikes me that the single most common stylistic problem with papers in our field is the abstract.

Disappointingly few Computer Science authors seem to understand the difference between an abstract and an introduction. Far too many abstracts are useless because they read just like the first paragraphs of the “Introduction” section; the separation between the two would not be obvious if there were no change in font or a heading in between.

The two serve completely different purposes:

Abstracts are concise summaries for experts. Write your abstract for readers who are familiar with >50% of the references in your bibliography, who will soon have read at least the abstracts of the rest, and who are quite likely to quote your work in their own next paper. In your abstract, implicitly answer the experts’ questions, such as “What’s new here?” and “What was actually achieved?”. Squeeze as many technical details about what you actually did as you can into about 250 words (or whatever your publisher specifies). Include details of any experimental setup and results. Make sure all the crucial keywords that describe your work appear in either the title or the abstract.

Introductions are for a wider audience. Think of your reader as a first-year graduate student who is not yet an expert in your field but is interested in becoming one. An introduction should answer questions like “Why is the general topic of your work interesting?”, “What do you ultimately want to achieve?”, “What are the most important recent related developments?”, and “What inspired your work?”. None of this belongs in an abstract, because experts will know the answers already.

Abstract and introduction are alternative paths into your paper. You may also think of the abstract as a kind of entrance test: a reader who fully understands your abstract is likely to be an expert and should therefore be able to skip at least the first section of the paper. A reader who does not understand something in the abstract should focus on the introduction, which gently introduces and points to all the background knowledge needed to get started.

Award winners

Congratulations to Steven J. Murdoch and George Danezis who were recently awarded the Computer Laboratory Lab Ring (the local alumni association) award for the “most notable publication” (that’s notable as in jolly good) for the past year, written by anyone in the whole lab.

Their paper, “Low cost traffic analysis of Tor”, was presented at the 2005 IEEE Symposium on Security and Privacy (Oakland 2005). It demonstrates a feasible attack, within the designer’s threat model, on the anonymity provided by Tor, the second generation onion routing system.

George was recently back in Cambridge for a couple of days (he’s currently a post-doc visiting fellow at the Katholieke Universiteit Leuven) so we took a photo to commemorate the event (see below). As it happens, Steven will be leaving us for a while as well, to work as an intern at Microsoft Research for a few months… one is reminded of the old joke about the Scotsman coming south of the border and thereby increasing the average intelligence of both countries 🙂

George Danezis and Steven J. Murdoch, most notable publication 2006