A Merry Christmas to all Bankers

The bankers’ trade association has written to Cambridge University asking for the MPhil thesis of one of our research students, Omar Choudary, to be taken offline. They complain it contains too much detail of our No-PIN attack on Chip-and-PIN and thus “breaches the boundary of responsible disclosure”; they also complain about Omar’s post on the subject to this blog.

Needless to say, we’re not very impressed by this, and I made this clear in my response to the bankers. (I am embarrassed to see I accidentally left Mike Bond off the list of authors of the No-PIN vulnerability. Sorry, Mike!) There is one piece of Christmas cheer, though: the No-PIN attack no longer works against Barclays’ cards at a Barclays merchant. So at least they’ve started to fix the bug – even if it’s taken them a year. We’ll check and report on other banks later.

The bankers also fret that “future research, which may potentially be more damaging, may also be published in this level of detail”. Indeed. Omar is one of my coauthors on a new Chip-and-PIN paper that’s been accepted for Financial Cryptography 2011. So here is our Christmas present to the bankers: it means you all have to come to this conference to hear what we have to say!

Financial Cryptography and Data Security 2011 — Call for Participation

Financial Cryptography and Data Security (FC 2011)
Bay Gardens Beach Resort, St. Lucia
February 28 — March 4, 2011

Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on commercial contexts. The conference covers all aspects of securing transactions and systems.

NB: Discounted hotel rate is available only until December 30, 2010

Topics include:

Anonymity and Privacy, Auctions and Audits, Authentication and Identification, Backup Authentication, Biometrics, Certification and Authorization, Cloud Computing Security, Commercial Cryptographic Applications, Transactions and Contracts, Data Outsourcing Security, Digital Cash and Payment Systems, Digital Incentive and Loyalty Systems, Digital Rights Management, Fraud Detection, Game Theoretic Approaches to Security, Identity Theft, Spam, Phishing and Social Engineering, Infrastructure Design, Legal and Regulatory Issues, Management and Operations, Microfinance and Micropayments, Mobile Internet Device Security, Monitoring, Reputation Systems, RFID-Based and Contactless Payment Systems, Risk Assessment and Management, Secure Banking and Financial Web Services, Securing Emerging Computational Paradigms, Security and Risk Perceptions and Judgments, Security Economics, Smartcards, Secure Tokens and Hardware, Trust Management, Underground-Market Economics, Usability, Virtual Economies, Voting Systems

Important Dates

Hotel room reduced rate cut-off: December 30, 2010
Reduced registration rate cut-off: January 21, 2011

Please send any questions to fc11general@ifca.ai


The Gawker hack: how a million passwords were lost

Almost a year to the day after the landmark RockYou password hack, we have seen another large password breach, this time of Gawker Media. While an order of magnitude smaller, it’s still probably the second largest public compromise of a website’s password file, and in many ways it’s a more interesting case than RockYou. The story quickly made it to the mainstream press, but the reported details are vague and often wrong. I’ve obtained a copy of the data (which remains generally available, though Gawker is attempting to block listing of the torrent files), so I’ll try to clarify the details of the leak and of Gawker’s password implementation (gleaned mostly from the readme file provided with the leaked data and from reverse engineering MySQL dumps). I’ll discuss the actual password dataset in a future post.
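One implementation detail worth flagging here: Gawker’s hashes were reportedly produced with the traditional DES-based crypt(3), which considers only the first eight characters of a password. The sketch below emulates just that truncation property — it uses hashlib as a stand-in and is not the real crypt algorithm:

```python
import hashlib

def des_crypt_like(password, salt):
    # Emulation only: like traditional DES-based crypt(3), ignore
    # everything beyond the first eight characters before hashing.
    truncated = password[:8]
    return hashlib.sha256((salt + truncated).encode()).hexdigest()

# Any two passwords sharing their first eight characters collide,
# so "password1" and "password123" are interchangeable at login.
assert des_crypt_like("password1", "xy") == des_crypt_like("password123", "xy")
assert des_crypt_like("password1", "xy") != des_crypt_like("letmein1", "xy")
```

The practical consequence is that an attacker cracking such a file never needs to guess candidates longer than eight characters.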

Wikileaks, security research and policy

A number of media organisations have been asking us about Wikileaks. Fifteen years ago we kicked off the study of censorship-resistant systems, which inspired the peer-to-peer movement; we help maintain Tor, which provides the anonymous communications infrastructure for Wikileaks; and we have a longstanding interest in information policy.

I have written before about governments’ love of building large databases of sensitive data to which hundreds of thousands of people need access to do their jobs – such as the NHS spine, which will give over 800,000 people access to our health records. The media are now making the link. Whether sensitive data are about health or about diplomacy, the only way forward is compartmentation. Medical records should be kept in the surgery or hospital where the care is given; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to stuff on Korea or Brazil.
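The compartmentation principle reduces to a simple check: every record carries a compartment label, and a user may read it only if that label is among the compartments they are cleared for. A minimal sketch (the analyst profile and labels are hypothetical):

```python
def can_read(user_compartments, record_label):
    """Return True only if the user holds clearance for the record's compartment."""
    return record_label in user_compartments

# A hypothetical analyst dealing with the Middle East...
analyst = {"IRAQ", "IRAN", "SAUDI_ARABIA"}

# ...can read cables in those compartments, but has no routine
# access to material on Korea or Brazil.
assert can_read(analyst, "IRAQ")
assert not can_read(analyst, "KOREA")
assert not can_read(analyst, "BRAZIL")
```

The point is that a leak then exposes only the compartments one insider could reach, not the whole database.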

So much for the security engineering; now to policy. No-one questions the US government’s right to try one of its soldiers for leaking the cables, or the right of the press to publish them now that they’re leaked. But why is Wikileaks treated as the leaker, rather than as a publisher?

This leads me to two related questions. First, does a next-generation censorship-resistant system need a more resilient technical platform, or more respectable institutions? And second, if technological change causes respectable old-media organisations such as the Guardian and the New York Times to go bust and be replaced by blogs, what happens to freedom of the press, and indeed to freedom of speech?

Resumption of the crypto wars?

The Telegraph and Guardian reported yesterday that the government plans to install deep packet inspection kit at ISPs, a move considered and then apparently rejected by the previous government (our Database State report last year found their Interception Modernisation Programme to be almost certainly illegal). An article in the New York Times on comparable FBI/NSA proposals makes you wonder whether policy is being coordinated between Britain and America.

In each case, the police and spooks argue that they used to have easy access to traffic data — records of who called whom and when — so now that people communicate using Facebook, Gmail and Second Life rather than phones, they should be allowed to harvest data about who wrote on your wall, what emails appeared in your Gmail inbox, and who stood next to you in Second Life. This data will be collected on everybody and will be available to investigators who want to map suspects’ social networks. A lot of people opposed this, including the Lib Dems, who promised to “end the storage of internet and email records without good reason” and wrote this into the Coalition Agreement. The Coalition now seems set to reinterpret this while the media are distracted by the spending review.

We were round this track before with the debate over key escrow in the 1990s. Back then, colleagues and I wrote of the risks and costs of insisting that communications services be wiretap-ready. One lesson from the period was that the agencies clung to their old business model rather than embracing all the new opportunities; they tried to remain Bletchley Park in the age of Google. Yet GCHQ people I’ve heard recently are still stuck in the pre-computer age, having learned nothing and forgotten nothing. As for the police, they can’t really cope with the forensics for the PCs, phones and other devices that fall into their hands anyway. This doesn’t bode well, either for civil liberties or for national security.

The Smart Card Detective: a hand-held EMV interceptor

During my MPhil in the Computer Lab (supervised by Markus Kuhn) I developed a card-sized device (named the Smart Card Detective, or SCD for short) that can monitor Chip and PIN transactions. The main goal of the SCD was to offer a trusted display for anyone using credit cards, to avoid scams such as tampered terminals which show one amount on their screen but debit the card for another (see this paper by Saar Drimer and Steven Murdoch). However, the final result is a more general device, which can be used to analyse and modify any part of an EMV transaction (EMV being the protocol used by Chip and PIN cards).

Using the SCD we have successfully shown how the relay attack can be mitigated by showing the real transaction amount on the trusted display. Moreover, we have tested the No-PIN vulnerability (see the paper by Murdoch et al.) with the SCD. A report on this was broadcast on Canal+ (video now available here).
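At the protocol level the No-PIN attack is simple: the man-in-the-middle answers the terminal’s VERIFY (PIN check) command itself with the success status word, without ever forwarding it to the card. A minimal sketch of that interception logic (APDU handling is simplified; a real device such as the SCD must of course handle the full EMV exchange):

```python
VERIFY_INS = 0x20          # INS byte of the ISO 7816-4 / EMV VERIFY command
SW_SUCCESS = b"\x90\x00"   # status word meaning "PIN verified OK"

def relay(apdu, forward_to_card):
    """Relay terminal APDUs to the card, except VERIFY, which is spoofed."""
    ins = apdu[1]
    if ins == VERIFY_INS:
        # Never forwarded: the card sees no PIN attempt, yet the
        # terminal believes PIN verification succeeded.
        return SW_SUCCESS
    return forward_to_card(apdu)

# A hypothetical card that would reject this (wrong) PIN:
card = lambda apdu: b"\x63\xc2"  # 0x63Cx = wrong PIN, 2 tries left
verify_apdu = bytes([0x00, 0x20, 0x00, 0x80, 0x08]) + b"\x24\x12\x34\xff" * 2
assert relay(verify_apdu, card) == SW_SUCCESS
```

All other commands pass through unmodified, so the card completes the transaction believing it was authorised by signature.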

After the “Chip and PIN is broken” paper was published, some counter-arguments pointed to the difficulty of setting up the attack. The SCD also shows that such assumptions are often incorrect.

More details on the SCD are in my MPhil thesis, available here. Importantly, the software is open source and, together with the hardware schematics, can be found on the project’s page. The aim is to make the SCD a useful tool for EMV research, so that other problems can be found and fixed.

Thanks to Saar Drimer, Mike Bond, Steven Murdoch and Sergei Skorobogatov for their help with this project. Thanks also to Frank Stajano and Ross Anderson for their suggestions.

Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data from the report’s body show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: now a quarter of patients don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t get resolved any time soon – perhaps until there’s an Alder-Hey-type incident that causes a public outcry and forces a reform of SUS.

Capsicum: practical capabilities for UNIX

Today, Jonathan Anderson, Ben Laurie, Kris Kennaway, and I presented Capsicum: practical capabilities for UNIX at the 19th USENIX Security Symposium in Washington, DC; the slides can be found on the Capsicum web site. We argue that capability design principles fill a gap left by discretionary access control (DAC) and mandatory access control (MAC) in operating systems when supporting security-critical and security-aware applications.

Capsicum responds to the trend of application compartmentalisation (sometimes called privilege separation) by providing strong and well-defined isolation primitives, and by facilitating rights delegation driven by the application (and eventually, the user). These facilities prove invaluable not just for traditional security-critical programs such as tcpdump and OpenSSH, but also for complex security-aware applications that map distributed security policies into local primitives, such as Google’s Chromium web browser, which implements the same-origin policy when sandboxing JavaScript execution.

Capsicum extends POSIX with a new capability mode for processes and a capability file-descriptor type, as well as supporting primitives such as process descriptors. Capability mode denies access to global operating system namespaces, such as the file system and IPC namespaces: only delegated rights (typically via file descriptors or more refined capabilities) are available to sandboxes. We prototyped Capsicum on FreeBSD 9.x, and have extended a variety of applications, including Google’s Chromium web browser, to use Capsicum for sandboxing. Our paper discusses design trade-offs, both in Capsicum and in applications, as well as a performance analysis. Capsicum is available under a BSD license.
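The capability-mode semantics can be illustrated conceptually — the following is a Python emulation of the idea, not the Capsicum C API: once a sandbox has entered capability mode, it can no longer name global resources such as file paths, and can operate only on descriptors delegated to it beforehand.

```python
import os
import tempfile

class Sandbox:
    """Conceptual emulation of Capsicum-style capability mode: after
    enter_cap_mode(), the only usable resources are descriptors that
    were delegated before entry."""
    def __init__(self):
        self.cap_mode = False
        self.delegated = []

    def delegate(self, fileobj):
        # Rights delegation: hand an already-open descriptor to the sandbox.
        self.delegated.append(fileobj)

    def enter_cap_mode(self):
        self.cap_mode = True

    def open_path(self, path):
        # Open-by-path uses a global namespace, so it is denied in
        # capability mode (mirroring ECAPMODE in Capsicum).
        if self.cap_mode:
            raise PermissionError("ECAPMODE: no access to global namespaces")
        return open(path)

# Usage: delegate a descriptor, then enter capability mode.
with tempfile.NamedTemporaryFile("w+", delete=False) as f:
    f.write("delegated data")
    path = f.name

sb = Sandbox()
sb.delegate(open(path))
sb.enter_cap_mode()

try:
    sb.open_path(path)  # denied: path lookup needs the global namespace
    raise AssertionError("open-by-path should have been refused")
except PermissionError:
    pass
assert sb.delegated[0].read() == "delegated data"  # delegated descriptor still works
os.unlink(path)
```

In real Capsicum the same pattern appears as opening resources and calling cap_enter() before processing untrusted input, so a compromised sandbox can reach only what was deliberately handed to it.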

Capsicum is collaborative research between the University of Cambridge and Google; it has been sponsored by Google and will be a foundation for future work on application security, sandboxing, and security usability at Cambridge and Google. Capsicum has also been backported to FreeBSD 8.x, and Heradon Douglas at Google has an in-progress port to Linux.

We’re also pleased to report that the Capsicum paper won the Best Student Paper award at the conference!

Passwords in the wild, part IV: the future

This is the fourth and final part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

Given the problems with passwords on the web outlined over the past few days, academics have for years searched for new technology to replace passwords. This thinking can at times be counter-productive: no silver bullets have yet materialised, and the search has distracted attention from fixing the most pressing problems with passwords as deployed. Currently, the trendiest proposed solution is to use federated identity protocols to greatly reduce the number of websites which must collect passwords (which, as we’ve argued, would be a very positive step). Much attention has been given to OpenID, yet it is still struggling to gain widespread adoption: it was deployed at less than 3% of the websites we observed, with only Mixx and LiveJournal giving it much prominence.

Nevertheless, we optimistically feel that real changes will happen in the next few years, as password authentication on the web seems to be becoming increasingly unsustainable due to the increasing scale and interconnectivity of websites collecting passwords. We actually think we are already in the early stages of a password revolution, just not of the type predicted by academia.
