Category Archives: Academic papers

JPEG canaries: exposing on-the-fly recompression

Many photo-sharing websites decompress and recompress uploaded images to enforce particular compression parameters, degrading quality in the process. Some web proxies also recompress images and videos to give the impression of a faster connection.

In Towards copy-evident JPEG images (with Markus Kuhn, in Lecture Notes in Informatics), we present an algorithm for imperceptibly marking JPEG images so that the recompressed copies show a clearly-visible warning message. (Full page demonstration.)

Original image:

[image: the marked original]

After recompression:

[image: the image after recompression, with a visible warning message]

(If you can’t see the message in the recompressed image, make sure your browser is rendering the images without scaling or filtering.)

Richard Clayton originally suggested the idea of trying to create an image which would show a warning when viewed via a recompressing proxy server. Here is a real-world demonstration using the Google WAP proxy.

Our marking technique is inspired by physical security printing, used to produce documents such as banknotes, tickets, academic transcripts and cheques. Photocopied versions will display a warning (e.g. ‘VOID’) or contain obvious distortions, as duplication turns imperceptible high-frequency patterns into more noticeable low-frequency signals.

Our algorithm works by adding a high-frequency pattern to the image with an amplitude carefully selected to cause maximum quantization error on recompression at a chosen target JPEG quality factor. The amplitude is modulated with a covert warning message, so that foreground message blocks experience maximum quantization error in the opposite direction to background message blocks. While the message is invisible in the marked original image, it becomes visible due to clipping in a recompressed copy.
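
To make the mechanism concrete, here is a toy numerical sketch (not code from the paper; the step size and offset are illustrative values). JPEG requantizes each DCT coefficient to the nearest multiple of a quantizer step, so coefficients placed just either side of a rounding midpoint are pushed in opposite directions:

```c
/* Toy model of the marking trick: coefficients placed just either side of a
 * rounding midpoint of the anticipated quantizer step are requantized in
 * opposite directions. Values are illustrative, not from the paper. */
#include <math.h>
#include <stdio.h>

/* JPEG-style requantization: round to the nearest multiple of step q. */
static double requantize(double coeff, double q)
{
    return round(coeff / q) * q;
}

int main(void)
{
    const double q2  = 20.0; /* assumed quantizer step at the target quality */
    const double eps = 0.5;  /* small offset around the midpoint q2/2 */

    double background = q2 / 2.0 - eps; /* 9.5  -> rounds down to 0  */
    double foreground = q2 / 2.0 + eps; /* 10.5 -> rounds up to 20   */

    printf("background: %4.1f -> %4.1f\n", background, requantize(background, q2));
    printf("foreground: %4.1f -> %4.1f\n", foreground, requantize(foreground, q2));

    /* After recompression the two block classes differ by a full step q2 in a
     * high-frequency coefficient; clipping in the decoded image turns this
     * into a visible low-frequency message. */
    return 0;
}
```

Compile with -lm. In the real scheme the offset is additionally modulated by the covert message mask, so foreground and background blocks take opposite signs.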

The challenge remains to extend our approach to mark video data, where rate control and adaptive quantization make the recompressed copy’s properties less predictable. The result would be digital video that is severely degraded by recompression to a lower quality, making the algorithm useful for digital content protection.

A Merry Christmas to all Bankers

The bankers’ trade association has written to Cambridge University asking for the MPhil thesis of one of our research students, Omar Choudary, to be taken offline. They complain it contains too much detail of our No-PIN attack on Chip-and-PIN and thus “breaches the boundary of responsible disclosure”; they also complain about Omar’s post on the subject to this blog.

Needless to say, we’re not very impressed by this, and I made this clear in my response to the bankers. (I am embarrassed to see I accidentally left Mike Bond off the list of authors of the No-PIN vulnerability. Sorry, Mike!) There is one piece of Christmas cheer, though: the No-PIN attack no longer works against Barclays’ cards at a Barclays merchant. So at least they’ve started to fix the bug – even if it’s taken them a year. We’ll check and report on other banks later.

The bankers also fret that “future research, which may potentially be more damaging, may also be published in this level of detail”. Indeed. Omar is one of my coauthors on a new Chip-and-PIN paper that’s been accepted for Financial Cryptography 2011. So here is our Christmas present to the bankers: it means you all have to come to this conference to hear what we have to say!

Financial Cryptography and Data Security 2011 — Call for Participation

Financial Cryptography and Data Security (FC 2011)
Bay Gardens Beach Resort, St. Lucia
February 28 — March 4, 2011

Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on commercial contexts. The conference covers all aspects of securing transactions and systems.

NB: Discounted hotel rate is available only until December 30, 2010

Topics include:

Anonymity and Privacy, Auctions and Audits, Authentication and Identification, Backup Authentication, Biometrics, Certification and Authorization, Cloud Computing Security, Commercial Cryptographic Applications, Transactions and Contracts, Data Outsourcing Security, Digital Cash and Payment Systems, Digital Incentive and Loyalty Systems, Digital Rights Management, Fraud Detection, Game Theoretic Approaches to Security, Identity Theft, Spam, Phishing and Social Engineering, Infrastructure Design, Legal and Regulatory Issues, Management and Operations, Microfinance and Micropayments, Mobile Internet Device Security, Monitoring, Reputation Systems, RFID-Based and Contactless Payment Systems, Risk Assessment and Management, Secure Banking and Financial Web Services, Securing Emerging Computational Paradigms, Security and Risk Perceptions and Judgments, Security Economics, Smartcards, Secure Tokens and Hardware, Trust Management, Underground-Market Economics, Usability, Virtual Economies, Voting Systems

Important Dates

Hotel room reduced rate cut-off: December 30, 2010
Reduced registration rate cut-off: January 21, 2011

Please send any questions to fc11general@ifca.ai


Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data from the report’s body show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: now a quarter of patients don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t get resolved any time soon – perhaps until there’s an Alder-Hey-type incident that causes a public outcry and forces a reform of SUS.

Capsicum: practical capabilities for UNIX

Today, Jonathan Anderson, Ben Laurie, Kris Kennaway, and I presented Capsicum: practical capabilities for UNIX at the 19th USENIX Security Symposium in Washington, DC; the slides can be found on the Capsicum web site. We argue that capability design principles fill a gap left by discretionary access control (DAC) and mandatory access control (MAC) in operating systems when supporting security-critical and security-aware applications.

Capsicum responds to the trend of application compartmentalisation (sometimes called privilege separation) by providing strong and well-defined isolation primitives, and by facilitating rights delegation driven by the application (and eventually, the user). These facilities prove invaluable not just for traditional security-critical programs such as tcpdump and OpenSSH, but also for complex security-aware applications that map distributed security policies into local primitives, such as Google’s Chromium web browser, which implements the same-origin policy when sandboxing JavaScript execution.

Capsicum extends POSIX with a new capability mode for processes and a capability file descriptor type, as well as supporting primitives such as process descriptors. Capability mode denies access to global operating system namespaces, such as the file system and IPC namespaces: only delegated rights (typically held via file descriptors or more refined capabilities) are available to sandboxed processes. We prototyped Capsicum on FreeBSD 9.x and have extended a variety of applications, including Google’s Chromium web browser, to use Capsicum for sandboxing. Our paper discusses design trade-offs, both in Capsicum and in applications, and includes a performance analysis. Capsicum is available under a BSD license.
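
For a flavour of how these primitives fit together, here is a minimal sketch (not code from the paper) using the Capsicum calls as they later appeared in FreeBSD, cap_rights_limit() and cap_enter(); the prototype described in the paper spelled some of these slightly differently, and the file path is just an example:

```c
/* Sketch of Capsicum-style sandboxing; treat details as illustrative. */
#include <sys/capsicum.h> /* FreeBSD 10+; the prototype used another header */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Acquire resources while we still hold ambient authority. */
    int fd = open("/etc/motd", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Refine the descriptor down to read-and-seek rights only. */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ, CAP_SEEK);
    if (cap_rights_limit(fd, &rights) < 0) { perror("cap_rights_limit"); return 1; }

    /* Enter capability mode: global namespaces become unreachable, so a
     * fresh open("/etc/passwd", ...) would now fail with ECAPMODE. */
    if (cap_enter() < 0) { perror("cap_enter"); return 1; }

    /* Only delegated rights -- here, the refined fd -- remain usable. */
    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}
```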

Capsicum is collaborative research between the University of Cambridge and Google, sponsored by Google, and will be a foundation for future work on application security, sandboxing, and security usability at both Cambridge and Google. Capsicum has also been backported to FreeBSD 8.x, and Heradon Douglas at Google has an in-progress port to Linux.

We’re also pleased to report that the Capsicum paper won the Best Student Paper award at the conference!

Passwords in the wild, part IV: the future

This is the fourth and final part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

Given the problems with passwords on the web outlined over the past few days, academics have spent years searching for new technology to replace them. This thinking can at times be counter-productive: no silver bullet has yet materialised, and the search has distracted attention from fixing the most pressing problems with passwords. Currently, the trendiest proposed solution is to use federated identity protocols to greatly reduce the number of websites which must collect passwords (which, as we’ve argued, would be a very positive step). Much focus has been given to OpenID, yet it is still struggling to gain widespread adoption: it was deployed at less than 3% of the websites we observed, with only Mixx and LiveJournal giving it much prominence.

Nevertheless, we optimistically feel that real changes will happen in the next few years, as password authentication on the web looks increasingly unsustainable given the growing scale and interconnectivity of the websites collecting passwords. We actually think we are already in the early stages of a password revolution, just not of the type predicted by academia.


Passwords in the wild, part III: password standards for the Web

This is the third part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

In our analysis of 150 password deployments online, we observed a surprising diversity of implementation choices. Whilst sites can be ranked by the overall security of their password scheme, there is a vast middle group in which sites make seemingly incongruous security decisions. We also found almost no commonality between implementations: examining the details of Web forms (variable names, etc.) and the format of automated emails, we found little evidence that sites re-use a common code base. This lack of consistency in technical choices suggests that standards and guidelines could improve security.

Numerous RFCs concern themselves with one-time passwords and other relatively sophisticated authentication protocols. Yet traditional password-based authentication remains the most prevalent authentication protocol on the Internet, as the International Telecommunication Union (itself a United Nations specialized agency for standardising telecommunications worldwide) observes in ITU-T Recommendation X.1151, “Guideline on secure password-based authentication protocol with key exchange”. Client PKI has not seen widespread adoption, and tokens or smart cards are too costly or inconvenient for most websites. While passwords have many shortcomings, it is essential to deploy them as carefully and securely as possible, and formal standards and best-practice guidelines are needed to help developers do so.


Passwords in the wild, part II: failures in the market

This is the second part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

As we discussed yesterday, dubious practices abound within real sites’ password implementations. Password insecurity isn’t only due to random implementation mistakes, though. When we scored sites’ password implementations on a 10-point aggregate scale, it became clear that a wide spectrum of implementation quality exists. Many web authentication giants (Amazon, eBay, Facebook, Google, LiveJournal, Microsoft, MySpace, Yahoo!) scored near the top, joined by a few unlikely standouts (IKEA, CNBC). At the opposite end were a slew of lesser-known merchants and news websites. Exploring the factors which lead to better security confirms the basic tenets of security economics: sites with more at stake tend to do better. However, doing better isn’t enough. Given users’ well-documented tendency to re-use passwords, the varying levels of security may represent a serious market failure which is undermining the security of password-based authentication.


Passwords in the wild, part I: the gap between theory and implementation

Sören Preibusch and I have finalised our in-depth report on password practices in the wild, The password thicket: technical and market failures in human authentication on the web, presented in Boston last month for WEIS 2010. The motivation for our report was a lack of technical research into real password deployments. Passwords have been studied as an authentication mechanism quite intensively for the last 30 years, but we believe ours was the first large study into how Internet sites actually implement them. We studied 150 sites, including the most visited overall sites plus a random sample of mid-level sites. We signed up for free accounts with each site, and using a mixture of scripting and patience, captured all visible aspects of password deployment, from enrolment and login to reset and attacks.

Our data (which is now publicly available) gives us an interesting picture of the current state of password deployment. Because the dataset is huge and the paper is quite lengthy, we’ll be discussing our findings and their implications from a series of different perspectives. Today, we’ll focus on the preventable mistakes. In the academic literature, it’s assumed that passwords will be encrypted during transmission, hashed before storage, and that attempts to guess usernames or passwords will be throttled. None of these is widely true in practice.
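
As a minimal sketch of the “hashed before storage” baseline (illustrative only: a deliberately slow function such as bcrypt or scrypt is preferable to a plain hash, and this does nothing about transmission or throttling), a site might store a per-user salted digest like this:

```c
/* Minimal sketch of salted password hashing, the storage baseline discussed
 * above. Uses OpenSSL's SHA-256 purely for illustration; a real site should
 * prefer a deliberately slow scheme such as bcrypt or scrypt, plus TLS for
 * transmission and rate-limiting of guesses. */
#include <openssl/rand.h>
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char salt[16];
    if (RAND_bytes(salt, sizeof(salt)) != 1)   /* per-user random salt */
        return 1;

    const char *password = "example-password"; /* never store this directly */

    /* Hash salt || password; the site stores only the salt and the digest. */
    SHA256_CTX ctx;
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, salt, sizeof(salt));
    SHA256_Update(&ctx, password, strlen(password));
    SHA256_Final(digest, &ctx);

    for (size_t i = 0; i < sizeof(digest); i++)
        printf("%02x", digest[i]);
    putchar('\n');
    return 0;
}
```

Compile with -lcrypto.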


Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)