Category Archives: Academic papers

Analysis of FileVault 2 (Apple's full disk encryption)

With the launch of Mac OS X 10.7 (Lion), Apple introduced a volume encryption mechanism known as FileVault 2.

During the past year Joachim Metz, Felix Grobert and I have been analysing this encryption mechanism. We have identified most of the components in FileVault 2’s architecture, and we have built an open source tool that can read volumes encrypted with FileVault 2. This tool can be useful to forensic investigators (who know the encryption password or recovery token) who need to recover some files from an encrypted volume but cannot trust or load the Mac OS that was used to encrypt the data. We have also made an analysis of the security of FileVault 2.

A few weeks ago we made public this paper on eprint describing our work. The tool to recover data from encrypted volumes is available here.

Workshop on the Economics of Information Security 2012

I’m liveblogging WEIS 2012, as I did in 2011, 2010 and 2009. The event is being held today and tomorrow at the Academy of Sciences in Berlin. We were welcomed by Nicolas Zimmer, Berlin’s permanent secretary for economics and research, who mentioned the “explosive cocktail” of Street View and of using social media for credit ratings, in the context of very different national privacy cultures; the Swedes put tax returns online and Britain has CCTV everywhere, while neither is on the agenda in Germany. Yet Germany, like other countries, wants the benefits of public data – and its army has set up a cyber-warfare unit. In short, cyber security is giving rise to multiple policy conflicts, and security economics research might help policymakers navigate them.

The refereed paper sessions will be blogged in comments below this post.

Debunking cybercrime myths

Our paper Measuring the Cost of Cybercrime sets out to debunk the scaremongering around online crime that governments and defence contractors are using to justify everything from increased surveillance to preparations for cyberwar. It will appear at the Workshop on the Economics of Information Security later this month. There’s also some press coverage.

Last year the Cabinet Office published a report by Detica claiming that cybercrime cost the UK £27bn a year. This was greeted with derision, whereupon the Ministry of Defence’s chief scientific adviser, Mark Welland, asked us whether we could come up with some more defensible numbers.

We assembled a team of experts and collated what’s known. We came up with a number of interesting conclusions. For example, we compared the direct costs of cybercrimes (the amount stolen) with the indirect costs (costs in anticipation, such as countermeasures, and costs in consequence, such as paying compensation). With traditional crimes that are now classed as “cyber” because they’re done online, such as welfare fraud, the indirect costs are much less than the direct ones; while for “pure” cybercrimes that didn’t exist before (such as fake antivirus software) the indirect costs are much greater. As a striking example, the botnet behind a third of the spam in 2010 earned its owner about $2.7m, while the worldwide costs of fighting spam were around $1bn.

Some of the reasons for this are already well-known; traditional crimes tend to be local, while the more modern cybercrimes tend to be global and have strong externalities. As for what should be done, our research suggests we should perhaps spend less on technical countermeasures and more on locking up the bad guys. Rather than giving most of its cybersecurity budget to GCHQ, the government should improve the police’s cybercrime and forensics capabilities, and back this up with stronger consumer protection.

Of contraseñas, סיסמאות, and 密码

Over a year ago, we blogged about a bug at Gawker which replaced all non-ASCII characters in passwords with ‘?’ prior to checking. Along with Rubin Xu and others I’ve investigated issues surrounding passwords, languages, and character encoding throughout the past year. This should be easy: websites using UTF-8 can accept any password and hash it into a standard format regardless of the writing system being used. Instead, though, as we report in a new paper which I presented last week at the Web 2.0 Security and Privacy workshop in San Francisco, passwords still localise poorly, both because websites are buggy and because users have been trained to type ASCII-only passwords. This has broad implications for passwords’ role as a “universal” authentication mechanism.
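
To make the encoding issue concrete, here is a minimal Python sketch (my own illustration, not code from the paper) contrasting a UTF-8-aware site, which can hash any password into a standard format, with the kind of bug described above, where non-ASCII characters are silently replaced by ‘?’. The salted SHA-256 is only there to keep the sketch short; a real site should use a slow password hash such as bcrypt or scrypt.

```python
import hashlib

def hash_password(password: str, salt: bytes) -> str:
    # Encode the password as UTF-8 so any writing system maps to a
    # well-defined byte string before hashing (illustrative only).
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

def buggy_normalise(password: str) -> str:
    # The kind of bug discussed above: every non-ASCII character is
    # silently replaced with '?', so distinct passwords collide.
    return "".join(c if ord(c) < 128 else "?" for c in password)

salt = b"example-salt"
for pw in ["contraseña", "סיסמה", "密码"]:
    print(pw, hash_password(pw, salt), "->", buggy_normalise(pw))
```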

The science of password guessing

I’ve written quite a few posts about passwords, mainly focusing on poor implementations, bugs and leaks from large websites. I’ve also written on the difficulty of guessing PINs, multi-word phrases and personal knowledge questions. How hard are passwords to guess? How does guessing difficulty compare between different groups of users? How does it compare to potential replacement technologies? I’ve been working on the answers to these questions for much of the past two years, culminating in my PhD dissertation on the subject and a new paper at this year’s IEEE Symposium on Security and Privacy (Oakland), which I presented yesterday. My approach is simple: don’t assume any semantic model for the distribution of passwords (Markov models and probabilistic context-free grammars have been proposed, amongst others), but instead learn the distribution from lots of data and use it to estimate the efficiency of a hypothetical guesser with perfect knowledge. It’s been a long effort requiring new mathematical techniques and the largest corpus of passwords ever collected for research. My results provide some new insight into the nature of password selection and a good framework for future research on authentication using human-chosen distributions of secrets.
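
As a rough illustration of the perfect-knowledge attacker (my own sketch, not the estimators from the dissertation, which also have to correct for sampling error), the following Python computes the fraction of accounts an optimal guesser compromises with its β most popular guesses, given a list of observed passwords:

```python
from collections import Counter

def success_rate(passwords, beta):
    """Fraction of accounts compromised by an attacker with perfect
    knowledge of the distribution who tries the beta most common
    passwords against every account."""
    counts = Counter(passwords)
    total = sum(counts.values())
    top = sorted(counts.values(), reverse=True)[:beta]
    return sum(top) / total

# Hypothetical toy sample; real estimates need millions of passwords.
sample = ["123456", "password", "123456", "letmein", "dragon", "123456"]
for beta in (1, 3):
    print(beta, success_rate(sample, beta))
```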

The quest to replace passwords

As any computer user already knows, passwords are a usability disaster: you are basically told to “pick something you can’t remember, then don’t write it down”, which is worse than impossible if you must also use a different password for every account. Moreover, security-wise, passwords can be shoulder-surfed, keylogged, eavesdropped, brute-forced and phished. Notable industry insiders have long predicted their demise. Over the past couple of decades, dozens of alternative schemes have been proposed. Yet here we are in 2012, still using more and more password-protected accounts every year. Why? Can’t we do any better? Don’t the suggested replacements offer any improvements?

The paper I am about to present at the IEEE Symposium on Security and Privacy in San Francisco (Oakland 2012), which grew out of the “related work” section of my earlier Pico paper and was written with coauthors Joe Bonneau, Cormac Herley and Paul van Oorschot, offers a structured and well-researched answer that, according to peer review, “should have considerable influence on the research community”. It offers, as its subtitle says, a framework for comparative evaluation of password replacement schemes.

We build a large 2D matrix. Across the columns we define a broad spectrum of 25 benefits that a password replacement scheme might potentially offer, starting with USABILITY benefits, such as being easy to learn, or not requiring a memory effort from the user, and SECURITY benefits, such as resilience to shoulder-surfing or to phishing. These two broad categories, and the tension between them, are relatively well-understood: it’s easy to provide more usability by offering less security and vice versa. But we also introduce a third category, DEPLOYABILITY, that measures how easy it would be to deploy the scheme on a global scale, taking into account such benefits as cost per user, compatibility with deployed web infrastructure and accessibility to people with disabilities.

Next, in the rows, we identify 35 representative schemes covering 11 broad categories, from password managers through federated authentication to hardware tokens and biometric schemes. We then carefully rate each scheme individually, with various cross-checks to preserve accuracy and consistency, assessing for each benefit whether the given scheme offers, almost offers or does not offer the benefit. The resulting colourful matrix allows readers to compare features at a glance and to recognize general patterns that would otherwise be easily missed.
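
As a rough sketch of how such a matrix can be represented and queried (my own illustration; the scheme names, benefit names and ratings below are a tiny made-up subset, not the paper’s full lists), one might write:

```python
# Each (scheme, benefit) pair is rated "offers", "almost" or "no".
# Illustrative subset only; the paper rates 35 schemes on 25 benefits.
ratings = {
    "web passwords":  {"memorywise-effortless": "no",
                       "resilient-to-phishing": "no",
                       "negligible-cost-per-user": "offers"},
    "hardware token": {"memorywise-effortless": "offers",
                       "resilient-to-phishing": "almost",
                       "negligible-cost-per-user": "no"},
}

def beats_passwords_everywhere(scheme):
    # Does `scheme` do at least as well as passwords on every benefit?
    order = {"no": 0, "almost": 1, "offers": 2}
    base = ratings["web passwords"]
    return all(order[ratings[scheme][b]] >= order[base[b]] for b in base)

print(beats_passwords_everywhere("hardware token"))  # False: loses on cost per user
```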

Contrary to the optimistic claims of scheme authors, who often completely ignore some evaluation criteria when they assert that their scheme is a definite improvement, none of the examined schemes does better than passwords on every one of the 25 benefits of this objective benchmark.

From the concise overview offered by the summary matrix we distil key high-level insights, such as why we are still using passwords in 2012 and are likely to continue to do so for quite a while.

How can we make progress? It has been observed that many people repeat the mistakes of history because they didn’t understand the history book. In the field of password replacements, it looks like a good history book still needs to be written! As pointed out during peer review, our work will be a foundational starting point for further research in the area and a useful sanity check for future password replacement proposals.

An extended version of the paper is available as a tech report.

I'm from the Government and I'm here to help

Two years ago, Hyoungshick Kim, Jun Ho Huh and I wrote a paper On the Security of Internet banking in South Korea in which we discussed an IT security policy that had gone horribly wrong. The Government of Korea had tried in 1998 to secure electronic commerce by getting all the banks to use an officially-approved ActiveX plugin, effectively locking most Koreans into IE. We argued in 2010 that this provided less security than it seemed, and imposed high usability and compatibility costs. Hyoungshick presented our paper at a special conference, and the government withdrew the ActiveX mandate.

It’s now apparent that the problem is still there. The bureaucracy created a procedure to approve alternative technologies, and (surprise) still hasn’t approved any. Korean web businesses remain trapped in the bubble, and fall farther and farther behind. This may well come to be seen as a warning to other governments to adopt true open standards, if they want to avoid a similar fate. The Cabinet Office should take note – and don’t forget to respond to their consultation!

Three paper Thursday: Shamir x3 at Eurocrypt

For the past 4 days Cambridge has been hosting Eurocrypt 2012.

There were many interesting talks, but I will only comment on three given by Adi Shamir: one during the official conference and two during the rump session.
Among the other sessions, I will just mention that the best paper award went to this paper by Antoine Joux and Vanessa Vitse for their enhancement of index calculus to attack elliptic curves.

Official Talk: Minimalism in cryptography, the Even-Mansour scheme revisited

In this work, Adi et al. presented an analysis of the Even-Mansour scheme:

E(P) = F(P ⊕ K1) ⊕ K2

This scheme, sometimes referred to as key whitening, is used in the DESX construction and in the AES-XTS mode of operation (to give just a few examples).

Adi et al. presented a new slide attack, called SLIDEX, which they used to prove a tight bound on the security of the Even-Mansour scheme.

Moreover, they show that the same security can be achieved even with K1 = K2.
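
Here is a minimal Python sketch of the construction above (my own illustration, not from the paper), with a toy keyless permutation F standing in for a real one; the only point is to show the two whitening keys wrapped around a single public permutation:

```python
import os

BLOCK_BITS = 64
MASK = (1 << BLOCK_BITS) - 1
CONST = 0x0123456789ABCDEF

def F(x):
    # Stand-in public permutation on 64-bit blocks (rotate-left by 13
    # then XOR a constant); a real instantiation would use a strong
    # keyless permutation.
    x = ((x << 13) | (x >> (BLOCK_BITS - 13))) & MASK
    return x ^ CONST

def F_inv(x):
    x ^= CONST
    return ((x >> 13) | (x << (BLOCK_BITS - 13))) & MASK

def em_encrypt(p, k1, k2):
    # E(P) = F(P xor K1) xor K2
    return F(p ^ k1) ^ k2

def em_decrypt(c, k1, k2):
    return F_inv(c ^ k2) ^ k1

k1, k2 = (int.from_bytes(os.urandom(8), "big") for _ in range(2))
p = 0xDEADBEEFCAFEBABE
assert em_decrypt(em_encrypt(p, k1, k2), k1, k2) == p
```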

Rump talk 1: security of multiple key encryption

Here Adi considered the case of encrypting data multiple times with multiple keys, as in 3DES:
data -> c1 = E_k1(data) -> c2 = E_k2(c1) -> c3 = E_k3(c2) -> c4 = E_k4(c3) … and so on.

The general approach to breaking a scheme where a key is used 2 or 3 times (e.g. 2DES, 3DES) is the meet-in-the-middle attack: you encrypt from one side and decrypt from the other, and by storing a table the size of the key space (say n bits per key) you can eventually find the keys used, given only a few plaintext/ciphertext pairs. For 2 keys such an attack requires about 2^n time; for 3 keys, about 2^{2n}. Therefore some people might assume that increasing the number of keys by one (i.e. using 4 keys) would increase the security of the scheme. This is in fact not true.

Adi showed that once we go beyond 3 keys (e.g. 4, 5, 6, and so on) the security only increases once every few keys. If you think about it, with 4 keys you can just apply the meet-in-the-middle attack in 2^{2n} time to the left 2 encryptions and also in 2^{2n} time to the right 2 decryptions. After this, he showed how to use the meet-in-the-middle attack to solve the knapsack problem and proposed using such an algorithm to solve other problems as well.
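
To make the meet-in-the-middle idea concrete, here is a small Python sketch (my own illustration) of the attack on double encryption, using a deliberately weak 16-bit toy cipher and 8-bit keys so that the table fits in memory and the demo runs instantly; against 2DES the attack has exactly the same shape, just with a 2^56-entry table:

```python
from collections import defaultdict

def toy_encrypt(block, key):
    # Deliberately weak 16-bit toy cipher: three rounds of
    # xor-with-key, rotate-left by 5, add a constant.
    x = block & 0xFFFF
    for _ in range(3):
        x ^= key
        x = ((x << 5) | (x >> 11)) & 0xFFFF
        x = (x + 0x9E37) & 0xFFFF
    return x

def toy_decrypt(block, key):
    # Undo the rounds in reverse order.
    x = block & 0xFFFF
    for _ in range(3):
        x = (x - 0x9E37) & 0xFFFF
        x = ((x >> 5) | (x << 11)) & 0xFFFF
        x ^= key
    return x

KEY_BITS = 8  # tiny key space so the example runs instantly

def double_encrypt(p, k1, k2):
    return toy_encrypt(toy_encrypt(p, k1), k2)

def meet_in_the_middle(p, c):
    # Tabulate every forward half-encryption (2^n entries), then decrypt
    # the ciphertext under every k2 and look for a match in the middle:
    # roughly 2^n work instead of brute-forcing all 2^(2n) key pairs.
    forward = defaultdict(list)
    for k1 in range(1 << KEY_BITS):
        forward[toy_encrypt(p, k1)].append(k1)
    candidates = []
    for k2 in range(1 << KEY_BITS):
        for k1 in forward.get(toy_decrypt(c, k2), []):
            candidates.append((k1, k2))
    return candidates

k1, k2 = 0x3C, 0xA7
p = 0x1234
candidates = meet_in_the_middle(p, double_encrypt(p, k1, k2))
assert (k1, k2) in candidates  # more plaintext/ciphertext pairs would prune the rest
```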

Rump talk 2: the cryptography of John Nash

Apparently John Nash, who was at MIT during the 1950s, wrote some letters to the NSA in 1955 explaining the implications of computational complexity for security (ideas that were not publicly understood at the time).

John Nash also sent a proposal for an encryption scheme similar to today’s stream ciphers. However, the NSA replied that the scheme didn’t meet the security requirements of the US.
Adi Shamir and Ron Rivest then analysed the scheme and found that in the known-plaintext model it would take something like 2^{sqrt(n)} time to break (which John Nash considered not to be polynomial time, and therefore assumed would make the scheme secure).

The letters are now declassified. This blog also comments on the story.

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now contain many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.