Category Archives: Academic papers

Of contraseñas, סיסמאות, and 密码

Over a year ago, we blogged about a bug at Gawker which replaced all non-ASCII characters in passwords with ‘?’ prior to checking. Along with Rubin Xu and others I’ve investigated issues surrounding passwords, languages, and character encoding throughout the past year. This should be easy: websites using UTF-8 can accept any password and hash it into a standard format regardless of the writing system being used. Instead, as we report in a new paper which I presented last week at the Web 2.0 Security and Privacy workshop in San Francisco, passwords still localise poorly, both because websites are buggy and because users have been trained to type ASCII-only passwords. This has broad implications for passwords’ role as a “universal” authentication mechanism. Continue reading Of contraseñas, סיסמאות, and 密码

The science of password guessing

I’ve written quite a few posts about passwords, mainly focusing on poor implementations, bugs and leaks from large websites. I’ve also written on the difficulty of guessing PINs, multi-word phrases and personal knowledge questions. How hard are passwords to guess? How does guessing difficulty compare between different groups of users? How does it compare to potential replacement technologies? I’ve been working on the answers to these questions for much of the past two years, culminating in my PhD dissertation on the subject and a new paper at this year’s IEEE Symposium on Security and Privacy (Oakland), which I presented yesterday. My approach is simple: don’t assume any semantic model for the distribution of passwords (Markov models and probabilistic context-free grammars have been proposed, amongst others), but instead learn the distribution of passwords from lots of data and use this to estimate the efficiency of a hypothetical guesser with perfect knowledge. It’s been a long effort, requiring new mathematical techniques and the largest corpus of passwords ever collected for research. My results provide some new insight into the nature of password selection and a good framework for future research on authentication using human-chosen distributions of secrets. Continue reading The science of password guessing
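The core idea of the perfect-knowledge guesser can be illustrated in a few lines. This is only a toy sketch, not the paper's actual methodology or data: it assumes a tiny made-up password sample and shows how an idealized attacker who knows the distribution would guess the most common passwords first.

```python
# Toy sketch of estimating an optimal guesser's success rate from
# observed password frequencies (illustrative data, not a real corpus).
from collections import Counter

# Hypothetical observed sample; a real study would use millions of passwords.
observed = ["123456", "password", "123456", "qwerty", "123456", "letmein",
            "password", "dragon", "123456", "qwerty"]

counts = Counter(observed)
total = sum(counts.values())

# An attacker with perfect knowledge guesses in decreasing frequency order.
ranked = [c for _, c in counts.most_common()]

def success_rate(beta):
    """Fraction of accounts cracked within the first `beta` guesses."""
    return sum(ranked[:beta]) / total

print(success_rate(1))  # cracked by guessing only the most common password
print(success_rate(3))  # cracked within the three most common passwords
```

The shape of this success-rate curve, rather than any single entropy number, is what lets guessing difficulty be compared across user groups and against replacement technologies.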

The quest to replace passwords

As any computer user already knows, passwords are a usability disaster: you are basically told to “pick something you can’t remember, then don’t write it down”, which is worse than impossible if you must also use a different password for every account. Moreover, security-wise, passwords can be shoulder-surfed, keylogged, eavesdropped, brute-forced and phished. Notable industry insiders have long predicted their demise. Over the past couple of decades, dozens of alternative schemes have been proposed. Yet here we are in 2012, still using more and more password-protected accounts every year. Why? Can’t we do any better? Don’t the suggested replacements offer any improvements?

The paper I am about to present at the IEEE Symposium on Security and Privacy in San Francisco (Oakland 2012), grown out of the “related work” section of my earlier Pico paper and written with coauthors Joe Bonneau, Cormac Herley and Paul van Oorschot, offers a structured and well-researched answer that, according to peer review, “should have considerable influence on the research community”. It offers, as its subtitle says, a framework for comparative evaluation of password replacement schemes.

We build a large 2D matrix. Across the columns we define a broad spectrum of 25 benefits that a password replacement scheme might potentially offer, starting with USABILITY benefits, such as being easy to learn, or not requiring a memory effort from the user, and SECURITY benefits, such as resilience to shoulder-surfing or to phishing. These two broad categories, and the tension between them, are relatively well-understood: it’s easy to provide more usability by offering less security and vice versa. But we also introduce a third category, DEPLOYABILITY, that measures how easy it would be to deploy the scheme on a global scale, taking into account such benefits as cost per user, compatibility with deployed web infrastructure and accessibility to people with disabilities.

Next, in the rows, we identify 35 representative schemes covering 11 broad categories, from password managers through federated authentication to hardware tokens and biometric schemes. We then carefully rate each scheme individually, with various cross-checks to preserve accuracy and consistency, assessing for each benefit whether the given scheme offers, almost offers or does not offer the benefit. The resulting colourful matrix allows readers to compare features at a glance and to recognize general patterns that would otherwise be easily missed.
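The matrix itself can be pictured as a small grid of ratings. The sketch below is purely illustrative: the three benefit names are drawn from the paper's vocabulary, but the schemes chosen and the ratings assigned here are made up for demonstration, not the paper's actual results.

```python
# Illustrative miniature of the scheme-vs-benefit matrix (made-up ratings).
OFFERS, ALMOST, NO = "yes", "almost", "no"

# Columns: a handful of the 25 benefits; rows: a few representative schemes.
benefits = ["Memorywise-Effortless", "Resilient-to-Phishing",
            "Negligible-Cost-per-User"]
ratings = {
    "Web passwords":    [NO,     NO,     OFFERS],
    "Password manager": [ALMOST, NO,     OFFERS],
    "Hardware token":   [OFFERS, ALMOST, NO],
}

# Print the matrix so patterns can be compared at a glance.
header = "  ".join(f"{b[:12]:12s}" for b in benefits)
print(f"{'':18s}{header}")
for scheme, row in ratings.items():
    print(f"{scheme:18s}" + "  ".join(f"{r:12s}" for r in row))
```

Even this tiny excerpt shows the pattern the paper finds at full scale: each scheme gains some benefits relative to passwords while giving up others.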

Scheme authors often make optimistic claims, sometimes ignoring evaluation criteria entirely when asserting that their scheme is a definite improvement. Rated against all 25 benefits of this objective benchmark, none of the examined schemes does better than passwords on every count.

From the concise overview offered by the summary matrix we distil key high level insights, such as why we are still using passwords in 2012 and are probably likely to continue to do so for quite a while.

How can we make progress? It has been observed that many people repeat the mistakes of history because they didn’t understand the history book. In the field of password replacements, it looks like a good history book still needs to be written! As pointed out during peer review, our work will be a foundational starting point for further research in the area and a useful sanity check for future password replacement proposals.

An extended version of the paper is available as a tech report.

I'm from the Government and I'm here to help

Two years ago, Hyoungshick Kim, Jun Ho Huh and I wrote a paper On the Security of Internet Banking in South Korea in which we discussed an IT security policy that had gone horribly wrong. The Government of Korea had tried in 1998 to secure electronic commerce by getting all the banks to use an officially-approved ActiveX plugin, effectively locking most Koreans into IE. We argued in 2010 that this provided less security than it seemed, and imposed high usability and compatibility costs. Hyoungshick presented our paper at a special conference, and the government withdrew the ActiveX mandate.

It’s now apparent that the problem is still there. The bureaucracy created a procedure to approve alternative technologies, and (surprise) still hasn’t approved any. Korean web businesses remain trapped in the bubble, and fall farther and farther behind. This may well come to be seen as a warning to other governments to adopt true open standards, if they want to avoid a similar fate. The Cabinet Office should take note – and don’t forget to respond to their consultation!

Three Paper Thursday: Shamir x3 at Eurocrypt

For the past 4 days Cambridge has been hosting Eurocrypt 2012.

There were many interesting talks, but I will only comment on three given by Adi Shamir: one during the official conference and two during the rump session.
Among the other sessions, I’ll mention that the best paper award went to Antoine Joux and Vanessa Vitse for their enhancement of index calculus attacks on elliptic curves.

Official Talk: Minimalism in cryptography, the Even-Mansour scheme revisited

In this work, Adi et al. presented an analysis of the Even-Mansour scheme:

E(P) = F(P ⊕ K1) ⊕ K2

This scheme, sometimes referred to as key whitening, is used in the DESX construction and in the XTS mode of operation for AES (to name just two examples).

Adi et al. presented a new slide attack, called SLIDEX, which they used to prove a tight bound on the security of the Even-Mansour scheme.

Moreover, they showed that the same security is achieved even when K1 = K2.
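To make the construction concrete, here is a toy sketch of Even-Mansour over 8-bit blocks. The permutation F here is just a shuffled lookup table standing in for the fixed, publicly known permutation the scheme assumes; real instantiations use full-width block permutations.

```python
# Toy sketch of Even-Mansour: E(P) = F(P XOR K1) XOR K2, on 8-bit blocks.
import random

random.seed(0)
F_table = list(range(256))
random.shuffle(F_table)        # stands in for a fixed public permutation F

def F(x):
    return F_table[x]

def F_inv(y):
    return F_table.index(y)

def encrypt(p, k1, k2):
    return F(p ^ k1) ^ k2      # whiten, permute, whiten again

def decrypt(c, k1, k2):
    return F_inv(c ^ k2) ^ k1  # undo the steps in reverse order

k1, k2 = 0x3A, 0xC5
for p in range(256):
    assert decrypt(encrypt(p, k1, k2), k1, k2) == p
print("round-trip OK")
```

Note that the only secret material is the two whitening keys; F itself is public, which is what makes the scheme "minimalist" and its tight security bound interesting.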

Rump talk 1: security of multiple key encryption

Here Adi considered the case of encrypting data multiple times with multiple keys, as in 3DES:
data -> c1 = E_k1(data) -> c2 = E_k2(c1) -> c3 = E_k3(c2) -> c4 = E_k4(c3) … and so on.

The general approach to breaking a scheme where a key is used two or three times (e.g. 2DES or 3DES) is the meet-in-the-middle attack: you encrypt from one side and decrypt from the other, and by storing a table the size of the key space (say n bits) you can recover the keys using only a few plaintext/ciphertext pairs. For 2 keys such an attack requires 2^{n} time; for 3 keys, 2^{2n}. One might therefore assume that adding one more key (i.e. using 4 keys) would increase the security of the scheme. This is in fact not true.

Adi showed that once we go beyond 3 keys (e.g. 4, 5, 6, etc.) the security only increases once every few keys. If you think about it, with 4 keys you can simply apply the meet-in-the-middle attack in 2^{2n} time to the left two encryptions and, in another 2^{2n} time, to the right two decryptions. After this, he showed how to use the meet-in-the-middle attack to solve the knapsack problem, and proposed applying such an algorithm to other problems as well.
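The meet-in-the-middle trick is easy to demonstrate on a deliberately weak cipher. The sketch below uses a made-up invertible 8-bit "block cipher" with 8-bit keys, so the whole key space fits in a table; with n-bit keys the same attack on double encryption costs roughly 2^n time and memory instead of the naive 2^{2n}.

```python
# Toy meet-in-the-middle attack on double encryption (made-up 8-bit cipher).

def E(k, x):                    # toy cipher: invertible for every key k
    return ((x ^ k) + k) % 256

def D(k, y):
    return (y - k) % 256 ^ k    # exact inverse of E

# Double encryption with two independent secret keys.
k1, k2 = 0x5B, 0xA7
p = 0x42
c = E(k2, E(k1, p))

# Forward half: encrypt p under every candidate first key, store by result.
middle = {}
for g1 in range(256):
    middle.setdefault(E(g1, p), []).append(g1)

# Backward half: decrypt c under every candidate second key, look for a
# collision in the middle value.
candidates = []
for g2 in range(256):
    m = D(g2, c)
    for g1 in middle.get(m, []):
        candidates.append((g1, g2))

assert (k1, k2) in candidates
print(f"{len(candidates)} candidate key pairs (narrow down with a second pair)")
```

A second plaintext/ciphertext pair typically eliminates the false positives, which is why only "a few" known pairs suffice.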

Rump talk 2: the cryptography of John Nash

Apparently John Nash, then at MIT, wrote some letters to the NSA in 1955 explaining the implications of computational complexity for security (ideas that weren’t known at the time).

John Nash also sent a proposal for an encryption scheme similar to today’s stream ciphers. However, the NSA replied that the scheme didn’t match the security requirements of the US.
Adi Shamir and Ron Rivest then analysed the scheme and found that in the known-plaintext model it would take something like 2^{sqrt(n)} time to break (which Nash did not consider to be polynomial time, and therefore assumed would be secure).

The letters are now declassified. This blog also comments on the story.

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now contain many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.

Call for nominations for PET Award 2012

Nominations are invited for the 2012 PET Award by 31 March 2012.

The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Symposium (PETS).

The PET Award carries a prize of 3000 USD thanks to the generous support of Microsoft. The crystal prize itself is offered by the Office of the Information and Privacy Commissioner of Ontario, Canada.

Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with proceedings published in the period from 1 June 2010 until 31 March 2012.

For eligibility requirements, refer to the award rules.

Anyone can nominate a paper by sending an email message containing the following to award-chairs12@petsymposium.org:

  • Paper title
  • Author(s)
  • Author(s) contact information
  • Publication venue and full reference
  • Link to an available online version of the paper
  • A nomination statement of no more than 500 words.

All nominations must be submitted by 31 March 2012. The Award Committee will select one or two winners among the nominations received. Winners must be present at the 2012 PET Symposium in order to receive the Award. This requirement can be waived only at the discretion of the PET Advisory board.

More information about the PET Award (including past winners) is available on the award website.

Three Paper Thursday: BGP and its security

BGP security was a hot topic a few years ago, but has received less attention in recent years. However, with technologies such as IPv6 and DNSSEC, BGP security is making a comeback, especially in industry. We academics also have much to contribute in this space. In today’s Three Paper Thursday, I will highlight three recent pieces of work related to BGP security. This post is also a good starting point for catching up on BGP security for those whose last memories of it involve proposals such as S-BGP and SoBGP.

Capsicum in CACM Research Highlights

The Research Highlights section of Communications of the ACM from March 2012 features two articles on Capsicum, collaborative research by the Cambridge security group and Google on capability-oriented security for contemporary operating systems. The first, Technical Perspective: The Benefits of Capability-Based Protection by Steven Gribble, considers the value of capability systems (such as Capsicum) in addressing current security problems. The second, A taste of Capsicum: practical capabilities for UNIX, is an abridged and updated version of our USENIX Security paper from 2010. These articles have since been picked up by Slashdot, Reddit, and others, and are linked to from the Capsicum publications, talks, and documentation page.

Privacy economics: evidence from the field

It has been argued that privacy is the new currency on the Web. Services offered for free are actually paid for using personal information, which is then turned into money (e.g., using targeted advertising). But what is the exchange rate for privacy? In the largest experiment of its kind, and the first conducted in the field, we shed new light on consumers’ willingness to pay for added privacy.

One in three Web shoppers pays half a euro extra to keep their mobile phone number private. If privacy comes for free, more than 80% of consumers choose the company that collects less personal information, our study concludes.

Continue reading Privacy economics: evidence from the field